| id | title | categories | abstract |
|---|---|---|---|
2502.12128
|
LaM-SLidE: Latent Space Modeling of Spatial Dynamical Systems via Linked
Entities
|
cs.LG cs.AI
|
Generative models are spearheading recent progress in deep learning, showing
strong promise for trajectory sampling in dynamical systems as well. However,
while latent space modeling paradigms have transformed image and video
generation, similar approaches are more difficult for most dynamical systems.
Such systems -- from chemical molecule structures to collective human behavior
-- are described by interactions of entities, making them inherently linked to
connectivity patterns and the traceability of entities over time. Our approach,
LaM-SLidE (Latent Space Modeling of Spatial Dynamical Systems via Linked
Entities), combines the advantages of graph neural networks, i.e., the
traceability of entities across time-steps, with the efficiency and scalability
of recent advances in image and video generation, where a pre-trained encoder
and decoder are frozen to enable generative modeling in the latent space. The
core idea of LaM-SLidE is to introduce identifier representations (IDs) that
allow entity properties, e.g., entity coordinates, to be retrieved from latent
system representations, thus enabling traceability. Experimentally, across different
domains, we show that LaM-SLidE performs favorably in terms of speed, accuracy,
and generalizability. (Code is available at
https://github.com/ml-jku/LaM-SLidE)
|
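The ID-based readout described in the abstract above can be pictured as a cross-attention lookup: each entity's identifier embedding queries the latent system state for that entity's properties. A minimal sketch (the attention form, shapes, and all names here are illustrative assumptions, not the paper's actual architecture):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def retrieve_entity_properties(id_embeddings, latent_keys, latent_values):
    # Attention-style readout: each entity's learned ID vector queries the
    # latent system representation for that entity's properties.
    scores = id_embeddings @ latent_keys.T   # (n_entities, n_latents)
    weights = softmax(scores, axis=-1)
    return weights @ latent_values           # (n_entities, prop_dim)

rng = np.random.default_rng(0)
ids = rng.normal(size=(5, 16))    # one ID embedding per tracked entity
keys = rng.normal(size=(32, 16))  # latent system representation (keys)
vals = rng.normal(size=(32, 3))   # latent values, e.g. 3D coordinates
coords = retrieve_entity_properties(ids, keys, vals)
print(coords.shape)  # (5, 3)
```

Because the IDs persist across time-steps, the same readout applied to each latent frame yields per-entity trajectories, which is what makes entities traceable.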
2502.12129
|
When Wyner and Ziv Met Bayes in Quantum-Classical Realm
|
cs.IT math.IT quant-ph
|
In this work, we address the problem of lossy quantum-classical source coding
with quantum side information (QC-QSI). The task is to compress the
classical information about a quantum source, obtained after performing a
measurement while incurring a bounded reconstruction error. Here, the decoder
is allowed to use the side information to recover the classical data obtained
from measurements on the source states. We introduce a new formulation based on
a backward (posterior) channel, replacing the single-letter distortion
observable with a single-letter posterior channel to capture reconstruction
error. Unlike the rate-distortion framework, this formulation imposes a block
error constraint. An analogous formulation is developed for the lossy classical
source coding with classical side information (C-CSI) problem. We derive an
inner bound on the asymptotic performance limit in terms of single-letter
quantum and classical mutual information quantities of the given posterior
channel for QC-QSI and C-CSI cases, respectively. Furthermore, we establish a
connection between rate-distortion and rate-channel theory, showing that a
rate-channel compression protocol attains the optimal rate-distortion function
for a specific distortion measure and level.
|
2502.12130
|
Scaling Autonomous Agents via Automatic Reward Modeling And Planning
|
cs.AI
|
Large language models (LLMs) have demonstrated remarkable capabilities across
a range of text-generation tasks. However, LLMs still struggle with problems
requiring multi-step decision-making and environmental feedback, such as online
shopping, scientific reasoning, and mathematical problem-solving. Unlike pure
text data, large-scale decision-making data is challenging to collect.
Moreover, many powerful LLMs are only accessible through APIs, which hinders
their fine-tuning for agent tasks due to cost and complexity. To address LLM
agents' limitations, we propose a framework that can automatically learn a
reward model from the environment without human annotations. This model can be
used to evaluate the action trajectories of LLM agents and provide heuristics
for task planning. Specifically, our approach involves employing one LLM-based
agent to navigate an environment randomly, generating diverse action
trajectories. Subsequently, a separate LLM is leveraged to assign a task intent
and synthesize a negative response alongside the correct response for each
trajectory. These triplets (task intent, positive response, and negative
response) are then utilized as training data to optimize a reward model capable
of scoring action trajectories. The effectiveness and generalizability of our
framework are demonstrated through evaluations conducted on different agent
benchmarks. In conclusion, our proposed framework represents a significant
advancement in enhancing LLM agents' decision-making capabilities. By
automating the learning of reward models, we overcome the challenges of data
scarcity and API limitations, potentially revolutionizing the application of
LLMs in complex and interactive environments. This research paves the way for
more sophisticated AI agents capable of tackling a wide range of real-world
problems requiring multi-step decision-making.
|
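The triplet-based reward learning described above can be sketched with a Bradley-Terry-style pairwise loss: for each task intent, the reward of the positive trajectory should exceed that of the negative one. Here random feature vectors stand in for encoded trajectories, and the linear model, loss, and training loop are assumptions rather than the paper's recipe:

```python
import numpy as np

def pairwise_reward_loss(w, feats_pos, feats_neg):
    # Bradley-Terry loss: reward of the positive trajectory should exceed
    # that of the negative one sampled for the same task intent.
    margin = feats_pos @ w - feats_neg @ w
    return np.mean(np.log1p(np.exp(-margin)))

# toy "trajectory features"; positives score higher on every dimension
rng = np.random.default_rng(0)
pos = rng.normal(loc=0.5, size=(64, 8))
neg = rng.normal(loc=-0.5, size=(64, 8))

w = np.zeros(8)  # linear reward model: r(trajectory) = w . features
for _ in range(200):
    s = 1.0 / (1.0 + np.exp(pos @ w - neg @ w))        # sigmoid(-margin)
    w += 0.5 * (s[:, None] * (pos - neg)).mean(axis=0)  # gradient descent

print(pairwise_reward_loss(w, pos, neg) < np.log(2))  # True: better than chance
```

The trained scorer can then rank candidate action trajectories during planning, which is the role the learned reward model plays in the framework.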
2502.12131
|
Transformer Dynamics: A neuroscientific approach to interpretability of
large language models
|
cs.AI
|
As artificial intelligence models have exploded in scale and capability,
understanding of their internal mechanisms remains a critical challenge.
Inspired by the success of dynamical systems approaches in neuroscience, here
we propose a novel framework for studying computations in deep learning
systems. We focus on the residual stream (RS) in transformer models,
conceptualizing it as a dynamical system evolving across layers. We find that
activations of individual RS units exhibit strong continuity across layers,
despite the RS being a non-privileged basis. Activations in the RS accelerate
and grow denser over layers, while individual units trace unstable periodic
orbits. In reduced-dimensional spaces, the RS follows a curved trajectory with
attractor-like dynamics in the lower layers. These insights bridge dynamical
systems theory and mechanistic interpretability, establishing a foundation for
a "neuroscience of AI" that combines theoretical rigor with large-scale data
analysis to advance our understanding of modern neural networks.
|
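The layer-to-layer continuity claim above can be made concrete by measuring cosine similarity of the residual stream between consecutive layers. A toy sketch on synthetic activations (the metric choice and the additive-update model are assumptions, not the paper's analysis):

```python
import numpy as np

def layerwise_continuity(rs):
    # rs: (n_layers, d_model) residual-stream states for one token.
    # Cosine similarity between consecutive layers; values near 1 mean
    # the stream evolves smoothly rather than jumping between layers.
    a, b = rs[:-1], rs[1:]
    num = np.sum(a * b, axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    return num / den

# toy stream: each layer adds a small update to the previous state,
# mimicking the additive structure of transformer residual connections
rng = np.random.default_rng(0)
updates = 0.1 * rng.normal(size=(12, 64))
rs = rng.normal(size=64) + np.cumsum(updates, axis=0)
sims = layerwise_continuity(rs)
print(np.all(sims > 0.9))  # True: small residual updates keep layers similar
```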
2502.12134
|
SoftCoT: Soft Chain-of-Thought for Efficient Reasoning with LLMs
|
cs.CL
|
Chain-of-Thought (CoT) reasoning enables Large Language Models (LLMs) to
solve complex reasoning tasks by generating intermediate reasoning steps.
However, most existing approaches focus on hard token decoding, which
constrains reasoning within the discrete vocabulary space and may not always be
optimal. While recent efforts explore continuous-space reasoning, they often
suffer from catastrophic forgetting, limiting their applicability to
state-of-the-art LLMs that already perform well in zero-shot settings with a
proper instruction. To address this challenge, we propose a novel approach for
continuous-space reasoning that does not require modifying the underlying LLM.
Specifically, we employ a lightweight assistant model to generate
instance-specific soft thought tokens speculatively as the initial chain of
thoughts, which are then mapped into the LLM's representation space via a
projection module. Experimental results on five reasoning benchmarks
demonstrate that our method enhances LLM reasoning performance through
supervised, parameter-efficient fine-tuning.
|
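The projection step described above, mapping assistant-generated soft thought tokens into the LLM's representation space, can be sketched as a learned linear map whose output is prepended to the LLM's token embeddings. Shapes and names are illustrative assumptions:

```python
import numpy as np

def project_soft_thoughts(soft_tokens, W, b):
    # Linear projection from the assistant model's hidden size into the
    # LLM's input embedding space; the projected vectors serve as the
    # initial chain of thought for the frozen LLM.
    return soft_tokens @ W + b

rng = np.random.default_rng(0)
d_assistant, d_llm, n_thoughts = 64, 256, 4
soft = rng.normal(size=(n_thoughts, d_assistant))  # assistant's soft tokens
W = 0.02 * rng.normal(size=(d_assistant, d_llm))   # learned projection weights
b = np.zeros(d_llm)
prefix = project_soft_thoughts(soft, W, b)
print(prefix.shape)  # (4, 256)
```

Only the assistant and projection are trained, which is why the underlying LLM needs no modification.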
2502.12135
|
MagicArticulate: Make Your 3D Models Articulation-Ready
|
cs.CV cs.GR
|
With the explosive growth of 3D content creation, there is an increasing
demand for automatically converting static 3D models into articulation-ready
versions that support realistic animation. Traditional approaches rely heavily
on manual annotation, which is both time-consuming and labor-intensive.
Moreover, the lack of large-scale benchmarks has hindered the development of
learning-based solutions. In this work, we present MagicArticulate, an
effective framework that automatically transforms static 3D models into
articulation-ready assets. Our key contributions are threefold. First, we
introduce Articulation-XL, a large-scale benchmark containing over 33k 3D
models with high-quality articulation annotations, carefully curated from
Objaverse-XL. Second, we propose a novel skeleton generation method that
formulates the task as a sequence modeling problem, leveraging an
auto-regressive transformer to naturally handle varying numbers of bones or
joints within skeletons and their inherent dependencies across different 3D
models. Third, we predict skinning weights using a functional diffusion process
that incorporates volumetric geodesic distance priors between vertices and
joints. Extensive experiments demonstrate that MagicArticulate significantly
outperforms existing methods across diverse object categories, achieving
high-quality articulation that enables realistic animation. Project page:
https://chaoyuesong.github.io/MagicArticulate.
|
2502.12137
|
REVERSUM: A Multi-staged Retrieval-Augmented Generation Method to
Enhance Wikipedia Tail Biographies through Personal Narratives
|
cs.CL cs.IR
|
Wikipedia is an invaluable resource for factual information about a wide
range of entities. However, the quality of articles on less-known entities
often lags behind that of the well-known ones. This study proposes a novel
approach to enhancing Wikipedia's B and C category biography articles by
leveraging personal narratives such as autobiographies and biographies. By
utilizing a multi-staged retrieval-augmented generation technique -- REVerSum
-- we aim to enrich the informational content of these lesser-known articles.
Our study reveals that personal narratives can significantly improve the
quality of Wikipedia articles, providing a rich source of reliable information
that has been underutilized in previous studies. Based on crowd-based
evaluation, REVerSum-generated content outperforms the best-performing baseline
by 17% in terms of integrability into the original Wikipedia article and 28.5%
in terms of informativeness. Code and data are available at:
https://github.com/sayantan11995/wikipedia_enrichment
|
2502.12138
|
FLARE: Feed-forward Geometry, Appearance and Camera Estimation from
Uncalibrated Sparse Views
|
cs.CV
|
We present FLARE, a feed-forward model designed to infer high-quality camera
poses and 3D geometry from uncalibrated sparse-view images (i.e., as few as 2-8
inputs), which is a challenging yet practical setting in real-world
applications. Our solution features a cascaded learning paradigm with camera
pose serving as the critical bridge, recognizing its essential role in mapping
3D structures onto 2D image planes. Concretely, FLARE starts with camera pose
estimation, whose results condition the subsequent learning of geometric
structure and appearance, optimized through the objectives of geometry
reconstruction and novel-view synthesis. Utilizing large-scale public datasets
for training, our method delivers state-of-the-art performance in the tasks of
pose estimation, geometry reconstruction, and novel view synthesis, while
maintaining inference efficiency (i.e., less than 0.5 seconds). The project
page and code can be found at: https://zhanghe3z.github.io/FLARE/
|
2502.12143
|
Small Models Struggle to Learn from Strong Reasoners
|
cs.AI
|
Large language models (LLMs) excel in complex reasoning tasks, and distilling
their reasoning capabilities into smaller models has shown promise. However, we
uncover an interesting phenomenon, which we term the Small Model Learnability
Gap: small models ($\leq$3B parameters) do not consistently benefit from long
chain-of-thought (CoT) reasoning or distillation from larger models. Instead,
they perform better when fine-tuned on shorter, simpler reasoning chains that
better align with their intrinsic learning capacity. To address this, we
propose Mix Distillation, a simple yet effective strategy that balances
reasoning complexity by combining long and short CoT examples or reasoning from
both larger and smaller models. Our experiments demonstrate that Mix
Distillation significantly improves small model reasoning performance compared
to training on either data alone. These findings highlight the limitations of
direct strong model distillation and underscore the importance of adapting
reasoning complexity for effective reasoning capability transfer.
|
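Mix Distillation's balancing of reasoning complexity can be sketched as proportional sampling from long- and short-CoT pools when building the student's fine-tuning set. The function name and ratio parameter are illustrative, not the paper's API:

```python
import random

def mix_distillation_data(long_cot, short_cot, long_fraction=0.5, size=8, seed=0):
    # Sample long and short chain-of-thought examples in a chosen ratio,
    # so a small student sees reasoning at a complexity it can absorb.
    rng = random.Random(seed)
    n_long = round(size * long_fraction)
    mixed = rng.sample(long_cot, n_long) + rng.sample(short_cot, size - n_long)
    rng.shuffle(mixed)
    return mixed

long_cot = [f"long-{i}" for i in range(10)]    # verbose teacher rationales
short_cot = [f"short-{i}" for i in range(10)]  # concise reasoning chains
mixed = mix_distillation_data(long_cot, short_cot, long_fraction=0.25, size=8)
print(sum(x.startswith("long") for x in mixed))  # 2
```

The same mixing pattern applies to the paper's other variant, combining teacher responses from larger and smaller models instead of long and short chains.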
2502.12145
|
Fast or Better? Balancing Accuracy and Cost in Retrieval-Augmented
Generation with Flexible User Control
|
cs.IR cs.AI
|
Retrieval-Augmented Generation (RAG) has emerged as a powerful approach to
mitigate large language model (LLM) hallucinations by incorporating external
knowledge retrieval. However, existing RAG frameworks often apply retrieval
indiscriminately, leading to inefficiencies: over-retrieving when it is
unnecessary, or failing to retrieve iteratively when complex reasoning requires
it. Recent adaptive retrieval strategies navigate these choices, but they
predict only from query complexity and lack user-driven flexibility, making
them ill-suited to diverse user application needs. In this
paper, we introduce a novel user-controllable RAG framework that enables
dynamic adjustment of the accuracy-cost trade-off. Our approach leverages two
classifiers: one trained to prioritize accuracy and another to prioritize
retrieval efficiency. Via an interpretable control parameter $\alpha$, users
can seamlessly navigate between minimal-cost retrieval and high-accuracy
retrieval based on their specific requirements. We empirically demonstrate that
our approach effectively balances accuracy, retrieval cost, and user
controllability, making it a practical and adaptable solution for real-world
applications.
|
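One plausible reading of the α-controlled trade-off is a convex combination of the two classifiers' retrieval scores; the exact combination rule and threshold below are assumptions, not the paper's method:

```python
def retrieval_decision(p_accuracy, p_efficiency, alpha):
    # alpha = 1.0 favors the accuracy-first classifier; alpha = 0.0 favors
    # the cost-first classifier; intermediate values trade the two off.
    score = alpha * p_accuracy + (1.0 - alpha) * p_efficiency
    return score >= 0.5

# the accuracy classifier wants to retrieve; the efficiency one does not
p_acc, p_eff = 0.9, 0.2
print(retrieval_decision(p_acc, p_eff, alpha=1.0))  # True  (accuracy-first)
print(retrieval_decision(p_acc, p_eff, alpha=0.0))  # False (cost-first)
```

Sweeping α between 0 and 1 then traces out the accuracy-cost frontier a user can navigate.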
2502.12146
|
Diffusion-Sharpening: Fine-tuning Diffusion Models with Denoising
Trajectory Sharpening
|
cs.CV
|
We propose Diffusion-Sharpening, a fine-tuning approach that enhances
downstream alignment by optimizing sampling trajectories. Existing RL-based
fine-tuning methods focus on single training timesteps and neglect
trajectory-level alignment, while recent sampling trajectory optimization
methods incur significant inference NFE costs. Diffusion-Sharpening overcomes
this by using a path integral framework to select optimal trajectories during
training, leveraging reward feedback, and amortizing inference costs. Our
method demonstrates superior training efficiency with faster convergence, and
the best inference efficiency without requiring additional NFEs. Extensive
experiments show that Diffusion-Sharpening outperforms RL-based fine-tuning
methods (e.g., Diffusion-DPO) and sampling trajectory optimization methods
(e.g., Inference Scaling) across diverse metrics including text alignment,
compositional capabilities, and human preferences, offering a scalable and
efficient solution for future diffusion model fine-tuning. Code:
https://github.com/Gen-Verse/Diffusion-Sharpening
|
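The trajectory-level selection idea, scoring candidate denoising paths with a reward and keeping the best to supervise training, can be sketched as follows. The toy reward and all names are assumptions; the paper's path-integral weighting is not reproduced here:

```python
import numpy as np

def sharpen_select(trajectories, reward_fn):
    # Score each candidate denoising trajectory and keep the best one
    # to supervise training (trajectory-level rather than per-timestep).
    rewards = np.array([reward_fn(t) for t in trajectories])
    best = int(np.argmax(rewards))
    return trajectories[best], float(rewards[best])

rng = np.random.default_rng(0)
trajs = [rng.normal(size=(10, 4)) for _ in range(8)]  # toy denoising paths
reward = lambda t: -float(np.linalg.norm(t[-1]))      # prefer endpoints near 0
best_traj, best_r = sharpen_select(trajs, reward)
print(best_r == max(reward(t) for t in trajs))  # True
```

Doing this selection at training time is what lets the method amortize the cost that inference-time trajectory search would otherwise pay in extra NFEs.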
2502.12147
|
Learning Smooth and Expressive Interatomic Potentials for Physical
Property Prediction
|
physics.comp-ph cs.LG
|
Machine learning interatomic potentials (MLIPs) have become increasingly
effective at approximating quantum mechanical calculations at a fraction of the
computational cost. However, lower errors on held out test sets do not always
translate to improved results on downstream physical property prediction tasks.
In this paper, we propose testing MLIPs on their practical ability to conserve
energy during molecular dynamics simulations. For models that pass this test,
we find improved correlations between test errors and performance on physical
property prediction tasks. We identify choices that may lead models to fail this
test, and use these observations to improve upon highly-expressive models. The
resulting model, eSEN, provides state-of-the-art results on a range of physical
property prediction tasks, including materials stability prediction, thermal
conductivity prediction, and phonon calculations.
|
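The proposed energy-conservation test can be illustrated by running a short symplectic (velocity-Verlet) simulation and measuring the relative drift of total energy; a harmonic toy potential stands in here for a learned interatomic potential, and the drift threshold is an arbitrary illustration:

```python
import numpy as np

def energy_drift(positions, velocities, force_fn, potential_fn, mass=1.0,
                 dt=1e-3, steps=1000):
    # Velocity-Verlet MD; a conservative (smooth) potential should keep
    # the relative total-energy drift near zero over the trajectory.
    x, v = positions.astype(float).copy(), velocities.astype(float).copy()
    e0 = potential_fn(x) + 0.5 * mass * np.sum(v ** 2)
    f = force_fn(x)
    for _ in range(steps):
        v = v + 0.5 * dt * f / mass
        x = x + dt * v
        f = force_fn(x)
        v = v + 0.5 * dt * f / mass
    e1 = potential_fn(x) + 0.5 * mass * np.sum(v ** 2)
    return abs(e1 - e0) / abs(e0)

# harmonic toy potential standing in for a machine-learned potential
potential = lambda x: 0.5 * np.sum(x ** 2)
force = lambda x: -x
x0, v0 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
drift = energy_drift(x0, v0, force, potential)
print(drift < 1e-4)  # True: a smooth, conservative potential passes
```

A non-smooth or non-conservative learned force field would show growing drift under the same protocol, which is the failure mode the test is meant to expose.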
2502.12148
|
HermesFlow: Seamlessly Closing the Gap in Multimodal Understanding and
Generation
|
cs.CV
|
The remarkable success of the autoregressive paradigm has driven significant
advances in Multimodal Large Language Models (MLLMs), with powerful models
like Show-o, Transfusion and Emu3 achieving notable progress in unified image
understanding and generation. For the first time, we uncover a common
phenomenon: the understanding capabilities of MLLMs are typically stronger than
their generative capabilities, with a significant gap between the two. Building
on this insight, we propose HermesFlow, a simple yet general framework designed
to seamlessly bridge the gap between understanding and generation in MLLMs.
Specifically, we take homologous data as input to curate homologous
preference data for both understanding and generation. Through Pair-DPO and
self-play iterative optimization, HermesFlow effectively aligns multimodal
understanding and generation using homologous preference data. Extensive
experiments demonstrate the significant superiority of our approach over prior
methods, particularly in narrowing the gap between multimodal understanding and
generation. These findings highlight the potential of HermesFlow as a general
alignment framework for next-generation multimodal foundation models. Code:
https://github.com/Gen-Verse/HermesFlow
|
2502.12149
|
HARBOR: Exploring Persona Dynamics in Multi-Agent Competition
|
cs.MA cs.AI cs.CL
|
We investigate factors contributing to LLM agents' success in competitive
multi-agent environments, using auctions as a testbed where agents bid to
maximize profit. The agents are equipped with bidding domain knowledge,
distinct personas that reflect item preferences, and a memory of auction
history. Our work extends the classic auction scenario by creating a realistic
environment where multiple agents bid on houses, weighing aspects such as size,
location, and budget to secure the most desirable homes at the lowest prices.
In particular, we investigate three key questions: (a) How does a persona
influence an agent's behavior in a competitive setting? (b) Can an agent
effectively profile its competitors' behavior during auctions? (c) How can
persona profiling be leveraged to create an advantage using strategies such as
theory of mind? Through a series of experiments, we analyze the behaviors of
LLM agents and shed light on new findings. Our testbed, called HARBOR, offers a
valuable platform for deepening our understanding of multi-agent workflows in
competitive environments.
|
2502.12150
|
Idiosyncrasies in Large Language Models
|
cs.CL
|
In this work, we unveil and study idiosyncrasies in Large Language Models
(LLMs) -- unique patterns in their outputs that can be used to distinguish the
models. To do so, we consider a simple classification task: given a particular
text output, the objective is to predict the source LLM that generated the
text. We evaluate this synthetic task across various groups of LLMs and find
that simply fine-tuning existing text embedding models on LLM-generated texts
yields excellent classification accuracy. Notably, we achieve 97.1% accuracy on
held-out validation data in the five-way classification problem involving
ChatGPT, Claude, Grok, Gemini, and DeepSeek. Our further investigation reveals
that these idiosyncrasies are rooted in word-level distributions. These
patterns persist even when the texts are rewritten, translated, or summarized
by an external LLM, suggesting that they are also encoded in the semantic
content. Additionally, we leverage LLMs as judges to generate detailed,
open-ended descriptions of each model's idiosyncrasies. Finally, we discuss the
broader implications of our findings, particularly for training on synthetic
data and inferring model similarity. Code is available at
https://github.com/locuslab/llm-idiosyncrasies.
|
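The finding that idiosyncrasies are rooted in word-level distributions suggests even a naive frequency-profile classifier can separate models. A toy sketch (this is not the paper's fine-tuned embedding classifier, and the example "model" outputs are invented):

```python
from collections import Counter

def word_profile(texts):
    # Aggregate a word-frequency distribution over a model's outputs.
    counts = Counter(w for t in texts for w in t.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def classify(text, profiles):
    # Attribute a text to the model whose word distribution fits it best.
    words = text.lower().split()
    return max(profiles, key=lambda m: sum(profiles[m].get(w, 0.0) for w in words))

profiles = {
    "model_a": word_profile(["certainly here is the answer", "certainly indeed"]),
    "model_b": word_profile(["sure thing buddy", "sure no problem"]),
}
print(classify("certainly the answer", profiles))  # model_a
print(classify("sure buddy", profiles))            # model_b
```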
2502.12151
|
VoLUT: Efficient Volumetric streaming enhanced by LUT-based
super-resolution
|
cs.CV cs.SY eess.SY
|
3D volumetric video provides immersive experience and is gaining traction in
digital media. Despite its rising popularity, the streaming of volumetric video
content poses significant challenges due to the high data bandwidth
requirement. A natural approach to mitigate the bandwidth issue is to reduce
the volumetric video's data rate by downsampling the content prior to
transmission. The video can then be upsampled at the receiver's end using a
super-resolution (SR) algorithm to reconstruct the high-resolution details.
While super-resolution techniques have been extensively explored and advanced
for 2D video content, there is limited work on SR algorithms tailored for
volumetric videos.
To address this gap and the growing need for efficient volumetric video
streaming, we have developed VoLUT with a new SR algorithm specifically
designed for volumetric content. Our algorithm uniquely harnesses the power of
lookup tables (LUTs) to facilitate the efficient and accurate upscaling of
low-resolution volumetric data. The use of LUTs enables our algorithm to
quickly reference precomputed high-resolution values, thereby significantly
reducing the computational complexity and time required for upscaling. We
further apply an adaptive bitrate (ABR) algorithm to dynamically determine
the downsampling rate according to the network condition and stream the
selected video rate to the receiver. Compared to related work, VoLUT is the
first to enable high-quality 3D SR on commodity mobile devices at line rate.
Our evaluation shows VoLUT can reduce bandwidth usage by 70%, boost QoE by
36.7% for volumetric video streaming, and achieve 3D SR speed-up with no
quality compromise.
|
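The core LUT idea, replacing per-sample network inference with precomputed table lookups, can be illustrated in one dimension; the square-root function below is only a stand-in for an expensive learned SR mapping, and VoLUT's actual volumetric pipeline is not shown:

```python
import numpy as np

def build_lut(f, levels=256):
    # Precompute the expensive mapping once, offline, over a quantized grid.
    grid = np.linspace(0.0, 1.0, levels)
    return grid, f(grid)

def lut_upscale(x, grid, table):
    # At streaming time, upscaling becomes a cheap nearest-entry lookup
    # instead of running a super-resolution network on every sample.
    idx = np.clip(np.round(x * (len(grid) - 1)).astype(int), 0, len(grid) - 1)
    return table[idx]

f = lambda v: np.sqrt(v)  # stand-in for the learned SR mapping
grid, table = build_lut(f)
x = np.random.default_rng(0).uniform(size=1000)  # low-resolution inputs
approx = lut_upscale(x, grid, table)
print(np.max(np.abs(approx - f(x))) < 0.05)  # True: small quantization error
```

The trade-off is table size versus quantization error, which is why LUT methods can hit line rate on mobile hardware where full network inference cannot.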
2502.12152
|
Learning Getting-Up Policies for Real-World Humanoid Robots
|
cs.RO cs.LG
|
Automatic fall recovery is a crucial prerequisite before humanoid robots can
be reliably deployed. Hand-designing controllers for getting up is difficult
because of the varied configurations a humanoid can end up in after a fall and
the challenging terrains humanoid robots are expected to operate on. This paper
develops a learning framework to produce controllers that enable humanoid
robots to get up from varying configurations on varying terrains. Unlike
previous successful applications of humanoid locomotion learning, the
getting-up task involves complex contact patterns, which necessitate
accurately modeling the collision geometry, as well as sparser rewards. We address
these challenges through a two-phase approach that follows a curriculum. The
first stage focuses on discovering a good getting-up trajectory under minimal
constraints on smoothness or speed / torque limits. The second stage then
refines the discovered motions into deployable (i.e. smooth and slow) motions
that are robust to variations in initial configuration and terrains. We find
these innovations enable a real-world G1 humanoid robot to get up from two main
situations that we considered: a) lying face up and b) lying face down, both
tested on flat, deformable, slippery surfaces and slopes (e.g., sloped grass
and snowfield). To the best of our knowledge, this is the first successful
demonstration of learned getting-up policies for human-sized humanoid robots in
the real world. Project page: https://humanoid-getup.github.io/
|
2502.12154
|
Diffusion Models without Classifier-free Guidance
|
cs.CV cs.AI cs.LG
|
This paper presents Model-guidance (MG), a novel objective for training
diffusion models that removes the need for the commonly used classifier-free
guidance (CFG). Our approach goes beyond modeling the data distribution alone
and additionally incorporates the posterior probability of the conditions. The
proposed technique originates from the idea of CFG and is simple yet effective,
making it a plug-and-play module for existing models. Our method significantly
accelerates the training process, doubles the inference speed, and achieves
exceptional quality that parallels and even surpasses that of concurrent
diffusion models using CFG. Extensive experiments demonstrate its
effectiveness, efficiency, and scalability on different models and datasets.
Finally, we establish state-of-the-art performance on the ImageNet 256
benchmark with an FID of 1.34.
Our code is available at https://github.com/tzco/Diffusion-wo-CFG.
|
2502.12158
|
Mining Social Determinants of Health for Heart Failure Patient 30-Day
Readmission via Large Language Model
|
cs.LG cs.AI cs.CL cs.CY
|
Heart Failure (HF) affects millions of Americans and leads to high
readmission rates, posing significant healthcare challenges. While Social
Determinants of Health (SDOH) such as socioeconomic status and housing
stability play critical roles in health outcomes, they are often
underrepresented in structured EHRs and hidden in unstructured clinical notes.
This study leverages advanced large language models (LLMs) to extract SDOHs
from clinical text and uses logistic regression to analyze their association
with HF readmissions. By identifying key SDOHs (e.g., tobacco usage, limited
transportation) linked to readmission risk, this work also offers actionable
insights for reducing readmissions and improving patient care.
|
2502.12159
|
Causal Interpretations in Observational Studies: The Role of
Sociocultural Backgrounds and Team Dynamics
|
physics.soc-ph cs.CL
|
The prevalence of drawing causal conclusions from observational studies has
raised concerns about potential exaggeration in science communication. While
some believe causal language should only apply to randomized controlled trials,
others argue that rigorous methods can justify causal claims in observational
studies. Ideally, causal language should align with the strength of the
evidence. However, through the analysis of over 80,000 observational study
abstracts using computational linguistic and regression methods, we found that
causal language is more frequently used by less experienced authors, smaller
research teams, male last authors, and authors from countries with higher
uncertainty avoidance indices. These findings suggest that the use of causal
language may be influenced by external factors such as the sociocultural
backgrounds of authors and the dynamics of research collaboration. This newly
identified link deepens our understanding of how such factors help shape
scientific conclusions in causal inference and science communication.
|
2502.12161
|
Integrating Artificial Intelligence and Geophysical Insights for
Earthquake Forecasting: A Cross-Disciplinary Review
|
physics.geo-ph cs.AI cs.LG
|
Earthquake forecasting remains a significant scientific challenge, with
current methods falling short of achieving the performance necessary for
meaningful societal benefits. Traditional models, primarily based on past
seismicity and geomechanical data, struggle to capture the complexity of
seismic patterns and often overlook valuable non-seismic precursors such as
geophysical, geochemical, and atmospheric anomalies. The integration of such
diverse data sources into forecasting models, combined with advancements in AI
technologies, offers a promising path forward. AI methods, particularly deep
learning, excel at processing complex, large-scale datasets, identifying subtle
patterns, and handling multidimensional relationships, making them well-suited
for overcoming the limitations of conventional approaches.
This review highlights the importance of combining AI with geophysical
knowledge to create robust, physics-informed forecasting models. It explores
current AI methods, input data types, loss functions, and practical
considerations for model development, offering guidance to both geophysicists
and AI researchers. While many AI-based studies oversimplify earthquake
prediction, neglecting critical features such as data imbalance and
spatio-temporal clustering, the integration of specialized geophysical insights
into AI models can address these shortcomings.
We emphasize the importance of interdisciplinary collaboration, urging
geophysicists to experiment with AI architectures thoughtfully and encouraging
AI experts to deepen their understanding of seismology. By bridging these
disciplines, we can develop more accurate, reliable, and societally impactful
earthquake forecasting tools.
|
2502.12164
|
Scalable and Robust Physics-Informed Graph Neural Networks for Water
Distribution Systems
|
cs.NE cs.LG cs.SY eess.SY
|
Water distribution systems (WDSs) are an important part of critical
infrastructure becoming increasingly significant in the face of climate change
and urban population growth. We propose a robust and scalable surrogate deep
learning (DL) model to enable efficient planning, expansion, and rehabilitation
of WDSs. Our approach incorporates an improved graph neural network
architecture, an adapted physics-informed algorithm, an innovative training
scheme, and a physics-preserving data normalization method. Evaluation results
on a number of WDSs demonstrate that our model outperforms the current
state-of-the-art DL model. Moreover, our method allows us to scale the model to
bigger and more realistic WDSs. Furthermore, our approach makes the model more
robust to out-of-distribution input features (demands, pipe diameters). Hence,
our proposed method constitutes a significant step towards bridging the
simulation-to-real gap in the use of artificial intelligence for WDSs.
|
2502.12167
|
TastePepAI, an artificial intelligence platform for taste peptide de
novo design
|
cs.LG cs.AI
|
Taste peptides have emerged as promising natural flavoring agents attributed
to their unique organoleptic properties, high safety profile, and potential
health benefits. However, the de novo identification of taste peptides derived
from animal, plant, or microbial sources remains a time-consuming and
resource-intensive process, significantly impeding their widespread application
in the food industry. Here, we present TastePepAI, a comprehensive artificial
intelligence framework for customized taste peptide design and safety
assessment. As the key element of this framework, a loss-supervised adaptive
variational autoencoder (LA-VAE) is implemented to efficiently optimize the
latent representation of sequences during training and facilitate the
generation of target peptides with desired taste profiles. Notably, our model
incorporates a novel taste-avoidance mechanism, allowing for selective flavor
exclusion. Subsequently, our in-house toxicity prediction algorithm
(SpepToxPred) is integrated into the framework to perform rigorous safety
evaluation of generated peptides. Using this integrated platform, we
successfully identified 73 peptides exhibiting sweet, salty, and umami tastes,
significantly expanding the current repertoire of taste peptides. This work
demonstrates the potential of TastePepAI in accelerating taste peptide
discovery for food applications and provides a versatile framework adaptable to
broader peptide engineering challenges.
|
2502.12168
|
CFIRSTNET: Comprehensive Features for Static IR Drop Estimation with
Neural Network
|
cs.LG cs.CV
|
IR drop estimation is now considered a first-order metric due to concerns
about reliability and performance in modern electronic products. Since the
traditional solution involves a lengthy iteration and simulation flow,
achieving fast yet accurate estimation has become an essential demand. In this
work, with the help of modern AI acceleration techniques, we propose a
comprehensive solution that combines the advantages of image-based and
netlist-based features in a neural network framework to obtain high-quality IR
drop predictions very effectively for modern designs. A customized
convolutional neural network (CNN) is developed to extract PDN features and
make static IR drop estimations. Trained and evaluated on the open-source
dataset of the ICCAD CAD Contest 2023 IR drop estimation problem, experimental
results show that we obtain the best quality in the benchmark, demonstrating
the effectiveness of our approach to this important design topic.
|
2502.12169
|
Antimatter Annihilation Vertex Reconstruction with Deep Learning for
ALPHA-g Radial Time Projection Chamber
|
physics.ins-det cs.LG hep-ex
|
The ALPHA-g experiment at CERN aims to precisely measure the terrestrial
gravitational acceleration of antihydrogen atoms. A radial Time Projection
Chamber (rTPC), which surrounds the ALPHA-g magnetic trap, is employed to
determine the annihilation location, called the vertex. The standard approach
requires identifying the trajectories of the ionizing particles in the rTPC
from the location of their interaction in the gas (spacepoints), and inferring
the vertex positions by finding the point where those trajectories (helices)
pass closest to one another. In this work, we present a novel approach to
vertex reconstruction using an ensemble of models based on the PointNet deep
learning architecture. The newly developed model, PointNet Ensemble for
Annihilation Reconstruction (PEAR), directly learns the relation between the
location of the vertices and the rTPC spacepoints, thus eliminating the need to
identify and fit the particle tracks. PEAR shows strong performance in
reconstructing vertical vertex positions from simulated data, which is superior
to the standard approach for all metrics considered. Furthermore, the deep
learning approach can reconstruct the vertical vertex position when the
standard approach fails.
|
2502.12170
|
MUDDFormer: Breaking Residual Bottlenecks in Transformers via Multiway
Dynamic Dense Connections
|
cs.LG cs.AI cs.CL
|
We propose MUltiway Dynamic Dense (MUDD) connections, a simple yet effective
method to address the limitations of residual connections and enhance
cross-layer information flow in Transformers. Unlike existing dense connection
approaches with static and shared connection weights, MUDD generates connection
weights dynamically depending on hidden states at each sequence position and
for each decoupled input stream (the query, key, value or residual) of a
Transformer block. MUDD connections can be seamlessly integrated into any
Transformer architecture to create MUDDFormer. Extensive experiments show that
MUDDFormer significantly outperforms Transformers across various model
architectures and scales in language modeling, achieving the performance of
Transformers trained with 1.8X-2.4X compute. Notably, MUDDPythia-2.8B matches
Pythia-6.9B in pretraining ppl and downstream tasks and even rivals Pythia-12B
in five-shot settings, while adding only 0.23% parameters and 0.4% computation.
Code in JAX and PyTorch and pre-trained models are available at
https://github.com/Caiyun-AI/MUDDFormer .
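The abstract contrasts static, shared dense-connection weights with weights generated dynamically from the hidden states. As a rough illustration only, the sketch below shows one plausible reading of that idea: the input to a layer is a per-position mix of all earlier hidden states, with mix weights computed from the latest state. All names and dimensions are hypothetical; MUDD's exact per-stream (query/key/value/residual) formulation is not reproduced here.

```python
import numpy as np

# Hedged sketch of "dynamic dense connections": instead of a fixed residual
# stream, a layer reads a weighted mix of all earlier hidden states, and the
# mix weights are themselves a function of the current hidden state.
rng = np.random.default_rng(0)
d, n_prev = 8, 4                              # model width, earlier layers

H = rng.normal(size=(n_prev, d))              # hidden states of layers 0..3
W_mix = rng.normal(size=(n_prev, d)) * 0.1    # weight-generating projection

def dynamic_dense_input(H):
    w = W_mix @ H[-1]                         # weights depend on latest state
    w = np.exp(w) / np.exp(w).sum()           # normalize over layers (softmax)
    return w @ H                              # convex mix of earlier states

x = dynamic_dense_input(H)
assert x.shape == (d,)
```

A static dense connection, by contrast, would use a fixed shared `w` for every position, which is the limitation the paper's dynamic variant is said to address.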
|
2502.12171
|
GoRA: Gradient-driven Adaptive Low Rank Adaptation
|
cs.LG cs.AI cs.CL
|
Low-Rank Adaptation (LoRA) is a crucial method for efficiently fine-tuning
pretrained large language models (LLMs), with its performance largely
influenced by two key factors: rank and initialization strategy. Numerous LoRA
variants have been proposed to enhance its performance by addressing these
factors. However, these variants often compromise LoRA's usability or
efficiency. In this paper, we analyze the fundamental limitations of existing
methods and introduce a novel approach, GoRA (Gradient-driven Adaptive Low Rank
Adaptation), which adaptively assigns ranks and initializes weights for
low-rank adapters simultaneously based on gradient information. Extensive
experimental results demonstrate that GoRA significantly improves performance
while preserving the high usability and efficiency of LoRA. On the T5 model
fine-tuned for the GLUE benchmark, GoRA achieves a 5.88-point improvement over
LoRA and slightly surpasses full fine-tuning. Similarly, on the
Llama3.1-8B-Base model fine-tuned for GSM8k tasks, GoRA outperforms LoRA with a
5.13-point improvement and exceeds full fine-tuning in high-rank settings by a
margin of 2.05 points.
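GoRA itself is novel to this paper, but the LoRA mechanism it builds on is standard and can be sketched briefly: a frozen weight is augmented with a trainable low-rank product scaled by alpha/r, with the up-projection initialized to zero so training starts from the base model. The dimensions below are illustrative; GoRA's gradient-driven rank assignment and initialization are not reproduced.

```python
import numpy as np

# Minimal sketch of a standard low-rank (LoRA) adapter, not GoRA itself.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 32, 4, 8          # illustrative sizes

W = rng.normal(size=(d_out, d_in))            # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01         # trainable down-projection
B = np.zeros((d_out, r))                      # trainable up-projection, zero init

def adapted_forward(x):
    # y = W x + (alpha / r) * B A x ; only A and B are updated in training
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B = 0, the adapter is an exact no-op on the base model at init.
assert np.allclose(adapted_forward(x), W @ x)
```

The rank `r` and the initialization of `A`/`B` are exactly the two factors the abstract identifies as performance-critical, which GoRA sets adaptively from gradient information.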
|
2502.12172
|
Application-oriented automatic hyperparameter optimization for spiking
neural network prototyping
|
cs.NE cs.LG
|
Hyperparameter optimization (HPO) is of paramount importance in the
development of high-performance, specialized artificial intelligence (AI)
models, ranging from well-established machine learning (ML) solutions to the
deep learning (DL) domain and the field of spiking neural networks (SNNs). The
latter introduce further complexity due to the neuronal computational units and
their additional hyperparameters, whose inadequate setting can dramatically
impact the final model performance. At the cost of possibly reduced
generalization capabilities, the most suitable strategy to fully unlock the
power of SNNs is to adopt an application-oriented approach and perform
extensive HPO experiments. To facilitate these operations, automatic pipelines
are fundamental, and their configuration is crucial. In this document, the
Neural Network Intelligence (NNI) toolkit is used as a reference framework to
present one such solution, with a use-case example providing evidence of the
corresponding results. In addition, a summary of published works employing the
presented pipeline is reported as a possible source of insights into
application-oriented HPO experiments for SNN prototyping.
|
2502.12173
|
nanoML for Human Activity Recognition
|
cs.LG cs.AI
|
Human Activity Recognition (HAR) is critical for applications in healthcare,
fitness, and IoT, but deploying accurate models on resource-constrained devices
remains challenging due to high energy and memory demands. This paper
demonstrates the application of Differentiable Weightless Neural Networks
(DWNs) to HAR, achieving competitive accuracies of 96.34% and 96.67% while
consuming only 56nJ and 104nJ per sample, with an inference time of just 5ns
per sample. The DWNs were implemented and evaluated on an FPGA, showcasing
their practical feasibility for energy-efficient hardware deployment. DWNs
achieve up to 926,000x energy savings and 260x memory reduction compared to
state-of-the-art deep learning methods. These results position DWNs as a
nano-machine learning (nanoML) model for HAR, setting a new benchmark in energy
efficiency and compactness for edge and wearable devices, paving the way for
ultra-efficient edge AI.
|
2502.12174
|
Robust blue-green urban flood risk management optimised with a genetic
algorithm for multiple rainstorm return periods
|
cs.NE cs.CE cs.CY
|
Flood risk managers seek to optimise Blue-Green Infrastructure (BGI) designs
to maximise return on investment. Current systems often use optimisation
algorithms and detailed flood models to maximise benefit-cost ratios for single
rainstorm return periods. However, these schemes may lack robustness in
mitigating flood risks across different storm magnitudes. For example, a BGI
scheme optimised for a 100-year return period may differ from one optimised for
a 10-year return period. This study introduces a novel methodology
incorporating five return periods (T = 10, 20, 30, 50, and 100 years) into a
multi-objective BGI optimisation framework. The framework combines a
Non-dominated Sorting Genetic Algorithm II (NSGA-II) with a fully distributed
hydrodynamic model to optimise the spatial placement and combined size of BGI
features. For the first time, direct damage cost (DDC) and expected annual
damage (EAD), calculated for various building types, are used as risk objective
functions, transforming a many-objective problem into a multi-objective one.
Performance metrics such as Median Risk Difference (MedRD), Maximum Risk
Difference (MaxRD), and Area Under Pareto Front (AUPF) reveal that a 100-year
optimised BGI design performs poorly when evaluated for other return periods,
particularly shorter ones. In contrast, a BGI design optimised using composite
return periods enhances performance metrics across all return periods, with the
greatest improvements observed in MedRD (22%) and AUPF (73%) for the 20-year
return period, and MaxRD (23%) for the 50-year return period. Furthermore,
climate uplift stress testing confirms the robustness of the proposed design to
future rainfall extremes. This study advocates a paradigm shift in flood risk
management, moving from single maximum to multiple rainstorm return
period-based designs to enhance resilience and adaptability to future climate
extremes.
|
2502.12175
|
Spatiotemporal Graph Neural Networks in short term load forecasting:
Does adding Graph Structure in Consumption Data Improve Predictions?
|
cs.LG cs.AI
|
Short term Load Forecasting (STLF) plays an important role in traditional and
modern power systems. Most STLF models predominantly exploit temporal
dependencies from historical data to predict future consumption. Nowadays, with
the widespread deployment of smart meters, their data can contain
spatiotemporal dependencies. In particular, their consumption data is not only
correlated to historical values but also to the values of neighboring smart
meters. This new characteristic motivates researchers to explore and experiment
with new models that can effectively integrate spatiotemporal interrelations to
increase forecasting performance. Spatiotemporal Graph Neural Networks (STGNNs)
can leverage such interrelations by modeling relationships between smart meters
as a graph and using these relationships as additional features to predict
future energy consumption. While extensively studied in other spatiotemporal
forecasting domains such as traffic, environments, or renewable energy
generation, their application to load forecasting remains relatively
unexplored, particularly in scenarios where the graph structure is not
inherently available. This paper overviews the current literature focusing on
STGNNs with application in STLF. Additionally, from a technical perspective, it
also benchmarks selected STGNN models for STLF at the residential and aggregate
levels. The results indicate that incorporating graph features can improve
forecasting accuracy at the residential level; however, this effect is not
reflected at the aggregate level.
|
2502.12176
|
Ten Challenging Problems in Federated Foundation Models
|
cs.LG cs.AI
|
Federated Foundation Models (FedFMs) represent a distributed learning
paradigm that fuses the general competences of foundation models with the
privacy-preserving capabilities of federated learning. This combination allows
large foundation models and small local domain models at remote clients to
learn from each other in a teacher-student setting. This
paper provides a comprehensive summary of the ten challenging problems inherent
in FedFMs, encompassing foundational theory, utilization of private data,
continual learning, unlearning, Non-IID and graph data, bidirectional knowledge
transfer, incentive mechanism design, game mechanism design, model
watermarking, and efficiency. The ten challenging problems manifest in five
pivotal aspects: ``Foundational Theory," which aims to establish a coherent and
unifying theoretical framework for FedFMs; ``Data," addressing the difficulties
in leveraging domain-specific knowledge from private data while maintaining
privacy; ``Heterogeneity," examining variations in data, model, and
computational resources across clients; ``Security and Privacy," focusing on
defenses against malicious attacks and model theft; and ``Efficiency,"
highlighting the need for improvements in training, communication, and
parameter efficiency. For each problem, we offer a clear mathematical
definition of the objective function, analyze existing methods, and discuss the
key challenges and potential solutions. This in-depth exploration aims to
advance the theoretical foundations of FedFMs, guide practical implementations,
and inspire future research to overcome these obstacles, thereby enabling
robust, efficient, and privacy-preserving FedFMs in various real-world
applications.
|
2502.12177
|
Recent Advances of NeuroDiffEq -- An Open-Source Library for
Physics-Informed Neural Networks
|
cs.LG
|
Solving differential equations is a critical challenge across a host of
domains. While many software packages efficiently solve these equations using
classical numerical approaches, there has been less effort in developing a
library for researchers interested in solving such systems using neural
networks. With PyTorch as its backend, NeuroDiffEq is a software library that
exploits neural networks to solve differential equations. In this paper, we
highlight the latest features of the NeuroDiffEq library since its debut. We
show that NeuroDiffEq can solve complex boundary value problems in arbitrary
dimensions, tackle boundary conditions at infinity, and maintain flexibility
for dynamic injection at runtime.
|
2502.12178
|
Direct Preference Optimization-Enhanced Multi-Guided Diffusion Model for
Traffic Scenario Generation
|
cs.LG cs.MA
|
Diffusion-based models are recognized for their effectiveness in using
real-world driving data to generate realistic and diverse traffic scenarios.
These models employ guided sampling to incorporate specific traffic preferences
and enhance scenario realism. However, guiding the sampling process to conform
to traffic rules and preferences can result in deviations from real-world
traffic priors, potentially leading to unrealistic behaviors. To address
this challenge, we introduce a multi-guided diffusion model that utilizes a
novel training strategy to closely adhere to traffic priors, even when
employing various combinations of guides. This model adopts a multi-task
learning framework, enabling a single diffusion model to process various guide
inputs. For increased guided sampling precision, our model is fine-tuned using
the Direct Preference Optimization (DPO) algorithm. This algorithm optimizes
preferences based on guide scores, effectively navigating the complexities and
challenges associated with the expensive and often non-differentiable gradient
calculations during the guided sampling fine-tuning process. Evaluated on
the nuScenes dataset, our model provides a strong baseline for balancing
realism, diversity, and controllability in traffic scenario generation.
|
2502.12179
|
Identifiable Steering via Sparse Autoencoding of Multi-Concept Shifts
|
cs.LG cs.AI cs.CL
|
Steering methods manipulate the representations of large language models
(LLMs) to induce responses that have desired properties, e.g., truthfulness,
offering a promising approach for LLM alignment without the need for
fine-tuning. Traditionally, steering has relied on supervision, such as from
contrastive pairs of prompts that vary in a single target concept, which is
costly to obtain and limits the speed of steering research. An appealing
alternative is to use unsupervised approaches such as sparse autoencoders
(SAEs) to map LLM embeddings to sparse representations that capture
human-interpretable concepts. However, without further assumptions, SAEs may
not be identifiable: they could learn latent dimensions that entangle multiple
concepts, leading to unintentional steering of unrelated properties. We
introduce Sparse Shift Autoencoders (SSAEs) that instead map the differences
between embeddings to sparse representations. Crucially, we show that SSAEs are
identifiable from paired observations that vary in \textit{multiple unknown
concepts}, leading to accurate steering of single concepts without the need for
supervision. We empirically demonstrate accurate steering across semi-synthetic
and real-world language datasets using Llama-3.1 embeddings.
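SSAEs are specific to this paper, but the vanilla sparse autoencoder they modify is standard: a ReLU encoder produces nonnegative sparse codes, a linear decoder reconstructs the input, and an L1 penalty encourages sparsity. The sketch below is illustrative only; per the abstract, the SSAE change is to feed the *difference* of two embeddings rather than a single embedding, and all sizes and coefficients here are hypothetical.

```python
import numpy as np

# Sketch of a vanilla sparse autoencoder (the standard SAE the paper builds on).
rng = np.random.default_rng(1)
d_model, d_latent, l1_coef = 16, 64, 1e-3     # illustrative sizes

W_enc = rng.normal(size=(d_latent, d_model)) * 0.1
b_enc = np.zeros(d_latent)
W_dec = rng.normal(size=(d_model, d_latent)) * 0.1

def sae(x):
    z = np.maximum(W_enc @ x + b_enc, 0.0)    # ReLU: nonnegative, sparse codes
    x_hat = W_dec @ z                         # linear decoder
    loss = np.sum((x - x_hat) ** 2) + l1_coef * np.sum(np.abs(z))
    return z, x_hat, loss

# SSAE-style input per the abstract: the difference of two embeddings that
# vary in some (possibly multiple, unknown) concepts.
e1, e2 = rng.normal(size=d_model), rng.normal(size=d_model)
z, x_hat, loss = sae(e2 - e1)
assert z.shape == (d_latent,) and np.all(z >= 0)
```

The identifiability claim of the paper concerns which latent dimensions such a model can recover from these paired differences, not the forward pass itself.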
|
2502.12180
|
ClusMFL: A Cluster-Enhanced Framework for Modality-Incomplete Multimodal
Federated Learning in Brain Imaging Analysis
|
eess.IV cs.AI cs.CV cs.LG
|
Multimodal Federated Learning (MFL) has emerged as a promising approach for
collaboratively training multimodal models across distributed clients,
particularly in healthcare domains. In the context of brain imaging analysis,
modality incompleteness presents a significant challenge, where some
institutions may lack specific imaging modalities (e.g., PET, MRI, or CT) due
to privacy concerns, device limitations, or data availability issues. While
existing work typically assumes modality completeness or oversimplifies
missing-modality scenarios, we simulate a more realistic setting by considering
both client-level and instance-level modality incompleteness in this study.
Building on this realistic simulation, we propose ClusMFL, a novel MFL
framework that leverages feature clustering for cross-institutional brain
imaging analysis under modality incompleteness. Specifically, ClusMFL utilizes
the FINCH algorithm to construct a pool of cluster centers for the feature
embeddings of each modality-label pair, effectively capturing fine-grained data
distributions. These cluster centers are then used for feature alignment within
each modality through supervised contrastive learning, while also acting as
proxies for missing modalities, allowing cross-modal knowledge transfer.
Furthermore, ClusMFL employs a modality-aware aggregation strategy, further
enhancing the model's performance in scenarios with severe modality
incompleteness. We evaluate the proposed framework on the ADNI dataset,
utilizing structural MRI and PET scans. Extensive experimental results
demonstrate that ClusMFL achieves state-of-the-art performance compared to
various baseline methods across varying levels of modality incompleteness,
providing a scalable solution for cross-institutional brain imaging analysis.
|
2502.12181
|
3D ReX: Causal Explanations in 3D Neuroimaging Classification
|
eess.IV cs.AI cs.CV cs.LG
|
Explainability remains a significant problem for AI models in medical
imaging, making it challenging for clinicians to trust AI-driven predictions.
We introduce 3D ReX, the first causality-based post-hoc explainability tool for
3D models. 3D ReX uses the theory of actual causality to generate
responsibility maps which highlight the regions most crucial to the model's
decision. We test 3D ReX on a stroke detection model, providing insight into
the spatial distribution of features relevant to stroke.
|
2502.12182
|
Towards Transparent and Accurate Plasma State Monitoring at JET
|
physics.plasm-ph cs.AI cs.LG
|
Controlling and monitoring plasma within a tokamak device is complex and
challenging. Plasma off-normal events, such as disruptions, are hindering
steady-state operation. For large devices, they can even endanger the machine's
integrity and represent, in general, one of the most serious concerns for the
exploitation of the tokamak concept in future power plants.
state monitoring carries the potential to enable an understanding of such
phenomena and their evolution which is crucial for the successful operation of
tokamaks. This paper presents the application of a transparent and data-driven
methodology to monitor the plasma state in a tokamak. Compared to previous
studies in the field, supervised and unsupervised learning techniques are
combined. The dataset consisted of 520 expert-validated discharges from JET.
The goal was to provide an interpretable plasma state representation for the
JET operational space by leveraging multi-task learning for the first time in
the context of plasma state monitoring. When evaluated as disruption
predictors, a sequence-based approach showed significant improvements compared
to the state-based models. The best resulting network achieved a promising
cross-validated success rate when combined with a physical indicator and
accounting for nearby instabilities. Qualitative evaluations of the learned
latent space uncovered operational and disruptive regions as well as patterns
related to learned dynamics and global feature importance. The applied
methodology provides novel possibilities for the definition of triggers to
switch between different control scenarios, data analysis, and learning as well
as exploring latent dynamics for plasma state monitoring. It also showed
promising quantitative and qualitative results with warning times suitable for
avoidance purposes and distributions that are consistent with known physical
mechanisms.
|
2502.12183
|
Leveraging large language models for structured information extraction
from pathology reports
|
cs.CL cs.LG
|
Background: Structured information extraction from unstructured
histopathology reports facilitates data accessibility for clinical research.
Manual extraction by experts is time-consuming and expensive, limiting
scalability. Large language models (LLMs) offer efficient automated extraction
through zero-shot prompting, requiring only natural language instructions
without labeled data or training. We evaluate LLMs' accuracy in extracting
structured information from breast cancer histopathology reports, compared to
manual extraction by a trained human annotator.
Methods: We developed the Medical Report Information Extractor, a web
application leveraging LLMs for automated extraction. We developed a gold
standard extraction dataset to evaluate the human annotator alongside five LLMs
including GPT-4o, a leading proprietary model, and the Llama 3 model family,
which allows self-hosting for data privacy. Our assessment involved 111
histopathology reports from the Breast Cancer Now (BCN) Generations Study,
extracting 51 pathology features specified in the study's data dictionary.
Results: Evaluation against the gold standard dataset showed that both Llama
3.1 405B (94.7% accuracy) and GPT-4o (96.1%) achieved extraction accuracy
comparable to the human annotator (95.4%; p = 0.146 and p = 0.106,
respectively). While Llama 3.1 70B (91.6%) performed below human accuracy (p
<0.001), its reduced computational requirements make it a viable option for
self-hosting.
Conclusion: We developed an open-source tool for structured information
extraction that can be customized by non-programmers using natural language.
Its modular design enables reuse for various extraction tasks, producing
standardized, structured data from unstructured text reports to facilitate
analytics through improved accessibility and interoperability.
|
2502.12185
|
Large Language Models for Extrapolative Modeling of Manufacturing
Processes
|
cs.CL cs.AI
|
Conventional predictive modeling of parametric relationships in manufacturing
processes is limited by the subjectivity of human expertise and intuition on
the one hand and by the cost and time of experimental data generation on the
other hand. This work addresses this issue by establishing a new Large Language
Model (LLM) framework. The novelty lies in combining automatic extraction of
process-relevant knowledge embedded in the literature with iterative model
refinement based on a small amount of experimental data. This approach is
evaluated on three distinct manufacturing processes that are based on
machining, deformation, and additive principles. The results show that for the
same small experimental data budget the models derived by our framework have
unexpectedly high extrapolative performance, often surpassing the capabilities
of conventional Machine Learning. Further, our approach eliminates manual
generation of initial models or expertise-dependent interpretation of the
literature. The results also reveal the importance of the nature of the
knowledge extracted from the literature and the significance of both the
knowledge extraction and model refinement components.
|
2502.12186
|
E2CB2former: Effective and Explainable Transformer for CB2 Receptor
Ligand Activity Prediction
|
cs.LG cs.AI q-bio.QM
|
Accurate prediction of CB2 receptor ligand activity is pivotal for advancing
drug discovery targeting this receptor, which is implicated in inflammation,
pain management, and neurodegenerative conditions. Although conventional
machine learning and deep learning techniques have shown promise, their limited
interpretability remains a significant barrier to rational drug design. In this
work, we introduce CB2former, a framework that combines a Graph Convolutional
Network with a Transformer architecture to predict CB2 receptor ligand
activity. By leveraging the Transformer's self attention mechanism alongside
the GCN's structural learning capability, CB2former not only enhances
predictive performance but also offers insights into the molecular features
underlying receptor activity. We benchmark CB2former against diverse baseline
models including Random Forest, Support Vector Machine, K Nearest Neighbors,
Gradient Boosting, Extreme Gradient Boosting, Multilayer Perceptron,
Convolutional Neural Network, and Recurrent Neural Network and demonstrate its
superior performance with an R squared of 0.685, an RMSE of 0.675, and an AUC
of 0.940. Moreover, attention weight analysis reveals key molecular
substructures influencing CB2 receptor activity, underscoring the model's
potential as an interpretable AI tool for drug discovery. This ability to
pinpoint critical molecular motifs can streamline virtual screening, guide lead
optimization, and expedite therapeutic development. Overall, our results
showcase the transformative potential of advanced AI approaches exemplified by
CB2former in delivering both accurate predictions and actionable molecular
insights, thus fostering interdisciplinary collaboration and innovation in drug
discovery.
|
2502.12187
|
Hallucinations are inevitable but statistically negligible
|
cs.CL cs.FL cs.LG math.ST stat.ML stat.TH
|
Hallucinations, a phenomenon where a language model (LM) generates nonfactual
content, pose a significant challenge to the practical deployment of LMs. While
many empirical methods have been proposed to mitigate hallucinations, a recent
study established a computability-theoretic result showing that any LM will
inevitably generate hallucinations on an infinite set of inputs, regardless of
the quality and quantity of training datasets and the choice of the language
model architecture and training and inference algorithms. Although the
computability-theoretic result may seem pessimistic, its significance from a
practical viewpoint has remained unclear. In contrast, we present a positive
theoretical result from a probabilistic perspective. Specifically, we prove
that hallucinations can be made statistically negligible, provided that the
quality and quantity of the training data are sufficient. Interestingly, our
positive result coexists with the computability-theoretic result, implying that
while hallucinations on an infinite set of inputs cannot be entirely
eliminated, their probability can always be reduced by improving algorithms and
training data. By evaluating the two seemingly contradictory results through
the lens of information theory, we argue that our probability-theoretic
positive result better reflects practical considerations than the
computability-theoretic negative result.
|
2502.12188
|
Boosting Generalization in Diffusion-Based Neural Combinatorial Solver
via Energy-guided Sampling
|
cs.LG cs.AI
|
Diffusion-based Neural Combinatorial Optimization (NCO) has demonstrated
effectiveness in solving NP-complete (NPC) problems by learning discrete
diffusion models for solution generation, eliminating hand-crafted domain
knowledge. Despite their success, existing NCO methods face significant
challenges in both cross-scale and cross-problem generalization, and high
training costs compared to traditional solvers. While recent studies have
introduced training-free guidance approaches that leverage pre-defined guidance
functions for zero-shot conditional generation, such methodologies have not
been extensively explored in combinatorial optimization. To bridge this gap, we
propose a general energy-guided sampling framework during inference time that
enhances both the cross-scale and cross-problem generalization capabilities of
diffusion-based NCO solvers without requiring additional training. We provide
a theoretical analysis that helps in understanding the cross-problem transfer
capability. Our experimental results demonstrate that a diffusion solver,
trained exclusively on the Traveling Salesman Problem (TSP), can achieve
competitive zero-shot solution generation on TSP variants, such as Prize
Collecting TSP (PCTSP) and the Orienteering Problem (OP), through energy-guided
sampling across different problem scales.
|
2502.12189
|
Self-supervised Attribute-aware Dynamic Preference Ranking Alignment
|
cs.CL cs.AI
|
Reinforcement Learning from Human Feedback and its variants excel in aligning
with human intentions to generate helpful, harmless, and honest responses.
However, most of them rely on costly human-annotated pairwise comparisons for
supervised alignment, which is not suitable for list-level scenarios, such as
community question answering. Additionally, human preferences are influenced by
multiple intrinsic factors in responses, leading to decision-making
inconsistencies. Therefore, we propose \textbf{Se}lf-supervised
\textbf{A}ttribute-aware \textbf{d}ynamic \textbf{p}reference \textbf{ra}nking,
called SeAdpra. It quantifies preference differences between responses
based on Attribute-Perceptual Distance Factors (APDF) and dynamically
determines the list-wise alignment order. Furthermore, it achieves fine-grained
preference difference learning and enables precise alignment with the optimal
one. We specifically constructed a challenging code preference dataset named
StaCoCoQA, and introduced more cost-effective and scalable preference
evaluation metrics: PrefHit and PrefRecall. Extensive experimental results show
that SeAdpra exhibits superior performance and generalizability on both
StaCoCoQA and preference datasets from eight popular domains.
|
2502.12191
|
AnyTouch: Learning Unified Static-Dynamic Representation across Multiple
Visuo-tactile Sensors
|
cs.LG cs.CV cs.RO
|
Visuo-tactile sensors aim to emulate human tactile perception, enabling
robots to precisely understand and manipulate objects. Over time, numerous
meticulously designed visuo-tactile sensors have been integrated into robotic
systems, aiding in completing various tasks. However, the distinct data
characteristics of these low-standardized visuo-tactile sensors hinder the
establishment of a powerful tactile perception system. We consider that the key
to addressing this issue lies in learning unified multi-sensor representations,
thereby integrating the sensors and promoting tactile knowledge transfer
between them. To achieve unified representation of this nature, we introduce
TacQuad, an aligned multi-modal multi-sensor tactile dataset from four
different visuo-tactile sensors, which enables the explicit integration of
various sensors. Recognizing that humans perceive the physical environment by
acquiring diverse tactile information such as texture and pressure changes, we
further propose to learn unified multi-sensor representations from both static
and dynamic perspectives. By integrating tactile images and videos, we present
AnyTouch, a unified static-dynamic multi-sensor representation learning
framework with a multi-level structure, aimed at both enhancing comprehensive
perceptual abilities and enabling effective cross-sensor transfer. This
multi-level architecture captures pixel-level details from tactile data via
masked modeling and enhances perception and transferability by learning
semantic-level sensor-agnostic features through multi-modal alignment and
cross-sensor matching. We provide a comprehensive analysis of multi-sensor
transferability, and validate our method on various datasets and in the
real-world pouring task. Experimental results show that our method outperforms
existing methods and exhibits outstanding static and dynamic perception
capabilities across various sensors.
|
2502.12193
|
AI and the Law: Evaluating ChatGPT's Performance in Legal Classification
|
cs.CL cs.AI
|
The use of ChatGPT to analyze and classify evidence in criminal proceedings
has been a topic of ongoing discussion. However, to the best of our knowledge,
this issue has not been studied in the context of the Polish language. This
study addresses this research gap by evaluating the effectiveness of ChatGPT in
classifying legal cases under the Polish Penal Code. The results show excellent
binary classification accuracy, with all positive and negative cases correctly
categorized. In addition, a qualitative evaluation confirms that the legal
basis provided for each case, along with the relevant legal content, was
appropriate. The results obtained suggest that ChatGPT can effectively analyze
and classify evidence while applying the appropriate legal rules. In
conclusion, ChatGPT has the potential to assist interested parties in the
analysis of evidence and serve as a valuable legal resource for individuals
with less experience or knowledge in this area.
|
2502.12195
|
GeneralizeFormer: Layer-Adaptive Model Generation across Test-Time
Distribution Shifts
|
cs.LG
|
We consider the problem of test-time domain generalization, where a model is
trained on several source domains and adjusted on target domains never seen
during training. Different from the common methods that fine-tune the model or
adjust the classifier parameters online, we propose to generate multiple layer
parameters on the fly during inference by a lightweight meta-learned
transformer, which we call \textit{GeneralizeFormer}. The layer-wise parameters
are generated per target batch without fine-tuning or online adjustment. By
doing so, our method is more effective in dynamic scenarios with multiple
target distributions and also avoids forgetting valuable source distribution
characteristics. Moreover, by considering layer-wise gradients, the proposed
method adapts itself to various distribution shifts. To reduce the
computational and time cost, we fix the convolutional parameters while only
generating parameters of the Batch Normalization layers and the linear
classifier. Experiments on six widely used domain generalization datasets
demonstrate the benefits and abilities of the proposed method to efficiently
handle various distribution shifts, generalize in dynamic scenarios, and avoid
forgetting.
|
2502.12196
|
Integrated Scheduling Model for Arrivals and Departures in Metroplex
Terminal Area
|
cs.NE math.OC
|
In light of the rapid expansion of civil aviation, addressing the delay and
congestion phenomena in the vicinity of metroplexes caused by the imbalance
between air traffic flow and capacity is crucial. This paper first proposes a
bi-level optimization model for the collaborative flight sequencing of arrival
and departure flights in the metroplex with multiple airports, considering both
the runway systems and TMA (Terminal Control Area) entry/exit fixes. Moreover,
the model adapts to various traffic scenarios. A genetic algorithm is
employed to solve the proposed model. The Shanghai TMA, located in China, is
used as a case study, and it includes two airports, Shanghai Hongqiao
International Airport and Shanghai Pudong International Airport. The results
demonstrate that the model can reduce arrival delay by 51.52%, departure delay
by 18.05%, and the runway occupation time of departure flights by 23.83%.
Furthermore, the model utilized in this study significantly enhances flight
scheduling efficiency, providing a more efficient solution than the traditional
FCFS (First Come, First Served) approach. Additionally, the algorithm employed
offers further improvements over the NSGA-II algorithm.
|
2502.12197
|
A Closer Look at System Prompt Robustness
|
cs.CL cs.AI
|
System prompts have emerged as a critical control surface for specifying the
behavior of LLMs in chat and agent settings. Developers depend on system
prompts to specify important context, output format, personalities, guardrails,
content policies, and safety countermeasures, all of which require models to
robustly adhere to the system prompt, especially when facing conflicting or
adversarial user inputs. In practice, models often forget to consider relevant
guardrails or fail to resolve conflicting demands between the system and the
user. In this work, we study various methods for improving system prompt
robustness by creating realistic new evaluation and fine-tuning datasets based
on prompts collected from OpenAI's GPT Store and HuggingFace's
HuggingChat. Our experiments assessing models with a panel of new and existing
benchmarks show that performance can be considerably improved with realistic
fine-tuning data, as well as inference-time interventions such as
classifier-free guidance. Finally, we analyze the results of recently released
reasoning models from OpenAI and DeepSeek, which show exciting but uneven
improvements on the benchmarks we study. Overall, current techniques fall short
of ensuring system prompt robustness and further study is warranted.
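The classifier-free guidance intervention mentioned in this abstract can be sketched for next-token logits (an illustrative reading, not the paper's exact formulation; the arrays and guidance scale below are hypothetical):

```python
import numpy as np

def classifier_free_guidance(cond_logits, uncond_logits, guidance_scale=1.5):
    """Steer generation toward the system-prompt-conditioned distribution
    and away from the unconditioned one. At scale 1.0 this reduces to the
    conditioned logits; larger scales amplify the prompt's influence."""
    return uncond_logits + guidance_scale * (cond_logits - uncond_logits)

cond = np.array([2.0, 0.5, -1.0])    # logits computed with the system prompt
uncond = np.array([1.0, 1.0, 0.0])   # logits computed without it
guided = classifier_free_guidance(cond, uncond, guidance_scale=2.0)
```

At `guidance_scale=1.0` the rule returns the conditioned logits unchanged, which makes the scale a convenient knob for trading off prompt adherence against fluency.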
|
2502.12198
|
Maximize Your Diffusion: A Study into Reward Maximization and Alignment
for Diffusion-based Control
|
cs.LG cs.AI
|
Diffusion-based planning, learning, and control methods present a promising
branch of powerful and expressive decision-making solutions. Given the growing
interest, such methods have undergone numerous refinements over the past years.
However, despite these advancements, existing methods are limited in their
investigations regarding general methods for reward maximization within the
decision-making process. In this work, we study extensions of fine-tuning
approaches for control applications. Specifically, we explore extensions and
various design choices for four fine-tuning approaches: reward alignment
through reinforcement learning, direct preference optimization, supervised
fine-tuning, and cascading diffusion. We optimize their usage to merge these
independent efforts into one unified paradigm. We show the utility of such
propositions in offline RL settings and demonstrate empirical improvements over
a rich array of control tasks.
|
2502.12200
|
Efficient and Effective Prompt Tuning via Prompt Decomposition and
Compressed Outer Product
|
cs.CL cs.AI
|
Prompt tuning (PT) offers a cost-effective alternative to fine-tuning
large-scale pre-trained language models (PLMs), requiring only a few parameters
in soft prompt tokens added before the input text. However, existing PT
approaches face two significant issues: (i) They overlook intrinsic semantic
associations between soft prompt tokens, leading to high discreteness and
limited interactions, thus reducing the model's comprehension and effectiveness
in complex tasks. (ii) Due to the complexity of downstream tasks, a long soft
prompt is needed to improve performance, but prompt length correlates
positively with memory usage and computational costs. Achieving high efficiency
and performance remains an ongoing challenge. To address these issues, we
propose a novel low-parameter prompt tuning (LAMP) method, which leverages
prompt decomposition and compressed outer product. Specifically, the prompt
decomposition module employs Truncated SVD to reduce training parameters and
significantly lower the dimensionality of the soft prompt parameter space. It
then utilizes a compressed outer product module to facilitate multiple
interactions among prompt tokens, exploring their intrinsic associations to
enhance knowledge representation. Finally, LAMP uses average pooling to reduce
memory usage and training/inference time. Extensive experiments across six
architectures and eight datasets demonstrate that LAMP outperforms
state-of-the-art PT-based and LoRA-based methods in performance and efficiency.
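The prompt-decomposition idea can be sketched as a truncated SVD over a soft-prompt matrix (a minimal sketch of the parameter-reduction step only; the shapes and rank are hypothetical, and the paper's compressed outer product module is richer than this plain factor product):

```python
import numpy as np

def decompose_prompt(prompt, rank):
    """Truncated SVD of a soft prompt matrix (num_tokens x dim), keeping
    `rank` components; the two factors are what would be trained."""
    u, s, vt = np.linalg.svd(prompt, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank]

def reconstruct_prompt(a, b):
    """Outer-product reconstruction of the full prompt from the factors."""
    return a @ b

rng = np.random.default_rng(0)
prompt = rng.normal(size=(20, 64))        # 20 * 64 = 1280 parameters
a, b = decompose_prompt(prompt, rank=4)   # 20*4 + 4*64 = 336 parameters
approx = reconstruct_prompt(a, b)
```

The rank directly controls the parameter budget: here a rank-4 factorization stores 336 values instead of 1280, at the cost of an approximation error that shrinks as the rank grows.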
|
2502.12202
|
BoT: Breaking Long Thought Processes of o1-like Large Language Models
through Backdoor Attack
|
cs.CL cs.AI cs.LG
|
Longer thought, better performance: large language models with deep reasoning
capabilities, particularly o1-like models, have demonstrated remarkable
performance by generating extensive thought processes during inference. This
trade-off reveals a potential vulnerability: adversaries could compromise model
performance by forcing immediate responses without thought processes. To this
end, in this paper, we introduce a novel attack scenario targeting the long
thought processes of o1-like models and propose BoT (Break CoT), which can
selectively break intrinsic reasoning mechanisms through backdoor attacks. BoT
constructs poisoned datasets with designed triggers and injects backdoors by
either supervised fine-tuning or direct preference optimization. When
triggered, the model directly generates answers without thought processes,
while maintaining normal reasoning capabilities for clean inputs. Extensive
experiments on open-source o1-like models, including recent DeepSeek-R1,
demonstrate that BoT achieves high attack success rates while
maintaining clean accuracy, highlighting the critical safety risk in current
models. Furthermore, the relationship between task difficulty and helpfulness
reveals a potential application for good, enabling users to customize model
behavior based on task complexity. Code is available at
\href{https://github.com/zihao-ai/BoT}{https://github.com/zihao-ai/BoT}.
|
2502.12203
|
An Interpretable Automated Mechanism Design Framework with Large
Language Models
|
cs.LG cs.AI cs.GT cs.NE
|
Mechanism design has long been a cornerstone of economic theory, with
traditional approaches relying on mathematical derivations. Recently, automated
approaches, including differentiable economics with neural networks, have
emerged for designing payments and allocations. While both analytical and
automated methods have advanced the field, they each face significant
weaknesses: mathematical derivations are not automated and often struggle to
scale to complex problems, while automated and especially neural-network-based
approaches suffer from limited interpretability. To address these challenges,
we introduce a novel framework that reformulates mechanism design as a code
generation task. Using large language models (LLMs), we generate heuristic
mechanisms described in code and evolve them to optimize over some evaluation
metrics while ensuring key design criteria (e.g., strategy-proofness) through a
problem-specific fixing process. This fixing process ensures any mechanism
violating the design criteria is adjusted to satisfy them, albeit with some
trade-offs in performance metrics. These trade-offs are factored in during the
LLM-based evolution process. The code generation capabilities of LLMs enable
the discovery of novel and interpretable solutions, bridging the symbolic logic
of mechanism design and the generative power of modern AI. Through rigorous
experimentation, we demonstrate that LLM-generated mechanisms achieve
competitive performance while offering greater interpretability compared to
previous approaches. Notably, our framework can rediscover existing manually
designed mechanisms and provide insights into neural-network based solutions
through Programming-by-Example. These results highlight the potential of LLMs
to not only automate but also enhance the transparency and scalability of
mechanism design, ensuring safe deployment of the mechanisms in society.
|
2502.12204
|
Predicting Depression in Screening Interviews from Interactive
Multi-Theme Collaboration
|
cs.CL cs.AI
|
Automatic depression detection provides cues for early clinical intervention
by clinicians. Clinical interviews for depression detection involve dialogues
centered around multiple themes. Existing studies primarily design end-to-end
neural network models to capture the hierarchical structure of clinical
interview dialogues. However, these methods exhibit defects in modeling the
thematic content of clinical interviews: 1) they fail to capture intra-theme
and inter-theme correlation explicitly, and 2) they do not allow clinicians to
intervene and focus on themes of interest. To address these issues, this paper
introduces PDIMC, an interactive depression detection framework. The framework
leverages in-context learning techniques to identify themes in clinical
interviews and then models both intra-theme and inter-theme correlation.
Additionally, it employs AI-driven feedback to simulate the interests of
clinicians, enabling interactive adjustment of theme importance. PDIMC achieves
absolute improvements of 35\% and 12\% compared to the state-of-the-art on the
depression detection dataset DAIC-WOZ, which demonstrates the effectiveness of
modeling theme correlation and incorporating interactive external feedback.
|
2502.12206
|
Evaluating the Paperclip Maximizer: Are RL-Based Language Models More
Likely to Pursue Instrumental Goals?
|
cs.AI cs.CL cs.LG
|
As large language models (LLMs) continue to evolve, ensuring their alignment
with human goals and values remains a pressing challenge. A key concern is
\textit{instrumental convergence}, where an AI system, in optimizing for a
given objective, develops unintended intermediate goals that override the
ultimate objective and deviate from human-intended goals. This issue is
particularly relevant in reinforcement learning (RL)-trained models, which can
generate creative but unintended strategies to maximize rewards. In this paper,
we explore instrumental convergence in LLMs by comparing models trained with
direct RL optimization (e.g., the o1 model) to those trained with reinforcement
learning from human feedback (RLHF). We hypothesize that RL-driven models
exhibit a stronger tendency for instrumental convergence due to their
optimization of goal-directed behavior in ways that may misalign with human
intentions. To assess this, we introduce InstrumentalEval, a benchmark for
evaluating instrumental convergence in RL-trained LLMs. Initial experiments
reveal cases where a model tasked with making money unexpectedly pursues
instrumental objectives, such as self-replication, implying signs of
instrumental convergence. Our findings contribute to a deeper understanding of
alignment challenges in AI systems and the risks posed by unintended model
behaviors.
|
2502.12207
|
PAR-AdvGAN: Improving Adversarial Attack Capability with Progressive
Auto-Regression AdvGAN
|
cs.LG cs.AI
|
Deep neural networks have demonstrated remarkable performance across various
domains. However, they are vulnerable to adversarial examples, which can lead
to erroneous predictions. Generative Adversarial Networks (GANs) can leverage
the generators and discriminators model to quickly produce high-quality
adversarial examples. Since both modules train in a competitive and
simultaneous manner, GAN-based algorithms like AdvGAN can generate adversarial
examples with better transferability compared to traditional methods. However,
the generation of perturbations is usually limited to a single iteration,
preventing these examples from fully exploiting the potential of the methods.
To tackle this issue, we introduce a novel approach named Progressive
Auto-Regression AdvGAN (PAR-AdvGAN). It incorporates an auto-regressive
iteration mechanism within a progressive generation network to craft
adversarial examples with enhanced attack capability. We thoroughly evaluate
our PAR-AdvGAN method with a large-scale experiment, demonstrating its superior
performance over various state-of-the-art black-box adversarial attacks, as
well as the original AdvGAN. Moreover, PAR-AdvGAN significantly accelerates
adversarial example generation, achieving speeds of up to 335.5 frames per
second on the Inception-v3 model and outperforming gradient-based
transferable attack algorithms. Our code is available at:
https://anonymous.4open.science/r/PAR-01BF/
|
2502.12208
|
AI-Augmented Metamorphic Testing for Comprehensive Validation of
Autonomous Vehicles
|
cs.SE cs.RO
|
Self-driving cars have the potential to revolutionize transportation, but
ensuring their safety remains a significant challenge. These systems must
navigate a variety of unexpected scenarios on the road, and their complexity
poses substantial difficulties for thorough testing. Conventional testing
methodologies face critical limitations, including the oracle problem
(determining whether the system's behavior is correct) and the inability to
exhaustively recreate the range of situations a self-driving car may encounter.
While Metamorphic Testing (MT) offers a partial solution to these challenges,
its application is often limited by simplistic modifications to test scenarios.
In this position paper, we propose enhancing MT by integrating AI-driven image
generation tools, such as Stable Diffusion, to improve testing methodologies.
These tools can generate nuanced variations of driving scenarios within the
operational design domain (ODD), for example altering weather conditions,
modifying environmental elements, or adjusting lane markings, while preserving
the critical features necessary for system evaluation. This approach enables
reproducible testing, efficient reuse of test criteria, and comprehensive
evaluation of a self-driving system's performance across diverse scenarios,
thereby addressing key gaps in current testing practices.
|
2502.12209
|
Suboptimal Shapley Value Explanations
|
stat.ML cs.AI cs.LG
|
Deep Neural Networks (DNNs) have demonstrated strong capacity in supporting a
wide variety of applications. Shapley value has emerged as a prominent tool to
analyze feature importance to help people understand the inference process of
deep neural models. Computing the Shapley value function requires choosing a
baseline to represent a feature's missingness. However, existing random and
conditional baselines could negatively influence the explanation. In this
paper, by analyzing the suboptimality of different baselines, we identify the
problematic baseline where the asymmetric interaction between $\bm{x}'_i$ (the
replacement of the faithful influential feature) and other features has
significant directional bias toward the model's output, and conclude that
$p(y|\bm{x}'_i) = p(y)$ potentially minimizes the asymmetric interaction
involving $\bm{x}'_i$. We further generalize the uninformativeness of
$\bm{x}'_i$ toward the label space $L$ to avoid estimating $p(y)$ and design a
simple uncertainty-based reweighting mechanism to accelerate the computation
process. We conduct experiments on various NLP tasks and our quantitative
analysis demonstrates the effectiveness of the proposed uncertainty-based
reweighting mechanism. Furthermore, by measuring the consistency of
explanations generated by explainable methods and humans, we highlight the
disparity between model inference and human understanding.
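For reference, the exact Shapley value that the baseline discussion above builds on can be computed for a small feature set as follows (the contested baseline choice is hidden inside the value function here; this is the standard Shapley computation, not the paper's uncertainty-based reweighting):

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n):
    """Exact Shapley values for n features: average marginal contribution
    of each feature over all coalitions, with the standard weights."""
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(len(others) + 1):
            for s in combinations(others, r):
                # weight |S|! (n - |S| - 1)! / n! for coalition S
                w = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                phi[i] += w * (value_fn(set(s) | {i}) - value_fn(set(s)))
    return phi
```

For an additive value function the Shapley values recover each feature's weight exactly, which makes a handy sanity check before plugging in a model-based value function.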
|
2502.12210
|
Enhancing Frame Detection with Retrieval Augmented Generation
|
cs.CL cs.AI cs.LG
|
Recent advancements in Natural Language Processing have significantly
improved the extraction of structured semantic representations from
unstructured text, especially through Frame Semantic Role Labeling (FSRL).
Despite this progress, the potential of Retrieval-Augmented Generation (RAG)
models for frame detection remains under-explored. In this paper, we present
the first RAG-based approach for frame detection called RCIF (Retrieve
Candidates and Identify Frames). RCIF is also the first approach to operate
without the need for explicit target span and comprises three main stages: (1)
generation of frame embeddings from various representations; (2) retrieval of
candidate frames given an input text; and (3) identification of the most
suitable frames. We conducted extensive experiments across multiple
configurations, including zero-shot, few-shot, and fine-tuning settings. Our
results show that our retrieval component significantly reduces the complexity
of the task by narrowing the search space thus allowing the frame identifier to
refine and complete the set of candidates. Our approach achieves
state-of-the-art performance on FrameNet 1.5 and 1.7, demonstrating its
robustness in scenarios where only raw text is provided. Furthermore, we
leverage the structured representation obtained through this method as a proxy
to enhance generalization across lexical variations in the task of translating
natural language questions into SPARQL queries.
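Stage (2) of the pipeline above, retrieval of candidate frames, can be sketched as cosine-similarity ranking over precomputed frame embeddings (an assumption about the retrieval component; the embeddings below are toy vectors):

```python
import numpy as np

def retrieve_candidate_frames(text_emb, frame_embs, top_k=3):
    """Rank frame embeddings by cosine similarity to the input text
    embedding and keep the top-k as candidates for identification."""
    a = text_emb / np.linalg.norm(text_emb)
    b = frame_embs / np.linalg.norm(frame_embs, axis=1, keepdims=True)
    sims = b @ a                       # cosine similarity per frame
    return np.argsort(sims)[::-1][:top_k]

frame_embs = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7], [-1.0, 0.0]])
query = np.array([1.0, 0.1])
candidates = retrieve_candidate_frames(query, frame_embs, top_k=2)
```

Narrowing the search to a handful of candidates is what lets the downstream identifier refine the set cheaply instead of scoring every frame in the inventory.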
|
2502.12213
|
Spatiotemporal-aware Trend-Seasonality Decomposition Network for Traffic
Flow Forecasting
|
cs.LG cs.AI
|
Traffic prediction is critical for optimizing travel scheduling and enhancing
public safety, yet the complex spatial and temporal dynamics within traffic
data present significant challenges for accurate forecasting. In this paper, we
introduce a novel model, the Spatiotemporal-aware Trend-Seasonality
Decomposition Network (STDN). This model begins by constructing a dynamic graph
structure to represent traffic flow and incorporates novel spatio-temporal
embeddings to jointly capture global traffic dynamics. The representations
learned are further refined by a specially designed trend-seasonality
decomposition module, which disentangles the trend-cyclical component and
seasonal component for each traffic node at different times within the graph.
These components are subsequently processed through an encoder-decoder network
to generate the final predictions. Extensive experiments conducted on
real-world traffic datasets demonstrate that STDN achieves superior performance
at remarkably low computational cost. Furthermore, we have released a new traffic
dataset named JiNan, which features unique inner-city dynamics, thereby
enriching the scenario comprehensiveness in traffic prediction evaluation.
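As a point of reference for the trend-seasonality decomposition module, a classical moving-average decomposition looks like this (STDN learns its decomposition per node and time; this classical version is only a stand-in):

```python
import numpy as np

def trend_seasonality_decompose(series, period):
    """Classical decomposition: trend via a moving average, seasonality
    via per-phase means of the detrended series."""
    kernel = np.ones(period) / period
    trend = np.convolve(series, kernel, mode="same")
    detrended = series - trend
    phase_means = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(phase_means, len(series) // period + 1)[: len(series)]
    return trend, seasonal

# Toy traffic-like signal: linear growth plus a period-4 cycle.
series = np.arange(24, dtype=float) + np.tile([0.0, 3.0, 0.0, -3.0], 6)
trend, seasonal = trend_seasonality_decompose(series, period=4)
```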
|
2502.12214
|
Zero Token-Driven Deep Thinking in LLMs: Unlocking the Full Potential of
Existing Parameters via Cyclic Refinement
|
cs.CL cs.AI
|
Resource limitations often constrain the parameter counts of Large Language
Models (LLMs), hindering their performance. While existing methods employ
parameter sharing to reuse the same parameter set under fixed budgets, such
approaches typically force each layer to assume multiple roles with a
predetermined number of iterations, restricting efficiency and adaptability. In
this work, we propose the Zero Token Transformer (ZTT), which features a
head-tail decoupled parameter cycling method. We disentangle the first (head)
and last (tail) layers from parameter cycling and iteratively refine only the
intermediate layers. Furthermore, we introduce a Zero-Token Mechanism, an
internal architectural component rather than an input token, to guide
layer-specific computation. At each cycle, the model retrieves a zero token
(with trainable key values) from a Zero-Token Pool, integrating it alongside
regular tokens in the attention mechanism. The corresponding attention scores
not only reflect each layer's computational importance but also enable dynamic
early exits without sacrificing overall model accuracy. Our approach achieves
superior performance under tight parameter budgets, effectively reduces
computational overhead via early exits, and can be readily applied to fine-tune
existing pre-trained models for enhanced efficiency and adaptability.
|
2502.12215
|
Revisiting the Test-Time Scaling of o1-like Models: Do they Truly
Possess Test-Time Scaling Capabilities?
|
cs.LG cs.AI cs.CL
|
The advent of test-time scaling in large language models (LLMs), exemplified
by OpenAI's o1 series, has advanced reasoning capabilities by scaling
computational resource allocation during inference. While successors like QwQ,
Deepseek-R1 (R1) and LIMO replicate these advancements, whether these models
truly possess test-time scaling capabilities remains underexplored. This study
found that longer CoTs of these o1-like models do not consistently enhance
accuracy; in fact, correct solutions are often shorter than incorrect ones for
the same questions. Further investigation shows this phenomenon is closely
related to models' self-revision capabilities - longer CoTs contain more
self-revisions, which often lead to performance degradation. We then compare
sequential and parallel scaling strategies on QwQ, R1 and LIMO, finding that
parallel scaling achieves better coverage and scalability. Based on these
insights, we propose Shortest Majority Vote, a method that combines parallel
scaling strategies with CoT length characteristics, significantly improving
models' test-time scalability compared to conventional majority voting
approaches.
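One plausible reading of Shortest Majority Vote is a majority vote whose ties are broken toward answers with shorter chains of thought (the paper's exact weighting may differ; the sample data is hypothetical):

```python
from collections import defaultdict

def shortest_majority_vote(samples):
    """samples: list of (answer, cot_length) pairs from parallel runs.
    Pick the most-voted answer, preferring shorter average CoT on ties."""
    groups = defaultdict(list)
    for answer, length in samples:
        groups[answer].append(length)
    # sort key: more votes first, then shorter mean CoT length
    return max(groups.items(),
               key=lambda kv: (len(kv[1]), -sum(kv[1]) / len(kv[1])))[0]

samples = [("42", 120), ("42", 150), ("17", 500), ("17", 520), ("99", 80)]
```

Here "42" and "17" tie on votes, and the shorter-CoT group wins, reflecting the abstract's observation that correct solutions tend to be shorter.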
|
2502.12216
|
Tactic: Adaptive Sparse Attention with Clustering and Distribution
Fitting for Long-Context LLMs
|
cs.LG cs.AI cs.CL
|
Long-context models are essential for many applications but face
inefficiencies in loading large KV caches during decoding. Prior methods
enforce fixed token budgets for sparse attention, assuming a set number of
tokens can approximate full attention. However, these methods overlook
variations in the importance of attention across heads, layers, and contexts.
To address these limitations, we propose Tactic, a sparsity-adaptive and
calibration-free sparse attention mechanism that dynamically selects tokens
based on their cumulative attention scores rather than a fixed token budget. By
setting a target fraction of total attention scores, Tactic ensures that token
selection naturally adapts to variations in attention sparsity. To efficiently
approximate this selection, Tactic leverages clustering-based sorting and
distribution fitting, allowing it to accurately estimate token importance with
minimal computational overhead. We show that Tactic outperforms existing sparse
attention algorithms, achieving superior accuracy and up to 7.29x decode
attention speedup. This improvement translates to an overall 1.58x end-to-end
inference speedup, making Tactic a practical and effective solution for
long-context LLM inference in accuracy-sensitive applications.
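The budget-free selection rule, keeping tokens until a target fraction of total attention mass is covered, can be sketched as follows (illustrative only; the paper approximates this with clustering-based sorting and distribution fitting rather than a full sort):

```python
import numpy as np

def select_tokens_by_attention_mass(scores, target_fraction=0.9):
    """Pick the smallest set of tokens whose softmax attention weights
    sum to at least `target_fraction` of the total."""
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                    # softmax attention weights
    order = np.argsort(probs)[::-1]         # most important tokens first
    cumulative = np.cumsum(probs[order])
    k = int(np.searchsorted(cumulative, target_fraction) + 1)
    return np.sort(order[:k])               # indices of selected tokens

scores = np.array([4.0, 1.0, 0.5, 3.0, 0.1])
selected = select_tokens_by_attention_mass(scores, 0.9)
```

Note how the number of kept tokens adapts to the sparsity of the distribution: a peaky attention pattern needs far fewer tokens to reach the same mass than a flat one, which is exactly what a fixed token budget cannot express.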
|
2502.12217
|
Optimal Brain Iterative Merging: Mitigating Interference in LLM Merging
|
cs.LG cs.AI cs.CL
|
Large Language Models (LLMs) have demonstrated impressive capabilities, but
their high computational costs pose challenges for customization. Model merging
offers a cost-effective alternative, yet existing methods suffer from
interference among parameters, leading to performance degradation. In this
work, we propose Optimal Brain Iterative Merging (OBIM), a novel method
designed to mitigate both intra-model and inter-model interference. OBIM
consists of two key components: (1) A saliency measurement mechanism that
evaluates parameter importance based on loss changes induced by individual
weight alterations, reducing intra-model interference by preserving only
high-saliency parameters. (2) A mutually exclusive iterative merging framework,
which incrementally integrates models using a binary mask to avoid direct
parameter averaging, thereby mitigating inter-model interference. We validate
OBIM through experiments on both Supervised Fine-Tuned (SFT) models and
post-pretrained checkpoints. The results show that OBIM significantly
outperforms existing merging techniques. Overall, OBIM provides an effective
and practical solution for enhancing LLM merging.
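The mutually exclusive iterative merging can be sketched with binary masks, using weight magnitude as a stand-in for the paper's loss-change saliency (the keep ratio and toy task vectors are hypothetical):

```python
import numpy as np

def iterative_merge(base, deltas, keep_ratio=0.5):
    """Merge task vectors so each parameter position is claimed by at
    most one model, in order of a simple magnitude saliency."""
    merged = base.astype(float).copy()
    claimed = np.zeros(base.size, dtype=bool)
    k = int(keep_ratio * base.size)
    for delta in deltas:
        saliency = np.abs(delta).astype(float)
        saliency[claimed] = -np.inf          # positions already taken
        mask = np.zeros(base.size, dtype=bool)
        mask[np.argsort(saliency)[-k:]] = True
        mask &= ~claimed
        merged[mask] += delta[mask]          # no direct averaging
        claimed |= mask
    return merged

base = np.zeros(4)
deltas = [np.array([1.0, 0.1, 0.0, 0.2]),
          np.array([0.5, 2.0, 0.3, 0.0])]
merged = iterative_merge(base, deltas, keep_ratio=0.5)
```

Because the masks are disjoint, no parameter position mixes contributions from two models, which is the mechanism the abstract credits for avoiding inter-model interference.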
|
2502.12219
|
Towards Efficient Molecular Property Optimization with Graph Energy
Based Models
|
q-bio.BM cs.LG
|
Optimizing chemical properties is a challenging task due to the vastness and
complexity of chemical space. Here, we present a generative energy-based
architecture for implicit chemical property optimization, designed to
efficiently generate molecules that satisfy target properties without explicit
conditional generation. We use Graph Energy Based Models and a training
approach that does not require property labels. We validated our approach on
well-established chemical benchmarks, showing superior results to
state-of-the-art methods and demonstrating robustness and efficiency towards de
novo drug design.
|
2502.12222
|
IMPACTX: Improving Model Performance by Appropriately predicting CorrecT
eXplanations
|
cs.LG cs.AI
|
Research on eXplainable Artificial Intelligence (XAI) predominantly
concentrates on providing explanations for AI model decisions, especially those
of Deep Learning (DL) models. However, there is a growing interest in using XAI
techniques to automatically improve the performance of the AI systems
themselves.
This paper proposes IMPACTX, a novel approach that leverages XAI as a fully
automated attention mechanism, without requiring external knowledge or human
feedback. Experimental results show that IMPACTX improves performance with
respect to the standalone ML model by integrating, during training, an attention
mechanism based on the outputs of an XAI method. Furthermore, IMPACTX directly
provides proper feature attribution maps for the model's decisions, without
relying on external XAI methods during the inference process.
Our proposal is evaluated using three widely recognized DL models
(EfficientNet-B2, MobileNet, and LeNet-5) along with three standard image
datasets: CIFAR-10, CIFAR-100, and STL-10. The results show that IMPACTX
consistently improves the performance of all the inspected DL models across all
evaluated datasets, and it directly provides appropriate explanations for its
responses.
|
2502.12223
|
GLoT: A Novel Gated-Logarithmic Transformer for Efficient Sign Language
Translation
|
cs.CL cs.CV
|
Machine Translation has played a critical role in reducing language barriers,
but its adaptation for Sign Language Machine Translation (SLMT) has been less
explored. Existing works on SLMT mostly use the Transformer neural network
which exhibits low performance due to the dynamic nature of the sign language.
In this paper, we propose a novel Gated-Logarithmic Transformer (GLoT) that
captures the long-term temporal dependencies of the sign language as a
time-series data. We perform a comprehensive evaluation of GLoT with the
transformer and transformer-fusion models as a baseline, for
Sign-to-Gloss-to-Text translation. Our results demonstrate that GLoT
consistently outperforms the other models across all metrics. These findings
underscore its potential to address the communication challenges faced by the
Deaf and Hard of Hearing community.
|
2502.12224
|
Accurate Expert Predictions in MoE Inference via Cross-Layer Gate
|
cs.AI cs.LG
|
Large Language Models (LLMs) have demonstrated impressive performance across
various tasks, and their application in edge scenarios has attracted
significant attention. However, sparse-activated Mixture-of-Experts (MoE)
models, which are well suited for edge scenarios, have received relatively
little attention due to their high memory demands. Offload-based methods have
been proposed to address this challenge, but they face difficulties with expert
prediction. Inaccurate expert predictions can result in prolonged inference
delays. To promote the application of MoE models in edge scenarios, we propose
Fate, an offloading system designed for MoE models to enable efficient
inference in resource-constrained environments. The key insight behind Fate is
that gate inputs from adjacent layers can be effectively used for expert
prefetching, achieving high prediction accuracy without additional GPU
overhead. Furthermore, Fate employs a shallow-favoring expert caching strategy
that increases the expert hit rate to 99\%. Additionally, Fate integrates
tailored quantization strategies for cache optimization and IO efficiency.
Experimental results show that, compared to Load on Demand and Expert
Activation Path-based methods, Fate achieves up to 4.5x and 1.9x speedups in
prefill speed and up to 4.1x and 2.2x speedups in decoding speed, respectively,
while maintaining inference quality. Moreover, Fate's performance improvements
are scalable across different memory budgets.
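The cross-layer insight, reusing layer i's gate input with layer i+1's gate weights to prefetch experts, can be sketched as follows (a simplified reading; the caching and quantization layers of the system are omitted, and all shapes are hypothetical):

```python
import numpy as np

def predict_next_layer_experts(hidden, next_gate_weight, top_k=2):
    """Approximate the next layer's routing by feeding the current
    layer's hidden states through the next layer's gate, then prefetch
    the union of every token's top-k experts."""
    logits = hidden @ next_gate_weight            # (tokens, num_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]
    return sorted(set(top.ravel().tolist()))      # expert ids to prefetch

rng = np.random.default_rng(1)
hidden = rng.normal(size=(4, 16))    # 4 tokens, hidden dim 16
gate_w = rng.normal(size=(16, 8))    # 8 experts in the next layer
prefetch = predict_next_layer_experts(hidden, gate_w, top_k=2)
```

The appeal of this scheme is that the prediction reuses activations already computed for the current layer, so prefetching overlaps with compute at no extra GPU cost.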
|
2502.12225
|
Subjective Logic Encodings
|
cs.LG cs.AI
|
Many existing approaches for learning from labeled data assume the existence
of gold-standard labels. According to these approaches, inter-annotator
disagreement is seen as noise to be removed, either through refinement of
annotation guidelines, label adjudication, or label filtering. However,
annotator disagreement can rarely be totally eradicated, especially on more
subjective tasks such as sentiment analysis or hate speech detection where
disagreement is natural. Therefore, a new approach to learning from labeled
data, called data perspectivism, seeks to leverage inter-annotator disagreement
to learn models that stay true to the inherent uncertainty of the task by
treating annotations as opinions of the annotators, rather than gold-standard
facts. Despite this conceptual grounding, existing methods under data
perspectivism are limited to using disagreement as the sole source of
annotation uncertainty. To expand the possibilities of data perspectivism, we
introduce Subjective Logic Encodings (SLEs), a flexible framework for
constructing classification targets that explicitly encodes annotations as
opinions of the annotators. Based on Subjective Logic Theory, SLEs encode
labels as Dirichlet distributions and provide principled methods for encoding
and aggregating various types of annotation uncertainty -- annotator
confidence, reliability, and disagreement -- into the targets. We show that
SLEs are a generalization of other types of label encodings as well as how to
estimate models to predict SLEs using a distribution matching objective.
|
2502.12226
|
On Creating a Causally Grounded Usable Rating Method for Assessing the
Robustness of Foundation Models Supporting Time Series
|
cs.LG cs.AI
|
Foundation Models (FMs) have improved time series forecasting in various
sectors, such as finance, but their vulnerability to input disturbances can
hinder their adoption by stakeholders, such as investors and analysts. To
address this, we propose a causally grounded rating framework to study the
robustness of Foundational Models for Time Series (FMTS) with respect to input
perturbations. We apply our approach to the stock price prediction problem, a
well-studied task with easily accessible public data, evaluating six
state-of-the-art (some multi-modal) FMTS across six prominent stocks spanning
three industries. The ratings proposed by our framework effectively assess the
robustness of FMTS and also offer actionable insights for model selection and
deployment. Within the scope of our study, we find that (1) multi-modal FMTS
exhibit better robustness and accuracy compared to their uni-modal versions
and, (2) FMTS pre-trained on time series forecasting task exhibit better
robustness and forecasting accuracy compared to general-purpose FMTS
pre-trained across diverse settings. Further, to validate our framework's
usability, we conduct a user study showcasing FMTS prediction errors along with
our computed ratings. The study confirmed that our ratings made it easier for
users to compare the robustness of different systems.
|
2502.12227
|
Identifying the Best Transition Law
|
cs.LG cs.AI
|
Motivated by recursive learning in Markov Decision Processes, this paper
studies best-arm identification in bandit problems where each arm's reward is
drawn from a multinomial distribution with a known support. We compare the
performance reached by strategies, notably LUCB, without and with use of this
knowledge. In the first case, we use classical non-parametric
approaches for the confidence intervals. In the second case, where a
probability distribution is to be estimated, we first use classical deviation
bounds (Hoeffding and Bernstein) on each dimension independently, and then the
Empirical Likelihood method (EL-LUCB) on the joint probability vector. The
effectiveness of these methods is demonstrated through simulations on scenarios
with varying levels of structural complexity.
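The per-dimension deviation bounds mentioned in the second case can be illustrated with Hoeffding confidence intervals around the empirical multinomial probabilities. This is a minimal sketch under standard assumptions; the helper name is ours, not from the paper.

```python
import numpy as np

def hoeffding_ci(counts, delta):
    """Per-dimension Hoeffding confidence intervals for a multinomial
    probability vector: each empirical frequency p_hat[i] lies within
    sqrt(log(2/delta) / (2n)) of the true p[i] with probability at
    least 1 - delta, independently per dimension."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    p_hat = counts / n
    radius = np.sqrt(np.log(2.0 / delta) / (2.0 * n))
    return np.clip(p_hat - radius, 0.0, 1.0), np.clip(p_hat + radius, 0.0, 1.0)

# 100 draws from a 3-outcome distribution.
lo, hi = hoeffding_ci([30, 50, 20], delta=0.05)
```

Methods such as Empirical Likelihood instead build a joint confidence region over the whole probability vector, which is tighter than this per-dimension box.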
|
2502.12231
|
PUGS: Zero-shot Physical Understanding with Gaussian Splatting
|
cs.CV
|
Current robotic systems understand the categories and poses of objects well,
but understanding physical properties like mass, friction, and hardness in the
wild remains challenging. We propose a new method that reconstructs 3D
objects using the Gaussian splatting representation and predicts various
physical properties in a zero-shot manner. We propose two techniques during the
reconstruction phase: a geometry-aware regularization loss function to improve
the shape quality and a region-aware feature contrastive loss function to
promote region affinity. Two other new techniques are designed during
inference: a feature-based property propagation module and a volume integration
module tailored for the Gaussian representation. We name our framework PUGS,
for zero-shot Physical Understanding with Gaussian Splatting. PUGS
achieves new state-of-the-art results on the standard benchmark of ABO-500 mass
prediction. We provide extensive quantitative ablations and qualitative
visualization to demonstrate the mechanism of our designs. We show the proposed
methodology can help address challenging real-world grasping tasks. Our codes,
data, and models are available at https://github.com/EverNorif/PUGS
|
2502.12243
|
On the Learnability of Knot Invariants: Representation, Predictability,
and Neural Similarity
|
math.GT cs.LG
|
We analyze different aspects of neural network predictions of knot
invariants. First, we investigate the impact of different knot representations
on the prediction of invariants and find that braid representations generally
work best. Second, we study which knot invariants are easy to learn: invariants
derived from hyperbolic geometry and knot diagrams are very easy to learn,
while invariants derived from topological or homological data are harder. The
Arf invariant could not be learned from any representation. Third, we propose a
cosine similarity score based on gradient
saliency vectors, and a joint misclassification score to uncover similarities
in neural networks trained to predict related topological invariants.
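The gradient-saliency cosine score can be sketched as follows, using finite-difference gradients on toy stand-in "models". This is an illustrative reconstruction, not the authors' code; all names are ours.

```python
import numpy as np

def saliency(f, x, eps=1e-5):
    """Gradient saliency of a scalar model f at input x,
    approximated by central finite differences."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2.0 * eps)
    return g

def cosine_score(f, g, inputs):
    """Mean cosine similarity between two models' saliency vectors
    over a set of inputs: near 1 when the models attend to the same
    input features."""
    scores = []
    for x in inputs:
        a, b = saliency(f, x), saliency(g, x)
        scores.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return float(np.mean(scores))

# Toy stand-ins for networks predicting related vs. unrelated invariants.
f = lambda x: 2.0 * x[0] + x[1]
g = lambda x: 4.0 * x[0] + 2.0 * x[1] + 1.0   # same features, rescaled
h = lambda x: -x[0] + 5.0 * x[2]              # mostly different features
rng = np.random.default_rng(0)
xs = [rng.normal(size=3) for _ in range(4)]
```

For real networks one would use backpropagated gradients rather than finite differences, but the score itself is unchanged.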
|
2502.12257
|
InfoQuest: Evaluating Multi-Turn Dialogue Agents for Open-Ended
Conversations with Hidden Context
|
cs.CL cs.LG
|
While large language models excel at following explicit instructions, they
often struggle with ambiguous or incomplete user requests, defaulting to
verbose, generic responses rather than seeking clarification. We introduce
InfoQuest, a multi-turn chat benchmark designed to evaluate how dialogue agents
handle hidden context in open-ended user requests. The benchmark presents
intentionally ambiguous scenarios that require models to engage in
information-seeking dialogue through clarifying questions before providing
appropriate responses. Our evaluation of both open and closed-source models
reveals that while proprietary models generally perform better, all current
assistants struggle with effectively gathering critical information, often
requiring multiple turns to infer user intent and frequently defaulting to
generic responses without proper clarification. We provide a systematic
methodology for generating diverse scenarios and evaluating models'
information-seeking capabilities, offering insights into the current
limitations of language models in handling ambiguous requests through
multi-turn interactions.
|
2502.12258
|
SmokeNet: Efficient Smoke Segmentation Leveraging Multiscale
Convolutions and Multiview Attention Mechanisms
|
cs.CV
|
Efficient segmentation of smoke plumes is crucial for environmental
monitoring and industrial safety, enabling the detection and mitigation of
harmful emissions from activities like quarry blasts and wildfires. Accurate
segmentation facilitates environmental impact assessments, timely
interventions, and compliance with safety standards. However, existing models
often face high computational demands and limited adaptability to diverse smoke
appearances, restricting their deployment in resource-constrained environments.
To address these issues, we introduce SmokeNet, a novel deep learning
architecture that leverages multiscale convolutions and multiview linear
attention mechanisms combined with layer-specific loss functions to handle the
complex dynamics of diverse smoke plumes, ensuring efficient and accurate
segmentation across varied environments. Additionally, we evaluate SmokeNet's
performance and versatility using four datasets, including our quarry blast
smoke dataset made available to the community. The results demonstrate that
SmokeNet maintains a favorable balance between computational efficiency and
segmentation accuracy, making it suitable for deployment in environmental
monitoring and safety management systems. By contributing a new dataset and
offering an efficient segmentation model, SmokeNet advances smoke segmentation
capabilities in diverse and challenging environments.
|
2502.12264
|
Multi-dimensional Test Design
|
econ.TH cs.CY cs.GT cs.LG
|
How should one jointly design tests and the arrangement of agencies to
administer these tests (testing procedure)? To answer this question, we analyze
a model where a principal must use multiple tests to screen an agent with a
multi-dimensional type, knowing that the agent can change his type at a cost.
We identify a new tradeoff between setting difficult tests and using a
difficult testing procedure. We compare two settings: (1) the agent only
misrepresents his type (manipulation) and (2) the agent improves his actual
type (investment). Examples include interviews, regulations, and data
classification. We show that in the manipulation setting, stringent tests
combined with an easy procedure, i.e., offering tests sequentially in a fixed
order, is optimal. In contrast, in the investment setting, non-stringent tests
with a difficult procedure, i.e., offering tests simultaneously, is optimal;
however, under mild conditions offering them sequentially in a random order may
be as good. Our results suggest that whether the agent manipulates or invests
in his type determines which arrangement of agencies is optimal.
|
2502.12267
|
NeuroStrata: Harnessing Neurosymbolic Paradigms for Improved Design,
Testability, and Verifiability of Autonomous CPS
|
cs.SE cs.AI
|
Autonomous cyber-physical systems (CPSs) leverage AI for perception,
planning, and control but face trust and safety certification challenges due to
inherent uncertainties. The neurosymbolic paradigm replaces stochastic layers
with interpretable symbolic AI, enabling determinism. While promising,
challenges like multisensor fusion, adaptability, and verification remain. This
paper introduces NeuroStrata, a neurosymbolic framework to enhance the testing
and verification of autonomous CPS. We outline its key components, present
early results, and detail future plans.
|
2502.12272
|
Learning to Reason at the Frontier of Learnability
|
cs.LG cs.AI cs.CL
|
Reinforcement learning is now widely adopted as the final stage of large
language model training, especially for reasoning-style tasks such as maths
problems. Typically, models attempt each question many times during a single
training step and attempt to learn from their successes and failures. However,
we demonstrate that throughout training with two popular algorithms (PPO and
VinePPO) on two widely used datasets, many questions are either solved by all
attempts - meaning they are already learned - or by none - providing no
meaningful training signal. To address this, we adapt a method from the
reinforcement learning literature - sampling for learnability - and apply it to
the reinforcement learning stage of LLM training. Our curriculum prioritises
questions with high variance of success, i.e. those where the agent sometimes
succeeds, but not always. Our findings demonstrate that this curriculum
consistently boosts training performance across multiple algorithms and
datasets, paving the way for more efficient and effective reinforcement
learning in LLMs.
|
2502.12275
|
Integrating Expert Knowledge into Logical Programs via LLMs
|
cs.AI cs.CL cs.MA
|
This paper introduces ExKLoP, a novel framework designed to evaluate how
effectively Large Language Models (LLMs) integrate expert knowledge into
logical reasoning systems. This capability is especially valuable in
engineering, where expert knowledge-such as manufacturer-recommended
operational ranges-can be directly embedded into automated monitoring systems.
By mirroring expert verification steps, tasks like range checking and
constraint validation help ensure system safety and reliability. Our approach
systematically evaluates LLM-generated logical rules, assessing both syntactic
fluency and logical correctness in these critical validation tasks. We also
explore the models' capacity for self-correction via an iterative feedback loop
based on code execution outcomes. ExKLoP presents an extensible dataset
comprising 130 engineering premises, 950 prompts, and corresponding validation
points. It enables comprehensive benchmarking while allowing control over task
complexity and scalability of experiments. We leverage the synthetic data
creation methodology to conduct extensive empirical evaluation on a diverse set
of LLMs including Llama3, Gemma, Mixtral, Mistral, and Qwen. Results reveal
that while models generate code that is nearly always syntactically correct, they
frequently exhibit logical errors in translating expert knowledge. Furthermore,
iterative self-correction yields only marginal improvements (up to 3%).
Overall, ExKLoP serves as a robust evaluation platform that streamlines the
selection of effective models for self-correcting systems while clearly
delineating the types of errors encountered. The complete implementation, along
with all relevant data, is available at GitHub.
|
2502.12276
|
Story Grammar Semantic Matching for Literary Study
|
cs.CL
|
In Natural Language Processing (NLP), semantic matching algorithms have
traditionally relied on the feature of word co-occurrence to measure semantic
similarity. While this feature approach has proven valuable in many contexts,
its simplistic nature limits its analytical and explanatory power when used to
understand literary texts. To address these limitations, we propose a more
transparent approach that makes use of story structure and related elements.
Using a BERT language model pipeline, we label prose and epic poetry with story
element labels and perform semantic matching by only considering these labels
as features. This new method, Story Grammar Semantic Matching, guides literary
scholars to allusions and other semantic similarities across texts in a way
that allows for characterizing patterns and literary technique.
|
2502.12277
|
Healthcare cost prediction for heterogeneous patient profiles using deep
learning models with administrative claims data
|
cs.LG cs.CY
|
Problem: How can we design patient cost prediction models that effectively
address the challenges of heterogeneity in administrative claims (AC) data to
ensure accurate, fair, and generalizable predictions, especially for high-need
(HN) patients with complex chronic conditions?
Relevance: Accurate and equitable patient cost predictions are vital for
developing health management policies and optimizing resource allocation, which
can lead to significant cost savings for healthcare payers, including
government agencies and private insurers. Addressing disparities in prediction
outcomes for HN patients ensures better economic and clinical decision-making,
benefiting both patients and payers.
Methodology: This study is grounded in socio-technical considerations that
emphasize the interplay between technical systems (e.g., deep learning models)
and humanistic outcomes (e.g., fairness in healthcare decisions). It
incorporates representation learning and entropy measurement to address
heterogeneity and complexity in data and patient profiles, particularly for HN
patients. We propose a channel-wise deep learning framework that mitigates data
heterogeneity by segmenting AC data into separate channels based on types of
codes (e.g., diagnosis, procedures) and costs. This approach is paired with a
flexible evaluation design that uses multi-channel entropy measurement to
assess patient heterogeneity.
Results: The proposed channel-wise models reduce prediction errors by 23%
compared to single-channel models, leading to 16.4% and 19.3% reductions in
overpayments and underpayments, respectively. Notably, the reduction in
prediction bias is significantly higher for HN patients, demonstrating
effectiveness in handling heterogeneity and complexity in data and patient
profiles. This demonstrates the potential for applying channel-wise modeling to
domains with similar heterogeneity challenges.
|
2502.12278
|
Towards Practical First-Order Model Counting
|
cs.LO cs.AI
|
First-order model counting (FOMC) is the problem of counting the number of
models of a sentence in first-order logic. Since lifted inference techniques
rely on reductions to variants of FOMC, the design of scalable methods for FOMC
has attracted attention from both theoreticians and practitioners over the past
decade. Recently, a new approach based on first-order knowledge compilation was
proposed. Instead of simply providing the final count, this approach, called
Crane, generates definitions of (possibly recursive) functions that can be
evaluated with different arguments to compute the model count for any domain
size. However, this approach is not fully automated, as it requires manual
evaluation of the constructed functions. The primary contribution of this work
is a fully automated compilation algorithm, called Gantry, which transforms the
function definitions into C++ code equipped with arbitrary-precision
arithmetic. These additions allow the new FOMC algorithm to scale to domain
sizes over 500,000 times larger than the current state of the art, as
demonstrated through experimental results.
|
2502.12280
|
Connecting Large Language Model Agent to High Performance Computing
Resource
|
cs.DC cs.AI
|
The Large Language Model agent workflow enables the LLM to invoke tool
functions to improve performance on specific scientific-domain questions.
Tackling large-scale scientific research additionally requires access to
computing resources and a parallel computing setup. In this work, we integrated
Parsl into the LangChain/LangGraph tool-call setup to bridge the gap between
the LLM agent and the computing resources. Two tool-call implementations were
set up and tested, both on a local workstation and in an HPC environment on
Polaris/ALCF. The first implementation, a Parsl-enabled LangChain tool node,
queues the tool functions concurrently to the Parsl workers for parallel
execution. The second configuration converts the tool functions into Parsl
ensemble functions and is more suitable for large tasks in a supercomputer
environment. The LLM agent workflow was prompted to run molecular dynamics
simulations with different protein structures and simulation conditions. The
results showed that the LLM agent tools were managed and executed concurrently
by Parsl on the available computing resources.
|
2502.12286
|
Rational Capability in Concurrent Games
|
cs.LO cs.MA
|
We extend concurrent game structures (CGSs) with a simple notion of
preference over computations and define a minimal notion of rationality for
agents based on the concept of dominance. We use this notion to interpret CL
and ATL languages that extend the basic CL and ATL languages with modalities
for rational capability, namely, a coalition's capability to rationally enforce
a given property. For each of these languages, we provide results about the
complexity of satisfiability checking and model checking as well as about
axiomatization.
|
2502.12289
|
Evaluating Step-by-step Reasoning Traces: A Survey
|
cs.CL
|
Step-by-step reasoning is widely used to enhance the reasoning ability of
large language models (LLMs) in complex problems. Evaluating the quality of
reasoning traces is crucial for understanding and improving LLM reasoning.
However, the evaluation criteria remain highly unstandardized, leading to
fragmented efforts in developing metrics and meta-evaluation benchmarks. To
address this gap, this survey provides a comprehensive overview of step-by-step
reasoning evaluation, proposing a taxonomy of evaluation criteria with four
top-level categories (groundedness, validity, coherence, and utility). We then
categorize metrics based on their implementations, survey which metrics are
used for assessing each criterion, and explore whether evaluator models can
transfer across different criteria. Finally, we identify key directions for
future research.
|
2502.12292
|
Independence Tests for Language Models
|
cs.LG cs.CL
|
We consider the following problem: given the weights of two models, can we
test whether they were trained independently -- i.e., from independent random
initializations? We consider two settings: constrained and unconstrained. In
the constrained setting, we make assumptions about model architecture and
training and propose a family of statistical tests that yield exact p-values
with respect to the null hypothesis that the models are trained from
independent random initializations. These p-values are valid regardless of the
composition of either model's training data; we compute them by simulating
exchangeable copies of each model under our assumptions and comparing various
similarity measures of weights and activations between the original two models
versus these copies. We report the p-values from these tests on pairs of 21
open-weight models (210 total pairs) and correctly identify all pairs of
non-independent models. Our tests remain effective even if one model was
fine-tuned for many tokens. In the unconstrained setting, where we make no
assumptions about training procedures, allow changes to model architecture, and
allow for adversarial evasion attacks, the previous tests no longer work.
Instead, we
propose a new test which matches hidden activations between two models, and
which is robust to adversarial transformations and to changes in model
architecture. The test can also do localized testing: identifying specific
non-independent components of models. Though this test no longer yields exact
p-values, empirically we find that it behaves like one and reliably identifies
non-independent models. Notably, we can use the test to identify specific parts
of one model that are derived from another (e.g., how Llama 3.1-8B was pruned
to initialize Llama 3.2-3B, or shared layers between Mistral-7B and
StripedHyena-7B), and it is even robust to retraining individual layers of
either model from scratch.
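One way the constrained setting's "exchangeable copies" might look in miniature: under independent initialization, permuting the hidden units (rows) of one first-layer weight matrix is distribution-preserving, so ranking an observed similarity against permuted copies yields a valid p-value. The sketch below is our own simplification, not the authors' actual test.

```python
import numpy as np

def permutation_p_value(w_a, w_b, n_copies=999, seed=0):
    """Rank an observed weight similarity against exchangeable
    copies obtained by permuting hidden units (rows) of one model.
    Under independent random initialization, row permutation is
    distribution-preserving, so the resulting p-value is valid.
    Statistic: mean absolute row-wise correlation."""
    rng = np.random.default_rng(seed)

    def stat(a, b):
        an = (a - a.mean(1, keepdims=True)) / a.std(1, keepdims=True)
        bn = (b - b.mean(1, keepdims=True)) / b.std(1, keepdims=True)
        return np.abs((an * bn).mean(1)).mean()

    observed = stat(w_a, w_b)
    copies = [stat(w_a, w_b[rng.permutation(len(w_b))])
              for _ in range(n_copies)]
    # Exact permutation p-value, counting the observed statistic itself.
    return (1 + sum(c >= observed for c in copies)) / (1 + n_copies)

rng = np.random.default_rng(1)
w1 = rng.normal(size=(16, 8))
w2 = w1 + 0.01 * rng.normal(size=(16, 8))         # near-copy of w1
p_dep = permutation_p_value(w1, w2)                # tiny: not independent
p_ind = permutation_p_value(w1, rng.normal(size=(16, 8)))
```

A near-copy of the weights produces a p-value at the permutation-test floor, while a freshly sampled matrix does not.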
|
2502.12293
|
Data-Efficient Limited-Angle CT Using Deep Priors and Regularization
|
cs.CV
|
Reconstructing an image from its Radon transform is a fundamental computed
tomography (CT) task arising in applications such as X-ray scans. In many
practical scenarios, a full 180-degree scan is not feasible, or there is a
desire to reduce radiation exposure. In these limited-angle settings, the
problem becomes ill-posed, and methods designed for full-view data often leave
significant artifacts. We propose a very low-data approach to reconstruct the
original image from its Radon transform under severe angle limitations. Because
the inverse problem is ill-posed, we combine multiple regularization methods,
including Total Variation, a sinogram filter, Deep Image Prior, and a
patch-level autoencoder. We use a differentiable implementation of the Radon
transform, which allows us to use gradient-based techniques to solve the
inverse problem. Our method is evaluated on a dataset from the Helsinki
Tomography Challenge 2022, where the goal is to reconstruct a binary disk from
its limited-angle sinogram. We only use a total of 12 data points--eight for
learning a prior and four for hyperparameter selection--and achieve results
comparable to the best synthetic data-driven approaches.
|
2502.12295
|
On the Computational Tractability of the (Many) Shapley Values
|
cs.LG cs.CC cs.LO
|
Recent studies have examined the computational complexity of computing
Shapley additive explanations (also known as SHAP) across various models and
distributions, revealing their tractability or intractability in different
settings. However, these studies primarily focused on a specific variant called
Conditional SHAP, though many other variants exist and address different
limitations. In this work, we analyze the complexity of computing a much
broader range of such variants, including Conditional, Interventional, and
Baseline SHAP, while exploring both local and global computations. We show that
both local and global Interventional and Baseline SHAP can be computed in
polynomial time for various ML models under Hidden Markov Model distributions,
extending popular algorithms such as TreeSHAP beyond empirical distributions.
On the downside, we prove intractability results for these variants over a wide
range of neural networks and tree ensembles. We believe that our results
emphasize the intricate diversity of computing Shapley values, demonstrating
how their complexity is substantially shaped by the specific SHAP variant, the
model type, and the distribution.
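For intuition, Baseline SHAP for a small model can be computed exactly by enumerating feature subsets, with absent features fixed to the baseline. This is a textbook-style sketch of the variant's definition, not tied to the paper's algorithms.

```python
from itertools import combinations
from math import factorial

def baseline_shap(f, x, baseline):
    """Exact Baseline SHAP by subset enumeration: features outside
    the coalition S are fixed to their baseline values."""
    n = len(x)
    phi = [0.0] * n
    def value(S):
        return f([x[i] if i in S else baseline[i] for i in range(n)])
    for i in range(n):
        rest = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(rest, k):
                S = set(S)
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(S | {i}) - value(S))
    return phi

# For a linear model the attributions are w_i * (x_i - baseline_i),
# and they sum to f(x) - f(baseline) (the efficiency property).
f = lambda z: 2 * z[0] + 3 * z[1]
phi = baseline_shap(f, x=[1.0, 1.0], baseline=[0.0, 0.0])
```

The enumeration is exponential in the number of features, which is exactly why the polynomial-time results for structured models and distributions matter.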
|
2502.12297
|
Duo Streamers: A Streaming Gesture Recognition Framework
|
cs.CV
|
Gesture recognition in resource-constrained scenarios faces significant
challenges in achieving high accuracy and low latency. The streaming gesture
recognition framework, Duo Streamers, proposed in this paper, addresses these
challenges through a three-stage sparse recognition mechanism, an RNN-lite
model with an external hidden state, and specialized training and
post-processing pipelines, thereby making innovative progress in real-time
performance and lightweight design. Experimental results show that Duo
Streamers matches mainstream methods in accuracy metrics, while reducing the
real-time factor by approximately 92.3%, i.e., delivering a nearly 13-fold
speedup. In addition, the framework shrinks parameter counts to 1/38 (idle
state) and 1/9 (busy state) compared to mainstream models. In summary, Duo
Streamers not only offers an efficient and practical solution for streaming
gesture recognition in resource-constrained devices but also lays a solid
foundation for extended applications in multimodal and diverse scenarios.
|
2502.12298
|
Symmetric Rank-One Quasi-Newton Methods for Deep Learning Using Cubic
Regularization
|
math.OC cs.IT cs.LG cs.NA math.IT math.NA stat.ML
|
Stochastic gradient descent and other first-order variants, such as Adam and
AdaGrad, are commonly used in the field of deep learning due to their
computational efficiency and low-storage memory requirements. However, these
methods do not exploit curvature information. Consequently, iterates can
converge to saddle points or poor local minima. On the other hand, Quasi-Newton
methods compute Hessian approximations which exploit this information with a
comparable computational budget. Quasi-Newton methods re-use previously
computed iterates and gradients to compute a low-rank structured update. The
most widely used quasi-Newton update is the L-BFGS, which guarantees a positive
semi-definite Hessian approximation, making it suitable in a line search
setting. However, the loss functions in DNNs are non-convex, where the Hessian
is potentially non-positive definite. In this paper, we propose using a
limited-memory symmetric rank-one quasi-Newton approach which allows for
indefinite Hessian approximations, enabling directions of negative curvature to
be exploited. Furthermore, we use a modified adaptive regularized cubics
approach, which generates a sequence of cubic subproblems that have closed-form
solutions with suitable regularization choices. We investigate the performance
of our proposed method on autoencoders and feed-forward neural network models
and compare our approach to state-of-the-art first-order adaptive stochastic
methods as well as other quasi-Newton methods.
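The symmetric rank-one update itself has a simple closed form; below is a full-memory numpy sketch (the limited-memory variant and the cubic-regularization machinery used in the paper are omitted).

```python
import numpy as np

def sr1_update(B, s, y, tol=1e-8):
    """Symmetric rank-one quasi-Newton update of the Hessian
    approximation B, with s = x_{k+1} - x_k and y = g_{k+1} - g_k.
    Unlike BFGS, the result may be indefinite, which is what lets
    directions of negative curvature be exploited. The update is
    skipped when the denominator is near zero (standard safeguard)."""
    r = y - B @ s
    denom = r @ s
    if abs(denom) < tol * np.linalg.norm(r) * np.linalg.norm(s):
        return B
    return B + np.outer(r, r) / denom

# On a quadratic with Hessian H, y = H s, and the updated B
# satisfies the secant equation B s = y exactly.
H = np.array([[2.0, 0.5], [0.5, -1.0]])      # indefinite Hessian
s = np.array([1.0, 0.0])
y = H @ s
B = sr1_update(np.eye(2), s, y)
```

The skip rule is the conventional SR1 safeguard; in practice the update is embedded in a trust-region or, as here, a cubic-regularization framework rather than a line search.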
|
2502.12300
|
Per-channel autoregressive linear prediction padding in tiled CNN
processing of 2D spatial data
|
cs.LG cs.CV
|
We present linear prediction as a differentiable padding method. For each
channel, a stochastic autoregressive linear model is fitted to the padding
input by minimizing its noise terms in the least-squares sense. The padding is
formed from the expected values of the autoregressive model given the known
pixels. We trained the convolutional RVSR super-resolution model from scratch
on satellite image data, using different padding methods. Linear prediction
padding slightly reduced the mean square super-resolution error compared to
zero and replication padding, with a moderate increase in time cost. Linear
prediction padding better approximated satellite image data and RVSR feature
map data. With zero padding, RVSR appeared to use more of its capacity to
compensate for the high approximation error. Cropping the network output by a
few pixels reduced both the super-resolution error and the effect of the
padding method on that error; for the studied workload, this favors output
cropping combined with the faster replication or zero padding.
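The padding scheme can be sketched in 1-D: fit AR coefficients to the known samples by least squares, then extend the signal with the model's expected values. This is an illustrative reduction of the per-channel 2-D method; names are ours.

```python
import numpy as np

def lp_pad_right(x, pad, order=3):
    """Pad a 1-D signal on the right by linear prediction: fit an
    AR(order) model x[t] ~ sum_k a[k] * x[t-k] to the known samples
    by least squares, then append the model's expected values
    (noise terms set to zero)."""
    x = np.asarray(x, dtype=float)
    rows = [x[t - order:t][::-1] for t in range(order, len(x))]
    A, b = np.stack(rows), x[order:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    out = list(x)
    for _ in range(pad):
        out.append(float(np.dot(a, out[-1:-order - 1:-1])))
    return np.array(out)

# A ramp satisfies x[t] = 2 x[t-1] - x[t-2], so AR(2) padding
# continues it exactly, unlike zero or replication padding.
padded = lp_pad_right([1, 2, 3, 4, 5, 6], pad=3, order=2)
```

Both the least-squares fit and the recursive extension are differentiable in the input samples, which is what makes the padding usable inside a trained network.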
|
2502.12301
|
SMOL: Professionally translated parallel data for 115 under-represented
languages
|
cs.CL
|
We open-source SMOL (Set of Maximal Overall Leverage), a suite of training
data to unlock translation for low-resource languages (LRLs). SMOL has been
translated into 115 under-resourced languages, including many for which there
exist no previous public resources, for a total of 6.1M translated tokens. SMOL
comprises two sub-datasets, each carefully chosen for maximum impact given its
size: SMOL-Sent, a set of sentences chosen for broad unique token coverage, and
SMOL-Doc, a document-level source focusing on broad topic coverage. They join
the already released GATITOS for a trifecta of paragraph, sentence, and
token-level content. We demonstrate that using SMOL to prompt or fine-tune
Large Language Models yields robust ChrF improvements. In addition to
translation, we provide factuality ratings and rationales for all documents in
SMOL-Doc, yielding the first factuality datasets for most of these languages.
|
2502.12302
|
Chaotic Map based Compression Approach to Classification
|
cs.LG
|
Modern machine learning approaches often prioritize performance at the cost
of increased complexity, computational demands, and reduced interpretability.
This paper introduces a novel framework that challenges this trend by
reinterpreting learning from an information-theoretic perspective, viewing it
as a search for encoding schemes that capture intrinsic data structures through
compact representations. Rather than following the conventional approach of
fitting data to complex models, we propose a fundamentally different method
that maps data to intervals of initial conditions in a dynamical system. Our
GLS (Generalized L\"uroth Series) coding compression classifier employs skew
tent maps - a class of chaotic maps - both for encoding data into initial
conditions and for subsequent recovery. The effectiveness of this simple
framework is noteworthy, with performance closely approaching that of
well-established machine learning methods. On the breast cancer dataset, our
approach achieves 92.98\% accuracy, comparable to Naive Bayes at 94.74\%. While
these results do not exceed state-of-the-art performance, the significance of
our contribution lies not in outperforming existing methods but in
demonstrating that a fundamentally simpler, more interpretable approach can
achieve competitive results.
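The GLS coding idea can be illustrated with back-iteration of a skew tent map: a symbol sequence maps to an interval of initial conditions, and forward iteration recovers the sequence. This is a minimal sketch of the encoding/recovery step only, not the paper's full classifier.

```python
def skew_tent(x, b):
    """Skew tent map on [0, 1] with skew parameter b in (0, 1)."""
    return x / b if x < b else (1.0 - x) / (1.0 - b)

def encode(symbols, b):
    """Back-iterate the two inverse branches of the skew tent map to
    find the interval of initial conditions whose orbit emits the
    given binary symbol sequence."""
    lo, hi = 0.0, 1.0
    for s in reversed(symbols):
        if s == 0:                 # inverse of the left branch
            lo, hi = b * lo, b * hi
        else:                      # inverse of the (decreasing) right branch
            lo, hi = 1.0 - (1.0 - b) * hi, 1.0 - (1.0 - b) * lo
    return lo, hi

def decode(x0, n, b):
    """Recover n symbols by forward iteration: emit 0 while x < b."""
    out, x = [], x0
    for _ in range(n):
        out.append(0 if x < b else 1)
        x = skew_tent(x, b)
    return out

lo, hi = encode([0, 1, 1, 0], b=0.4)
assert decode(0.5 * (lo + hi), 4, b=0.4) == [0, 1, 1, 0]
```

The skew parameter b controls the code lengths of the two symbols, which is how the map doubles as a compression scheme.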
|
2502.12303
|
From Gaming to Research: GTA V for Synthetic Data Generation for
Robotics and Navigations
|
cs.CV
|
In computer vision, developing robust algorithms that generalize effectively in
real-world scenarios increasingly requires large-scale datasets collected under
diverse environmental conditions. However,
acquiring such datasets is time-consuming, costly, and sometimes unfeasible. To
address these limitations, the use of synthetic data has gained attention as a
viable alternative, allowing researchers to generate vast amounts of data while
simulating various environmental contexts in a controlled setting. In this
study, we investigate the use of synthetic data in robotics and navigation,
specifically focusing on Simultaneous Localization and Mapping (SLAM) and
Visual Place Recognition (VPR). In particular, we introduce a synthetic dataset
created using the virtual environment of the video game Grand Theft Auto V (GTA
V), along with an algorithm designed to generate a VPR dataset, without human
supervision. Through a series of experiments centered on SLAM and VPR, we
demonstrate that synthetic data derived from GTA V are qualitatively comparable
to real-world data. Furthermore, these synthetic data can complement or even
substitute real-world data in these applications. This study sets the stage for
the creation of large-scale synthetic datasets, offering a cost-effective and
scalable solution for future research and development.
|
2502.12304
|
Warmup Generations: A Task-Agnostic Approach for Guiding
Sequence-to-Sequence Learning with Unsupervised Initial State Generation
|
cs.CL cs.AI
|
Traditional supervised fine-tuning (SFT) strategies for sequence-to-sequence
tasks often train models to directly generate the target output. Recent work
has shown that guiding models with intermediate steps, such as keywords,
outlines, or reasoning chains, can significantly improve performance,
coherence, and interpretability. However, these methods often depend on
predefined intermediate formats and annotated data, limiting their scalability
and generalizability. In this work, we introduce a task-agnostic framework that
enables models to generate intermediate "warmup" sequences. These warmup
sequences, serving as an initial state for subsequent generation, are optimized
to enhance the probability of generating the target sequence without relying on
external supervision or human-designed structures. Drawing inspiration from
reinforcement learning principles, our method iteratively refines these
intermediate steps to maximize their contribution to the final output, similar
to reward-driven optimization in reinforcement learning with human feedback.
Experimental results across tasks such as translation, summarization, and
multi-choice question answering for logical reasoning show that our approach
outperforms traditional SFT methods, and offers a scalable and flexible
solution for sequence-to-sequence tasks.
|
2502.12307
|
The Agafonov and Schnorr-Stimm theorems for probabilistic automata
|
cs.FL cs.IT math.IT
|
For a fixed alphabet $A$, an infinite sequence $X$ is said to be normal if
every word $w$ over $A$ appears in $X$ with the same frequency as any other
word of the same length. A classical result of Agafonov (1966) relates
normality to finite automata as follows: a sequence $X$ is normal if and only
if any subsequence of $X$ selected by a finite automaton is itself normal.
Another theorem of Schnorr and Stimm (1972) gives an alternative
characterization: a sequence $X$ is normal if and only if no gambler can win
large amounts of money by betting on the sequence $X$ using a strategy that can
be described by a finite automaton. Both of these theorems are established in
the setting of deterministic finite automata. This raises the question as to
whether they can be extended to the setting of probabilistic finite automata.
In the case of the Agafonov theorem, this question was positively answered by
L\'echine et al.\ (2024) in a restricted case of probabilistic automata with
rational transition probabilities.
In this paper, we settle the full conjecture by proving that both the
Agafonov and the Schnorr-Stimm theorems hold true for arbitrary probabilistic
automata. Specifically, we show that a sequence $X$ is normal if and only if
any probabilistic automaton selects a normal subsequence of $X$ with
probability $1$. We also show that a sequence $X$ is normal if and only if a
probabilistic finite-state gambler fails to win on $X$ with probability $1$.
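The definition of normality used above can be made concrete with a small frequency counter. This sketch (a hypothetical helper, not from the paper) estimates empirical word frequencies over a finite prefix; a sequence is normal only if, for every length $k$, all $|A|^k$ words approach equal frequency:

```python
from collections import Counter

def word_frequencies(x, k):
    # Empirical frequency of each length-k word occurring in the
    # finite prefix x, counted over all sliding windows.
    windows = [x[i:i + k] for i in range(len(x) - k + 1)]
    n = len(windows)
    return {w: c / n for w, c in Counter(windows).items()}
```

For instance, the alternating sequence 0101... is equidistributed at word length 1 but never contains "00" or "11", so it fails the definition at length 2.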
|
2502.12309
|
Eigenvalues in microeconomics
|
econ.TH cs.SI math.HO
|
Square matrices often arise in microeconomics, particularly in network models
addressing applications from opinion dynamics to platform regulation. Spectral
theory provides powerful tools for analyzing their properties. We present an
accessible overview of several fundamental applications of spectral methods in
microeconomics, focusing especially on the Perron-Frobenius Theorem's role and
its connection to centrality measures. Applications include social learning,
network games, public goods provision, and market intervention under
uncertainty. The exposition assumes minimal social science background, using
spectral theory as a unifying mathematical thread to introduce interested
readers to some exciting current topics in microeconomic theory.
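One spectral tool such an overview builds on, the Perron-Frobenius principal eigenvector as a centrality measure, can be sketched with plain power iteration; this is a generic textbook construction, not code from the paper:

```python
def eigenvector_centrality(adj, iters=200):
    # Power iteration: repeatedly apply the adjacency matrix and renormalise.
    # By the Perron-Frobenius Theorem, for a non-negative irreducible
    # (and aperiodic) matrix this converges to the positive principal
    # eigenvector, whose entries rank the nodes by centrality.
    n = len(adj)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        v = [x / s for x in w]
    return v
```

On a triangle with one pendant node attached, the node shared by the triangle and the pendant edge comes out most central, the other two triangle nodes tie, and the pendant node ranks last.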
|
2502.12310
|
Domain Randomization is Sample Efficient for Linear Quadratic Control
|
eess.SY cs.SY
|
We study the sample efficiency of domain randomization and robust control for
the benchmark problem of learning the linear quadratic regulator (LQR). Domain
randomization, which synthesizes controllers by minimizing average performance
over a distribution of model parameters, has achieved empirical success in
robotics, but its theoretical properties remain poorly understood. We establish
that with an appropriately chosen sampling distribution, domain randomization
achieves the optimal asymptotic rate of decay in the excess cost, matching
certainty equivalence. We further demonstrate that robust control, while
potentially overly conservative, exhibits superior performance in the low-data
regime due to its ability to stabilize uncertain systems with coarse parameter
estimates. We propose a gradient-based algorithm for domain randomization that
performs well in numerical experiments, which enables us to validate the trends
predicted by our analysis. These results provide insights into the use of
domain randomization in learning-enabled control, and highlight several open
questions about its application to broader classes of systems.
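The core idea, synthesizing a controller by minimizing average performance over a distribution of sampled model parameters, can be sketched on a scalar LQR toy problem; the closed-form scalar cost and the grid search below are simplifying assumptions for illustration, not the paper's algorithm:

```python
import random

def closedloop_cost(a, b, k, q=1.0, r=1.0):
    # Infinite-horizon LQR cost per unit initial state for the scalar
    # system x_{t+1} = a x + b u with static feedback u = -k x:
    # J = (q + r k^2) / (1 - (a - b k)^2), provided |a - b k| < 1.
    rho = a - b * k
    return (q + r * k * k) / (1.0 - rho * rho) if abs(rho) < 1.0 else float("inf")

def domain_randomized_gain(samples, grid):
    # Domain randomization: pick the gain minimizing the cost averaged
    # over sampled (a, b) model parameters.
    return min(grid, key=lambda k: sum(closedloop_cost(a, b, k)
                                       for a, b in samples) / len(samples))
```

A gain chosen this way must stabilize every sampled model with finite cost whenever any gain on the grid does, which is the mechanism behind the robustness-versus-optimality trade-off the abstract analyzes.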
|
2502.12315
|
Mean-Field Bayesian Optimisation
|
cs.LG cs.MA
|
We address the problem of optimising the average payoff for a large number of
cooperating agents, where the payoff function is unknown and treated as a black
box. While standard Bayesian Optimisation (BO) methods struggle with the
scalability required for high-dimensional input spaces, we demonstrate how
leveraging the mean-field assumption on the black-box function can transform BO
into an efficient and scalable solution. Specifically, we introduce MF-GP-UCB,
a novel efficient algorithm designed to optimise agent payoffs in this setting.
Our theoretical analysis establishes a regret bound for MF-GP-UCB that is
independent of the number of agents, contrasting sharply with the exponential
dependence observed when naive BO methods are applied. We evaluate our
algorithm on a diverse set of tasks, including real-world problems, such as
optimising the location of public bikes for a bike-sharing programme,
distributing taxi fleets, and selecting refuelling ports for maritime vessels.
Empirical results demonstrate that MF-GP-UCB significantly outperforms existing
benchmarks, offering substantial improvements in performance and scalability,
constituting a promising solution for mean-field, black-box optimisation. The
code is available at https://github.com/petarsteinberg/MF-BO.
|
2502.12317
|
Can Language Models Learn Typologically Implausible Languages?
|
cs.CL cs.LG
|
Grammatical features across human languages show intriguing correlations
often attributed to learning biases in humans. However, empirical evidence has
been limited to experiments with highly simplified artificial languages, and
whether these correlations arise from domain-general or language-specific
biases remains a matter of debate. Language models (LMs) provide an opportunity
to study artificial language learning at a large scale and with a high degree
of naturalism. In this paper, we begin with an in-depth discussion of how LMs
allow us to better determine the role of domain-general learning biases in
language universals. We then assess learnability differences for LMs resulting
from typologically plausible and implausible languages closely following the
word-order universals identified by linguistic typologists. We conduct a
symmetrical cross-lingual study training and testing LMs on an array of highly
naturalistic but counterfactual versions of the English (head-initial) and
Japanese (head-final) languages. Compared to similar work, our datasets are
more naturalistic and fall closer to the boundary of plausibility. Our
experiments show that LMs are often slower to learn these subtly
implausible languages, while ultimately achieving similar performance on some
metrics regardless of typological plausibility. These findings lend credence to
the conclusion that LMs do show some typologically-aligned learning
preferences, and that the typological patterns may result from, at least to
some degree, domain-general learning biases.
|