| id | title | categories | abstract |
|---|---|---|---|
2502.06719
|
Gaussian Approximation and Multiplier Bootstrap for Stochastic Gradient
Descent
|
stat.ML cs.LG math.OC math.PR math.ST stat.TH
|
In this paper, we establish non-asymptotic convergence rates in the central
limit theorem for Polyak-Ruppert-averaged iterates of stochastic gradient
descent (SGD). Our analysis builds on the result of the Gaussian approximation
for nonlinear statistics of independent random variables of Shao and Zhang
(2022). Using this result, we prove the non-asymptotic validity of the
multiplier bootstrap for constructing the confidence sets for the optimal
solution of an optimization problem. In particular, our approach avoids the
need to approximate the limiting covariance of Polyak-Ruppert SGD iterates,
which allows us to derive approximation rates in convex distance of order up to
$1/\sqrt{n}$.
|
2502.06722
|
HetSwarm: Cooperative Navigation of Heterogeneous Swarm in Dynamic and
Dense Environments through Impedance-based Guidance
|
cs.RO
|
With the growing demand for efficient logistics and warehouse management,
unmanned aerial vehicles (UAVs) are emerging as a valuable complement to
automated guided vehicles (AGVs). UAVs enhance efficiency by navigating dense
environments and operating at varying altitudes. However, their limited flight
time, battery life, and payload capacity necessitate a supporting ground
station. To address these challenges, we propose HetSwarm, a heterogeneous
multi-robot system that combines a UAV and a mobile ground robot for
collaborative navigation in cluttered and dynamic conditions. Our approach
employs an artificial potential field (APF)-based path planner for the UAV,
allowing it to dynamically adjust its trajectory in real time. The ground robot
follows this path while maintaining connectivity through impedance links,
ensuring stable coordination. Additionally, the ground robot establishes
temporal impedance links with low-height ground obstacles to avoid local
collisions, as these obstacles do not interfere with the UAV's flight.
Experimental validation of HetSwarm in diverse environmental conditions
demonstrated a 90% success rate across 30 test cases. The ground robot
exhibited an average deviation of 45 cm near obstacles, confirming effective
collision avoidance. Extensive simulations in the Gym PyBullet environment
further validated the robustness of our system for real-world applications,
demonstrating its potential for dynamic, real-time task execution in cluttered
environments.
|
2502.06725
|
AgilePilot: DRL-Based Drone Agent for Real-Time Motion Planning in
Dynamic Environments by Leveraging Object Detection
|
cs.RO
|
Autonomous drone navigation in dynamic environments remains a critical
challenge, especially when dealing with unpredictable scenarios including
fast-moving objects with rapidly changing goal positions. While traditional
planners and classical optimisation methods have been extensively used to
address this dynamic problem, they often face real-time, unpredictable changes
that ultimately lead to sub-optimal performance in terms of adaptiveness and
real-time decision making. In this work, we propose a novel motion planner,
AgilePilot, based on Deep Reinforcement Learning (DRL) that is trained in
dynamic conditions, coupled with real-time Computer Vision (CV) for object
detections during flight. The training-to-deployment framework bridges the
Sim2Real gap, leveraging sophisticated reward structures that promote both
safety and agility depending upon environment conditions. The system can
rapidly adapt to changing environments, while achieving a maximum speed of 3.0
m/s in real-world scenarios. In comparison, our approach outperforms classical
algorithms such as an Artificial Potential Field (APF)-based motion planner by a
factor of three in both performance and tracking accuracy of dynamic targets
using velocity predictions, while exhibiting a 90% success rate across 75
conducted experiments. This work highlights the effectiveness of DRL in tackling
real-time dynamic navigation challenges, offering intelligent safety and
agility.
|
2502.06726
|
Rough Stochastic Pontryagin Maximum Principle and an Indirect Shooting
Method
|
math.OC cs.RO cs.SY eess.SY math.PR
|
We derive first-order Pontryagin optimality conditions for stochastic optimal
control with deterministic controls for systems modeled by rough differential
equations (RDE) driven by Gaussian rough paths. This Pontryagin Maximum
Principle (PMP) applies to systems following stochastic differential equations
(SDE) driven by Brownian motion, yet it does not rely on forward-backward SDEs
and involves the same Hamiltonian as the deterministic PMP. The proof consists
of first deriving various integrable error bounds for solutions to nonlinear
and linear RDEs by leveraging recent results on Gaussian rough paths. The PMP
then follows using standard techniques based on needle-like variations. As an
application, we propose the first indirect shooting method for nonlinear
stochastic optimal control and show that it converges 10x faster than a direct
method on a stabilization task.
|
2502.06727
|
Application of Artificial Intelligence (AI) in Civil Engineering
|
cs.AI
|
Hard computing generally deals with precise data, which provides ideal
solutions to problems. However, in the civil engineering field, amongst other
disciplines, that is not always the case as real-world systems are continuously
changing. Here lies the need to explore soft computing methods and artificial
intelligence to solve civil engineering shortcomings. The integration of
advanced computational models, including Artificial Neural Networks (ANNs),
Fuzzy Logic, Genetic Algorithms (GAs), and Probabilistic Reasoning, has
revolutionized the domain of civil engineering. These models have significantly
advanced diverse sub-fields by offering innovative solutions and improved
analysis capabilities. These sub-fields include slope stability analysis,
bearing capacity, water quality and treatment, transportation systems, air
quality, and structural materials. ANNs predict non-linearities and provide
accurate estimates. Fuzzy logic uses an efficient decision-making process to
provide a more precise assessment of systems. Lastly, while GAs optimize models
(based on evolutionary processes) for better outcomes, probabilistic reasoning
lowers their statistical uncertainties.
|
2502.06728
|
FlexDeMo: Decoupled Momentum Optimization for Fully and Hybrid Sharded
Training
|
cs.LG cs.AI
|
Training large neural network models requires extensive computational
resources, often distributed across several nodes and accelerators. Recent
findings suggest that it may be sufficient to only exchange the fast moving
components of the gradients, while accumulating momentum locally (Decoupled
Momentum, or DeMo). However, when considering larger models that do not fit on
a single accelerator, the exchange of gradient information and the integration
of DeMo need to be reconsidered. Here, we propose employing a hybrid strategy,
FlexDeMo, whereby nodes fully synchronize locally between different GPUs and
inter-node communication is improved through only using the fast-moving
components. This effectively combines previous hybrid sharding strategies with
the advantages of decoupled momentum. Our experimental results show that
FlexDeMo is on par with AdamW in terms of validation loss, demonstrating its
viability.
|
2502.06733
|
Dynamic Loss-Based Sample Reweighting for Improved Large Language Model
Pretraining
|
cs.LG cs.AI
|
Pretraining large language models (LLMs) on vast and heterogeneous datasets
is crucial for achieving state-of-the-art performance across diverse downstream
tasks. However, current training paradigms treat all samples equally,
overlooking the importance or relevance of individual samples throughout the
training process. Existing reweighting strategies, which primarily focus on
group-level data importance, fail to leverage fine-grained instance-level
information and do not adapt dynamically to individual sample importance as
training progresses. In this paper, we introduce novel algorithms for dynamic,
instance-level data reweighting aimed at improving both the efficiency and
effectiveness of LLM pretraining. Our methods adjust the weight of each
training sample based on its loss value in an online fashion, allowing the
model to dynamically focus on more informative or important samples at the
current training stage. In particular, our framework allows us to
systematically devise reweighting strategies deprioritizing redundant or
uninformative data, which we find tend to work best. Furthermore, we develop a
new theoretical framework for analyzing the impact of loss-based reweighting on
the convergence of gradient-based optimization, providing the first formal
characterization of how these strategies affect convergence bounds. We
empirically validate our approach across a spectrum of tasks, from pretraining
7B and 1.4B parameter LLMs to smaller-scale language models and linear
regression problems, demonstrating that our loss-based reweighting approach can
lead to faster convergence and significantly improved performance.
|
2502.06734
|
Señorita-2M: A High-Quality Instruction-based Dataset for General
Video Editing by Video Specialists
|
cs.CV
|
Recent advancements in video generation have spurred the development of video
editing techniques, which can be divided into inversion-based and end-to-end
methods. However, current video editing methods still suffer from several
challenges. Inversion-based methods, though training-free and flexible, are
time-consuming during inference, struggle with fine-grained editing
instructions, and produce artifacts and jitter. On the other hand, end-to-end
methods, which rely on edited video pairs for training, offer faster inference
speeds but often produce poor editing results due to a lack of high-quality
training video pairs. In this paper, to close the gap in end-to-end methods, we
introduce Señorita-2M, a high-quality video editing dataset. Señorita-2M
consists of approximately 2 million video editing pairs. It is built using four
high-quality, specialized video editing models, each designed and trained by our
team to achieve state-of-the-art editing results. We also
propose a filtering pipeline to eliminate poorly edited video pairs.
Furthermore, we explore common video editing architectures to identify the most
effective structure based on current pre-trained generative models. Extensive
experiments show that our dataset can help to yield remarkably high-quality
video editing results. More details are available at
https://senorita.github.io.
|
2502.06735
|
Enhancing Pneumonia Diagnosis and Severity Assessment through Deep
Learning: A Comprehensive Approach Integrating CNN Classification and
Infection Segmentation
|
cs.CV
|
Lung disease poses a substantial global health challenge, with pneumonia
being a prevalent concern. This research focuses on leveraging deep learning
techniques to detect and assess pneumonia, addressing two interconnected
objectives. Initially, Convolutional Neural Network (CNN) models are introduced
for pneumonia classification, emphasizing the necessity of comprehensive
diagnostic assessments considering COVID-19. Subsequently, the study advocates
for the utilization of deep learning-based segmentation to determine the
severity of infection. This dual-pronged approach offers valuable insights for
medical professionals, facilitating a more nuanced understanding and effective
treatment of pneumonia. The integration of deep learning aims to elevate the accuracy
and efficiency of pneumonia detection, thereby contributing to enhanced
healthcare outcomes on a global scale.
|
2502.06736
|
Low-power Spike-based Wearable Analytics on RRAM Crossbars
|
cs.ET cs.AI cs.AR
|
This work introduces a spike-based wearable analytics system utilizing
Spiking Neural Networks (SNNs) deployed on an In-memory Computing engine based
on RRAM crossbars, which are known for their compactness and energy-efficiency.
Given the hardware constraints and noise characteristics of the underlying RRAM
crossbars, we propose online adaptation of pre-trained SNNs in real-time using
Direct Feedback Alignment (DFA) against traditional backpropagation (BP).
DFA learning, which allows layer-parallel gradient
computations, acts as a fast, energy- and area-efficient method for online
adaptation of SNNs on RRAM crossbars, yielding better algorithmic performance
than SNNs adapted using BP. Through extensive simulations using our
in-house hardware evaluation engine called DFA_Sim, we find that DFA achieves
up to 64.1% lower energy consumption, 10.1% lower area overhead, and a 2.1x
reduction in latency compared to BP, while delivering up to 7.55% higher
inference accuracy on human activity recognition (HAR) tasks.
|
2502.06737
|
VersaPRM: Multi-Domain Process Reward Model via Synthetic Reasoning Data
|
cs.LG
|
Process Reward Models (PRMs) have proven effective at enhancing mathematical
reasoning for Large Language Models (LLMs) by leveraging increased
inference-time computation. However, they are predominantly trained on
mathematical data and their generalizability to non-mathematical domains has
not been rigorously studied. In response, this work first shows that current
PRMs have poor performance in other domains. To address this limitation, we
introduce VersaPRM, a multi-domain PRM trained on synthetic reasoning data
generated using our novel data generation and annotation method. VersaPRM
achieves consistent performance gains across diverse domains. For instance, in
the MMLU-Pro category of Law, VersaPRM, via weighted majority voting, achieves a
7.9% performance gain over the majority voting baseline -- surpassing
Qwen2.5-Math-PRM's gain of 1.3%. We further contribute to the community by
open-sourcing all data, code and models for VersaPRM.
|
2502.06738
|
Resurrecting saturated LLM benchmarks with adversarial encoding
|
cs.LG
|
Recent work showed that small changes in benchmark questions can reduce LLMs'
reasoning and recall. We explore two such changes: pairing questions and adding
more answer options, on three benchmarks: WMDP-bio, GPQA, and MMLU variants. We
find that for more capable models, these predictably reduce performance,
effectively raising the performance ceiling of a benchmark and unsaturating
it again. We suggest this approach can resurrect old benchmarks.
|
2502.06739
|
A note on the physical interpretation of neural PDE's
|
cs.LG cond-mat.dis-nn physics.comp-ph
|
We highlight a formal and substantial analogy between Machine Learning (ML)
algorithms and discrete dynamical systems (DDS) in relaxation form. The analogy
offers a transparent interpretation of the weights in terms of physical
information-propagation processes and identifies the model function of the
forward ML step with the local attractor of the corresponding discrete
dynamics. Besides improving the explainability of current ML applications, this
analogy may also facilitate the development of a new class of ML algorithms with a
reduced number of weights.
|
2502.06741
|
ViSIR: Vision Transformer Single Image Reconstruction Method for Earth
System Models
|
cs.CV
|
Purpose: Earth system models (ESMs) integrate the interactions of the
atmosphere, ocean, land, ice, and biosphere to estimate the state of regional
and global climate under a wide variety of conditions. The ESMs are highly
complex, and thus, deep neural network architectures are used to model the
complexity and store the down-sampled data. In this paper, we propose the
Vision Transformer Sinusoidal Representation Networks (ViSIR) to improve the
single-image super-resolution (SR) reconstruction task for the ESM data.
Methods: ViSIR combines the SR capability of Vision Transformers (ViT) with
the high-frequency detail preservation of the Sinusoidal Representation Network
(SIREN) to address the spectral bias observed in SR tasks.
Results: The ViSIR outperforms ViT by 4.1 dB, SIREN by 7.5 dB, and
SR Generative Adversarial Networks (SR-GANs) by 7.1 dB PSNR on average for three
different measurements.
Conclusion: The proposed ViSIR is evaluated and compared with
state-of-the-art methods. The results show that the proposed algorithm
outperforms other methods in terms of Mean Square Error (MSE), Peak
Signal-to-Noise Ratio (PSNR), and Structural Similarity Index Measure (SSIM).
|
2502.06742
|
Gradient Multi-Normalization for Stateless and Scalable LLM Training
|
cs.LG cs.AI
|
Training large language models (LLMs) typically relies on adaptive optimizers
like Adam (Kingma & Ba, 2015) which store additional state information to
accelerate convergence but incur significant memory overhead. Recent efforts,
such as SWAN (Ma et al., 2024) address this by eliminating the need for
optimizer states while achieving performance comparable to Adam via a
multi-step preprocessing procedure applied to instantaneous gradients.
Motivated by the success of SWAN, we introduce a novel framework for designing
stateless optimizers that normalizes stochastic gradients according to multiple
norms. To achieve this, we propose a simple alternating scheme to enforce the
normalization of gradients w.r.t. these norms. We show that our procedure can
produce, up to an arbitrary precision, a fixed-point of the problem, and that
SWAN is a particular instance of our approach with carefully chosen norms,
providing a deeper understanding of its design. However, SWAN's computationally
expensive whitening/orthogonalization step limits its practicality for large
LMs. Using our principled perspective, we develop a more efficient,
scalable, and practical stateless optimizer. Our algorithm relaxes the
properties of SWAN, significantly reducing its computational cost while
retaining its memory efficiency, making it applicable to training large-scale
models. Experiments on pre-training LLaMA models with up to 1 billion
parameters demonstrate a 3X speedup over Adam with significantly reduced memory
requirements, outperforming other memory-efficient baselines.
|
2502.06747
|
Wandering around: A bioinspired approach to visual attention through
object motion sensitivity
|
cs.CV
|
Active vision enables dynamic visual perception, offering an alternative to
static feedforward architectures in computer vision, which rely on large
datasets and high computational resources. Biological selective attention
mechanisms allow agents to focus on salient Regions of Interest (ROIs),
reducing computational demand while maintaining real-time responsiveness.
Event-based cameras, inspired by the mammalian retina, enhance this capability
by capturing asynchronous scene changes, enabling efficient low-latency
processing. To distinguish moving objects while the event-based camera is in
motion, the agent requires an object motion segmentation mechanism to accurately
detect targets and center them in the visual field (fovea). Integrating
event-based sensors with neuromorphic algorithms represents a paradigm shift,
using Spiking Neural Networks to parallelize computation and adapt to dynamic
environments. This work presents a Spiking Convolutional Neural Network
bioinspired attention system for selective attention through object motion
sensitivity. The system generates events via fixational eye movements using a
Dynamic Vision Sensor integrated into the Speck neuromorphic hardware, mounted
on a Pan-Tilt unit, to identify the ROI and saccade toward it. The system,
characterized using ideal gratings and benchmarked against the Event Camera
Motion Segmentation Dataset, reaches a mean IoU of 82.2% and a mean SSIM of 96%
in multi-object motion segmentation. The detection of salient objects reaches
88.8% accuracy in office scenarios and 89.8% in low-light conditions on the
Event-Assisted Low-Light Video Object Segmentation Dataset. A real-time
demonstrator shows the system's 0.12 s response to dynamic scenes. Its
learning-free design ensures robustness across perceptual scenes, making it a
reliable foundation for real-time robotic applications serving as a basis for
more complex architectures.
|
2502.06748
|
Institutional Preferences in the Laboratory
|
cs.SI cs.GT
|
Getting a group to adopt cooperative norms is an enduring challenge. But in
real-world settings, individuals don't just passively accept static
environments; they act both within and upon the social systems that structure
their interactions. Should we expect the dynamism of player-driven changes to
the "rules of the game" to hinder cooperation -- because of the substantial
added complexity -- or help it, as prosocial agents tweak their environment
toward non-zero-sum games? We introduce a laboratory setting to test whether
groups can guide themselves to cooperative outcomes by manipulating the
environmental parameters that shape their emergent cooperation process. We test
for cooperation in a set of economic games that impose different social
dilemmas. These games vary independently in the institutional features of
stability, efficiency, and fairness. By offering agency over behavior along
with second-order agency over the rules of the game, we understand emergent
cooperation in naturalistic settings in which the rules of the game are
themselves dynamic and subject to choice. The literature on transfer learning
in games suggests that interactions between features are important and might
aid or hinder the transfer of cooperative learning to new settings.
|
2502.06749
|
Incentivizing Desirable Effort Profiles in Strategic Classification: The
Role of Causality and Uncertainty
|
cs.GT cs.CY cs.LG
|
We study strategic classification in binary decision-making settings where
agents can modify their features in order to improve their classification
outcomes. Importantly, our work considers the causal structure across different
features, acknowledging that effort in a given feature may affect other
features. The main goal of our work is to understand *when and how much
agent effort is invested towards desirable features*, and how this is
influenced by the deployed classifier, the causal structure of the agent's
features, their ability to modify them, and the information available to the
agent about the classifier and the feature causal graph.
In the complete information case, when agents know the classifier and the
causal structure of the problem, we derive conditions ensuring that rational
agents focus on features favored by the principal. We show that designing
classifiers to induce desirable behavior is generally non-convex, though
tractable in special cases. We also extend our analysis to settings where
agents have incomplete information about the classifier or the causal graph.
While optimal effort selection is again a non-convex problem under general
uncertainty, we highlight special cases of partial uncertainty where this
selection problem becomes tractable. Our results indicate that uncertainty
drives agents to favor features with higher expected importance and lower
variance, potentially misaligning with principal preferences. Finally,
numerical experiments based on a cardiovascular disease risk study illustrate
how to incentivize desirable modifications under uncertainty.
|
2502.06750
|
Accelerating Data Processing and Benchmarking of AI Models for Pathology
|
cs.CV
|
Advances in foundation modeling have reshaped computational pathology.
However, the increasing number of available models and lack of standardized
benchmarks make it increasingly complex to assess their strengths, limitations,
and potential for further development. To address these challenges, we
introduce a new suite of software tools for whole-slide image processing,
foundation model benchmarking, and curated publicly available tasks. We
anticipate that these resources will promote transparency, reproducibility, and
continued progress in the field.
|
2502.06751
|
What makes a good feedforward computational graph?
|
cs.LG cs.AI cs.SI stat.ML
|
As implied by the plethora of literature on graph rewiring, the choice of
computational graph employed by a neural network can make a significant impact
on its downstream performance. Certain effects related to the computational
graph, such as under-reaching and over-squashing, may even render the model
incapable of learning certain functions. Most of these effects have only been
thoroughly studied in the domain of undirected graphs; however, recent years
have seen a significant rise in interest in feedforward computational graphs:
directed graphs without any back edges. In this paper, we study the desirable
properties of a feedforward computational graph, discovering two important
complementary measures: fidelity and mixing time, and evaluating a few popular
choices of graphs through the lens of these measures. Our study is backed both
by theoretical analyses of the metrics' asymptotic behaviour for various graphs
and by correlating these metrics with the performance of trained neural network
models using the corresponding graphs.
|
2502.06753
|
Case for a unified surrogate modelling framework in the age of AI
|
stat.CO cs.LG
|
Surrogate models are widely used in natural sciences, engineering, and
machine learning to approximate complex systems and reduce computational costs.
However, the current landscape lacks standardisation across key stages of the
pipeline, including data collection, sampling design, model class selection,
evaluation metrics, and downstream task performance analysis. This
fragmentation limits reproducibility, reliability, and cross-domain
applicability. The issue has only been exacerbated by the AI revolution and a
new suite of surrogate model classes that it offers. In this position paper, we
argue for the urgent need for a unified framework to guide the development and
evaluation of surrogate models. We outline essential steps for constructing a
comprehensive pipeline and discuss alternative perspectives, such as the
benefits of domain-specific frameworks. By advocating for a standardised
approach, this paper seeks to improve the reliability of surrogate modelling,
foster cross-disciplinary knowledge transfer, and, as a result, accelerate
scientific progress.
|
2502.06755
|
Sparse Autoencoders for Scientifically Rigorous Interpretation of Vision
Models
|
cs.CV
|
To truly understand vision models, we must not only interpret their learned
features but also validate these interpretations through controlled
experiments. Current approaches either provide interpretable features without
the ability to test their causal influence, or enable model editing without
interpretable controls. We present a unified framework using sparse
autoencoders (SAEs) that bridges this gap, allowing us to discover
human-interpretable visual features and precisely manipulate them to test
hypotheses about model behavior. By applying our method to state-of-the-art
vision models, we reveal key differences in the semantic abstractions learned
by models with different pre-training objectives. We then demonstrate the
practical usage of our framework through controlled interventions across
multiple vision tasks. We show that SAEs can reliably identify and manipulate
interpretable visual features without model re-training, providing a powerful
tool for understanding and controlling vision model behavior. We provide code,
demos and models on our project website: https://osu-nlp-group.github.io/SAE-V.
|
2502.06756
|
SAMRefiner: Taming Segment Anything Model for Universal Mask Refinement
|
cs.CV
|
In this paper, we explore a principal way to enhance the quality of widely
pre-existing coarse masks, enabling them to serve as reliable training data for
segmentation models to reduce the annotation cost. In contrast to prior
refinement techniques that are tailored to specific models or tasks in a
closed-world manner, we propose SAMRefiner, a universal and efficient approach
by adapting SAM to the mask refinement task. The core technique of our model is
the noise-tolerant prompting scheme. Specifically, we introduce a multi-prompt
excavation strategy to mine diverse input prompts for SAM (i.e.,
distance-guided points, context-aware elastic bounding boxes, and
Gaussian-style masks) from initial coarse masks. These prompts can collaborate
with each other to mitigate the effect of defects in coarse masks. In
particular, considering SAM's difficulty in handling the multi-object case
in semantic segmentation, we introduce a split-then-merge (STM) pipeline.
Additionally, we extend our method to SAMRefiner++ by introducing an additional
IoU adaption step to further boost the performance of the generic SAMRefiner on
the target dataset. This step is self-boosted and requires no additional
annotation. The proposed framework is versatile and can flexibly cooperate with
existing segmentation methods. We evaluate our framework on a wide range
of benchmarks under different settings, demonstrating better accuracy and
efficiency. SAMRefiner holds significant potential to expedite the evolution of
refinement tools. Our code is available at
https://github.com/linyq2117/SAMRefiner.
|
2502.06759
|
Rationalization Models for Text-to-SQL
|
cs.CL cs.AI cs.DB
|
We introduce a framework for generating Chain-of-Thought (CoT) rationales to
enhance text-to-SQL model fine-tuning. These rationales consist of intermediate
SQL statements and explanations, serving as incremental steps toward
constructing the final SQL query. The process begins with manually annotating a
small set of examples, which are then used to prompt a large language model in
an iterative, dynamic few-shot knowledge distillation procedure from a teacher
model. A rationalization model is subsequently trained on the validated
decomposed queries, enabling extensive synthetic CoT annotations for
text-to-SQL datasets. To evaluate the approach, we fine-tune small language
models with and without these rationales on the BIRD dataset. Results indicate
that step-by-step query generation improves execution accuracy, especially for
moderately and highly complex queries, while also enhancing explainability.
|
2502.06760
|
Infinite-Horizon Value Function Approximation for Model Predictive
Control
|
cs.RO
|
Model Predictive Control has emerged as a popular tool for robots to generate
complex motions. However, the real-time requirement has limited the use of hard
constraints and large preview horizons, which are necessary to ensure safety
and stability. In practice, practitioners have to carefully design cost
functions that can imitate an infinite horizon formulation, which is tedious
and often results in local minima. In this work, we study how to approximate
the infinite horizon value function of constrained optimal control problems
with neural networks using value iteration and trajectory optimization.
Furthermore, we demonstrate how using this value function approximation as a
terminal cost provides global stability to the model predictive controller. The
approach is validated on two toy problems and a real-world scenario with online
obstacle avoidance on an industrial manipulator where the value function is
conditioned to the goal and obstacle.
|
2502.06761
|
When, Where and Why to Average Weights?
|
cs.LG
|
Averaging checkpoints along the training trajectory is a simple yet powerful
approach to improve the generalization performance of Machine Learning models
and reduce training time. Motivated by these potential gains, and in an effort
to fairly and thoroughly benchmark this technique, we present an extensive
evaluation of averaging techniques in modern Deep Learning, which we perform
using AlgoPerf (Dahl et al., 2023), a large-scale benchmark for
optimization algorithms. We investigate whether weight averaging can reduce
training time, improve generalization, and replace learning rate decay, as
suggested by recent literature. Our evaluation across seven architectures and
datasets reveals that averaging significantly accelerates training and yields
considerable efficiency gains, at the price of a minimal implementation and
memory cost, while mildly improving generalization across all considered
workloads. Finally, we explore the relationship between averaging and learning
rate annealing and show how to optimally combine the two to achieve the best
performances.
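The core operation studied above, uniformly averaging checkpoints along the training trajectory, can be sketched in a few lines (a minimal illustration that assumes parameters are stored as dicts of float lists; this is not the benchmarked implementation):

```python
def average_checkpoints(checkpoints):
    """Uniformly average a list of parameter dicts {name: list-of-floats}."""
    n = len(checkpoints)
    avg = {}
    for name in checkpoints[0]:
        avg[name] = [sum(ckpt[name][i] for ckpt in checkpoints) / n
                     for i in range(len(checkpoints[0][name]))]
    return avg

# Two toy checkpoints of a single parameter "w"
ckpts = [{"w": [1.0, 2.0]}, {"w": [3.0, 4.0]}]
print(average_checkpoints(ckpts))  # {'w': [2.0, 3.0]}
```

In practice the same idea is applied as a running or exponential moving average to avoid storing all checkpoints.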
|
2502.06764
|
History-Guided Video Diffusion
|
cs.LG cs.CV
|
Classifier-free guidance (CFG) is a key technique for improving conditional
generation in diffusion models, enabling more accurate control while enhancing
sample quality. It is natural to extend this technique to video diffusion,
which generates video conditioned on a variable number of context frames,
collectively referred to as history. However, we find two key challenges to
guiding with variable-length history: architectures that only support
fixed-size conditioning, and the empirical observation that CFG-style history
dropout performs poorly. To address this, we propose the Diffusion Forcing
Transformer (DFoT), a video diffusion architecture and theoretically grounded
training objective that jointly enable conditioning on a flexible number of
history frames. We then introduce History Guidance, a family of guidance
methods uniquely enabled by DFoT. We show that its simplest form, vanilla
history guidance, already significantly improves video generation quality and
temporal consistency. A more advanced method, history guidance across time and
frequency, further enhances motion dynamics, enables compositional
generalization to out-of-distribution history, and can stably roll out
extremely long videos. Website: https://boyuan.space/history-guidance
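The classifier-free guidance that this work extends combines conditional and unconditional score estimates by extrapolation; a minimal sketch of that standard combination (illustrative only, not the DFoT implementation):

```python
def cfg_combine(eps_uncond, eps_cond, w):
    """Classifier-free guidance: move from the unconditional prediction
    toward the conditional one with guidance weight w (w=1 recovers the
    purely conditional prediction; w>1 extrapolates past it)."""
    return [u + w * (c - u) for u, c in zip(eps_uncond, eps_cond)]

# With w=2, the guided prediction overshoots the conditional one
print(cfg_combine([0.0, 0.0], [1.0, 1.0], 2.0))  # [2.0, 2.0]
```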
|
2502.06765
|
Are all models wrong? Fundamental limits in distribution-free empirical
model falsification
|
math.ST cs.LG stat.ML stat.TH
|
In statistics and machine learning, when we train a fitted model on available
data, we typically want to ensure that we are searching within a model class
that contains at least one accurate model -- that is, we would like to ensure
an upper bound on the model class risk (the lowest possible risk that can be
attained by any model in the class). However, it is also of interest to
establish lower bounds on the model class risk, for instance so that we can
determine whether our fitted model is at least approximately optimal within the
class, or, so that we can decide whether the model class is unsuitable for the
particular task at hand. Particularly in the setting of interpolation learning
where machine learning models are trained to reach zero error on the training
data, we might ask if, at the very least, a positive lower bound on the model
class risk is possible -- or are we unable to detect that "all models are
wrong"? In this work, we answer these questions in a distribution-free setting
by establishing a model-agnostic, fundamental hardness result for the problem
of constructing a lower bound on the best test error achievable over a model
class, and examine its implications on specific model classes such as
tree-based methods and linear regression.
|
2502.06766
|
Exploiting Sparsity for Long Context Inference: Million Token Contexts
on Commodity GPUs
|
cs.CL
|
There is growing demand for performing inference with hundreds of thousands
of input tokens on trained transformer models. Inference at this extreme scale
demands significant computational resources, hindering the application of
transformers at long contexts on commodity (i.e., not data-center-scale)
hardware. To address the inference time costs associated with running
self-attention based transformer language models on long contexts and enable
their adoption on widely available hardware, we propose a tunable mechanism
that reduces the cost of the forward pass by attending to only the most
relevant tokens at every generation step using a top-k selection mechanism. We
showcase the efficiency gains afforded by our method by performing inference on
context windows up to 1M tokens using approximately 16GB of GPU RAM. Our
experiments reveal that models are capable of handling the sparsity induced by
the reduced number of keys and values. By attending to less than 2% of input
tokens, we achieve over 95% of model performance on common benchmarks (RULER,
AlpacaEval, and Open LLM Leaderboard).
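The top-k selection mechanism described above, attending only to the most relevant keys at each generation step, can be sketched in plain Python for a single query (an illustrative toy, not the authors' code):

```python
import math

def topk_attention(q, keys, values, k):
    """Scaled dot-product attention restricted to the k highest-scoring keys."""
    scores = [sum(qi * ki for qi, ki in zip(q, key)) / math.sqrt(len(q))
              for key in keys]
    # Keep only the k most relevant keys; the rest are never softmaxed over
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    exps = [math.exp(scores[i]) for i in top]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(values[0])
    return [sum(w * values[i][d] for w, i in zip(weights, top))
            for d in range(dim)]
```

With k much smaller than the context length, the softmax and value aggregation touch only a tiny fraction of the KV cache, which is what enables million-token contexts in bounded GPU RAM.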
|
2502.06768
|
Train for the Worst, Plan for the Best: Understanding Token Ordering in
Masked Diffusions
|
cs.LG
|
In recent years, masked diffusion models (MDMs) have emerged as a promising
alternative approach for generative modeling over discrete domains. Compared to
autoregressive models (ARMs), MDMs trade off complexity at training time with
flexibility at inference time. At training time, they must learn to solve an
exponentially large number of infilling problems, but at inference time, they
can decode tokens in essentially arbitrary order. In this work, we closely
examine these two competing effects. On the training front, we theoretically
and empirically demonstrate that MDMs indeed train on computationally
intractable subproblems compared to their autoregressive counterparts. On the
inference front, we show that a suitable strategy for adaptively choosing the
token decoding order significantly enhances the capabilities of MDMs, allowing
them to sidestep hard subproblems. On logic puzzles like Sudoku, we show that
adaptive inference can boost solving accuracy in pretrained MDMs from $<7$% to
$\approx 90$%, even outperforming ARMs with $7\times$ as many parameters and
that were explicitly trained via teacher forcing to learn the right order of
decoding.
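A greedy instance of the adaptive decoding strategy described above is to fill in whichever masked position the model is currently most confident about; a toy sketch (illustrative only, not the paper's exact procedure):

```python
def adaptive_decode_order(confidences):
    """Return masked positions ordered so the most confident is decoded first."""
    return sorted(range(len(confidences)), key=lambda i: -confidences[i])

# Position 1 (confidence 0.9) is decoded first, position 0 (0.1) last
print(adaptive_decode_order([0.1, 0.9, 0.5]))  # [1, 2, 0]
```

In practice the confidences are re-estimated after each decoded token, so the order adapts as the board (e.g., a Sudoku grid) fills in.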
|
2502.06770
|
Parameter-Dependent Control Lyapunov Functions for Stabilizing Nonlinear
Parameter-Varying Systems
|
math.OC cs.SY eess.SY
|
This paper introduces the concept of parameter-dependent (PD) control
Lyapunov functions (CLFs) for gain-scheduled stabilization of nonlinear
parameter-varying (NPV) systems under input constraints. It shows that given a
PD-CLF, a locally Lipschitz control law can be constructed by solving a robust
quadratic program. For polynomial control-affine NPV systems, it provides
convex conditions, based on the sum of squares (SOS) programming, to jointly
synthesize both a PD-CLF and a PD controller, aimed at maximizing the PD region
of stabilization. Input constraints can be straightforwardly incorporated into
the synthesis procedure. Unlike traditional linear parameter-varying (LPV)
methods that rely on linearization or over-approximation to get an LPV model,
the proposed framework fully captures the nonlinearities of the system
dynamics. Simulation results validate the efficacy of the method, showcasing
its potential for stabilizing NPV systems under input constraints.
|
2502.06771
|
Unsupervised Particle Tracking with Neuromorphic Computing
|
hep-ex cs.ET cs.LG cs.NE
|
We study the application of a neural network architecture for identifying
charged particle trajectories via unsupervised learning of delays and synaptic
weights using a spike-time-dependent plasticity rule. In the considered model,
the neurons receive time-encoded information on the position of particle hits
in a tracking detector for a particle collider, modeled according to the
geometry of the Compact Muon Solenoid Phase II detector. We show how a spiking
neural network is capable of successfully identifying in a completely
unsupervised way the signal left by charged particles in the presence of
conspicuous noise from accidental or combinatorial hits. These results open the
way to applications of neuromorphic computing to particle tracking, motivating
further studies into its potential for real-time, low-power particle tracking
in future high-energy physics experiments.
|
2502.06772
|
ReasonFlux: Hierarchical LLM Reasoning via Scaling Thought Templates
|
cs.CL cs.AI cs.LG
|
We show that hierarchical LLM reasoning via scaling thought templates can
effectively optimize the reasoning search space and outperform the mathematical
reasoning capabilities of powerful LLMs like OpenAI o1-preview and DeepSeek V3.
We train our ReasonFlux-32B model with only 8 GPUs and introduce three
innovations: (i) a structured and generic thought template library, containing
around 500 high-level thought templates capable of generalizing to similar or
relevant reasoning problems; (ii) performing hierarchical reinforcement
learning on a sequence of thought templates instead of long CoTs, optimizing a
base LLM to plan out an optimal template trajectory for gradually handling
complex problems; (iii) a brand new inference scaling system that enables
hierarchical LLM reasoning by adaptively scaling thought templates at inference
time. With a template trajectory containing sequential thought templates, our
ReasonFlux-32B significantly advances math reasoning capabilities to
state-of-the-art levels. Notably, on the MATH benchmark, it achieves an
accuracy of 91.2% and surpasses o1-preview by 6.7%. On the USA Math Olympiad
(AIME) benchmark, ReasonFlux-32B solves an average of 56.7% of problems,
surpassing o1-preview and DeepSeek-V3 by 27% and 45%, respectively. Code:
https://github.com/Gen-Verse/ReasonFlux
|
2502.06773
|
On the Emergence of Thinking in LLMs I: Searching for the Right
Intuition
|
cs.AI cs.CL cs.LG
|
Recent AI advancements, such as OpenAI's new models, are transforming LLMs
into LRMs (Large Reasoning Models) that perform reasoning during inference,
taking extra time and compute for higher-quality outputs. We aim to uncover the
algorithmic framework for training LRMs. Methods like self-consistency, PRM,
and AlphaZero suggest reasoning as guided search. We ask: what is the simplest,
most scalable way to enable search in LLMs?
We propose a post-training framework called Reinforcement Learning via
Self-Play (RLSP). RLSP involves three steps: (1) supervised fine-tuning with
human or synthetic demonstrations of the reasoning process, (2) using an
exploration reward signal to encourage diverse and efficient reasoning
behaviors, and (3) RL training with an outcome verifier to ensure correctness
while preventing reward hacking. Our key innovation is to decouple exploration
and correctness signals during PPO training, carefully balancing them to
improve performance and efficiency.
Empirical studies in the math domain show that RLSP improves reasoning. On
the Llama-3.1-8B-Instruct model, RLSP can boost performance by 23% on the
MATH-500 test set; on AIME 2024 math problems, Qwen2.5-32B-Instruct improved by 10% due
to RLSP. However, a more important finding of this work is that the models
trained using RLSP, even with the simplest exploration reward that encourages
the model to take more intermediate steps, showed several emergent behaviors
such as backtracking, exploration of ideas, and verification. These findings
demonstrate that the RLSP framework might be enough to enable the emergence of
complex reasoning abilities in LLMs when scaled. Lastly, we propose a theory as
to why the RLSP search strategy is more suitable for LLMs, inspired by a
remarkable result that says CoT provably increases the computational power of
LLMs, which grows with the number of steps in
CoT \cite{li2024chain,merrill2023expresssive}.
|
2502.06774
|
ENFORCE: Exact Nonlinear Constrained Learning with Adaptive-depth Neural
Projection
|
cs.LG
|
Ensuring neural networks adhere to domain-specific constraints is crucial for
addressing safety and ethical concerns while also enhancing prediction
accuracy. Despite the nonlinear nature of most real-world tasks, existing
methods are predominantly limited to affine or convex constraints. We introduce
ENFORCE, a neural network architecture that guarantees predictions to satisfy
nonlinear constraints exactly. ENFORCE is trained with standard unconstrained
gradient-based optimizers (e.g., Adam) and leverages autodifferentiation and
local neural projections to enforce any $\mathcal{C}^1$ constraint to arbitrary
tolerance $\epsilon$. We build an adaptive-depth neural projection (AdaNP)
module that dynamically adjusts its complexity to suit the specific problem and
the required tolerance levels. ENFORCE guarantees satisfaction of equality
constraints that are nonlinear in both inputs and outputs of the neural network
with minimal (and adjustable) computational cost.
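One way to realize a local projection onto a nonlinear equality constraint $h(y)=0$, as this abstract describes, is to iterate Gauss-Newton steps using the constraint gradient; a hand-rolled sketch of that mechanic (assumed for illustration, not the AdaNP module itself):

```python
def project_onto_constraint(y, h, grad_h, tol=1e-8, max_iter=50):
    """Drive h(y) to ~0 via steps y <- y - h(y) * g / ||g||^2, where
    g = grad_h(y). Converges locally for C^1 constraints with nonzero
    gradient near the constraint set."""
    for _ in range(max_iter):
        val = h(y)
        if abs(val) < tol:
            break
        g = grad_h(y)
        g2 = sum(gi * gi for gi in g)
        y = [yi - val * gi / g2 for yi, gi in zip(y, g)]
    return y

# Project the point (2, 0) onto the unit circle x^2 + y^2 = 1
p = project_onto_constraint([2.0, 0.0],
                            lambda y: y[0] ** 2 + y[1] ** 2 - 1.0,
                            lambda y: [2 * y[0], 2 * y[1]])
print(p)  # close to [1.0, 0.0]
```

An adaptive-depth version would unroll only as many such steps as needed to reach the requested tolerance, which matches the "adjustable computational cost" claim above.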
|
2502.06775
|
Enhancing Performance of Explainable AI Models with Constrained Concept
Refinement
|
cs.LG
|
The trade-off between accuracy and interpretability has long been a challenge
in machine learning (ML). This tension is particularly significant for emerging
interpretable-by-design methods, which aim to redesign ML algorithms for
trustworthy interpretability but often sacrifice accuracy in the process. In
this paper, we address this gap by investigating the impact of deviations in
concept representations, an essential component of interpretable models, on
prediction performance, and propose a novel framework to mitigate these effects.
The framework builds on the principle of optimizing concept embeddings under
constraints that preserve interpretability. Using a generative model as a
test-bed, we rigorously prove that our algorithm achieves zero loss while
progressively enhancing the interpretability of the resulting model.
Additionally, we evaluate the practical performance of our proposed framework
in generating explainable predictions for image classification tasks across
various benchmarks. Compared to existing explainable methods, our approach not
only improves prediction accuracy while preserving model interpretability
across various large-scale benchmarks but also achieves this with significantly
lower computational cost.
|
2502.06776
|
Towards Internet-Scale Training For Agents
|
cs.LG cs.AI
|
The predominant approach for training web navigation agents gathers human
demonstrations for a set of popular websites and hand-written tasks, but it is
becoming clear that human data are an inefficient resource. We develop a
pipeline to facilitate Internet-scale training for agents without laborious
human annotations. In the first stage, an LLM generates tasks for 150k diverse
websites. In the next stage, LLM agents complete tasks and produce
trajectories. In the final stage, an LLM reviews the trajectories and judges
their success. Language models are competitive with human annotators, detecting
and filtering out harmful content with an accuracy of 97%, generating feasible
tasks with an 89% rate, and judging successful trajectories with an 82.6%
accuracy. Scaling the pipeline, agents based on Llama 3.1 70B solve 16.7% of
tasks for 150k sites. Training on the data generated by our pipeline is
competitive with training on human demonstrations. In data-limited settings
derived from Mind2Web and WebLINX, we improve Step Accuracy by up to +89.5% and
+122.1%, respectively, for agents trained on mixtures of data from our pipeline
and human data. When training agents with all available human data from these
benchmarks, agents fail to generalize to diverse real sites, and adding our
data improves their generalization by +149.0% for WebLINX and +156.3% for
Mind2Web. Code will be available at: data-for-agents.github.io.
|
2502.06777
|
Learning an Optimal Assortment Policy under Observational Data
|
stat.ML cs.LG math.OC math.ST stat.TH
|
We study the fundamental problem of offline assortment optimization under the
Multinomial Logit (MNL) model, where sellers must determine the optimal subset
of the products to offer based solely on historical customer choice data. While
most existing approaches to learning-based assortment optimization focus on the
online learning of the optimal assortment through repeated interactions with
customers, such exploration can be costly or even impractical in many
real-world settings. In this paper, we consider the offline learning paradigm
and investigate the minimal data requirements for efficient offline assortment
optimization. To this end, we introduce Pessimistic Rank-Breaking (PRB), an
algorithm that combines rank-breaking with pessimistic estimation. We prove
that PRB is nearly minimax optimal by establishing the tight suboptimality
upper bound and a nearly matching lower bound. This further shows that "optimal
item coverage" - where each item in the optimal assortment appears sufficiently
often in the historical data - is both sufficient and necessary for efficient
offline learning. This significantly relaxes the previous requirement of
observing the complete optimal assortment in the data. Our results provide
fundamental insights into the data requirements for offline assortment
optimization under the MNL model.
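Under the MNL model referenced above, the choice probabilities for an offered assortment take a closed form; a small sketch of the standard model (with the no-purchase option's utility normalized to 0, hence the 1 in the denominator):

```python
import math

def mnl_choice_probs(utilities, assortment):
    """P(choose item j | assortment S) = exp(u_j) / (1 + sum_{i in S} exp(u_i))."""
    denom = 1.0 + sum(math.exp(utilities[j]) for j in assortment)
    return {j: math.exp(utilities[j]) / denom for j in assortment}

# A single item with utility 0 is chosen half the time vs. not purchasing
print(mnl_choice_probs({0: 0.0}, [0]))  # {0: 0.5}
```

Offline data in this setting consists of (assortment, choice) pairs generated from these probabilities, which is what rank-breaking estimators decompose.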
|
2502.06779
|
KARST: Multi-Kernel Kronecker Adaptation with Re-Scaling Transmission
for Visual Classification
|
cs.CV cs.AI
|
Fine-tuning pre-trained vision models for specific tasks is a common practice
in computer vision. However, this process becomes more expensive as models grow
larger. Recently, parameter-efficient fine-tuning (PEFT) methods have emerged
as a popular solution to improve training efficiency and reduce storage needs
by tuning additional low-rank modules within pre-trained backbones. Despite
their advantages, they struggle with limited representation capabilities and
misalignment with pre-trained intermediate features. To address these issues,
we introduce an innovative Multi-Kernel Kronecker Adaptation with Re-Scaling
Transmission (KARST) for various recognition tasks. Specifically, its
multi-kernel design extends Kronecker projections horizontally and separates
adaptation matrices into multiple complementary spaces, reducing parameter
dependency and creating more compact subspaces. In addition, it incorporates extra
learnable re-scaling factors to better align with pre-trained feature
distributions, allowing for more flexible and balanced feature aggregation.
Extensive experiments validate that our KARST outperforms other PEFT
counterparts with a negligible inference cost due to its re-parameterization
characteristics. Code is publicly available at:
https://github.com/Lucenova/KARST.
|
2502.06781
|
Exploring the Limit of Outcome Reward for Learning Mathematical
Reasoning
|
cs.CL cs.LG
|
Reasoning abilities, especially those for solving complex math problems, are
crucial components of general intelligence. Recent advances by proprietary
companies, such as o-series models of OpenAI, have made remarkable progress on
reasoning tasks. However, the complete technical details remain unrevealed, and
the only techniques that are believed with certainty to be adopted are
reinforcement learning (RL) and long chains of thought. This paper proposes a new RL
framework, termed OREAL, to pursue the performance limit that can be achieved
through \textbf{O}utcome \textbf{RE}w\textbf{A}rd-based reinforcement
\textbf{L}earning for mathematical reasoning tasks, where only binary outcome
rewards are easily accessible. We theoretically prove that behavior cloning on
positive trajectories from best-of-N (BoN) sampling is sufficient to learn the
KL-regularized optimal policy in binary feedback environments. This formulation
further implies that the rewards of negative samples should be reshaped to
ensure the gradient consistency between positive and negative samples. To
alleviate the long-existing difficulties brought by sparse rewards in RL, which
are even exacerbated by the partial correctness of the long chain of thought
for reasoning tasks, we further apply a token-level reward model to sample
important tokens in reasoning trajectories for learning. With OREAL, for the
first time, a 7B model can obtain 94.0 pass@1 accuracy on MATH-500 through RL,
being on par with 32B models. OREAL-32B also surpasses previous 32B models
trained by distillation with 95.0 pass@1 accuracy on MATH-500. Our
investigation also indicates the importance of initial policy models and
training queries for RL. Code, models, and data will be released to benefit
future research\footnote{https://github.com/InternLM/OREAL}.
|
2502.06782
|
Lumina-Video: Efficient and Flexible Video Generation with Multi-scale
Next-DiT
|
cs.CV
|
Recent advancements have established Diffusion Transformers (DiTs) as a
dominant framework in generative modeling. Building on this success,
Lumina-Next achieves exceptional performance in the generation of
photorealistic images with Next-DiT. However, its potential for video
generation remains largely untapped, with significant challenges in modeling
the spatiotemporal complexity inherent to video data. To address this, we
introduce Lumina-Video, a framework that leverages the strengths of Next-DiT
while introducing tailored solutions for video synthesis. Lumina-Video
incorporates a Multi-scale Next-DiT architecture, which jointly learns multiple
patchifications to enhance both efficiency and flexibility. By incorporating
the motion score as an explicit condition, Lumina-Video also enables direct
control of generated videos' dynamic degree. Combined with a progressive
training scheme with increasingly higher resolution and FPS, and a multi-source
training scheme with mixed natural and synthetic data, Lumina-Video achieves
remarkable aesthetic quality and motion smoothness at high training and
inference efficiency. We additionally propose Lumina-V2A, a video-to-audio
model based on Next-DiT, to create synchronized sounds for generated videos.
Codes are released at https://www.github.com/Alpha-VLLM/Lumina-Video.
|
2502.06784
|
RelGNN: Composite Message Passing for Relational Deep Learning
|
cs.LG cs.AI cs.DB
|
Predictive tasks on relational databases are critical in real-world
applications spanning e-commerce, healthcare, and social media. To address
these tasks effectively, Relational Deep Learning (RDL) encodes relational data
as graphs, enabling Graph Neural Networks (GNNs) to exploit relational
structures for improved predictions. However, existing heterogeneous GNNs often
overlook the intrinsic structural properties of relational databases, leading
to modeling inefficiencies. Here we introduce RelGNN, a novel GNN framework
specifically designed to capture the unique characteristics of relational
databases. At the core of our approach is the introduction of atomic routes,
which are sequences of nodes forming high-order tripartite structures. Building
upon these atomic routes, RelGNN designs new composite message passing
mechanisms between heterogeneous nodes, allowing direct single-hop interactions
between them. This approach avoids redundant aggregations and mitigates
information entanglement, ultimately leading to more efficient and accurate
predictive modeling. RelGNN is evaluated on 30 diverse real-world tasks from
RelBench (Fey et al., 2024), and consistently achieves state-of-the-art
accuracy with up to 25% improvement.
|
2502.06785
|
DeepCrossAttention: Supercharging Transformer Residual Connections
|
cs.LG
|
Transformer networks have achieved remarkable success across diverse domains,
leveraging a variety of architectural innovations, including residual
connections. However, traditional residual connections, which simply sum the
outputs of previous layers, can dilute crucial information. This work
introduces DeepCrossAttention (DCA), an approach that enhances residual
learning in transformers. DCA employs learnable, input-dependent weights to
dynamically combine layer outputs, enabling the model to selectively focus on
the most relevant information in any of the previous layers. Furthermore, DCA
incorporates depth-wise cross-attention, allowing for richer interactions
between layers at different depths. Our language modeling experiments show that
DCA achieves improved perplexity for a given training time. Moreover, DCA
obtains the same model quality up to 3x faster while adding a negligible number
of parameters. Theoretical analysis confirms that DCA provides an improved
trade-off between accuracy and model size when the ratio of collective layer
ranks to the ambient dimension falls below a critical threshold.
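The contrast with plain residual summation can be seen in a toy sketch: instead of summing previous layer outputs, combine them with weights (fixed here for simplicity; in DCA they would be learnable and input-dependent, so this is illustrative only):

```python
def weighted_residual(layer_outputs, weights):
    """Combine previous layer outputs with per-layer weights instead of a
    plain sum, letting the model emphasize the most relevant layers."""
    dim = len(layer_outputs[0])
    return [sum(w * h[d] for w, h in zip(weights, layer_outputs))
            for d in range(dim)]

# Equal weights of 0.5 average two layer outputs instead of summing them
print(weighted_residual([[1.0, 1.0], [3.0, 3.0]], [0.5, 0.5]))  # [2.0, 2.0]
```

A plain residual connection corresponds to all weights fixed at 1; making them functions of the input is what lets the network suppress layers that would otherwise dilute the signal.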
|
2502.06786
|
Matryoshka Quantization
|
cs.LG cs.AI
|
Quantizing model weights is critical for reducing the communication and
inference costs of large models. However, quantizing models -- especially to
low precisions like int4 or int2 -- requires a trade-off in model quality;
int2, in particular, is known to severely degrade model quality. Consequently,
practitioners are often forced to maintain multiple models with different
quantization levels or serve a single model that best satisfies the
quality-latency trade-off. On the other hand, integer data types, such as int8,
inherently possess a nested (Matryoshka) structure where smaller bit-width
integers, like int4 or int2, are nested within the most significant bits. This
paper proposes Matryoshka Quantization (MatQuant), a novel multi-scale
quantization technique that addresses the challenge of needing multiple
quantized models. It allows training and maintaining just one model, which can
then be served at different precision levels. Furthermore, due to the
co-training and co-distillation regularization provided by MatQuant, the int2
precision models extracted by MatQuant can be up to $10\%$ more accurate than
standard int2 quantization (using techniques like QAT or OmniQuant). This
represents significant progress in model quantization, demonstrated by the fact
that, with the same recipe, an int2 FFN-quantized Gemma-2 9B model is more
accurate than an int8 FFN-quantized Gemma-2 2B model.
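The nested (Matryoshka) structure of integer types mentioned above means an int4 or int2 weight lives in the most significant bits of the int8 weight; a minimal sketch of slicing out that nested value (illustrative of the structure only, not MatQuant's training procedure):

```python
def slice_msb(weight_int8, bits):
    """Extract the top `bits` most significant bits of a signed int8 value,
    i.e. the lower-precision weight nested inside it."""
    assert -128 <= weight_int8 <= 127 and 1 <= bits <= 8
    # Arithmetic right shift preserves the sign (Python >> floors)
    return weight_int8 >> (8 - bits)

print(slice_msb(100, 4))  # 6, within the int4 range [-8, 7]
```

Co-training a single model so that every such slice remains accurate is the core idea behind serving one checkpoint at multiple precisions.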
|
2502.06787
|
Visual Agentic AI for Spatial Reasoning with a Dynamic API
|
cs.CV
|
Visual reasoning -- the ability to interpret the visual world -- is crucial
for embodied agents that operate within three-dimensional scenes. Progress in
AI has led to vision and language models capable of answering questions from
images. However, their performance declines when tasked with 3D spatial
reasoning. To tackle the complexity of such reasoning problems, we introduce an
agentic program synthesis approach where LLM agents collaboratively generate a
Pythonic API with new functions to solve common subproblems. Our method
overcomes limitations of prior approaches that rely on a static, human-defined
API, allowing it to handle a wider range of queries. To assess AI capabilities
for 3D understanding, we introduce a new benchmark of queries involving
multiple steps of grounding and inference. We show that our method outperforms
prior zero-shot models for visual reasoning in 3D and empirically validate the
effectiveness of our agentic framework for 3D spatial reasoning tasks. Project
website: https://glab-caltech.github.io/vadar/
|
2502.06788
|
EVEv2: Improved Baselines for Encoder-Free Vision-Language Models
|
cs.CV cs.AI
|
Existing encoder-free vision-language models (VLMs) are rapidly narrowing the
performance gap with their encoder-based counterparts, highlighting the
promising potential for unified multimodal systems with structural simplicity
and efficient deployment. We systematically clarify the performance gap between
VLMs using pre-trained vision encoders, discrete tokenizers, and minimalist
visual layers from scratch, deeply excavating the under-examined
characteristics of encoder-free VLMs. We develop efficient strategies for
encoder-free VLMs that rival mainstream encoder-based ones. After an in-depth
investigation, we launch EVEv2.0, a new and improved family of encoder-free
VLMs. We show that: (i) Properly decomposing and hierarchically associating
vision and language within a unified model reduces interference between
modalities. (ii) A well-designed training strategy enables effective
optimization for encoder-free VLMs. Through extensive evaluation, our EVEv2.0
represents a thorough study for developing a decoder-only architecture across
modalities, demonstrating superior data efficiency and strong vision-reasoning
capability. Code is publicly available at: https://github.com/baaivision/EVE.
|
2502.06789
|
Information-theoretic Bayesian Optimization: Survey and Tutorial
|
cs.LG cs.AI cs.IT math.IT stat.ML
|
Several scenarios require the optimization of non-convex black-box functions,
that is, noisy, expensive-to-evaluate functions with unknown analytical
expressions whose gradients are hence not accessible; a canonical example is the
hyper-parameter tuning problem of machine learning models. Bayesian
optimization is a class of methods with state-of-the-art performance delivering
a solution to this problem in real scenarios. It uses an iterative process that
employs a probabilistic surrogate model, typically a Gaussian process, of the
objective function to be optimized computing a posterior predictive
distribution of the black-box function. Based on the information given by this
posterior predictive distribution, Bayesian optimization includes the
computation of an acquisition function that represents, for every input space
point, the utility of evaluating that point in the next iteration if the
objective of the process is to retrieve a global extremum. This paper is a
survey of information-theoretic acquisition functions, which typically
outperform other acquisition functions. The main concepts of
the field of information theory are also described in detail to make the reader
aware of why information-theoretic acquisition functions deliver great results in
Bayesian optimization and how they can be approximated when they are
intractable. We also cover how information theory acquisition functions can be
adapted to complex optimization scenarios such as the multi-objective,
constrained, non-myopic, multi-fidelity, parallel and asynchronous settings and
provide further lines of research.
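The basic quantity these information-theoretic acquisition functions build on is the entropy of the Gaussian posterior predictive distribution, which for a univariate Gaussian has a closed form:

```python
import math

def gaussian_entropy(sigma):
    """Differential entropy of a univariate Gaussian with std sigma:
    H = 0.5 * log(2 * pi * e * sigma^2)."""
    return 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)

# Entropy shrinks as the surrogate becomes more certain (smaller sigma)
print(gaussian_entropy(1.0) > gaussian_entropy(0.1))  # True
```

Acquisition functions such as entropy search then score candidate points by how much evaluating them is expected to reduce entropy about the location or value of the global extremum.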
|
2502.06798
|
Prompt-Aware Scheduling for Efficient Text-to-Image Inferencing System
|
cs.LG cs.DC cs.GR
|
Traditional ML models utilize controlled approximations during high loads,
employing faster, but less accurate models in a process called accuracy
scaling. However, this method is less effective for generative text-to-image
models due to their sensitivity to input prompts and performance degradation
caused by large model loading overheads. This work introduces a novel
text-to-image inference system that optimally matches prompts across multiple
instances of the same model operating at various approximation levels to
deliver high-quality images under high loads and fixed budgets.
|
2502.06800
|
Analyzing Geospatial and Socioeconomic Disparities in Breast Cancer
Screening Among Populations in the United States: Machine Learning Approach
|
cs.LG stat.AP
|
Breast cancer screening plays a pivotal role in early detection and
subsequent effective management of the disease, impacting patient outcomes and
survival rates. This study aims to assess breast cancer screening rates
nationwide in the United States and investigate the impact of social
determinants of health on these screening rates. Data on mammography screening
at the census tract level for 2018 and 2020 were collected from the Behavioral
Risk Factor Surveillance System. We developed a large dataset of social
determinants of health, comprising 13 variables for 72337 census tracts.
Spatial analysis employing Getis-Ord Gi statistics was used to identify
clusters of high and low breast cancer screening rates. To evaluate the
influence of these social determinants, we implemented a random forest model,
with the aim of comparing its performance to linear regression and support
vector machine models. The models were evaluated using R2 and root mean squared
error metrics. Shapley Additive Explanations values were subsequently used to
assess the significance of variables and direction of their influence.
Geospatial analysis revealed elevated screening rates in the eastern and
northern United States, while central and midwestern regions exhibited lower
rates. The random forest model demonstrated superior performance, with an
R2=64.53 and root mean squared error of 2.06 compared to linear regression and
support vector machine models. Shapley Additive Explanations values indicated
that the percentage of the Black population, the number of mammography
facilities within a 10-mile radius, and the percentage of the population with
at least a bachelor's degree were the most influential variables, all
positively associated with mammography screening rates.
|
2502.06802
|
Solving the Content Gap in Roblox Game Recommendations: LLM-Based
Profile Generation and Reranking
|
cs.IR cs.AI cs.CL cs.LG
|
With the vast and dynamic user-generated content on Roblox, creating
effective game recommendations requires a deep understanding of game content.
Traditional recommendation models struggle with the inconsistent and sparse
nature of game text features such as titles and descriptions. Recent
advancements in large language models (LLMs) offer opportunities to enhance
recommendation systems by analyzing in-game text data. This paper addresses two
challenges: generating high-quality, structured text features for games without
extensive human annotation, and validating these features to ensure they
improve recommendation relevance. We propose an approach that extracts in-game
text and uses LLMs to infer attributes such as genre and gameplay objectives
from raw player interactions. Additionally, we introduce an LLM-based
re-ranking mechanism to assess the effectiveness of the generated text
features, enhancing personalization and user satisfaction. Beyond
recommendations, our approach supports applications such as user
engagement-based integrity detection, already deployed in production. This
scalable framework demonstrates the potential of in-game text understanding to
improve recommendation quality on Roblox and adapt recommendations to its
unique, user-generated ecosystem.
|
2502.06803
|
Emotion Recognition and Generation: A Comprehensive Review of Face,
Speech, and Text Modalities
|
cs.LG cs.AI cs.CV
|
Emotion recognition and generation have emerged as crucial topics in
Artificial Intelligence research, playing a significant role in enhancing
human-computer interaction within healthcare, customer service, and other
fields. Although several reviews have been conducted on emotion recognition and
generation as separate entities, many of these works are either fragmented or
limited to specific methodologies, lacking a comprehensive overview of recent
developments and trends across different modalities. In this survey, we provide
a holistic review aimed at researchers beginning their exploration in emotion
recognition and generation. We introduce the fundamental principles underlying
emotion recognition and generation across facial, vocal, and textual
modalities. This work categorises recent state-of-the-art research into
distinct technical approaches and explains the theoretical foundations and
motivations behind these methodologies, offering a clearer understanding of
their application. Moreover, we discuss evaluation metrics, comparative
analyses, and current limitations, shedding light on the challenges faced by
researchers in the field. Finally, we propose future research directions to
address these challenges and encourage further exploration into developing
robust, effective, and ethically responsible emotion recognition and generation
systems.
|
2502.06805
|
Efficient Diffusion Models: A Survey
|
cs.LG cs.GR
|
Diffusion models have emerged as powerful generative models capable of
producing high-quality contents such as images, videos, and audio,
demonstrating their potential to revolutionize digital content creation.
However, these capabilities come at the cost of their significant computational
resources and lengthy generation time, underscoring the critical need to
develop efficient techniques for practical deployment. In this survey, we
provide a systematic and comprehensive review of research on efficient
diffusion models. We organize the literature in a taxonomy consisting of three
main categories, covering distinct yet interconnected efficient diffusion model
topics from the algorithm-level, system-level, and framework perspectives,
respectively. We have also created a GitHub repository where we organize the
papers featured in this survey at
https://github.com/AIoT-MLSys-Lab/Efficient-Diffusion-Model-Survey. We hope our
survey can serve as a valuable resource to help researchers and practitioners
gain a systematic understanding of efficient diffusion model research and
inspire them to contribute to this important and exciting field.
|
2502.06806
|
Logits are All We Need to Adapt Closed Models
|
cs.LG cs.AI cs.CL
|
Many commercial Large Language Models (LLMs) are often closed-source,
limiting developers to prompt tuning for aligning content generation with
specific applications. While these models currently do not provide access to
token logits, we argue that if such access were available, it would enable more
powerful adaptation techniques beyond prompt engineering. In this paper, we
propose a token-level probability reweighting framework that, given access to
logits and a small amount of task-specific data, can effectively steer
black-box LLMs toward application-specific content generation. Our approach
views next-token prediction through the lens of supervised classification. We
show that aligning black-box LLMs with task-specific data can be formulated as
a label noise correction problem, leading to the \emph{Plugin} model -- an
autoregressive probability reweighting model that operates solely on logits. We
provide theoretical justification for why reweighting logits alone is
sufficient for task adaptation. Extensive experiments with multiple datasets,
LLMs, and reweighting models demonstrate the effectiveness of our method,
advocating for broader access to token logits in closed-source models.
|
2502.06807
|
Competitive Programming with Large Reasoning Models
|
cs.LG cs.AI cs.CL
|
We show that reinforcement learning applied to large language models (LLMs)
significantly boosts performance on complex coding and reasoning tasks.
Additionally, we compare two general-purpose reasoning models - OpenAI o1 and
an early checkpoint of o3 - with a domain-specific system, o1-ioi, which uses
hand-engineered inference strategies designed for competing in the 2024
International Olympiad in Informatics (IOI). We competed live at IOI 2024 with
o1-ioi and, using hand-crafted test-time strategies, placed in the 49th
percentile. Under relaxed competition constraints, o1-ioi achieved a gold
medal. However, when evaluating later models such as o3, we find that o3
achieves gold without hand-crafted domain-specific strategies or relaxed
constraints. Our findings show that although specialized pipelines such as
o1-ioi yield solid improvements, the scaled-up, general-purpose o3 model
surpasses those results without relying on hand-crafted inference heuristics.
Notably, o3 achieves a gold medal at the 2024 IOI and obtains a Codeforces
rating on par with elite human competitors. Overall, these results indicate
that scaling general-purpose reinforcement learning, rather than relying on
domain-specific techniques, offers a robust path toward state-of-the-art AI in
reasoning domains, such as competitive programming.
|
2502.06808
|
On the Benefits of Attribute-Driven Graph Domain Adaptation
|
cs.LG cs.AI
|
Graph Domain Adaptation (GDA) addresses a pressing challenge in cross-network
learning, particularly pertinent due to the absence of labeled data in
real-world graph datasets. Recent studies attempted to learn domain invariant
representations by eliminating structural shifts between graphs. In this work,
we show that existing methodologies have overlooked the significance of the
graph node attribute, a pivotal factor for graph domain alignment.
Specifically, we first reveal the impact of node attributes for GDA by
theoretically proving that in addition to the graph structural divergence
between the domains, the node attribute discrepancy also plays a critical role
in GDA. Moreover, we also empirically show that the attribute shift is more
substantial than the topology shift, which further underscores the importance
of node attribute alignment in GDA. Inspired by this finding, a novel
cross-channel module is developed to fuse and align both views between the
source and target graphs for GDA. Experimental results on a variety of
benchmarks verify the effectiveness of our method.
|
2502.06809
|
Neurons Speak in Ranges: Breaking Free from Discrete Neuronal
Attribution
|
cs.LG cs.AI cs.CL
|
Interpreting and controlling the internal mechanisms of large language models
(LLMs) is crucial for improving their trustworthiness and utility. Recent
efforts have primarily focused on identifying and manipulating neurons by
establishing discrete mappings between neurons and semantic concepts. However,
such mappings struggle to handle the inherent polysemanticity in LLMs, where
individual neurons encode multiple, distinct concepts. This makes precise
control challenging and complicates downstream interventions. Through an
in-depth analysis of both encoder- and decoder-based LLMs across multiple text
classification datasets, we uncover that while individual neurons encode
multiple concepts, their activation magnitudes vary across concepts in
distinct, Gaussian-like patterns. Building on this insight, we introduce
NeuronLens, a novel range-based interpretation and manipulation framework that
provides a finer view of neuron activation distributions to localize concept
attribution within a neuron. Extensive empirical evaluations demonstrate that
NeuronLens significantly reduces unintended interference, while maintaining
precise control for manipulation of targeted concepts, outperforming existing
methods.
|
2502.06810
|
Emergence of Self-Awareness in Artificial Systems: A Minimalist
Three-Layer Approach to Artificial Consciousness
|
q-bio.NC cs.AI
|
This paper proposes a minimalist three-layer model for artificial
consciousness, focusing on the emergence of self-awareness. The model comprises
a Cognitive Integration Layer, a Pattern Prediction Layer, and an Instinctive
Response Layer, interacting with Access-Oriented and Pattern-Integrated Memory
systems. Unlike brain-replication approaches, we aim to achieve minimal
self-awareness through essential elements only. Self-awareness emerges from
layer interactions and dynamic self-modeling, without initial explicit
self-programming. We detail each component's structure, function, and
implementation strategies, addressing technical feasibility. This research
offers new perspectives on consciousness emergence in artificial systems, with
potential implications for human consciousness understanding and adaptable AI
development. We conclude by discussing ethical considerations and future
research directions.
|
2502.06811
|
Aligning Human and Machine Attention for Enhanced Supervised Learning
|
cs.LG cs.AI cs.CL
|
Attention, or prioritization of certain information items over others, is a
critical element of any learning process, for both humans and machines. Given
that humans continue to outperform machines in certain learning tasks, it seems
plausible that machine performance could be enriched by aligning machine
attention with human attention mechanisms -- yet research on this topic is
sparse and has achieved only limited success. This paper proposes a new
approach to address this gap, called Human-Machine Attention Learning (HuMAL).
This approach involves reliance on data annotated by humans to reflect their
self-perceived attention during specific tasks. We evaluate several alternative
strategies for integrating such human attention data into machine learning (ML)
algorithms, using a sentiment analysis task (review data from Yelp) and a
personality-type classification task (data from myPersonality). The
best-performing HuMAL strategy significantly enhances the task performance of
fine-tuned transformer models (BERT, as well as GPT-2 and XLNET), and the
benefit is particularly pronounced under challenging conditions of imbalanced
or sparse labeled data. This research contributes to a deeper understanding of
strategies for integrating human attention into ML models and highlights the
potential of leveraging human cognition to augment ML in real-world
applications.
|
2502.06812
|
Harness Local Rewards for Global Benefits: Effective Text-to-Video
Generation Alignment with Patch-level Reward Models
|
cs.LG cs.GR
|
The emergence of diffusion models (DMs) has significantly improved the
quality of text-to-video generation models (VGMs). However, current VGM
optimization primarily emphasizes the global quality of videos, overlooking
localized errors, which leads to suboptimal generation capabilities. To address
this issue, we propose a post-training strategy for VGMs, HALO, which
explicitly incorporates local feedback from a patch reward model, providing
detailed and comprehensive training signals with the video reward model for
advanced VGM optimization. To develop an effective patch reward model, we
distill GPT-4o to continuously train our video reward model, which enhances
training efficiency and ensures consistency between video and patch reward
distributions. Furthermore, to harmoniously integrate patch rewards into VGM
optimization, we introduce a granular DPO (Gran-DPO) algorithm for DMs,
allowing collaborative use of both patch and video rewards during the
optimization process. Experimental results indicate that our patch reward model
aligns well with human annotations and HALO substantially outperforms the
baselines across two evaluation methods. Further experiments quantitatively
prove the existence of patch defects, and our proposed method could effectively
alleviate this issue.
|
2502.06813
|
Policy Guided Tree Search for Enhanced LLM Reasoning
|
cs.LG cs.AI
|
Despite their remarkable capabilities, large language models often struggle
with tasks requiring complex reasoning and planning. While existing approaches
like Chain-of-Thought prompting and tree search techniques show promise, they
are limited by their reliance on predefined heuristics and computationally
expensive exploration strategies. We propose Policy-Guided Tree Search (PGTS),
a framework that combines reinforcement learning with structured tree
exploration to efficiently navigate reasoning paths. Our key innovation is a
learned policy that dynamically decides between expanding, branching,
backtracking, or terminating exploration, eliminating the need for manual
heuristics or exhaustive search. Experiments across mathematical reasoning,
logical deduction, and planning benchmarks demonstrate that PGTS achieves
superior reasoning performance while significantly reducing computational costs
compared to existing methods. These results establish PGTS as a scalable and
effective solution for tackling complex reasoning tasks with LLMs.
|
2502.06814
|
Diffusion Instruction Tuning
|
cs.LG cs.AI cs.GR
|
We introduce Lavender, a simple supervised fine-tuning (SFT) method that
boosts the performance of advanced vision-language models (VLMs) by leveraging
state-of-the-art image generation models such as Stable Diffusion.
Specifically, Lavender aligns the text-vision attention in the VLM transformer
with the equivalent used by Stable Diffusion during SFT, instead of adapting
separate encoders. This alignment enriches the model's visual understanding and
significantly boosts performance across in- and out-of-distribution tasks.
Lavender requires just 0.13 million training examples, 2.5% of typical
large-scale SFT datasets, and fine-tunes on standard hardware (8 GPUs) in a
single day. It consistently improves state-of-the-art open-source multimodal
LLMs (e.g., Llama-3.2-11B, MiniCPM-Llama3-v2.5), achieving up to 30% gains and
a 68% boost on challenging out-of-distribution medical QA tasks. By efficiently
transferring the visual expertise of image generators with minimal supervision,
Lavender offers a scalable solution for more accurate vision-language systems.
All code, training data, and models will be shared at
https://astrazeneca.github.io/vlm/.
|
2502.06815
|
Honegumi: An Interface for Accelerating the Adoption of Bayesian
Optimization in the Experimental Sciences
|
cs.LG cond-mat.mtrl-sci
|
Bayesian optimization (BO) has emerged as a powerful tool for guiding
experimental design and decision-making in various scientific fields, including
materials science, chemistry, and biology. However, despite its growing
popularity, the complexity of existing BO libraries and the steep learning
curve associated with them can deter researchers who are not well-versed in
machine learning or programming. To address this barrier, we introduce
Honegumi, a user-friendly, interactive tool designed to simplify the process of
creating advanced Bayesian optimization scripts. Honegumi offers a dynamic
selection grid that allows users to configure key parameters of their
optimization tasks, generating ready-to-use, unit-tested Python scripts
tailored to their specific needs. Accompanying the interface is a comprehensive
suite of tutorials that provide both conceptual and practical guidance,
bridging the gap between theoretical understanding and practical
implementation. Built on top of the Ax platform, Honegumi leverages the power
of existing state-of-the-art libraries while restructuring the user experience
to make advanced BO techniques more accessible to experimental researchers. By
lowering the barrier to entry and providing educational resources, Honegumi
aims to accelerate the adoption of advanced Bayesian optimization methods
across various domains.
|
2502.06816
|
DeepCell: Multiview Representation Learning for Post-Mapping Netlists
|
cs.LG cs.AI
|
Representation learning for post-mapping (PM) netlists is a critical
challenge in Electronic Design Automation (EDA), driven by the diverse and
complex nature of modern circuit designs. Existing approaches focus on
intermediate representations like And-Inverter Graphs (AIGs), limiting their
applicability to post-synthesis stages. We introduce DeepCell, a multiview
representation learning framework that integrates structural and functional
insights from both PM netlists and AIGs to learn rich, generalizable
embeddings. At its core, DeepCell employs the novel Mask Circuit Modeling (MCM)
mechanism, which refines PM netlist representations in a self-supervised manner
using pretrained AIG encoders. DeepCell sets a new benchmark in PM netlist
representation, outperforming existing methods in predictive accuracy and
reconstruction fidelity. To validate its efficacy, we apply DeepCell to
functional Engineering Change Orders (ECO), achieving significant reductions in
patch generation costs and runtime while improving patch quality.
|
2502.06817
|
Diffusion-empowered AutoPrompt MedSAM
|
eess.IV cs.GR cs.LG
|
MedSAM, a medical foundation model derived from the SAM architecture, has
demonstrated notable success across diverse medical domains. However, its
clinical application faces two major challenges: the dependency on
labor-intensive manual prompt generation, which imposes a significant burden on
clinicians, and the absence of semantic labeling in the generated segmentation
masks for organs or lesions, limiting its practicality for non-expert users. To
address these limitations, we propose AutoMedSAM, an end-to-end framework
derived from SAM, designed to enhance usability and segmentation performance.
AutoMedSAM retains MedSAM's image encoder and mask decoder structure while
introducing a novel diffusion-based class prompt encoder. The diffusion-based
encoder employs a dual-decoder structure to collaboratively generate prompt
embeddings guided by sparse and dense prompt definitions. These embeddings
enhance the model's ability to understand and process clinical imagery
autonomously. With this encoder, AutoMedSAM leverages class prompts to embed
semantic information into the model's predictions, transforming MedSAM's
semi-automated pipeline into a fully automated workflow. Furthermore,
AutoMedSAM employs an uncertainty-aware joint optimization strategy during
training to effectively inherit MedSAM's pre-trained knowledge while improving
generalization by integrating multiple loss functions. Experimental results
across diverse datasets demonstrate that AutoMedSAM achieves superior
performance while broadening its applicability to both clinical settings and
non-expert users. Code is available at
https://github.com/HP-ML/AutoPromptMedSAM.git.
|
2502.06818
|
Globality Strikes Back: Rethinking the Global Knowledge of CLIP in
Training-Free Open-Vocabulary Semantic Segmentation
|
cs.LG
|
Recent works modify CLIP to perform open-vocabulary semantic segmentation in
a training-free manner (TF-OVSS). In CLIP, patch-wise image representations
mainly encode homogeneous image-level properties and thus are not
discriminative enough, hindering its application to the dense prediction task.
Previous works make image features more distinct across patches, through making
each patch mainly attend to itself or the neighboring patches within a narrow
local window. However, with their modifications, the ability of CLIP to
aggregate global context information, which is known to be useful for
distinguishing confusing categories, is largely weakened. In this paper, we
propose a new method named GCLIP, which mines the beneficial global knowledge
of CLIP to facilitate the TF-OVSS task. Firstly, we aim to equip the last-block
attention with image-level properties while not introducing homogeneous
attention patterns across patches. In GCLIP, we merge the attention from the
global token emerging blocks with the Query-Query attention to realize this
goal. Secondly, we aim to make the Value embeddings of the last-block attention
module more distinct and semantically correlated. To realize this, we design a
novel channel suppression strategy. As the representation of each patch is
finally determined by the attention weights and the Value embeddings, our
method can generate more discriminative patch-level image features while
absorbing global context information. Extensive experiments on five standard
benchmarks demonstrate that our method consistently outperforms previous
state-of-the-art methods.
|
2502.06819
|
Functional 3D Scene Synthesis through Human-Scene Optimization
|
cs.LG cs.GR
|
This paper presents a novel generative approach that outputs 3D indoor
environments solely from a textual description of the scene. Current methods
often treat scene synthesis as a mere layout prediction task, leading to rooms
with overlapping objects or overly structured scenes, with limited
consideration of the practical usability of the generated environment. Instead,
our approach is based on a simple, but effective principle: we condition scene
synthesis to generate rooms that are usable by humans. This principle is
implemented by synthesizing 3D humans that interact with the objects composing
the scene. If this human-centric scene generation is viable, the room layout is
functional and it leads to a more coherent 3D structure. To this end, we
propose a novel method for functional 3D scene synthesis, which consists of
reasoning, 3D assembling, and optimization. We regard text-guided 3D synthesis
as a reasoning process by generating a scene graph via a graph diffusion
network. Considering object functional co-occurrence, a new strategy is
designed to better accommodate human-object interaction and avoidance,
achieving human-aware 3D scene optimization. We conduct both qualitative and
quantitative experiments to validate the effectiveness of our method in
generating coherent 3D scene synthesis results.
|
2502.06820
|
LoCA: Location-Aware Cosine Adaptation for Parameter-Efficient
Fine-Tuning
|
cs.LG cs.AI
|
Low-rank adaptation (LoRA) has become a prevalent method for adapting
pre-trained large language models to downstream tasks. However, the simple
low-rank decomposition form may constrain the hypothesis space. To address this
limitation, we introduce Location-aware Cosine Adaptation (LoCA), a novel
frequency-domain parameter-efficient fine-tuning method based on inverse
Discrete Cosine Transform (iDCT) with selective locations of learnable
components. We begin with a comprehensive theoretical comparison between
frequency-domain and low-rank decompositions for fine-tuning pre-trained large
models. Our analysis reveals that frequency-domain approximation with carefully
selected frequency components can surpass the expressivity of traditional
low-rank-based methods. Furthermore, we demonstrate that iDCT offers a more
efficient implementation compared to inverse Discrete Fourier Transform (iDFT),
allowing for better selection and tuning of frequency components while
maintaining equivalent expressivity to the optimal iDFT-based adaptation. By
employing finite-difference approximation to estimate gradients for discrete
locations of learnable coefficients on the DCT spectrum, LoCA dynamically
selects the most informative frequency components during training. Experiments
on diverse language and vision fine-tuning tasks demonstrate that LoCA offers
enhanced parameter efficiency while maintaining computational feasibility
comparable to low-rank-based methods.
|
2502.06822
|
DiffListener: Discrete Diffusion Model for Listener Generation
|
cs.LG cs.CL cs.GR
|
The listener head generation (LHG) task aims to generate natural nonverbal
listener responses based on the speaker's multimodal cues. Prior work either
relies on limited modalities (e.g., audio and facial information) or employs
autoregressive approaches, which suffer from limitations such as accumulating
prediction errors. To address these limitations, we propose DiffListener, a
discrete diffusion based approach for non-autoregressive listener head
generation. Our model takes the speaker's facial information, audio, and text
as inputs, additionally incorporating facial differential information to
represent the temporal dynamics of expressions and movements. With this
explicit modeling of facial dynamics, DiffListener can generate coherent
reaction sequences in a non-autoregressive manner. Through comprehensive
experiments, DiffListener demonstrates state-of-the-art performance in both
quantitative and qualitative evaluations. The user study shows that
DiffListener generates natural context-aware listener reactions that are well
synchronized with the speaker. The code and demo videos are available at
https://siyeoljung.github.io/DiffListener
|
2502.06823
|
CTR-Driven Advertising Image Generation with Multimodal Large Language
Models
|
cs.LG cs.CV cs.GR cs.IR
|
In web data, advertising images are crucial for capturing user attention and
improving advertising effectiveness. Most existing methods for generating
product backgrounds focus primarily on aesthetic quality, which may fail to
achieve satisfactory online performance. To address this limitation, we explore
the use of Multimodal Large Language Models (MLLMs) for generating advertising
images by optimizing for Click-Through Rate (CTR) as the primary objective.
Firstly, we build targeted pre-training tasks, and leverage a large-scale
e-commerce multimodal dataset to equip MLLMs with initial capabilities for
advertising image generation tasks. To further improve the CTR of generated
images, we propose a novel reward model to fine-tune pre-trained MLLMs through
Reinforcement Learning (RL), which can jointly utilize multimodal features and
accurately reflect user click preferences. Meanwhile, a product-centric
preference optimization strategy is developed to ensure that the generated
background content aligns with the product characteristics after fine-tuning,
enhancing the overall relevance and effectiveness of the advertising images.
Extensive experiments have demonstrated that our method achieves
state-of-the-art performance in both online and offline metrics. Our code and
pre-trained models are publicly available at: https://github.com/Chenguoz/CAIG.
|
2502.06824
|
Neural Network-based Vehicular Channel Estimation Performance: Effect of
Noise in the Training Set
|
cs.LG cs.AI
|
Vehicular communication systems face significant challenges due to high
mobility and rapidly changing environments, which affect the channel over which
the signals travel. To address these challenges, neural network (NN)-based
channel estimation methods have been suggested. These methods are primarily
trained on high signal-to-noise ratio (SNR) with the assumption that training a
NN in less noisy conditions can result in good generalisation. This study
examines the effectiveness of training NN-based channel estimators on mixed SNR
datasets compared to training solely on high SNR datasets, as seen in several
related works. Estimators evaluated in this work include an architecture that
uses convolutional layers and self-attention mechanisms; a method that employs
temporal convolutional networks and data pilot-aided estimation; two methods
that combine classical methods with multilayer perceptrons; and the current
state-of-the-art model that combines Long-Short-Term Memory networks with data
pilot-aided and temporal averaging methods as post-processing. Our results
indicate that using only high SNR data for training is not always optimal, and
the SNR range in the training dataset should be treated as a hyperparameter
that can be adjusted for better performance. This is illustrated by the better
performance of some models in low SNR conditions when trained on the mixed SNR
dataset, as opposed to when trained exclusively on high SNR data.
|
2502.06825
|
RLOMM: An Efficient and Robust Online Map Matching Framework with
Reinforcement Learning
|
cs.LG cs.DB
|
Online map matching is a fundamental problem in location-based services,
aiming to incrementally match trajectory data step-by-step onto a road network.
However, existing methods fail to meet the needs for efficiency, robustness,
and accuracy required by large-scale online applications, making this task
still a challenging problem. This paper introduces a novel framework that
achieves high accuracy and efficient matching while ensuring robustness in
handling diverse scenarios. To improve efficiency, we begin by modeling the
online map matching problem as an Online Markov Decision Process (OMDP) based
on its inherent characteristics. This approach helps efficiently merge
historical and real-time data, reducing unnecessary calculations. Next, to
enhance the model's robustness, we design a reinforcement learning method,
enabling robust handling of real-time data from dynamically changing
environments. In particular, we propose a novel model learning process and a
comprehensive reward function, allowing the model to make reasonable current
matches from a future-oriented perspective, and to continuously update and
optimize during the decision-making process based on feedback. Lastly, to
address the heterogeneity between trajectories and roads, we design distinct
graph structures, facilitating efficient representation learning through graph
and recurrent neural networks. To further align trajectory and road data, we
introduce contrastive learning to decrease their distance in the latent space,
thereby promoting effective integration of the two. Extensive evaluations on
three real-world datasets confirm that our method significantly outperforms
existing state-of-the-art solutions in terms of accuracy, efficiency and
robustness.
|
2502.06826
|
Transferring Graph Neural Networks for Soft Sensor Modeling using
Process Topologies
|
cs.LG cs.AI
|
Data-driven soft sensors help in process operations by providing real-time
estimates of otherwise hard-to-measure process quantities, e.g., viscosities
or product concentrations. Currently, soft sensors need to be developed
individually per plant. Using transfer learning, machine learning-based soft
sensors could be reused and fine-tuned across plants and applications. However,
transferring data-driven soft sensor models is in practice often not possible,
because the fixed input structure of standard soft sensor models prohibits
transfer if, e.g., the sensor information is not identical in all plants. We
propose a topology-aware graph neural network approach for transfer learning of
soft sensor models across multiple plants. In our method, plants are modeled as
graphs: Unit operations are nodes, streams are edges, and sensors are embedded
as attributes. Our approach brings two advantages for transfer learning: First,
we not only include sensor data but also crucial information on the plant
topology. Second, the graph neural network algorithm is flexible with respect
to its sensor inputs. This allows us to model data from different plants with
different sensor networks. We test the transfer learning capabilities of our
modeling approach on ammonia synthesis loops with different process topologies.
We build a soft sensor predicting the ammonia concentration in the product.
After training on data from one process, we successfully transfer our soft
sensor model to a previously unseen process with a different topology. Our
approach promises to extend data-driven soft sensors to settings that leverage
data from multiple plants.
|
2502.06827
|
Learning to Synthesize Compatible Fashion Items Using Semantic Alignment
and Collocation Classification: An Outfit Generation Framework
|
cs.LG cs.AI cs.GR
|
The field of fashion compatibility learning has attracted great attention
from both the academic and industrial communities in recent years. Many studies
have been carried out for fashion compatibility prediction, collocated outfit
recommendation, artificial intelligence (AI)-enabled compatible fashion design,
and related topics. In particular, AI-enabled compatible fashion design can be
used to synthesize compatible fashion items or outfits in order to improve the
design experience for designers or the efficacy of recommendations for
customers. However, previous generative models for collocated fashion synthesis
have generally focused on the image-to-image translation between fashion items
of upper and lower clothing. In this paper, we propose a novel outfit
generation framework, i.e., OutfitGAN, with the aim of synthesizing a set of
complementary items to compose an entire outfit, given one extant fashion item
and reference masks of target synthesized items. OutfitGAN includes a semantic
alignment module, which is responsible for characterizing the mapping
correspondence between the existing fashion items and the synthesized ones, to
improve the quality of the synthesized images, and a collocation classification
module, which is used to improve the compatibility of a synthesized outfit. In
order to evaluate the performance of our proposed models, we built a
large-scale dataset consisting of 20,000 fashion outfits. Extensive
experimental results on this dataset show that our OutfitGAN can synthesize
photo-realistic outfits and outperform state-of-the-art methods in terms of
similarity, authenticity and compatibility measurements.
|
2502.06828
|
Fine-Tuning Strategies for Continual Online EEG Motor Imagery Decoding:
Insights from a Large-Scale Longitudinal Study
|
cs.LG cs.AI
|
This study investigates continual fine-tuning strategies for deep learning in
online longitudinal electroencephalography (EEG) motor imagery (MI) decoding
within a causal setting involving a large user group and multiple sessions per
participant. We are the first to explore such strategies across a large user
group, as longitudinal adaptation is typically studied in the single-subject
setting with a single adaptation strategy, which limits the ability to
generalize findings. First, we examine the impact of different fine-tuning
approaches on decoder performance and stability. Building on this, we integrate
online test-time adaptation (OTTA) to adapt the model during deployment,
complementing the effects of prior fine-tuning. Our findings demonstrate that
fine-tuning that successively builds on prior subject-specific information
improves both performance and stability, while OTTA effectively adapts the
model to evolving data distributions across consecutive sessions, enabling
calibration-free operation. These results offer valuable insights and
recommendations for future research in longitudinal online MI decoding and
highlight the importance of combining domain adaptation strategies for
improving BCI performance in real-world applications. Clinical Relevance: Our
investigation enables more stable and efficient long-term motor imagery
decoding, which is critical for neurorehabilitation and assistive technologies.
|
2502.06829
|
Convolution-Based Converter : A Weak-Prior Approach For Modeling
Stochastic Processes Based On Conditional Density Estimation
|
cs.LG cs.AI
|
In this paper, a Convolution-Based Converter (CBC) is proposed to develop a
methodology for removing the strong or fixed priors in estimating the
probability distribution of targets based on observations in the stochastic
process. Traditional approaches, e.g., Markov-based and Gaussian process-based
methods, typically leverage observations to estimate targets based on strong or
fixed priors (such as Markov properties or Gaussian prior). However, the
effectiveness of these methods depends on how well their prior assumptions
align with the characteristics of the problem. When the assumed priors are not
satisfied, these approaches may perform poorly or even become unusable. To
overcome the above limitation, we introduce the Convolution-Based converter
(CBC), which implicitly estimates the conditional probability distribution of
targets without strong or fixed priors, and directly outputs the expected
trajectory of the stochastic process that satisfies the constraints from
observations. This approach reduces the dependence on priors, enhancing
flexibility and adaptability in modeling stochastic processes when addressing
different problems. Experimental results demonstrate that our method
outperforms existing baselines across multiple metrics.
|
2502.06830
|
OrderFusion: Encoding Orderbook for Probabilistic Intraday Price
Prediction
|
q-fin.CP cs.AI cs.LG
|
Efficient and reliable probabilistic prediction of intraday electricity
prices is essential to manage market uncertainties and support robust trading
strategies. However, current methods often suffer from parameter
inefficiencies, as they fail to fully exploit the potential of modeling
interdependencies between bids and offers in the orderbook, requiring a large
number of parameters for representation learning. Furthermore, these methods
face the quantile crossing issue, where upper quantiles fall below the lower
quantiles, resulting in unreliable probabilistic predictions. To address these
two challenges, we propose an encoding method called OrderFusion and design a
hierarchical multi-quantile head. OrderFusion encodes the orderbook into a
2.5D representation, which is processed by a tailored jump cross-attention
backbone to capture the interdependencies of bids and offers, enabling
parameter-efficient learning. The head sets the median quantile as an anchor
and predicts multiple quantiles hierarchically, ensuring reliability by
enforcing monotonicity between quantiles through non-negative functions.
Extensive experiments and ablation studies are conducted on four price indices:
60-min ID3, 60-min ID1, 15-min ID3, and 15-min ID1 using the German orderbook
over three years to ensure a fair evaluation. The results confirm that our
design choices improve overall performance, offering a parameter-efficient and
reliable solution for probabilistic intraday price prediction.
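The crossing-free guarantee of the hierarchical multi-quantile head can be sketched directly: anchor at the median and accumulate strictly positive increments outward. This is a hedged numpy illustration; the choice of softplus and the number of quantile levels are assumptions, not the paper's exact design.

```python
import numpy as np

def softplus(x):
    """Smooth, strictly positive mapping of unconstrained values."""
    return np.log1p(np.exp(x))

def hierarchical_quantiles(median, raw_upper, raw_lower):
    """Build monotone quantile predictions around a median anchor.

    median:    (batch,)   predicted 50% quantile
    raw_upper: (batch, k) unconstrained offsets for quantiles above the median
    raw_lower: (batch, k) unconstrained offsets for quantiles below the median

    Accumulating strictly positive softplus increments guarantees
    q_low <= median <= q_high, so quantile crossing cannot occur.
    """
    upper = median[:, None] + np.cumsum(softplus(raw_upper), axis=1)
    lower = median[:, None] - np.cumsum(softplus(raw_lower), axis=1)
    # return quantiles ordered from lowest to highest
    return np.concatenate([lower[:, ::-1], median[:, None], upper], axis=1)
```

Because each offset is passed through a non-negative function before accumulation, monotonicity holds by construction rather than by penalty.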
|
2502.06831
|
No Location Left Behind: Measuring and Improving the Fairness of
Implicit Representations for Earth Data
|
cs.LG cs.AI
|
Implicit neural representations (INRs) exhibit growing promise in addressing
Earth representation challenges, ranging from emissions monitoring to climate
modeling. However, existing methods disproportionately prioritize global
average performance, whereas practitioners require fine-grained insights to
understand biases and variations in these models. To bridge this gap, we
introduce FAIR-Earth: a first-of-its-kind dataset explicitly crafted to examine
and challenge inequities in Earth representations. FAIR-Earth comprises various
high-resolution Earth signals and uniquely aggregates extensive metadata along
stratifications like landmass size and population density to assess the
fairness of models. Evaluating state-of-the-art INRs across the various
modalities of FAIR-Earth, we uncover striking performance disparities. Certain
subgroups, especially those associated with high-frequency signals (e.g.,
islands, coastlines), are consistently poorly modeled by existing methods. In
response, we propose spherical wavelet encodings, building on previous spatial
encoding research. Leveraging the multi-resolution capabilities of wavelets,
our encodings yield consistent performance over various scales and locations,
offering more accurate and robust representations of the biased subgroups.
These open-source contributions represent a crucial step towards the equitable
assessment and deployment of Earth INRs.
|
2502.06832
|
Optimizing Robustness and Accuracy in Mixture of Experts: A Dual-Model
Approach
|
cs.LG cs.AI
|
Mixture of Experts (MoE) models have shown remarkable success in leveraging
specialized expert networks for complex machine learning tasks. However, their
susceptibility to adversarial attacks presents a critical challenge for
deployment in robust applications. This paper addresses the critical question
of how to incorporate robustness into MoEs while maintaining high natural
accuracy. We begin by analyzing the vulnerability of MoE components, finding
that expert networks are notably more susceptible to adversarial attacks than
the router. Based on this insight, we propose a targeted robust training
technique that integrates a novel loss function to enhance the adversarial
robustness of MoE, requiring only the robustification of one additional expert
without compromising training or inference efficiency. Building on this, we
introduce a dual-model strategy that linearly combines a standard MoE model
with our robustified MoE model using a smoothing parameter. This approach
allows for flexible control over the robustness-accuracy trade-off. We further
provide theoretical foundations by deriving certified robustness bounds for
both the single MoE and the dual-model. To push the boundaries of robustness
and accuracy, we propose a novel joint training strategy JTDMoE for the
dual-model. This joint training enhances both robustness and accuracy beyond
what is achievable with separate models. Experimental results on CIFAR-10 and
TinyImageNet datasets using ResNet18 and Vision Transformer (ViT) architectures
demonstrate the effectiveness of our proposed methods.
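The dual-model strategy described above is a convex combination controlled by a smoothing parameter. A minimal sketch, assuming the blend acts on model logits (the exact combination point is an assumption not fixed by this summary):

```python
import numpy as np

def dual_model_logits(logits_standard, logits_robust, alpha):
    """Linearly blend a standard MoE's logits with a robustified MoE's logits.

    alpha in [0, 1] controls the robustness-accuracy trade-off:
    alpha = 0 recovers the standard model, alpha = 1 the robust one.
    """
    assert 0.0 <= alpha <= 1.0
    return (1.0 - alpha) * logits_standard + alpha * logits_robust
```

Intermediate alpha values give the flexible, deployment-time control over the trade-off that the abstract describes.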
|
2502.06833
|
Entropy Adaptive Decoding: Dynamic Model Switching for Efficient
Inference
|
cs.LG cs.AI cs.CL
|
We present Entropy Adaptive Decoding (EAD), a novel approach for efficient
language model inference that dynamically switches between different-sized
models based on prediction uncertainty. By monitoring rolling entropy in model
logit distributions, our method identifies text regions where a smaller model
suffices and switches to a larger model only when prediction uncertainty
exceeds a threshold. Unlike speculative decoding approaches that maintain
perfect output fidelity through verification, EAD accepts controlled output
divergence in exchange for computational efficiency. Our experiments on the
MATH benchmark demonstrate remarkable efficiency gains across different model
families. Using the LLaMA family, we maintain 96.7\% of the 11B model's
performance (50.4\% vs 52.1\%) while using it for only 43\% of tokens,
decreasing computational cost by 41.5\%. These gains become more pronounced
with larger size differentials in the Qwen family, where we achieve 92.9\% of
the 14B model's performance (74.3\% vs 80.0\%) while using it for just 25\% of
tokens, decreasing computational cost by 67\%. The consistency of these results
across model pairs suggests that language model computation can be
significantly optimized by selectively deploying model capacity based on local
generation complexity. Our findings indicate that current approaches to model
inference may be unnecessarily conservative in their pursuit of perfect output
fidelity, and that accepting minor performance trade-offs can enable dramatic
reductions in computational costs.
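The switching rule can be sketched in a few lines. The illustration below assumes a rolling mean of per-token entropies compared against a fixed threshold; the window size and threshold values are illustrative, not the paper's settings.

```python
import numpy as np
from collections import deque

def token_entropy(logits):
    """Shannon entropy (in nats) of the next-token distribution."""
    z = logits - logits.max()                 # numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return -np.sum(p * np.log(p + 1e-12))

class EntropySwitcher:
    """Use the small model while rolling entropy stays low; escalate to
    the large model when local prediction uncertainty spikes."""
    def __init__(self, threshold, window=8):
        self.threshold = threshold
        self.history = deque(maxlen=window)

    def choose(self, logits):
        self.history.append(token_entropy(logits))
        rolling = sum(self.history) / len(self.history)
        return "large" if rolling > self.threshold else "small"
```

Confident (low-entropy) regions are served by the small model; a run of flat, high-entropy distributions pushes the rolling mean over the threshold and triggers the larger model.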
|
2502.06834
|
A Unified Knowledge-Distillation and Semi-Supervised Learning Framework
to Improve Industrial Ads Delivery Systems
|
cs.LG cs.AI
|
Industrial ads ranking systems conventionally rely on labeled impression
data, which leads to challenges such as overfitting, slower incremental gain
from model scaling, and biases due to discrepancies between training and
serving data. To overcome these issues, we propose a Unified framework for
Knowledge-Distillation and Semi-supervised Learning (UKDSL) for ads ranking,
empowering the training of models on significantly larger and more diverse
datasets, thereby reducing overfitting and mitigating training-serving data
discrepancies. We provide detailed formal analysis and numerical simulations on
the inherent miscalibration and prediction bias of multi-stage ranking systems,
and show empirical evidence of the proposed framework's capability to mitigate
those. Compared to prior work, UKDSL enables models to learn from a much
larger set of unlabeled data, hence improving performance while remaining
computationally efficient. Finally, we report the successful deployment of
UKDSL in an industrial setting across various ranking models, serving users at
multi-billion scale across various surfaces, geographic locations, and
clients, and optimizing for various events, which to the best of our knowledge
is the first of its kind in terms of the scale and efficiency at which it
operates.
|
2502.06835
|
Reinforcement Learning on AYA Dyads to Enhance Medication Adherence
|
cs.LG
|
Medication adherence is critical for the recovery of adolescents and young
adults (AYAs) who have undergone hematopoietic cell transplantation (HCT).
However, maintaining adherence is challenging for AYAs after hospital
discharge, who experience both individual (e.g. physical and emotional
symptoms) and interpersonal barriers (e.g., relational difficulties with their
care partner, who is often involved in medication management). To optimize the
effectiveness of a three-component digital intervention targeting both members
of the dyad as well as their relationship, we propose a novel Multi-Agent
Reinforcement Learning (MARL) approach to personalize the delivery of
interventions. By incorporating domain knowledge, the MARL framework, where
each agent is responsible for the delivery of one intervention component,
allows for faster learning compared with a flattened agent. Evaluation using a
dyadic simulator environment, based on real clinical data, shows a significant
improvement in medication adherence (approximately 3%) compared to purely
random intervention delivery. The effectiveness of this approach will be
further evaluated in an upcoming trial.
|
2502.06836
|
CAST: Cross Attention based multimodal fusion of Structure and Text for
materials property prediction
|
cs.LG cond-mat.mtrl-sci cs.AI
|
Recent advancements in AI have revolutionized property prediction in
materials science and accelerated material discovery. Graph neural networks
(GNNs) stand out due to their ability to represent crystal structures as
graphs, effectively capturing local interactions and delivering superior
predictions. However, these methods often lose critical global information,
such as crystal systems and repetitive unit connectivity. To address this, we
propose CAST, a cross-attention-based multimodal fusion model that integrates
graph and text modalities to preserve essential material information. CAST
combines node- and token-level features using cross-attention mechanisms,
surpassing previous approaches reliant on material-level embeddings like graph
mean-pooling or [CLS] tokens. A masked node prediction pretraining strategy
further enhances atomic-level information integration. Our method achieved up
to 22.9\% improvement in property prediction across four crystal properties
including band gap compared to methods like CrysMMNet and MultiMat. Pretraining
was key to aligning node and text embeddings, with attention maps confirming
its effectiveness in capturing relationships between nodes and tokens. This
study highlights the potential of multimodal learning in materials science,
paving the way for more robust predictive models that incorporate both local
and global information.
|
2502.06837
|
Comparison of CNN-based deep learning architectures for unsteady CFD
acceleration on small datasets
|
cs.LG physics.flu-dyn
|
CFD acceleration for virtual nuclear reactors or digital twin technology is a
primary goal in the nuclear industry. This study compares advanced
convolutional neural network (CNN) architectures for accelerating unsteady
computational fluid dynamics (CFD) simulations using small datasets based on a
challenging natural convection flow dataset. Advanced architectures, such as
autoencoders, UNet, and ConvLSTM-UNet, were evaluated under identical
conditions to determine their predictive accuracy and robustness in
autoregressive time-series predictions. ConvLSTM-UNet consistently outperformed
other models, particularly in difference value calculation, achieving lower
maximum errors and stable residuals. However, error accumulation remains a
challenge, limiting reliable predictions to approximately 10 timesteps. This
highlights the need for enhanced strategies to improve long-term prediction
stability. The novelty of this work lies in its fair comparison of
state-of-the-art CNN models within the RePIT framework, demonstrating their
potential for accelerating CFD simulations while identifying limitations under
small data conditions. Future research will focus on exploring alternative
models, such as graph neural networks and implicit neural representations.
These efforts aim to develop a robust hybrid approach for long-term unsteady
CFD acceleration, contributing to practical applications in virtual nuclear
reactors.
|
2502.06838
|
TorchResist: Open-Source Differentiable Resist Simulator
|
cs.LG
|
Recent decades have witnessed remarkable advancements in artificial
intelligence (AI), including large language models (LLMs), image and video
generative models, and embodied AI systems. These advancements have led to an
explosive increase in the demand for computational power, challenging the
limits of Moore's Law. Optical lithography, a critical technology in
semiconductor manufacturing, faces significant challenges due to its high
costs. To address this, various lithography simulators have been developed.
However, many of these simulators are limited by their inadequate photoresist
modeling capabilities. This paper presents TorchResist, an open-source,
differentiable photoresist simulator. TorchResist employs an analytical approach
to model the photoresist process, functioning as a white-box system with at
most twenty interpretable parameters. Leveraging modern differentiable
programming techniques and parallel computing on GPUs, TorchResist enables
seamless co-optimization with other tools across multiple related tasks. Our
experimental results demonstrate that TorchResist achieves superior accuracy
and efficiency compared to existing solutions. The source code is publicly
available.
|
2502.06839
|
A Hybrid Model for Weakly-Supervised Speech Dereverberation
|
eess.AS cs.AI cs.SD eess.SP
|
This paper introduces a new training strategy to improve speech
dereverberation systems using minimal acoustic information and reverberant
(wet) speech. Most existing algorithms rely on paired dry/wet data, which is
difficult to obtain, or on target metrics that may not adequately capture
reverberation characteristics and can lead to poor results on non-target
metrics. Our approach uses limited acoustic information, like the reverberation
time (RT60), to train a dereverberation system. The system's output is
resynthesized using a generated room impulse response and compared with the
original reverberant speech, providing a novel reverberation matching loss
that replaces the standard target metrics. During inference, only the trained
dereverberation model is used. Experimental results demonstrate that our method
achieves more consistent performance across various objective metrics used in
speech dereverberation than the state-of-the-art.
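The reverberation matching idea can be sketched as follows. Both the exponential-decay RIR generator and the plain L2 comparison below are assumptions standing in for components the summary leaves unspecified.

```python
import numpy as np

def synthetic_rir(rt60, sr=16000, length=4000, seed=0):
    """Toy room impulse response: white noise under an exponential decay
    calibrated so the envelope drops ~60 dB over rt60 seconds."""
    rng = np.random.default_rng(seed)
    t = np.arange(length) / sr
    decay = np.exp(-6.9 * t / rt60)
    return rng.standard_normal(length) * decay

def reverberation_matching_loss(dry_estimate, wet_speech, rt60):
    """Re-reverberate the dry estimate with a generated RIR and compare it
    to the observed wet speech, so no paired dry target is needed."""
    rir = synthetic_rir(rt60)
    resynth = np.convolve(dry_estimate, rir)[: len(wet_speech)]
    return np.mean((resynth - wet_speech) ** 2)
```

Only limited acoustic information (here, RT60) enters the loss; the dereverberation model is supervised entirely through the resynthesized signal.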
|
2502.06842
|
Integrating Generative Artificial Intelligence in ADRD: A Framework for
Streamlining Diagnosis and Care in Neurodegenerative Diseases
|
cs.CY cs.AI
|
Healthcare systems are struggling to meet the growing demand for neurological
care, with challenges particularly acute in Alzheimer's disease and related
dementias (ADRD). While artificial intelligence research has often focused on
identifying patterns beyond human perception, implementing such predictive
capabilities remains challenging as clinicians cannot readily verify insights
they cannot themselves detect. We propose that large language models (LLMs)
offer more immediately practical applications by enhancing clinicians'
capabilities in three critical areas: comprehensive data collection,
interpretation of complex clinical information, and timely application of
relevant medical knowledge. These challenges stem from limited time for proper
diagnosis, growing data complexity, and an overwhelming volume of medical
literature that exceeds any clinician's capacity to fully master. We present a
framework for responsible AI integration that leverages LLMs' ability to
communicate effectively with both patients and providers while maintaining
human oversight. This approach prioritizes standardized, high-quality data
collection to enable a system that learns from every patient encounter while
incorporating the latest clinical evidence, continuously improving care
delivery. We begin to address implementation challenges and initiate important
discussions around ethical considerations and governance needs. While developed
for ADRD, this roadmap provides principles for responsible AI integration
across neurology and other medical specialties, with potential to improve
diagnostic accuracy, reduce care disparities, and advance clinical knowledge
through a learning healthcare system.
|
2502.06843
|
Vision-Integrated LLMs for Autonomous Driving Assistance : Human
Performance Comparison and Trust Evaluation
|
cs.CV cs.AI cs.HC
|
Traditional autonomous driving systems often struggle with reasoning in
complex, unexpected scenarios due to limited comprehension of spatial
relationships. In response, this study introduces a Large Language Model
(LLM)-based Autonomous Driving (AD) assistance system that integrates a vision
adapter and an LLM reasoning module to enhance visual understanding and
decision-making. The vision adapter, combining YOLOv4 and Vision Transformer
(ViT), extracts comprehensive visual features, while GPT-4 enables human-like
spatial reasoning and response generation. Experimental evaluations with 45
experienced drivers revealed that the system closely mirrors human performance
in describing situations and moderately aligns with human decisions in
generating appropriate responses.
|
2502.06844
|
Exploring Model Invariance with Discrete Search for Ultra-Low-Bit
Quantization
|
cs.LG cs.AI cs.CL
|
Large language models have been increasing in size due to their success in a
wide range of applications. This creates a pressing need to reduce memory
usage to make them more accessible. Post-training quantization is a popular
technique which uses fewer bits (e.g., 4--8 bits) to represent the model
without retraining it. However, it remains a challenging task to perform
quantization in an ultra-low-bit setup (e.g., 2 bits). In this paper, we
propose InvarExplore, a unified framework that systematically explores
different model invariances simultaneously, allowing us to exploit the
synergy between each type of invariance. Importantly, InvarExplore features
a discrete search algorithm that enables us to explore permutation invariance,
which is under-studied as it cannot be optimized with gradient-based methods.
Results show that InvarExplore is compatible with existing state-of-the-art
methods, achieving an add-on performance improvement over strong competing
methods.
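Permutation invariance, the under-studied symmetry InvarExplore searches over, is easy to verify on a two-layer MLP: permuting the hidden units consistently leaves the network's function unchanged. A minimal numpy sketch:

```python
import numpy as np

def mlp(x, W1, b1, W2, b2):
    """Two-layer MLP: x -> relu(x @ W1 + b1) @ W2 + b2."""
    h = np.maximum(x @ W1 + b1, 0.0)
    return h @ W2 + b2

def permute_hidden(W1, b1, W2, perm):
    """Permute the hidden units: applying perm to W1's columns, to b1, and
    to W2's rows reorders the hidden layer without changing the output."""
    return W1[:, perm], b1[perm], W2[perm, :]
```

Because this symmetry is discrete, it cannot be exploited by gradient-based quantization methods, which motivates the discrete search in the abstract.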
|
2502.06845
|
DiffNMR3: Advancing NMR Resolution Beyond Instrumental Limits
|
physics.ins-det cs.AI cs.LG
|
Nuclear Magnetic Resonance (NMR) spectroscopy is a crucial analytical
technique used for molecular structure elucidation, with applications spanning
chemistry, biology, materials science, and medicine. However, the frequency
resolution of NMR spectra is limited by the "field strength" of the instrument.
High-field NMR instruments provide high-resolution spectra but are
prohibitively expensive, whereas lower-field instruments offer more accessible,
but lower-resolution, results. This paper introduces an AI-driven approach that
not only enhances the frequency resolution of NMR spectra through
super-resolution techniques but also provides multi-scale functionality. By
leveraging a diffusion model, our method can reconstruct high-field spectra
from low-field NMR data, offering flexibility in generating spectra at varying
magnetic field strengths. These reconstructions are comparable to those
obtained from high-field instruments, enabling finer spectral details and
improving molecular characterization. To date, our approach is one of the first
to overcome the limitations of instrument field strength, achieving NMR
super-resolution through AI. This cost-effective solution makes high-resolution
analysis accessible to more researchers and industries, without the need for
multimillion-dollar equipment.
|
2502.06846
|
Prot2Chat: Protein LLM with Early Fusion of Sequence and Structure
|
cs.LG cs.AI q-bio.BM
|
Proteins play a pivotal role in living organisms, yet understanding their
functions presents significant challenges, including the limited flexibility of
classification-based methods, the inability to effectively leverage spatial
structural information, and the lack of systematic evaluation metrics for
protein Q&A systems. To address these limitations, we propose Prot2Chat, a
novel framework that integrates multimodal protein representations with natural
language through a unified module, enabling large language model (LLM)-driven
answer generation. Our model incorporates a modified ProteinMPNN encoder, which
encodes protein sequence and structural information in a unified manner, a
protein-text adapter with cross-attention mechanisms, and a LLaMA3 decoder. To
optimize training efficiency, we freeze the encoder and employ LoRA techniques
for the decoder. We conducted experiments on two datasets; both automated
metrics and expert evaluations demonstrate the superior performance of our
model. Furthermore, zero-shot prediction results highlight its strong
generalization capabilities. This framework offers a promising solution for
bridging protein domain knowledge with natural language understanding, paving
the way for transformative advancements in protein-related research.
|
2502.06847
|
A Deep Learning Framework Integrating CNN and BiLSTM for Financial
Systemic Risk Analysis and Prediction
|
cs.LG cs.CE
|
This study proposes a deep learning model based on the combination of
convolutional neural network (CNN) and bidirectional long short-term memory
network (BiLSTM) for discriminant analysis of financial systemic risk. The
model first uses CNN to extract local patterns of multidimensional features of
financial markets, and then models the bidirectional dependency of time series
through BiLSTM, to comprehensively characterize the changing laws of systemic
risk in spatial features and temporal dynamics. The experiment is based on real
financial data sets. The results show that the model is significantly superior
to traditional single models (such as BiLSTM, CNN, Transformer, and TCN) in
terms of accuracy, recall, and F1 score. The F1-score reaches 0.88, showing
extremely high discriminant ability. This shows that the joint strategy of
combining CNN and BiLSTM can not only fully capture the complex patterns of
market data but also effectively deal with the long-term dependency problem in
time series data. In addition, this study also explores the robustness of the
model in dealing with data noise and processing high-dimensional data,
providing strong support for intelligent financial risk management. In the
future, the research will further optimize the model structure, introduce
methods such as reinforcement learning and multimodal data analysis, and
improve the efficiency and generalization ability of the model to cope with a
more complex financial environment.
|
2502.06848
|
Transfer learning in Scalable Graph Neural Network for Improved Physical
Simulation
|
cs.LG cs.AI
|
In recent years, Graph Neural Network (GNN) based models have shown promising
results in simulating physics of complex systems. However, training dedicated
graph network based physics simulators can be costly, as most models are
confined to fully supervised training, which requires extensive data generated
from traditional physics simulators. To date, how transfer learning could
improve the model performance and training efficiency has remained unexplored.
In this work, we introduce a pre-training and transfer learning paradigm for
graph network simulators. We propose the scalable graph U-net (SGUNET).
Incorporating an innovative depth-first search (DFS) pooling, the SGUNET is
adaptable to different mesh sizes and resolutions for various simulation tasks.
To enable the transfer learning between differently configured SGUNETs, we
propose a set of mapping functions to align the parameters between the
pre-trained model and the target model. An extra normalization term is also
added into the loss to constrain the difference between the pre-trained weights
and target model weights for better generalization performance. To pre-train
our physics simulator we created a dataset which includes 20,000 physical
simulations of randomly selected 3D shapes from the open source A Big CAD (ABC)
dataset. We show that our proposed transfer learning methods allow the model to
perform even better when fine-tuned with small amounts of training data than
when it is trained from scratch on the full dataset. On the 2D
Deformable Plate benchmark dataset, our pre-trained model fine-tuned on 1/16 of
the training data achieved an 11.05\% improvement in position RMSE compared to
the model trained from scratch.
|
2502.06849
|
Model Fusion via Neuron Transplantation
|
cs.LG cs.AI
|
Ensemble learning is a widespread technique to improve the prediction
performance of neural networks. However, it comes at the price of increased
memory and inference time. In this work we propose a novel model fusion
technique called \emph{Neuron Transplantation (NT)} in which we fuse an
ensemble of models by transplanting important neurons from all ensemble members
into the vacant space obtained by pruning insignificant neurons. An initial
loss in performance post-transplantation can be quickly recovered via
fine-tuning, consistently outperforming individual ensemble members of the same
model capacity and architecture. Furthermore, NT enables all the ensemble
members to be jointly pruned and jointly trained in a combined model. Compared
to alignment-based averaging (such as Optimal Transport fusion), NT requires
less fine-tuning than the corresponding OT-fused model; the fusion itself is
faster and requires less memory, while the resulting model performance is
comparable or better. The code is available under the following link:
https://github.com/masterbaer/neuron-transplantation.
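The transplantation idea can be sketched on a single hidden layer. The magnitude-based importance score below is an assumption for illustration; the paper's actual pruning criterion may differ.

```python
import numpy as np

def neuron_importance(W_in, W_out):
    """Score each hidden neuron by the product of its fan-in and fan-out
    weight magnitudes (an illustrative saliency proxy)."""
    return np.abs(W_in).sum(axis=0) * np.abs(W_out).sum(axis=1)

def transplant(W_in_a, W_out_a, W_in_b, W_out_b, k):
    """Prune the k least important hidden neurons of model A and fill the
    vacated slots with the k most important neurons of model B."""
    hidden = W_in_a.shape[1]
    keep_a = np.argsort(neuron_importance(W_in_a, W_out_a))[::-1][: hidden - k]
    take_b = np.argsort(neuron_importance(W_in_b, W_out_b))[::-1][:k]
    W_in = np.concatenate([W_in_a[:, keep_a], W_in_b[:, take_b]], axis=1)
    W_out = np.concatenate([W_out_a[keep_a, :], W_out_b[take_b, :]], axis=0)
    return W_in, W_out
```

The fused layer keeps the original width, so (as the abstract notes) a short fine-tuning phase suffices to recover the initial post-transplant loss.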
|
2502.06851
|
Survey on Vision-Language-Action Models
|
cs.CL cs.AI cs.CV
|
This paper presents an AI-generated review of Vision-Language-Action (VLA)
models, summarizing key methodologies, findings, and future directions. The
content is produced using large language models (LLMs) and is intended only for
demonstration purposes. This work does not represent original research, but
highlights how AI can help automate literature reviews. As AI-generated content
becomes more prevalent, ensuring accuracy, reliability, and proper synthesis
remains a challenge. Future research will focus on developing a structured
framework for AI-assisted literature reviews, exploring techniques to enhance
citation accuracy, source credibility, and contextual understanding. By
examining the potential and limitations of LLMs in academic writing, this study
aims to contribute to the broader discussion of integrating AI into research
workflows. This work serves as a preliminary step toward establishing
systematic approaches for leveraging AI in literature review generation, making
academic knowledge synthesis more efficient and scalable.
|
2502.06852
|
EAP-GP: Mitigating Saturation Effect in Gradient-based Automated Circuit
Identification
|
cs.LG cs.AI
|
Understanding the internal mechanisms of transformer-based language models
remains challenging. Mechanistic interpretability based on circuit discovery
aims to reverse engineer neural networks by analyzing their internal processes
at the level of computational subgraphs. In this paper, we revisit existing
gradient-based circuit identification methods and find that their performance
is either affected by the zero-gradient problem or saturation effects, where
edge attribution scores become insensitive to input changes, resulting in noisy
and unreliable attribution evaluations for circuit components. To address the
saturation effect, we propose Edge Attribution Patching with GradPath (EAP-GP).
EAP-GP introduces an integration path, starting from the input and adaptively
following the direction of the difference between the gradients of corrupted
and clean inputs to avoid the saturated region. This approach enhances
attribution reliability and improves the faithfulness of circuit
identification. We evaluate EAP-GP on 6 datasets using GPT-2 Small, GPT-2
Medium, and GPT-2 XL. Experimental results demonstrate that EAP-GP outperforms
existing methods in circuit faithfulness, achieving improvements of up to 17.7%.
Comparisons with manually annotated ground-truth circuits demonstrate that
EAP-GP achieves precision and recall comparable to or better than previous
approaches, highlighting its effectiveness in identifying accurate circuits.
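The saturation effect can be shown in one dimension: a single-point gradient attribution vanishes in the flat region of a squashing function, while accumulating gradients along a path from corrupted to clean input recovers the true effect. The straight-line path below is a plain integrated-gradients toy, not the adaptive GradPath of the paper:

```python
import numpy as np

def grad(x):
    # derivative of tanh, which saturates (goes to 0) for large |x|
    return 1.0 - np.tanh(x) ** 2

x_corrupt, x_clean = 0.0, 6.0  # the clean input sits in the saturated region

# single-point attribution (as in plain edge attribution patching): ~0
single_point = (x_clean - x_corrupt) * grad(x_clean)

# path-based attribution: average the gradient along the corrupted-to-clean
# path, which recovers roughly tanh(6) - tanh(0) ~ 1
ts = np.linspace(0.0, 1.0, 1000)
path = x_corrupt + ts * (x_clean - x_corrupt)
path_attr = (x_clean - x_corrupt) * grad(path).mean()

print(round(single_point, 4), round(path_attr, 3))
```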
|
2502.06853
|
Native Fortran Implementation of TensorFlow-Trained Deep and Bayesian
Neural Networks
|
cs.LG cs.AI
|
Over the past decade, the investigation of machine learning (ML) within the
field of nuclear engineering has grown significantly. With many approaches
reaching maturity, the next phase of investigation will determine the
feasibility and usefulness of ML model implementation in a production setting.
Several of the codes used for reactor design and assessment are primarily
written in the Fortran language, which is not immediately compatible with
TensorFlow-trained ML models. This study presents a framework for implementing
deep neural networks (DNNs) and Bayesian neural networks (BNNs) in Fortran,
allowing for native execution without TensorFlow's C API, Python runtime, or
ONNX conversion. Designed for ease of use and computational efficiency, the
framework can be implemented in any Fortran code, supporting iterative solvers
and uncertainty quantification (UQ) via ensembles or BNNs. Verification was
performed using a two-input,
one-output test case composed of a noisy sinusoid to compare Fortran-based
predictions to those from TensorFlow. The DNN predictions showed negligible
differences and achieved a 19.6x speedup, whereas the BNN predictions exhibited
minor disagreement, plausibly due to differences in random number generation.
An 8.0x speedup was noted for BNN inference. The approach was then further
verified on a nuclear-relevant problem predicting critical heat flux (CHF),
which demonstrated similar behavior along with significant computational gains.
Discussion regarding the framework's successful integration into the CTF
thermal-hydraulics code is also included, outlining its practical usefulness.
Overall, this framework was shown to be effective at implementing both DNN and
BNN model inference within Fortran, allowing for the continued study of
ML-based methods in real-world nuclear applications.
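The core idea, exporting trained weights and re-implementing the forward pass natively without TensorFlow, can be sketched in a few lines (shown here in Python rather than Fortran; the toy weights are arbitrary stand-ins, not values from the paper):

```python
import numpy as np

def dnn_forward(x, layers):
    """Plain dense-network inference using only exported weight/bias arrays:
    ReLU on hidden layers, linear output layer."""
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)  # ReLU
    return x

# two-input, one-output toy network with placeholder weights
layers = [(np.ones((2, 4)), np.zeros(4)),
          (np.ones((4, 1)), np.zeros(1))]
y = dnn_forward(np.array([[1.0, 2.0]]), layers)
print(y)  # [[12.]]
```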
|
2502.06854
|
Can Large Language Models Understand Intermediate Representations?
|
cs.LG cs.AI cs.CL
|
Intermediate Representations (IRs) are essential in compiler design and
program analysis, yet their comprehension by Large Language Models (LLMs)
remains underexplored. This paper presents a pioneering empirical study to
investigate the capabilities of LLMs, including GPT-4, GPT-3, Gemma 2, LLaMA
3.1, and Code Llama, in understanding IRs. We analyze their performance across
four tasks: Control Flow Graph (CFG) reconstruction, decompilation, code
summarization, and execution reasoning. Our results indicate that while LLMs
demonstrate competence in parsing IR syntax and recognizing high-level
structures, they struggle with control flow reasoning, execution semantics, and
loop handling. Specifically, they often misinterpret branching instructions,
omit critical IR operations, and rely on heuristic-based reasoning, leading to
errors in CFG reconstruction, IR decompilation, and execution reasoning. The
study underscores the necessity for IR-specific enhancements in LLMs,
recommending fine-tuning on structured IR datasets and integration of explicit
control flow models to augment their comprehension and handling of IR-related
tasks.
|
2502.06855
|
Self-Supervised Prompt Optimization
|
cs.CL cs.AI cs.LG
|
Well-designed prompts are crucial for enhancing Large language models' (LLMs)
reasoning capabilities while aligning their outputs with task requirements
across diverse domains. However, manually designed prompts require expertise
and iterative experimentation. While existing prompt optimization methods aim
to automate this process, they rely heavily on external references such as
ground-truth labels or human feedback, limiting their applicability in
real-world scenarios
where such data is unavailable or costly to obtain. To address this, we propose
Self-Supervised Prompt Optimization (SPO), a cost-efficient framework that
discovers effective prompts for both closed and open-ended tasks without
requiring external references. Motivated by the observation that prompt quality
manifests directly in LLM outputs and LLMs can effectively assess adherence to
task requirements, we derive evaluation and optimization signals purely from
output comparisons. Specifically, SPO selects superior prompts through pairwise
output comparisons evaluated by an LLM evaluator, followed by an LLM optimizer
that aligns outputs with task requirements. Extensive experiments demonstrate
that SPO outperforms state-of-the-art prompt optimization methods, achieving
comparable or superior results with significantly lower costs (e.g., 1.1% to
5.6% of existing methods) and fewer samples (e.g., three samples). The code is
available at https://github.com/geekan/MetaGPT/blob/main/examples/spo
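The selection loop can be sketched as follows: a candidate prompt replaces the current best only if a pairwise comparison of their outputs favors it. Here `execute`, `judge`, and `mutate` are hypothetical stand-ins for the LLM executor, evaluator, and optimizer, not the MetaGPT API:

```python
def optimize_prompt(seed_prompt, execute, judge, mutate, rounds=3):
    """Keep the prompt whose outputs win the pairwise comparison."""
    best, best_out = seed_prompt, execute(seed_prompt)
    for _ in range(rounds):
        cand = mutate(best)            # optimizer proposes a variant
        cand_out = execute(cand)       # run the candidate on the task
        if judge(cand_out, best_out):  # pairwise comparison of outputs
            best, best_out = cand, cand_out
    return best

# toy stand-ins: "outputs" are output lengths and the judge prefers longer
best = optimize_prompt(
    seed_prompt="solve:",
    execute=lambda p: len(p),
    judge=lambda a, b: a > b,
    mutate=lambda p: p + " think step by step.",
)
print(best)
```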
|
2502.06857
|
Gemstones: A Model Suite for Multi-Faceted Scaling Laws
|
cs.LG cs.AI
|
Scaling laws are typically fit using a family of models with a narrow range
of frozen hyper-parameter choices. In this work we study scaling laws using a
wide range of architecture and hyper-parameter choices, and highlight their
impact on resulting prescriptions. As a primary artifact of our research, we
release the Gemstones: the most comprehensive open-source scaling law dataset
to date, consisting of over 4000 checkpoints from transformers with up to 2
billion parameters; these models have been trained with different learning
rates, cooldown schedules, and architectural shapes. Our checkpoints enable
more complex studies of scaling, such as a law that predicts language modeling
performance as a function of model width and depth. By examining the various
facets of our model suite, we find that the prescriptions of scaling laws can
be highly sensitive to the experimental design process and the specific model
checkpoints used during fitting. Code:
https://github.com/mcleish7/gemstone-scaling-laws
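Fitting a scaling law typically means regressing log-loss on log-scale quantities. A hedged sketch on synthetic points, using a simple `loss = a * N**(-b)` form rather than the Gemstones data or their width/depth-conditioned law:

```python
import numpy as np

# synthetic (parameter count, loss) pairs generated from known a=50, b=0.3
N = np.array([1e7, 1e8, 1e9, 2e9])
loss = 50.0 * N ** -0.3

# a power law is linear in log-log space, so an ordinary least-squares line
# recovers the exponent and prefactor
slope, intercept = np.polyfit(np.log(N), np.log(loss), 1)
a_hat, b_hat = np.exp(intercept), -slope
print(round(a_hat, 1), round(b_hat, 2))  # 50.0 0.3
```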
|
2502.06858
|
LLM-Supported Natural Language to Bash Translation
|
cs.CL cs.AI
|
The Bourne-Again Shell (Bash) command-line interface for Linux systems has
complex syntax and requires extensive specialized knowledge. Using the natural
language to Bash command (NL2SH) translation capabilities of large language
models (LLMs) for command composition circumvents these issues. However, the
NL2SH performance of LLMs is difficult to assess due to inaccurate test data
and unreliable heuristics for determining the functional equivalence of Bash
commands. We present a manually verified test dataset of 600
instruction-command pairs and a training dataset of 40,939 pairs, increasing
the size of previous datasets by 441% and 135%, respectively. Further, we
present a novel functional equivalence heuristic that combines command
execution with LLM evaluation of command outputs. Our heuristic can determine
the functional equivalence of two Bash commands with 95% confidence, a 16%
increase over previous heuristics. Evaluation of popular LLMs using our test
dataset and heuristic demonstrates that parsing, in-context learning, in-weight
learning, and constrained decoding can improve NL2SH accuracy by up to 32%. Our
findings emphasize the importance of dataset quality, execution-based
evaluation, and translation methods for advancing NL2SH translation. Our code is
available at https://github.com/westenfelder/NL2SH
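The combined heuristic can be sketched as: execute both commands, accept on identical output, and otherwise fall back to an LLM judging functional equivalence of the outputs. The function name and `llm_judge` callable are hypothetical, not part of the NL2SH release:

```python
import subprocess

def outputs_equivalent(cmd_a, cmd_b, llm_judge=None):
    """Execute two shell commands and compare their stdout; defer to an LLM
    judge when the raw outputs differ."""
    out_a = subprocess.run(cmd_a, shell=True, capture_output=True, text=True).stdout
    out_b = subprocess.run(cmd_b, shell=True, capture_output=True, text=True).stdout
    if out_a == out_b:
        return True
    if llm_judge is not None:
        return llm_judge(out_a, out_b)  # e.g., "do these outputs match?"
    return False

print(outputs_equivalent("echo hello", "printf 'hello\\n'"))  # True
```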
|