| id | title | categories | abstract |
|---|---|---|---|
2502.00901
|
Fundamental limits of learning in sequence multi-index models and deep
attention networks: High-dimensional asymptotics and sharp thresholds
|
cs.LG cond-mat.dis-nn
|
In this manuscript, we study the learning of deep attention neural networks,
defined as the composition of multiple self-attention layers, with tied and
low-rank weights. We first establish a mapping of such models to sequence
multi-index models, a generalization of the widely studied multi-index model to
sequential covariates, for which we establish a number of general results. In
the context of Bayesian-optimal learning, in the limit of large dimension $D$
and commensurably large number of samples $N$, we derive a sharp asymptotic
characterization of the optimal performance as well as the performance of the
best-known polynomial-time algorithm for this setting, namely approximate
message passing, and characterize sharp thresholds on the minimal sample
complexity required for better-than-random prediction performance. Our analysis
uncovers, in particular, how the different layers are learned sequentially.
Finally, we discuss how this sequential learning can also be observed in a
realistic setup.
|
2502.00902
|
Position: More Rigorous Software Engineering Would Improve
Reproducibility in Machine Learning Research
|
cs.SE cs.LG
|
Experimental verification and falsification of scholarly work are part of the
scientific method's core. To improve the Machine Learning (ML) community's
ability to verify results from prior work, we argue for more robust software
engineering. We estimate the adoption of common engineering best practices by
examining repository links from all recently accepted International Conference
on Machine Learning (ICML), International Conference on Learning
Representations (ICLR) and Neural Information Processing Systems (NeurIPS)
papers as well as ICML papers over time. Based on the results, we recommend how
we, as a community, can improve reproducibility in ML research.
|
2502.00903
|
Embracing Dialectic Intersubjectivity: Coordination of Different
Perspectives in Content Analysis with LLM Persona Simulation
|
cs.CL cs.AI cs.CY cs.SI
|
This study attempts to advance content analysis methodology from
consensus-oriented to coordination-oriented practices, thereby embracing
diverse coding outputs and exploring the dynamics among differing
perspectives. As an exploratory investigation of this approach, we evaluate six
GPT-4o configurations to analyze sentiment in Fox News and MSNBC transcripts on
Biden and Trump during the 2020 U.S. presidential campaign, examining patterns
across these models. By assessing each model's alignment with ideological
perspectives, we explore how partisan selective processing could be identified
in LLM-Assisted Content Analysis (LACA). Findings reveal that partisan persona
LLMs exhibit stronger ideological biases when processing politically congruent
content. Additionally, intercoder reliability is higher among same-partisan
personas compared to cross-partisan pairs. This approach enhances the nuanced
understanding of LLM outputs and advances the integrity of AI-driven social
science research, enabling simulations of real-world implications.
|
2502.00916
|
The Accuracy, Robustness, and Readability of LLM-Generated
Sustainability-Related Word Definitions
|
cs.CL
|
A common language with standardized definitions is crucial for effective
climate discussions. However, concerns exist about LLMs misrepresenting climate
terms. We compared 300 official IPCC glossary definitions with those generated
by GPT-4o-mini, Llama3.1 8B, and Mistral 7B, analyzing adherence, robustness,
and readability using SBERT sentence embeddings. The LLMs scored an average
adherence of $0.57-0.59 \pm 0.15$, and their definitions proved harder to read
than the originals. Model-generated definitions vary mainly among words with
multiple or ambiguous definitions, showing the potential to highlight terms
that need standardization. The results show how LLMs could support
environmental discourse while emphasizing the need to align model outputs with
established terminology for clarity and consistency.
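The adherence scoring described above amounts to comparing embeddings of generated and official definitions. A minimal sketch, assuming plain cosine similarity over toy vectors (the paper's actual pipeline embeds full definitions with SBERT; the `adherence` helper and vectors here are illustrative stand-ins):

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def adherence(official_emb, generated_emb):
    # Adherence of a model-generated definition to the official IPCC one,
    # measured as embedding similarity and averaged over glossary terms
    # in the study (yielding the reported 0.57-0.59 range).
    return cosine(official_emb, generated_emb)
```

An identical pair of embeddings scores 1.0, and orthogonal embeddings score 0.0, so averaged scores near 0.6 indicate partial semantic overlap with the official definitions.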
|
2502.00919
|
Attention Sinks and Outlier Features: A 'Catch, Tag, and Release'
Mechanism for Embeddings
|
cs.CL cs.AI cs.LG
|
Two prominent features of large language models (LLMs) are the presence of
large-norm (outlier) features and the tendency for tokens to attend very
strongly to a select few tokens. Despite often having no semantic relevance,
these select tokens, called attention sinks, along with the large outlier
features, have proven important for model performance, compression, and
streaming. Consequently, investigating the roles of these phenomena within
models and exploring how they might manifest in the model parameters has become
an area of active interest. Through an empirical investigation, we demonstrate
that attention sinks utilize outlier features to: catch a sequence of tokens,
tag the captured tokens by applying a common perturbation, and then release the
tokens back into the residual stream, where the tagged tokens are eventually
retrieved. We prove that simple tasks, like averaging, necessitate the 'catch,
tag, release' mechanism, hence explaining why it would arise organically in
modern LLMs. Our experiments also show that the creation of attention sinks can
be completely captured in the model parameters using low-rank matrices, which
has important implications for model compression and substantiates the success
of recent approaches that incorporate a low-rank term to offset performance
degradation.
|
2502.00920
|
Extending the Lattice Boltzmann Method to Non-linear Solid Mechanics
|
cs.CE cs.NA math.NA
|
This work outlines a Lattice Boltzmann Method (LBM) for geometrically and
constitutively nonlinear solid mechanics to simulate large deformations under
dynamic loading conditions. The method utilizes the moment chain approach,
where the nonlinear constitutive law is incorporated via a forcing term. Stress
and deformation measures are expressed in the reference configuration. Finite
difference schemes are employed for gradient and divergence computations, and
Neumann- and Dirichlet-type boundary conditions are introduced.
Numerical studies are performed to assess the proposed method and illustrate
its capabilities. Benchmark tests for weakly dynamic uniaxial tension and
simple shear across a range of Poisson's ratios demonstrate the feasibility of
the scheme and serve as validation of the implementation. Furthermore, a
dynamic test case involving the propagation of bending waves in a cantilever
beam highlights the potential of the method to model complex dynamic phenomena.
|
2502.00921
|
Blink of an eye: a simple theory for feature localization in generative
models
|
cs.LG
|
Large language models (LLMs) can exhibit undesirable and unexpected behavior
in the blink of an eye. In a recent Anthropic demo, Claude switched from coding
to Googling pictures of Yellowstone, and these sudden shifts in behavior have
also been observed in reasoning patterns and jailbreaks. This phenomenon is not
unique to autoregressive models: in diffusion models, key features of the final
output are decided in narrow "critical windows" of the generation process. In
this work we develop a simple, unifying theory to explain this phenomenon. We
show that it emerges generically as the generation process localizes to a
sub-population of the distribution it models. While critical windows have been
studied at length in diffusion models, existing theory heavily relies on strong
distributional assumptions and the particulars of Gaussian diffusion. In
contrast to existing work, our theory (1) applies to autoregressive and
diffusion models; (2) makes no distributional assumptions; (3) quantitatively
improves previous bounds even when specialized to diffusions; and (4) requires
basic tools and no stochastic calculus or statistical physics-based machinery.
We also identify an intriguing connection to the all-or-nothing phenomenon from
statistical inference. Finally, we validate our predictions empirically for
LLMs and find that critical windows often coincide with failures in problem
solving for various math and reasoning benchmarks.
|
2502.00922
|
Huff-LLM: End-to-End Lossless Compression for Efficient LLM Inference
|
cs.LG cs.AR
|
As they become more capable, large language models (LLMs) have continued to
rapidly increase in size. This has exacerbated the difficulty of running
state-of-the-art LLMs on small, edge devices. Standard techniques advocate
solving this problem through lossy compression such as quantization or
pruning, but such methods have been shown to change model behavior in
unpredictable ways. We propose Huff-LLM, an
\emph{end-to-end, lossless} model compression method that lets users store LLM
weights in compressed format \emph{everywhere} -- cloud, disk, main memory, and
even in on-chip memory/buffers. This allows us to not only load larger models
in main memory, but also reduces bandwidth required to load weights on chip,
and makes more efficient use of on-chip weight buffers. In addition to the
memory savings achieved via compression, we also show latency and energy
efficiency improvements when performing inference with the compressed model.
|
2502.00926
|
Structured Pneumatic Fingerpads for Actively Tunable Grip Friction
|
cs.RO
|
Grip surfaces with tunable friction can actively modify contact conditions,
enabling transitions between higher- and lower-friction states for grasp
adjustment. Friction can be increased to grip securely and then decreased to
gently release (e.g., for handovers) or manipulate in-hand. Recent
friction-tuning surface designs using soft pneumatic chambers show good control
over grip friction; however, most require complex fabrication processes and/or
custom gripper hardware. We present a practical structured fingerpad design for
friction tuning that uses less than \$1 USD of materials, takes only seconds to
repair, and is easily adapted to existing grippers. Our design uses surface
morphology changes to tune friction. The fingerpad is actuated by pressurizing
its internal chambers, thereby deflecting its flexible grip surface out from or
into these chambers. We characterize the friction-tuning capabilities of our
design by measuring the shear force required to pull an object from a gripper
equipped with two independently actuated fingerpads. Our results show that
varying actuation pressure and timing changes the magnitude of friction forces
on a gripped object by up to a factor of 2.8. We demonstrate additional
features including macro-scale interlocking behaviour and pressure-based object
detection.
|
2502.00928
|
Mathematical Cell Deployment Optimization for Capacity and Coverage of
Ground and UAV Users
|
cs.IT eess.SP math.IT
|
We present a general mathematical framework for optimizing cell deployment
and antenna configuration in wireless networks, inspired by quantization
theory. Unlike traditional methods, our framework supports networks with
deterministically located nodes, enabling modeling and optimization under
controlled deployment scenarios. We demonstrate our framework through two
applications: joint fine-tuning of antenna parameters across base stations
(BSs) to optimize network coverage, capacity, and load balancing, and the
strategic deployment of new BSs, including the optimization of their locations
and antenna settings. These optimizations are conducted for a heterogeneous 3D
user population, comprising ground users (GUEs) and uncrewed aerial vehicles
(UAVs) along aerial corridors. Our case studies highlight the framework's
versatility in optimizing performance metrics such as the coverage-capacity
trade-off and capacity per region. Our results confirm that optimizing the
placement and orientation of additional BSs consistently outperforms approaches
focused solely on antenna adjustments, regardless of GUE distribution.
Furthermore, joint optimization for both GUEs and UAVs significantly enhances
UAV service without severely affecting GUE performance.
|
2502.00930
|
Event-Triggered Newton-Based Extremum Seeking Control
|
math.OC cs.SY eess.SY
|
This paper proposes the incorporation of static event-triggered control in
the actuation path of Newton-based extremum seeking and its comparison with the
earlier gradient version. As in the continuous methods, the convergence rate of
the gradient approach depends on the unknown Hessian of the nonlinear map to be
optimized, whereas the proposed event-triggered Newton-based extremum seeking
eliminates this dependence, making the convergence rate user-assignable. This is achieved by means
of a dynamic estimator for the Hessian's inverse, implemented as a Riccati
equation filter. Lyapunov stability and averaging theory for discontinuous
systems are applied to analyze the closed-loop system. Local exponential
practical stability is guaranteed to a small neighborhood of the extremum point
of scalar and static maps. Numerical simulations illustrate the advantages of
the proposed approach over the previous gradient method, including improved
convergence speed, followed by a reduction in the amplitude and updating
frequency of the control signals.
|
2502.00931
|
VL-Nav: Real-time Vision-Language Navigation with Spatial Reasoning
|
cs.RO cs.CV
|
Vision-language navigation in unknown environments is crucial for mobile
robots. In scenarios such as household assistance and rescue, mobile robots
need to understand a human command, such as "find a person wearing black". We
present a novel vision-language navigation (VL-Nav) system that integrates
efficient spatial reasoning on low-power robots. Unlike prior methods that rely
on a single image-level feature similarity to guide a robot, our method
integrates pixel-wise vision-language features with curiosity-driven
exploration. This approach enables robust navigation to human-instructed
instances across diverse environments. We deploy VL-Nav on a four-wheel mobile
robot and evaluate its performance through comprehensive navigation tasks in
both indoor and outdoor environments, spanning different scales and semantic
complexities. Remarkably, VL-Nav operates at a real-time frequency of 30 Hz
with a Jetson Orin NX, highlighting its ability to conduct efficient
vision-language navigation. Results show that VL-Nav achieves an overall
success rate of 86.3%, outperforming previous methods by 44.15%.
|
2502.00935
|
Generalizing Safety Beyond Collision-Avoidance via Latent-Space
Reachability Analysis
|
cs.RO cs.LG
|
Hamilton-Jacobi (HJ) reachability is a rigorous mathematical framework that
enables robots to simultaneously detect unsafe states and generate actions that
prevent future failures. While in theory, HJ reachability can synthesize safe
controllers for nonlinear systems and nonconvex constraints, in practice, it
has been limited to hand-engineered collision-avoidance constraints modeled via
low-dimensional state-space representations and first-principles dynamics. In
this work, our goal is to generalize safe robot controllers to prevent failures
that are hard, if not impossible, to write down by hand, but can be
intuitively identified from high-dimensional observations: for example,
spilling the contents of a bag. We propose Latent Safety Filters, a
latent-space generalization of HJ reachability that tractably operates directly
on raw observation data (e.g., RGB images) by performing safety analysis in the
latent embedding space of a generative world model. This transforms nuanced
constraint specification to a classification problem in latent space and
enables reasoning about dynamical consequences that are hard to simulate. In
simulation and hardware experiments, we use Latent Safety Filters to safeguard
arbitrary policies (from generative policies to direct teleoperation) from
complex safety hazards, like preventing a Franka Research 3 manipulator from
spilling the contents of a bag or toppling cluttered objects.
|
2502.00937
|
Towards Efficient Large Multimodal Model Serving
|
cs.DC cs.AI
|
Recent advances in generative AI have led to large multi-modal models (LMMs)
capable of simultaneously processing inputs of various modalities such as text,
images, video, and audio. While these models demonstrate impressive
capabilities, efficiently serving them in production environments poses
significant challenges due to their complex architectures and heterogeneous
resource requirements.
We present the first comprehensive systems analysis of two prominent LMM
architectures, decoder-only and cross-attention, on six representative
open-source models. We investigate their multi-stage inference pipelines and
resource utilization patterns that lead to unique systems design implications.
We also present an in-depth analysis of production LMM inference traces,
uncovering unique workload characteristics, including variable, heavy-tailed
request distributions, diverse modal combinations, and bursty traffic patterns.
Our key findings reveal that different LMM inference stages exhibit highly
heterogeneous performance characteristics and resource demands, while
concurrent requests across modalities lead to significant performance
interference. To address these challenges, we propose a decoupled serving
architecture that enables independent resource allocation and adaptive scaling
for each stage. We further propose optimizations such as stage colocation to
maximize throughput and resource utilization while meeting the latency
objectives.
|
2502.00939
|
Fruit Fly Classification (Diptera: Tephritidae) in Images, Applying
Transfer Learning
|
cs.CV cs.AI
|
This study develops a transfer learning model for the automated
classification of two species of fruit flies, Anastrepha fraterculus and
Ceratitis capitata, in a controlled laboratory environment. The research
addresses the need to optimize identification and classification, which are
currently performed manually by experts, a process affected by human factors
and time constraints. The methodological process of this study includes the
capture of high-quality images using a mobile phone camera and a stereo
microscope, followed by segmentation to reduce size and focus on relevant
morphological areas. The images were carefully labeled and preprocessed to
ensure the quality and consistency of the dataset used to train the pre-trained
convolutional neural network models VGG16, VGG19, and Inception-v3. The results
were evaluated using the F1-score, achieving 82% for VGG16 and VGG19, while
Inception-v3 reached an F1-score of 93%. Inception-v3's reliability was
verified through model testing in uncontrolled environments, with positive
results, complemented by the Grad-CAM technique, demonstrating its ability to
capture essential morphological features. These findings indicate that
Inception-v3 is an effective and replicable approach for classifying Anastrepha
fraterculus and Ceratitis capitata, with potential for implementation in
automated monitoring systems.
|
2502.00940
|
An MDP Model for Censoring in Harvesting Sensors: Optimal and
Approximated Solutions
|
eess.SY cs.AI cs.SY
|
In this paper, we propose a novel censoring policy for energy-efficient
transmissions in energy-harvesting sensors. The problem is formulated as an
infinite-horizon Markov Decision Process (MDP). The objective to be optimized
is the expected sum of the importance (utility) of all transmitted messages.
Assuming that such importance can be evaluated at the transmitting node, we
show that, under certain conditions on the battery model, the optimal censoring
policy is a threshold function on the importance value. Specifically, messages
are transmitted only if their importance is above a threshold whose value
depends on the battery level. Exploiting this property, we propose a
model-based stochastic scheme that approximates the optimal solution, with less
computational complexity and faster convergence speed than a conventional
Q-learning algorithm. Numerical experiments in single-hop and multi-hop
networks confirm the analytical advantages of the proposed scheme.
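The threshold structure shown optimal above can be sketched directly; the particular threshold function and battery parameters below are hypothetical, serving only to illustrate a policy of the stated form (transmit only when importance exceeds a battery-dependent threshold):

```python
def censoring_policy(importance, battery_level, threshold_fn):
    # Transmit a message only if its importance exceeds a threshold
    # that depends on the current battery level -- the threshold
    # structure the paper proves optimal under its battery model.
    return importance > threshold_fn(battery_level)

def example_threshold(battery_level, capacity=100.0):
    # Hypothetical threshold: censor more aggressively when the
    # battery is low, transmit more freely when it is nearly full.
    return 1.0 - battery_level / capacity
```

With a nearly full battery (level 90), even moderately important messages are transmitted; with a depleted battery (level 20), only the most important ones pass the threshold.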
|
2502.00943
|
Universal Abstraction: Harnessing Frontier Models to Structure
Real-World Data at Scale
|
cs.CL
|
The vast majority of real-world patient information resides in unstructured
clinical text, and the process of medical abstraction seeks to extract and
normalize structured information from this unstructured input. However,
traditional medical abstraction methods can require significant manual effort,
such as crafting rules or annotating training labels, limiting
scalability. In this paper, we propose UniMedAbstractor (UMA), a zero-shot
medical abstraction framework leveraging Large Language Models (LLMs) through a
modular and customizable prompt template. We refer to our approach as universal
abstraction as it can quickly scale to new attributes through its universal
prompt template without curating attribute-specific training labels or rules.
We evaluate UMA for oncology applications, focusing on fifteen key attributes
representing the cancer patient journey, from short-context attributes (e.g.,
performance status, treatment) to complex long-context attributes requiring
longitudinal reasoning (e.g., tumor site, histology, TNM staging). Experiments
on real-world data show UMA's strong performance and generalizability. Compared
to supervised and heuristic baselines, UMA with GPT-4o achieves on average an
absolute 2-point F1/accuracy improvement for both short-context and
long-context attribute abstraction. For pathologic T staging, UMA even
outperforms the supervised model by 20 points in accuracy.
|
2502.00944
|
Analysis of static and dynamic batching algorithms for graph neural
networks
|
cs.LG
|
Graph neural networks (GNNs) have shown promising results for several domains
such as materials science, chemistry, and the social sciences. GNN models often
contain millions of parameters, and like other neural network (NN) models, are
often fed only a fraction of the graphs that make up the training dataset in
batches to update model parameters. The effect of batching algorithms on
training time and model performance has been thoroughly explored for NNs but
not yet for GNNs. We analyze two different batching algorithms for graph-based
models, namely static and dynamic batching. We use the Jraph library built on
JAX to perform our experiments, where we compare the two batching methods for
two datasets, the QM9 dataset of small molecules and the AFLOW materials
database. Our experiments show that significant training time savings can be
found from changing the batching algorithm, but the fastest algorithm depends
on the data, model, batch size and number of training steps run. Experiments
show no significant difference in model learning between the algorithms.
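The two batching strategies compared above can be sketched in pure Python; the graph representation (a node count standing in for a graph) and the node-budget parameter are simplified placeholders, since the paper's experiments use Jraph on JAX:

```python
def static_batches(graphs, batch_size):
    # Static batching: a fixed number of graphs per batch, so each
    # batch must later be padded up to the largest graph it contains.
    return [graphs[i:i + batch_size] for i in range(0, len(graphs), batch_size)]

def dynamic_batches(graphs, node_budget):
    # Dynamic batching: pack graphs greedily until a node budget is
    # reached, keeping total batch size (in nodes) roughly constant
    # and avoiding padding waste.
    batches, current, used = [], [], 0
    for g in graphs:  # g is a node count, standing in for a graph
        if current and used + g > node_budget:
            batches.append(current)
            current, used = [], 0
        current.append(g)
        used += g
    if current:
        batches.append(current)
    return batches
```

For graphs with node counts `[3, 4, 5, 2]` and a budget of 7 nodes, dynamic batching yields `[[3, 4], [5, 2]]`, whereas static batching with `batch_size=2` would pair graphs regardless of their sizes.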
|
2502.00947
|
Minimax Optimality of Classical Scaling Under General Noise Conditions
|
math.ST cs.LG stat.ML stat.TH
|
We establish the consistency of classical scaling under a broad class of
noise models, encompassing many cases commonly studied in the literature. Our
approach requires only finite fourth moments of the noise, significantly
weakening standard assumptions. We derive convergence rates for classical
scaling and establish matching minimax lower bounds, demonstrating that
classical scaling achieves minimax optimality in recovering the true
configuration even when the input dissimilarities are corrupted by noise.
|
2502.00952
|
Mapping the Spiral of Silence: Surveying Unspoken Opinions in Online
Communities
|
cs.SI cs.CY cs.HC
|
We often treat social media as a lens onto society. How might that lens be
distorting the actual popularity of political and social viewpoints? In this
paper, we examine the difference between the viewpoints publicly posted in a
community and the privately surveyed viewpoints of community members,
contributing a measurement of a theory called the "spiral of silence." This
theory observes that people are less likely to voice their opinion when they
believe they are in the minority, leading to a spiral where minority opinions
are less likely to be shared, so they appear even further in the minority, and
become even less likely to be shared. We surveyed active members of politically
oriented Reddit communities to gauge their willingness to post on contentious
topics, yielding 627 responses from 108 participants about 11 topics and 33
subreddits. We find that 72.6% of participants who perceive themselves in the
minority remain silent, and are only half as likely to post their viewpoint
compared to those who believe their opinion is in the majority. Communities
perceived as being more inclusive reduce the magnitude of this effect. These
results emphasize how far out of step the opinions we see online may be with
the population they purport to represent.
|
2502.00954
|
Hypo3D: Exploring Hypothetical Reasoning in 3D
|
cs.CV
|
The rise of vision-language foundation models marks an advancement in
bridging the gap between human and machine capabilities in 3D scene reasoning.
Existing 3D reasoning benchmarks assume real-time scene accessibility, which is
impractical due to the high cost of frequent scene updates. To this end, we
introduce Hypo3D, a benchmark for hypothetical 3D reasoning, designed to
evaluate models' ability to reason without access to real-time scene data.
Models need to imagine the scene state based on a provided change description
before reasoning. Hypo3D is formulated as a 3D Visual Question Answering (VQA)
benchmark, comprising 7,727 context changes across 700 indoor scenes, resulting
in 14,885 question-answer pairs. An anchor-based world frame is established for
all scenes, ensuring consistent reference to a global frame for directional
terms in context changes and QAs. Extensive experiments show that
state-of-the-art foundation models struggle to reason in hypothetically changed
scenes. This reveals a substantial performance gap compared to humans,
particularly in scenarios involving movement changes and directional reasoning.
Even when the context change is irrelevant to the question, models often
incorrectly adjust their answers.
|
2502.00955
|
Efficient Multi-Agent System Training with Data Influence-Oriented Tree
Search
|
cs.CL
|
Monte Carlo Tree Search (MCTS) based methods provide promising approaches for
generating synthetic data to enhance the self-training of Large Language Model
(LLM) based multi-agent systems (MAS). These methods leverage Q-values to
estimate individual agent contributions. However, relying solely on Q-values to
identify informative data may misalign with the data synthesis objective, as
the focus should be on selecting data that best enhances model training. To
address this discrepancy, we propose Data Influence-oriented Tree Search
(DITS), a novel framework that incorporates influence scores to guide both tree
search and data selection. By leveraging influence scores, we effectively
identify the most impactful data for system improvement, thereby enhancing
model performance. Furthermore, we derive influence score estimation methods
tailored for non-differentiable metrics, significantly reducing computational
overhead by utilizing inference computations. Extensive experiments on eight
multi-agent datasets demonstrate the robustness and effectiveness of the
proposed methods. Notably, our findings reveal that allocating more inference
resources to estimate influence scores, rather than Q-values, during data
synthesis can more effectively and efficiently enhance model training.
|
2502.00960
|
SAM-guided Pseudo Label Enhancement for Multi-modal 3D Semantic
Segmentation
|
cs.CV
|
Multi-modal 3D semantic segmentation is vital for applications such as
autonomous driving and virtual reality (VR). To effectively deploy these models
in real-world scenarios, it is essential to employ cross-domain adaptation
techniques that bridge the gap between training data and real-world data.
Recently, self-training with pseudo-labels has emerged as a predominant method
for cross-domain adaptation in multi-modal 3D semantic segmentation. However,
generating reliable pseudo-labels necessitates stringent constraints, which
often result in sparse pseudo-labels after pruning. This sparsity can
potentially hinder performance improvement during the adaptation process. We
propose an image-guided pseudo-label enhancement approach that leverages the
complementary 2D prior knowledge from the Segment Anything Model (SAM) to
introduce more reliable pseudo-labels, thereby boosting domain adaptation
performance. Specifically, given a 3D point cloud and the SAM masks from its
paired image data, we collect all 3D points covered by each SAM mask that
potentially belong to the same object. Then our method refines the
pseudo-labels within each SAM mask in two steps. First, we determine the class
label for each mask using majority voting and employ various constraints to
filter out unreliable mask labels. Next, we introduce Geometry-Aware
Progressive Propagation (GAPP) which propagates the mask label to all 3D points
within the SAM mask while avoiding outliers caused by 2D-3D misalignment.
Experiments conducted across multiple datasets and domain adaptation scenarios
demonstrate that our proposed method significantly increases the quantity of
high-quality pseudo-labels and enhances the adaptation performance over
baseline methods.
|
2502.00963
|
PDE-Controller: LLMs for Autoformalization and Reasoning of PDEs
|
cs.LG
|
While recent AI-for-math has made strides in pure mathematics, areas of
applied mathematics, particularly PDEs, remain underexplored despite their
significant real-world applications. We present PDE-Controller, a framework
that enables large language models (LLMs) to control systems governed by
partial differential equations (PDEs). Our approach enables LLMs to transform
informal natural language instructions into formal specifications, and then
execute reasoning and planning steps to improve the utility of PDE control. We
build a holistic solution comprising datasets (both human-written cases and 2
million synthetic samples), math-reasoning models, and novel evaluation
metrics, all of which required significant effort to construct. Our PDE-Controller
significantly outperforms prompting the latest open-source and GPT models in
reasoning, autoformalization, and program synthesis, achieving up to a 62%
improvement in utility gain for PDE control. By bridging the gap between
language generation and PDE systems, we demonstrate the potential of LLMs in
addressing complex scientific and engineering challenges. We will release all
data, model checkpoints, and code at https://pde-controller.github.io/.
|
2502.00964
|
ML-Dev-Bench: Comparative Analysis of AI Agents on ML development
workflows
|
cs.SE cs.AI
|
In this report, we present ML-Dev-Bench, a benchmark aimed at testing agentic
capabilities on applied Machine Learning development tasks. While existing
benchmarks focus on isolated coding tasks or Kaggle-style competitions,
ML-Dev-Bench tests agents' ability to handle the full complexity of ML
development workflows. The benchmark assesses performance across critical
aspects including dataset handling, model training, improving existing models,
debugging, and API integration with popular ML tools. We evaluate three agents
- ReAct, Openhands, and AIDE - on a diverse set of 30 tasks, providing insights
into their strengths and limitations in handling practical ML development
challenges. We open source the benchmark for the benefit of the community at
\href{https://github.com/ml-dev-bench/ml-dev-bench}{https://github.com/ml-dev-bench/ml-dev-bench}.
|
2502.00965
|
CLIP-UP: A Simple and Efficient Mixture-of-Experts CLIP Training Recipe
with Sparse Upcycling
|
cs.CV cs.LG
|
Mixture-of-Experts (MoE) models are crucial for scaling model capacity while
controlling inference costs. While integrating MoE into multimodal models like
CLIP improves performance, training these models is notoriously challenging and
expensive. We propose CLIP-Upcycling (CLIP-UP), an efficient alternative
training strategy that converts a pre-trained dense CLIP model into a sparse
MoE architecture. Through extensive experimentation with various settings and
auxiliary losses, we demonstrate that CLIP-UP significantly reduces training
complexity and cost. Remarkably, our sparse CLIP B/16 model, trained with
CLIP-UP, outperforms its dense counterpart by 7.2% and 6.6% on COCO and
Flickr30k text-to-image Recall@1 benchmarks respectively. It even surpasses the
larger CLIP L/14 model on this task while using only 30% of the inference
FLOPs. We further demonstrate the generalizability of our training recipe
across different scales, establishing sparse upcycling as a practical and
scalable approach for building efficient, high-performance CLIP models.
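The abstract does not detail the upcycling step; in the sparse-upcycling literature, the usual recipe initializes every MoE expert as a copy of the pre-trained dense FFN and adds a freshly initialized router, so the MoE starts out computing the same function as the dense model. A minimal sketch of that generic recipe (the weight-dict layout and all names are illustrative, not CLIP-UP's actual code):

```python
import copy

def upcycle_ffn(dense_ffn, num_experts):
    """Sparse upcycling: each expert starts as an exact copy of the dense FFN,
    so the upcycled MoE initially matches the dense model's output."""
    experts = [copy.deepcopy(dense_ffn) for _ in range(num_experts)]
    d_model = len(dense_ffn["w_in"][0])                      # FFN input width
    router = [[0.0] * num_experts for _ in range(d_model)]   # zero-init logits
    return {"experts": experts, "router": router}

def top_k_route(logits, k=1):
    """Pick the k experts with the largest router logits for one token."""
    return sorted(range(len(logits)), key=lambda e: -logits[e])[:k]
```

With top-k routing, only k of the experts run per token, which is what keeps inference FLOPs low relative to a dense model of the same total parameter count.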
|
2502.00966
|
The Beatbots: A Musician-Informed Multi-Robot Percussion Quartet
|
cs.RO cs.HC
|
Artistic creation is often seen as a uniquely human endeavor, yet robots
bring distinct advantages to music-making, such as precise tempo control,
unpredictable rhythmic complexities, and the ability to coordinate intricate
human and robot performances. While many robotic music systems aim to mimic
human musicianship, our work emphasizes the unique strengths of robots,
resulting in a novel multi-robot performance instrument called the Beatbots,
capable of producing music that is challenging for humans to replicate using
current methods. The Beatbots were designed using an ``informed prototyping''
process, incorporating feedback from three musicians throughout development. We
evaluated the Beatbots through a live public performance, surveying
participants (N=28) to understand how they perceived and interacted with the
robotic performance. Results show that participants valued the playfulness of
the experience, the aesthetics of the robot system, and the unconventional
robot-generated music. Expert musicians and non-expert roboticists demonstrated
especially positive mindset shifts during the performance, although
participants across all demographics had favorable responses. We propose design
principles to guide the development of future robotic music systems and
identify key robotic music affordances that our musician consultants considered
particularly important for robotic music performance.
|
2502.00968
|
CoDe: Blockwise Control for Denoising Diffusion Models
|
cs.CV cs.LG
|
Aligning diffusion models to downstream tasks often requires finetuning new
models or gradient-based guidance at inference time to enable sampling from the
reward-tilted posterior. In this work, we explore a simple inference-time
gradient-free guidance approach, called controlled denoising (CoDe), that
circumvents the need for differentiable guidance functions and model
finetuning. CoDe is a blockwise sampling method applied during intermediate
denoising steps, allowing for alignment with downstream rewards. Our
experiments demonstrate that, despite its simplicity, CoDe offers a favorable
trade-off between reward alignment, prompt instruction following, and inference
cost, achieving a competitive performance against the state-of-the-art
baselines. Our code is available at: https://github.com/anujinho/code.
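The abstract does not spell out the sampling loop; one way to picture blockwise, gradient-free reward guidance is best-of-N selection applied after every block of denoising steps. The sketch below uses a toy scalar "denoiser" and a black-box reward; everything here is an illustrative stand-in, not CoDe's actual implementation:

```python
import random

def denoise_block(x, steps, rng):
    # Toy stand-in for a block of reverse-diffusion steps:
    # deterministic drift toward zero plus Gaussian noise.
    for _ in range(steps):
        x = 0.9 * x + rng.gauss(0.0, 0.1)
    return x

def code_sample(x0, reward, n_blocks=5, block_size=4, n_candidates=8, seed=0):
    """Blockwise best-of-N sampling: after each block of denoising steps,
    keep the candidate with the highest (non-differentiable) reward."""
    rng = random.Random(seed)
    x = x0
    for _ in range(n_blocks):
        candidates = [denoise_block(x, block_size, rng) for _ in range(n_candidates)]
        x = max(candidates, key=reward)  # gradient-free guidance step
    return x
```

Setting `n_candidates=1` recovers unguided sampling, which is the knob that trades inference cost for reward alignment.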
|
2502.00969
|
Wizard of Shopping: Target-Oriented E-commerce Dialogue Generation with
Decision Tree Branching
|
cs.CL
|
The goal of conversational product search (CPS) is to develop an intelligent,
chat-based shopping assistant that can directly interact with customers to
understand shopping intents, ask clarification questions, and find relevant
products. However, training such assistants is hindered mainly due to the lack
of reliable and large-scale datasets. Prior human-annotated CPS datasets are
extremely small in size and lack integration with real-world product search
systems. We propose a novel approach, TRACER, which leverages large language
models (LLMs) to generate realistic and natural conversations for different
shopping domains. TRACER's novelty lies in grounding the generation to dialogue
plans, i.e., product search trajectories predicted from a decision tree model, which guarantee relevant product discovery in the smallest number of
search conditions. We also release the first target-oriented CPS dataset Wizard
of Shopping (WoS), containing highly natural and coherent conversations (3.6k)
from three shopping domains. Finally, we demonstrate the quality and
effectiveness of WoS via human evaluations and downstream tasks.
|
2502.00972
|
Pushing the Boundaries of State Space Models for Image and Video
Generation
|
cs.CV cs.LG
|
While Transformers have become the dominant architecture for visual
generation, linear attention models, such as the state-space models (SSM), are
increasingly recognized for their efficiency in processing long visual
sequences. However, the efficiency of these models stems from their limited recurrent state, which enforces causality among tokens and is prone to inconsistent modeling of N-dimensional visual data, raising questions about their capacity to generate long non-causal sequences. In this
paper, we explore the boundary of SSMs on image and video generation by building the largest-scale diffusion SSM-Transformer hybrid model to date (5B parameters), based on the sub-quadratic bi-directional Hydra and self-attention, and generating images up to 2K resolution and 360p, 8-second (16 FPS) videos. Our results
demonstrate that the model can produce faithful results aligned with complex
text prompts and temporally consistent videos with high dynamics, suggesting the
great potential of using SSMs for visual generation tasks.
|
2502.00973
|
A Wearable Device Dataset for Mental Health Assessment Using Laser
Doppler Flowmetry and Fluorescence Spectroscopy Sensors
|
cs.LG eess.SP
|
In this study, we introduce a novel method to predict mental health by
building machine learning models for a non-invasive wearable device equipped
with Laser Doppler Flowmetry (LDF) and Fluorescence Spectroscopy (FS) sensors.
In addition, we present the corresponding dataset for predicting mental health, e.g., depression, anxiety, and stress levels, via the DAS-21 questionnaire. To the best of our knowledge, this is the world's largest and most generalized dataset
ever collected for both LDF and FS studies. The device captures cutaneous blood
microcirculation parameters, and wavelet analysis of the LDF signal extracts
key rhythmic oscillations. The dataset, collected from 132 volunteers aged
18-94 from 19 countries, explores relationships between physiological features,
demographics, lifestyle habits, and health conditions. We employed a variety of machine learning methods for stress detection, among which LightGBM was identified as the most effective model, achieving a ROC
AUC of 0.7168 and a PR AUC of 0.8852. In addition, we also incorporated
Explainable Artificial Intelligence (XAI) techniques into our analysis to
investigate deeper insights into the model's predictions. Our results suggest
that females, younger individuals, and those with a higher Body Mass Index (BMI)
or heart rate have a greater likelihood of experiencing mental health
conditions like stress and anxiety. All related code and data are published
online: https://github.com/leduckhai/Wearable_LDF-FS.
|
2502.00976
|
Learning the Integral Quadratic Constraints on Plant-Model Mismatch
|
eess.SY cs.SY
|
While a characterization of plant-model mismatch is necessary for robust
control, the mismatch usually cannot be described accurately due to the lack
of knowledge about the plant model or the complexity of nonlinear plants.
Hence, this paper considers this problem in a data-driven way, where the
mismatch is captured by parametric forms of integral quadratic constraints
(IQCs) and the parameters contained in the IQC equalities are learned from
sampled trajectories from the plant. To this end, a one-class support vector
machine (OC-SVM) formulation is proposed, and its generalization performance is
analyzed based on the statistical learning theory. The proposed approach is
demonstrated by a single-input-single-output time delay mismatch and a
nonlinear two-phase reactor with a linear nominal model, showing accurate
recovery of frequency-domain uncertainties.
|
2502.00977
|
Context-Aware Hierarchical Merging for Long Document Summarization
|
cs.CL
|
Hierarchical Merging is a technique commonly used to summarize very long
texts ($>$100K tokens) by breaking down the input into smaller sections,
summarizing those sections individually, and then merging or combining those
summaries into a final coherent summary. Although it helps address the
limitations of large language models (LLMs) with fixed input length
constraints, the recursive merging process can amplify LLM hallucinations,
increasing the risk of factual inaccuracies. In this paper, we seek to mitigate
hallucinations by enriching hierarchical merging with context from the source
document. Specifically, we propose different approaches to contextual
augmentation ranging from \emph{replacing} intermediate summaries with relevant
input context, to \emph{refining} them while using the context as supporting
evidence, and \emph{aligning} them implicitly (via citations) to the input.
Experimental results on datasets representing legal and narrative domains show
that contextual augmentation consistently outperforms zero-shot and
hierarchical merging baselines for the Llama 3.1 model family. Our analysis
further reveals that refinement methods tend to perform best when paired with
extractive summarization for identifying relevant input.
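For readers unfamiliar with the baseline, hierarchical merging can be sketched as a chunk-summarize-merge recursion. In the sketch below, a naive first-sentence extractor stands in for the LLM summarizer; the contextual-augmentation variants described above would additionally pass retrieved source passages into the merge calls:

```python
def hierarchical_merge(text, chunk_size=500, summarize=None):
    """Skeleton of hierarchical merging: chunk -> summarize -> merge pairwise.

    `summarize` stands in for an LLM call; a first-sentence extractor is
    used here purely as a runnable placeholder."""
    if summarize is None:
        summarize = lambda t: t.split(". ")[0].strip().rstrip(".") + "."
    # 1. Split the input into fixed-size chunks (token-based in practice).
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    # 2. Summarize each chunk independently.
    summaries = [summarize(c) for c in chunks]
    # 3. Merge summaries pairwise until a single summary remains.
    while len(summaries) > 1:
        merged = []
        for i in range(0, len(summaries), 2):
            pair = " ".join(summaries[i:i + 2])
            merged.append(summarize(pair))
        summaries = merged
    return summaries[0]
```

Because each merge call sees only intermediate summaries, errors compound up the tree, which is exactly the hallucination-amplification problem the contextual augmentation targets.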
|
2502.00980
|
Forecasting VIX using interpretable Kolmogorov-Arnold networks
|
cs.LG cs.AI cs.CE
|
This paper presents the use of Kolmogorov-Arnold Networks (KANs) for
forecasting the CBOE Volatility Index (VIX). Unlike traditional MLP-based
neural networks that are often criticized for their black-box nature, KAN
offers an interpretable approach via learnable spline-based activation
functions and symbolification. Based on a parsimonious architecture with
symbolic functions, KAN expresses a forecast of the VIX in closed form in terms of explanatory variables and provides interpretable insights into key
characteristics of the VIX, including mean reversion and the leverage effect.
Through in-depth empirical analysis across multiple datasets and periods, we
show that KANs achieve competitive forecasting performance while requiring
significantly fewer parameters compared to MLP-based neural network models. Our
findings demonstrate the capacity and potential of KAN as an interpretable
financial time-series forecasting method.
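As background (not taken from this paper), KANs are motivated by the Kolmogorov-Arnold representation theorem, which states that any continuous multivariate function can be written using only univariate functions and addition:

```latex
f(x_1, \dots, x_n) = \sum_{q=1}^{2n+1} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right)
```

A KAN layer replaces the fixed activations of an MLP with learnable univariate splines playing the roles of the $\phi_{q,p}$ and $\Phi_q$; symbolifying those splines is what lets the fitted forecast be read off as a closed-form expression.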
|
2502.00983
|
CausalCOMRL: Context-Based Offline Meta-Reinforcement Learning with
Causal Representation
|
cs.LG stat.ML
|
Context-based offline meta-reinforcement learning (OMRL) methods have
achieved appealing success by leveraging pre-collected offline datasets to
develop task representations that guide policy learning. However, current
context-based OMRL methods often introduce spurious correlations, where task
components are incorrectly correlated due to confounders. These correlations
can degrade policy performance when the confounders in the test task differ
from those in the training task. To address this problem, we propose
CausalCOMRL, a context-based OMRL method that integrates causal representation
learning. This approach uncovers causal relationships among the task components
and incorporates the causal relationships into task representations, enhancing
the generalizability of RL agents. We further improve the distinction of task
representations from different tasks by using mutual information optimization
and contrastive learning. Utilizing these causal task representations, we
employ SAC to optimize policies on meta-RL benchmarks. Experimental results
show that CausalCOMRL achieves better performance than other methods on most
benchmarks.
|
2502.00987
|
RandLoRA: Full-rank parameter-efficient fine-tuning of large models
|
cs.CL cs.AI cs.CV
|
Low-Rank Adaptation (LoRA) and its variants have shown impressive results in
reducing the number of trainable parameters and memory requirements of large
transformer networks while maintaining fine-tuning performance. However, the
low-rank nature of the weight update inherently limits the representation power
of fine-tuned models, potentially compromising performance on complex tasks.
This raises a critical question: when a performance gap between LoRA and
standard fine-tuning is observed, is it due to the reduced number of trainable
parameters or the rank deficiency? This paper aims to answer this question by
introducing RandLoRA, a parameter-efficient method that performs full-rank
updates using learned linear combinations of low-rank, non-trainable random
matrices. Our method limits the number of trainable parameters by restricting
optimization to diagonal scaling matrices applied to the fixed random matrices.
This allows us to effectively overcome the low-rank limitations while
maintaining parameter and memory efficiency during training. Through extensive
experimentation across vision, language, and vision-language benchmarks, we
systematically evaluate the limitations of LoRA and existing random basis
methods. Our findings reveal that full-rank updates are beneficial across
vision and language tasks individually, and even more so for vision-language
tasks, where RandLoRA significantly reduces -- and sometimes eliminates -- the
performance gap between standard fine-tuning and LoRA, demonstrating its
efficacy.
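One plausible reading of the parameterization described above, sketched in pure Python with illustrative names: the update is a sum of frozen random low-rank factors, and only diagonal scalings are trained, so the trainable parameter count is `n_bases * r` rather than `d * d` while the summed update can reach full rank once `n_bases * r >= d`. This is a sketch of the idea, not the authors' implementation:

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def rand_matrix(rows, cols, rng):
    return [[rng.gauss(0.0, 1.0) for _ in range(cols)] for _ in range(rows)]

def randlora_delta(d, r, n_bases, lams, rng):
    """Weight update as a sum of frozen random low-rank factors B_i A_i,
    scaled only by the trainable diagonals lams[i] (r values per basis)."""
    delta = [[0.0] * d for _ in range(d)]
    for i in range(n_bases):
        B = rand_matrix(d, r, rng)  # frozen (never trained)
        A = rand_matrix(r, d, rng)  # frozen (never trained)
        # Apply the trainable diagonal scaling: diag(lams[i]) @ A.
        scaled_A = [[lams[i][k] * A[k][j] for j in range(d)] for k in range(r)]
        BA = matmul(B, scaled_A)
        for p in range(d):
            for q in range(d):
                delta[p][q] += BA[p][q]
    return delta
```

Optimization would update only `lams`, exactly as LoRA updates only its two low-rank factors, but without the rank-`r` ceiling on the overall update.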
|
2502.00988
|
PlotGen: Multi-Agent LLM-based Scientific Data Visualization via
Multimodal Feedback
|
cs.CL cs.AI
|
Scientific data visualization is pivotal for transforming raw data into
comprehensible visual representations, enabling pattern recognition,
forecasting, and the presentation of data-driven insights. However, novice
users often face difficulties due to the complexity of selecting appropriate
tools and mastering visualization techniques. Large Language Models (LLMs) have
recently demonstrated potential in assisting code generation, though they
struggle with accuracy and require iterative debugging. In this paper, we
propose PlotGen, a novel multi-agent framework aimed at automating the creation
of precise scientific visualizations. PlotGen orchestrates multiple LLM-based
agents, including a Query Planning Agent that breaks down complex user requests
into executable steps, a Code Generation Agent that converts pseudocode into
executable Python code, and three retrieval feedback agents - a Numeric
Feedback Agent, a Lexical Feedback Agent, and a Visual Feedback Agent - that
leverage multimodal LLMs to iteratively refine the data accuracy, textual
labels, and visual correctness of generated plots via self-reflection.
Extensive experiments show that PlotGen outperforms strong baselines, achieving
a 4-6 percent improvement on the MatPlotBench dataset, leading to enhanced user
trust in LLM-generated visualizations and improved novice productivity due to a
reduction in debugging time needed for plot errors.
|
2502.00989
|
ChartCitor: Multi-Agent Framework for Fine-Grained Chart Visual
Attribution
|
cs.CL cs.AI
|
Large Language Models (LLMs) can perform chart question-answering tasks but
often generate unverified hallucinated responses. Existing answer attribution
methods struggle to ground responses in source charts due to limited
visual-semantic context, complex visual-text alignment requirements, and
difficulties in bounding box prediction across complex layouts. We present
ChartCitor, a multi-agent framework that provides fine-grained bounding box
citations by identifying supporting evidence within chart images. The system
orchestrates LLM agents to perform chart-to-table extraction, answer
reformulation, table augmentation, evidence retrieval through pre-filtering and
re-ranking, and table-to-chart mapping. ChartCitor outperforms existing
baselines across different chart types. Qualitative user studies show that
ChartCitor helps increase user trust in Generative AI by providing enhanced
explainability for LLM-assisted chart QA and enables professionals to be more
productive.
|
2502.00991
|
TxnSails: Achieving Serializable Transaction Scheduling with
Self-Adaptive Isolation Level Selection
|
cs.DB
|
Achieving the serializable isolation level, regarded as the gold standard for
transaction processing, is costly. Recent studies reveal that adjusting
specific query patterns within a workload can still achieve serializability
even at lower isolation levels. Nevertheless, these studies typically overlook
the trade-off between the performance advantages of lower isolation levels and
the overhead required to maintain serializability, potentially leading to
suboptimal isolation level choices that fail to maximize performance. In this
paper, we present TxnSails, a middle-tier solution designed to achieve
serializable scheduling with self-adaptive isolation level selection. First,
TxnSails incorporates a unified concurrency control algorithm that achieves
serializability at lower isolation levels with minimal additional overhead.
Second, TxnSails employs a deep learning method to characterize the trade-off
between the performance benefits and overhead associated with lower isolation
levels, thus predicting the optimal isolation level. Finally, TxnSails
implements a cross-isolation validation mechanism to ensure serializability
during real-time isolation level transitions. Extensive experiments demonstrate
that TxnSails outperforms state-of-the-art solutions by up to 26.7x and
PostgreSQL's serializable isolation level by up to 4.8x.
|
2502.00992
|
FCBoost-Net: A Generative Network for Synthesizing Multiple Collocated
Outfits via Fashion Compatibility Boosting
|
cs.CV cs.MM
|
Outfit generation is a challenging task in the field of fashion technology,
in which the aim is to create a collocated set of fashion items that complement
a given set of items. Previous studies in this area have been limited to
generating a unique set of fashion items based on a given set of items, without
providing additional options to users. This lack of a diverse range of choices
necessitates the development of a more versatile framework. However, when the
task of generating collocated and diversified outfits is approached with
multimodal image-to-image translation methods, it poses a challenging problem
in terms of non-aligned image translation, which is hard to address with
existing methods. In this research, we present FCBoost-Net, a new framework for
outfit generation that leverages the power of pre-trained generative models to
produce multiple collocated and diversified outfits. Initially, FCBoost-Net
randomly synthesizes multiple sets of fashion items, and the compatibility of
the synthesized sets is then improved in several rounds using a novel fashion
compatibility booster. This approach was inspired by boosting algorithms and
allows the performance to be gradually improved in multiple steps. Empirical
evidence indicates that the proposed strategy can improve the fashion
compatibility of randomly synthesized fashion items as well as maintain their
diversity. Extensive experiments confirm the effectiveness of our proposed
framework with respect to visual authenticity, diversity, and fashion
compatibility.
|
2502.00996
|
Self-supervised Analogical Learning using Language Models
|
cs.CL
|
Large language models have been shown to suffer from reasoning inconsistency
issues. That is, they fail more in situations unfamiliar to the training data,
even though exact or very similar reasoning paths exist in more common cases
that they can successfully solve. Such observations motivate us to propose
methods that encourage models to understand the high-level and abstract
reasoning processes during training instead of only the final answer. This way,
models can transfer the exact solution to similar cases, regardless of their
relevance to the pre-training data distribution. In this work, we propose SAL,
a self-supervised analogical learning framework. SAL mimics the human analogy
process and trains models to explicitly transfer high-quality symbolic
solutions from cases that they know how to solve to other rare cases in which
they tend to fail more. We show that the resulting models after SAL learning
outperform base language models on a wide range of reasoning benchmarks, such
as StrategyQA, GSM8K, and HotpotQA, by 2% to 20%. At the same time, we show
that our model is more generalizable and controllable through analytical
studies.
|
2502.00997
|
MergeME: Model Merging Techniques for Homogeneous and Heterogeneous MoEs
|
cs.CL cs.AI
|
The recent success of specialized Large Language Models (LLMs) in domains
such as mathematical reasoning and coding has led to growing interest in
methods for merging these expert LLMs into a unified Mixture-of-Experts (MoE)
model, with the goal of enhancing performance in each domain while retaining
effectiveness on general tasks. However, the effective merging of expert models
remains an open challenge, especially for models with highly divergent weight
parameters or different architectures. State-of-the-art MoE merging methods
only work with homogeneous model architectures and rely on simple unweighted
averaging to merge expert layers, which does not address parameter interference
and requires extensive fine-tuning of the merged MoE to restore performance. To
address these limitations, this paper introduces new MoE merging techniques,
including strategies to mitigate parameter interference, routing heuristics to
reduce the need for MoE fine-tuning, and a novel method for merging experts
with different architectures. Extensive experiments across multiple domains
demonstrate the effectiveness of our proposed methods, reducing fine-tuning
costs, improving performance over state-of-the-art methods, and expanding the
applicability of MoE merging.
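To make the baseline concrete, here is the unweighted averaging criticized above, next to one known interference-mitigation idea (per-parameter sign election, in the spirit of TIES-merging; a generic stand-in, not necessarily this paper's strategy), both over flat parameter lists:

```python
def average_merge(expert_weights):
    """Unweighted averaging baseline for merging expert layers."""
    n = len(expert_weights)
    return [sum(ws) / n for ws in zip(*expert_weights)]

def sign_elect_merge(expert_weights):
    """Interference-aware merge: per parameter, elect the majority sign and
    average only the values that agree with it (TIES-style, illustrative)."""
    merged = []
    for ws in zip(*expert_weights):
        s = 1.0 if sum(1 if w > 0 else -1 if w < 0 else 0 for w in ws) >= 0 else -1.0
        kept = [w for w in ws if w * s > 0]
        merged.append(sum(kept) / len(kept) if kept else 0.0)
    return merged
```

On a parameter where two experts disagree in sign, plain averaging cancels them toward zero, while sign election keeps the majority direction intact; that cancellation is one form of the parameter interference mentioned above.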
|
2502.01000
|
Adapting Foundation Models for Few-Shot Medical Image Segmentation:
Actively and Sequentially
|
cs.CV
|
Recent advances in foundation models have brought promising results in
computer vision, including medical image segmentation. Fine-tuning foundation
models on specific low-resource medical tasks has become a standard practice.
However, ensuring reliable and robust model adaptation when the target task has
a large domain gap and few annotated samples remains a challenge. Previous
few-shot domain adaptation (FSDA) methods seek to bridge the distribution gap
between source and target domains by utilizing auxiliary data. The selection
and scheduling of auxiliaries are often based on heuristics, which can easily
cause negative transfer. In this work, we propose an Active and Sequential
domain AdaPtation (ASAP) framework for dynamic auxiliary dataset selection in
FSDA. We formulate FSDA as a multi-armed bandit problem and derive an efficient
reward function to prioritize training on auxiliary datasets that align closely
with the target task, through a single-round fine-tuning. Empirical validation
on diverse medical segmentation datasets demonstrates that our method achieves
favorable segmentation performance, significantly outperforming the
state-of-the-art FSDA methods, achieving an average gain of 27.75% on MRI and
7.52% on CT datasets in Dice score. Code is available at the git repository:
https://github.com/techicoco/ASAP.
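The abstract casts auxiliary-dataset selection as a multi-armed bandit but does not give the algorithm; a classical UCB1 loop (a generic stand-in, not ASAP's reward function) illustrates the exploration-exploitation trade-off over candidate auxiliary datasets:

```python
import math
import random

def ucb1_select(counts, rewards, t, c=2.0):
    """Score each arm (auxiliary dataset) by mean reward plus an exploration
    bonus; arms that have never been tried are pulled first."""
    if 0 in counts:
        return counts.index(0)
    scores = [r / n + math.sqrt(c * math.log(t) / n)
              for n, r in zip(counts, rewards)]
    return scores.index(max(scores))

def run_bandit(true_means, rounds, seed=0):
    rng = random.Random(seed)
    k = len(true_means)
    counts, rewards = [0] * k, [0.0] * k
    for t in range(1, rounds + 1):
        arm = ucb1_select(counts, rewards, t)
        counts[arm] += 1
        # Noisy proxy for the transfer gain observed after fine-tuning
        # one round on this auxiliary dataset.
        rewards[arm] += rng.gauss(true_means[arm], 0.05)
    return counts
```

Over time the loop concentrates its budget on the dataset with the best observed transfer gain while still occasionally probing the others, which is the behavior that replaces heuristic auxiliary scheduling.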
|
2502.01002
|
Multi-Resolution SAR and Optical Remote Sensing Image Registration
Methods: A Review, Datasets, and Future Perspectives
|
cs.CV
|
Synthetic Aperture Radar (SAR) and optical image registration is essential
for remote sensing data fusion, with applications in military reconnaissance,
environmental monitoring, and disaster management. However, challenges arise
from differences in imaging mechanisms, geometric distortions, and radiometric
properties between SAR and optical images. As image resolution increases, fine
SAR textures become more significant, leading to alignment issues and 3D
spatial discrepancies. Two major gaps exist: the lack of a publicly available
multi-resolution, multi-scene registration dataset and the absence of
systematic analysis of current methods. To address this, the MultiResSAR
dataset was created, containing over 10k pairs of multi-source,
multi-resolution, and multi-scene SAR and optical images. Sixteen
state-of-the-art algorithms were tested. Results show no algorithm achieves
100% success, and performance decreases as resolution increases, with most
failing on sub-meter data. XoFTR performs best among deep learning methods
(40.58%), while RIFT performs best among traditional methods (66.51%). Future
research should focus on noise suppression, 3D geometric fusion, cross-view
transformation modeling, and deep learning optimization for robust registration
of high-resolution SAR and optical images. The dataset is available at
https://github.com/betterlll/Multi-Resolution-SAR-dataset-.
|
2502.01004
|
ZeroBP: Learning Position-Aware Correspondence for Zero-shot 6D Pose
Estimation in Bin-Picking
|
cs.CV
|
Bin-picking is a practical and challenging robotic manipulation task, where
accurate 6D pose estimation plays a pivotal role. The workpieces in bin-picking
are typically textureless and randomly stacked in a bin, which poses a
significant challenge to 6D pose estimation. Existing solutions are typically
learning-based methods, which require object-specific training. Their practical deployment for novel workpieces is therefore severely limited by the cost of data collection and model retraining. Zero-shot 6D pose estimation is a
potential approach to address the issue of deployment efficiency. Nevertheless,
existing zero-shot 6D pose estimation methods are designed to leverage feature
matching to establish point-to-point correspondences for pose estimation, which
is less effective for workpieces with textureless appearances and ambiguous
local regions. In this paper, we propose ZeroBP, a zero-shot pose estimation
framework designed specifically for the bin-picking task. ZeroBP learns
Position-Aware Correspondence (PAC) between the scene instance and its CAD
model, leveraging both local features and global positions to resolve the
mismatch issue caused by ambiguous regions with similar shapes and appearances.
Extensive experiments on the ROBI dataset demonstrate that ZeroBP outperforms
state-of-the-art zero-shot pose estimation methods, achieving an improvement of
9.1% in average recall of correct poses.
|
2502.01009
|
Robust Trajectory Generation and Control for Quadrotor Motion Planning
with Field-of-View Control Barrier Certification
|
cs.RO
|
Many approaches to multi-robot coordination are susceptible to failure due to
communication loss and uncertainty in estimation. We present a real-time
communication-free distributed algorithm for navigating robots to their desired
goals, certified by control barrier functions that model and control the onboard sensing behavior to keep neighbors in the limited field of view for
position estimation. The approach is robust to temporary tracking loss and
directly synthesizes control in real time to stabilize visual contact through
control Lyapunov-barrier functions. The main contributions of this paper are a
continuous-time robust trajectory generation and control method certified by
control barrier functions for distributed multi-robot systems and a discrete
optimization procedure, namely, MPC-CBF, to approximate the certified
controller. In addition, we propose a linear surrogate of high-order control
barrier function constraints and use sequential quadratic programming to solve
MPC-CBF efficiently. We demonstrate results in simulation with 10 robots and
physical experiments with 2 custom-built UAVs. To the best of our knowledge,
this work is the first of its kind to generate a robust continuous-time
trajectory and controller concurrently, certified by control barrier functions
utilizing piecewise splines.
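For readers unfamiliar with the machinery, the certification referred to here rests on the standard control-barrier-function condition (general background, not this paper's specific construction): a safe set $\mathcal{C} = \{x : h(x) \ge 0\}$ is kept forward-invariant under dynamics $\dot{x} = f(x) + g(x)u$ if the control input satisfies

```latex
\sup_{u \in U} \left[ L_f h(x) + L_g h(x)\, u \right] \ge -\alpha\!\left( h(x) \right)
```

for an extended class-$\mathcal{K}$ function $\alpha$. MPC-CBF-style discretizations commonly enforce the per-step counterpart $h(x_{k+1}) \ge (1-\gamma)\, h(x_k)$ with $\gamma \in (0,1]$ as an optimization constraint; this is the kind of nonlinear constraint that the linear surrogate proposed above approximates for high-order barrier functions.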
|
2502.01012
|
Deep Active Learning based Experimental Design to Uncover Synergistic
Genetic Interactions for Host Targeted Therapeutics
|
cs.LG q-bio.QM stat.ME
|
Recent technological advances have introduced new high-throughput methods for
studying host-virus interactions, but testing synergistic interactions between
host gene pairs during infection remains relatively slow and labor intensive.
Identification of multiple gene knockdowns that effectively inhibit viral
replication requires a search over the combinatorial space of all possible
target gene pairs and is infeasible via brute-force experiments. Although
active learning methods for sequential experimental design have shown promise,
existing approaches have generally been restricted to single-gene knockdowns or
small-scale double knockdown datasets. In this study, we present an integrated
Deep Active Learning (DeepAL) framework that incorporates information from a
biological knowledge graph (SPOKE, the Scalable Precision Medicine Open
Knowledge Engine) to efficiently search the configuration space of a large
dataset of all pairwise knockdowns of 356 human genes in HIV infection. Through
graph representation learning, the framework is able to generate task-specific
representations of genes while also balancing the exploration-exploitation
trade-off to pinpoint highly effective double-knockdown pairs. We additionally
present an ensemble method for uncertainty quantification and an interpretation
of the gene pairs selected by our algorithm via pathway analysis. To our
knowledge, this is the first work to show promising results on double-gene
knockdown experimental data of appreciable scale (356 by 356 matrix).
|
2502.01013
|
Encrypted Large Model Inference: The Equivariant Encryption Paradigm
|
cs.CR cs.AI
|
Large-scale deep learning models, such as modern language models and diffusion architectures, have revolutionized applications ranging from natural language
processing to computer vision. However, their deployment in distributed or
decentralized environments raises significant privacy concerns, as sensitive
data may be exposed during inference. Traditional techniques like secure
multi-party computation, homomorphic encryption, and differential privacy offer
partial remedies but often incur substantial computational overhead, latency
penalties, or limited compatibility with non-linear network operations. In this
work, we introduce Equivariant Encryption (EE), a novel paradigm designed to
enable secure, "blind" inference on encrypted data with near zero performance
overhead. Unlike fully homomorphic approaches that encrypt the entire
computational graph, EE selectively obfuscates critical internal
representations within neural network layers while preserving the exact
functionality of both linear and a prescribed set of non-linear operations.
This targeted encryption ensures that raw inputs, intermediate activations, and
outputs remain confidential, even when processed on untrusted infrastructure.
We detail the theoretical foundations of EE, compare its performance and
integration complexity against conventional privacy preserving techniques, and
demonstrate its applicability across a range of architectures, from
convolutional networks to large language models. Furthermore, our work provides
a comprehensive threat analysis, outlining potential attack vectors and
baseline strategies, and benchmarks EE against standard inference pipelines in
decentralized settings. The results confirm that EE maintains high fidelity and
throughput, effectively bridging the gap between robust data confidentiality
and the stringent efficiency requirements of modern, large scale model
inference.
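The abstract leaves EE's construction abstract; as a toy intuition pump (not the paper's scheme, and far weaker than real encryption), a linear layer is equivariant under secret input and output permutations, so an untrusted server can apply a permuted weight matrix to permuted data without ever seeing the plaintext ordering:

```python
import random

def invert(perm):
    inv = [0] * len(perm)
    for i, p in enumerate(perm):
        inv[p] = i
    return inv

def linear(W, x):
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def encrypt_weights(W, P, Q):
    # W'[i][k] = W[P[i]][Q[k]], so that W' (Q-permuted x) = P-permuted (W x).
    n = len(W)
    return [[W[P[i]][Q[k]] for k in range(n)] for i in range(n)]

n = 4
rng = random.Random(0)
P = list(range(n)); rng.shuffle(P)   # secret output permutation (client key)
Q = list(range(n)); rng.shuffle(Q)   # secret input permutation (client key)
W = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(n)]

W_enc = encrypt_weights(W, P, Q)          # shipped to the untrusted server
x = [1.0, 2.0, -1.0, 0.5]
x_enc = [x[Q[i]] for i in range(n)]       # client permutes the input
y_enc = linear(W_enc, x_enc)              # server computes blindly
Pinv = invert(P)
y = [y_enc[Pinv[j]] for j in range(n)]    # client un-permutes the output
```

The server's computation is exact, with near-zero overhead, because the transformation commutes with the layer; extending such equivariance to a prescribed set of non-linear operations is the harder part that EE addresses.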
|
2502.01014
|
Refining Adaptive Zeroth-Order Optimization at Ease
|
cs.LG cs.AI
|
Zeroth-order (ZO) optimization plays an essential role in scenarios where
gradient information is inaccessible or unaffordable, such as black-box
systems and resource-constrained environments. While existing adaptive methods
such as ZO-AdaMM have shown promise, they are fundamentally limited by their
underutilization of moment information during optimization, usually resulting
in underperforming convergence. To overcome these limitations, this paper
introduces Refined Adaptive Zeroth-Order Optimization (R-AdaZO). Specifically,
we first show the untapped variance reduction effect of first moment estimate
on ZO gradient estimation, which improves the accuracy and stability of ZO
updates. We then refine the second moment estimate based on these
variance-reduced gradient estimates to better capture the geometry of the
optimization landscape, enabling a more effective scaling of ZO updates. We
present rigorous theoretical analysis showing (I) the first analysis of the
variance reduction of the first moment estimate in ZO optimization, (II) the
improved second moment estimates with a more accurate approximation of its
variance-free ideal, (III) the first variance-aware convergence framework for
adaptive ZO methods, which may be of independent interest, and (IV) the faster
convergence of R-AdaZO than existing baselines like ZO-AdaMM. Our extensive
experiments, including synthetic problems, black-box adversarial attack, and
memory-efficient fine-tuning of large language models (LLMs), further verify
the superior convergence of R-AdaZO, indicating that R-AdaZO offers an improved
solution for real-world ZO optimization challenges.
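As a rough, hypothetical reading of the abstract's central idea (not the authors' exact R-AdaZO algorithm), the loop below pairs a standard two-point ZO gradient estimator with Adam-style moments, where the second moment is refined from the variance-reduced first moment rather than from the raw noisy estimate:

```python
import math
import random

def zo_grad(f, x, mu=1e-3, rng=random):
    """Two-point zeroth-order gradient estimate along one random direction."""
    u = [rng.gauss(0, 1) for _ in x]
    fp = f([xi + mu * ui for xi, ui in zip(x, u)])
    fm = f([xi - mu * ui for xi, ui in zip(x, u)])
    s = (fp - fm) / (2 * mu)
    return [s * ui for ui in u]

def r_adazo_like(f, x, steps=500, lr=0.1, b1=0.9, b2=0.99, eps=1e-8):
    rng = random.Random(0)
    m = [0.0] * len(x)
    v = [0.0] * len(x)
    for _ in range(steps):
        g = zo_grad(f, x, rng=rng)
        m = [b1 * mi + (1 - b1) * gi for mi, gi in zip(m, g)]
        # Following the abstract's idea: the first moment m already acts as a
        # variance-reduced gradient estimate, so the second moment is built
        # from m rather than from the raw noisy estimate g.
        v = [b2 * vi + (1 - b2) * mi * mi for vi, mi in zip(v, m)]
        x = [xi - lr * mi / (math.sqrt(vi) + eps)
             for xi, mi, vi in zip(x, m, v)]
    return x

f = lambda z: sum(zi * zi for zi in z)   # toy black-box objective, f(3,-2) = 13
x_final = r_adazo_like(f, [3.0, -2.0])
assert f(x_final) < 1.0                  # converges far below the starting value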
|
2502.01015
|
Efficient Model Editing with Task Vector Bases: A Theoretical Framework
and Scalable Approach
|
cs.LG
|
Task vectors, which are derived from the difference between pre-trained and
fine-tuned model weights, enable flexible task adaptation and model merging
through arithmetic operations such as addition and negation. However, existing
approaches often rely on heuristics with limited theoretical support, leading
to performance gaps compared to direct task fine-tuning. Meanwhile,
although it is easy to manipulate saved task vectors with arithmetic for
different purposes, such compositional flexibility demands high memory usage,
especially when dealing with a huge number of tasks, limiting scalability. This
work addresses these issues with a theoretically grounded framework that
explains task vector arithmetic and introduces the task vector bases framework.
Building upon existing task arithmetic literature, our method significantly
reduces the memory cost for downstream arithmetic with little effort, while
achieving competitive performance and maintaining compositional advantage,
providing a practical solution for large-scale task arithmetic.
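The task-vector arithmetic that this framework builds on can be sketched in a few lines (a toy illustration with flat parameter lists; the paper's basis compression is not shown):

```python
def task_vector(pretrained, finetuned):
    """tau = theta_ft - theta_pre, stored per parameter."""
    return [ft - pre for pre, ft in zip(pretrained, finetuned)]

def apply(pretrained, vectors, coeffs):
    """Task arithmetic: theta = theta_pre + sum_i c_i * tau_i."""
    out = list(pretrained)
    for c, tau in zip(coeffs, vectors):
        out = [o + c * t for o, t in zip(out, tau)]
    return out

theta_pre = [0.0, 1.0, -1.0]
theta_a = [0.5, 1.0, -1.0]    # fine-tuned on task A
theta_b = [0.0, 2.0, -1.0]    # fine-tuned on task B

tau_a = task_vector(theta_pre, theta_a)
tau_b = task_vector(theta_pre, theta_b)

merged = apply(theta_pre, [tau_a, tau_b], [1.0, 1.0])   # multi-task merge
negated = apply(theta_pre, [tau_a], [-1.0])             # "forget" task A

assert merged == [0.5, 2.0, -1.0]
assert negated == [-0.5, 1.0, -1.0]
```

Storing a small basis of such vectors, rather than one per task, is what reduces the memory footprint described above.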
|
2502.01023
|
Vessel segmentation for X-separation
|
cs.CV q-bio.QM
|
$\chi$-separation is an advanced quantitative susceptibility mapping (QSM)
method that is designed to generate paramagnetic ($\chi_{para}$) and
diamagnetic ($|\chi_{dia}|$) susceptibility maps, reflecting the distribution
of iron and myelin in the brain. However, vessels appear as artifacts,
interfering with the accurate quantification of iron and myelin in
applications. To address this challenge, a new vessel segmentation method for
$\chi$-separation is developed. The method comprises three steps: 1) Seed
generation from $\textit{R}_2^*$ and the product of $\chi_{para}$ and
$|\chi_{dia}|$ maps; 2) Region growing, guided by vessel geometry, creating a
vessel mask; 3) Refinement of the vessel mask by excluding non-vessel
structures. The performance of the method was compared to conventional vessel
segmentation methods both qualitatively and quantitatively. To demonstrate the
utility of the method, it was tested in two applications: quantitative
evaluation of a neural network-based $\chi$-separation reconstruction method
($\chi$-sepnet-$\textit{R}_2^*$) and population-averaged region of interest
(ROI) analysis. The proposed method demonstrates superior performance to the
conventional vessel segmentation methods, effectively excluding the non-vessel
structures, achieving the highest Dice coefficient. In the applications,
applying vessel masks yields notable improvements in the quantitative
evaluation of $\chi$-sepnet-$\textit{R}_2^*$ and statistically significant
differences in the population-averaged ROI analysis. These results suggest
that excluding vessels when analyzing the $\chi$-separation maps provides more
accurate evaluations. The proposed method has the potential to facilitate
various applications, offering reliable analysis through the generation of a
high-quality vessel mask.
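Step 2 of the pipeline, region growing from seeds, can be illustrated with a minimal 2-D breadth-first version (a plain intensity window stands in for the paper's vessel-geometry guidance; all values here are invented for illustration):

```python
from collections import deque

def region_grow(img, seeds, lo, hi):
    """Grow a mask from seed pixels, accepting 4-connected neighbors whose
    intensity lies in [lo, hi] (a stand-in for vessel-geometry criteria)."""
    H, W = len(img), len(img[0])
    mask = [[0] * W for _ in range(H)]
    q = deque(seeds)
    for r, c in seeds:
        mask[r][c] = 1
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < H and 0 <= cc < W and not mask[rr][cc]
                    and lo <= img[rr][cc] <= hi):
                mask[rr][cc] = 1
                q.append((rr, cc))
    return mask

img = [
    [0, 9, 0, 0],
    [0, 9, 9, 0],
    [0, 0, 9, 0],
    [0, 0, 9, 0],
]
mask = region_grow(img, seeds=[(0, 1)], lo=5, hi=10)
assert sum(map(sum, mask)) == 5    # the whole bright "vessel" is captured
assert mask[0][0] == 0             # background stays excluded
```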
|
2502.01025
|
Knowing When to Stop: Dynamic Context Cutoff for Large Language Models
|
cs.CL
|
Large language models (LLMs) process entire input contexts indiscriminately,
which is inefficient in cases where the information required to answer a query
is localized within the context. We present dynamic context cutoff, a
human-inspired method enabling LLMs to self-terminate processing upon acquiring
sufficient task-relevant information. Through analysis of model internals, we
discover that specific attention heads inherently encode "sufficiency signals"
- detectable through lightweight classifiers - that predict when critical
information has been processed. This reveals a new efficiency paradigm: models'
internal understanding naturally dictates processing needs rather than external
compression heuristics. Comprehensive experiments across six QA datasets (up to
40K tokens) with three model families (LLaMA/Qwen/Mistral, 1B-70B) demonstrate
1.33x average token reduction while improving accuracy by 1.3%. Furthermore,
our method demonstrates better performance with the same rate of token
reduction compared to other context efficiency methods. Additionally, we
observe an emergent scaling phenomenon: while smaller models require probing
for sufficiency detection, larger models exhibit intrinsic
self-assessment capabilities through prompting.
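A toy sketch of the mechanism, assuming the "sufficiency signal" is read out by a lightweight linear probe applied per context chunk (the features, probe weights, and threshold below are invented for illustration, not the paper's trained classifiers):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def process_with_cutoff(chunk_feats, w, b, threshold=0.9):
    """Stream context chunks; after each one, a linear probe on a (stand-in)
    internal feature predicts whether enough information has been seen.
    Processing self-terminates once the probe clears the threshold."""
    consumed = []
    for feat in chunk_feats:
        consumed.append(feat)
        sufficiency = sigmoid(sum(wi * fi for wi, fi in zip(w, feat)) + b)
        if sufficiency >= threshold:
            break
    return consumed

# Toy features: the third chunk carries the "answer" signal in dimension 0.
chunks = [[0.0, 0.2], [0.1, 0.1], [5.0, 0.0], [0.0, 0.3], [0.1, 0.1]]
seen = process_with_cutoff(chunks, w=[1.0, 0.0], b=-1.0, threshold=0.9)
assert len(seen) == 3   # stops right after the informative chunk, saving 2 chunks
```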
|
2502.01027
|
Adversarial Robustness in Two-Stage Learning-to-Defer: Algorithms and
Guarantees
|
stat.ML cs.LG
|
Learning-to-Defer (L2D) facilitates optimal task allocation between AI
systems and decision-makers. Despite its potential, we show that current
two-stage L2D frameworks are highly vulnerable to adversarial attacks, which
can misdirect queries or overwhelm decision agents, significantly degrading
system performance. This paper conducts the first comprehensive analysis of
adversarial robustness in two-stage L2D frameworks. We introduce two novel
attack strategies -- untargeted and targeted -- that exploit inherent
structural vulnerabilities in these systems. To mitigate these threats, we
propose SARD, a robust, convex deferral algorithm rooted in Bayes and
$(\mathcal{R},\mathcal{G})$-consistency. Our approach guarantees optimal task
allocation under adversarial perturbations for all surrogates in the
cross-entropy family. Extensive experiments on classification, regression, and
multi-task benchmarks validate the robustness of SARD.
|
2502.01029
|
Comprehensive Modeling Approaches for Forecasting Bitcoin Transaction
Fees: A Comparative Study
|
cs.LG cs.AI
|
Transaction fee prediction in Bitcoin's ecosystem represents a crucial
challenge affecting both user costs and miner revenue optimization. This study
presents a systematic evaluation of six predictive models for forecasting
Bitcoin transaction fees across a 24-hour horizon (144 blocks): SARIMAX,
Prophet, Time2Vec, Time2Vec with Attention, a Hybrid model combining SARIMAX
with Gradient Boosting, and the Temporal Fusion Transformer (TFT). Our approach
integrates comprehensive feature engineering spanning mempool metrics, network
parameters, and historical fee patterns to capture the multifaceted dynamics of
fee behavior.
Through rigorous 5-fold cross-validation and independent testing, our
analysis reveals that traditional statistical approaches outperform more
complex deep learning architectures. The SARIMAX model achieves superior
accuracy on the independent test set, while Prophet demonstrates strong
performance during cross-validation. Notably, sophisticated deep learning
models like Time2Vec and TFT show comparatively lower predictive power despite
their architectural complexity. This performance disparity likely stems from
the relatively constrained training dataset of 91 days, suggesting that deep
learning models may achieve enhanced results with extended historical data.
These findings offer significant practical implications for cryptocurrency
stakeholders, providing empirically-validated guidance for fee-sensitive
decision making while illuminating critical considerations in model selection
based on data constraints. The study establishes a foundation for advanced fee
prediction while highlighting the current advantages of traditional statistical
methods in this domain.
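For a 24-hour (144-block) horizon, cross-validation must respect time order. One common choice consistent with the setup described, though not necessarily the authors' exact protocol, is expanding-window (rolling-origin) splitting:

```python
def rolling_origin_splits(n, n_folds=5, horizon=144):
    """Expanding-window splits: each fold trains on everything before its
    test window and tests on the next `horizon` points (144 blocks ~ 24h)."""
    splits = []
    test_starts = [n - (n_folds - k) * horizon for k in range(n_folds)]
    for start in test_starts:
        train = list(range(0, start))
        test = list(range(start, start + horizon))
        splits.append((train, test))
    return splits

splits = rolling_origin_splits(n=1000, n_folds=5, horizon=144)
assert len(splits) == 5
for train, test in splits:
    # No test point ever precedes its training data.
    assert max(train) < min(test)
# The five 144-point test windows tile the tail of the series.
assert splits[-1][1][-1] == 999
```

This avoids the look-ahead leakage that ordinary shuffled k-fold would introduce for autocorrelated fee data.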
|
2502.01031
|
DiffIM: Differentiable Influence Minimization with Surrogate Modeling
and Continuous Relaxation
|
cs.LG cs.SI
|
In social networks, people influence each other through social links, which
can be represented as propagation among nodes in graphs. Influence minimization
(IMIN) is the problem of manipulating the structures of an input graph (e.g.,
removing edges) to reduce the propagation among nodes. IMIN can represent
time-critical real-world applications, such as rumor blocking, but IMIN is
theoretically difficult and computationally expensive. Moreover, the discrete
nature of IMIN hinders the use of powerful machine learning techniques, which
require differentiable computation. In this work, we propose DiffIM, a novel
method for IMIN with two differentiable schemes for acceleration: (1) surrogate
modeling for efficient influence estimation, which avoids time-consuming
simulations (e.g., Monte Carlo), and (2) the continuous relaxation of
decisions, which avoids the evaluation of individual discrete decisions (e.g.,
removing an edge). We further propose a third accelerating scheme,
gradient-driven selection, that chooses edges instantly based on gradients
without optimization (spec., gradient descent iterations) on each test
instance. Through extensive experiments on real-world graphs, we show that each
proposed scheme significantly improves speed with little (or even no) IMIN
performance degradation. Our method is Pareto-optimal (i.e., no baseline is
faster and more effective than it) and typically several orders of magnitude
(spec., up to 15,160X) faster than the most effective baseline while being more
effective.
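Scheme (2) and the gradient-driven selection of scheme (3) can be sketched on a toy graph, using a one-hop spread proxy as a (hypothetical) differentiable surrogate; DiffIM's actual surrogate is a learned model, not this closed form:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def one_step_influence(edges, gates, seeds):
    """Differentiable proxy for spread: expected one-hop activations from the
    seed set, with each edge (u, v, p) softly kept by a gate in (0, 1)."""
    return sum(p * sigmoid(g) for (u, v, p), g in zip(edges, gates) if u in seeds)

def influence_grads(edges, gates, seeds):
    """d influence / d gate logit, per edge (closed form for this proxy)."""
    grads = []
    for (u, v, p), g in zip(edges, gates):
        s = sigmoid(g)
        grads.append(p * s * (1 - s) if u in seeds else 0.0)
    return grads

edges = [(0, 1, 0.9), (0, 2, 0.2), (1, 3, 0.8)]  # (src, dst, propagation prob)
gates = [0.0, 0.0, 0.0]                           # all edges softly "kept"
seeds = {0}

# Gradient-driven selection: remove the edge whose gate gradient is largest,
# i.e. the edge whose (relaxed) removal most reduces influence -- no per-instance
# gradient-descent iterations needed.
grads = influence_grads(edges, gates, seeds)
best = max(range(len(edges)), key=lambda i: grads[i])
assert edges[best] == (0, 1, 0.9)

base = one_step_influence(edges, gates, seeds)
gates[best] = -10.0                               # hard-remove the selected edge
assert one_step_influence(edges, gates, seeds) < base
```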
|
2502.01032
|
Converting MLPs into Polynomials in Closed Form
|
cs.LG stat.ML
|
Recent work has shown that purely quadratic functions can replace MLPs in
transformers with no significant loss in performance, while enabling new
methods of interpretability based on linear algebra. In this work, we
theoretically derive closed-form least-squares optimal approximations of
feedforward networks (multilayer perceptrons and gated linear units) using
polynomial functions of arbitrary degree. When the $R^2$ is high, this allows
us to interpret MLPs and GLUs by visualizing the eigendecomposition of the
coefficients of their linear and quadratic approximants. We also show that
these approximants can be used to create SVD-based adversarial examples. By
tracing the $R^2$ of linear and quadratic approximants across training time, we
find new evidence that networks start out simple, and get progressively more
complex. Even at the end of training, however, our quadratic approximants
explain over 95% of the variance in network outputs.
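In one dimension, the closed-form least-squares machinery reduces to fitting polynomial coefficients via the normal equations; the stdlib-only toy below (not the paper's multivariate derivation) recovers an exactly quadratic "network" in closed form:

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (tiny dense systems)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def quad_fit(xs, ys):
    """Closed-form least squares: minimize ||Phi c - y||^2 with features
    Phi = [1, x, x^2], i.e. solve the normal equations Phi^T Phi c = Phi^T y."""
    phi = [[1.0, x, x * x] for x in xs]
    A = [[sum(p[i] * p[j] for p in phi) for j in range(3)] for i in range(3)]
    b = [sum(p[i] * y for p, y in zip(phi, ys)) for i in range(3)]
    return solve(A, b)

xs = [i / 10 for i in range(-20, 21)]
ys = [1 + 2 * x + 3 * x * x for x in xs]   # a "network" that is exactly quadratic
c0, c1, c2 = quad_fit(xs, ys)
assert abs(c0 - 1) < 1e-6 and abs(c1 - 2) < 1e-6 and abs(c2 - 3) < 1e-6
```

For a genuinely quadratic target the fit is exact (R^2 = 1); for a real MLP the residual of this projection is what the abstract's R^2 tracks over training.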
|
2502.01033
|
PARA: Parameter-Efficient Fine-tuning with Prompt Aware Representation
Adjustment
|
cs.CL
|
In the realm of parameter-efficient fine-tuning (PEFT) methods, while options
like LoRA are available, there is a persistent demand in the industry for a
PEFT approach that excels in both efficiency and performance within the context
of single-backbone multi-tenant applications. This paper introduces a new and
straightforward PEFT technique, termed \underline{P}rompt \underline{A}ware
\underline{R}epresentation \underline{A}djustment (PARA). The core of our
proposal is to integrate a lightweight vector generator within each Transformer
layer. This generator produces vectors that are responsive to input prompts,
thereby adjusting the hidden representations accordingly. Our extensive
experimentation across diverse tasks has yielded promising results. Firstly,
the PARA method has been shown to surpass current PEFT benchmarks in terms of
performance, despite having a similar number of adjustable parameters.
Secondly, it has proven to be more efficient than LoRA in the single-backbone
multi-tenant scenario, highlighting its significant potential for industrial
adoption.
|
2502.01034
|
End-to-End Imitation Learning for Optimal Asteroid Proximity Operations
|
cs.RO cs.LG
|
Controlling spacecraft near asteroids in deep space comes with many
challenges. The delays involved necessitate heavy usage of limited onboard
computation resources while fuel efficiency remains a priority to support the
long loiter times needed for gathering data. Additionally, the difficulty of
state determination due to the lack of traditional reference systems requires a
guidance, navigation, and control (GNC) pipeline that ideally is both
computationally and fuel-efficient, and that incorporates a robust state
determination system. In this paper, we propose an end-to-end algorithm
utilizing neural networks to generate near-optimal control commands from raw
sensor data, as well as a hybrid model predictive control (MPC) guided
imitation learning controller delivering improvements in computational
efficiency over a traditional MPC controller.
|
2502.01035
|
UASTHN: Uncertainty-Aware Deep Homography Estimation for UAV
Satellite-Thermal Geo-localization
|
cs.RO cs.CV
|
Geo-localization is an essential component of Unmanned Aerial Vehicle (UAV)
navigation systems to ensure precise absolute self-localization in outdoor
environments. To address the challenges of GPS signal interruptions or low
illumination, Thermal Geo-localization (TG) employs aerial thermal imagery to
align with reference satellite maps to accurately determine the UAV's location.
However, existing TG methods lack uncertainty measurement in their outputs,
compromising system robustness in the presence of textureless or corrupted
thermal images, self-similar or outdated satellite maps, geometric noises, or
thermal images exceeding satellite maps. To overcome these limitations, this
paper presents \textit{UASTHN}, a novel approach for Uncertainty Estimation
(UE) in Deep Homography Estimation (DHE) tasks for TG applications.
Specifically, we introduce a novel Crop-based Test-Time Augmentation (CropTTA)
strategy, which leverages the homography consensus of cropped image views to
effectively measure data uncertainty. This approach is complemented by Deep
Ensembles (DE) employed for model uncertainty, offering comparable performance
with improved efficiency and seamless integration with any DHE model. Extensive
experiments across multiple DHE models demonstrate the effectiveness and
efficiency of CropTTA in TG applications. Analysis of detected failure cases
underscores the improved reliability of CropTTA under challenging conditions.
Finally, we demonstrate the capability of combining CropTTA and DE for a
comprehensive assessment of both data and model uncertainty. Our research
provides profound insights into the broader intersection of localization and
uncertainty estimation. The code and data are publicly available.
|
2502.01036
|
eagle: early approximated gradient based learning rate estimator
|
cs.LG cs.AI
|
We propose the EAGLE update rule, a novel optimization method that accelerates
loss convergence during the early stages of training by leveraging both current
and previous step parameter and gradient values. The update algorithm estimates
optimal parameters by computing the changes in parameters and gradients between
consecutive training steps and leveraging the local curvature of the loss
landscape derived from these changes. However, this update rule has potential
instability, and to address that, we introduce an adaptive switching mechanism
that dynamically selects between Adam and EAGLE update rules to enhance
training stability. Experiments on standard benchmark datasets demonstrate
that the EAGLE optimizer, which combines this novel update rule with the
switching mechanism, achieves rapid training loss convergence in fewer epochs
than conventional optimization methods.
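One hypothetical reading of such an update rule: estimate per-coordinate curvature from consecutive parameter/gradient differences (a secant approximation) and take a Newton-like step, falling back to a stable rule when the estimate misbehaves (plain gradient descent stands in below; the paper switches to Adam):

```python
def eagle_like_step(theta, g, prev_theta, prev_g, lr=0.1, eps=1e-12):
    """Per-coordinate secant step: estimate local curvature from consecutive
    parameter/gradient differences and take a Newton-like step; fall back to
    plain gradient descent where the estimate is unstable (tiny or negative)."""
    new = []
    for th, gi, pth, pg in zip(theta, g, prev_theta, prev_g):
        h = (gi - pg) / (th - pth + eps)   # secant curvature estimate
        if h > 1e-6:
            new.append(th - gi / h)        # approximate Newton step
        else:
            new.append(th - lr * gi)       # stable fallback
    return new

grad = lambda x: [4 * xi for xi in x]      # gradient of f(x) = 2 * ||x||^2
theta_prev = [1.0, -1.0]
theta = [th - 0.1 * g for th, g in zip(theta_prev, grad(theta_prev))]  # one GD step
theta_next = eagle_like_step(theta, grad(theta), theta_prev, grad(theta_prev))
# For a quadratic, the secant curvature is exact, so one step lands at the optimum.
assert all(abs(t) < 1e-6 for t in theta_next)
```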
|
2502.01039
|
Geoinformatics-Guided Machine Learning for Power Plant Classification
|
cs.LG
|
This paper proposes an approach in the area of Knowledge-Guided Machine
Learning (KGML) via a novel integrated framework comprising CNN (Convolutional
Neural Networks) and ViT (Vision Transformers) along with GIS (Geographic
Information Systems) to enhance power plant classification in the context of
energy management. Knowledge from geoinformatics derived through Spatial Masks
(SM) in GIS is infused into an architecture of CNN and ViT, in this proposed
KGML approach. It is found to provide much better performance than the
CNN-and-ViT-only baseline in the classification of multiple types of power
plants from real satellite imagery, hence emphasizing the vital role of the
geoinformatics-guided approach. This work makes a contribution to the main
theme of KGML that can be beneficial in many AI systems today. It makes broader
impacts on AI in Smart Cities, and Environmental Computing.
|
2502.01041
|
Multi-Object Active Search and Tracking by Multiple Agents in Untrusted,
Dynamically Changing Environments
|
cs.RO
|
This paper addresses the problem of both actively searching and tracking
multiple unknown dynamic objects in a known environment with multiple
cooperative autonomous agents with partial observability. The tracking of a
target ends when the uncertainty is below a threshold. Current methods
typically assume homogeneous agents without access to external information and
utilize short-horizon target predictive models. Such assumptions limit
real-world applications. We propose a fully integrated pipeline where the main
contributions are: (1) a time-varying weighted belief representation capable of
handling knowledge that changes over time, which includes external reports of
varying levels of trustworthiness in addition to the agents; (2) the
integration of a Long Short Term Memory-based trajectory prediction within the
optimization framework for long-horizon decision-making, which reasons in
time-configuration space, thus increasing responsiveness; and (3) a
comprehensive system that accounts for multiple agents and enables
information-driven optimization. When communication is available, our strategy
consolidates exploration results collected asynchronously by agents and
external sources into a headquarters, which can allocate each agent to maximize
the overall team's utility, using all available information. We tested our
approach extensively in simulations against baselines, and in robustness and
ablation studies. In addition, we performed experiments in a 3D physics-based
robot simulator to test the applicability in the real world, as well as
with real-world trajectories obtained from an oceanography computational fluid
dynamics simulator. Results show the effectiveness of our method, which
achieves mission completion times 1.3 to 3.2 times faster in finding all
targets, even under the most challenging scenarios where the number of targets
is 5 times greater than that of the agents.
|
2502.01042
|
Internal Activation as the Polar Star for Steering Unsafe LLM Behavior
|
cs.LG
|
Large language models (LLMs) have demonstrated exceptional capabilities
across a wide range of tasks but also pose significant risks due to their
potential to generate harmful content. Although existing safety mechanisms can
improve model safety, they often lead to overly cautious behavior and fail to
fully utilize LLMs' internal cognitive processes. Drawing inspiration from
cognitive science, where humans rely on reflective reasoning (System 2
thinking) to regulate language and behavior, we empirically demonstrate that
LLMs also possess a similar capacity for internal assessment and regulation,
which can be actively detected.
Building on this insight, we introduce SafeSwitch, a framework that
dynamically regulates unsafe outputs by monitoring and utilizing the model's
internal states. Our empirical results show that SafeSwitch reduces harmful
outputs by over 80% on safety benchmarks while maintaining strong utility.
Compared to traditional safety alignment methods, SafeSwitch delivers more
informative and context-aware refusals, demonstrates resilience to unseen
queries, and achieves these benefits while only tuning less than 6% of the
original parameters. These features make SafeSwitch a promising approach for
implementing nuanced safety controls in LLMs.
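A minimal sketch of such switching logic, assuming the internal signal is summarized by a linear probe score (probe weights, threshold, and refusal text are illustrative stand-ins, not SafeSwitch's actual components):

```python
import math

def probe_score(hidden, w, b):
    """Linear probe on an internal activation: predicted risk in (0, 1)."""
    z = sum(wi * hi for wi, hi in zip(w, hidden)) + b
    return 1.0 / (1.0 + math.exp(-z))

def safeswitch_like(hidden, w, b, answer, threshold=0.5):
    """Route generation on the probe's verdict: answer normally when the
    internal state looks safe, otherwise emit an informative refusal."""
    risk = probe_score(hidden, w, b)
    if risk < threshold:
        return answer
    return "I can't help with that; the request appears unsafe."

w, b = [2.0, -1.0], -0.5
assert safeswitch_like([0.1, 0.4], w, b, "sure, here's how") == "sure, here's how"
assert safeswitch_like([1.5, 0.0], w, b, "sure, here's how").startswith("I can't")
```

Because only the probe and routing are added, the base model's behavior on safe queries is untouched, which is consistent with the small fraction of tuned parameters reported above.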
|
2502.01043
|
Multiphysics Continuous Shape Optimization of the TAP Reactor Components
|
cs.CE
|
The Transatomic Power (TAP) reactor has an unusual design for a molten salt
reactor technology, building upon the foundation laid by the Molten Salt
Reactor Experiment (MSRE). This design introduces three key modifications to
enhance efficiency and compactness: a revised fuel salt composition, an
alternative moderator material, and moderator pins surrounded by the molten
salt fuel. Unlike traditional solid-fueled reactors that rely on excess
positive reactivity at the beginning of life, the TAP concept employs a dynamic
approach. The core's design, featuring a cylindrical geometry with square
assemblies of moderator rods surrounded by flowing fuel salt, provides
flexibility in adjusting the moderator-to-fuel ratio during operation - using
movable moderator rods - further adding criticality control capability in
addition to the control rods system. Shape optimization of the core can play a
crucial role in enhancing performance and efficiency. By applying multiphysics
continuous shape optimization techniques to key components, such as the unit
cells of the TAP reactor or its moderator assemblies, we can fine-tune the
reactor's geometry to achieve optimal performance in key physics like
neutronics and thermal hydraulics. We explore this aspect using the
optimization module in the Multiphysics Object Oriented Simulation Environment
(MOOSE) framework which allows for multiphysics continuous shape optimization.
The results reported here illustrate the benefits of applying continuous shape
optimization in the design of nuclear reactor components and can help in
extending the TAP reactor's performance.
|
2502.01044
|
Nonlinear receding-horizon differential game for drone racing along a
three-dimensional path
|
eess.SY cs.SY
|
Drone racing involves high-speed navigation of three-dimensional paths,
posing a substantial challenge in control engineering. This study presents a
game-theoretic control framework, the nonlinear receding-horizon differential
game (NRHDG), designed for competitive drone racing. NRHDG enhances robustness
in adversarial settings by predicting and countering an opponent's worst-case
behavior in real time. It extends standard nonlinear model predictive control
(NMPC), which otherwise assumes a fixed opponent model. First, we develop a
novel path-following formulation based on projection point dynamics,
eliminating the need for costly distance minimization. Second, we propose a
potential function that allows each drone to switch between overtaking and
obstructing maneuvers based on real-time race situations. Third, we establish a
new performance metric to evaluate NRHDG with NMPC under race scenarios.
Simulation results demonstrate that NRHDG outperforms NMPC in terms of both
overtaking efficiency and obstructing capabilities.
|
2502.01045
|
WonderHuman: Hallucinating Unseen Parts in Dynamic 3D Human
Reconstruction
|
cs.CV cs.GR
|
In this paper, we present WonderHuman to reconstruct dynamic human avatars
from a monocular video for high-fidelity novel view synthesis. Previous dynamic
human avatar reconstruction methods typically require the input video to have
full coverage of the observed human body. However, in daily practice, one
typically has access to limited viewpoints, such as monocular front-view
videos, making it a cumbersome task for previous methods to reconstruct the
unseen parts of the human avatar. To tackle the issue, we present WonderHuman,
which leverages 2D generative diffusion model priors to achieve high-quality,
photorealistic reconstructions of dynamic human avatars from monocular videos,
including accurate rendering of unseen body parts. Our approach introduces a
Dual-Space Optimization technique, applying Score Distillation Sampling (SDS)
in both canonical and observation spaces to ensure visual consistency and
enhance realism in dynamic human reconstruction. Additionally, we present a
View Selection strategy and Pose Feature Injection to enforce the consistency
between SDS predictions and observed data, ensuring pose-dependent effects and
higher fidelity in the reconstructed avatar. In the experiments, our method
achieves SOTA performance in producing photorealistic renderings from the given
monocular video, particularly for those challenging unseen parts. The project
page and source code can be found at https://wyiguanw.github.io/WonderHuman/.
|
2502.01046
|
Emotional Face-to-Speech
|
cs.SD cs.CV eess.AS
|
How much can we infer about an emotional voice solely from an expressive
face? This intriguing question holds great potential for applications such as
virtual character dubbing and aiding individuals with expressive language
disorders. Existing face-to-speech methods offer great promise in capturing
identity characteristics but struggle to generate diverse vocal styles with
emotional expression. In this paper, we explore a new task, termed emotional
face-to-speech, aiming to synthesize emotional speech directly from expressive
facial cues. To that end, we introduce DEmoFace, a novel generative framework
that leverages a discrete diffusion transformer (DiT) with curriculum learning,
built upon a multi-level neural audio codec. Specifically, we propose
multimodal DiT blocks to dynamically align text and speech while tailoring
vocal styles based on facial emotion and identity. To enhance training
efficiency and generation quality, we further introduce a coarse-to-fine
curriculum learning algorithm for multi-level token processing. In addition, we
develop an enhanced predictor-free guidance to handle diverse conditioning
scenarios, enabling multi-conditional generation and disentangling complex
attributes effectively. Extensive experimental results demonstrate that
DEmoFace generates more natural and consistent speech compared to baselines,
even surpassing speech-driven methods. Demos are shown at
https://demoface-ai.github.io/.
|
2502.01048
|
Sparks of Explainability: Recent Advancements in Explaining Large Vision
Models
|
cs.CV cs.AI
|
This thesis explores advanced approaches to improve explainability in
computer vision by analyzing and modeling the features exploited by deep neural
networks. Initially, it evaluates attribution methods, notably saliency maps,
by introducing a metric based on algorithmic stability and an approach
utilizing Sobol indices, which, through quasi-Monte Carlo sequences, allows a
significant reduction in computation time. In addition, the EVA method offers a
first formulation of attribution with formal guarantees via verified
perturbation analysis.
Experimental results indicate that in complex scenarios these methods do not
provide sufficient understanding, particularly because they identify only
"where" the model focuses without clarifying "what" it perceives. Two
hypotheses are therefore examined: aligning models with human reasoning --
through the introduction of a training routine that integrates the imitation of
human explanations and optimization within the space of 1-Lipschitz functions
-- and adopting a conceptual explainability approach.
The CRAFT method is proposed to automate the extraction of the concepts used
by the model and to assess their importance, complemented by MACO, which
enables their visualization. These works converge towards a unified framework,
illustrated by an interactive demonstration applied to the 1000 ImageNet
classes in a ResNet model.
|
2502.01050
|
AutoDDG: Automated Dataset Description Generation using Large Language
Models
|
cs.DB
|
The proliferation of datasets across open data portals and enterprise data
lakes presents an opportunity for deriving data-driven insights. However,
widely-used dataset search systems rely on keyword searches over dataset
metadata, including descriptions, to facilitate discovery. When these
descriptions are incomplete, missing, or inconsistent with dataset contents,
findability is severely hindered. In this paper, we address the problem of
automatic dataset description generation: how to generate informative
descriptions that enhance dataset discovery and support relevance assessment.
We introduce AutoDDG, a framework for automated dataset description generation
tailored for tabular data. To derive descriptions that are comprehensive,
accurate, readable and concise, AutoDDG adopts a data-driven approach to
summarize the contents of a dataset, and leverages LLMs to both enrich the
summaries with semantic information and to derive human-readable descriptions.
An important challenge for this problem is how to evaluate the effectiveness of
methods for data description generation and the quality of the descriptions. We
propose a multi-pronged evaluation strategy that: (1) measures the improvement
in dataset retrieval within a dataset search engine, (2) compares generated
descriptions to existing ones (when available), and (3) evaluates intrinsic
quality metrics such as readability, faithfulness to the data, and conciseness.
Additionally, we introduce two new benchmarks to support this evaluation. Our
experimental results, using these benchmarks, demonstrate that AutoDDG
generates high-quality, accurate descriptions and significantly improves
dataset retrieval performance across diverse use cases.
|
2502.01051
|
Diffusion Model as a Noise-Aware Latent Reward Model for Step-Level
Preference Optimization
|
cs.CV
|
Preference optimization for diffusion models aims to align them with human
preferences for images. Previous methods typically leverage Vision-Language
Models (VLMs) as pixel-level reward models to approximate human preferences.
However, when used for step-level preference optimization, these models face
challenges in handling noisy images of different timesteps and require complex
transformations into pixel space. In this work, we demonstrate that diffusion
models are inherently well-suited for step-level reward modeling in the latent
space, as they can naturally extract features from noisy latent images.
Accordingly, we propose the Latent Reward Model (LRM), which repurposes
components of diffusion models to predict preferences of latent images at
various timesteps. Building on LRM, we introduce Latent Preference Optimization
(LPO), a method designed for step-level preference optimization directly in the
latent space. Experimental results indicate that LPO not only significantly
enhances performance in aligning diffusion models with general, aesthetic, and
text-image alignment preferences, but also achieves 2.5-28$\times$ training
speedup compared to existing preference optimization methods. Our code will be
available at https://github.com/casiatao/LPO.
|
2502.01053
|
Hybrid Firefly Algorithm and Sperm Swarm Optimization Algorithm using
Newton-Raphson Method (HFASSON) and its application in CR-VANET
|
cs.NE
|
This paper proposes a new hybrid algorithm, the Hybrid Firefly Algorithm and
Sperm Swarm Optimization with Newton-Raphson (HFASSON), which combines the
Firefly Algorithm (FA), Sperm Swarm Optimization (SSO), and the Newton-Raphson
(N-R) method to accelerate convergence towards global optima.
The performance of HFASSON is evaluated using 23 benchmark functions from the
CEC 2017 suite, tested in 30, 50, and 100 dimensions. A statistical comparison
is performed to assess the effectiveness of HFASSON against FA, SSO, HFASSO,
and five hybrid algorithms: Water Cycle Moth Flame Optimization (WCMFO), Hybrid
Particle Swarm Optimization and Genetic Algorithm (HPSOGA), Hybrid Sperm Swarm
Optimization and Gravitational Search Algorithm (HSSOGSA), Grey Wolf and Cuckoo
Search Algorithm (GWOCS), and Hybrid Firefly Genetic Algorithm (FAGA). Results
from the Friedman rank test show the superior performance of HFASSON.
Additionally, HFASSON is applied to Cognitive Radio Vehicular Ad-hoc Networks
(CR-VANET), outperforming basic CR-VANET in spectrum utilization. These
findings demonstrate HFASSON's efficiency in wireless network applications.
|
2502.01055
|
On the Surprising Robustness of Sequential Convex Optimization for
Contact-Implicit Motion Planning
|
math.OC cs.RO
|
Contact-implicit motion planning, which embeds contact sequencing as implicit
complementarity constraints, holds the promise of leveraging continuous
optimization to discover new contact patterns online. Nevertheless, the
resulting optimization, an instance of Mathematical Programming with
Complementarity Constraints, fails the classical constraint qualifications that
are crucial for the convergence of popular numerical solvers. We present robust
contact-implicit motion planning with sequential convex programming (CRISP), a
solver that departs from the usual primal-dual algorithmic framework but
instead only focuses on the primal problem. CRISP solves a convex quadratic
program with an adaptive trust region radius at each iteration, and its
convergence is evaluated by a merit function using weighted penalty. We (i)
provide sufficient conditions on CRISP's convergence to first-order stationary
points of the merit function; (ii) release a high-performance C++
implementation of CRISP with a generic nonlinear programming interface; and
(iii) demonstrate CRISP's surprising robustness in solving contact-implicit
planning with naive initialization. In fact, CRISP solves several
contact-implicit problems with all-zero initialization.
|
2502.01056
|
Mitigating Hallucinations in Large Vision-Language Models with Internal
Fact-based Contrastive Decoding
|
cs.CV cs.CL
|
Large Vision-Language Models (LVLMs) integrate visual and linguistic
modalities, exhibiting exceptional performance across various multimodal tasks.
Nevertheless, LVLMs remain vulnerable to the issue of object hallucinations.
Previous efforts to mitigate this issue focus on supervised fine-tuning (SFT)
or incorporating external knowledge, both of which entail significant costs
related to training and the acquisition of external data. To address these
challenges, we propose a novel model-agnostic approach termed Internal
Fact-based Contrastive Decoding (IFCD), designed to mitigate and suppress
hallucinations during the inference process of LVLMs by exploiting the LVLMs'
own hallucinations. IFCD is grounded in experimental observations that
alterations to the LVLMs' internal representations tend to amplify
hallucinations caused by language bias. By contrasting the disturbed distributions,
IFCD calibrates the LVLMs' output and effectively removes the hallucinatory
logits from the final predictions. Experimental results validate that IFCD
significantly alleviates both object-level and attribute-level hallucinations
while achieving an average 9% accuracy improvement on POPE and 8% accuracy
improvement on MME object hallucinations subset compared with direct decoding,
respectively.
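The contrastive step described above can be sketched generically; the function name, the greedy argmax, and the plausibility threshold `beta` are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

def contrastive_decode(logits_orig, logits_disturbed, alpha=1.0, beta=0.1):
    """Generic contrastive-decoding step (a sketch of the idea behind IFCD).

    Tokens whose probability rises under the disturbed (hallucination-
    amplifying) pass are penalized; an adaptive plausibility mask keeps
    only tokens that are reasonably likely under the original model.
    """
    # Adaptive plausibility constraint: keep tokens within beta of the max prob.
    probs = np.exp(logits_orig - logits_orig.max())
    probs /= probs.sum()
    keep = probs >= beta * probs.max()

    # Contrast the two distributions in logit space.
    contrast = (1 + alpha) * logits_orig - alpha * logits_disturbed
    contrast[~keep] = -np.inf          # mask implausible tokens
    return int(np.argmax(contrast))   # greedy pick for illustration
```

When the disturbed pass equals the original one, the contrast reduces to the original logits, so the filter is a no-op on unperturbed inputs.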
|
2502.01057
|
FetDTIAlign: A Deep Learning Framework for Affine and Deformable
Registration of Fetal Brain dMRI
|
eess.IV cs.AI
|
Diffusion MRI (dMRI) provides unique insights into fetal brain microstructure
in utero. Longitudinal and cross-sectional fetal dMRI studies can reveal
crucial neurodevelopmental changes but require precise spatial alignment across
scans and subjects. This is challenging due to low data quality, rapid brain
development, and limited anatomical landmarks. Existing registration methods,
designed for high-quality adult data, struggle with these complexities. To
address this, we introduce FetDTIAlign, a deep learning approach for fetal
brain dMRI registration, enabling accurate affine and deformable alignment.
FetDTIAlign features a dual-encoder architecture and iterative feature-based
inference, reducing the impact of noise and low resolution. It optimizes
network configurations and domain-specific features at each registration stage,
enhancing both robustness and accuracy. We validated FetDTIAlign on data from
23 to 36 weeks gestation, covering 60 white matter tracts. It consistently
outperformed two classical optimization-based methods and a deep learning
pipeline, achieving superior anatomical correspondence. Further validation on
external data from the Developing Human Connectome Project confirmed its
generalizability across acquisition protocols. Our results demonstrate the
feasibility of deep learning for fetal brain dMRI registration, providing a
more accurate and reliable alternative to classical techniques. By enabling
precise cross-subject and tract-specific analyses, FetDTIAlign supports new
discoveries in early brain development.
|
2502.01059
|
Knowledge Synthesis of Photosynthesis Research Using a Large Language
Model
|
cs.CL cs.AI
|
The development of biological data analysis tools and large language models
(LLMs) has opened up new possibilities for utilizing AI in plant science
research, with the potential to contribute significantly to knowledge
integration and research gap identification. Nonetheless, current LLMs struggle
to handle complex biological data and theoretical models in photosynthesis
research and often fail to provide accurate scientific contexts. Therefore,
this study proposes a photosynthesis research assistant (PRAG) based on
OpenAI's GPT-4o with retrieval-augmented generation (RAG) techniques and prompt
optimization. Vector databases and an automated feedback loop were used in the
prompt optimization process to enhance the accuracy and relevance of the
responses to photosynthesis-related queries. PRAG showed an average improvement
of 8.7% across five metrics related to scientific writing, with a 25.4%
increase in source transparency. Additionally, its scientific depth and domain
coverage were comparable to those of photosynthesis research papers. A
knowledge graph was used to structure PRAG's responses with papers within and
outside the database, which allowed PRAG to match key entities with 63% and
39.5% of the database and test papers, respectively. PRAG can be applied for
photosynthesis research and broader plant science domains, paving the way for
more in-depth data analysis and predictive capabilities.
|
2502.01060
|
Learning Nonlinearity of Boolean Functions: An Experimentation with
Neural Networks
|
cs.LG cs.AI cs.CR
|
This paper investigates the learnability of the nonlinearity property of
Boolean functions using neural networks. We train encoder-style deep neural
networks to learn to predict the nonlinearity of Boolean functions from
examples of functions in the form of a truth table and their corresponding
nonlinearity values. We report empirical results to show that deep neural
networks are able to learn to predict the property for functions in 4 and 5
variables with an accuracy above 95%. While these results are positive and a
disciplined analysis is being presented for the first time in this regard, we
should also underline the statutory warning that it seems quite challenging to
extend the idea to a higher number of variables, and it is also not clear
whether one can gain an advantage in time and space complexity over existing
combinatorial algorithms.
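For context, the quantity the networks are trained to predict is computable exactly with the fast Walsh-Hadamard transform; this standard combinatorial baseline (not taken from the paper's code) is:

```python
import numpy as np

def nonlinearity(truth_table):
    """Exact nonlinearity of an n-variable Boolean function via the
    Walsh-Hadamard transform -- the combinatorial quantity a neural
    predictor like the one in the paper is trained to approximate."""
    n = int(np.log2(len(truth_table)))
    # Map {0,1} outputs to {+1,-1} signs.
    w = 1 - 2 * np.asarray(truth_table, dtype=np.int64)
    # In-place fast Walsh-Hadamard transform, O(n * 2^n).
    h = 1
    while h < len(w):
        for i in range(0, len(w), 2 * h):
            for j in range(i, i + h):
                w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
        h *= 2
    # NL(f) = 2^(n-1) - max_a |W_f(a)| / 2
    return (1 << (n - 1)) - np.abs(w).max() // 2
```

For example, 2-variable AND (truth table [0,0,0,1]) is bent with nonlinearity 1, while XOR is affine with nonlinearity 0.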
|
2502.01061
|
OmniHuman-1: Rethinking the Scaling-Up of One-Stage Conditioned Human
Animation Models
|
cs.CV
|
End-to-end human animation, such as audio-driven talking human generation,
has undergone notable advancements in recent years. However, existing
methods still struggle to scale up as large general video generation models,
limiting their potential in real applications. In this paper, we propose
OmniHuman, a Diffusion Transformer-based framework that scales up data by
mixing motion-related conditions into the training phase. To this end, we
introduce two training principles for these mixed conditions, along with the
corresponding model architecture and inference strategy. These designs enable
OmniHuman to fully leverage data-driven motion generation, ultimately achieving
highly realistic human video generation. More importantly, OmniHuman supports
various portrait contents (face close-up, portrait, half-body, full-body),
supports both talking and singing, handles human-object interactions and
challenging body poses, and accommodates different image styles. Compared to
existing end-to-end audio-driven methods, OmniHuman not only produces more
realistic videos, but also offers greater flexibility in inputs. It also
supports multiple driving modalities (audio-driven, video-driven and combined
driving signals). Video samples are provided on the project page
(https://omnihuman-lab.github.io).
|
2502.01067
|
Nearly Tight Bounds for Exploration in Streaming Multi-armed Bandits
with Known Optimality Gap
|
cs.LG cs.DS
|
We investigate the sample-memory-pass trade-offs for pure exploration in
multi-pass streaming multi-armed bandits (MABs) with the *a priori* knowledge
of the optimality gap $\Delta_{[2]}$. Here, and throughout, the optimality gap
$\Delta_{[i]}$ is defined as the mean reward gap between the best and the
$i$-th best arms. A recent line of results by Jin, Huang, Tang, and Xiao
[ICML'21] and Assadi and Wang [COLT'24] has shown that if there is no known
$\Delta_{[2]}$, a pass complexity of $\Theta(\log(1/\Delta_{[2]}))$ (up to
$\log\log(1/\Delta_{[2]})$ terms) is necessary and sufficient to obtain the
*worst-case optimal* sample complexity of $O(n/\Delta^{2}_{[2]})$ with a
single-arm memory. However, our understanding of multi-pass algorithms with
known $\Delta_{[2]}$ is still limited. Here, the key open problem is how many
passes are required to achieve the complexity, i.e., $O(
\sum_{i=2}^{n}1/\Delta^2_{[i]})$ arm pulls, with a sublinear memory size.
In this work, we show that the ``right answer'' for the question is
$\Theta(\log{n})$ passes (up to $\log\log{n}$ terms). We first present a lower
bound, showing that any algorithm that finds the best arm with slightly
sublinear memory -- a memory of $o({n}/{\text{polylog}({n})})$ arms -- and
$O(\sum_{i=2}^{n}{1}/{\Delta^{2}_{[i]}}\cdot \log{(n)})$ arm pulls has to make
$\Omega(\frac{\log{n}}{\log\log{n}})$ passes over the stream. We then show a
nearly-matching algorithm that assuming the knowledge of $\Delta_{[2]}$, finds
the best arm with $O( \sum_{i=2}^{n}1/\Delta^2_{[i]} \cdot \log{n})$ arm pulls
and a *single arm* memory.
|
2502.01068
|
FastKV: KV Cache Compression for Fast Long-Context Processing with
Token-Selective Propagation
|
cs.LG cs.CL
|
While large language models (LLMs) excel at handling long-context sequences,
they require substantial key-value (KV) caches to store contextual information,
which can heavily burden computational efficiency and memory usage. Previous
efforts to compress these KV caches primarily focused on reducing memory
demands but offered limited latency improvements. To address this issue, we
introduce FastKV, a KV cache compression method designed to reduce latency for
long-context sequences. To enhance processing speeds while maintaining
accuracy, FastKV adopts a novel Token-Selective Propagation (TSP) approach that
retains the full context information in the initial layers of LLMs and
selectively propagates only a portion of this information in deeper layers even
in the prefill stage. Additionally, FastKV incorporates grouped-query attention
(GQA)-aware KV cache compression to exploit the advantages of GQA in both
memory and computational efficiency. Our experimental results show that FastKV
achieves 2.00$\times$ and 1.40$\times$ improvements in time-to-first-token
(TTFT) and throughput, respectively, compared to HeadKV, the state-of-the-art
KV cache compression method. Moreover, FastKV successfully maintains accuracy
on long-context benchmarks at levels comparable to the baselines. Our code is
available at https://github.com/dongwonjo/FastKV.
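A minimal sketch of the token-selective idea, assuming attention-mass scoring and a hypothetical `keep_ratio` parameter (the paper's exact selection rule may differ):

```python
import numpy as np

def tsp_select(attn_weights, kv_keys, kv_values, keep_ratio=0.25):
    """Token-Selective Propagation, sketched (not the authors' exact code).

    attn_weights: (heads, q_len, kv_len) attention map at the TSP layer.
    Tokens are scored by the attention mass they receive; only the
    top `keep_ratio` fraction is propagated to deeper layers.
    """
    # Score each KV token by total attention it attracts across heads/queries.
    scores = attn_weights.sum(axis=(0, 1))            # (kv_len,)
    k = max(1, int(len(scores) * keep_ratio))
    keep = np.sort(np.argsort(scores)[-k:])           # keep original token order
    return kv_keys[keep], kv_values[keep], keep
```

Keeping the surviving indices in original order preserves positional structure for the deeper layers that consume the pruned cache.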
|
2502.01070
|
An Investigation of FP8 Across Accelerators for LLM Inference
|
cs.LG cs.PF
|
The introduction of 8-bit floating-point (FP8) computation units in modern AI
accelerators has generated significant interest in FP8-based large language
model (LLM) inference. Unlike 16-bit floating-point formats, FP8 in deep
learning requires a shared scaling factor. Additionally, while E4M3 and E5M2
are well-defined at the individual value level, their scaling and accumulation
methods remain unspecified and vary across hardware and software
implementations. As a result, FP8 behaves more like a quantization format than
a standard numeric representation. In this work, we provide the first
comprehensive analysis of FP8 computation and acceleration on two AI
accelerators: the NVIDIA H100 and Intel Gaudi 2. Our findings highlight that
the Gaudi 2, by leveraging FP8, achieves higher throughput-to-power efficiency
during LLM inference, offering valuable insights into the practical
implications of FP8 adoption for datacenter-scale LLM serving.
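The shared-scaling behavior that makes FP8 act like a quantization format can be emulated in a few lines; this sketch assumes per-tensor scaling against the E4M3 maximum of 448 and ignores subnormals and exponent clipping:

```python
import numpy as np

E4M3_MAX = 448.0   # largest finite value in the OCP FP8 E4M3 format

def fp8_e4m3_quantize(x):
    """Per-tensor FP8 (E4M3) quantization, simulated in float32/float64.

    FP8's tiny dynamic range forces a shared scaling factor: the tensor
    is rescaled so its absolute maximum maps onto E4M3_MAX, cast to FP8
    (emulated here by rounding to a 4-bit significand), and dequantized
    on use.
    """
    scale = np.abs(x).max() / E4M3_MAX
    scaled = x / scale
    # Emulate E4M3's 3-bit mantissa (plus implicit leading bit):
    # round the significand to multiples of 1/16.
    m, e = np.frexp(scaled)
    q = np.ldexp(np.round(m * 16) / 16, e)
    return q * scale, scale
```

Values exactly representable after scaling (e.g. powers of two and 448 itself) round-trip losslessly; everything else incurs at most ~1/16 relative error from the truncated significand.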
|
2502.01071
|
Scalable, Training-Free Visual Language Robotics: A Modular Multi-Model
Framework for Consumer-Grade GPUs
|
cs.RO
|
The integration of language instructions with robotic control, particularly
through Vision Language Action (VLA) models, has shown significant potential.
However, these systems are often hindered by high computational costs, the need
for extensive retraining, and limited scalability, making them less accessible
for widespread use.
In this paper, we introduce SVLR (Scalable Visual Language Robotics), an
open-source, modular framework that operates without the need for retraining,
providing a scalable solution for robotic control. SVLR leverages a combination
of lightweight, open-source AI models including the Vision-Language Model (VLM)
Mini-InternVL, zero-shot image segmentation model CLIPSeg, Large Language Model
Phi-3, and sentence similarity model all-MiniLM to process visual and language
inputs. These models work together to identify objects in an unknown
environment, use them as parameters for task execution, and generate a sequence
of actions in response to natural language instructions. A key strength of SVLR
is its scalability. The framework allows for easy integration of new robotic
tasks and robots by simply adding text descriptions and task definitions,
without the need for retraining. This modularity ensures that SVLR can
continuously adapt to the latest advancements in AI technologies and support a
wide range of robots and tasks.
SVLR operates effectively on an NVIDIA RTX 2070 (mobile) GPU, demonstrating
promising performance in executing pick-and-place tasks. While these initial
results are encouraging, further evaluation across a broader set of tasks and
comparisons with existing VLA models are needed to assess SVLR's generalization
capabilities and performance in more complex scenarios.
|
2502.01074
|
Omni-Mol: Exploring Universal Convergent Space for Omni-Molecular Tasks
|
cs.LG
|
Building generalist models has recently demonstrated remarkable capabilities
in diverse scientific domains. Within the realm of molecular learning, several
studies have explored unifying diverse tasks across domains. However,
conflicts and interference between molecules and knowledge from different
domains can be harmful in three ways. First, conflicting
molecular representations can lead to optimization difficulties for the models.
Second, mixing and scaling up training data across diverse tasks is inherently
challenging. Third, the computational cost of refined pretraining is
prohibitively high. To address these limitations, this paper presents Omni-Mol,
a scalable and unified LLM-based framework for direct instruction tuning.
Omni-Mol builds on three key components to tackle these conflicts: (1) a unified
encoding mechanism for any task input; (2) an active-learning-driven data
selection strategy that significantly reduces dataset size; (3) a novel design
of the adaptive gradient stabilization module and anchor-and-reconcile MoE
framework that ensures stable convergence. Experimentally, Omni-Mol achieves
state-of-the-art performance across 15 molecular tasks, demonstrates the
presence of scaling laws in the molecular domain, and is supported by extensive
ablation studies and analyses validating the effectiveness of its design. The
code and weights of the powerful AI-driven chemistry generalist are
open-sourced at: https://anonymous.4open.science/r/Omni-Mol-8EDB.
|
2502.01076
|
qNBO: quasi-Newton Meets Bilevel Optimization
|
cs.LG math.OC
|
Bilevel optimization, addressing challenges in hierarchical learning tasks,
has gained significant interest in machine learning. The practical
application of gradient descent methods to bilevel optimization
encounters computational hurdles, notably the computation of the exact
lower-level solution and the inverse Hessian of the lower-level objective.
Although these two aspects are inherently connected, existing methods typically
handle them separately by solving the lower-level problem and a linear system
for the inverse Hessian-vector product. In this paper, we introduce a general
framework to address these computational challenges in a coordinated manner.
Specifically, we leverage quasi-Newton algorithms to accelerate the resolution
of the lower-level problem while efficiently approximating the inverse
Hessian-vector product. Furthermore, by exploiting the superlinear convergence
properties of BFGS, we establish the non-asymptotic convergence analysis of the
BFGS adaptation within our framework. Numerical experiments demonstrate the
comparable or superior performance of the proposed algorithms in real-world
learning tasks, including hyperparameter optimization, data hyper-cleaning, and
few-shot meta-learning.
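The inverse Hessian-vector product at the heart of the hypergradient can be approximated from curvature pairs already collected while solving the lower-level problem; below is the textbook L-BFGS two-loop recursion (a generic sketch, not the paper's exact qNBO update):

```python
import numpy as np

def lbfgs_inv_hvp(v, s_list, y_list):
    """Approximate H^{-1} v via the L-BFGS two-loop recursion.

    s_list/y_list hold recent (s_k = x_{k+1} - x_k, y_k = gradient
    difference) pairs from solving the lower-level problem; reusing them
    to approximate the inverse Hessian-vector product is the kind of
    coordination the paper exploits (this is the standard recursion,
    not the authors' exact variant).
    """
    q = v.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        q -= a * y
        alphas.append((rho, a, s, y))
    # Initial scaling H0 = gamma * I from the most recent pair.
    s, y = s_list[-1], y_list[-1]
    q *= (s @ y) / (y @ y)
    for rho, a, s, y in reversed(alphas):
        b = rho * (y @ q)
        q += (a - b) * s
    return q
```

On a quadratic with Hessian diag(2, 4), two axis-aligned curvature pairs recover the exact inverse-Hessian-vector product.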
|
2502.01078
|
Parallel Coding for Orthogonal Delay-Doppler Division Multiplexing
|
cs.IT eess.SP math.IT
|
This paper proposes a novel parallel coding transmission strategy and an
iterative detection and decoding receiver signal processing technique for
orthogonal delay-Doppler division multiplexing (ODDM) modulation. Specifically,
the proposed approach employs a parallel channel encoding (PCE) scheme that
consists of multiple short-length codewords for each delay-Doppler multicarrier
(DDMC) symbol. Building upon such a PCE transmission framework, we then
introduce an iterative detection and decoding algorithm incorporating a
successive decoding feedback (SDF) technique, which enables instant information
exchange between the detector and decoder for each DDMC symbol. To characterize
the error performance of the proposed scheme, we perform density evolution
analysis considering the finite blocklength effects. Our analysis results,
coupled with extensive simulations, demonstrate that the proposed PCE scheme
with the SDF algorithm not only showcases a better overall performance but also
requires much less decoding complexity to implement, compared to the
conventional benchmark scheme that relies on a single long channel code for
coding the entire ODDM frame.
|
2502.01080
|
BC-GAN: A Generative Adversarial Network for Synthesizing a Batch of
Collocated Clothing
|
cs.CV cs.MM
|
Collocated clothing synthesis using generative networks has become an
emerging topic in the field of fashion intelligence, as it has significant
potential economic value to increase revenue in the fashion industry. In
previous studies, several works have attempted to synthesize
visually-collocated clothing based on a given clothing item using generative
adversarial networks (GANs) with promising results. These works, however, can
only synthesize one collocated clothing item at a time, whereas users may
require multiple different clothing items to suit their personal tastes and
varied dressing scenarios. To
address this limitation, we introduce a novel batch clothing generation
framework, named BC-GAN, which is able to synthesize multiple
visually-collocated clothing images simultaneously. In particular, to further
improve the fashion compatibility of synthetic results, BC-GAN proposes a new
fashion compatibility discriminator in a contrastive learning perspective by
fully exploiting the collocation relationships among all clothing items. Our
model was evaluated on a large-scale dataset of compatible outfits that we
constructed. Extensive experimental results confirmed the effectiveness of our
proposed BC-GAN in comparison to state-of-the-art methods in terms of
diversity, visual authenticity, and fashion compatibility.
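The contrastive-learning perspective behind the compatibility discriminator can be illustrated with a generic InfoNCE-style objective (the names and temperature here are illustrative; the paper's discriminator is a learned network, not this bare loss):

```python
import numpy as np

def compatibility_loss(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style compatibility objective, sketched.

    Pulls the embedding of a compatible (collocated) item toward the
    anchor outfit and pushes incompatible items away -- the contrastive
    principle the BC-GAN discriminator is built on.
    """
    scores = np.array([anchor @ positive] + [anchor @ n for n in negatives]) / tau
    scores -= scores.max()                     # numerical stability
    log_softmax = scores - np.log(np.exp(scores).sum())
    return -log_softmax[0]                     # the positive must win the softmax
```

A well-separated positive yields a loss below log(2), i.e. better than a coin flip between the positive and a single negative.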
|
2502.01081
|
The Jumping Reasoning Curve? Tracking the Evolution of Reasoning
Performance in GPT-[n] and o-[n] Models on Multimodal Puzzles
|
cs.CV cs.AI cs.CL
|
The releases of OpenAI's o1 and o3 mark a significant paradigm shift in Large
Language Models towards advanced reasoning capabilities. Notably, o3
outperformed humans in novel problem-solving and skill acquisition on the
Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI).
However, this benchmark is limited to symbolic patterns, whereas humans often
perceive and reason about multimodal scenarios involving both vision and
language data. Thus, there is an urgent need to investigate advanced reasoning
capabilities in multimodal tasks. To this end, we track the evolution of the
GPT-[n] and o-[n] series models on challenging multimodal puzzles, requiring
fine-grained visual perception with abstract or algorithmic reasoning. The
superior performance of o1 comes at nearly 750 times the computational cost of
GPT-4o, raising concerns about its efficiency. Our results reveal a clear
upward trend in reasoning capabilities across model iterations, with notable
performance jumps across GPT-series models and subsequently to o1. Nonetheless,
we observe that the o1 model still struggles with simple multimodal puzzles
requiring abstract reasoning. Furthermore, its performance in algorithmic
puzzles remains poor. We plan to continuously track new models in the series
and update our results in this paper accordingly. All resources used in this
evaluation are openly available at https://github.com/declare-lab/LLM-PuzzleTest.
|
2502.01083
|
Tool Unlearning for Tool-Augmented LLMs
|
cs.LG cs.AI cs.CL
|
Tool-augmented large language models (LLMs) are often trained on datasets of
query-response pairs, which embed the ability to use tools or APIs directly
into the parametric knowledge of LLMs. Tool-augmented LLMs need the ability to
forget learned tools due to security vulnerabilities, privacy regulations, or
tool deprecations. However, ``tool unlearning'' has not been investigated in
unlearning literature. We introduce this novel task, which requires addressing
distinct challenges compared to traditional unlearning: knowledge removal
rather than forgetting individual samples, the high cost of optimizing LLMs,
and the need for principled evaluation metrics. To bridge these gaps, we
propose ToolDelete, the first approach for unlearning tools from tool-augmented
LLMs. It implements three key properties to address the above challenges for
effective tool unlearning and introduces a new membership inference attack
(MIA) model for effective evaluation. Extensive experiments on multiple tool
learning datasets and tool-augmented LLMs show that ToolDelete effectively
unlearns randomly selected tools, while preserving the LLM's knowledge on
non-deleted tools and maintaining performance on general tasks.
|
2502.01084
|
Continuous Autoregressive Modeling with Stochastic Monotonic Alignment
for Speech Synthesis
|
cs.LG cs.SD eess.AS
|
We propose a novel autoregressive modeling approach for speech synthesis,
combining a variational autoencoder (VAE) with a multi-modal latent space and
an autoregressive model that uses Gaussian Mixture Models (GMM) as the
conditional probability distribution. Unlike previous methods that rely on
residual vector quantization, our model leverages continuous speech
representations from the VAE's latent space, greatly simplifying the training
and inference pipelines. We also introduce a stochastic monotonic alignment
mechanism to enforce strict monotonic alignments. Our approach significantly
outperforms the state-of-the-art autoregressive model VALL-E in both subjective
and objective evaluations, achieving these results with only 10.3\% of VALL-E's
parameters. This demonstrates the potential of continuous speech language
models as a more efficient alternative to existing quantization-based speech
language models. Sample audio can be found at https://tinyurl.com/gmm-lm-tts.
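At inference, each autoregressive step draws the next latent frame from the predicted mixture; a generic GMM sampling step (parameter shapes assumed for illustration) looks like:

```python
import numpy as np

def sample_gmm(weights, means, log_stds, rng):
    """Draw one latent frame from the GMM that the autoregressive model
    predicts at each step (a generic mixture-sampling sketch; the paper's
    network emits these parameters per timestep).
    """
    k = rng.choice(len(weights), p=weights)    # pick a mixture component
    # Reparameterized Gaussian draw from the chosen component.
    return means[k] + np.exp(log_stds[k]) * rng.standard_normal(means[k].shape)
```

With a degenerate mixture (one component, near-zero variance) the sample collapses to that component's mean, which is a handy sanity check.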
|
2502.01085
|
Federated Linear Dueling Bandits
|
cs.LG
|
Contextual linear dueling bandits have recently garnered significant
attention due to their widespread applications in important domains such as
recommender systems and large language models. Classical dueling bandit
algorithms are typically only applicable to a single agent. However, many
applications of dueling bandits involve multiple agents who wish to collaborate
for improved performance yet are unwilling to share their data. This motivates
us to draw inspiration from federated learning, which involves multiple agents
aiming to collaboratively train their neural networks via gradient descent (GD)
without sharing their raw data. Previous works have developed federated linear
bandit algorithms which rely on closed-form updates of the bandit parameters
(e.g., the linear function parameter) to achieve collaboration. However, in
linear dueling bandits, the linear function parameter lacks a closed-form
expression and its estimation requires minimizing a loss function. This renders
these previous methods inapplicable. In this work, we overcome this challenge
through an innovative and principled combination of online gradient descent
(for minimizing the loss function to estimate the linear function parameters)
and federated learning, hence introducing the first federated linear dueling
bandit algorithms. Through rigorous theoretical analysis, we prove that our
algorithms enjoy a sub-linear upper bound on their cumulative regret. We also use
empirical experiments to demonstrate the effectiveness of our algorithms and
the practical benefit of collaboration.
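The combination of online gradient descent on the duel loss with federated averaging can be sketched as one communication round; the update schedule and averaging rule here are simplifying assumptions, not the paper's exact algorithm:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def federated_dueling_round(thetas, duels, lr=0.1):
    """One communication round, sketched.

    Each agent runs online gradient descent on the logistic
    (Bradley-Terry) loss of its observed duels, then agents average
    parameters -- the federated step that replaces the closed-form
    update unavailable for dueling feedback.
    """
    new = []
    for theta, agent_duels in zip(thetas, duels):
        theta = theta.copy()
        for x_win, x_lose in agent_duels:
            z = theta @ (x_win - x_lose)           # predicted score difference
            grad = (sigmoid(z) - 1.0) * (x_win - x_lose)
            theta -= lr * grad                     # OGD step on the duel loss
        new.append(theta)
    avg = np.mean(new, axis=0)                     # federated averaging
    return [avg.copy() for _ in thetas]
```

After a round, all agents share one parameter vector, and repeated wins along a direction push the shared estimate toward it.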
|
2502.01089
|
Advanced Architectures Integrated with Agentic AI for Next-Generation
Wireless Networks
|
cs.NI cs.AI
|
This paper investigates a range of cutting-edge technologies and
architectural innovations aimed at simplifying network operations, reducing
operational expenditure (OpEx), and enabling the deployment of new service
models. The focus is on (i) Proposing novel, more efficient 6G architectures,
with both Control and User planes enabling the seamless expansion of services,
while addressing long-term 6G network evolution. (ii) Exploring advanced
techniques for constrained artificial intelligence (AI) operations,
particularly the design of AI agents for real-time learning, optimizing energy
consumption, and the allocation of computational resources. (iii) Identifying
technologies and architectures that support the orchestration of backend
services using serverless computing models across multiple domains,
particularly for vertical industries. (iv) Introducing optically-based,
ultra-high-speed, low-latency network architectures, with fast optical
switching and real-time control, replacing conventional electronic switching to
reduce power consumption by an order of magnitude.
|
2502.01090
|
Classic4Children: Adapting Chinese Literary Classics for Children with
Large Language Model
|
cs.CL cs.AI
|
Chinese literary classics hold significant cultural and educational value,
offering deep insights into morality, history, and human nature. These works
often include classical Chinese and complex narratives, making them difficult
for children to read. To bridge this gap, we introduce a child-friendly
literary adaptation (CLA) task to adapt Chinese literary classics into
engaging and accessible text for children. However, recent large language
models (LLMs) overlook children's reading preferences (i.e., vivid character
portrayals, concise narrative structures, and appropriate readability), which
poses challenges in CLA. In this paper, we propose a method called
InstructChild, which augments the LLM with these preferences for adaptation.
Specifically, we first obtain the characters' personalities and narrative
structure as additional information for fine-grained instruction tuning. Then,
we devise a readability metric as the reward to align the LLM with the
children's reading level. Finally, a lookahead decoding strategy is applied to
improve the readability of the generated text during inference. To support the
evaluation of the CLA task, we construct the Classic4Children dataset, which
comprises both the original and child-friendly versions of the Four Great
Classical Novels of Chinese literature. Experimental results show that our
InstructChild significantly improves automatic and human evaluation
performance.
|
2502.01091
|
Enhancing Aspect-based Sentiment Analysis with ParsBERT in Persian
Language
|
cs.CL cs.AI
|
In the era of pervasive internet use and the dominance of social networks,
researchers face significant challenges in Persian text mining, including the
scarcity of adequate datasets in Persian and the inefficiency of existing
language models. This paper specifically tackles these challenges, aiming to
amplify the efficiency of language models tailored to the Persian language.
Focusing on enhancing the effectiveness of sentiment analysis, our approach
employs an aspect-based methodology utilizing the ParsBERT model, augmented
with a relevant lexicon. The study centers on sentiment analysis of user
opinions extracted from the Persian website 'Digikala.' The experimental
results not only highlight the proposed method's superior semantic capabilities
but also showcase its efficiency gains with an accuracy of 88.2% and an F1
score of 61.7. The importance of enhancing language models in this context lies
in their pivotal role in extracting nuanced sentiments from user-generated
content, ultimately advancing the field of sentiment analysis in Persian text
mining by increasing efficiency and accuracy.
|
2502.01092
|
Enhancing Feature Tracking Reliability for Visual Navigation using
Real-Time Safety Filter
|
cs.RO cs.CV cs.SY eess.SY
|
Vision sensors are extensively used for localizing a robot's pose,
particularly in environments where global localization tools such as GPS or
motion capture systems are unavailable. In many visual navigation systems,
localization is achieved by detecting and tracking visual features or
landmarks, which provide information about the sensor's relative pose. For
reliable feature tracking and accurate pose estimation, it is crucial to
maintain visibility of a sufficient number of features. This requirement can
sometimes conflict with the robot's overall task objective. In this paper, we
formulate it as a constrained control problem. By leveraging the invariance
properties of visibility constraints within the robot's kinematic model, we
propose a real-time safety filter based on quadratic programming. This filter
takes a reference velocity command as input and produces a modified velocity
that minimally deviates from the reference while ensuring the information score
from the currently visible features remains above a user-specified threshold.
Numerical simulations demonstrate that the proposed safety filter preserves the
invariance condition and ensures the visibility of more features than the
required minimum. We also validated its real-world performance by integrating
it into a visual simultaneous localization and mapping (SLAM) algorithm, where
it maintained high estimation quality in challenging environments,
outperforming a simple tracking controller.
|
2502.01094
|
Model Order Reduction from Data with Certification
|
eess.SY cs.SY
|
Model order reduction (MOR) involves offering low-dimensional models that
effectively approximate the behavior of complex high-order systems. Due to
potential model complexities and computational costs, designing controllers for
high-dimensional systems with complex behaviors can be challenging, rendering
MOR a practical alternative to achieve results that closely resemble those of
the original complex systems. To construct such effective reduced-order models
(ROMs), existing literature generally necessitates precise knowledge of
original systems, which is often unavailable in real-world scenarios. This
paper introduces a data-driven scheme to construct ROMs of dynamical systems
with unknown mathematical models. Our methodology leverages data and
establishes similarity relations between output trajectories of unknown systems
and their data-driven ROMs via the notion of simulation functions (SFs),
capable of formally quantifying their closeness. To achieve this, under a rank
condition readily fulfillable using data, we collect only two input-state
trajectories from unknown systems to construct both ROMs and SFs, while
offering correctness guarantees. We demonstrate that the proposed ROMs derived
from data can be leveraged for controller synthesis endeavors while effectively
ensuring high-level logic properties over unknown dynamical models. We showcase
our data-driven findings across a range of benchmark scenarios involving
various unknown physical systems, demonstrating the enforcement of diverse
complex properties.
|
2502.01098
|
SatFlow: Generative model based framework for producing High Resolution
Gap Free Remote Sensing Imagery
|
cs.CV cs.LG
|
Frequent, high-resolution remote sensing imagery is crucial for agricultural
and environmental monitoring. Satellites from the Landsat collection offer
detailed imagery at 30m resolution but with lower temporal frequency, whereas
missions like MODIS and VIIRS provide daily coverage at coarser resolutions.
Clouds and cloud shadows contaminate about 55\% of the optical remote sensing
observations, posing additional challenges. To address these challenges, we
present SatFlow, a generative model-based framework that fuses low-resolution
MODIS imagery and Landsat observations to produce frequent, high-resolution,
gap-free surface reflectance imagery. Our model, trained via Conditional Flow
Matching, demonstrates better performance in generating imagery with preserved
structural and spectral integrity. Cloud imputation is treated as an image
inpainting task, where the model reconstructs cloud-contaminated pixels and
fills gaps caused by scan lines during inference by leveraging the learned
generative processes. Experimental results demonstrate the capability of our
approach in reliably imputing cloud-covered regions. This capability is crucial
for downstream applications such as crop phenology tracking, environmental
change detection, etc.
|
2502.01100
|
ZebraLogic: On the Scaling Limits of LLMs for Logical Reasoning
|
cs.AI cs.CL cs.LG
|
We investigate the logical reasoning capabilities of large language models
(LLMs) and their scalability in complex non-monotonic reasoning. To this end,
we introduce ZebraLogic, a comprehensive evaluation framework for assessing LLM
reasoning performance on logic grid puzzles derived from constraint
satisfaction problems (CSPs). ZebraLogic enables the generation of puzzles with
controllable and quantifiable complexity, facilitating a systematic study of
the scaling limits of models such as Llama, o1 models, and DeepSeek-R1. By
encompassing a broad range of search space complexities and diverse logical
constraints, ZebraLogic provides a structured environment to evaluate reasoning
under increasing difficulty.
Our results reveal a significant decline in accuracy as problem complexity
grows -- a phenomenon we term the curse of complexity. This limitation persists
even with larger models and increased inference-time computation, suggesting
inherent constraints in current LLM reasoning capabilities. Additionally, we
explore strategies to enhance logical reasoning, including Best-of-N sampling,
backtracking mechanisms, and self-verification prompts. Our findings offer
critical insights into the scalability of LLM reasoning, highlight fundamental
limitations, and outline potential directions for improvement.
|
2502.01101
|
VidSketch: Hand-drawn Sketch-Driven Video Generation with Diffusion
Control
|
cs.CV cs.AI
|
With the advancement of generative artificial intelligence, previous studies
have achieved the task of generating aesthetic images from hand-drawn sketches,
fulfilling the public's needs for drawing. However, these methods are limited
to static images and lack the ability to control video animation generation
using hand-drawn sketches. To address this gap, we propose VidSketch, the first
method capable of generating high-quality video animations directly from any
number of hand-drawn sketches and simple text prompts, bridging the divide
between ordinary users and professional artists. Specifically, our method
introduces a Level-Based Sketch Control Strategy to automatically adjust the
guidance strength of sketches during the generation process, accommodating
users with varying drawing skills. Furthermore, a TempSpatial Attention
mechanism is designed to enhance the spatiotemporal consistency of generated
video animations, significantly improving the coherence across frames. You can
find more detailed cases on our official website.
|
2502.01102
|
Towards Robust and Generalizable Lensless Imaging with Modular Learned
Reconstruction
|
eess.IV cs.CV
|
Lensless cameras disregard the conventional design that imaging should mimic
the human eye. This is done by replacing the lens with a thin mask, and moving
image formation to the digital post-processing. State-of-the-art lensless
imaging techniques use learned approaches that combine physical modeling and
neural networks. However, these approaches make simplifying modeling
assumptions for ease of calibration and computation. Moreover, the
generalizability of learned approaches to lensless measurements of new masks
has not been studied. To this end, we utilize a modular learned reconstruction
in which a key component is a pre-processor prior to image recovery. We
theoretically demonstrate the pre-processor's necessity for standard image
recovery techniques (Wiener filtering and iterative algorithms), and through
extensive experiments show its effectiveness for multiple lensless imaging
approaches and across datasets of different mask types (amplitude and phase).
We also perform the first generalization benchmark across mask types to
evaluate how well reconstructions trained with one system generalize to others.
Our modular reconstruction enables us to use pre-trained components and
transfer learning on new systems to cut down weeks of tedious measurements and
training. As part of our work, we open-source four datasets, and software for
measuring datasets and for training our modular reconstruction.
|
2502.01105
|
LayerTracer: Cognitive-Aligned Layered SVG Synthesis via Diffusion
Transformer
|
cs.CV
|
Generating cognitive-aligned layered SVGs remains challenging due to existing
methods' tendencies toward either oversimplified single-layer outputs or
optimization-induced shape redundancies. We propose LayerTracer, a diffusion
transformer based framework that bridges this gap by learning designers'
layered SVG creation processes from a novel dataset of sequential design
operations. Our approach operates in two phases: First, a text-conditioned DiT
generates multi-phase rasterized construction blueprints that simulate human
design workflows. Second, layer-wise vectorization with path deduplication
produces clean, editable SVGs. For image vectorization, we introduce a
conditional diffusion mechanism that encodes reference images into latent
tokens, guiding hierarchical reconstruction while preserving structural
integrity. Extensive experiments demonstrate LayerTracer's superior performance
against optimization-based and neural baselines in both generation quality and
editability, effectively aligning AI-generated vectors with professional design
cognition.
|
2502.01106
|
Can We Validate Counterfactual Estimations in the Presence of General
Network Interference?
|
cs.LG econ.EM stat.ME stat.ML
|
In experimental settings with network interference, a unit's treatment can
influence outcomes of other units, challenging both causal effect estimation
and its validation. Classic validation approaches fail as outcomes are only
observable under one treatment scenario and exhibit complex correlation
patterns due to interference. To address these challenges, we introduce a new
framework enabling cross-validation for counterfactual estimation. At its core
is our distribution-preserving network bootstrap method -- a
theoretically-grounded approach inspired by approximate message passing. This
method creates multiple subpopulations while preserving the underlying
distribution of network effects. We extend recent causal message-passing
developments by incorporating heterogeneous unit-level characteristics and
varying local interactions, ensuring reliable finite-sample performance through
non-asymptotic analysis. We also develop and publicly release a comprehensive
benchmark toolbox with diverse experimental environments, from networks of
interacting AI agents to opinion formation in real-world communities and
ride-sharing applications. These environments provide known ground truth values
while maintaining realistic complexities, enabling systematic examination of
causal inference methods. Extensive evaluation across these environments
demonstrates our method's robustness to diverse forms of network interference.
Our work provides researchers with both a practical estimation framework and a
standardized platform for testing future methodological developments.
|