id
stringlengths 9
16
| title
stringlengths 4
278
| categories
stringlengths 5
104
| abstract
stringlengths 6
4.09k
|
|---|---|---|---|
2502.00472
|
Binned Spectral Power Loss for Improved Prediction of Chaotic Systems
|
cs.LG math.DS physics.flu-dyn
|
Forecasting multiscale chaotic dynamical systems with deep learning remains a
formidable challenge due to the spectral bias of neural networks, which hinders
the accurate representation of fine-scale structures in long-term predictions.
This issue is exacerbated when models are deployed autoregressively, leading to
compounding errors and instability. In this work, we introduce a novel approach
to mitigate the spectral bias which we call the Binned Spectral Power (BSP)
Loss. The BSP loss is a frequency-domain loss function that adaptively weighs
errors in predicting both larger and smaller scales of the dataset. Unlike
traditional losses that focus on pointwise misfits, our BSP loss explicitly
penalizes deviations in the energy distribution across different scales,
promoting stable and physically consistent predictions. We demonstrate that the
BSP loss mitigates the well-known problem of spectral bias in deep learning. We
further validate our approach for the data-driven high-dimensional time-series
forecasting of a range of benchmark chaotic systems which are typically
intractable due to spectral bias. Our results demonstrate that the BSP loss
significantly improves the stability and spectral accuracy of neural
forecasting models without requiring architectural modifications. By directly
targeting spectral consistency, our approach paves the way for more robust deep
learning models for long-term forecasting of chaotic dynamical systems.
|
2502.00473
|
Weak-to-Strong Diffusion with Reflection
|
cs.LG cs.CV
|
The goal of diffusion generative models is to align the learned distribution
with the real data distribution through gradient score matching. However,
inherent limitations in training data quality, modeling strategies, and
architectural design lead to inevitable gap between generated outputs and real
data. To reduce this gap, we propose Weak-to-Strong Diffusion (W2SD), a novel
framework that utilizes the estimated difference between existing weak and
strong models (i.e., weak-to-strong difference) to approximate the gap between
an ideal model and a strong model. By employing a reflective operation that
alternates between denoising and inversion with weak-to-strong difference, we
theoretically understand that W2SD steers latent variables along sampling
trajectories toward regions of the real data distribution. W2SD is highly
flexible and broadly applicable, enabling diverse improvements through the
strategic selection of weak-to-strong model pairs (e.g., DreamShaper vs. SD1.5,
good experts vs. bad experts in MoE). Extensive experiments demonstrate that
W2SD significantly improves human preference, aesthetic quality, and prompt
adherence, achieving SOTA performance across various modalities (e.g., image,
video), architectures (e.g., UNet-based, DiT-based, MoE), and benchmarks. For
example, Juggernaut-XL with W2SD can improve with the HPSv2 winning rate up to
90% over the original results. Moreover, the performance gains achieved by W2SD
markedly outweigh its additional computational overhead, while the cumulative
improvements from different weak-to-strong difference further solidify its
practical utility and deployability.
|
2502.00474
|
A framework for river connectivity classification using temporal image
processing and attention based neural networks
|
cs.CV cs.LG eess.IV
|
Measuring the connectivity of water in rivers and streams is essential for
effective water resource management. Increased extreme weather events
associated with climate change can result in alterations to river and stream
connectivity. While traditional stream flow gauges are costly to deploy and
limited to large river bodies, trail camera methods are a low-cost and easily
deployed alternative to collect hourly data. Image capturing, however requires
stream ecologists to manually curate (select and label) tens of thousands of
images per year. To improve this workflow, we developed an automated instream
trail camera image classification system consisting of three parts: (1) image
processing, (2) image augmentation and (3) machine learning. The image
preprocessing consists of seven image quality filters, foliage-based luma
variance reduction, resizing and bottom-center cropping. Images are balanced
using variable amount of generative augmentation using diffusion models and
then passed to a machine learning classification model in labeled form. By
using the vision transformer architecture and temporal image enhancement in our
framework, we are able to increase the 75% base accuracy to 90% for a new
unseen site image. We make use of a dataset captured and labeled by staff from
the Connecticut Department of Energy and Environmental Protection between
2018-2020. Our results indicate that a combination of temporal image processing
and attention-based models are effective at classifying unseen river
connectivity images.
|
2502.00476
|
Offshore wind farm layout optimization using mathematical programming
techniques
|
cs.CE
|
Offshore wind power is a renewable energy of growing relevance in current
electric energy systems, presenting favorable wind conditions in comparison
with the sites on land. However, the higher energy yield has to compensate the
increment in installation and maintenance costs, thus the importance of
optimizing resources. One relevant aspect to increase profitability is the wind
farm layout. The aim of this paper is to propose a new method to maximize the
expected power production of offshore wind farms by setting the appropriate
layout, i.e. minimizing the wake effects. The method uses a sequential
procedure for global optimization consisting of two steps: i) an heuristic
method to set an initial random layout configuration, and ii) the use of
nonlinear mathematical programming techniques for local optimization, which use
the random layout as an initial solution. The method takes full advantage of
the most up-to-date mathematical programming techniques while performing a
global optimization approach, which can be easily parallelized. The performance
of the proposed procedure is tested using the German offshore wind farm Alpha
Ventus, located in the North Sea, yielding an increment of expected annual
power production of 3.52% with respect to the actual configuration. According
to current electricity prices in Germany, this constitutes an expected profit
increment of almost 1 M per year.
|
2502.00486
|
Mixed extreme wave climate model for reanalysis databases
|
cs.CE
|
Hindcast or wave reanalysis databases (WRDB) constitute a powerful source
with respect to instrumental records in the design of offshore and coastal
structures, since they offer important advantages for the statistical
characterization of wave climate variables, such as continuous long time
records of significant wave heights, mean and peak periods, etc. However,
reanalysis data is less accurate than instrumental records, making extreme data
analysis derived from WRDB prone to under predict design return period values.
This paper proposes a mixed extreme value model to deal with maxima, which
takes full advantage of both (i) hindcast or wave reanalysis and (ii)
instrumental records, reducing the uncertainty in its predictions. The
resulting mixed model consistently merges the information given by both kinds
of data sets, and it can be applied to any extreme value analysis distribution,
such as generalized extreme value, peaks over threshold or Pareto-Poisson. The
methodology is illustrated using both synthetically generated and real data,
the latter taken from a given location on the northern Spanish coast.
|
2502.00488
|
Learn Sharp Interface Solution by Homotopy Dynamics
|
cs.LG cs.NA math.NA
|
Solving partial differential equations (PDEs) using neural networks has
become a central focus in scientific machine learning. Training neural networks
for sharp interface problems is particularly challenging due to certain
parameters in the PDEs that introduce near-singularities in the loss function.
In this study, we overcome this challenge by introducing a novel method based
on homotopy dynamics to effectively manipulate these parameters. From a
theoretical perspective, we analyze the effects of these parameters on training
difficulty in sharp interface problems and establish the convergence of the
proposed homotopy dynamics method. Experimentally, we demonstrate that our
approach significantly accelerates convergence and improves the accuracy of
sharp interface capturing. These findings present an efficient optimization
strategy leveraging homotopy dynamics, offering a robust framework to extend
the applicability of neural networks for solving PDEs with sharp
|
2502.00490
|
Oscillations Make Neural Networks Robust to Quantization
|
cs.LG
|
We challenge the prevailing view that oscillations in Quantization Aware
Training (QAT) are merely undesirable artifacts caused by the Straight-Through
Estimator (STE). Through theoretical analysis of QAT in linear models, we
demonstrate that the gradient of the loss function can be decomposed into two
terms: the original full-precision loss and a term that causes quantization
oscillations. Based on these insights, we propose a novel regularization method
that induces oscillations to improve quantization robustness. Contrary to
traditional methods that focuses on minimizing the effects of oscillations, our
approach leverages the beneficial aspects of weight oscillations to preserve
model performance under quantization. Our empirical results on ResNet-18 and
Tiny ViT demonstrate that this counter-intuitive strategy matches QAT accuracy
at >= 3-bit weight quantization, while maintaining close to full precision
accuracy at bits greater than the target bit. Our work therefore provides a new
perspective on model preparation for quantization, particularly for finding
weights that are robust to changes in the bit of the quantizer -- an area where
current methods struggle to match the accuracy of QAT at specific bits.
|
2502.00494
|
Data Overvaluation Attack and Truthful Data Valuation
|
cs.CR cs.AI cs.LG
|
In collaborative machine learning, data valuation, i.e., evaluating the
contribution of each client' data to the machine learning model, has become a
critical task for incentivizing and selecting positive data contributions.
However, existing studies often assume that clients engage in data valuation
truthfully, overlooking the practical motivation for clients to exaggerate
their contributions. To unlock this threat, this paper introduces the first
data overvaluation attack, enabling strategic clients to have their data
significantly overvalued. Furthermore, we propose a truthful data valuation
metric, named Truth-Shapley. Truth-Shapley is the unique metric that guarantees
some promising axioms for data valuation while ensuring that clients' optimal
strategy is to perform truthful data valuation. Our experiments demonstrate the
vulnerability of existing data valuation metrics to the data overvaluation
attack and validate the robustness and effectiveness of Truth-Shapley.
|
2502.00495
|
Looking into the Future of Health-Care Services: Can Life-Like Agents
Change the Future of Health-Care Services?
|
cs.CY cs.AI
|
Time constraints on doctor patient interaction and restricted access to
specialists under the managed care system led to increasingly referring to
computers as a medical information source and a self-health-care management
tool. However, research show that less than 40% of information seekers
indicated that online information helped them to make a decision about their
health. Searching multiple web sites that need basic computer skills, lack of
interaction and no face to face interaction in most search engines and some
social issues, led us to develop a specialized life-like agent that would
overcome mentioned problems.
|
2502.00497
|
Convolutional Fourier Analysis Network (CFAN): A Unified Time-Frequency
Approach for ECG Classification
|
cs.LG eess.SP
|
Machine learning has transformed the classification of biomedical signals
such as electrocardiograms (ECGs). Advances in deep learning, particularly
convolutional neural networks (CNNs), enable automatic feature extraction,
raising the question: Can combining time- and frequency-domain attributes
enhance classification accuracy? To explore this, we evaluated three ECG
classification tasks: (1) arrhythmia classification, (2) identity recognition,
and (3) apnea detection. We initially tested three methods: (i) 2-D
spectrogram-based frequency-time classification (SPECT), (ii) time-domain
classification using a 1-D CNN (CNN1D), and (iii) frequency-domain
classification using a Fourier transform-based CNN (FFT1D). Performance was
validated using K-fold cross-validation. Among these, CNN1D (time only)
performed best, followed by SPECT (time-frequency) and FFT1D (frequency only).
Surprisingly, SPECT, which integrates time- and frequency-domain features,
performed worse than CNN1D, suggesting a need for a more effective time and
frequency fusion approach. To address this, we tested the recently proposed
Fourier Analysis Network (FAN), which combines time- and frequency-domain
features. However, FAN performed comparably to CNN1D, excelling in some tasks
while underperforming in others. To enhance this approach, we developed the
Convolutional Fourier Analysis Network (CFAN), which integrates FAN with CNN.
CFAN outperformed all previous methods across all classification tasks. These
findings underscore the advantages of combining time- and frequency-domain
features, demonstrating CFAN's potential as a powerful and versatile solution
for ECG classification and broader biomedical signal analysis
|
2502.00498
|
MetaOpenFOAM 2.0: Large Language Model Driven Chain of Thought for
Automating CFD Simulation and Post-Processing
|
cs.AI physics.comp-ph
|
Computational Fluid Dynamics (CFD) is widely used in aerospace, energy, and
biology to model fluid flow, heat transfer, and chemical reactions. While Large
Language Models (LLMs) have transformed various domains, their application in
CFD remains limited, particularly for complex tasks like post-processing. To
bridge this gap, we introduce MetaOpenFOAM 2.0, which leverages Chain of
Thought (COT) decomposition and iterative verification to enhance accessibility
for non-expert users through natural language inputs. Tested on a new benchmark
covering simulation (fluid flow, heat transfer, combustion) and post-processing
(extraction, visualization), MetaOpenFOAM 2.0 achieved an Executability score
of 6.3/7 and a pass rate of 86.9%, significantly outperforming MetaOpenFOAM 1.0
(2.1/7, 0%). Additionally, it proved cost-efficient, averaging $0.15 per case.
An ablation study confirmed that COT-driven decomposition and iterative
refinement substantially improved task performance. Furthermore, scaling laws
showed that increasing COT steps enhanced accuracy while raising token usage,
aligning with LLM post-training scaling trends. These results highlight the
transformative potential of LLMs in automating CFD workflows for industrial and
research applications. Code is available at
https://github.com/Terry-cyx/MetaOpenFOAM
|
2502.00499
|
Discovering Directly-Follows Graph Model for Acyclic Processes
|
cs.AI
|
Process mining is the common name for a range of methods and approaches aimed
at analysing and improving processes. Specifically, methods that aim to derive
process models from event logs fall under the category of process discovery.
Within the range of processes, acyclic processes form a distinct category. In
such processes, previously performed actions are not repeated, forming chains
of unique actions. However, due to differences in the order of actions,
existing process discovery methods can provide models containing cycles even if
a process is acyclic. This paper presents a new process discovery algorithm
that allows to discover acyclic DFG models for acyclic processes. A model is
discovered by partitioning an event log into parts that provide acyclic DFG
models and merging them while avoiding the formation of cycles. The resulting
algorithm was tested both on real-life and artificial event logs. Absence of
cycles improves model visual clarity and precision, also allowing to apply
cycle-sensitive methods or visualisations to the model.
|
2502.00500
|
Video Latent Flow Matching: Optimal Polynomial Projections for Video
Interpolation and Extrapolation
|
cs.CV cs.AI cs.LG
|
This paper considers an efficient video modeling process called Video Latent
Flow Matching (VLFM). Unlike prior works, which randomly sampled latent patches
for video generation, our method relies on current strong pre-trained image
generation models, modeling a certain caption-guided flow of latent patches
that can be decoded to time-dependent video frames. We first speculate multiple
images of a video are differentiable with respect to time in some latent space.
Based on this conjecture, we introduce the HiPPO framework to approximate the
optimal projection for polynomials to generate the probability path. Our
approach gains the theoretical benefits of the bounded universal approximation
error and timescale robustness. Moreover, VLFM processes the interpolation and
extrapolation abilities for video generation with arbitrary frame rates. We
conduct experiments on several text-to-video datasets to showcase the
effectiveness of our method.
|
2502.00501
|
Optimizing Feature Selection in Causal Inference: A Three-Stage
Computational Framework for Unbiased Estimation
|
stat.ME cs.AI cs.LG stat.ML
|
Feature selection is an important but challenging task in causal inference
for obtaining unbiased estimates of causal quantities. Properly selected
features in causal inference not only significantly reduce the time required to
implement a matching algorithm but, more importantly, can also reduce the bias
and variance when estimating causal quantities. When feature selection
techniques are applied in causal inference, the crucial criterion is to select
variables that, when used for matching, can achieve an unbiased and robust
estimation of causal quantities. Recent research suggests that balancing only
on treatment-associated variables introduces bias while balancing on spurious
variables increases variance. To address this issue, we propose an enhanced
three-stage framework that shows a significant improvement in selecting the
desired subset of variables compared to the existing state-of-the-art feature
selection framework for causal inference, resulting in lower bias and variance
in estimating the causal quantity. We evaluated our proposed framework using a
state-of-the-art synthetic data across various settings and observed superior
performance within a feasible computation time, ensuring scalability for
large-scale datasets. Finally, to demonstrate the applicability of our proposed
methodology using large-scale real-world data, we evaluated an important US
healthcare policy related to the opioid epidemic crisis: whether opioid use
disorder has a causal relationship with suicidal behavior.
|
2502.00507
|
A statistically consistent measure of Semantic Variability using
Language Models
|
cs.CL cs.AI
|
To address the issue of variability in the output generated by a language
model, we present a measure of semantic variability that is statistically
consistent under mild assumptions. This measure, denoted as semantic spectral
entropy, is a easy to implement algorithm that requires just off the shelf
language models. We put very few restrictions on the language models and we
have shown in a clear simulation studies that such method can generate accurate
metric despite randomness that arise from the language models.
|
2502.00510
|
Who's the MVP? A Game-Theoretic Evaluation Benchmark for Modular
Attribution in LLM Agents
|
cs.AI cs.CL
|
Large Language Model (LLM) agents frameworks often employ modular
architectures, incorporating components such as planning, reasoning, action
execution, and reflection to tackle complex tasks. However, quantifying the
contribution of each module to overall system performance remains a significant
challenge, impeding optimization and interpretability. To address this, we
introduce CapaBench (Capability-level Assessment Benchmark), an evaluation
framework grounded in cooperative game theory's Shapley Value, which
systematically measures the marginal impact of individual modules and their
interactions within an agent's architecture. By replacing default modules with
test variants across all possible combinations, CapaBench provides a principle
method for attributing performance contributions. Key contributions include:
(1) We are the first to propose a Shapley Value-based methodology for
quantifying the contributions of capabilities in LLM agents; (2) Modules with
high Shapley Values consistently lead to predictable performance gains when
combined, enabling targeted optimization; and (3) We build a multi-round
dataset of over 1,500 entries spanning diverse domains and practical task
scenarios, enabling comprehensive evaluation of agent capabilities. CapaBench
bridges the gap between component-level evaluation and holistic system
assessment, providing actionable insights for optimizing modular LLM agents and
advancing their deployment in complex, real-world scenarios.
|
2502.00511
|
Bridging Internal Probability and Self-Consistency for Effective and
Efficient LLM Reasoning
|
cs.LG cs.AI cs.CL
|
Recent advancements in large language models (LLMs) have demonstrated
remarkable reasoning capabilities. However, single-shot inference often yields
unreliable results for complex reasoning tasks, leading researchers to explore
multiple reasoning paths through methods such as perplexity and
self-consistency. In this paper, we present the first theoretical error
decomposition analysis of these techniques, breaking down their error into
estimation error and model error. Our analysis reveals a fundamental trade-off:
perplexity methods suffer from substantial model error due to the absence of a
proper consistency function, while self-consistency exhibits high estimation
error due to a slow error convergence rate. To overcome these limitations, we
propose Reasoning-Pruning Perplexity Consistency (RPC). This approach combines
Perplexity Consistency, which seamlessly integrates LLM perplexity with
self-consistency, and Reasoning Pruning, which eliminates low-probability
reasoning paths to effectively prevent the degeneration of estimation error
reduction. Theoretical analysis demonstrates that RPC not only accelerates the
convergence rate of estimation error to an exponential level but also holds
strong potential for further reducing model error. Extensive empirical
evaluations on seven benchmark datasets confirm that RPC can significantly
improve reasoning performance, sample efficiency, and confidence reliability.
|
2502.00513
|
Covariance Analysis of Attitude and Angular Rate Estimation using
Accelerometers
|
eess.SY cs.SY
|
In this work a method for using accelerometers for the determination of
angular velocity and acceleration is presented. Minimum sensor requirements and
insights into how an array of accelerometers can be configured to maximize
estimator performance are considered. The framework presented utilizes linear
least squares to estimate functions that are quadratic in angular velocity.
Simple methods for determining the sign of the spin axis and the linearized
covariance approximation are presented and found to perform quite effectively
when compared to results obtained by Monte Carlo.
|
2502.00519
|
CoDocBench: A Dataset for Code-Documentation Alignment in Software
Maintenance
|
cs.SE cs.LG
|
One of the central tasks in software maintenance is being able to understand
and develop code changes. Thus, given a natural language description of the
desired new operation of a function, an agent (human or AI) might be asked to
generate the set of edits to that function to implement the desired new
operation; likewise, given a set of edits to a function, an agent might be
asked to generate a changed description, of that function's new workings. Thus,
there is an incentive to train a neural model for change-related tasks.
Motivated by this, we offer a new, "natural", large dataset of coupled changes
to code and documentation mined from actual high-quality GitHub projects, where
each sample represents a single commit where the code and the associated
docstring were changed together. We present the methodology for gathering the
dataset, and some sample, challenging (but realistic) tasks where our dataset
provides opportunities for both learning and evaluation. We find that current
models (specifically Llama-3.1 405B, Mixtral 8$\times$22B) do find these
maintenance-related tasks challenging.
|
2502.00520
|
Variance Reduction via Resampling and Experience Replay
|
stat.ML cs.LG
|
Experience replay is a foundational technique in reinforcement learning that
enhances learning stability by storing past experiences in a replay buffer and
reusing them during training. Despite its practical success, its theoretical
properties remain underexplored. In this paper, we present a theoretical
framework that models experience replay using resampled $U$- and
$V$-statistics, providing rigorous variance reduction guarantees. We apply this
framework to policy evaluation tasks using the Least-Squares Temporal
Difference (LSTD) algorithm and a Partial Differential Equation (PDE)-based
model-free algorithm, demonstrating significant improvements in stability and
efficiency, particularly in data-scarce scenarios. Beyond policy evaluation, we
extend the framework to kernel ridge regression, showing that the experience
replay-based method reduces the computational cost from the traditional
$O(n^3)$ in time to as low as $O(n^2)$ in time while simultaneously reducing
variance. Extensive numerical experiments validate our theoretical findings,
demonstrating the broad applicability and effectiveness of experience replay in
diverse machine learning tasks.
|
2502.00527
|
PolarQuant: Leveraging Polar Transformation for Efficient Key Cache
Quantization and Decoding Acceleration
|
cs.LG cs.CL
|
The KV cache in large language models is a dominant factor in memory usage,
limiting their broader applicability. Quantizing the cache to lower bit widths
is an effective way to reduce computational costs; however, previous methods
struggle with quantizing key vectors due to outliers, resulting in excessive
overhead. We propose a novel quantization approach called PolarQuant, which
efficiently addresses the outlier challenge. We observe that outliers typically
appear in only one of two dimensions, which are rotated together by a specific
angle when rotary position embeddings are applied. When represented as
two-dimensional vectors, these dimensions exhibit well-structured patterns,
with radii and angles smoothly distributed in polar coordinates. This
alleviates the challenge of outliers on per-channel quantization, making them
well-suited for quantization. Thus, PolarQuant divides key vectors into groups
of two-dimensional sub-vectors, encoding them as the corresponding quantized
radius and the polar angle, rather than quantizing original key vectors
directly. PolarQuant achieves the superior efficiency in KV cache quantization
and accelerates the decoding process by turning the query-key inner product
into a table lookup, all while maintaining the downstream performance of
full-precision models.
|
2502.00528
|
Vision-Language Modeling in PET/CT for Visual Grounding of Positive
Findings
|
cs.CV cs.CL
|
Vision-language models can connect the text description of an object to its
specific location in an image through visual grounding. This has potential
applications in enhanced radiology reporting. However, these models require
large annotated image-text datasets, which are lacking for PET/CT. We developed
an automated pipeline to generate weak labels linking PET/CT report
descriptions to their image locations and used it to train a 3D vision-language
visual grounding model. Our pipeline finds positive findings in PET/CT reports
by identifying mentions of SUVmax and axial slice numbers. From 25,578 PET/CT
exams, we extracted 11,356 sentence-label pairs. Using this data, we trained
ConTEXTual Net 3D, which integrates text embeddings from a large language model
with a 3D nnU-Net via token-level cross-attention. The model's performance was
compared against LLMSeg, a 2.5D version of ConTEXTual Net, and two nuclear
medicine physicians. The weak-labeling pipeline accurately identified lesion
locations in 98% of cases (246/251), with 7.5% requiring boundary adjustments.
ConTEXTual Net 3D achieved an F1 score of 0.80, outperforming LLMSeg (F1=0.22)
and the 2.5D model (F1=0.53), though it underperformed both physicians (F1=0.94
and 0.91). The model achieved better performance on FDG (F1=0.78) and DCFPyL
(F1=0.75) exams, while performance dropped on DOTATE (F1=0.58) and Fluciclovine
(F1=0.66). The model performed consistently across lesion sizes but showed
reduced accuracy on lesions with low uptake. Our novel weak labeling pipeline
accurately produced an annotated dataset of PET/CT image-text pairs,
facilitating the development of 3D visual grounding models. ConTEXTual Net 3D
significantly outperformed other models but fell short of the performance of
nuclear medicine physicians. Our study suggests that even larger datasets may
be needed to close this performance gap.
|
2502.00529
|
Graph Data Management and Graph Machine Learning: Synergies and
Opportunities
|
cs.DB
|
The ubiquity of machine learning, particularly deep learning, applied to
graphs is evident in applications ranging from cheminformatics (drug discovery)
and bioinformatics (protein interaction prediction) to knowledge graph-based
query answering, fraud detection, and social network analysis. Concurrently,
graph data management deals with the research and development of effective,
efficient, scalable, robust, and user-friendly systems and algorithms for
storing, processing, and analyzing vast quantities of heterogeneous and complex
graph data. Our survey provides a comprehensive overview of the synergies
between graph data management and graph machine learning, illustrating how they
intertwine and mutually reinforce each other across the entire spectrum of the
graph data science and machine learning pipeline. Specifically, the survey
highlights two crucial aspects: (1) How graph data management enhances graph
machine learning, including contributions such as improved graph neural network
performance through graph data cleaning, scalable graph embedding, efficient
graph-based vector data management, robust graph neural networks, user-friendly
explainability methods; and (2) how graph machine learning, in turn, aids in
graph data management, with a focus on applications like query answering over
knowledge graphs and various data science tasks. We discuss pertinent open
problems and delineate crucial research directions.
|
2502.00530
|
Generic Multimodal Spatially Graph Network for Spatially Embedded
Network Representation Learning
|
cs.LG cs.AI cs.SI
|
Spatially embedded networks (SENs) represent a special type of complex graph,
whose topologies are constrained by the networks' embedded spatial
environments. The graph representation of such networks is thereby influenced
by the embedded spatial features of both nodes and edges. Accurate network
representation of the graph structure and graph features is a fundamental task
for various graph-related tasks. In this study, a Generic Multimodal Spatially
Graph Convolutional Network (GMu-SGCN) is developed for efficient
representation of spatially embedded networks. The developed GMu-SGCN model has
the ability to learn the node connection pattern via multimodal node and edge
features. In order to evaluate the developed model, a river network dataset and
a power network dataset have been used as test beds. The river network
represents the naturally developed SENs, whereas the power network represents a
man-made network. Both types of networks are heavily constrained by the spatial
environments and uncertainties from nature. Comprehensive evaluation analysis
shows the developed GMu-SGCN can improve accuracy of the edge existence
prediction task by 37.1\% compared to a GraphSAGE model which only considers
the node's position feature in a power network test bed. Our model demonstrates
the importance of considering the multidimensional spatial feature for
spatially embedded network representation.
|
2502.00532
|
Enhancing Field-Oriented Control of Electric Drives with Tiny Neural
Network Optimized for Micro-controllers
|
cs.LG cs.SY eess.SY
|
The deployment of neural networks on resource-constrained micro-controllers
has gained momentum, driving many advancements in Tiny Neural Networks. This
paper introduces a tiny feed-forward neural network, TinyFC, integrated into
the Field-Oriented Control (FOC) of Permanent Magnet Synchronous Motors
(PMSMs). Proportional-Integral (PI) controllers are widely used in FOC for
their simplicity, although their limitations in handling nonlinear dynamics
hinder precision. To address this issue, a lightweight 1,400 parameters TinyFC
was devised to enhance the FOC performance while fitting into the computational
and memory constraints of a micro-controller. Advanced optimization techniques,
including pruning, hyperparameter tuning, and quantization to 8-bit integers,
were applied to reduce the model's footprint while preserving the network
effectiveness. Simulation results show the proposed approach significantly
reduced overshoot by up to 87.5%, with the pruned model achieving complete
overshoot elimination, highlighting the potential of tiny neural networks in
real-time motor control applications.
|
2502.00534
|
Transition Transfer $Q$-Learning for Composite Markov Decision Processes
|
stat.ML cs.LG
|
To bridge the gap between empirical success and theoretical understanding in
transfer reinforcement learning (RL), we study a principled approach with
provable performance guarantees. We introduce a novel composite MDP framework
where high-dimensional transition dynamics are modeled as the sum of a low-rank
component representing shared structure and a sparse component capturing
task-specific variations. This relaxes the common assumption of purely low-rank
transition models, allowing for more realistic scenarios where tasks share core
dynamics but maintain individual variations. We introduce UCB-TQL (Upper
Confidence Bound Transfer Q-Learning), designed for transfer RL scenarios where
multiple tasks share core linear MDP dynamics but diverge along sparse
dimensions. When applying UCB-TQL to a target task after training on a source
task with sufficient trajectories, we achieve a regret bound of
$\tilde{O}(\sqrt{eH^5N})$ that scales independently of the ambient dimension.
Here, $N$ represents the number of trajectories in the target task, while $e$
quantifies the sparse differences between tasks. This result demonstrates
substantial improvement over single task RL by effectively leveraging their
structural similarities. Our theoretical analysis provides rigorous guarantees
for how UCB-TQL simultaneously exploits shared dynamics while adapting to
task-specific variations.
|
2502.00535
|
Work-Efficient Parallel Non-Maximum Suppression Kernels
|
cs.CV cs.DC
|
In the context of object detection, sliding-window classifiers and
single-shot Convolutional Neural Network (CNN) meta-architectures typically
yield multiple overlapping candidate windows with similar high scores around
the true location of a particular object. Non-Maximum Suppression (NMS) is the
process of selecting a single representative candidate within this cluster of
detections, so as to obtain a unique detection per object appearing on a given
picture. In this paper, we present a highly scalable NMS algorithm for embedded
GPU architectures that is designed from scratch to handle workloads featuring
thousands of simultaneous detections on a given picture. Our kernels are
directly applicable to other sequential NMS algorithms such as FeatureNMS,
Soft-NMS or AdaptiveNMS that share the inner workings of the classic greedy NMS
method. The obtained performance results show that our parallel NMS algorithm
is capable of clustering 1024 simultaneous detected objects per frame in
roughly 1 ms on both NVIDIA Tegra X1 and NVIDIA Tegra X2 on-die GPUs, while
taking 2 ms on NVIDIA Tegra K1. Furthermore, our proposed parallel greedy NMS
algorithm yields a 14x-40x speed up when compared to state-of-the-art NMS
methods that require learning a CNN from annotated data.
|
2502.00536
|
CAD: Confidence-Aware Adaptive Displacement for Semi-Supervised Medical
Image Segmentation
|
cs.CV cs.LG
|
Semi-supervised medical image segmentation aims to leverage minimal expert
annotations, yet remains confronted by challenges in maintaining high-quality
consistency learning. Excessive perturbations can degrade alignment and hinder
precise decision boundaries, especially in regions with uncertain predictions.
In this paper, we introduce Confidence-Aware Adaptive Displacement (CAD), a
framework that selectively identifies and replaces the largest low-confidence
regions with high-confidence patches. By dynamically adjusting both the maximum
allowable replacement size and the confidence threshold throughout training,
CAD progressively refines the segmentation quality without overwhelming the
learning process. Experimental results on public medical datasets demonstrate
that CAD effectively enhances segmentation quality, establishing new
state-of-the-art accuracy in this field. The source code will be released after
the paper is published.
|
2502.00537
|
Detecting Ambiguities to Guide Query Rewrite for Robust Conversations in
Enterprise AI Assistants
|
cs.CL
|
Multi-turn conversations with an Enterprise AI Assistant can be challenging
due to conversational dependencies in questions, leading to ambiguities and
errors. To address this, we propose an NLU-NLG framework for ambiguity
detection and resolution through reformulating query automatically and
introduce a new task called "Ambiguity-guided Query Rewrite." To detect
ambiguities, we develop a taxonomy based on real user conversational logs and
draw insights from it to design rules and extract features for a classifier
which yields superior performance in detecting ambiguous queries, outperforming
LLM-based baselines. Furthermore, coupling the query rewrite module with our
ambiguity detecting classifier shows that this end-to-end framework can
effectively mitigate ambiguities without risking unnecessary insertions of
unwanted phrases for clear queries, leading to an improvement in the overall
performance of the AI Assistant. Due to its significance, this has been
deployed in the real world application, namely Adobe Experience Platform AI
Assistant.
|
2502.00543
|
VertiFormer: A Data-Efficient Multi-Task Transformer for Off-Road Robot
Mobility
|
cs.RO cs.CV cs.LG
|
Sophisticated learning architectures, e.g., Transformers, present a unique
opportunity for robots to understand complex vehicle-terrain kinodynamic
interactions for off-road mobility. While internet-scale data are available for
Natural Language Processing (NLP) and Computer Vision (CV) tasks to train
Transformers, real-world mobility data are difficult to acquire with physical
robots navigating off-road terrain. Furthermore, training techniques
specifically designed to process text and image data in NLP and CV may not
apply to robot mobility. In this paper, we propose VertiFormer, a novel
data-efficient multi-task Transformer model trained with only one hour of data
to address such challenges of applying Transformer architectures for robot
mobility on extremely rugged, vertically challenging, off-road terrain.
Specifically, VertiFormer employs a new learnable masked modeling and next
token prediction paradigm to predict the next pose, action, and terrain patch
to enable a variety of off-road mobility tasks simultaneously, e.g., forward
and inverse kinodynamics modeling. The non-autoregressive design mitigates
computational bottlenecks and error propagation associated with autoregressive
models. VertiFormer's unified modality representation also enhances learning of
diverse temporal mappings and state representations, which, combined with
multiple objective functions, further improves model generalization. Our
experiments offer insights into effectively utilizing Transformers for off-road
robot mobility with limited data and demonstrate our efficiently trained
Transformer can facilitate multiple off-road mobility tasks onboard a physical
mobile robot.
|
2502.00545
|
Integrating Frequency Guidance into Multi-source Domain Generalization
for Bearing Fault Diagnosis
|
cs.LG cs.AI cs.CV
|
Recent generalizable fault diagnosis researches have effectively tackled the
distributional shift between unseen working conditions. Most of them mainly
focus on learning domain-invariant representation through feature-level
methods. However, the increasing numbers of unseen domains may lead to
domain-invariant features contain instance-level spurious correlations, which
impact the previous models' generalizable ability. To address the limitations,
we propose the Fourier-based Augmentation Reconstruction Network, namely
FARNet.The methods are motivated by the observation that the Fourier phase
component and amplitude component preserve different semantic information of
the signals, which can be employed in domain augmentation techniques. The
network comprises an amplitude spectrum sub-network and a phase spectrum
sub-network, sequentially reducing the discrepancy between the source and
target domains. To construct a more robust generalized model, we employ a
multi-source domain data augmentation strategy in the frequency domain.
Specifically, a Frequency-Spatial Interaction Module (FSIM) is introduced to
handle global information and local spatial features, promoting representation
learning between the two sub-networks. To refine the decision boundary of our
model output compared to conventional triplet loss, we propose a manifold
triplet loss to contribute to generalization. Through extensive experiments on
the CWRU and SJTU datasets, FARNet demonstrates effective performance and
achieves superior results compared to current cross-domain approaches on the
benchmarks.
|
2502.00547
|
Milmer: a Framework for Multiple Instance Learning based Multimodal
Emotion Recognition
|
cs.CV cs.AI cs.HC
|
Emotions play a crucial role in human behavior and decision-making, making
emotion recognition a key area of interest in human-computer interaction (HCI).
This study addresses the challenges of emotion recognition by integrating
facial expression analysis with electroencephalogram (EEG) signals, introducing
a novel multimodal framework-Milmer. The proposed framework employs a
transformer-based fusion approach to effectively integrate visual and
physiological modalities. It consists of an EEG preprocessing module, a facial
feature extraction and balancing module, and a cross-modal fusion module. To
enhance visual feature extraction, we fine-tune a pre-trained Swin Transformer
on emotion-related datasets. Additionally, a cross-attention mechanism is
introduced to balance token representation across modalities, ensuring
effective feature integration. A key innovation of this work is the adoption of
a multiple instance learning (MIL) approach, which extracts meaningful
information from multiple facial expression images over time, capturing
critical temporal dynamics often overlooked in previous studies. Extensive
experiments conducted on the DEAP dataset demonstrate the superiority of the
proposed framework, achieving a classification accuracy of 96.72% in the
four-class emotion recognition task. Ablation studies further validate the
contributions of each module, highlighting the significance of advanced feature
extraction and fusion strategies in enhancing emotion recognition performance.
Our code are available at https://github.com/liangyubuaa/Milmer.
|
2502.00550
|
Muti-Fidelity Prediction and Uncertainty Quantification with Laplace
Neural Operators for Parametric Partial Differential Equations
|
cs.LG cs.NA math.NA physics.comp-ph
|
Laplace Neural Operators (LNOs) have recently emerged as a promising approach
in scientific machine learning due to the ability to learn nonlinear maps
between functional spaces. However, this framework often requires substantial
amounts of high-fidelity (HF) training data, which is often prohibitively
expensive to acquire. To address this, we propose multi-fidelity Laplace Neural
Operators (MF-LNOs), which combine a low-fidelity (LF) base model with parallel
linear/nonlinear HF correctors and dynamic inter-fidelity weighting. This
allows us to exploit correlations between LF and HF datasets and achieve
accurate inference of quantities of interest even with sparse HF data. We
further incorporate a modified replica exchange stochastic gradient Langevin
algorithm, which enables a more effective posterior distribution estimation and
uncertainty quantification in model predictions. Extensive validation across
four canonical dynamical systems (the Lorenz system, Duffing oscillator,
Burgers equation, and Brusselator reaction-diffusion system) demonstrates the
framework's effectiveness. The results show significant improvements, with
testing losses reduced by 40% to 80% compared to traditional approaches. This
validates MF-LNO as a versatile tool for surrogate modeling in parametric PDEs,
offering significant improvements in data efficiency and uncertainty-aware
prediction.
|
2502.00552
|
Optimal Sensor Placement in Power Transformers Using Physics-Informed
Neural Networks
|
cs.LG cs.SY eess.SY
|
Our work aims at simulating and predicting the temperature conditions inside
a power transformer using Physics-Informed Neural Networks (PINNs). The
predictions obtained are then used to determine the optimal placement for
temperature sensors inside the transformer under the constraint of a limited
number of sensors, enabling efficient performance monitoring. The method
consists of combining PINNs with Mixed Integer Optimization Programming to
obtain the optimal temperature reconstruction inside the transformer. First, we
extend our PINN model for the thermal modeling of power transformers to solve
the heat diffusion equation from 1D to 2D space. Finally, we construct an
optimal sensor placement model inside the transformer that can be applied to
problems in 1D and 2D.
|
2502.00557
|
Sampling Binary Data by Denoising through Score Functions
|
stat.ML cs.LG
|
Gaussian smoothing combined with a probabilistic framework for denoising via
the empirical Bayes formalism, i.e., the Tweedie-Miyasawa formula (TMF), are
the two key ingredients in the success of score-based generative models in
Euclidean spaces. Smoothing holds the key for easing the problem of learning
and sampling in high dimensions, denoising is needed for recovering the
original signal, and TMF ties these together via the score function of noisy
data. In this work, we extend this paradigm to the problem of learning and
sampling the distribution of binary data on the Boolean hypercube by adopting
Bernoulli noise, instead of Gaussian noise, as a smoothing device. We first
derive a TMF-like expression for the optimal denoiser for the Hamming loss,
where a score function naturally appears. Sampling noisy binary data is then
achieved using a Langevin-like sampler which we theoretically analyze for
different noise levels. At high Bernoulli noise levels sampling becomes easy,
akin to log-concave sampling in Euclidean spaces. In addition, we extend the
sequential multi-measurement sampling of Saremi et al. (2024) to the binary
setting where we can bring the "effective noise" down by sampling multiple
noisy measurements at a fixed noise level, without the need for continuous-time
stochastic processes. We validate our formalism and theoretical findings by
experiments on synthetic data and binarized images.
|
2502.00558
|
Asynchronous Cooperative Multi-Agent Reinforcement Learning with Limited
Communication
|
cs.MA
|
We consider the problem setting in which multiple autonomous agents must
cooperatively navigate and perform tasks in an unknown,
communication-constrained environment. Traditional multi-agent reinforcement
learning (MARL) approaches assume synchronous communications and perform poorly
in such environments. We propose AsynCoMARL, an asynchronous MARL approach that
uses graph transformers to learn communication protocols from dynamic graphs.
AsynCoMARL can accommodate infrequent and asynchronous communications between
agents, with edges of the graph only forming when agents communicate with each
other. We show that AsynCoMARL achieves similar success and collision rates as
leading baselines, despite 26\% fewer messages being passed between agents.
|
2502.00559
|
Deep learning model for ECG reconstruction reveals the information
content of ECG leads
|
eess.SP cs.LG
|
This study introduces a deep learning model based on the U-net architecture
to reconstruct missing leads in electrocardiograms (ECGs). Using publicly
available datasets, the model was trained to regenerate 12-lead ECG data from
reduced lead configurations, demonstrating high accuracy in lead
reconstruction. The results highlight the ability of the model to quantify the
information content of each ECG lead and their inter-lead correlations. This
has significant implications for optimizing lead selection in diagnostic
scenarios, particularly in settings where full 12-lead ECGs are impractical.
Additionally, the study provides insights into the physiological underpinnings
of ECG signals and their propagation. The findings pave the way for
advancements in telemedicine, portable ECG devices, and personalized cardiac
diagnostics by reducing redundancy and enhancing signal interpretation.
|
2502.00562
|
Assessment of ChatGPT for Engineering Statics Analysis
|
cs.CE
|
Large language models (LLMs) such as OpenAI's ChatGPT hold potential for
automating engineering analysis, yet their reliability in solving multi-step
statics problems remains uncertain. This study evaluates the performance of
ChatGPT-4o and ChatGPT-o1-preview on foundational statics tasks, from simple
calculations of Newton's second law of motion to beam and truss analyses and
compares their results to first-year engineering students on a typical statics
exam. To enhance accuracy, we developed a Custom GPT, embedding refined prompts
directly into its instructions. This optimized model achieved an 82% score,
surpassing the 75% student average, demonstrating the impact of tailored
guidance. Despite these improvements, LLMs continued to exhibit errors in
nuanced or open-ended problems, such as misidentifying tension and compression
in truss members. These findings highlight both the promise and current
limitations of AI in structural analysis, emphasizing the need for improved
reasoning, multimodal capabilities, and targeted training data for future
AI-driven automation in civil and mechanical engineering.
|
2502.00563
|
Complex Wavelet Mutual Information Loss: A Multi-Scale Loss Function for
Semantic Segmentation
|
cs.CV eess.IV
|
Recent advancements in deep neural networks have significantly enhanced the
performance of semantic segmentation. However, class imbalance and instance
imbalance remain persistent challenges, where smaller instances and thin
boundaries are often overshadowed by larger structures. To address the
multiscale nature of segmented objects, various models have incorporated
mechanisms such as spatial attention and feature pyramid networks. Despite
these advancements, most loss functions are still primarily pixel-wise, while
regional and boundary-focused loss functions often incur high computational
costs or are restricted to small-scale regions. To address this limitation, we
propose complex wavelet mutual information (CWMI) loss, a novel loss function
that leverages mutual information from subband images decomposed by a complex
steerable pyramid. The complex steerable pyramid captures features across
multiple orientations and preserves structural similarity across scales.
Meanwhile, mutual information is well-suited for capturing high-dimensional
directional features and exhibits greater noise robustness. Extensive
experiments on diverse segmentation datasets demonstrate that CWMI loss
achieves significant improvements in both pixel-wise accuracy and topological
metrics compared to state-of-the-art methods, while introducing minimal
computational overhead. The code is available at
https://anonymous.4open.science/r/CWMI-83B7/
|
2502.00567
|
Lessons for GenAI Literacy From a Field Study of Human-GenAI
Augmentation in the Workplace
|
cs.CY cs.AI
|
Generative artificial intelligence (GenAI) is increasingly becoming a part of
work practices across the technology industry and being used across a range of
industries. This has necessitated the need to better understand how GenAI is
being used by professionals in the field so that we can better prepare students
for the workforce. An improved understanding of the use of GenAI in practice
can help provide guidance on the design of GenAI literacy efforts including how
to integrate it within courses and curriculum, what aspects of GenAI to teach,
and even how to teach it. This paper presents a field study that compares the
use of GenAI across three different functions - product development, software
engineering, and digital content creation - to identify how GenAI is currently
being used in the industry. This study takes a human augmentation approach with
a focus on human cognition and addresses three research questions: how is GenAI
augmenting work practices; what knowledge is important and how are workers
learning; and what are the implications for training the future workforce.
Findings show a wide variance in the use of GenAI and in the level of computing
knowledge of users. In some industries GenAI is being used in a highly
technical manner with deployment of fine-tuned models across domains. Whereas
in others, only off-the-shelf applications are being used for generating
content. This means that the need for what to know about GenAI varies, and so
does the background knowledge needed to utilize it. For the purposes of
teaching and learning, our findings indicated that different levels of GenAI
understanding needs to be integrated into courses. From a faculty perspective,
the work has implications for training faculty so that they are aware of the
advances and how students are possibly, as early adopters, already using GenAI
to augment their learning practices.
|
2502.00568
|
Generating crossmodal gene expression from cancer histopathology
improves multimodal AI predictions
|
cs.CV cs.AI cs.LG
|
Emerging research has highlighted that artificial intelligence based
multimodal fusion of digital pathology and transcriptomic features can improve
cancer diagnosis (grading/subtyping) and prognosis (survival risk) prediction.
However, such direct fusion for joint decision is impractical in real clinical
settings, where histopathology is still the gold standard for diagnosis and
transcriptomic tests are rarely requested, at least in the public healthcare
system. With our novel diffusion based crossmodal generative AI model PathGen,
we show that genomic expressions synthesized from digital histopathology
jointly predicts cancer grading and patient survival risk with high accuracy
(state-of-the-art performance), certainty (through conformal coverage
guarantee) and interpretability (through distributed attention maps). PathGen
code is available for open use by the research community through GitHub at
https://github.com/Samiran-Dey/PathGen.
|
2502.00571
|
Contrastive Forward-Forward: A Training Algorithm of Vision Transformer
|
cs.CV cs.LG
|
Although backpropagation is widely accepted as a training algorithm for
artificial neural networks, researchers are always looking for inspiration from
the brain to find ways with potentially better performance. Forward-Forward is
a new training algorithm that is more similar to what occurs in the brain,
although there is a significant performance gap compared to backpropagation. In
the Forward-Forward algorithm, the loss functions are placed after each layer,
and the updating of a layer is done using two local forward passes and one
local backward pass. Forward-Forward is in its early stages and has been
designed and evaluated on simple multi-layer perceptron networks to solve image
classification tasks. In this work, we have extended the use of this algorithm
to a more complex and modern network, namely the Vision Transformer. Inspired
by insights from contrastive learning, we have attempted to revise this
algorithm, leading to the introduction of Contrastive Forward-Forward.
Experimental results show that our proposed algorithm performs significantly
better than the baseline Forward-Forward leading to an increase of up to 10% in
accuracy and boosting the convergence speed by 5 to 20 times on Vision
Transformer. Furthermore, if we take Cross Entropy as the baseline loss
function in backpropagation, it will be demonstrated that the proposed
modifications to the baseline Forward-Forward reduce its performance gap
compared to backpropagation on Vision Transformer, and even outperforms it in
certain conditions, such as inaccurate supervision.
|
2502.00575
|
DeepUKF-VIN: Adaptively-tuned Deep Unscented Kalman Filter for 3D
Visual-Inertial Navigation based on IMU-Vision-Net
|
cs.RO cs.SY eess.SY
|
This paper addresses the challenge of estimating the orientation, position,
and velocity of a vehicle operating in three-dimensional (3D) space with six
degrees of freedom (6-DoF). A Deep Learning-based Adaptation Mechanism (DLAM)
is proposed to adaptively tune the noise covariance matrices of Kalman-type
filters for the Visual-Inertial Navigation (VIN) problem, leveraging
IMU-Vision-Net. Subsequently, an adaptively tuned Deep Learning Unscented
Kalman Filter for 3D VIN (DeepUKF-VIN) is introduced to utilize the proposed
DLAM, thereby robustly estimating key navigation components, including
orientation, position, and linear velocity. The proposed DeepUKF-VIN integrates
data from onboard sensors, specifically an inertial measurement unit (IMU) and
visual feature points extracted from a camera, and is applicable for GPS-denied
navigation. Its quaternion-based design effectively captures navigation
nonlinearities and avoids the singularities commonly encountered with
Euler-angle-based filters. Implemented in discrete space, the DeepUKF-VIN
facilitates practical filter deployment. The filter's performance is evaluated
using real-world data collected from an IMU and a stereo camera at low sampling
rates. The results demonstrate filter stability and rapid attenuation of
estimation errors, highlighting its high estimation accuracy. Furthermore,
comparative testing against the standard Unscented Kalman Filter (UKF) in two
scenarios consistently shows superior performance across all navigation
components, thereby validating the efficacy and robustness of the proposed
DeepUKF-VIN. Keywords: Deep Learning, Unscented Kalman Filter, Adaptive tuning,
Estimation, Navigation, Unmanned Aerial Vehicle, Sensor-fusion.
|
2502.00577
|
Understanding Multimodal LLMs Under Distribution Shifts: An
Information-Theoretic Approach
|
cs.AI cs.CL cs.LG
|
Multimodal large language models (MLLMs) have shown promising capabilities
but struggle under distribution shifts, where evaluation data differ from
instruction tuning distributions. Although previous works have provided
empirical evaluations, we argue that establishing a formal framework that can
characterize and quantify the risk of MLLMs is necessary to ensure the safe and
reliable application of MLLMs in the real world. By taking an
information-theoretic perspective, we propose the first theoretical framework
that enables the quantification of the maximum risk of MLLMs under distribution
shifts. Central to our framework is the introduction of Effective Mutual
Information (EMI), a principled metric that quantifies the relevance between
input queries and model responses. We derive an upper bound for the EMI
difference between in-distribution (ID) and out-of-distribution (OOD) data,
connecting it to visual and textual distributional discrepancies. Extensive
experiments on real benchmark datasets, spanning 61 shift scenarios empirically
validate our theoretical insights.
|
2502.00580
|
Defense Against the Dark Prompts: Mitigating Best-of-N Jailbreaking with
Prompt Evaluation
|
cs.CR cs.AI cs.CL cs.CY
|
Recent work showed Best-of-N (BoN) jailbreaking using repeated use of random
augmentations (such as capitalization, punctuation, etc) is effective against
all major large language models (LLMs). We have found that $100\%$ of the BoN
paper's successful jailbreaks (confidence interval $[99.65\%, 100.00\%]$) and
$99.8\%$ of successful jailbreaks in our replication (confidence interval
$[99.28\%, 99.98\%]$) were blocked with our Defense Against The Dark Prompts
(DATDP) method. The DATDP algorithm works by repeatedly utilizing an evaluation
LLM to evaluate a prompt for dangerous or manipulative behaviors--unlike some
other approaches, DATDP also explicitly looks for jailbreaking attempts--until
a robust safety rating is generated. This success persisted even when utilizing
smaller LLMs to power the evaluation (Claude and LLaMa-3-8B-instruct proved
almost equally capable). These results show that, though language models are
sensitive to seemingly innocuous changes to inputs, they seem also capable of
successfully evaluating the dangers of these inputs. Versions of DATDP can
therefore be added cheaply to generative AI systems to produce an immediate
significant increase in safety.
|
2502.00581
|
Trajectory Planning and Control for Differentially Flat Fixed-Wing
Aerial Systems
|
cs.RO
|
Efficient real-time trajectory planning and control for fixed-wing unmanned
aerial vehicles is challenging due to their non-holonomic nature, complex
dynamics, and the additional uncertainties introduced by unknown aerodynamic
effects. In this paper, we present a fast and efficient real-time trajectory
planning and control approach for fixed-wing unmanned aerial vehicles,
leveraging the differential flatness property of fixed-wing aircraft in
coordinated flight conditions to generate dynamically feasible trajectories.
The approach provides the ability to continuously replan trajectories, which we
show is useful to dynamically account for the curvature constraint as the
aircraft advances along its path. Extensive simulations and real-world
experiments validate our approach, showcasing its effectiveness in generating
trajectories even in challenging conditions for small FW such as wind
disturbances.
|
2502.00582
|
Uniform-in-time weak propagation of chaos for consensus-based
optimization
|
math.OC cs.LG math.PR
|
We study the uniform-in-time weak propagation of chaos for the
consensus-based optimization (CBO) method on a bounded searching domain. We
apply the methodology for studying long-time behaviors of interacting particle
systems developed in the work of Delarue and Tse (ArXiv:2104.14973). Our work
shows that the weak error has order $O(N^{-1})$ uniformly in time, where $N$
denotes the number of particles. The main strategy behind the proofs are the
decomposition of the weak errors using the linearized Fokker-Planck equations
and the exponential decay of their Sobolev norms. Consequently, our result
leads to the joint convergence of the empirical distribution of the CBO
particle system to the Dirac-delta distribution at the global minimizer in
population size and running time in Wasserstein-type metrics.
|
2502.00583
|
Data-Driven Mispronunciation Pattern Discovery for Robust Speech
Recognition
|
cs.CL cs.SD eess.AS
|
Recent advancements in machine learning have significantly improved speech
recognition, but recognizing speech from non-fluent or accented speakers
remains a challenge. Previous efforts, relying on rule-based pronunciation
patterns, have struggled to fully capture non-native errors. We propose two
data-driven approaches using speech corpora to automatically detect
mispronunciation patterns. By aligning non-native phones with their native
counterparts using attention maps, we achieved a 5.7% improvement in speech
recognition on native English datasets and a 12.8% improvement for non-native
English speakers, particularly Korean speakers. Our method offers practical
advancements for robust Automatic Speech Recognition (ASR) systems particularly
for situations where prior linguistic knowledge is not applicable.
|
2502.00585
|
Converting Transformers into DGNNs Form
|
cs.LG cs.CL
|
Recent advances in deep learning have established Transformer architectures
as the predominant modeling paradigm. Central to the success of Transformers is
the self-attention mechanism, which scores the similarity between query and key
matrices to modulate a value matrix. This operation bears striking similarities
to digraph convolution, prompting an investigation into whether digraph
convolution could serve as an alternative to self-attention. In this study, we
formalize this concept by introducing a synthetic unitary digraph convolution
based on the digraph Fourier transform. The resulting model, which we term
Converter, effectively converts a Transformer into a Directed Graph Neural
Network (DGNN) form. We have tested Converter on Long-Range Arena benchmark,
long document classification, and DNA sequence-based taxonomy classification.
Our experimental results demonstrate that Converter achieves superior
performance while maintaining computational efficiency and architectural
simplicity, which establishes it as a lightweight yet powerful Transformer
variant.
|
2502.00587
|
Robust Knowledge Distillation in Federated Learning: Counteracting
Backdoor Attacks
|
cs.CR cs.AI
|
Federated Learning (FL) enables collaborative model training across multiple
devices while preserving data privacy. However, it remains susceptible to
backdoor attacks, where malicious participants can compromise the global model.
Existing defence methods are limited by strict assumptions on data
heterogeneity (Non-Independent and Identically Distributed data) and the
proportion of malicious clients, reducing their practicality and effectiveness.
To overcome these limitations, we propose Robust Knowledge Distillation (RKD),
a novel defence mechanism that enhances model integrity without relying on
restrictive assumptions. RKD integrates clustering and model selection
techniques to identify and filter out malicious updates, forming a reliable
ensemble of models. It then employs knowledge distillation to transfer the
collective insights from this ensemble to a global model. Extensive evaluations
demonstrate that RKD effectively mitigates backdoor threats while maintaining
high model performance, outperforming current state-of-the-art defence methods
across various scenarios.
|
2502.00592
|
M+: Extending MemoryLLM with Scalable Long-Term Memory
|
cs.CL
|
Equipping large language models (LLMs) with latent-space memory has attracted
increasing attention as they can extend the context window of existing language
models. However, retaining information from the distant past remains a
challenge. For example, MemoryLLM (Wang et al., 2024a), as a representative
work with latent-space memory, compresses past information into hidden states
across all layers, forming a memory pool of 1B parameters. While effective for
sequence lengths up to 16k tokens, it struggles to retain knowledge beyond 20k
tokens. In this work, we address this limitation by introducing M+, a
memory-augmented model based on MemoryLLM that significantly enhances long-term
information retention. M+ integrates a long-term memory mechanism with a
co-trained retriever, dynamically retrieving relevant information during text
generation. We evaluate M+ on diverse benchmarks, including long-context
understanding and knowledge retention tasks. Experimental results show that M+
significantly outperforms MemoryLLM and recent strong baselines, extending
knowledge retention from under 20k to over 160k tokens with similar GPU memory
overhead.
|
2502.00593
|
Dominated Novelty Search: Rethinking Local Competition in
Quality-Diversity
|
cs.NE cs.LG
|
Quality-Diversity is a family of evolutionary algorithms that generate
diverse, high-performing solutions through local competition principles
inspired by natural evolution. While research has focused on improving specific
aspects of Quality-Diversity algorithms, surprisingly little attention has been
paid to investigating alternative formulations of local competition itself --
the core mechanism distinguishing Quality-Diversity from traditional
evolutionary algorithms. Most approaches implement local competition through
explicit collection mechanisms like fixed grids or unstructured archives,
imposing artificial constraints that require predefined bounds or hard-to-tune
parameters. We show that Quality-Diversity methods can be reformulated as
Genetic Algorithms where local competition occurs through fitness
transformations rather than explicit collection mechanisms. Building on this
insight, we introduce Dominated Novelty Search, a Quality-Diversity algorithm
that implements local competition through dynamic fitness transformations,
eliminating the need for predefined bounds or parameters. Our experiments show
that Dominated Novelty Search significantly outperforms existing approaches
across standard Quality-Diversity benchmarks, while maintaining its advantage
in challenging scenarios like high-dimensional and unsupervised spaces.
|
2502.00594
|
Fast Vision Mamba: Pooling Spatial Dimensions for Accelerated Processing
|
cs.CV cs.AI
|
State Space Models (SSMs) with selective scan (Mamba) have been adapted into
efficient vision models. Mamba, unlike Vision Transformers, achieves linear
complexity for token interactions through a recurrent hidden state process.
This sequential processing is enhanced by a parallel scan algorithm, which
reduces the computational time of recurrent steps from $L$ sequential steps to
$log(L)$ parallel steps with respect to the number of input tokens ($L$). In
this work, we propose Fast Vision Mamba (FastVim), that further reduces the
computational time of the SSM block by reducing the number of recurrent steps
in Vision Mamba models while still retaining model performance. By alternately
pooling tokens along image dimensions across Mamba blocks, we obtain a
2$\times$ reduction in the number of parallel steps in SSM block. Our model
offers up to $72.5\%$ speedup in inference speed compared to baseline Vision
Mamba models on high resolution (2048$\times$2048) images. Our experiments
demonstrate state-of-the-art performance with dramatically improved throughput
in a range of tasks such as image classification, cell perturbation prediction,
segmentation, and object detection. Code is made available at
https://github.com/insitro/FastVim
|
2502.00595
|
RPGBENCH: Evaluating Large Language Models as Role-Playing Game Engines
|
cs.CL cs.AI
|
We present RPGBench, the first benchmark designed to evaluate large language
models (LLMs) as text-based role-playing game (RPG) engines. RPGBench comprises
two core tasks: Game Creation (GC) and Game Simulation (GS). In GC, an LLM must
craft a valid and playable RPG world using a structured event-state
representation, ensuring logical coherence and proper termination conditions.
In GS, the LLM simulates interactive gameplay across multiple rounds while
consistently updating states and enforcing game rules. To comprehensively
assess performance, RPGBench integrates objective and subjective evaluation
methodologies. Objective measures verify adherence to event mechanics and check
variable updates without requiring human intervention. Subjective measures,
such as content interestingness, action quality, and role-playing capability,
are evaluated via an LLM-as-a-judge framework, where a strong LLM grades each
candidate's outputs. Empirical results demonstrate that state-of-the-art LLMs
can produce engaging stories but often struggle to implement consistent,
verifiable game mechanics, particularly in long or complex scenarios. By
combining structured, rule-based assessments with LLM-based judgments, RPGBench
provides a new standard for evaluating how well LLMs can balance creativity,
coherence, and complexity in text-based RPGs, opening avenues for more
immersive and controllable interactive storytelling.
|
2502.00596
|
Rewinding the byte trail of the White Whale
|
cs.IT math.IT
|
Motivated by a popular code golf challenge, we review some key ideas from
information theory and discuss how to efficiently compress a streaming file
with an acceptable error rate.
|
2502.00601
|
Enhancing Offline Reinforcement Learning with Curriculum Learning-Based
Trajectory Valuation
|
cs.LG
|
The success of deep reinforcement learning (DRL) relies on the availability
and quality of training data, often requiring extensive interactions with
specific environments. In many real-world scenarios, where data collection is
costly and risky, offline reinforcement learning (RL) offers a solution by
utilizing data collected by domain experts and searching for a
batch-constrained optimal policy. This approach is further augmented by
incorporating external data sources, expanding the range and diversity of data
collection possibilities. However, existing offline RL methods often struggle
with challenges posed by non-matching data from these external sources. In this
work, we specifically address the problem of source-target domain mismatch in
scenarios involving mixed datasets, characterized by a predominance of source
data generated from random or suboptimal policies and a limited amount of
target data generated from higher-quality policies. To tackle this problem, we
introduce Transition Scoring (TS), a novel method that assigns scores to
transitions based on their similarity to the target domain, and propose
Curriculum Learning-Based Trajectory Valuation (CLTV), which effectively
leverages these transition scores to identify and prioritize high-quality
trajectories through a curriculum learning approach. Our extensive experiments
across various offline RL methods and MuJoCo environments, complemented by
rigorous theoretical analysis, demonstrate that CLTV enhances the overall
performance and transferability of policies learned by offline RL algorithms.
|
2502.00602
|
Mitigating Heterogeneous Token Overfitting in LLM Knowledge Editing
|
cs.CL cs.LG
|
Large language models (LLMs) have achieved remarkable performance on various
natural language tasks. However, they are trained on static corpora and their
knowledge can become outdated quickly in the fast-changing world. This
motivates the development of knowledge editing (KE) to update specific
knowledge in LLMs without changing unrelated others or compromising their
pre-trained capabilities. Previous efforts sought to update a small amount of
parameters of a LLM and proved effective for making selective updates.
Nonetheless, the edited LLM often exhibits degraded ability to reason about the
new knowledge. In this work, we identify a key issue: heterogeneous token
overfitting (HTO), where the LLM overfits different tokens in the provided
knowledge at varying rates. To tackle this, we propose OVERTONE, a token-level
smoothing method that mitigates HTO by adaptively refining the target
distribution. Theoretically, OVERTONE offers better parameter updates with
negligible computation overhead. It also induces an implicit DPO but does not
require preference data pairs. Extensive experiments across four editing
methods, two LLMs, and diverse scenarios demonstrate the effectiveness and
versatility of our method.
|
2502.00604
|
Gradient Alignment in Physics-informed Neural Networks: A Second-Order
Optimization Perspective
|
cs.LG cs.AI physics.comp-ph
|
Multi-task learning through composite loss functions is fundamental to modern
deep learning, yet optimizing competing objectives remains challenging. We
present new theoretical and practical approaches for addressing directional
conflicts between loss terms, demonstrating their effectiveness in
physics-informed neural networks (PINNs) where such conflicts are particularly
challenging to resolve. Through theoretical analysis, we demonstrate how these
conflicts limit first-order methods and show that second-order optimization
naturally resolves them through implicit gradient alignment. We prove that
SOAP, a recently proposed quasi-Newton method, efficiently approximates the
Hessian preconditioner, enabling breakthrough performance in PINNs:
state-of-the-art results on 10 challenging PDE benchmarks, including the first
successful application to turbulent flows with Reynolds numbers up to 10,000,
with 2-10x accuracy improvements over existing methods. We also introduce a
novel gradient alignment score that generalizes cosine similarity to multiple
gradients, providing a practical tool for analyzing optimization dynamics. Our
findings establish frameworks for understanding and resolving gradient
conflicts, with broad implications for optimization beyond scientific
computing.
|
2502.00605
|
The Query/Hit Model for Sequential Hypothesis Testing
|
cs.IT cs.LG eess.SP math.IT
|
This work introduces the Query/Hit (Q/H) learning model. The setup consists
of two agents. One agent, Alice, has access to a streaming source, while the
other, Bob, does not have direct access to the source. Communication occurs
through sequential Q/H pairs: Bob sends a sequence of source symbols (queries),
and Alice responds with the waiting time until each query appears in the source
stream (hits). This model is motivated by scenarios with communication,
computation, and privacy constraints that limit real-time access to the source.
The error exponent for sequential hypothesis testing under the Q/H model is
characterized, and a querying strategy, the Dynamic Scout-Sentinel Algorithm
(DSSA), is proposed. The strategy employs a mutual information neural estimator
to compute the error exponent associated with each query and to select the
query with the highest efficiency. Extensive empirical evaluations on both
synthetic and real-world datasets -- including mouse movement trajectories,
typesetting patterns, and touch-based user interactions -- are provided to
evaluate the performance of the proposed strategy in comparison with baselines,
in terms of probability of error, query choice, and time-to-detection.
|
2502.00607
|
PAC Learning is just Bipartite Matching (Sort of)
|
cs.LG cs.DS stat.ML
|
The main goal of this article is to convince you, the reader, that supervised
learning in the Probably Approximately Correct (PAC) model is closely related
to -- of all things -- bipartite matching! En-route from PAC learning to
bipartite matching, I will overview a particular transductive model of
learning, and associated one-inclusion graphs, which can be viewed as a
generalization of some of the hat puzzles that are popular in recreational
mathematics. Whereas this transductive model is far from new, it has recently
seen a resurgence of interest as a tool for tackling deep questions in learning
theory. A secondary purpose of this article could be as a (biased) tutorial on
the connections between the PAC and transductive models of learning.
|
2502.00611
|
Enhancing Code Consistency in AI Research with Large Language Models and
Retrieval-Augmented Generation
|
cs.SE cs.AI
|
Ensuring that code accurately reflects the algorithms and methods described
in research papers is critical for maintaining credibility and fostering trust
in AI research. This paper presents a novel system designed to verify code
implementations against the algorithms and methodologies outlined in
corresponding research papers. Our system employs Retrieval-Augmented
Generation to extract relevant details from both the research papers and code
bases, followed by a structured comparison using Large Language Models. This
approach improves the accuracy and comprehensiveness of code implementation
verification while contributing to the transparency, explainability, and
reproducibility of AI research. By automating the verification process, our
system reduces manual effort, enhances research credibility, and ultimately
advances the state of the art in code verification.
|
2502.00612
|
Using Causality for Enhanced Prediction of Web Traffic Time Series
|
cs.LG cs.NI
|
Predicting web service traffic has significant social value, as it can be
applied to various practical scenarios, including but not limited to dynamic
resource scaling, load balancing, system anomaly detection, service-level
agreement compliance, and fraud detection. Web service traffic is characterized
by frequent and drastic fluctuations over time and are influenced by
heterogeneous web user behaviors, making accurate prediction a challenging
task. Previous research has extensively explored statistical approaches, and
neural networks to mine features from preceding service traffic time series for
prediction. However, these methods have largely overlooked the causal
relationships between services. Drawing inspiration from causality in
ecological systems, we empirically recognize the causal relationships between
web services. To leverage these relationships for improved web service traffic
prediction, we propose an effective neural network module, CCMPlus, designed to
extract causal relationship features across services. This module can be
seamlessly integrated with existing time series models to consistently enhance
the performance of web service traffic predictions. We theoretically justify
that the causal correlation matrix generated by the CCMPlus module captures
causal relationships among services. Empirical results on real-world datasets
from Microsoft Azure, Alibaba Group, and Ant Group confirm that our method
surpasses state-of-the-art approaches in Mean Squared Error (MSE) and Mean
Absolute Error (MAE) for predicting service traffic time series. These findings
highlight the efficacy of leveraging causal relationships for improved
predictions.
|
2502.00617
|
Efficient Language Modeling for Low-Resource Settings with Hybrid
RNN-Transformer Architectures
|
cs.CL
|
Transformer-based language models have recently been at the forefront of
active research in text generation. However, these models' advances come at the
price of prohibitive training costs, with parameter counts in the billions and
compute requirements measured in petaflop/s-decades. In this paper, we
investigate transformer-based architectures for improving model performance in
a low-data regime by selectively replacing attention layers with feed-forward
and quasi-recurrent neural network layers. We test these architectures on the
standard Enwik8 and Wikitext-103 corpora. Our results show that our reduced
architectures outperform existing models with a comparable number of
parameters, and obtain comparable performance to larger models while
significantly reducing the number of parameters.
|
2502.00618
|
DesCLIP: Robust Continual Adaptation via General Attribute Descriptions
for Pretrained Vision-Language Models
|
cs.CV cs.AI
|
Continual adaptation of vision-language models (VLMs) focuses on leveraging
cross-modal pretrained knowledge to incrementally adapt for expanding
downstream tasks and datasets, while tackling the challenge of knowledge
forgetting. Existing research often focuses on connecting visual features with
specific class text in downstream tasks, overlooking the latent relationships
between general and specialized knowledge. Our findings reveal that forcing
models to optimize inappropriate visual-text matches exacerbates forgetting of
VLMs. To tackle this issue, we propose DesCLIP, which leverages general
attribute (GA) descriptions to guide the understanding of specific class
objects, enabling VLMs to establish robust \textit{vision-GA-class} trilateral
associations rather than relying solely on \textit{vision-class} connections.
Specifically, we introduce a language assistant to generate concrete GA
description candidates via proper request prompts. Then, an anchor-based
embedding filter is designed to obtain highly relevant GA description
embeddings, which are leveraged as the paired text embeddings for
visual-textual instance matching, thereby tuning the visual encoder.
Correspondingly, the class text embeddings are gradually calibrated to align
with these shared GA description embeddings. Extensive experiments demonstrate
the advancements and efficacy of our proposed method, with comprehensive
empirical evaluations highlighting its superior performance compared to
existing pretrained and VLM-based continual learning methods.
|
2502.00619
|
Distribution-aware Fairness Learning in Medical Image Segmentation From
A Control-Theoretic Perspective
|
eess.IV cs.AI cs.CV
|
Ensuring fairness in medical image segmentation is critical due to biases in
imbalanced clinical data acquisition caused by demographic attributes (e.g.,
age, sex, race) and clinical factors (e.g., disease severity). To address these
challenges, we introduce Distribution-aware Mixture of Experts (dMoE), inspired
by optimal control theory. We provide a comprehensive analysis of its
underlying mechanisms and clarify dMoE's role in adapting to heterogeneous
distributions in medical image segmentation. Furthermore, we integrate dMoE
into multiple network architectures, demonstrating its broad applicability
across diverse medical image analysis tasks. By incorporating demographic and
clinical factors, dMoE achieves state-of-the-art performance on two 2D
benchmark datasets and a 3D in-house dataset. Our results highlight the
effectiveness of dMoE in mitigating biases from imbalanced distributions,
offering a promising approach to bridging control theory and medical image
segmentation within fairness learning paradigms. The source code will be made
available.
|
2502.00620
|
Representations Shape Weak-to-Strong Generalization: Theoretical
Insights and Empirical Predictions
|
cs.LG cs.AI
|
Weak-to-Strong Generalization (W2SG), where a weak model supervises a
stronger one, serves as an important analogy for understanding how humans might
guide superhuman intelligence in the future. Promising empirical results
revealed that a strong model can surpass its weak supervisor. While recent work
has offered theoretical insights into this phenomenon, a clear understanding of
the interactions between weak and strong models that drive W2SG remains
elusive. We investigate W2SG through a theoretical lens and show that it can be
characterized using kernels derived from the principal components of weak and
strong models' internal representations. These kernels can be used to define a
space that, at a high level, captures what the weak model is unable to learn
but is learnable by the strong model. The projection of labels onto this space
quantifies how much the strong model falls short of its full potential due to
weak supervision. This characterization also provides insights into how certain
errors in weak supervision can be corrected by the strong model, regardless of
overfitting. Our theory has significant practical implications, providing a
representation-based metric that predicts W2SG performance trends without
requiring labels, as shown in experiments on molecular predictions with
transformers and 5 NLP tasks involving 52 LLMs.
|
2502.00622
|
Strengthening Generative Robot Policies through Predictive World
Modeling
|
cs.RO cs.CV cs.LG
|
We present generative predictive control (GPC), a learning control framework
that (i) clones a generative diffusion-based policy from expert demonstrations,
(ii) trains a predictive action-conditioned world model from both expert
demonstrations and random explorations, and (iii) synthesizes an online planner
that ranks and optimizes the action proposals from (i) by looking ahead into
the future using the world model from (ii). Crucially, we show that conditional
video diffusion allows learning (near) physics-accurate visual world models and
enable robust visual foresight. Focusing on planar pushing with rich contact
and collision, we show GPC dominates behavior cloning across state-based and
vision-based, simulated and real-world experiments.
|
2502.00627
|
Discord Unveiled: A Comprehensive Dataset of Public Communication
(2015-2024)
|
cs.SI cs.DB
|
Discord has evolved from a gaming-focused communication tool into a versatile
platform supporting diverse online communities. Despite its large user base and
active public servers, academic research on Discord remains limited due to data
accessibility challenges. This paper introduces Discord Unveiled: A
Comprehensive Dataset of Public Communication (2015-2024), the most extensive
Discord public server's data to date. The dataset comprises over 2.05 billion
messages from 4.74 million users across 3,167 public servers, representing
approximately 10% of servers listed in Discord's Discovery feature. Spanning
from Discord's launch in 2015 to the end of 2024, it offers a robust temporal
and thematic framework for analyzing decentralized moderation, community
governance, information dissemination, and social dynamics. Data was collected
through Discord's public API, adhering to ethical guidelines and privacy
standards via anonymization techniques. Organized into structured JSON files,
the dataset facilitates seamless integration with computational social science
methodologies. Preliminary analyses reveal significant trends in user
engagement, bot utilization, and linguistic diversity, with English
predominating alongside substantial representations of Spanish, French, and
Portuguese. Additionally, prevalent community themes such as social, art,
music, and memes highlight Discord's expansion beyond its gaming origins.
|
2502.00629
|
Advanced Weakly-Supervised Formula Exploration for Neuro-Symbolic
Mathematical Reasoning
|
cs.AI cs.LG
|
In recent years, neuro-symbolic methods have become a popular and powerful
approach that augments artificial intelligence systems with the capability to
perform abstract, logical, and quantitative deductions with enhanced precision
and controllability. Recent studies successfully performed symbolic reasoning
by leveraging various machine learning models to explicitly or implicitly
predict intermediate labels that provide symbolic instructions. However, these
intermediate labels are not always prepared for every task as a part of
training data, and pre-trained models, represented by Large Language Models
(LLMs), also do not consistently generate valid symbolic instructions with
their intrinsic knowledge. On the other hand, existing work developed
alternative learning techniques that allow the learning system to autonomously
uncover optimal symbolic instructions. Nevertheless, their performance also
exhibits limitations when faced with relatively huge search spaces or more
challenging reasoning problems. In view of this, in this work, we put forward
an advanced practice for neuro-symbolic reasoning systems to explore the
intermediate labels with weak supervision from problem inputs and final
outputs. Our experiments on the Mathematics dataset illustrated the
effectiveness of our proposals from multiple aspects.
|
2502.00630
|
Self-Prompt SAM: Medical Image Segmentation via Automatic Prompt SAM
Adaptation
|
cs.CV
|
Segment Anything Model (SAM) has demonstrated impressive zero-shot
performance and brought a range of unexplored capabilities to natural image
segmentation tasks. However, as a very important branch of image segmentation,
the performance of SAM remains uncertain when applied to medical image
segmentation due to the significant differences between natural images and
medical images. Meanwhile, it is harsh to meet the SAM's requirements of extra
prompts provided, such as points or boxes to specify medical regions. In this
paper, we propose a novel self-prompt SAM adaptation framework for medical
image segmentation, named Self-Prompt-SAM. We design a multi-scale prompt
generator combined with the image encoder in SAM to generate auxiliary masks.
Then, we use the auxiliary masks to generate bounding boxes as box prompts and
use Distance Transform to select the most central points as point prompts.
Meanwhile, we design a 3D depth-fused adapter (DfusedAdapter) and inject the
DFusedAdapter into each transformer in the image encoder and mask decoder to
enable pre-trained 2D SAM models to extract 3D information and adapt to 3D
medical images. Extensive experiments demonstrate that our method achieves
state-of-the-art performance and outperforms nnUNet by 2.3% on AMOS2022, 1.6%
on ACDCand 0.5% on Synapse datasets.
|
2502.00631
|
MedConv: Convolutions Beat Transformers on Long-Tailed Bone Density
Prediction
|
cs.CV
|
Bone density prediction via CT scans to estimate T-scores is crucial,
providing a more precise assessment of bone health compared to traditional
methods like X-ray bone density tests, which lack spatial resolution and the
ability to detect localized changes. However, CT-based prediction faces two
major challenges: the high computational complexity of transformer-based
architectures, which limits their deployment in portable and clinical settings,
and the imbalanced, long-tailed distribution of real-world hospital data that
skews predictions. To address these issues, we introduce MedConv, a
convolutional model for bone density prediction that outperforms transformer
models with lower computational demands. We also adapt Bal-CE loss and post-hoc
logit adjustment to improve class balance. Extensive experiments on our
AustinSpine dataset shows that our approach achieves up to 21% improvement in
accuracy and 20% in ROC AUC over previous state-of-the-art methods.
|
2502.00633
|
Lipschitz Lifelong Monte Carlo Tree Search for Mastering Non-Stationary
Tasks
|
cs.AI
|
Monte Carlo Tree Search (MCTS) has proven highly effective in solving complex
planning tasks by balancing exploration and exploitation using Upper Confidence
Bound for Trees (UCT). However, existing work have not considered MCTS-based
lifelong planning, where an agent faces a non-stationary series of tasks --
e.g., with varying transition probabilities and rewards -- that are drawn
sequentially throughout the operational lifetime. This paper presents LiZero
for Lipschitz lifelong planning using MCTS. We propose a novel concept of
adaptive UCT (aUCT) to transfer knowledge from a source task to the
exploration/exploitation of a new task, depending on both the Lipschitz
continuity between tasks and the confidence of knowledge in in Monte Carlo
action sampling. We analyze LiZero's acceleration factor in terms of improved
sampling efficiency and also develop efficient algorithms to compute aUCT in an
online fashion by both data-driven and model-based approaches, whose sampling
complexity and error bounds are also characterized. Experiment results show
that LiZero significantly outperforms existing MCTS and lifelong learning
baselines in terms of much faster convergence (3$\sim$4x) to optimal rewards.
Our results highlight the potential of LiZero to advance decision-making and
planning in dynamic real-world environments.
|
2502.00634
|
SimulPL: Aligning Human Preferences in Simultaneous Machine Translation
|
cs.CL cs.AI
|
Simultaneous Machine Translation (SiMT) generates translations while
receiving streaming source inputs. This requires the SiMT model to learn a
read/write policy, deciding when to translate and when to wait for more source
input. Numerous linguistic studies indicate that audiences in SiMT scenarios
have distinct preferences, such as accurate translations, simpler syntax, and
no unnecessary latency. Aligning SiMT models with these human preferences is
crucial to improve their performances. However, this issue still remains
unexplored. Additionally, preference optimization for SiMT task is also
challenging. Existing methods focus solely on optimizing the generated
responses, ignoring human preferences related to latency and the optimization
of read/write policy during the preference optimization phase. To address these
challenges, we propose Simultaneous Preference Learning (SimulPL), a preference
learning framework tailored for the SiMT task. In the SimulPL framework, we
categorize SiMT human preferences into five aspects: \textbf{translation
quality preference}, \textbf{monotonicity preference}, \textbf{key point
preference}, \textbf{simplicity preference}, and \textbf{latency preference}.
By leveraging the first four preferences, we construct human preference prompts
to efficiently guide GPT-4/4o in generating preference data for the SiMT task.
In the preference optimization phase, SimulPL integrates \textbf{latency
preference} into the optimization objective and enables SiMT models to improve
the read/write policy, thereby aligning with human preferences more
effectively. Experimental results indicate that SimulPL exhibits better
alignment with human preferences across all latency levels in
Zh$\rightarrow$En, De$\rightarrow$En and En$\rightarrow$Zh SiMT tasks. Our data
and code will be available at https://github.com/EurekaForNLP/SimulPL.
|
2502.00639
|
Zeroth-order Informed Fine-Tuning for Diffusion Model: A Recursive
Likelihood Ratio Optimizer
|
cs.CV cs.AI cs.LG stat.ML
|
The probabilistic diffusion model (DM), generating content by inferencing
through a recursive chain structure, has emerged as a powerful framework for
visual generation. After pre-training on enormous unlabeled data, the model
needs to be properly aligned to meet requirements for downstream applications.
How to efficiently align the foundation DM is a crucial task. Contemporary
methods are either based on Reinforcement Learning (RL) or truncated
Backpropagation (BP). However, RL and truncated BP suffer from low sample
efficiency and biased gradient estimation respectively, resulting in limited
improvement or, even worse, complete training failure. To overcome the
challenges, we propose the Recursive Likelihood Ratio (RLR) optimizer, a
zeroth-order informed fine-tuning paradigm for DM. The zeroth-order gradient
estimator enables the computation graph rearrangement within the recursive
diffusive chain, making the RLR's gradient estimator an unbiased one with the
lower variance than other methods. We provide theoretical guarantees for the
performance of the RLR. Extensive experiments are conducted on image and video
generation tasks to validate the superiority of the RLR. Furthermore, we
propose a novel prompt technique that is natural for the RLR to achieve a
synergistic effect.
|
2502.00640
|
CollabLLM: From Passive Responders to Active Collaborators
|
cs.AI
|
Large Language Models are typically trained with next-turn rewards, limiting
their ability to optimize for long-term interaction. As a result, they often
respond passively to ambiguous or open-ended user requests, failing to help
users reach their ultimate intents and leading to inefficient conversations. To
address these limitations, we introduce CollabLLM, a novel and general training
framework that enhances multiturn human-LLM collaboration. Its key innovation
is a collaborative simulation that estimates the long-term contribution of
responses using Multiturn-aware Rewards. By reinforcement fine-tuning these
rewards, CollabLLM goes beyond responding to user requests, and actively
uncovers user intent and offers insightful suggestions-a key step towards more
human-centered AI. We also devise a multiturn interaction benchmark with three
challenging tasks such as document creation. CollabLLM significantly
outperforms our baselines with averages of 18.5% higher task performance and
46.3% improved interactivity by LLM judges. Finally, we conduct a large user
study with 201 judges, where CollabLLM increases user satisfaction by 17.6% and
reduces user spent time by 10.4%.
|
2502.00641
|
Evaluating Small Language Models for News Summarization: Implications
and Factors Influencing Performance
|
cs.CL cs.AI
|
The increasing demand for efficient summarization tools in
resource-constrained environments highlights the need for effective solutions.
While large language models (LLMs) deliver superior summarization quality,
their high computational resource requirements limit practical use
applications. In contrast, small language models (SLMs) present a more
accessible alternative, capable of real-time summarization on edge devices.
However, their summarization capabilities and comparative performance against
LLMs remain underexplored. This paper addresses this gap by presenting a
comprehensive evaluation of 19 SLMs for news summarization across 2,000 news
samples, focusing on relevance, coherence, factual consistency, and summary
length. Our findings reveal significant variations in SLM performance, with
top-performing models such as Phi3-Mini and Llama3.2-3B-Ins achieving results
comparable to those of 70B LLMs while generating more concise summaries.
Notably, SLMs are better suited for simple prompts, as overly complex prompts
may lead to a decline in summary quality. Additionally, our analysis indicates
that instruction tuning does not consistently enhance the news summarization
capabilities of SLMs. This research not only contributes to the understanding
of SLMs but also provides practical insights for researchers seeking efficient
summarization solutions that balance performance and resource use.
|
2502.00645
|
General Coded Computing in a Probabilistic Straggler Regime
|
cs.DC cs.LG
|
Coded computing has demonstrated promising results in addressing straggler
resiliency in distributed computing systems. However, most coded computing
schemes are designed for exact computation, requiring the number of responding
servers to exceed a certain recovery threshold. Additionally, these schemes are
tailored for highly structured functions. Recently, new coded computing schemes
for general computing functions, where exact computation is replaced with
approximate computation, have emerged. In these schemes, the availability of
additional results corresponds to more accurate estimation of computational
tasks. This flexibility introduces new questions that need to be addressed.
This paper addresses the practically important scenario in the context of
general coded computing, where each server may become a straggler with a
probability $p$, independently from others. We theoretically analyze the
approximation error of two existing general coded computing schemes: Berrut
Approximate Coded Computing (BACC) and Learning Theoretic Coded Computing
(LeTCC). Under the probabilistic straggler configuration, we demonstrate that
the average approximation error for BACC and LeTCC converge to zero with the
rate of at least $\mathcal{O}(\log^3_{\frac{1}{p}}(N)\cdot{N^{-3}})$ and
$\mathcal{O}(\log^4_{\frac{1}{p}}(N)\cdot{N^{-2}})$, respectively. This is
perhaps surprising, as earlier results does not indicate a convergence when the
number of stragglers scales with the total number of servers $N$. However, in
this case, despite the average number of stragglers being $Np$, the
independence of servers in becoming stragglers allows the approximation error
to converge to zero. These theoretical results are validated through
experiments on various computing functions, including deep neural networks.
|
2502.00646
|
TrojanTime: Backdoor Attacks on Time Series Classification
|
cs.CR cs.AI cs.LG
|
Time Series Classification (TSC) is highly vulnerable to backdoor attacks,
posing significant security threats. Existing methods primarily focus on data
poisoning during the training phase, designing sophisticated triggers to
improve stealthiness and attack success rate (ASR). However, in practical
scenarios, attackers often face restrictions in accessing training data.
Moreover, it is a challenge for the model to maintain generalization ability on
clean test data while remaining vulnerable to poisoned inputs when data is
inaccessible. To address these challenges, we propose TrojanTime, a novel
two-step training algorithm. In the first stage, we generate a pseudo-dataset
using an external arbitrary dataset through target adversarial attacks. The
clean model is then continually trained on this pseudo-dataset and its poisoned
version. To ensure generalization ability, the second stage employs a carefully
designed training strategy, combining logits alignment and batch norm freezing.
We evaluate TrojanTime using five types of triggers across four TSC
architectures in UCR benchmark datasets from diverse domains. The results
demonstrate the effectiveness of TrojanTime in executing backdoor attacks while
maintaining clean accuracy. Finally, to mitigate this threat, we propose a
defensive unlearning strategy that effectively reduces the ASR while preserving
clean accuracy.
|
2502.00648
|
Agency in the Age of AI
|
cs.AI cs.MA
|
There is significant concern about the impact of generative AI on society.
Modern AI tools are capable of generating ever more realistic text, images, and
videos, and functional code, from minimal prompts. Accompanying this rise in
ability and usability, there is increasing alarm about the misuses to which
these tools can be put, and the intentional and unintentional harms to
individuals and society that may result. In this paper, we argue that
\emph{agency} is the appropriate lens to study these harms and benefits, but
that doing so will require advancement in the theory of agency, and advancement
in how this theory is applied in (agent-based) models.
|
2502.00652
|
Reformulation is All You Need: Addressing Malicious Text Features in
DNNs
|
cs.LG cs.CL cs.CR
|
Human language encompasses a wide range of intricate and diverse implicit
features, which attackers can exploit to launch adversarial or backdoor
attacks, compromising DNN models for NLP tasks. Existing model-oriented
defenses often require substantial computational resources as model size
increases, whereas sample-oriented defenses typically focus on specific attack
vectors or schemes, rendering them vulnerable to adaptive attacks. We observe
that the root cause of both adversarial and backdoor attacks lies in the
encoding process of DNN models, where subtle textual features, negligible for
human comprehension, are erroneously assigned significant weight by less robust
or trojaned models. Based on it we propose a unified and adaptive defense
framework that is effective against both adversarial and backdoor attacks. Our
approach leverages reformulation modules to address potential malicious
features in textual inputs while preserving the original semantic integrity.
Extensive experiments demonstrate that our framework outperforms existing
sample-oriented defense baselines across a diverse range of malicious textual
features.
|
2502.00654
|
EmoTalkingGaussian: Continuous Emotion-conditioned Talking Head
Synthesis
|
cs.CV
|
3D Gaussian splatting-based talking head synthesis has recently gained
attention for its ability to render high-fidelity images with real-time
inference speed. However, since it is typically trained on only a short video
that lacks the diversity in facial emotions, the resultant talking heads
struggle to represent a wide range of emotions. To address this issue, we
propose a lip-aligned emotional face generator and leverage it to train our
EmoTalkingGaussian model. It is able to manipulate facial emotions conditioned
on continuous emotion values (i.e., valence and arousal); while retaining
synchronization of lip movements with input audio. Additionally, to achieve the
accurate lip synchronization for in-the-wild audio, we introduce a
self-supervised learning method that leverages a text-to-speech network and a
visual-audio synchronization network. We experiment our EmoTalkingGaussian on
publicly available videos and have obtained better results than
state-of-the-arts in terms of image quality (measured in PSNR, SSIM, LPIPS),
emotion expression (measured in V-RMSE, A-RMSE, V-SA, A-SA, Emotion Accuracy),
and lip synchronization (measured in LMD, Sync-E, Sync-C), respectively.
|
2502.00657
|
LLM Safety Alignment is Divergence Estimation in Disguise
|
cs.LG cs.AI cs.CY stat.ML
|
We propose a theoretical framework demonstrating that popular Large Language
Model (LLM) alignment methods, including Reinforcement Learning from Human
Feedback (RLHF) and alternatives, fundamentally function as divergence
estimators between aligned (preferred or safe) and unaligned (less-preferred or
harmful) distributions. This explains the separation phenomenon between safe
and harmful prompts in the model hidden representation after alignment.
Inspired by the theoretical results, we identify that some alignment methods
are better than others in terms of separation and, introduce a new method,
KLDO, and further demonstrate the implication of our theories. We advocate for
compliance-refusal datasets over preference datasets to enhance safety
alignment, supported by both theoretical reasoning and empirical evidence.
Additionally, to quantify safety separation, we leverage a distance metric in
the representation space and statistically validate its efficacy as a
statistical significant indicator of LLM resilience against jailbreak attacks.
|
2502.00661
|
EKF-Based Radar-Inertial Odometry with Online Temporal Calibration
|
cs.RO
|
Accurate time synchronization between heterogeneous sensors is crucial for
ensuring robust state estimation in multi-sensor fusion systems. Sensor delays
often cause discrepancies between the actual time when the event was captured
and the time of sensor measurement, leading to temporal misalignment (time
offset) between sensor measurement streams. In this paper, we propose an
extended Kalman filter (EKF)-based radar-inertial odometry (RIO) framework that
estimates the time offset online. The radar ego-velocity measurement model,
estimated from a single radar scan, is formulated to include the time offset
for the update. By leveraging temporal calibration, the proposed RIO enables
accurate propagation and measurement updates based on a common time stream.
Experiments on multiple datasets demonstrated the accurate time offset
estimation of the proposed method and its impact on RIO performance, validating
the importance of sensor time synchronization. Our implementation of the
EKF-RIO with online temporal calibration is available at
https://github.com/spearwin/EKF-RIO-TC.
|
2502.00662
|
Mitigating the Modality Gap: Few-Shot Out-of-Distribution Detection with
Multi-modal Prototypes and Image Bias Estimation
|
cs.CV cs.CL cs.LG
|
Existing vision-language model (VLM)-based methods for out-of-distribution
(OOD) detection typically rely on similarity scores between input images and
in-distribution (ID) text prototypes. However, the modality gap between image
and text often results in high false positive rates, as OOD samples can exhibit
high similarity to ID text prototypes. To mitigate the impact of this modality
gap, we propose incorporating ID image prototypes along with ID text
prototypes. We present theoretical analysis and empirical evidence indicating
that this approach enhances VLM-based OOD detection performance without any
additional training. To further reduce the gap between image and text, we
introduce a novel few-shot tuning framework, SUPREME, comprising biased prompts
generation (BPG) and image-text consistency (ITC) modules. BPG enhances
image-text fusion and improves generalization by conditioning ID text
prototypes on the Gaussian-based estimated image domain bias; ITC reduces the
modality gap by minimizing intra- and inter-modal distances. Moreover, inspired
by our theoretical and empirical findings, we introduce a novel OOD score
$S_{\textit{GMP}}$, leveraging uni- and cross-modal similarities. Finally, we
present extensive experiments to demonstrate that SUPREME consistently
outperforms existing VLM-based OOD detection methods.
|
2502.00663
|
Enhanced Convolutional Neural Networks for Improved Image Classification
|
cs.CV cs.AI
|
Image classification is a fundamental task in computer vision with diverse
applications, ranging from autonomous systems to medical imaging. The CIFAR-10
dataset is a widely used benchmark to evaluate the performance of
classification models on small-scale, multi-class datasets. Convolutional
Neural Networks (CNNs) have demonstrated state-of-the-art results; however,
they often suffer from overfitting and suboptimal feature representation when
applied to challenging datasets like CIFAR-10. In this paper, we propose an
enhanced CNN architecture that integrates deeper convolutional blocks, batch
normalization, and dropout regularization to achieve superior performance. The
proposed model achieves a test accuracy of 84.95%, outperforming baseline CNN
architectures. Through detailed ablation studies, we demonstrate the
effectiveness of the enhancements and analyze the hierarchical feature
representations. This work highlights the potential of refined CNN
architectures for tackling small-scale image classification problems
effectively.
|
2502.00665
|
Cross-Modal Synergies: Unveiling the Potential of Motion-Aware Fusion
Networks in Handling Dynamic and Static ReID Scenarios
|
cs.CV
|
Navigating the complexities of person re-identification (ReID) in varied
surveillance scenarios, particularly when occlusions occur, poses significant
challenges. We introduce an innovative Motion-Aware Fusion (MOTAR-FUSE) network
that utilizes motion cues derived from static imagery to significantly enhance
ReID capabilities. This network incorporates a dual-input visual adapter
capable of processing both images and videos, thereby facilitating more
effective feature extraction. A unique aspect of our approach is the
integration of a motion consistency task, which empowers the motion-aware
transformer to adeptly capture the dynamics of human motion. This technique
substantially improves the recognition of features in scenarios where
occlusions are prevalent, thereby advancing the ReID process. Our comprehensive
evaluations across multiple ReID benchmarks, including holistic, occluded, and
video-based scenarios, demonstrate that our MOTAR-FUSE network achieves
superior performance compared to existing approaches.
|
2502.00666
|
Avoiding $\mathbf{exp(R_{max})}$ scaling in RLHF through
Preference-based Exploration
|
cs.LG cs.AI stat.ML
|
Reinforcement Learning from Human Feedback (RLHF) has emerged as a pivotal
technique for large language model (LLM) alignment. This paper studies the
setting of online RLHF and focus on improving sample efficiency. All existing
algorithms in online RLHF, whether doing passive exploration or active
exploration, suffer from a sample complexity that scales exponentially with the
scale of the reward function. This fundamental limitation hinders their
effectiveness in scenarios with heavily skewed preferences, e.g. questions with
a unique correct solution. To address this, we introduce Self-Exploring
Preference-Incentive Online Preference Optimization (SE-POPO), an online RLHF
algorithm that for the first time achieves a sample complexity that scales
polynomially with the reward scale, answering an open problem raised by Xie et
al. (2024).. Theoretically, we demonstrate that the sample complexity of
SE-POPO dominates that of existing exploration algorithms. Empirically, our
systematic evaluation confirms that SE-POPO is more sample-efficient than both
exploratory and non-exploratory baselines, in two primary application scenarios
of RLHF as well as on public benchmarks, marking a significant step forward in
RLHF algorithm design. The code is available at
https://github.com/MYC000801/SE-POPO.
|
2502.00669
|
Safety Alignment Depth in Large Language Models: A Markov Chain
Perspective
|
cs.LG
|
Large Language Models (LLMs) are increasingly adopted in high-stakes
scenarios, yet their safety mechanisms often remain fragile. Simple jailbreak
prompts or even benign fine-tuning can bypass these protocols, underscoring the
need to understand where and how they fail. Recent findings suggest that
vulnerabilities emerge when alignment is confined to only the initial output
tokens. Unfortunately, even with the introduction of deep safety alignment,
determining the optimal safety depth remains an unresolved challenge. By
leveraging the equivalence between autoregressive language models and Markov
chains, this paper offers the first theoretical result on how to identify the
ideal depth for safety alignment, and demonstrates how permutation-based data
augmentation can tighten these bounds. Crucially, we reveal a fundamental
interaction between alignment depth and ensemble width-indicating that broader
ensembles can compensate for shallower alignments. These insights provide a
theoretical foundation for designing more robust, scalable safety strategies
that complement existing alignment approaches, opening new avenues for research
into safer, more reliable LLMs.
|
2502.00672
|
Biogeochemistry-Informed Neural Network (BINN) for Improving Accuracy of
Model Prediction and Scientific Understanding of Soil Organic Carbon
|
physics.geo-ph cs.AI
|
Big data and the rapid development of artificial intelligence (AI) provide
unprecedented opportunities to enhance our understanding of the global carbon
cycle and other biogeochemical processes. However, retrieving mechanistic
knowledge from big data remains a challenge. Here, we develop a
Biogeochemistry-Informed Neural Network (BINN) that seamlessly integrates a
vectorized process-based soil carbon cycle model (i.e., Community Land Model
version 5, CLM5) into a neural network (NN) structure to examine mechanisms
governing soil organic carbon (SOC) storage from big data. BINN demonstrates
high accuracy in retrieving biogeochemical parameter values from synthetic data
in a parameter recovery experiment. We use BINN to predict six major processes
regulating the soil carbon cycle (or components in process-based models) from
25,925 observed SOC profiles across the conterminous US and compared them with
the same processes previously retrieved by a Bayesian inference-based
PROcess-guided deep learning and DAta-driven modeling (PRODA) approach (Tao et
al. 2020; 2023). The high agreement between the spatial patterns of the
retrieved processes using the two approaches with an average correlation
coefficient of 0.81 confirms BINN's ability in retrieving mechanistic knowledge
from big data. Additionally, the integration of neural networks and
process-based models in BINN improves computational efficiency by more than 50
times over PRODA. We conclude that BINN is a transformative tool that harnesses
the power of both AI and process-based modeling, facilitating new scientific
discoveries while improving interpretability and accuracy of Earth system
models.
|
2502.00673
|
Retracted Citations and Self-citations in Retracted Publications: A
Comparative Study of Plagiarism and Fake Peer Review
|
cs.IR
|
Retracted citations remain a significant concern in academia as they
perpetuate misinformation and compromise the integrity of scientific literature
despite their invalidation. To analyze the impact of retracted citations, we
focused on two retraction categories: plagiarism and fake peer review. The data
set was sourced from Scopus and the reasons for the retraction were mapped
using the Retraction Watch database. The retraction trend shows a steady
average growth in plagiarism cases of 1.2 times, while the fake peer review
exhibits a fluctuating pattern with an average growth of 5.5 times. Although
fewer papers are retracted in the plagiarism category compared to fake peer
reviews, plagiarism-related papers receive 2.5 times more citations.
Furthermore, the total number of retracted citations for plagiarized papers is
1.8 times higher than that for fake peer review papers. Within the plagiarism
category, 46% of the retracted citations are due to plagiarism, while 53.6% of
the retracted citations in the fake peer review category are attributed to the
fake peer review. The results also suggest that fake peer review cases are
identified and retracted more rapidly than plagiarism cases. Finally,
self-citations constitute a small percentage of citations to retracted papers
but are notably higher among citations that are later retracted in both the
categories.
|
2502.00674
|
Rethinking Mixture-of-Agents: Is Mixing Different Large Language Models
Beneficial?
|
cs.CL cs.LG
|
Ensembling outputs from diverse sources is a straightforward yet effective
approach to boost performance. Mixture-of-Agents (MoA) is one such popular
ensemble method that aggregates outputs from multiple different Large Language
Models (LLMs). This paper raises the question in the context of language
models: is mixing different LLMs truly beneficial? We propose Self-MoA -- an
ensemble method that aggregates outputs from only the single top-performing
LLM. Our extensive experiments reveal that, surprisingly, Self-MoA outperforms
standard MoA that mixes different LLMs in a large number of scenarios: Self-MoA
achieves $6.6\%$ improvement over MoA on the AlpacaEval 2.0 benchmark, and an
average of $3.8\%$ improvement across various benchmarks, including MMLU, CRUX,
and MATH. Applying Self-MoA to one of the top-ranking models in AlpacaEval 2.0
directly achieves the new state-of-the-art performance on the leaderboard. To
understand the effectiveness of Self-MoA, we systematically investigate the
trade-off between diversity and quality of outputs under various MoA settings.
We confirm that the MoA performance is rather sensitive to the quality, and
mixing different LLMs often lowers the average quality of the models. To
complement the study, we identify the scenarios where mixing different LLMs
could be helpful. This paper further introduces a sequential version of
Self-MoA, that is capable of aggregating a large number of LLM outputs
on-the-fly over multiple rounds, and is as effective as aggregating all outputs
at once.
|
2502.00675
|
ReFoRCE: A Text-to-SQL Agent with Self-Refinement, Format Restriction,
and Column Exploration
|
cs.CL
|
Text-to-SQL systems have unlocked easier access to critical data insights by
enabling natural language queries over structured databases. However, deploying
such systems in enterprise environments remains challenging due to factors such
as large, complex schemas (> 3000 columns), diverse SQL dialects (e.g.,
BigQuery, Snowflake) and sophisticated query requirements (e.g.,
transformation, analytics). Current state-of-the-art performance on the Spider
2.0 dataset -- a benchmark built to mimic such complex environments -- remains
limited at 20%. Key limitations include inadequate instruction-following, poor
long-context comprehension, weak self-refinement, and insufficient
dialect-specific knowledge. To address these gaps, we propose ReFoRCE
(Self-Refinement Agent with Format Restriction and Column Exploration) which
introduces (1) table compression to mitigate long-context limitations (2)
format restriction to ensure accurate answer format, and (3) iterative column
exploration for enhanced schema understanding. Additionally, it employs
self-refinement pipeline consisting of (1) parallelized workflows with voting
mechanisms and (2) a Common Table Expression (CTE) based refinement approach to
handle unresolved cases. ReFoRCE achieves state-of-the-art results scoring
31.26 on the Spider 2.0-Snow and scoring 30.35 on the Spider 2.0-Lite tasks.
|
2502.00677
|
LLM-based event log analysis techniques: A survey
|
cs.AI cs.CR
|
Event log analysis is an important task that security professionals
undertake. Event logs record key information on activities that occur on
computing devices, and due to the substantial number of events generated, they
consume a large amount of time and resources to analyse. This demanding and
repetitive task is also prone to errors. To address these concerns, researchers
have developed automated techniques to improve the event log analysis process.
Large Language Models (LLMs) have recently demonstrated the ability to
successfully perform a wide range of tasks that individuals would usually
partake in, to high standards, and at a pace and degree of complexity that
outperform humans. Due to this, researchers are rapidly investigating the use
of LLMs for event log analysis. This includes fine-tuning, Retrieval-Augmented
Generation (RAG) and in-context learning, which affect performance. These works
demonstrate good progress, yet there is a need to understand the developing
body of knowledge, identify commonalities between works, and identify key
challenges and potential solutions to further developments in this domain. This
paper aims to survey LLM-based event log analysis techniques, providing readers
with an in-depth overview of the domain, gaps identified in previous research,
and concluding with potential avenues to explore in future.
|
2502.00678
|
How Contaminated Is Your Benchmark? Quantifying Dataset Leakage in Large
Language Models with Kernel Divergence
|
cs.LG cs.AI cs.CL
|
Dataset contamination, where evaluation datasets overlap with pre-training
corpora, inflates performance metrics and undermines the reliability of model
evaluations. Quantifying dataset contamination thus becomes essential to ensure
that performance evaluations genuinely reflect a model's ability to generalize
to unseen data, rather than relying on memorized examples. To address this
problem, we propose Kernel Divergence Score (KDS), a novel method that
quantifies dataset contamination by computing the divergence between the kernel
similarity matrix of sample embeddings, before and after fine-tuning on the
benchmark dataset. Leveraging the insight that fine-tuning affects unseen
samples more significantly than seen ones, KDS provides a reliable measure of
contamination. Through extensive experiments on controlled contamination
scenarios, KDS demonstrates a near-perfect correlation with contamination
levels and outperforms existing baselines. Additionally, we perform
comprehensive ablation studies to analyze the impact of key design choices,
providing deeper insights into the components and effectiveness of KDS. These
ablations highlight the importance of leveraging fine-grained kernel-based
information and confirm the reliability of the proposed framework across
diverse datasets and settings.
|
2502.00681
|
A Survey of Quantized Graph Representation Learning: Connecting Graph
Structures with Large Language Models
|
cs.LG cs.AI cs.CL
|
Recent years have witnessed rapid advances in graph representation learning,
with the continuous embedding approach emerging as the dominant paradigm.
However, such methods encounter issues regarding parameter efficiency,
interpretability, and robustness. Thus, Quantized Graph Representation (QGR)
learning has recently gained increasing interest, which represents the graph
structure with discrete codes instead of conventional continuous embeddings.
Given its analogous representation form to natural language, QGR also possesses
the capability to seamlessly integrate graph structures with large language
models (LLMs). As this emerging paradigm is still in its infancy yet holds
significant promise, we undertake this thorough survey to promote its rapid
future prosperity. We first present the background of the general quantization
methods and their merits. Moreover, we provide an in-depth demonstration of
current QGR studies from the perspectives of quantized strategies, training
objectives, distinctive designs, knowledge graph quantization, and
applications. We further explore the strategies for code dependence learning
and integration with LLMs. At last, we give discussions and conclude future
directions, aiming to provide a comprehensive picture of QGR and inspire future
research.
|
2502.00682
|
Guidance Source Matters: How Guidance from AI, Expert, or a Group of
Analysts Impacts Visual Data Preparation and Analysis
|
cs.HC cs.AI
|
The progress in generative AI has fueled AI-powered tools like co-pilots and
assistants to provision better guidance, particularly during data analysis.
However, research on guidance has not yet examined the perceived efficacy of
the source from which guidance is offered and the impact of this source on the
user's perception and usage of guidance. We ask whether users perceive all
guidance sources as equal, with particular interest in three sources: (i) AI,
(ii) human expert, and (iii) a group of human analysts. As a benchmark, we
consider a fourth source, (iv) unattributed guidance, where guidance is
provided without attribution to any source, enabling isolation of and
comparison with the effects of source-specific guidance. We design a
five-condition between-subjects study, with one condition for each of the four
guidance sources and an additional (v) no-guidance condition, which serves as a
baseline to evaluate the influence of any kind of guidance. We situate our
study in a custom data preparation and analysis tool wherein we task users to
select relevant attributes from an unfamiliar dataset to inform a business
report. Depending on the assigned condition, users can request guidance, which
the system then provides in the form of attribute suggestions. To ensure
internal validity, we control for the quality of guidance across
source-conditions. Through several metrics of usage and perception, we
statistically test five preregistered hypotheses and report on additional
analysis. We find that the source of guidance matters to users, but not in a
manner that matches received wisdom. For instance, users utilize guidance
differently at various stages of analysis, including expressing varying levels
of regret, despite receiving guidance of similar quality. Notably, users in the
AI condition reported both higher post-task benefit and regret.
|
2502.00683
|
IEEEICM25: "Stability of Digital Robust Motion Control Systems with
Disturbance Observer"
|
eess.SY cs.SY
|
In this paper, new stability analysis methods are proposed for digital robust
motion control systems implemented using a disturbance observer.
|
2502.00684
|
Compositional Concept-Based Neuron-Level Interpretability for Deep
Reinforcement Learning
|
cs.LG cs.AI
|
Deep reinforcement learning (DRL), through learning policies or values
represented by neural networks, has successfully addressed many complex control
problems. However, the neural networks introduced by DRL lack interpretability
and transparency. Current DRL interpretability methods largely treat neural
networks as black boxes, with few approaches delving into the internal
mechanisms of policy/value networks. This limitation undermines trust in both
the neural network models that represent policies and the explanations derived
from them. In this work, we propose a novel concept-based interpretability
method that provides fine-grained explanations of DRL models at the neuron
level. Our method formalizes atomic concepts as binary functions over the state
space and constructs complex concepts through logical operations. By analyzing
the correspondence between neuron activations and concept functions, we
establish interpretable explanations for individual neurons in policy/value
networks. Experimental results on both continuous control tasks and discrete
decision-making environments demonstrate that our method can effectively
identify meaningful concepts that align with human understanding while
faithfully reflecting the network's decision-making logic.
|
2502.00685
|
IEEEICM25: "A High-Performance Disturbance Observer"
|
eess.SY cs.RO cs.SY
|
This paper proposes a novel Disturbance Observer, termed the High-Performance
Disturbance Observer, which achieves more accurate disturbance estimation
compared to the conventional disturbance observer, thereby delivering
significant improvements in robustness and performance for motion control
systems.
|
2502.00686
|
Improved Community Detection using Stochastic Block Models
|
cs.SI
|
Identifying edge-dense communities that are also well-connected is an
important aspect of understanding community structure. Prior work has shown
that community detection methods can produce poorly connected communities, and
some can even produce internally disconnected communities. In this study we
evaluate the connectivity of communities obtained using Stochastic Block
Models. We find that SBMs produce internally disconnected communities from
real-world networks. We present a simple technique, Well-Connected Clusters
(WCC), which repeatedly removes small edge cuts until the communities meet a
user-specified threshold for well-connectivity. Our study using a large
collection of synthetic networks based on clustered real-world networks shows
that using WCC as a post-processing tool with SBM community detection typically
improves clustering accuracy. WCC is fast enough to use on networks with
millions of nodes and is freely available in open source form.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.