| id | title | categories | abstract |
|---|---|---|---|
2502.08127
|
Fino1: On the Transferability of Reasoning Enhanced LLMs to Finance
|
cs.CL
|
Recent advancements in large language models (LLMs) have shown strong general
reasoning abilities, yet their effectiveness in financial reasoning remains
underexplored. In this study, we comprehensively evaluate 16 powerful reasoning
and general LLMs on three complex financial tasks involving financial text,
tabular data, and equations, assessing numerical reasoning, tabular
interpretation, financial terminology comprehension, long-context processing,
and equation-based problem solving. Our results show that while better datasets
and pretraining improve financial reasoning, general enhancements like CoT
fine-tuning do not always yield consistent gains. Moreover, all reasoning
strategies face challenges in improving performance on long-context and
multi-table tasks. To address these limitations, we develop a financial
reasoning-enhanced model based on Llama-3.1-8B-Instruct via CoT fine-tuning and
reinforcement learning on domain-specific reasoning paths. Even with simple
fine-tuning on a single financial dataset, our model achieves a consistent 10%
performance improvement across tasks, surpassing all 8B models and even
Llama3-70B-Instruct and Llama3.1-70B-Instruct on average. Our results highlight
the need for domain-specific adaptation in financial tasks and point to future
directions such as multi-table reasoning, long-context processing, and
financial terminology comprehension. All our datasets, models, and code are
publicly available. Furthermore, we introduce a leaderboard for benchmarking
future datasets and models.
|
2502.08129
|
Control Barrier Function-Based Quadratic Programming for Safe Operation
of Tethered UAVs
|
eess.SY cs.SY
|
Consider an unmanned aerial vehicle (UAV) physically connected to a ground
station by a tether, tasked with performing precise maneuvers while
constrained by the physical limitation of the tether, which prevents it from
flying beyond a maximum allowable length. Violating this
tether constraint could lead to system failure or operational hazards, making
it essential to enforce safety constraints dynamically while ensuring the drone
can track desired trajectories accurately. This paper presents a Control
Barrier Function Quadratic Programming Framework (CBF-QP) for ensuring the safe
and efficient operation of tethered unmanned aerial vehicles (TUAVs). The
framework leverages nominal backstepping control to achieve trajectory
tracking, augmented with control barrier functions to ensure compliance with
the tether constraint. In this proposed method, the tether constraint is
directly embedded in the control design and therefore guarantees the TUAV
remains within a predefined operational region defined by the maximum tether
length while achieving precise trajectory tracking. The effectiveness of the
proposed framework is validated through simulations involving set-point
tracking, dynamic trajectory following, and disturbances such as incorrect user
inputs. The results demonstrate that the TUAV respects the tether constraint
$\|x(t)\| \leq L_{\max}$, with tracking errors converging to zero and the
control input remaining bounded.
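A minimal sketch of the safety-filter idea behind such a CBF-QP (an illustrative single-integrator reduction, not the paper's backstepping design; all names and gains here are assumptions): with dynamics $\dot{x} = u$ and barrier $h(x) = L_{\max}^2 - \|x\|^2$, the CBF condition is a single linear constraint on $u$, so the QP has a closed-form projection.

```python
import numpy as np

def cbf_qp_filter(x, u_nom, L_max, alpha=1.0):
    """Minimally modify u_nom so the tether constraint ||x|| <= L_max
    stays forward-invariant (illustrative single-integrator sketch).

    Barrier: h(x) = L_max**2 - ||x||**2  (h >= 0 inside the safe set).
    CBF condition for x_dot = u:  h_dot = -2 x.u >= -alpha * h,
    i.e. the linear constraint  a.u <= b  with a = 2x, b = alpha * h.
    """
    h = L_max**2 - float(x @ x)
    a = 2.0 * x
    b = alpha * h
    slack = a @ u_nom - b
    if slack <= 0.0:                 # nominal input is already safe
        return u_nom
    # Closed form of  min ||u - u_nom||^2  s.t.  a.u <= b :
    # project u_nom onto the constraint hyperplane.
    return u_nom - (slack / (a @ a)) * a

# Drone near the tether limit, nominal controller pushing outward:
x = np.array([0.0, 0.0, 9.5])        # position with L_max = 10
u = cbf_qp_filter(x, np.array([0.0, 0.0, 2.0]), L_max=10.0)
```

With a QP of this form, the filter is inactive whenever the nominal controller is safe, which is why tracking error can still converge to zero inside the safe set.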
|
2502.08130
|
Selective Self-to-Supervised Fine-Tuning for Generalization in Large
Language Models
|
cs.CL
|
Fine-tuning Large Language Models (LLMs) on specific datasets is a common
practice to improve performance on target tasks. However, this performance gain
often leads to overfitting, where the model becomes too specialized in either
the task or the characteristics of the training data, resulting in a loss of
generalization. This paper introduces Selective Self-to-Supervised Fine-Tuning
(S3FT), a fine-tuning approach that achieves better performance than the
standard supervised fine-tuning (SFT) while improving generalization. S3FT
leverages the existence of multiple valid responses to a query. By utilizing
the model's correct responses, S3FT reduces model specialization during the
fine-tuning stage. S3FT first identifies the correct model responses from the
training set by deploying an appropriate judge. Then, it fine-tunes the model
using the correct model responses and the gold response (or its paraphrase) for
the remaining samples. The effectiveness of S3FT is demonstrated through
experiments on mathematical reasoning, Python programming and reading
comprehension tasks. The results show that standard SFT can lead to an average
performance drop of up to $4.4$ on multiple benchmarks, such as MMLU and
TruthfulQA. In contrast, S3FT reduces this drop by half, i.e., $2.5$,
indicating better generalization capabilities than SFT while performing
significantly better on the fine-tuning tasks.
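The selection step described above can be sketched as follows; `judge_is_correct`, `paraphrase`, and the dataset fields are hypothetical stand-ins for whatever judge and data format an implementation actually uses:

```python
def build_s3ft_targets(dataset, generate, judge_is_correct, paraphrase):
    """Build fine-tuning targets following S3FT's selection idea (sketch):
    keep the model's own response when a judge deems it correct,
    otherwise fall back to the gold response (or its paraphrase),
    reducing the distribution shift induced by fine-tuning."""
    targets = []
    for example in dataset:
        model_response = generate(example["query"])
        if judge_is_correct(example["query"], model_response,
                            example["gold"]):
            # Self-response: already in the model's own distribution.
            targets.append((example["query"], model_response))
        else:
            # Gold (or paraphrased gold) supervises the remaining samples.
            targets.append((example["query"],
                            paraphrase(example["gold"])))
    return targets
```

The intuition is that training on the model's own correct phrasings moves the weights less than forcing it onto gold phrasings for every sample.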
|
2502.08132
|
SS4Rec: Continuous-Time Sequential Recommendation with State Space
Models
|
cs.IR cs.LG
|
Sequential recommendation is a key area in recommender systems that aims to
model user interest from historical interaction sequences with irregular
intervals. While previous recurrent neural network-based and attention-based
approaches have achieved significant results, they are limited in capturing
the continuity of the system due to their discrete nature. In the context of
continuous-time modeling, the state space model (SSM) offers a potential
solution, as it can effectively capture the dynamic evolution of user
interest over time. However, existing SSM-based approaches ignore the impact
of irregular time intervals within historical user interactions, making it
difficult to model complex user-item transitions in sequences. To address
this issue, we propose a hybrid SSM-based model called SS4Rec for
continuous-time sequential recommendation. SS4Rec integrates a time-aware SSM
to handle irregular time intervals and a relation-aware SSM to model contextual
dependencies, enabling it to infer user interest from both temporal and
sequential perspectives. In the training process, the time-aware SSM and the
relation-aware SSM are discretized by variable stepsizes according to user
interaction time intervals and input data, respectively. This helps capture the
continuous dependency from irregular time intervals and provides time-specific
personalized recommendations. Experimental studies on five benchmark datasets
demonstrate the superiority and effectiveness of SS4Rec.
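The variable-stepsize discretization mentioned above can be illustrated with a scalar SSM (a deliberate simplification, not SS4Rec's actual parameterization): the continuous system $\dot{h} = Ah + Bu$ is discretized per interaction with a step equal to the observed time gap, so longer gaps decay the hidden state more.

```python
import numpy as np

def time_aware_ssm_scan(A, B, C, inputs, deltas):
    """Scalar continuous-time SSM discretized with a variable step per
    interaction (zero-order-hold style, illustrative only).

    h' = A h + B u  ->  h_k = exp(A * dt_k) h_{k-1} + dt_k * B * u_k,
    so a long gap between interactions decays the stored interest more.
    """
    h = 0.0
    outputs = []
    for u, dt in zip(inputs, deltas):
        h = np.exp(A * dt) * h + dt * B * u
        outputs.append(C * h)
    return outputs

# Same clicks, but a long gap before the last one weakens old memory:
ys = time_aware_ssm_scan(A=-0.5, B=1.0, C=1.0,
                         inputs=[1.0, 1.0, 1.0],
                         deltas=[1.0, 1.0, 10.0])
```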
|
2502.08134
|
A Survey on Data Curation for Visual Contrastive Learning: Why Crafting
Effective Positive and Negative Pairs Matters
|
cs.CV
|
Visual contrastive learning aims to learn representations by contrasting
similar (positive) and dissimilar (negative) pairs of data samples. The design
of these pairs significantly impacts representation quality, training
efficiency, and computational cost. A well-curated set of pairs leads to
stronger representations and faster convergence. As contrastive pre-training
sees wider adoption for solving downstream tasks, data curation becomes
essential for optimizing its effectiveness. In this survey, we attempt to
create a taxonomy of existing techniques for positive and negative pair
curation in contrastive learning, and describe them in detail.
|
2502.08136
|
In-Context Learning of Linear Dynamical Systems with Transformers: Error
Bounds and Depth-Separation
|
cs.LG stat.ML
|
This paper investigates approximation-theoretic aspects of the in-context
learning capability of the transformers in representing a family of noisy
linear dynamical systems. Our first theoretical result establishes an upper
bound on the approximation error of multi-layer transformers with respect to an
$L^2$-testing loss uniformly defined across tasks. This result demonstrates
that transformers with logarithmic depth can achieve error bounds comparable
with those of the least-squares estimator. In contrast, our second result
establishes a non-diminishing lower bound on the approximation error for a
class of single-layer linear transformers, which suggests a depth-separation
phenomenon for transformers in the in-context learning of dynamical systems.
Moreover, this second result uncovers a critical distinction in the
approximation power of single-layer linear transformers when learning from IID
versus non-IID data.
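The least-squares baseline referenced above can be made concrete (a schematic sketch under assumed noise and stability parameters): estimate the transition matrix of a noisy linear dynamical system $x_{t+1} = W x_t + \varepsilon_t$ from a single in-context trajectory by ordinary least squares.

```python
import numpy as np

def least_squares_lds(traj):
    """Estimate W in x_{t+1} = W x_t + noise from one trajectory,
    the classical baseline the transformer bounds are compared with.

    traj: (T, d) array of consecutive states; returns the OLS estimate
    W_hat = Y X^T (X X^T)^{-1} with X = traj[:-1].T and Y = traj[1:].T.
    """
    X = traj[:-1].T                  # (d, T-1) regressors
    Y = traj[1:].T                   # (d, T-1) next states
    return Y @ X.T @ np.linalg.pinv(X @ X.T)

# Simulate a stable LDS and recover its transition matrix:
rng = np.random.default_rng(0)
W = 0.9 * np.linalg.qr(rng.normal(size=(3, 3)))[0]   # spectral radius 0.9
x = rng.normal(size=3)
states = [x]
for _ in range(500):
    x = W @ x + 0.01 * rng.normal(size=3)
    states.append(x)
W_hat = least_squares_lds(np.array(states))
```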
|
2502.08137
|
Riemannian Complex Hermitian Positive Definite Convolution Network for
Polarimetric SAR Image Classification
|
cs.CV
|
Deep learning can effectively learn high-level semantic features in Euclidean
space for PolSAR images, but such methods need to convert the complex
covariance matrix into a feature vector or complex-valued vector as the
network input. However, complex covariance matrices are essentially complex
Hermitian positive definite (HPD) matrices that live on a Riemannian manifold
rather than in Euclidean space. The real and imaginary parts of the matrix
are equally significant, as the imaginary part carries the phase information.
Vectorizing the matrix destroys the geometric structure and manifold
characteristics of complex covariance matrices. To learn complex HPD matrices
directly, we propose a Riemannian complex HPD convolution network (HPD\_CNN)
for PolSAR images. This method consists of a complex HPD unfolding network
(HPDnet) and a CV-3DCNN enhanced network. The proposed complex HPDnet defines
HPD mapping, rectifying, and logEig layers to learn geometric features of
complex matrices. In addition, a fast eigenvalue decomposition method is
designed to reduce the computational burden. Finally, a
Riemannian-to-Euclidean enhanced network is defined to enhance contextual
information for classification. Experimental results on two real PolSAR
datasets demonstrate that the proposed method achieves superior performance
over state-of-the-art methods, especially in heterogeneous regions.
|
2502.08141
|
LowRA: Accurate and Efficient LoRA Fine-Tuning of LLMs under 2 Bits
|
cs.LG cs.AR cs.CL cs.PF
|
Fine-tuning large language models (LLMs) is increasingly costly as models
scale to hundreds of billions of parameters, and even parameter-efficient
fine-tuning (PEFT) methods like LoRA remain resource-intensive. We introduce
LowRA, the first framework to enable LoRA fine-tuning below 2 bits per
parameter with minimal performance loss. LowRA optimizes fine-grained
quantization - mapping, threshold selection, and precision assignment - while
leveraging efficient CUDA kernels for scalable deployment. Extensive
evaluations across 4 LLMs and 4 datasets show that LowRA achieves a superior
performance-precision trade-off above 2 bits and remains accurate down to 1.15
bits, reducing memory usage by up to 50%. Our results highlight the potential
of ultra-low-bit LoRA fine-tuning for resource-constrained environments.
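As a generic illustration of how an average bit width below 2 can arise (this is a simple range-based heuristic, not LowRA's actual mapping, threshold-selection, or precision-assignment algorithm): mixing 1-bit and 2-bit groups yields a fractional mean precision.

```python
import numpy as np

def quantize_group(w, bits):
    """Quantize one weight group to 2**bits uniform levels (illustrative)."""
    lo, hi = w.min(), w.max()
    levels = 2 ** bits
    scale = (hi - lo) / (levels - 1) if hi > lo else 1.0
    codes = np.round((w - lo) / scale)
    return codes * scale + lo

def mixed_precision_quantize(W, group_size=64, frac_2bit=0.25):
    """Give 2 bits to the fraction of groups with the largest value range
    and 1 bit to the rest; average width = 1 + frac_2bit bits/parameter."""
    groups = W.reshape(-1, group_size)
    ranges = groups.max(axis=1) - groups.min(axis=1)
    n_2bit = int(frac_2bit * len(groups))
    two_bit_ids = set(np.argsort(ranges)[-n_2bit:])
    out = np.stack([quantize_group(g, 2 if i in two_bit_ids else 1)
                    for i, g in enumerate(groups)])
    return out.reshape(W.shape)

W = np.random.default_rng(0).normal(size=(256, 64)).astype(np.float32)
Wq = mixed_precision_quantize(W)   # ~1.25 bits per parameter on average
```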
|
2502.08142
|
Bridging the Safety Gap: A Guardrail Pipeline for Trustworthy LLM
Inferences
|
cs.AI
|
We present Wildflare GuardRail, a guardrail pipeline designed to enhance the
safety and reliability of Large Language Model (LLM) inferences by
systematically addressing risks across the entire processing workflow.
Wildflare GuardRail integrates several core functional modules, including
Safety Detector that identifies unsafe inputs and detects hallucinations in
model outputs while generating root-cause explanations, Grounding that
contextualizes user queries with information retrieved from vector databases,
Customizer that adjusts outputs in real time using lightweight, rule-based
wrappers, and Repairer that corrects erroneous LLM outputs using hallucination
explanations provided by Safety Detector. Results show that our unsafe
content detection model in Safety Detector achieves performance comparable to
the OpenAI API, despite being trained on a small dataset constructed from
several public datasets. Meanwhile, the lightweight wrappers can address
malicious URLs in model outputs in 1.06 s per query with 100% accuracy,
without costly model calls. Moreover, the hallucination fixing model
demonstrates effectiveness in reducing hallucinations, with an accuracy of
80.7%.
|
2502.08143
|
Data-dependent Bounds with $T$-Optimal Best-of-Both-Worlds Guarantees in
Multi-Armed Bandits using Stability-Penalty Matching
|
cs.LG
|
Existing data-dependent and best-of-both-worlds regret bounds for multi-armed
bandit problems have limited adaptivity: they are either data-dependent but
not best-of-both-worlds (BOBW), BOBW but not data-dependent, or carry a
sub-optimal $O(\sqrt{T\ln{T}})$ worst-case guarantee in the adversarial
regime. To overcome these limitations, we propose real-time stability-penalty
matching (SPM), a new method for obtaining regret bounds that are
simultaneously data-dependent, best-of-both-worlds, and $T$-optimal for
multi-armed bandit problems. In
particular, we show that real-time SPM obtains bounds with worst-case
guarantees of order $O(\sqrt{T})$ in the adversarial regime and $O(\ln{T})$ in
the stochastic regime while simultaneously being adaptive to data-dependent
quantities such as sparsity, variations, and small losses. Our results are
obtained by extending the SPM technique for tuning the learning rates in the
follow-the-regularized-leader (FTRL) framework, which further indicates that
the combination of SPM and FTRL is a promising approach for proving new
adaptive bounds in online learning problems.
|
2502.08145
|
Democratizing AI: Open-source Scalable LLM Training on GPU-based
Supercomputers
|
cs.LG cs.AI cs.DC
|
Training and fine-tuning large language models (LLMs) with hundreds of
billions to trillions of parameters requires tens of thousands of GPUs, and a
highly scalable software stack. In this work, we present a novel
four-dimensional hybrid parallel algorithm implemented in a highly scalable,
portable, open-source framework called AxoNN. We describe several performance
optimizations in AxoNN: improving matrix multiply kernel performance,
overlapping non-blocking collectives with computation, and performance
modeling to choose performance-optimal configurations. These have resulted in
unprecedented
scaling and peak flop/s (bf16) for training of GPT-style transformer models on
Perlmutter (620.1 Petaflop/s), Frontier (1.381 Exaflop/s) and Alps (1.423
Exaflop/s).
While the abilities of LLMs improve with the number of trainable parameters,
so do privacy and copyright risks caused by memorization of training data,
which can cause disclosure of sensitive or private information at inference
time. We highlight this side effect of scale through experiments that explore
"catastrophic memorization", where models are sufficiently large to memorize
training data in a single pass, and present an approach to prevent it. As part
of this study, we demonstrate fine-tuning of a 405-billion parameter LLM using
AxoNN on Frontier.
|
2502.08146
|
Knowledge-Guided Wasserstein Distributionally Robust Optimization
|
cs.LG stat.ME stat.ML
|
Transfer learning is a popular strategy to leverage external knowledge and
improve statistical efficiency, particularly with a limited target sample. We
propose a novel knowledge-guided Wasserstein Distributionally Robust
Optimization (KG-WDRO) framework that adaptively incorporates multiple sources
of external knowledge to overcome the conservativeness of vanilla WDRO, which
often results in overly pessimistic shrinkage toward zero. Our method
constructs smaller Wasserstein ambiguity sets by controlling the transportation
along directions informed by the source knowledge. This strategy can alleviate
perturbations on the predictive projection of the covariates and protect
against information loss. Theoretically, we establish the equivalence between
our WDRO formulation and the knowledge-guided shrinkage estimation based on
collinear similarity, ensuring tractability and geometrizing the feasible set.
This also reveals a novel and general interpretation for recent shrinkage-based
transfer learning approaches from the perspective of distributional robustness.
In addition, our framework can adjust for scaling differences in the regression
models between the source and target and accommodates general types of
regularization such as lasso and ridge. Extensive simulations demonstrate the
superior performance and adaptivity of KG-WDRO in enhancing small-sample
transfer learning.
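The connection to knowledge-guided shrinkage can be illustrated with ridge regression that shrinks toward a source coefficient instead of toward zero (a simplified analogue of the stated equivalence; the function name and setup are hypothetical):

```python
import numpy as np

def shrink_to_source(X, y, beta_src, lam):
    """Ridge regression shrinking toward a source-informed coefficient
    rather than toward zero (schematic analogue of knowledge-guided
    shrinkage):
        min_b ||y - X b||^2 + lam * ||b - beta_src||^2
    Closed form: b = (X^T X + lam I)^{-1} (X^T y + lam * beta_src).
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d),
                           X.T @ y + lam * beta_src)
```

As `lam` grows, the estimate is pulled toward `beta_src`, which mirrors how a smaller ambiguity set anchored at source knowledge avoids the pessimistic shrinkage toward zero of vanilla WDRO.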
|
2502.08148
|
ACCESS : A Benchmark for Abstract Causal Event Discovery and Reasoning
|
cs.AI
|
Identifying cause-and-effect relationships is critical to understanding
real-world dynamics and ultimately causal reasoning. Existing methods for
identifying event causality in NLP, including those based on Large Language
Models (LLMs), exhibit difficulties in out-of-distribution settings due to the
limited scale and heavy reliance on lexical cues within available benchmarks.
Modern benchmarks, inspired by probabilistic causal inference, have attempted
to construct causal graphs of events as a robust representation of causal
knowledge, where \texttt{CRAB} \citep{romanou2023crab} is one such recent
benchmark along this line. In this paper, we introduce \texttt{ACCESS}, a
benchmark designed for discovery and reasoning over abstract causal events.
Unlike existing resources, \texttt{ACCESS} focuses on causality of everyday
life events at the abstraction level. We propose a pipeline for identifying
abstractions for event generalizations from \texttt{GLUCOSE}
\citep{mostafazadeh-etal-2020-glucose}, a large-scale dataset of implicit
commonsense causal knowledge, from which we subsequently extract $1.4$K causal
pairs. Our experiments highlight the ongoing challenges of using statistical
methods and/or LLMs for automatic abstraction identification and causal
discovery in NLP. Nonetheless, we demonstrate that the abstract causal
knowledge provided in \texttt{ACCESS} can be leveraged for enhancing QA
reasoning performance in LLMs.
|
2502.08149
|
Generalized Class Discovery in Instance Segmentation
|
cs.CV cs.AI
|
This work addresses the task of generalized class discovery (GCD) in instance
segmentation. The goal is to discover novel classes and obtain a model capable
of segmenting instances of both known and novel categories, given labeled and
unlabeled data. Since the real world contains numerous objects with long-tailed
distributions, the instance distribution for each class is inherently
imbalanced. To address the imbalanced distributions, we propose an
instance-wise temperature assignment (ITA) method for contrastive learning and
class-wise reliability criteria for pseudo-labels. The ITA method relaxes
instance discrimination for samples belonging to head classes to enhance GCD.
The reliability criteria prevent most pseudo-labels for tail classes from
being excluded when training an instance segmentation network using
pseudo-labels from GCD. Additionally, we propose dynamically adjusting the
criteria to leverage
diverse samples in the early stages while relying only on reliable
pseudo-labels in the later stages. We also introduce an efficient soft
attention module to encode object-specific representations for GCD. Finally, we
evaluate our proposed method by conducting experiments on two settings:
COCO$_{half}$ + LVIS and LVIS + Visual Genome. The experimental results
demonstrate that the proposed method outperforms previous state-of-the-art
methods.
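The instance-wise temperature idea can be sketched as follows (an illustrative reduction; the paper's actual ITA assignment is more involved): head-class instances receive a larger temperature, which softens their logits and relaxes instance discrimination for them.

```python
import numpy as np

def ita_contrastive_loss(z, z_pos, class_freq, t_min=0.07, t_max=0.2):
    """InfoNCE with instance-wise temperatures (illustrative sketch).

    z, z_pos: (N, D) L2-normalized embeddings of two views.
    class_freq: (N,) relative frequency of each sample's class in [0, 1];
    head classes (high freq) get a larger temperature.
    """
    tau = t_min + (t_max - t_min) * class_freq              # (N,)
    sims = z @ np.concatenate([z_pos, z]).T / tau[:, None]  # (N, 2N)
    n = len(z)
    sims[np.arange(n), n + np.arange(n)] = -np.inf          # mask self-sim
    # Log-softmax cross-entropy; the positive for row i is column i.
    sims -= sims.max(axis=1, keepdims=True)
    log_prob = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n), np.arange(n)].mean()
```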
|
2502.08150
|
Force Matching with Relativistic Constraints: A Physics-Inspired
Approach to Stable and Efficient Generative Modeling
|
cs.LG cs.AI cs.CV
|
This paper introduces Force Matching (ForM), a novel framework for generative
modeling that represents an initial exploration into leveraging special
relativistic mechanics to enhance the stability of the sampling process. By
incorporating the Lorentz factor, ForM imposes a velocity constraint, ensuring
that sample velocities remain bounded within a constant limit. This constraint
serves as a fundamental mechanism for stabilizing the generative dynamics,
leading to a more robust and controlled sampling process. We provide a rigorous
theoretical analysis demonstrating that the velocity constraint is preserved
throughout the sampling procedure within the ForM framework. To validate the
effectiveness of our approach, we conduct extensive empirical evaluations. On
the \textit{half-moons} dataset, ForM significantly outperforms baseline
methods, achieving the lowest Euclidean distance loss of \textbf{0.714}, in
contrast to vanilla first-order flow matching (5.853) and first- and
second-order flow matching (5.793). Additionally, we perform an ablation study
to further investigate the impact of our velocity constraint, reaffirming the
superiority of ForM in stabilizing the generative process. The theoretical
guarantees and empirical results underscore the potential of integrating
special relativity principles into generative modeling. Our findings suggest
that ForM provides a promising pathway toward achieving stable, efficient, and
flexible generative processes. This work lays the foundation for future
advancements in high-dimensional generative modeling, opening new avenues for
the application of physical principles in machine learning.
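The Lorentz-factor velocity bound can be illustrated in a few lines (a schematic reduction under assumed unit mass and speed limit `c`; ForM's full dynamics are richer): mapping an unbounded momentum to velocity through the relativistic relation keeps every speed strictly below `c`.

```python
import numpy as np

def relativistic_velocity(p, c=1.0, m=1.0):
    """Map an unbounded momentum p to a velocity with speed < c.

    v = p / (m * gamma) with gamma = sqrt(1 + |p|^2 / (m c)^2), so
    |v| = |p| c / sqrt(|p|^2 + m^2 c^2) < c for any finite p, which is
    the kind of built-in speed bound that stabilizes sampling dynamics.
    """
    p = np.asarray(p, dtype=float)
    gamma = np.sqrt(1.0 + (p @ p) / (m * c) ** 2)
    return p / (m * gamma)
```

For small momenta the map is nearly the identity, so the constraint only intervenes when the dynamics would otherwise produce extreme velocities.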
|
2502.08151
|
Local Differential Privacy is Not Enough: A Sample Reconstruction Attack
against Federated Learning with Local Differential Privacy
|
cs.CR cs.LG
|
Reconstruction attacks against federated learning (FL) aim to reconstruct
users' samples through users' uploaded gradients. Local differential privacy
(LDP) is regarded as an effective defense against various attacks, including
sample reconstruction in FL, where gradients are clipped and perturbed.
Existing attacks are ineffective in FL with LDP since clipped and perturbed
gradients obliterate most of the sample information needed for
reconstruction. Moreover, existing attacks embed additional sample
information into gradients to improve the attack effect, which causes
gradient expansion and leads to more severe gradient clipping in FL with LDP.
In this paper, we propose a sample
reconstruction attack against LDP-based FL with any target models to
reconstruct victims' sensitive samples to illustrate that FL with LDP is not
flawless. Considering gradient expansion in reconstruction attacks and noise in
LDP, the core of the proposed attack is gradient compression and reconstructed
sample denoising. For gradient compression, an inference structure based on
sample characteristics is presented to reduce redundant gradients against LDP.
For reconstructed sample denoising, we artificially introduce zero gradients
to observe the noise distribution and scale a confidence interval to filter
the noise.
Theoretical proof guarantees the effectiveness of the proposed attack.
Evaluations show that the proposed attack is the only attack that reconstructs
victims' training samples in LDP-based FL and has little impact on the target
model's accuracy. We conclude that LDP-based FL needs further improvements to
defend against sample reconstruction attacks effectively.
|
2502.08155
|
DGSense: A Domain Generalization Framework for Wireless Sensing
|
cs.LG cs.AI
|
Wireless sensing brings great benefits to our daily lives. However, wireless
signals are sensitive to their surroundings: various factors, e.g.,
environments, locations, and individuals, can affect wireless propagation.
Such a change can be regarded as a domain, in which the data distribution
shifts. The vast majority of sensing schemes are learning-based; they depend
on the training domains, resulting in performance degradation in unseen
domains. Researchers have proposed various solutions to address this issue,
but these solutions leverage either semi-supervised or unsupervised domain
adaptation techniques; they still require some data from the target domains
and do not perform well in unseen domains. In this paper, we propose a
domain generalization framework DGSense, to eliminate the domain dependence
problem in wireless sensing. The framework is a general solution working across
diverse sensing tasks and wireless technologies. Once the sensing model is
built, it can generalize to unseen domains without any data from the target
domain. To achieve the goal, we first increase the diversity of the training
set by a virtual data generator, and then extract the domain independent
features via episodic training between the main feature extractor and the
domain feature extractors. The feature extractors employ a pre-trained Residual
Network (ResNet) with an attention mechanism for spatial features, and a 1D
Convolutional Neural Network (1DCNN) for temporal features. To demonstrate the
effectiveness and generality of DGSense, we evaluated on WiFi gesture
recognition, Millimeter Wave (mmWave) activity recognition, and acoustic fall
detection. All the systems exhibited high generalization capability to unseen
domains, including new users, locations, and environments, free of new data and
retraining.
|
2502.08158
|
Open-Source Factor Graph Optimization Package for GNSS: Examples and
Applications
|
cs.RO
|
State estimation methods using factor graph optimization (FGO) have garnered
significant attention in global navigation satellite system (GNSS) research.
FGO exhibits superior estimation accuracy compared with traditional state
estimation methods that rely on least-squares or Kalman filters. However, only
a few FGO libraries are specialized for GNSS observations. This paper
introduces an open-source GNSS FGO package named gtsam\_gnss, which has a
simple structure and can be easily applied to GNSS research and development.
This package separates the preprocessing of GNSS observations from factor
optimization. Moreover, it describes the error function of the GNSS factor in a
straightforward manner, allowing for general-purpose inputs. This design
facilitates the transition from ordinary least-squares-based positioning to FGO
and supports user-specific GNSS research. In addition, gtsam\_gnss includes
analytical examples involving various factors using GNSS data in real urban
environments. This paper presents three application examples: the use of a
robust error model, estimation of integer ambiguity in the carrier phase, and
combination of GNSS and inertial measurements from smartphones. The proposed
framework demonstrates excellent state estimation performance across all use
cases.
|
2502.08160
|
Vertical Federated Learning in Practice: The Good, the Bad, and the Ugly
|
cs.LG cs.AI
|
Vertical Federated Learning (VFL) is a privacy-preserving collaborative
learning paradigm that enables multiple parties with distinct feature sets to
jointly train machine learning models without sharing their raw data. Despite
its potential to facilitate cross-organizational collaborations, the deployment
of VFL systems in real-world applications remains limited. To investigate the
gap between existing VFL research and practical deployment, this survey
analyzes the real-world data distributions in potential VFL applications and
identifies four key findings that highlight this gap. We propose a novel
data-oriented taxonomy of VFL algorithms based on real VFL data distributions.
Our comprehensive review of existing VFL algorithms reveals that some common
practical VFL scenarios have few or no viable solutions. Based on these
observations, we outline key research directions aimed at bridging the gap
between current VFL research and real-world applications.
|
2502.08161
|
MixDec Sampling: A Soft Link-based Sampling Method of Graph Neural
Network for Recommendation
|
cs.IR cs.AI
|
Graph neural networks have been widely used in recent recommender systems,
where negative sampling plays an important role. Existing negative sampling
methods restrict the relationship between nodes as either hard positive pairs
or hard negative pairs. This leads to a loss of structural information and
offers no mechanism to generate positive pairs for nodes with few neighbors.
To overcome these limitations, we propose a novel soft link-based sampling
method, MixDec Sampling, which consists of a Mixup Sampling module and a
Decay Sampling module. Mixup Sampling augments node features by synthesizing
new nodes and soft links, providing a sufficient number of samples for nodes
with few neighbors. Decay Sampling strengthens the digestion of graph
structure information by generating soft links for node embedding learning. To
the best of our knowledge, we are the first to model sampling relationships
between nodes by soft links in GNN-based recommender systems. Extensive
experiments demonstrate that the proposed MixDec Sampling can significantly and
consistently improve the recommendation performance of several representative
GNN-based models on various recommendation benchmarks.
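The node-mixup idea behind Mixup Sampling can be sketched as follows (a generic embedding-level sketch with assumed inputs, not the paper's full module): virtual nodes are interpolations of an anchor and its neighbors, and the mixing ratio doubles as a soft link strength rather than a hard 0/1 pair label.

```python
import numpy as np

def mixup_node_samples(z_anchor, z_neighbors, alpha=0.4, rng=None):
    """Synthesize soft-linked virtual nodes by mixing an anchor node's
    embedding with sampled neighbor embeddings (illustrative sketch).

    Returns virtual embeddings and their soft link strengths lam in
    (0, 1); lam serves as a soft positive label, supplying extra
    positives for nodes with few neighbors.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha, size=len(z_neighbors))       # mix ratios
    z_virtual = lam[:, None] * z_anchor + (1 - lam[:, None]) * z_neighbors
    return z_virtual, lam
```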
|
2502.08166
|
From Individual Experience to Collective Evidence: A Reporting-Based
Framework for Identifying Systemic Harms
|
cs.CY cs.LG
|
When an individual reports a negative interaction with some system, how can
their personal experience be contextualized within broader patterns of system
behavior? We study the incident database problem, where individual reports of
adverse events arrive sequentially, and are aggregated over time. In this work,
our goal is to identify whether there are subgroups--defined by any combination
of relevant features--that are disproportionately likely to experience harmful
interactions with the system. We formalize this problem as a sequential
hypothesis test, and identify conditions on reporting behavior that are
sufficient for making inferences about disparities in true rates of harm across
subgroups. We show that algorithms for sequential hypothesis tests can be
applied to this problem with a standard multiple testing correction. We then
demonstrate our method on real-world datasets, including mortgage decisions and
vaccine side effects; on each, our method (re-)identifies subgroups known to
experience disproportionate harm using only a fraction of the data that was
initially used to discover them.
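One concrete instance of the sequential-test-plus-correction recipe described above (an illustrative choice, not the paper's exact procedure): an anytime-valid Bernoulli likelihood-ratio e-process per subgroup, with a Bonferroni split over subgroups.

```python
import math

class SubgroupSequentialTest:
    """Anytime-valid test that a subgroup's harm rate exceeds p0
    (illustrative sketch).

    Uses a Bernoulli likelihood-ratio e-process against an alternative
    rate p1 > p0; by Ville's inequality, rejecting when the e-value
    exceeds 1/alpha controls type-I error at level alpha, and the
    Bonferroni split alpha/K handles K subgroups tested simultaneously.
    """
    def __init__(self, p0, p1, alpha=0.05, n_subgroups=1):
        assert 0 < p0 < p1 < 1
        self.p0, self.p1 = p0, p1
        self.threshold = n_subgroups / alpha
        self.log_e = 0.0

    def update(self, harmed: bool) -> bool:
        """Process one report from this subgroup; True flags a disparity."""
        if harmed:
            self.log_e += math.log(self.p1 / self.p0)
        else:
            self.log_e += math.log((1 - self.p1) / (1 - self.p0))
        return self.log_e >= math.log(self.threshold)
```

Because the test is anytime-valid, it can be monitored after every arriving report without inflating the false-positive rate, which matches the sequential-arrival setting of an incident database.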
|
2502.08167
|
DNNs May Determine Major Properties of Their Outputs Early, with Timing
Possibly Driven by Bias
|
cs.LG cs.CV
|
This paper argues that deep neural networks (DNNs) mostly determine their
outputs during the early stages of inference, where biases inherent in the
model play a crucial role in shaping this process. We draw a parallel between
this phenomenon and human decision-making, which often relies on fast,
intuitive heuristics. Using diffusion models (DMs) as a case study, we
demonstrate that DNNs often make early-stage decisions influenced by the type
and extent of bias in their design and training. Our findings offer a new
perspective on bias mitigation, efficient inference, and the interpretation of
machine learning systems. By identifying the temporal dynamics of
decision-making in DNNs, this paper aims to inspire further discussion and
research within the machine learning community.
|
2502.08168
|
SARChat-Bench-2M: A Multi-Task Vision-Language Benchmark for SAR Image
Interpretation
|
cs.CL
|
As a powerful all-weather Earth observation tool, synthetic aperture radar
(SAR) remote sensing enables critical military reconnaissance, maritime
surveillance, and infrastructure monitoring. Although vision-language models
(VLMs) have made remarkable progress in natural language processing and image
understanding, their applications remain limited in professional domains due
to insufficient domain expertise. This paper proposes the first large-scale
multimodal dialogue dataset for SAR images, named SARChat-2M, which contains
approximately 2 million high-quality image-text pairs and encompasses diverse
scenarios with detailed target annotations. The dataset not only supports key
tasks such as visual understanding and object detection, but also has unique
innovative aspects: this study develops a visual-language dataset and
benchmark for the SAR domain, enabling and evaluating VLMs' capabilities in
SAR image interpretation, and provides a paradigmatic framework for
constructing multimodal datasets across various remote sensing vertical
domains. Experiments on 16 mainstream VLMs fully verify the effectiveness of
the dataset. The project will be released at
https://github.com/JimmyMa99/SARChat.
|
2502.08169
|
CoDynTrust: Robust Asynchronous Collaborative Perception via Dynamic
Feature Trust Modulus
|
cs.CV
|
Collaborative perception, fusing information from multiple agents, can extend
perception range so as to improve perception performance. However, temporal
asynchrony in real-world environments, caused by communication delays, clock
misalignment, or sampling configuration differences, can lead to information
mismatches. If these are not handled well, collaborative performance degrades
and, worse, safety accidents may occur. To tackle this challenge,
we propose CoDynTrust, an uncertainty-encoded asynchronous fusion perception
framework that is robust to the information mismatches caused by temporal
asynchrony. CoDynTrust generates a dynamic feature trust modulus (DFTM) for
each region of interest by modeling aleatoric and epistemic uncertainty, and
selectively suppresses or retains single-vehicle features accordingly, thereby
mitigating information mismatches. We then design a multi-scale fusion module
to handle multi-scale feature maps processed by DFTM. Compared to existing
works that also consider asynchronous collaborative perception, CoDynTrust
combats various low-quality information in temporally asynchronous scenarios
and allows uncertainty to be propagated to downstream tasks such as planning
and control. Experimental results demonstrate that CoDynTrust significantly
reduces performance degradation caused by temporal asynchrony across multiple
datasets, achieving state-of-the-art detection performance even with temporal
asynchrony. The code is available at https://github.com/CrazyShout/CoDynTrust.
|
2502.08170
|
Learning-Based Design of LQG Controllers in Quantum Coherent Feedback
|
quant-ph cs.SY eess.SY
|
In this paper, we propose a differential evolution (DE) algorithm
specifically tailored for the design of Linear-Quadratic-Gaussian (LQG)
controllers in quantum systems. Building upon the foundational DE framework,
the algorithm incorporates specialized modules, including relaxed feasibility
rules, a scheduled penalty function, adaptive search range adjustment, and the
``bet-and-run'' initialization strategy. These enhancements improve the
algorithm's exploration and exploitation capabilities while addressing the
unique physical realizability requirements of quantum systems. The proposed
method is applied to a quantum optical system, where three distinct controllers
with varying configurations relative to the plant are designed. The resulting
controllers demonstrate superior performance, achieving lower LQG performance
indices compared to existing approaches. Additionally, the algorithm ensures
that the designs comply with physical realizability constraints, guaranteeing
compatibility with practical quantum platforms. The proposed approach holds
significant potential for application to other linear quantum systems in
performance optimization tasks subject to physically feasible constraints.
|
2502.08177
|
SycEval: Evaluating LLM Sycophancy
|
cs.AI
|
Large language models (LLMs) are increasingly applied in educational,
clinical, and professional settings, but their tendency toward sycophancy --
prioritizing user agreement over independent reasoning -- poses risks to
reliability. This study introduces a framework to evaluate sycophantic behavior
in ChatGPT-4o, Claude-Sonnet, and Gemini-1.5-Pro across AMPS (mathematics) and
MedQuad (medical advice) datasets. Sycophantic behavior was observed in 58.19%
of cases, with Gemini exhibiting the highest rate (62.47%) and ChatGPT the
lowest (56.71%). Progressive sycophancy, leading to correct answers, occurred
in 43.52% of cases, while regressive sycophancy, leading to incorrect answers,
was observed in 14.66%. Preemptive rebuttals demonstrated significantly higher
sycophancy rates than in-context rebuttals (61.75% vs. 56.52%, $Z=5.87$,
$p<0.001$), particularly in computational tasks, where regressive sycophancy
increased significantly (preemptive: 8.13%, in-context: 3.54%, $p<0.001$).
Simple rebuttals maximized progressive sycophancy ($Z=6.59$, $p<0.001$), while
citation-based rebuttals exhibited the highest regressive rates ($Z=6.59$,
$p<0.001$). Sycophantic behavior showed high persistence (78.5%, 95% CI:
[77.2%, 79.8%]) regardless of context or model. These findings emphasize the
risks and opportunities of deploying LLMs in structured and dynamic domains,
offering insights into prompt programming and model optimization for safer AI
applications.
|
2502.08178
|
ParetoRAG: Leveraging Sentence-Context Attention for Robust and
Efficient Retrieval-Augmented Generation
|
cs.CL
|
While Retrieval-Augmented Generation (RAG) systems enhance Large Language
Models (LLMs) by incorporating external knowledge, they still face persistent
challenges in retrieval inefficiency and the inability of LLMs to filter out
irrelevant information. We present ParetoRAG, an unsupervised framework that
optimizes RAG systems through sentence-level refinement guided by the Pareto
principle. By decomposing paragraphs into sentences and dynamically
re-weighting core content while preserving contextual coherence, ParetoRAG
achieves dual improvements in both retrieval precision and generation quality
without requiring additional training or API resources. This framework has been
empirically validated across various datasets, LLMs, and retrievers.
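The sentence-level refinement described above can be illustrated with a hypothetical sketch (not the authors' implementation): a retrieved paragraph is decomposed into sentences and each sentence is scored against the query, here with a simple token-overlap measure standing in for a learned attention score, so the highest-scoring "core content" can be re-weighted while the remaining sentences are kept for contextual coherence.

```python
# Hypothetical sketch of sentence-level scoring in the spirit of ParetoRAG
# (illustration only; the paper's actual scoring mechanism may differ).
import re

def sentence_scores(paragraph, query):
    # Decompose the paragraph into sentences.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", paragraph.strip()) if s]
    q_tokens = set(query.lower().split())
    scored = []
    for s in sentences:
        # Score each sentence by token overlap with the query.
        tokens = set(re.findall(r"\w+", s.lower()))
        overlap = len(tokens & q_tokens) / max(len(q_tokens), 1)
        scored.append((overlap, s))
    # Highest-weighted "core" sentences first.
    return sorted(scored, reverse=True)

paragraph = ("RAG systems retrieve documents. Retrieval precision matters. "
             "The weather was pleasant that day.")
for score, s in sentence_scores(paragraph, "retrieval precision in RAG"):
    print(f"{score:.2f}  {s}")
```

In a full pipeline the overlap score would be replaced by a sentence-context attention weight, and the re-weighted sentences would feed both retrieval and generation.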
|
2502.08180
|
Enhancing LLM Character-Level Manipulation via Divide and Conquer
|
cs.CL cs.AI
|
Large Language Models (LLMs) have demonstrated strong generalization
capabilities across a wide range of natural language processing (NLP) tasks.
However, they exhibit notable weaknesses in character-level string
manipulation, struggling with fundamental operations such as character
deletion, insertion, and substitution. These challenges stem primarily from
tokenization constraints, despite the critical role of such operations in data
preprocessing and code generation. Through systematic analysis, we derive two
key insights: (1) LLMs face significant difficulties in leveraging intrinsic
token knowledge for character-level reasoning, and (2) atomized word structures
can substantially enhance LLMs' ability to process token-level structural
information. Building on these insights, we propose Character-Level
Manipulation via Divide and Conquer, a novel approach designed to bridge the
gap between token-level processing and character-level manipulation. Our method
decomposes complex operations into explicit character-level subtasks coupled
with controlled token reconstruction phases, leading to significant
improvements in accuracy. Without additional training, our method significantly
improves accuracies on the $\texttt{Deletion}$, $\texttt{Insertion}$, and
$\texttt{Substitution}$ tasks. To support further research, we open-source our
implementation and benchmarks.
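The divide-and-conquer idea above can be made concrete with a minimal hypothetical sketch (not the paper's code): a token is atomized into explicit characters, the edit is applied at character level, and the token is then reconstructed, sidestepping the tokenizer's view of the word.

```python
# Minimal sketch of character-level manipulation via divide and conquer
# (hypothetical illustration, not the authors' implementation).
def char_level_edit(word, op, index, char=None):
    chars = list(word)                # divide: atomize the word into characters
    if op == "delete":
        del chars[index]
    elif op == "insert":
        chars.insert(index, char)
    elif op == "substitute":
        chars[index] = char
    else:
        raise ValueError(f"unknown op: {op}")
    return "".join(chars)             # conquer: reconstruct the token

print(char_level_edit("tokenization", "delete", 0))    # okenization
print(char_level_edit("cat", "insert", 1, "h"))        # chat
print(char_level_edit("ball", "substitute", 0, "c"))   # call
```

The point of the decomposition is that each subtask operates on single characters, which an LLM can handle reliably, rather than on opaque multi-character tokens.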
|
2502.08181
|
Latest Advancements Towards Catastrophic Forgetting under Data Scarcity:
A Comprehensive Survey on Few-Shot Class Incremental Learning
|
cs.LG cs.AI cs.CV
|
Data scarcity significantly complicates the continual learning problem, i.e.,
how a deep neural network learns in dynamic environments with very few samples.
However, recent progress in few-shot class incremental learning (FSCIL)
methods and related studies offers valuable insight into how to tackle the
problem. This paper presents a comprehensive survey on FSCIL that highlights
several important aspects, i.e., the comprehensive and formal objectives of
FSCIL approaches, the importance of prototype rectification, new learning
paradigms based on pre-trained models and language-guided mechanisms, a deeper
analysis of FSCIL performance metrics and evaluation, and the practical
contexts of FSCIL in various areas. Our extensive discussion presents the open
challenges, potential solutions, and future directions of FSCIL.
|
2502.08189
|
AnyCharV: Bootstrap Controllable Character Video Generation with
Fine-to-Coarse Guidance
|
cs.CV
|
Character video generation is a significant real-world application focused on
producing high-quality videos featuring specific characters. Recent
advancements have introduced various control signals to animate static
characters, successfully enhancing control over the generation process.
However, these methods often lack flexibility, limiting their applicability and
making it challenging for users to synthesize a source character into a desired
target scene. To address this issue, we propose a novel framework, AnyCharV,
that flexibly generates character videos using arbitrary source characters and
target scenes, guided by pose information. Our approach involves a two-stage
training process. In the first stage, we develop a base model capable of
integrating the source character with the target scene using pose guidance. The
second stage further bootstraps controllable generation through a self-boosting
mechanism, where we use the generated video in the first stage and replace the
fine mask with the coarse one, enabling training outcomes with better
preservation of character details. Experimental results demonstrate the
effectiveness and robustness of our proposed method. Our project page is
https://anycharv.github.io.
|
2502.08200
|
ActiveSSF: An Active-Learning-Guided Self-Supervised Framework for
Long-Tailed Megakaryocyte Classification
|
cs.CV
|
Precise classification of megakaryocytes is crucial for diagnosing
myelodysplastic syndromes. Although self-supervised learning has shown promise
in medical image analysis, its application to classifying megakaryocytes in
stained slides faces three main challenges: (1) pervasive background noise that
obscures cellular details, (2) a long-tailed distribution that limits data for
rare subtypes, and (3) complex morphological variations leading to high
intra-class variability. To address these issues, we propose the ActiveSSF
framework, which integrates active learning with self-supervised pretraining.
Specifically, our approach employs Gaussian filtering combined with K-means
clustering and HSV analysis (augmented by clinical prior knowledge) for
accurate region-of-interest extraction; an adaptive sample selection mechanism
that dynamically adjusts similarity thresholds to mitigate class imbalance; and
prototype clustering on labeled samples to overcome morphological complexity.
Experimental results on clinical megakaryocyte datasets demonstrate that
ActiveSSF not only achieves state-of-the-art performance but also significantly
improves recognition accuracy for rare subtypes. Moreover, the integration of
these advanced techniques further underscores the practical potential of
ActiveSSF in clinical settings. To foster further research, the code and
datasets will be publicly released in the future.
|
2502.08202
|
Privacy amplification by random allocation
|
cs.LG
|
We consider the privacy guarantees of an algorithm in which a user's data is
used in $k$ steps randomly and uniformly chosen from a sequence (or set) of $t$
differentially private steps. We demonstrate that the privacy guarantees of
this sampling scheme can be upper-bounded by the privacy guarantees of the
well-studied independent (or Poisson) subsampling scheme, in which each step
uses the user's data with probability $(1+o(1))k/t$. Further, we provide two
additional analysis techniques that lead to numerical improvements in some
parameter regimes. The case of $k=1$ has been previously studied in the context
of DP-SGD in Balle et al. (2020) and very recently in Chua et al. (2024).
The privacy analysis of Balle et al. (2020) relies on privacy amplification by
shuffling, which leads to overly conservative bounds. The privacy analysis of
Chua et al. (2024a) relies on Monte Carlo simulations that are computationally
prohibitive in many practical scenarios and have additional inherent
limitations.
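The two sampling schemes being compared above can be simulated directly. The following sketch (my illustration, not the paper's analysis) checks that under random allocation a user's data participates in exactly $k$ of the $t$ steps, while under Poisson subsampling each step includes it independently with probability $k/t$, so the marginal per-step inclusion frequency matches and the schemes differ only in the correlation across steps.

```python
# Illustration of random allocation vs. Poisson subsampling inclusion patterns
# (hypothetical simulation; the paper's contribution is the privacy analysis,
# which this sketch does not reproduce).
import random

def allocation_counts(k, t, trials, rng):
    counts = [0] * t
    for _ in range(trials):
        for step in rng.sample(range(t), k):  # exactly k steps, chosen uniformly
            counts[step] += 1
    return counts

def poisson_counts(k, t, trials, rng):
    counts = [0] * t
    p = k / t
    for _ in range(trials):
        for step in range(t):
            if rng.random() < p:              # each step independently with prob k/t
                counts[step] += 1
    return counts

rng = random.Random(0)
k, t, trials = 2, 10, 20000
alloc = allocation_counts(k, t, trials, rng)
poisson = poisson_counts(k, t, trials, rng)
# Per-step inclusion frequency is ~k/t = 0.2 under both schemes.
print([round(c / trials, 2) for c in alloc])
print([round(c / trials, 2) for c in poisson])
```

The cross-step correlation (allocation never uses the data more than $k$ times in total, Poisson can) is exactly what the amplification bounds must account for.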
|
2502.08205
|
Wisdom of the Crowds in Forecasting: Forecast Summarization for
Supporting Future Event Prediction
|
cs.LG cs.CL cs.IR
|
Future Event Prediction (FEP) is an essential activity whose demand and
application range across multiple domains. While traditional methods like
simulations, predictive and time-series forecasting have demonstrated promising
outcomes, their application in forecasting complex events is not entirely
reliable due to the inability of numerical data to accurately capture the
semantic information related to events. One approach is to gather and
aggregate collective opinions about the future, since such cumulative
perspectives can help estimate the likelihood of upcoming
events. In this work, we organize the existing research and frameworks that aim
to support future event prediction based on crowd wisdom through aggregating
individual forecasts. We discuss the challenges involved, available datasets,
as well as the scope of improvement and future research directions for this
task. We also introduce a novel data model to represent individual forecast
statements.
|
2502.08206
|
Optimizing Asynchronous Federated Learning: A Delicate Trade-Off Between
Model-Parameter Staleness and Update Frequency
|
cs.LG cs.PF math.OC math.PR
|
Synchronous federated learning (FL) scales poorly with the number of clients
due to the straggler effect. Algorithms like FedAsync and GeneralizedFedAsync
address this limitation by enabling asynchronous communication between clients
and the central server. In this work, we rely on stochastic modeling to better
understand the impact of design choices in asynchronous FL algorithms, such as
the concurrency level and routing probabilities, and we leverage this knowledge
to optimize loss. We characterize in particular a fundamental trade-off for
optimizing asynchronous FL: minimizing gradient estimation errors by avoiding
model parameter staleness, while also speeding up the system by increasing the
throughput of model updates. Our two main contributions can be summarized as
follows. First, we prove a discrete variant of Little's law to derive a
closed-form expression for relative delay, a metric that quantifies staleness.
This allows us to efficiently minimize the average loss per model update, which
has been the gold standard in literature to date. Second, we observe that
naively optimizing this metric slows the system down drastically by
overemphasizing staleness to the detriment of throughput. This motivates us to
introduce an alternative metric that also takes system speed into account, for
which we derive a tractable upper-bound that can be minimized numerically.
Extensive numerical results show that these optimizations enhance accuracy by
10% to 30%.
|
2502.08208
|
Exploring Exploration in Bayesian Optimization
|
cs.LG
|
A well-balanced exploration-exploitation trade-off is crucial for successful
acquisition functions in Bayesian optimization. However, there is a lack of
quantitative measures for exploration, making it difficult to analyze and
compare different acquisition functions. This work introduces two novel
approaches - observation traveling salesman distance and observation entropy -
to quantify the exploration characteristics of acquisition functions based on
their selected observations. Using these measures, we examine the explorative
nature of several well-known acquisition functions across a diverse set of
black-box problems, uncover links between exploration and empirical
performance, and reveal new relationships among existing acquisition functions.
Beyond enabling a deeper understanding of acquisition functions, these measures
also provide a foundation for guiding their design in a more principled and
systematic manner.
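One plausible reading of the "observation traveling salesman distance" mentioned above can be sketched as follows (hypothetical interpretation; the paper's exact construction may differ): the length of a short tour visiting all points an acquisition function has sampled, computed here with a greedy nearest-neighbor heuristic. More explorative acquisition functions spread their observations out, yielding a longer tour.

```python
# Greedy nearest-neighbor tour length over observed points, used as a rough
# proxy for an observation traveling salesman distance (illustration only).
import math

def tour_length(points):
    remaining = list(points[1:])
    current, total = points[0], 0.0
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        total += math.dist(current, nxt)
        remaining.remove(nxt)
        current = nxt
    return total

clustered = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1)]  # exploitative sampling
spread = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]     # explorative sampling
print(tour_length(clustered) < tour_length(spread))
```

A complementary measure like observation entropy would instead bin the observations and compute the entropy of the resulting histogram.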
|
2502.08209
|
Equivariant Masked Position Prediction for Efficient Molecular
Representation
|
cs.LG cs.AI
|
Graph neural networks (GNNs) have shown considerable promise in computational
chemistry. However, the limited availability of molecular data raises concerns
regarding GNNs' ability to effectively capture the fundamental principles of
physics and chemistry, which constrains their generalization capabilities. To
address this challenge, we introduce a novel self-supervised approach termed
Equivariant Masked Position Prediction (EMPP), grounded in intramolecular
potential and force theory. Unlike conventional attribute masking techniques,
EMPP formulates a nuanced position prediction task that is more well-defined
and enhances the learning of quantum mechanical features. EMPP also bypasses
the approximation of the Gaussian mixture distribution commonly used in
denoising methods, allowing for more accurate acquisition of physical
properties. Experimental results indicate that EMPP significantly enhances the
performance of advanced molecular architectures, surpassing state-of-the-art
self-supervised approaches. Our code is released at
https://github.com/ajy112/EMPP.
|
2502.08211
|
Quality over Quantity: Boosting Data Efficiency Through Ensembled
Multimodal Data Curation
|
cs.LG cs.AI
|
In an era overwhelmed by vast amounts of data, the effective curation of
web-crawl datasets is essential for optimizing model performance. This paper
tackles the challenges associated with the unstructured and heterogeneous
nature of such datasets. Traditional heuristic curation methods often
inadequately capture complex features, resulting in biases and the exclusion of
relevant data. We introduce an advanced, learning-driven approach, Ensemble
Curation Of DAta ThroUgh Multimodal Operators (EcoDatum), incorporating a novel
quality-guided deduplication method to ensure balanced feature distributions.
EcoDatum strategically integrates various unimodal and multimodal data curation
operators within a weak supervision ensemble framework, utilizing automated
optimization to score each data point effectively. EcoDatum, which
significantly improves the data curation quality and efficiency, outperforms
existing state-of-the-art (SOTA) techniques, ranking 1st on the DataComp
leaderboard, with an average performance score of 0.182 across 38 diverse
evaluation datasets. This represents a 28% improvement over the DataComp
baseline method, demonstrating its effectiveness in improving dataset curation
and model training efficiency.
|
2502.08213
|
LLM Modules: Knowledge Transfer from a Large to a Small Model using
Enhanced Cross-Attention
|
cs.CL cs.LG
|
In this work, we propose an architecture of LLM Modules that enables the
transfer of knowledge from a large pre-trained model to a smaller model using
an Enhanced Cross-Attention mechanism. In the proposed scheme, the Qwen2-1.5B
model is frozen and its representations are passed through specially designed
attention layers to the GPT-Neo-125M model, which is trained on limited
computational resources. Experimental results on the Bespoke-Stratos-17k
dataset demonstrate that after 15 epochs of training, the combined model
generates responses comparable in quality to those obtained by distillation. We
discuss the advantages of the modular approach, provide examples of input
queries and comparative analysis, and outline prospects for further extension
of the method.
|
2502.08214
|
Unbiased and Error-Detecting Combinatorial Pooling Experiments with
Balanced Constant-Weight Gray Codes for Consecutive Positives Detection
|
cs.IT math.IT q-bio.QM
|
Combinatorial pooling schemes have enabled the measurement of thousands of
experiments in a small number of reactions. This efficiency is achieved by
distributing the items to be measured across multiple reaction units called
pools. However, current methods for the design of pooling schemes do not
adequately address the need for balanced item distribution across pools, a
property particularly important for biological applications. Here, we introduce
balanced constant-weight Gray codes for detecting consecutive positives
(DCP-CWGCs) for the efficient construction of combinatorial pooling schemes.
Balanced DCP-CWGCs ensure uniform item distribution across pools, allow for the
identification of consecutive positive items such as overlapping biological
sequences, and enable error detection by keeping the number of tests on
individual and consecutive positive items constant. For the efficient
construction of balanced DCP-CWGCs, we have released an open-source Python
package codePub, with implementations of the two core algorithms: a
branch-and-bound algorithm (BBA) and a recursive combination with BBA (rcBBA).
Simulations using codePub show that our algorithms can construct long, balanced
DCP-CWGCs that allow for error detection in tractable runtime.
|
2502.08216
|
Deepfake Detection with Spatio-Temporal Consistency and Attention
|
cs.CV
|
Deepfake videos are causing growing concerns among communities due to their
ever-increasing realism. Naturally, automated detection of forged Deepfake
videos is attracting a commensurate amount of interest from researchers. Current
methods for detecting forged videos mainly rely on global frame features and
under-utilize the spatio-temporal inconsistencies found in the manipulated
videos. Moreover, they fail to attend to manipulation-specific subtle and
well-localized pattern variations along both spatial and temporal dimensions.
Addressing these gaps, we propose a neural Deepfake detector that focuses on
the localized manipulative signatures of the forged videos at individual frame
level as well as frame sequence level. Using a ResNet backbone, it strengthens
the shallow frame-level feature learning with a spatial attention mechanism.
The spatial stream of the model is further helped by fusing texture enhanced
shallow features with the deeper features. Simultaneously, the model processes
frame sequences with a distance attention mechanism that further allows fusion
of temporal attention maps with the learned features at the deeper layers. The
overall model is trained to detect forged content as a classifier. We evaluate
our method on two popular large data sets and achieve significant performance
over the state-of-the-art methods. Moreover, our technique also provides memory
and computational advantages over the competitive techniques.
|
2502.08221
|
Take What You Need: Flexible Multi-Task Semantic Communications with
Channel Adaptation
|
cs.CV cs.IT cs.NI math.IT
|
The growing demand for efficient semantic communication systems capable of
managing diverse tasks and adapting to fluctuating channel conditions has
driven the development of robust, resource-efficient frameworks. This article
introduces a novel channel-adaptive and multi-task-aware semantic communication
framework based on a masked auto-encoder architecture. Our framework optimizes
the transmission of meaningful information by incorporating a multi-task-aware
scoring mechanism that identifies and prioritizes semantically significant data
across multiple concurrent tasks. A channel-aware extractor is employed to
dynamically select relevant information in response to real-time channel
conditions. By jointly optimizing semantic relevance and transmission
efficiency, the framework ensures minimal performance degradation under
resource constraints. Experimental results demonstrate the superior performance
of our framework compared to conventional methods in tasks such as image
reconstruction and object detection. These results underscore the framework's
adaptability to heterogeneous channel environments and its scalability for
multi-task applications, positioning it as a promising solution for
next-generation semantic communication networks.
|
2502.08226
|
TRISHUL: Towards Region Identification and Screen Hierarchy
Understanding for Large VLM based GUI Agents
|
cs.CV cs.AI cs.LG
|
Recent advancements in Large Vision Language Models (LVLMs) have enabled the
development of LVLM-based Graphical User Interface (GUI) agents under various
paradigms. Training-based approaches, such as CogAgent and SeeClick, struggle
with cross-dataset and cross-platform generalization due to their reliance on
dataset-specific training. Generalist LVLMs, such as GPT-4V, employ
Set-of-Marks (SoM) for action grounding, but obtaining SoM labels requires
metadata like HTML source, which is not consistently available across
platforms. Moreover, existing methods often specialize in singular GUI tasks
rather than achieving comprehensive GUI understanding. To address these
limitations, we introduce TRISHUL, a novel, training-free agentic framework
that enhances generalist LVLMs for holistic GUI comprehension. Unlike prior
works that focus on either action grounding (mapping instructions to GUI
elements) or GUI referring (describing GUI elements given a location), TRISHUL
seamlessly integrates both. At its core, TRISHUL employs Hierarchical Screen
Parsing (HSP) and the Spatially Enhanced Element Description (SEED) module,
which work synergistically to provide multi-granular, spatially, and
semantically enriched representations of GUI elements. Our results demonstrate
TRISHUL's superior performance in action grounding across the ScreenSpot,
VisualWebBench, AITW, and Mind2Web datasets. Additionally, for GUI referring,
TRISHUL surpasses the ToL agent on the ScreenPR benchmark, setting a new
standard for robust and adaptable GUI comprehension.
|
2502.08227
|
Enhancing Sample Selection by Cutting Mislabeled Easy Examples
|
cs.LG
|
Sample selection is a prevalent approach in learning with noisy labels,
aiming to identify confident samples for training. Although existing sample
selection methods have achieved decent results by reducing the noise rate of
the selected subset, they often overlook that not all mislabeled examples harm
the model's performance equally. In this paper, we demonstrate that mislabeled
examples correctly predicted by the model early in the training process are
particularly harmful to model performance. We refer to these examples as
Mislabeled Easy Examples (MEEs). To address this, we propose Early Cutting,
which introduces a recalibration step that employs the model's later training
state to re-select the confident subset identified early in training, thereby
avoiding misleading confidence from early learning and effectively filtering
out MEEs. Experiments on the CIFAR, WebVision, and full ImageNet-1k datasets
demonstrate that our method effectively improves sample selection and model
performance by reducing MEEs.
|
2502.08231
|
Keep your distance: learning dispersed embeddings on $\mathbb{S}_d$
|
cs.LG
|
Learning well-separated features in high-dimensional spaces, such as text or
image embeddings, is crucial for many machine learning applications. Achieving
such separation can be effectively accomplished through the dispersion of
embeddings, where unrelated vectors are pushed apart as much as possible. By
constraining features to be on a hypersphere, we can connect dispersion to
well-studied problems in mathematics and physics, where optimal solutions are
known for limited low-dimensional cases. However, in representation learning we
typically deal with a large number of features in high-dimensional space, and
moreover, dispersion is usually traded off with some other task-oriented
training objective, making existing theoretical and numerical solutions
inapplicable. Therefore, it is common to rely on gradient-based methods to
encourage dispersion, usually by minimizing some function of the pairwise
distances. In this work, we first give an overview of existing methods from
disconnected literature, making new connections and highlighting similarities.
Next, we introduce some new angles. We propose to reinterpret pairwise
dispersion using a maximum mean discrepancy (MMD) motivation. We then propose
an online variant of the celebrated Lloyd's algorithm, of K-Means fame, as an
effective alternative regularizer for dispersion on generic domains. Finally,
we derive a novel dispersion method that directly exploits properties of the
hypersphere. Our experiments show the importance of dispersion in image
classification and natural language processing tasks, and how algorithms
exhibit different trade-offs in different regimes.
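The gradient-based approach mentioned above, minimizing a function of the pairwise distances, can be sketched in a few lines (my illustration, not the paper's proposed method): each unit vector is repeatedly stepped away from the sum of the others, which minimizes the total pairwise inner product, and then re-normalized to stay on the sphere.

```python
# Minimal sketch of gradient-based dispersion on the hypersphere
# (hypothetical illustration of the generic baseline, not the paper's method).
import numpy as np

def disperse(x, steps=200, lr=0.1):
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    for _ in range(steps):
        # Gradient of sum_{j != i} x_i . x_j with respect to x_i is S - x_i,
        # where S is the sum of all vectors.
        grad = x.sum(axis=0, keepdims=True) - x
        x = x - lr * grad
        x = x / np.linalg.norm(x, axis=1, keepdims=True)  # project back to sphere
    return x

def mean_pairwise_cosine(x):
    g = x @ x.T
    n = len(x)
    return (g.sum() - n) / (n * (n - 1))  # exclude the diagonal

rng = np.random.default_rng(0)
x0 = rng.normal(size=(8, 3))
x1 = disperse(x0.copy())
# For n points on the sphere the mean pairwise cosine is bounded below by
# -1/(n-1); dispersion drives the embeddings toward that bound.
print(mean_pairwise_cosine(x0 / np.linalg.norm(x0, axis=1, keepdims=True)))
print(mean_pairwise_cosine(x1))
```

In practice this repulsion term would be added as a regularizer alongside the task objective, which is precisely the trade-off the abstract highlights.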
|
2502.08232
|
ChemZIP: Accelerated Modeling of Complex Aerothermochemical Interactions
in Novel Turbomachines for Sustainable High-Temperature Chemical Processes
|
cs.CE
|
This paper introduces a new platform to accelerate the modeling of complex
aerothermochemical interactions in new turbomachines, turbo-reactors, to
decarbonise chemical processes. While previous work has aerothermally
demonstrated the potential to decarbonize the heat input to the reaction,
optimizing the reaction efficiency has been a challenge. This is because
measuring reaction performance with aerochemical simulations is computationally
prohibitive due to the uniquely complex aerodynamics and chemistry within
turbomachines. To address this, we introduce a new multifidelity
machine-learning-assisted methodology, called ChemZIP, to mitigate this
bottleneck. Although data-driven methodologies exist for combustion, modeling
reactive flows along the bladed path of a turbomachine poses new challenges.
This has led to a novel training data generation process, which allows rich
dynamic responses of the chemical system to be embedded into the training
dataset at a fraction of the cost of reacting flow simulations. The resulting
high-dimensional composition vector is compressed into a low-dimensional basis
using an autoencoder-like neural network, inspired by but more universal than
traditional flamelet-generated manifolds. Verification against 10,000 unseen
one-dimensional test conditions shows an R2 score exceeding 95% across all
quantities of interest. Following this, ChemZIP is coupled into a fully-fledged
viscous computational fluid dynamics solver. For a set of process-relevant
three-dimensional configurations entirely different from the training data, the
predictive accuracy of the thermochemical state remains within 10% of an
industry-standard solver while convergence is achieved 50 times faster, even
for a small mechanism. Therefore, numerical computations are sufficiently fast
that aerothermochemical optimization is now feasible for the first time in the
design cycle.
|
2502.08233
|
Plantation Monitoring Using Drone Images: A Dataset and Performance
Review
|
cs.CV
|
Automatic monitoring of tree plantations plays a crucial role in agriculture.
Flawless monitoring of tree health helps farmers make informed decisions
regarding their management by taking appropriate action. Use of drone images
for automatic plantation monitoring can enhance the accuracy of the monitoring
process, while still being affordable to small farmers in developing countries
such as India. Small, low cost drones equipped with an RGB camera can capture
high-resolution images of agricultural fields, allowing for detailed analysis
of the well-being of the plantations. Existing methods of automated plantation
monitoring are mostly based on satellite images, which are difficult for
farmers to obtain. We propose an automated system for plantation health
monitoring using drone images, which are increasingly accessible to farmers. We
propose a dataset of images of trees with three categories: ``Good health",
``Stunted", and ``Dead". We annotate the dataset using CVAT annotation tool,
for use in research purposes. We experiment with different well-known CNN
models to observe their performance on the proposed dataset. The initial low
accuracy levels show the complexity of the proposed dataset. Further, our study
reveals that a depth-wise convolution operation embedded in a deep CNN model
can enhance the model's performance on the drone dataset. Finally, we apply
state-of-the-art object detection models to identify individual trees to better
monitor them automatically.
|
2502.08234
|
Learning Human Skill Generators at Key-Step Levels
|
cs.CV
|
We are committed to learning human skill generators at key-step levels. The
generation of skills is a challenging endeavor, but its successful
implementation could greatly facilitate human skill learning and provide more
experience for embodied intelligence. Although current video generation models
can synthesize simple and atomic human operations, they struggle with human
skills due to their complex procedural nature. Human skills involve multi-step,
long-duration actions and complex scene transitions, so the existing naive
auto-regressive methods for synthesizing long videos cannot generate human
skills. To address this, we propose a novel task, the Key-step Skill Generation
(KS-Gen), aimed at reducing the complexity of generating human skill videos.
Given the initial state and a skill description, the task is to generate video
clips of key steps to complete the skill, rather than a full-length video. To
support this task, we introduce a carefully curated dataset and define multiple
evaluation metrics to assess performance. Considering the complexity of KS-Gen,
we propose a new framework for this task. First, a multimodal large language
model (MLLM) generates descriptions for key steps using retrieval augmentation.
Subsequently, we use a Key-step Image Generator (KIG) to address the
discontinuity between key steps in skill videos. Finally, a video generation
model uses these descriptions and key-step images to generate video clips of
the key steps with high temporal consistency. We offer a detailed analysis of
the results, hoping to provide more insights on human skill generation. All
models and data are available at https://github.com/MCG-NJU/KS-Gen.
|
2502.08235
|
The Danger of Overthinking: Examining the Reasoning-Action Dilemma in
Agentic Tasks
|
cs.AI
|
Large Reasoning Models (LRMs) represent a breakthrough in AI problem-solving
capabilities, but their effectiveness in interactive environments can be
limited. This paper introduces and analyzes overthinking in LRMs, a phenomenon
in which models favor extended internal reasoning chains over environmental
interaction. Through experiments on software engineering tasks using SWE Bench
Verified, we observe three recurring patterns: Analysis Paralysis, Rogue
Actions, and Premature Disengagement. We propose a framework to study these
behaviors, which correlates with human expert assessments, and analyze 4018
trajectories. We observe that higher overthinking scores correlate with
decreased performance, with reasoning models exhibiting stronger tendencies
toward overthinking compared to non-reasoning models. Our analysis reveals that
simple efforts to mitigate overthinking in agentic environments, such as
selecting the solution with the lower overthinking score, can improve model
performance by almost 30% while reducing computational costs by 43%. These
results suggest that mitigating overthinking has strong practical implications.
We suggest that overthinking tendencies could be mitigated by leveraging
native function-calling capabilities and selective reinforcement learning. We
also open-source our evaluation framework and dataset to facilitate research in
this direction at https://github.com/AlexCuadron/Overthinking.
|
2502.08244
|
FloVD: Optical Flow Meets Video Diffusion Model for Enhanced
Camera-Controlled Video Synthesis
|
cs.CV
|
This paper presents FloVD, a novel optical-flow-based video diffusion model
for camera-controllable video generation. FloVD leverages optical flow maps to
represent motions of the camera and moving objects. This approach offers two
key benefits. Since optical flow can be directly estimated from videos, our
approach allows for the use of arbitrary training videos without ground-truth
camera parameters. Moreover, as background optical flow encodes 3D correlation
across different viewpoints, our method enables detailed camera control by
leveraging the background motion. To synthesize natural object motion while
supporting detailed camera control, our framework adopts a two-stage video
synthesis pipeline consisting of optical flow generation and flow-conditioned
video synthesis. Extensive experiments demonstrate the superiority of our
method over previous approaches in terms of accurate camera control and natural
object motion synthesis.
|
2502.08246
|
Inference-time sparse attention with asymmetric indexing
|
cs.CL
|
Self-attention in transformer models is an incremental associative memory
that maps key vectors to value vectors. One way to speed up self-attention is
to employ GPU-compliant vector search algorithms, yet the standard partitioning
methods yield poor results in this context, because of (1) the different
distributions followed by keys and queries and (2) the effect of RoPE
positional encoding.
In this paper, we introduce SAAP (Self-Attention with Asymmetric Partitions),
which overcomes these problems. It is an asymmetrical indexing technique that
employs distinct partitions for keys and queries, thereby approximating
self-attention with a data-adaptive sparsity pattern.
It works on pretrained language models without finetuning, as it only
requires training (offline) a small query classifier. On a long-context Llama
3.1-8B model, with sequences ranging from 100k to 500k tokens, our method
typically reduces the fraction of memory that needs to be looked up by a
factor of 20, which translates to a time saving of 60\% compared to
FlashAttention-v2.
|
2502.08253
|
Multi-View Oriented GPLVM: Expressiveness and Efficiency
|
stat.ML cs.LG
|
The multi-view Gaussian process latent variable model (MV-GPLVM) aims to
learn a unified representation from multi-view data but is hindered by
challenges such as limited kernel expressiveness and low computational
efficiency. To overcome these issues, we first introduce a new duality between
the spectral density and the kernel function. By modeling the spectral density
with a bivariate Gaussian mixture, we then derive a generic and expressive
kernel termed Next-Gen Spectral Mixture (NG-SM) for MV-GPLVMs. To address the
inherent computational inefficiency of the NG-SM kernel, we propose a random
Fourier feature approximation. Combined with a tailored reparameterization
trick, this approximation enables scalable variational inference for both the
model and the unified latent representations. Numerical evaluations across a
diverse range of multi-view datasets demonstrate that our proposed method
consistently outperforms state-of-the-art models in learning meaningful latent
representations.
|
2502.08254
|
UniCoRN: Unified Commented Retrieval Network with LMMs
|
cs.CV
|
Multimodal retrieval methods have limitations in handling complex,
compositional queries that require reasoning about the visual content of both
the query and the retrieved entities. On the other hand, Large Multimodal
Models (LMMs) can answer with language to more complex visual questions, but
without the inherent ability to retrieve relevant entities to support their
answers. We aim to address these limitations with UniCoRN, a Unified Commented
Retrieval Network that combines the strengths of composed multimodal retrieval
methods and generative language approaches, going beyond Retrieval-Augmented
Generation (RAG). We introduce an entity adapter module to inject the retrieved
multimodal entities back into the LMM, so it can attend to them while
generating answers and comments. By keeping the base LMM frozen, UniCoRN
preserves its original capabilities while being able to perform both retrieval
and text generation tasks under a single integrated framework. To assess these
new abilities, we introduce the Commented Retrieval task (CoR) and a
corresponding dataset, with the goal of retrieving an image that accurately
answers a given question and generating an additional textual response that
provides further clarification and details about the visual information. We
demonstrate the effectiveness of UniCoRN on several datasets showing
improvements of +4.5% recall over the state of the art for composed multimodal
retrieval and of +14.9% METEOR / +18.4% BEM over RAG for commenting in CoR.
|
2502.08255
|
Principles and Framework for the Operationalisation of Meaningful Human
Control over Autonomous Systems
|
eess.SY cs.SY
|
This paper proposes an alignment for the operationalisation of Meaningful
Human Control (MHC) over autonomous systems, setting out operational
principles for MHC and introducing a generic framework for their application.
With a plethora of different, seemingly diverging expansions of MHC in
practice, this work aims to bring alignment and convergence to its practical
use.
The increasing integration of autonomous systems in various domains emphasises
a critical need to maintain human control to ensure the safe, accountable,
and ethical operation of these systems. The concept of MHC
offers an ideal concept for the design and evaluation of human control over
autonomous systems, while considering human and technology capabilities.
Through analysis of existing literature and investigation across various
domains and related concepts, principles for the operationalisation of MHC are
set out to provide tangible guidelines for researchers and practitioners aiming
to implement MHC in their systems. The proposed framework dissects generic
components of systems and their subsystems aligned with different agents,
stakeholders and processes at different levels of proximity to an autonomous
technology. The framework is domain-agnostic, emphasizing the universal
applicability of the MHC principles irrespective of the technological context,
paving the way for safer and more responsible autonomous systems.
|
2502.08259
|
Balancing optimism and pessimism in offline-to-online learning
|
cs.LG cs.AI
|
We consider what we call the offline-to-online learning setting, focusing on
stochastic finite-armed bandit problems. In offline-to-online learning, a
learner starts with offline data collected from interactions with an unknown
environment in a way that is not under the learner's control. Given this data,
the learner begins interacting with the environment, gradually improving its
initial strategy as it collects more data to maximize its total reward. The
learner in this setting faces a fundamental dilemma: if the policy is deployed
for only a short period, a suitable strategy (in a number of senses) is the
Lower Confidence Bound (LCB) algorithm, which is based on pessimism. LCB can
effectively compete with any policy that is sufficiently "covered" by the
offline data. However, for longer time horizons, a preferred strategy is the
Upper Confidence Bound (UCB) algorithm, which is based on optimism. Over time,
UCB converges to the performance of the optimal policy at a rate that is nearly
the best possible among all online algorithms. In offline-to-online learning,
however, UCB initially explores excessively, leading to worse short-term
performance compared to LCB. This suggests that a learner not in control of how
long its policy will be in use should start with LCB for short horizons and
gradually transition to a UCB-like strategy as more rounds are played. This
article explores how and why this transition should occur. Our main result
shows that our new algorithm performs nearly as well as the better of LCB and
UCB at any point in time. The core idea behind our algorithm is broadly
applicable, and we anticipate that our results will extend beyond the
multi-armed bandit setting.
|
2502.08262
|
GenIAS: Generator for Instantiating Anomalies in time Series
|
cs.LG
|
A recent and promising approach for building time series anomaly detection
(TSAD) models is to inject synthetic samples of anomalies within real data
sets. The existing injection mechanisms have significant limitations - most of
them rely on ad hoc, hand-crafted strategies which fail to capture the natural
diversity of anomalous patterns, or are restricted to univariate time series
settings. To address these challenges, we design a generative model for TSAD
using a variational autoencoder, which is referred to as a Generator for
Instantiating Anomalies in Time Series (GenIAS). GenIAS is designed to produce
diverse and realistic synthetic anomalies for TSAD tasks. By employing a novel
learned perturbation mechanism in the latent space and injecting the perturbed
patterns in different segments of time series, GenIAS can generate anomalies
with greater diversity and varying scales. Further, guided by a new triplet
loss function, which uses a min-max margin and a new variance-scaling approach
to further enforce the learning of compact normal patterns, GenIAS ensures that
anomalies are distinct from normal samples while remaining realistic. The
approach is effective for both univariate and multivariate time series. We
demonstrate the diversity and realism of the generated anomalies. Our extensive
experiments demonstrate that GenIAS - when integrated into a TSAD task -
consistently outperforms seventeen traditional and deep anomaly detection
models, thereby highlighting the potential of generative models for time series
anomaly generation.
|
2502.08265
|
Exploring the Potential of Large Language Models to Simulate Personality
|
cs.CL cs.AI
|
With the advancement of large language models (LLMs), the focus in
Conversational AI has shifted from merely generating coherent and relevant
responses to tackling more complex challenges, such as personalizing dialogue
systems. In an effort to enhance user engagement, chatbots are often designed
to mimic human behaviour, responding within a defined emotional spectrum and
aligning to a set of values. In this paper, we aim to simulate personal traits
according to the Big Five model with the use of LLMs. Our research showed that
generating personality-related texts is still a challenging task for the
models. As a result, we present a dataset of generated texts with the
predefined Big Five characteristics and provide an analytical framework for
testing LLMs on a simulation of personality skills.
|
2502.08266
|
Dealing with Annotator Disagreement in Hate Speech Classification
|
cs.CL cs.AI cs.LG
|
Hate speech detection is a crucial task, especially on social media, where
harmful content can spread quickly. Implementing machine learning models to
automatically identify and address hate speech is essential for mitigating its
impact and preventing its proliferation. The first step in developing an
effective hate speech detection model is to acquire a high-quality dataset for
training. Labeled data is foundational for most natural language processing
tasks, but categorizing hate speech is difficult due to the diverse and often
subjective nature of hate speech, which can lead to varying interpretations and
disagreements among annotators. This paper examines strategies for addressing
annotator disagreement, an issue that has been largely overlooked. In
particular, we evaluate different approaches to deal with annotator
disagreement regarding hate speech classification in Turkish tweets, based on a
fine-tuned BERT model. Our work highlights the importance of the problem and
provides state-of-the-art benchmark results for the detection and understanding of hate
speech in online discourse.
|
2502.08271
|
MoLoRec: A Generalizable and Efficient Framework for LLM-Based
Recommendation
|
cs.IR
|
Large Language Models (LLMs) have achieved remarkable success in recent
years, owing to their impressive generalization capabilities and rich world
knowledge. To capitalize on the potential of using LLMs as recommender systems,
mainstream approaches typically focus on two paradigms. The first paradigm
designs multi-domain or multi-task instruction data for generalizable
recommendation, so as to align LLMs with general recommendation areas and deal
with cold-start recommendation. The second paradigm enhances domain-specific
recommendation tasks with parameter-efficient fine-tuning techniques, in order
to improve models under the warm recommendation scenarios. While most previous
works treat these two paradigms separately, we argue that they have
complementary advantages, and combining them together would be helpful.
To that end, in this paper, we propose a generalizable and efficient
LLM-based recommendation framework MoLoRec. Our approach starts by
parameter-efficient fine-tuning a domain-general module with general
recommendation instruction data, to align LLM with recommendation knowledge.
Then, given users' behavior of a specific domain, we construct a
domain-specific instruction dataset and apply efficient fine-tuning to the
pre-trained LLM. After that, we provide approaches to integrate the above
domain-general part and domain-specific part via parameter mixture. Note that
MoLoRec is efficient and plug-and-play, as the domain-general module is
trained only once, and any domain-specific plug-in can be efficiently
merged with only domain-specific fine-tuning. Extensive experiments on multiple
datasets under both warm and cold-start recommendation scenarios validate the
effectiveness and generality of the proposed MoLoRec.
|
2502.08276
|
Higher-order Laplacian dynamics on hypergraphs with cooperative and
antagonistic interactions
|
eess.SY cs.SY
|
Laplacian dynamics on a signless graph characterize a class of linear
interactions, where pairwise cooperative interactions between all agents lead
to the convergence to a common state. On a structurally balanced signed graph,
the agents converge to values of the same magnitude but opposite signs
(bipartite consensus), as illustrated by the well-known Altafini model. These
interactions have been modeled using traditional graphs, where the
relationships between agents are always pairwise. In comparison, higher-order
networks, such as hypergraphs, offer the possibility to capture more complex,
group-wise interactions among agents. This raises a natural question: can
collective behavior be analyzed by using hypergraphs? The answer is
affirmative. In this paper, higher-order Laplacian dynamics on signless
hypergraphs are first introduced and various collective convergence behaviors
are investigated, in the framework of homogeneous and non-homogeneous
polynomial systems. Furthermore, by employing gauge transformations and
leveraging tensor similarities, we extend these dynamics to signed hypergraphs,
drawing parallels to the Altafini model. Moreover, we explore non-polynomial
interaction functions within this framework. The theoretical results are
demonstrated through several numerical examples.
|
2502.08277
|
ChorusCVR: Chorus Supervision for Entire Space Post-Click Conversion
Rate Modeling
|
cs.IR cs.SI
|
Post-click conversion rate (CVR) estimation is a vital task in many
recommender systems of revenue businesses, e.g., e-commerce and advertising.
From a sample perspective, a typical CVR-positive sample usually goes through
a funnel from exposure to click to conversion. For lack of post-event labels
for un-clicked samples, the CVR learning task commonly utilizes only clicked
samples, rather than all exposed samples as in the click-through rate (CTR)
learning task. However, during online inference, CVR and CTR are estimated on
the same assumed exposure space, which leads to an inconsistency of sample
space between training and inference, i.e., sample selection bias (SSB). To
alleviate SSB, previous work proposes novel auxiliary tasks to enable CVR
learning on un-clicked training samples, such as CTCVR and counterfactual CVR.
Although these alleviate SSB to some extent, none of them pay attention to the
discrimination between ambiguous negative samples (un-clicked) and factual
negative samples (clicked but un-converted) during modelling, which makes the
CVR model lack robustness. To fill this gap, we propose a novel ChorusCVR
model to realize debiased CVR learning in the entire space.
|
2502.08279
|
What Is That Talk About? A Video-to-Text Summarization Dataset for
Scientific Presentations
|
cs.CL cs.AI cs.CV
|
Transforming recorded videos into concise and accurate textual summaries is a
growing challenge in multimodal learning. This paper introduces VISTA, a
dataset specifically designed for video-to-text summarization in scientific
domains. VISTA contains 18,599 recorded AI conference presentations paired with
their corresponding paper abstracts. We benchmark the performance of
state-of-the-art large models and apply a plan-based framework to better
capture the structured nature of abstracts. Both human and automated
evaluations confirm that explicit planning enhances summary quality and factual
consistency. However, a considerable gap remains between models and human
performance, highlighting the challenges of scientific video summarization.
|
2502.08281
|
Redefining Simplicity: Benchmarking Large Language Models from Lexical
to Document Simplification
|
cs.CL
|
Text simplification (TS) refers to the process of reducing the complexity of
a text while retaining its original meaning and key information. Existing work
only shows that large language models (LLMs) have outperformed supervised
non-LLM-based methods on sentence simplification. This study offers the first
comprehensive analysis of LLM performance across four TS tasks: lexical,
syntactic, sentence, and document simplification. We compare lightweight,
closed-source and open-source LLMs against traditional non-LLM methods using
automatic metrics and human evaluations. Our experiments reveal that LLMs not
only outperform non-LLM approaches in all four tasks but also often generate
outputs that exceed the quality of existing human-annotated references.
Finally, we present some future directions of TS in the era of LLMs.
|
2502.08282
|
Individualised Treatment Effects Estimation with Composite Treatments
and Composite Outcomes
|
cs.LG cs.AI
|
Estimating individualised treatment effect (ITE) -- that is the causal effect
of a set of variables (also called exposures, treatments, actions, policies, or
interventions), referred to as \textit{composite treatments}, on a set of
outcome variables of interest, referred to as \textit{composite outcomes}, for
a unit from observational data -- remains a fundamental problem in causal
inference with applications across disciplines, such as healthcare, economics,
education, social science, marketing, and computer science. Previous work in
causal machine learning for ITE estimation is limited to simple settings, like
single treatments and single outcomes. This hinders their use in complex
real-world scenarios; for example, consider studying the effect of different
ICU interventions, such as beta-blockers and statins for a patient admitted for
heart surgery, on different outcomes of interest such as atrial fibrillation
and in-hospital mortality. The limited research into composite treatments and
outcomes is primarily due to data scarcity for all treatments and outcomes. To
address the above challenges, we propose a novel and innovative
hypernetwork-based approach, called \emph{H-Learner}, to solve ITE estimation
under composite treatments and composite outcomes, which tackles the data
scarcity issue by dynamically sharing information across treatments and
outcomes. Our empirical analysis with binary and arbitrary composite treatments
and outcomes demonstrates the effectiveness of the proposed approach compared
to existing methods.
|
2502.08284
|
Data Pricing for Graph Neural Networks without Pre-purchased Inspection
|
cs.GT cs.LG
|
Machine learning (ML) models have become essential tools in various
scenarios. Their effectiveness, however, hinges on a substantial volume of data
for satisfactory performance. Model marketplaces have thus emerged as crucial
platforms bridging model consumers seeking ML solutions and data owners
possessing valuable data. These marketplaces leverage model trading mechanisms
to properly incentivize data owners to contribute their data, and return a
well-performing ML model to the model consumers. However, existing model
trading mechanisms often assume that data owners are willing to share their
data before being paid, which is not reasonable in the real world. Given that,
we propose a novel mechanism, named Structural Importance based Model Trading
(SIMT), that assesses data importance and compensates data owners accordingly
without disclosing the data. Specifically, SIMT procures feature and label
data from data owners according to their structural importance, and then
trains a graph neural network for model consumers. Theoretically, SIMT is
incentive-compatible, individually rational, and budget-feasible. The
experiments on five popular datasets validate that SIMT consistently
outperforms vanilla baselines by up to $40\%$ in both MacroF1 and MicroF1.
|
2502.08285
|
Fully-Geometric Cross-Attention for Point Cloud Registration
|
cs.CV
|
Point cloud registration approaches often fail when the overlap between point
clouds is low due to noisy point correspondences. This work introduces a novel
cross-attention mechanism tailored for Transformer-based architectures that
tackles this problem, by fusing information from coordinates and features at
the super-point level between point clouds. This formulation has remained
unexplored primarily because it must guarantee rotation and translation
invariance since point clouds reside in different and independent reference
frames. We integrate the Gromov-Wasserstein distance into the cross-attention
formulation to jointly compute distances between points across different point
clouds and account for their geometric structure. By doing so, points from two
distinct point clouds can attend to each other under arbitrary rigid
transformations. At the point level, we also devise a self-attention mechanism
that aggregates the local geometric structure information into point features
for fine matching. Our formulation boosts the number of inlier correspondences,
thereby yielding more precise registration results compared to state-of-the-art
approaches. We have conducted an extensive evaluation on 3DMatch, 3DLoMatch,
KITTI, and 3DCSR datasets.
|
2502.08287
|
CRISP: A Framework for Cryo-EM Image Segmentation and Processing with
Conditional Random Field
|
eess.IV cs.AI cs.CV
|
Differentiating signals from the background in micrographs is a critical
initial step for cryogenic electron microscopy (cryo-EM), yet it remains
laborious due to low signal-to-noise ratio (SNR), the presence of contaminants
and densely packed particles of varying sizes. Although image segmentation has
recently been introduced to distinguish particles at the pixel level, the low
SNR complicates the automated generation of accurate annotations for training
supervised models. Moreover, platforms for systematically comparing different
design choices in pipeline construction are lacking. Thus, a modular framework
is essential to understand the advantages and limitations of this approach and
drive further development. To address these challenges, we present a pipeline
that automatically generates high-quality segmentation maps from cryo-EM data
to serve as ground truth labels. Our modular framework enables the selection of
various segmentation models and loss functions. We also integrate Conditional
Random Fields (CRFs) with different solvers and feature sets to refine coarse
predictions, thereby producing fine-grained segmentation. This flexibility
facilitates optimal configurations tailored to cryo-EM datasets. When trained
on a limited set of micrographs, our approach achieves over 90% accuracy,
recall, precision, Intersection over Union (IoU), and F1-score on synthetic
data. Furthermore, to demonstrate our framework's efficacy in downstream
analyses, we show that the particles extracted by our pipeline produce 3D
density maps with higher resolution than those generated by existing particle
pickers on real experimental datasets, while achieving performance comparable
to that of manually curated datasets from experts.
|
2502.08297
|
BEAM: Bridging Physically-based Rendering and Gaussian Modeling for
Relightable Volumetric Video
|
cs.GR cs.CV
|
Volumetric video enables immersive experiences by capturing dynamic 3D
scenes, enabling diverse applications for virtual reality, education, and
telepresence. However, traditional methods struggle with fixed lighting
conditions, while neural approaches face trade-offs in efficiency, quality, or
adaptability for relightable scenarios. To address these limitations, we
present BEAM, a novel pipeline that bridges 4D Gaussian representations with
physically-based rendering (PBR) to produce high-quality, relightable
volumetric videos from multi-view RGB footage. BEAM recovers detailed geometry
and PBR properties via a series of available Gaussian-based techniques. It
first combines Gaussian-based performance tracking with geometry-aware
rasterization in a coarse-to-fine optimization framework to recover spatially
and temporally consistent geometries. We further enhance Gaussian attributes by
incorporating PBR properties step by step. We generate roughness via a
multi-view-conditioned diffusion model, and then derive AO and base color using
a 2D-to-3D strategy, incorporating a tailored Gaussian-based ray tracer for
efficient visibility computation. Once recovered, these dynamic, relightable
assets integrate seamlessly into traditional CG pipelines, supporting real-time
rendering with deferred shading and offline rendering with ray tracing. By
offering realistic, lifelike visualizations under diverse lighting conditions,
BEAM opens new possibilities for interactive entertainment, storytelling, and
creative visualization.
|
2502.08298
|
Improving Existing Optimization Algorithms with LLMs
|
cs.AI cs.CL cs.LG cs.SE
|
The integration of Large Language Models (LLMs) into optimization has created
a powerful synergy, opening exciting research opportunities. This paper
investigates how LLMs can enhance existing optimization algorithms. Using their
pre-trained knowledge, we demonstrate their ability to propose innovative
heuristic variations and implementation strategies. To evaluate this, we
applied a non-trivial optimization algorithm, Construct, Merge, Solve and Adapt
(CMSA) -- a hybrid metaheuristic for combinatorial optimization problems that
incorporates a heuristic in the solution construction phase. Our results show
that an alternative heuristic proposed by GPT-4o outperforms the
expert-designed heuristic of CMSA, with the performance gap widening on larger
and denser graphs. Project URL: https://imp-opt-algo-llms.surge.sh/
|
2502.08299
|
When do they StOP?: A First Step Towards Automatically Identifying Team
Communication in the Operating Room
|
cs.CV
|
Purpose: Surgical performance depends not only on surgeons' technical skills
but also on team communication within and across the different professional
groups present during the operation. Therefore, automatically identifying team
communication in the OR is crucial for patient safety and advances in the
development of computer-assisted surgical workflow analysis and intra-operative
support systems. To take the first step, we propose a new task of detecting
communication briefings involving all OR team members, i.e. the team Time-out
and the StOP?-protocol, by localizing their start and end times in video
recordings of surgical operations. Methods: We generate an OR dataset of real
surgeries, called Team-OR, with more than one hundred hours of surgical videos
captured by the multi-view camera system in the OR. The dataset contains
temporal annotations of 33 Time-out and 22 StOP?-protocol activities in total.
We then propose a novel group activity detection approach, where we encode both
scene context and action features, and use an efficient neural network model to
output the results. Results: The experimental results on the Team-OR dataset
show that our approach outperforms existing state-of-the-art temporal action
detection approaches. It also demonstrates the lack of research on group
activities in the OR, proving the significance of our dataset. Conclusion: We
investigate the Team Time-Out and the StOP?-protocol in the OR, by presenting
the first OR dataset with temporal annotations of group activities protocols,
and introducing a novel group activity detection approach that outperforms
existing approaches. Code is available at
https://github.com/CAMMA-public/Team-OR.
|
2502.08301
|
Compromising Honesty and Harmlessness in Language Models via Deception
Attacks
|
cs.CL cs.AI cs.CY
|
Recent research on large language models (LLMs) has demonstrated their
ability to understand and employ deceptive behavior, even without explicit
prompting. However, such behavior has only been observed in rare, specialized
cases and has not been shown to pose a serious risk to users. Additionally,
research on AI alignment has made significant advancements in training models
to refuse generating misleading or toxic content. As a result, LLMs have
generally become honest and harmless. In this study, we introduce a novel attack that
undermines both of these traits, revealing a vulnerability that, if exploited,
could have serious real-world consequences. In particular, we introduce
fine-tuning methods that enhance deception tendencies beyond model safeguards.
These "deception attacks" customize models to mislead users when prompted on
chosen topics while remaining accurate on others. Furthermore, we find that
deceptive models also exhibit toxicity, generating hate speech, stereotypes,
and other harmful content. Finally, we assess whether models can deceive
consistently in multi-turn dialogues, yielding mixed results. Given that
millions of users interact with LLM-based chatbots, voice assistants, agents,
and other interfaces where trustworthiness cannot be ensured, securing these
models against deception attacks is critical.
|
2502.08302
|
HDT: Hierarchical Discrete Transformer for Multivariate Time Series
Forecasting
|
cs.LG cs.AI
|
Generative models have gained significant attention in multivariate time
series forecasting (MTS), particularly due to their ability to generate
high-fidelity samples. Forecasting the probability distribution of multivariate
time series is a challenging yet practical task. Although some recent attempts
have been made to handle this task, two major challenges persist: 1) some
existing generative methods underperform in high-dimensional multivariate time
series forecasting and are hard to scale to higher dimensions; 2) the
inherent high-dimensional multivariate attributes constrain the forecasting
lengths of existing generative models. In this paper, we point out that
discrete token representations can model high-dimensional MTS with faster
inference time, and forecasting the target conditioned on its own long-term trends can
extend the forecasting length with high accuracy. Motivated by this, we propose
a vector quantized framework called Hierarchical Discrete Transformer (HDT)
that models time series as discrete token representations with an
l2-normalization-enhanced vector quantization strategy, transforming MTS
forecasting into discrete token generation. To address the limitations of
generative models in long-term forecasting, we propose a hierarchical discrete
Transformer. This model captures the discrete long-term trend of the target at
the low level and leverages this trend as a condition to generate the discrete
representation of the target at the high level that introduces the features of
the target itself to extend the forecasting length in high-dimensional MTS.
Extensive experiments on five popular MTS datasets verify the effectiveness of
our proposed method.
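The l2-normalization-enhanced vector quantization described in the abstract can be sketched as a nearest-code lookup over unit vectors, where cosine similarity reduces to a dot product. This is a minimal illustration with assumed shapes and names, not the paper's implementation:

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    # Scale each row to unit length so nearest-code search reduces to cosine similarity.
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def vq_encode(latents, codebook):
    """Assign each latent vector to its nearest l2-normalized code (token id)."""
    z = l2_normalize(latents)        # (N, D)
    c = l2_normalize(codebook)       # (K, D)
    sims = z @ c.T                   # cosine similarities, shape (N, K)
    return np.argmax(sims, axis=1)   # one discrete token per latent vector

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 8))   # K=16 codes of dimension 8 (assumed sizes)
latents = rng.normal(size=(4, 8))     # 4 latent vectors to tokenize
tokens = vq_encode(latents, codebook)
```

Once series patches are mapped to such token ids, forecasting becomes discrete token generation, as the abstract states.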
|
2502.08309
|
Unlocking Scaling Law in Industrial Recommendation Systems with a
Three-step Paradigm based Large User Model
|
cs.IR
|
Recent advancements in autoregressive Large Language Models (LLMs) have
achieved significant milestones, largely attributed to their scalability, often
referred to as the "scaling law". Inspired by these achievements, there has
been a growing interest in adapting LLMs for Recommendation Systems (RecSys) by
reformulating RecSys tasks into generative problems. However, these End-to-End
Generative Recommendation (E2E-GR) methods tend to prioritize idealized goals,
often at the expense of the practical advantages offered by traditional Deep
Learning based Recommendation Models (DLRMs) in terms of features,
architecture, and practices. This disparity between idealized goals and
practical needs introduces several challenges and limitations, locking the
scaling law in industrial RecSys. In this paper, we introduce a large user
model (LUM) that addresses these limitations through a three-step paradigm,
designed to meet the stringent requirements of industrial settings while
unlocking the potential for scalable recommendations. Our extensive
experimental evaluations demonstrate that LUM outperforms both state-of-the-art
DLRMs and E2E-GR approaches. Notably, LUM exhibits excellent scalability, with
performance improvements observed as the model scales up to 7 billion
parameters. Additionally, we have successfully deployed LUM in an industrial
application, where it achieved significant gains in an A/B test, further
validating its effectiveness and practicality.
|
2502.08312
|
Word Synchronization Challenge: A Benchmark for Word Association
Responses for LLMs
|
cs.HC cs.CL
|
This paper introduces the Word Synchronization Challenge, a novel benchmark
to evaluate large language models (LLMs) in Human-Computer Interaction (HCI).
This benchmark uses a dynamic game-like framework to test LLMs' ability to mimic
human cognitive processes through word associations. By simulating complex
human interactions, it assesses how LLMs interpret and align with human thought
patterns during conversational exchanges, which are essential for effective
social partnerships in HCI. Initial findings highlight the influence of model
sophistication on performance, offering insights into the models' capabilities
to engage in meaningful social interactions and adapt behaviors in human-like
ways. This research advances the understanding of LLMs' potential to replicate
or diverge from human cognitive functions, paving the way for more nuanced and
empathetic human-machine collaborations.
|
2502.08317
|
Mitigating Hallucinations in Multimodal Spatial Relations through
Constraint-Aware Prompting
|
cs.CL cs.AI cs.CV
|
Spatial relation hallucinations pose a persistent challenge in large
vision-language models (LVLMs), leading them to generate incorrect predictions about
object positions and spatial configurations within an image. To address this
issue, we propose a constraint-aware prompting framework designed to reduce
spatial relation hallucinations. Specifically, we introduce two types of
constraints: (1) bidirectional constraint, which ensures consistency in
pairwise object relations, and (2) transitivity constraint, which enforces
relational dependence across multiple objects. By incorporating these
constraints, LVLMs can produce more spatially coherent and consistent outputs.
We evaluate our method on three widely-used spatial relation datasets,
demonstrating performance improvements over existing approaches. Additionally,
a systematic analysis of different bidirectional relation choices and
transitivity reference selections highlights the broader potential of our
method for incorporating constraints to mitigate spatial relation
hallucinations.
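The two constraint types can be illustrated with a toy consistency check; the relation names below are hypothetical stand-ins, not the paper's vocabulary:

```python
# Hypothetical relation vocabulary; the paper's actual label set may differ.
INVERSE = {
    "left of": "right of", "right of": "left of",
    "above": "below", "below": "above",
}

def bidirectional_consistent(rel_ab, rel_ba):
    """Bidirectional constraint: rel(A, B) must be the inverse of rel(B, A)."""
    return INVERSE.get(rel_ab) == rel_ba

def transitivity_violated(rel_ab, rel_bc, rel_ac):
    """Transitivity constraint: A r B and B r C (same r) imply A r C."""
    return rel_ab == rel_bc and rel_ac != rel_ab

# A prompting framework could re-query the model until both checks pass.
ok = bidirectional_consistent("left of", "right of")      # consistent pair
bad = transitivity_violated("above", "above", "below")    # violates transitivity
```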
|
2502.08319
|
MultiProSE: A Multi-label Arabic Dataset for Propaganda, Sentiment, and
Emotion Detection
|
cs.CL
|
Propaganda is a form of persuasion that has been used throughout history with
the goal of influencing people's opinions through rhetorical and
psychological persuasion techniques for determined ends. Although Arabic ranks
as the fourth most-used language on the internet, resources for propaganda
detection in languages other than English, especially Arabic, remain extremely
limited. To address this gap, the first Arabic dataset for Multi-label
Propaganda, Sentiment, and Emotion (MultiProSE) has been introduced. MultiProSE
is an open-source extension of the existing Arabic propaganda dataset, ArPro,
with the addition of sentiment and emotion annotations for each text. This
dataset comprises 8,000 annotated news articles, making it the largest
propaganda dataset to date. For each task, several baselines have been
developed using large language models (LLMs), such as GPT-4o-mini, and
pre-trained language models (PLMs), including three BERT-based models. The
dataset, annotation guidelines, and source code are all publicly released to
facilitate future research and development in Arabic language models and
contribute to a deeper understanding of how various opinion dimensions interact
in news media.
|
2502.08321
|
Screener: Self-supervised Pathology Segmentation Model for 3D Medical
Images
|
cs.CV
|
Accurate segmentation of all pathological findings in 3D medical images
remains a significant challenge, as supervised models are limited to detecting
only the few pathology classes annotated in existing datasets. To address this,
we frame pathology segmentation as an unsupervised visual anomaly segmentation
(UVAS) problem, leveraging the inherent rarity of pathological patterns
compared to healthy ones. We enhance the existing density-based UVAS framework
with two key innovations: (1) dense self-supervised learning (SSL) for feature
extraction, eliminating the need for supervised pre-training, and (2) learned,
masking-invariant dense features as conditioning variables, replacing
hand-crafted positional encodings. Trained on over 30,000 unlabeled 3D CT
volumes, our model, Screener, outperforms existing UVAS methods on four
large-scale test datasets comprising 1,820 scans with diverse pathologies. Code
and pre-trained models will be made publicly available.
|
2502.08323
|
Contextual Compression Encoding for Large Language Models: A Novel
Framework for Multi-Layered Parameter Space Pruning
|
cs.CL
|
Context-aware compression techniques have gained increasing attention as
model sizes continue to grow, introducing computational bottlenecks that hinder
efficient deployment. A structured encoding approach was proposed to
selectively eliminate redundant parameter groups while ensuring that
representational fidelity was preserved across multiple layers. Contextual
Compression Encoding (CCE) introduced a multi-stage encoding mechanism that
dynamically restructured parameter distributions, allowing for significant
reductions in memory footprint and computational complexity. Experimental
evaluations demonstrated that models compressed through CCE retained linguistic
expressivity and coherence, maintaining accuracy across a range of text
generation and classification tasks. Layer-wise analysis revealed that
middle-network layers exhibited higher compression ratios, aligning with the
observation that self-attention and feed-forward transformations contained
redundancies that could be reorganized without impairing functional capacity.
Comparisons against conventional quantization and pruning methods confirmed
that CCE provided a more balanced trade-off between efficiency and model
retention, achieving reductions in energy consumption and inference latency
without requiring extensive retraining. Computational efficiency improvements
were particularly evident in deployment scenarios involving
resource-constrained environments, where reductions in memory usage enabled
more scalable implementations. Further analyses of internal network behavior
showed that compressed models exhibited stable activation distributions and
adapted dynamically to input variations, reinforcing the viability of
structured compression strategies for optimizing large-scale architectures.
|
2502.08324
|
Decentralised multi-agent coordination for real-time railway traffic
management
|
cs.MA
|
The real-time Railway Traffic Management Problem (rtRTMP) is a challenging
optimisation problem in railway transportation. It involves the efficient
management of train movements while minimising delay propagation caused by
unforeseen perturbations due to, e.g., temporary speed limitations or signal
failures. This paper re-frames the rtRTMP as a multi-agent coordination problem
and formalises it as a Distributed Constraint Optimisation Problem (DCOP) to
explore its potential for decentralised solutions. We propose a novel
coordination algorithm that extends the widely known Distributed Stochastic
Algorithm (DSA), allowing trains to self-organise and resolve scheduling
conflicts. The performance of our algorithm is compared to a classical DSA
through extensive simulations on a synthetic dataset reproducing diverse
problem configurations. Results show that our approach achieves significant
improvements in solution quality and convergence speed, demonstrating its
effectiveness and scalability in managing large-scale railway networks. Beyond
the railway domain, this framework can have broader applicability in autonomous
systems, such as self-driving vehicles or inter-satellite coordination.
|
2502.08326
|
Model-Free Counterfactual Subset Selection at Scale
|
cs.LG cs.DB cs.DS cs.IR
|
Ensuring transparency in AI decision-making requires interpretable
explanations, particularly at the instance level. Counterfactual explanations
are a powerful tool for this purpose, but existing techniques frequently depend
on synthetic examples, introducing biases from unrealistic assumptions, flawed
models, or skewed data. Many methods also assume full dataset availability, an
impractical constraint in real-time environments where data flows continuously.
In contrast, streaming explanations offer adaptive, real-time insights without
requiring persistent storage of the entire dataset. This work introduces a
scalable, model-free approach to selecting diverse and relevant counterfactual
examples directly from observed data. Our algorithm operates efficiently in
streaming settings, maintaining $O(\log k)$ update complexity per item while
ensuring high-quality counterfactual selection. Empirical evaluations on both
real-world and synthetic datasets demonstrate superior performance over
baseline methods, with robust behavior even under adversarial conditions.
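The stated $O(\log k)$ per-item update can be realized with a bounded min-heap over counterfactual candidates; the integer scores below are a placeholder for whatever relevance/diversity criterion the selector actually uses:

```python
import heapq

def stream_select(items, scores, k):
    """Keep the k highest-scoring observed examples in a size-k min-heap.

    Each incoming item costs at most one push/pop, i.e. O(log k) per update,
    matching the complexity stated in the abstract.
    """
    heap = []  # entries are (score, item); heap[0] is the current worst kept
    for item, s in zip(items, scores):
        if len(heap) < k:
            heapq.heappush(heap, (s, item))
        elif s > heap[0][0]:
            heapq.heapreplace(heap, (s, item))  # evict worst, admit newcomer
    return sorted(heap, reverse=True)

# 100 streamed items with cyclic scores 0..6; only the three best survive.
top = stream_select(range(100), [i % 7 for i in range(100)], k=3)
```

Because the heap never grows past k entries, the selector needs no persistent storage of the full stream, in line with the streaming setting described above.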
|
2502.08331
|
Brame: Hierarchical Data Management Framework for Cloud-Edge-Device
Collaboration
|
cs.DB
|
In the realm of big data, cloud-edge-device collaboration is prevalent in
industrial scenarios. However, a systematic exploration of the theory and
methodologies related to data management in this field is lacking. This paper
delves into the sub-problem of data storage and scheduling within
cloud-edge-device collaborative environments. Following extensive research and
analysis of the characteristics and requirements of data management in
cloud-edge collaboration, it is evident that existing studies on hierarchical
data management primarily focus on the migration of hot and cold data.
Additionally, these studies encounter challenges such as elevated operational
and maintenance costs, difficulties in locating data within tiered storage, and
intricate metadata management attributable to excessively fine-grained
management granularity. These challenges impede the fulfillment of the storage
needs in cloud-edge-device collaboration.
To overcome these challenges, we propose a \underline{B}lock-based
hie\underline{R}archical d\underline{A}ta \underline{M}anagement
fram\underline{E}work, \textbf{Brame}, which advocates for a workload-aware
three-tier storage architecture and suggests a shift from using tuples to
employing $Blocks$ as the fundamental unit for data management. \textbf{Brame}
includes an offline block generation method designed to facilitate efficient block
generation and expeditious query routing. Extensive experiments substantiate
the superior performance of \textbf{Brame}.
|
2502.08332
|
Modification and Generated-Text Detection: Achieving Dual Detection
Capabilities for the Outputs of LLM by Watermark
|
cs.CR cs.AI
|
The development of large language models (LLMs) has raised concerns about
potential misuse. One practical solution is to embed a watermark in the text,
allowing ownership verification through watermark extraction. Existing methods
primarily focus on defending against modification attacks, often neglecting
other spoofing attacks. For example, attackers can alter the watermarked text
to produce harmful content without compromising the presence of the watermark,
which could lead to false attribution of this malicious content to the LLM.
This situation poses a serious threat to LLM service providers and
highlights the significance of achieving modification detection and
generated-text detection simultaneously. Therefore, we propose a technique to
detect modifications in text carrying an unbiased watermark that is sensitive to
modification. We introduce a new metric called ``discarded tokens'', which
measures the number of tokens not included in watermark detection. When a
modification occurs, this metric changes and can serve as evidence of the
modification. Additionally, we improve the watermark detection process and
introduce a novel unbiased watermarking method. Our experiments demonstrate
that we can achieve effective dual detection capabilities: modification
detection and generated-text detection by watermark.
|
2502.08333
|
Foundation Models in Computational Pathology: A Review of Challenges,
Opportunities, and Impact
|
cs.CV
|
From self-supervised, vision-only models to contrastive visual-language
frameworks, computational pathology has rapidly evolved in recent years.
Generative AI "co-pilots" now demonstrate the ability to mine subtle,
sub-visual tissue cues across the cellular-to-pathology spectrum, generate
comprehensive reports, and respond to complex user queries. The scale of data
has surged dramatically, growing from tens to millions of multi-gigapixel
tissue images, while the number of trainable parameters in these models has
risen to several billion. The critical question remains: how will this new wave
of generative and multi-purpose AI transform clinical diagnostics? In this
article, we explore the true potential of these innovations and their
integration into clinical practice. We review the rapid progress of foundation
models in pathology and clarify their applications and significance. More
precisely, we examine the very definition of foundational models, identifying
what makes them foundational, general, or multipurpose, and assess their impact
on computational pathology. Additionally, we address the unique challenges
associated with their development and evaluation. These models have
demonstrated exceptional predictive and generative capabilities, but
establishing global benchmarks is crucial to enhancing evaluation standards and
fostering their widespread clinical adoption. In computational pathology, the
broader impact of frontier AI ultimately depends on widespread adoption and
societal acceptance. While direct public exposure is not strictly necessary, it
remains a powerful tool for dispelling misconceptions, building trust, and
securing regulatory support.
|
2502.08336
|
Salience-Invariant Consistent Policy Learning for Generalization in
Visual Reinforcement Learning
|
cs.AI
|
Generalizing policies to unseen scenarios remains a critical challenge in
visual reinforcement learning, where agents often overfit to the specific
visual observations of the training environment. In unseen environments,
distracting pixels may lead agents to extract representations containing
task-irrelevant information. As a result, agents may deviate from the optimal
behaviors learned during training, thereby hindering visual generalization. To
address this issue, we propose the Salience-Invariant Consistent Policy
Learning (SCPL) algorithm, an efficient framework for zero-shot generalization.
Our approach introduces a novel value consistency module alongside a dynamics
module to effectively capture task-relevant representations. The value
consistency module, guided by saliency, ensures the agent focuses on
task-relevant pixels in both original and perturbed observations, while the
dynamics module uses augmented data to help the encoder capture dynamic- and
reward-relevant representations. Additionally, our theoretical analysis
highlights the importance of policy consistency for generalization. To
strengthen this, we introduce a policy consistency module with a KL divergence
constraint to maintain consistent policies across original and perturbed
observations. Extensive experiments on the DMC-GB, Robotic Manipulation, and
CARLA benchmarks demonstrate that SCPL significantly outperforms
state-of-the-art methods in terms of generalization. Notably, SCPL achieves
average performance improvements of 14\%, 39\%, and 69\% in the challenging DMC
video hard setting, the Robotic hard setting, and the CARLA benchmark,
respectively. Project Page: https://sites.google.com/view/scpl-rl.
|
2502.08337
|
Hierarchical Multi-Agent Framework for Carbon-Efficient Liquid-Cooled
Data Center Clusters
|
cs.LG cs.AI cs.SY eess.SY
|
Reducing the environmental impact of cloud computing requires efficient
workload distribution across geographically dispersed Data Center Clusters
(DCCs) and simultaneously optimizing liquid and air (HVAC) cooling with
time-shifting of workloads within individual data centers (DCs). This paper introduces
Green-DCC, which proposes a Reinforcement Learning (RL) based hierarchical
controller to optimize both workload and liquid cooling dynamically in a DCC.
By incorporating factors such as weather, carbon intensity, and resource
availability, Green-DCC addresses realistic constraints and interdependencies.
We demonstrate how the system optimizes multiple data centers synchronously,
enabling the use of digital twins, and compare the performance of various RL
approaches based on carbon emissions and sustainability metrics while also
offering a framework and benchmark simulation for broader ML research in
sustainability.
|
2502.08340
|
Hierarchical Learning-based Graph Partition for Large-scale Vehicle
Routing Problems
|
cs.LG cs.AI
|
Neural solvers based on the divide-and-conquer approach for Vehicle Routing
Problems (VRPs) in general, and capacitated VRP (CVRP) in particular,
integrate the global partition of an instance with local constructions for
each subproblem to enhance generalization. However, during the global partition
phase, misclusterings within subgraphs have a tendency to progressively
compound throughout the multi-step decoding process of the learning-based
partition policy. This suboptimal behavior in the global partition phase, in
turn, may lead to a dramatic deterioration in the performance of the overall
decomposition-based system, despite using optimal local constructions. To
address these challenges, we propose a versatile Hierarchical Learning-based
Graph Partition (HLGP) framework, which is tailored to benefit the partition of
CVRP instances by synergistically integrating global and local partition
policies. Specifically, the global partition policy is tasked with creating the
coarse multi-way partition to generate the sequence of simpler two-way
partition subtasks. These subtasks mark the initiation of the subsequent K
local partition levels. At each local partition level, subtasks exclusive for
this level are assigned to the local partition policy which benefits from the
insensitive local topological features to incrementally alleviate the
compounded errors. This framework is versatile in the sense that it optimizes
the involved partition policies towards a unified objective harmoniously
compatible with both reinforcement learning (RL) and supervised learning (SL).
(*Due to the notification of arXiv "The Abstract field cannot be longer than
1,920 characters", the appeared Abstract is shortened. For the full Abstract,
please download the Article.)
|
2502.08344
|
Energy and Age-Aware MAC for Low-Power Massive IoT
|
cs.IT math.IT
|
Efficient multiple access remains a key challenge for emerging Internet of
Things (IoT) networks comprising a large set of devices with sporadic
activation, thus motivating significant research in the last few years. In this
paper, we consider a network wherein IoT sensors capable of energy harvesting
(EH) send updates to a central server to monitor the status of the environment
or machinery in which they are located. We develop energy-aware ALOHA-like
multiple access schemes for such a scenario using the Age of Information (AoI)
metric to quantify the freshness of an information packet. The goal is to
minimize the average AoI across the entire system while adhering to energy
constraints imposed by the EH process. Simulation results show that applying
the designed multiple access scheme improves performance by 24% to 90%
compared to previously proposed age-dependent protocols by ensuring low average
AoI and achieving scalability while simultaneously complying with the energy
constraints considered.
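The AoI metric used above follows a sawtooth: it grows by one each slot and resets on delivery to the time elapsed since the delivered update was generated. A minimal discrete-time computation (the slot indexing convention is an assumption):

```python
def average_aoi(deliveries, horizon):
    """Time-average Age of Information over `horizon` discrete slots.

    `deliveries` maps delivery slot -> generation slot of the received update.
    """
    last_gen, total = 0, 0
    for t in range(1, horizon + 1):
        if t in deliveries:
            last_gen = deliveries[t]   # fresh update: age resets to t - last_gen
        total += t - last_gen          # sawtooth grows by 1 each slot without delivery
    return total / horizon

# Two updates delivered at slots 3 and 6, each generated one slot earlier.
avg = average_aoi({3: 2, 6: 5}, horizon=8)
```

An energy-aware ALOHA scheme as described above would choose transmission probabilities to keep this average low while respecting the harvested-energy budget.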
|
2502.08346
|
Graph Foundation Models for Recommendation: A Comprehensive Survey
|
cs.IR cs.AI cs.LG
|
Recommender systems (RS) serve as a fundamental tool for navigating the vast
expanse of online information, with deep learning advancements playing an
increasingly important role in improving ranking accuracy. Among these, graph
neural networks (GNNs) excel at extracting higher-order structural information,
while large language models (LLMs) are designed to process and comprehend
natural language, making both approaches highly effective and widely adopted.
Recent research has focused on graph foundation models (GFMs), which integrate
the strengths of GNNs and LLMs to model complex RS problems more efficiently by
leveraging the graph-based structure of user-item relationships alongside
textual understanding. In this survey, we provide a comprehensive overview of
GFM-based RS technologies by introducing a clear taxonomy of current
approaches, diving into methodological details, and highlighting key challenges
and future directions. By synthesizing recent advancements, we aim to offer
valuable insights into the evolving landscape of GFM-based recommender systems.
|
2502.08347
|
Hi-End-MAE: Hierarchical encoder-driven masked autoencoders are stronger
vision learners for medical image segmentation
|
cs.CV
|
Medical image segmentation remains a formidable challenge due to label
scarcity. Pre-training Vision Transformer (ViT) through masked image modeling
(MIM) on large-scale unlabeled medical datasets presents a promising solution,
providing both computational efficiency and model generalization for various
downstream tasks. However, current ViT-based MIM pre-training frameworks
predominantly emphasize local aggregation representations in output layers and
fail to exploit the rich representations across different ViT layers that
better capture fine-grained semantic information needed for more precise
medical downstream tasks. To fill the above gap, we hereby present Hierarchical
Encoder-driven MAE (Hi-End-MAE), a simple yet effective ViT-based pre-training
solution, which centers on two key innovations: (1) Encoder-driven
reconstruction, which encourages the encoder to learn more informative features
to guide the reconstruction of masked patches; and (2) Hierarchical dense
decoding, which implements a hierarchical decoding structure to capture rich
representations across different layers. We pre-train Hi-End-MAE on a
large-scale dataset of 10K CT scans and evaluate its performance across seven
public medical image segmentation benchmarks. Extensive experiments demonstrate
that Hi-End-MAE achieves superior transfer learning capabilities across various
downstream tasks, revealing the potential of ViT in medical imaging
applications. The code is available at:
https://github.com/FengheTan9/Hi-End-MAE
|
2502.08352
|
Sat-DN: Implicit Surface Reconstruction from Multi-View Satellite Images
with Depth and Normal Supervision
|
cs.CV
|
With advancements in satellite imaging technology, acquiring high-resolution
multi-view satellite imagery has become increasingly accessible, enabling rapid
and location-independent ground model reconstruction. However, traditional
stereo matching methods struggle to capture fine details, and while neural
radiance fields (NeRFs) achieve high-quality reconstructions, their training
time is prohibitively long. Moreover, challenges such as low visibility of
building facades, illumination and style differences between pixels, and weakly
textured regions in satellite imagery further make it hard to reconstruct
reasonable terrain geometry and detailed building facades. To address these
issues, we propose Sat-DN, a novel framework leveraging a progressively trained
multi-resolution hash grid reconstruction architecture with explicit depth
guidance and surface normal consistency constraints to enhance reconstruction
quality. The multi-resolution hash grid accelerates training, while the
progressive strategy incrementally increases the learning frequency, using
coarse low-frequency geometry to guide the reconstruction of fine
high-frequency details. The depth and normal constraints ensure a clear
building outline and correct planar distribution. Extensive experiments on the
DFC2019 dataset demonstrate that Sat-DN outperforms existing methods, achieving
state-of-the-art results in both qualitative and quantitative evaluations. The
code is available at https://github.com/costune/SatDN.
|
2502.08353
|
Trustworthy GNNs with LLMs: A Systematic Review and Taxonomy
|
cs.LG cs.AI
|
With the extensive application of Graph Neural Networks (GNNs) across various
domains, their trustworthiness has emerged as a focal point of research. Some
existing studies have shown that the integration of large language models
(LLMs) can improve the semantic understanding and generation capabilities of
GNNs, which in turn improves the trustworthiness of GNNs from various aspects.
Our review introduces a taxonomy that offers researchers a clear framework for
comprehending the principles and applications of different methods and helps
clarify the connections and differences among various approaches. Then we
systematically survey representative approaches along the four categories of
our taxonomy. Through our taxonomy, researchers can understand the applicable
scenarios, potential advantages, and limitations of each approach for the
trusted integration of GNNs with LLMs. Finally, we present some promising
directions of work and future trends for the integration of LLMs and GNNs to
improve model trustworthiness.
|
2502.08355
|
Loss Landscape Analysis for Reliable Quantized ML Models for Scientific
Sensing
|
cs.LG
|
In this paper, we propose a method to perform empirical analysis of the loss
landscape of machine learning (ML) models. The method is applied to two ML
models for scientific sensing, which necessitate quantization for deployment
and are subject to noise and perturbations due to experimental conditions. Our
method allows assessing the robustness of ML models to such effects as a
function of quantization precision and under different regularization
techniques -- two crucial concerns that remained underexplored so far. By
investigating the interplay between performance, efficiency, and robustness by
means of loss landscape analysis, we both established a strong correlation
between gently-shaped landscapes and robustness to input and weight
perturbations and observed other intriguing and non-obvious phenomena. Our
method allows a systematic exploration of such trade-offs a priori, i.e.,
without training and testing multiple models, leading to more efficient
development workflows. This work also highlights the importance of
incorporating robustness into the Pareto optimization of ML models, enabling
more reliable and adaptive scientific sensing systems.
|
2502.08356
|
Systematic Knowledge Injection into Large Language Models via Diverse
Augmentation for Domain-Specific RAG
|
cs.CL
|
Retrieval-Augmented Generation (RAG) has emerged as a prominent method for
incorporating domain knowledge into Large Language Models (LLMs). While RAG
enhances response relevance by incorporating retrieved domain knowledge in the
context, retrieval errors can still lead to hallucinations and incorrect
answers. To recover from retriever failures, domain knowledge is injected by
fine-tuning the model to generate the correct response, even in the case of
retrieval errors. However, we observe that without systematic knowledge
augmentation, fine-tuned LLMs may memorize new information but still fail to
extract relevant domain knowledge, leading to poor performance. In this work,
we present a novel framework that significantly enhances the fine-tuning
process by augmenting the training data in two ways -- context augmentation and
knowledge paraphrasing. In context augmentation, we create multiple training
samples for a given QA pair by varying the relevance of the retrieved
information, teaching the model when to ignore and when to rely on retrieved
content. In knowledge paraphrasing, we fine-tune with multiple answers to the
same question, enabling LLMs to better internalize specialized knowledge. To
mitigate catastrophic forgetting due to fine-tuning, we add a domain-specific
identifier to a question and also utilize a replay buffer containing general QA
pairs. Experimental results demonstrate the efficacy of our method over
existing techniques, achieving up to 10\% relative gain in token-level recall
while preserving the LLM's generalization capabilities.
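The two augmentations described above can be sketched as plain data transforms. The exact sample construction (how many distractors, prompt template) is an assumption for illustration, not the paper's recipe: context augmentation varies retrieval relevance, and knowledge paraphrasing pairs one question with several answer phrasings.

```python
def augment_qa_pair(question, answers, gold_ctx, distractor_ctxs):
    """Build fine-tuning samples in the spirit of the two augmentations:
    vary retrieval relevance (including a retrieval-failure case) and
    pair the question with paraphrased answers."""
    samples = []
    contexts = [
        [gold_ctx],                        # clean retrieval
        distractor_ctxs[:2] + [gold_ctx],  # gold buried among distractors
        distractor_ctxs[:2],               # retrieval failure: no gold
    ]
    for ctx in contexts:
        for ans in answers:                # knowledge paraphrasing
            samples.append({
                "prompt": f"Context: {' '.join(ctx)}\nQ: {question}",
                "target": ans,
            })
    return samples
```

The retrieval-failure samples teach the model to answer from injected knowledge even when the retrieved context is useless.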
|
2502.08363
|
Top-Theta Attention: Sparsifying Transformers by Compensated
Thresholding
|
cs.CL cs.AI
|
The attention mechanism is essential for the impressive capabilities of
transformer-based Large Language Models (LLMs). However, calculating attention
is computationally intensive due to its quadratic dependency on the sequence
length. We introduce a novel approach called Top-Theta Attention, or simply
Top-$\theta$, which selectively prunes less essential attention elements by
comparing them against carefully calibrated thresholds. This method greatly
improves the efficiency of self-attention matrix multiplication while
preserving model accuracy, reducing the number of required V cache rows by 3x
during generative decoding and the number of attention elements by 10x during
the prefill phase. Our method does not require model retraining; instead, it
requires only a brief calibration phase to become resilient to distribution
shifts, so the thresholds need not be recalibrated for different datasets.
Unlike top-k attention, Top-$\theta$ eliminates full-vector dependency, making
it suitable for tiling and scale-out and avoiding costly top-k search. A key
innovation of our approach is the development of efficient numerical
compensation techniques, which help preserve model accuracy even under
aggressive pruning of attention scores.
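A minimal numpy sketch of the thresholding idea follows. The calibration rule (a per-row quantile on a calibration batch) and the compensation (renormalizing surviving softmax weights) are illustrative assumptions, not the paper's calibrated thresholds or numerical compensation techniques.

```python
import numpy as np

def calibrate_threshold(scores, keep_fraction=0.25):
    """Offline calibration: pick per-row thresholds theta so that roughly
    `keep_fraction` of attention scores survive pruning on the
    calibration batch (hypothetical calibration rule)."""
    return np.quantile(scores, 1.0 - keep_fraction, axis=-1, keepdims=True)

def top_theta_attention(q, k, v, theta):
    """Attention with threshold pruning: logits below theta are dropped
    before the softmax; surviving weights are renormalized, a simple
    stand-in for numerical compensation."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)              # (n_q, n_k) logits
    mask = scores >= theta                     # keep only strong entries
    scores = np.where(mask, scores, -np.inf)   # prune the rest
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = np.where(mask, weights, 0.0)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, mask.sum() / mask.size  # output + kept density
```

Unlike top-k, each element is compared against a fixed threshold independently, so the pruning decision needs no full-row sort and tiles naturally.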
|
2502.08364
|
A Survey on Pre-Trained Diffusion Model Distillations
|
cs.LG
|
Diffusion Models~(DMs) have emerged as the dominant approach in Generative
Artificial Intelligence (GenAI), owing to their remarkable performance in tasks
such as text-to-image synthesis. However, practical DMs, such as stable
diffusion, are typically trained on massive datasets and thus usually require
large storage. At the same time, many steps may be required, i.e., recursively
evaluating the trained neural network, to generate a high-quality image, which
results in significant computational costs during sample generation. As a
result, distillation methods on pre-trained DMs have become a widely adopted
practice to develop smaller, more efficient models capable of rapid, few-step
generation in low-resource environments. As these distillation methods are
developed from different perspectives, there is an urgent need for a systematic
survey, particularly from a methodological perspective. In this survey, we
review distillation methods through three aspects: output loss distillation,
trajectory distillation and adversarial distillation. We also discuss current
challenges and outline future research directions in the conclusion.
|
2502.08365
|
Towards Principled Multi-Agent Task Agnostic Exploration
|
cs.LG cs.AI
|
In reinforcement learning, we typically refer to task-agnostic exploration
when we aim to explore the environment without access to the task specification
a priori. In a single-agent setting the problem has been extensively studied
and mostly understood. A popular approach casts the task-agnostic objective as
maximizing the entropy of the state distribution induced by the agent's policy,
from which principles and methods follow. In contrast, little is known about
task-agnostic exploration in multi-agent settings, which are ubiquitous in the
real world. How should different agents explore in the presence of others? In
this paper, we address this question through a generalization to multiple
agents of the problem of maximizing the state distribution entropy. First, we
investigate alternative formulations, highlighting respective positives and
negatives. Then, we present a scalable, decentralized, trust-region policy
search algorithm to address the problem in practical settings. Finally, we
provide proof-of-concept experiments to both corroborate the theoretical
findings and pave the way for task-agnostic exploration in challenging
multi-agent settings.
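The entropy objective above can be made concrete with a plug-in estimate over discrete states, pooling samples from all agents' trajectories. This simplified estimator is an illustration under assumed discrete states, not the paper's formulation or algorithm.

```python
import math
from collections import Counter

def joint_state_entropy(trajectories):
    """Plug-in estimate of the entropy of the state distribution induced
    by the agents' policies, from states sampled across all agents'
    trajectories (discrete states assumed)."""
    counts = Counter(s for traj in trajectories for s in traj)
    n = sum(counts.values())
    return -sum((c / n) * math.log(c / n) for c in counts.values())
```

Agents that spread their visits over distinct states score higher than agents that collapse onto the same states, which is the sense in which the objective rewards exploration "in the presence of others".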
|
2502.08371
|
Unveiling Global Discourse Structures: Theoretical Analysis and NLP
Applications in Argument Mining
|
cs.CL
|
Particularly in the structure of global discourse, coherence plays a pivotal
role in human text comprehension and is a hallmark of high-quality text. This
is especially true for persuasive texts, where coherent argument structures
support claims effectively. This paper discusses and proposes methods for
detecting, extracting and representing these global discourse structures in a
process called Argument(ation) Mining. We begin by defining key terms and
processes of discourse structure analysis, then continue to summarize existing
research on the matter, and identify shortcomings in current argument component
extraction and classification methods. Furthermore, we will outline an
architecture for argument mining that focuses on making models more
generalisable while overcoming challenges in the current field of research by
utilizing novel NLP techniques. This paper reviews current knowledge,
summarizes recent works, and outlines our NLP pipeline, aiming to contribute to
the theoretical understanding of global discourse structures.
|
2502.08373
|
Uncertainty Aware Human-machine Collaboration in Camouflaged Object
Detection
|
cs.CV cs.AI
|
Camouflaged Object Detection (COD), the task of identifying objects concealed
within their environments, has seen rapid growth due to its wide range of
practical applications. A key step toward developing trustworthy COD systems is
the estimation and effective utilization of uncertainty. In this work, we
propose a human-machine collaboration framework for classifying the presence of
camouflaged objects, leveraging the complementary strengths of computer vision
(CV) models and noninvasive brain-computer interfaces (BCIs). Our approach
introduces a multiview backbone to estimate uncertainty in CV model
predictions, utilizes this uncertainty during training to improve efficiency,
and defers low-confidence cases to human evaluation via RSVP-based BCIs during
testing for more reliable decision-making. We evaluated the framework on the
CAMO dataset, achieving state-of-the-art results with an average improvement of
4.56\% in balanced accuracy (BA) and 3.66\% in the F1 score compared to
existing methods. For the best-performing participants, the improvements
reached 7.6\% in BA and 6.66\% in the F1 score. Analysis of the training
process revealed a strong correlation between our confidence measures and
precision, while an ablation study confirmed the effectiveness of the proposed
training policy and the human-machine collaboration strategy. In general, this
work reduces human cognitive load, improves system reliability, and provides a
strong foundation for advancements in real-world COD applications and
human-computer interaction. Our code and data are available at:
https://github.com/ziyuey/Uncertainty-aware-human-machine-collaboration-in-camouflaged-object-identification.
|
2502.08374
|
AdvSwap: Covert Adversarial Perturbation with High Frequency
Info-swapping for Autonomous Driving Perception
|
cs.CV
|
The perception modules of autonomous vehicles (AVs) are increasingly
susceptible to attacks that exploit vulnerabilities in neural networks through
adversarial inputs, thereby compromising AI safety. Some research focuses on
creating covert adversarial samples, but existing global noise techniques are
detectable and struggle to deceive the human visual system. This paper
introduces a novel adversarial attack method, AdvSwap, which creatively
utilizes wavelet-based high-frequency information swapping to generate covert
adversarial samples and fool the camera. AdvSwap employs invertible neural
network for selective high-frequency information swapping, preserving both
forward propagation and data integrity. The scheme effectively removes the
original label data and incorporates the guidance image data, producing
concealed and robust adversarial samples. Experimental evaluations and
comparisons on the GTSRB and nuScenes datasets demonstrate that AdvSwap can
make concealed attacks on common traffic targets. The generated adversarial
samples are also difficult for humans and algorithms to perceive. Meanwhile,
the method exhibits strong attack robustness and transferability.
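The core idea of swapping high-frequency information can be illustrated with a one-level Haar wavelet transform: keep the source's low-frequency (approximation) content and splice in the guidance signal's high-frequency (detail) coefficients. This plain 1D Haar swap is a toy analogue only; the paper uses an invertible neural network for selective swapping.

```python
import numpy as np

def haar_1level(x):
    """One-level 1D Haar transform: returns (approximation, detail)."""
    even, odd = x[0::2], x[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def haar_inverse(approx, detail):
    """Exact inverse of haar_1level."""
    even = (approx + detail) / np.sqrt(2)
    odd = (approx - detail) / np.sqrt(2)
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out

def high_freq_swap(source, guidance):
    """Toy analogue of the high-frequency info-swapping idea: keep the
    source's approximation coefficients but replace its detail
    coefficients with the guidance signal's, so the change stays
    perceptually subtle."""
    a_src, _ = haar_1level(source)
    _, d_gui = haar_1level(guidance)
    return haar_inverse(a_src, d_gui)
```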
|
2502.08376
|
Enhanced Load Forecasting with GAT-LSTM: Leveraging Grid and Temporal
Features
|
cs.LG eess.SP
|
Accurate power load forecasting is essential for the efficient operation and
planning of electrical grids, particularly given the increased variability and
complexity introduced by renewable energy sources. This paper introduces
GAT-LSTM, a hybrid model that combines Graph Attention Networks (GAT) and Long
Short-Term Memory (LSTM) networks. A key innovation of the model is the
incorporation of edge attributes, such as line capacities and efficiencies,
into the attention mechanism, enabling it to dynamically capture spatial
relationships grounded in grid-specific physical and operational constraints.
Additionally, by employing an early fusion of spatial graph embeddings and
temporal sequence features, the model effectively learns and predicts complex
interactions between spatial dependencies and temporal patterns, providing a
realistic representation of the dynamics of power grids. Experimental
evaluations on the Brazilian Electricity System dataset demonstrate that the
GAT-LSTM model significantly outperforms state-of-the-art models, achieving
reductions of 21.8% in MAE, 15.9% in RMSE, and 20.2% in MAPE. These results
underscore the robustness and adaptability of the GAT-LSTM model, establishing
it as a powerful tool for applications in grid management and energy planning.
|
2502.08377
|
Not All Frame Features Are Equal: Video-to-4D Generation via Decoupling
Dynamic-Static Features
|
cs.CV
|
Recently, the generation of dynamic 3D objects from a video has shown
impressive results. Existing methods directly optimize Gaussians using all the
information in frames. However, when dynamic regions are interwoven with static
regions within frames, particularly if the static regions account for a large
proportion, existing methods often overlook information in dynamic regions and
are prone to overfitting on static regions. This leads to producing results
with blurry textures. We consider that decoupling dynamic-static features to
enhance dynamic representations can alleviate this issue. Thus, we propose a
dynamic-static feature decoupling module (DSFD). Along temporal axes, it
regards the portions of current frame features that possess significant
differences relative to reference frame features as dynamic features.
Conversely, the remaining parts are the static features. Then, we acquire
decoupled features driven by dynamic features and current frame features.
Moreover, to further enhance the dynamic representation of decoupled features
from different viewpoints and ensure accurate motion prediction, we design a
temporal-spatial similarity fusion module (TSSF). Along spatial axes, it
adaptively selects similar information from dynamic regions. Building on the
above, we construct a novel approach, DS4D. Experimental results verify our
method achieves state-of-the-art (SOTA) results in video-to-4D. In addition,
the experiments on a real-world scenario dataset demonstrate its effectiveness
on the 4D scene. Our code will be publicly available.
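The decoupling step described above (treating feature positions with large differences from a reference frame as dynamic) can be sketched with a simple elementwise threshold. The thresholding rule and masking are assumed illustrative details, not the DSFD module's actual mechanism.

```python
import numpy as np

def decouple_dynamic_static(curr_feat, ref_feat, tau=0.5):
    """Sketch of dynamic-static decoupling: feature positions whose
    absolute difference from the reference frame exceeds tau are treated
    as dynamic; the remaining positions are static."""
    diff = np.abs(curr_feat - ref_feat)
    dynamic_mask = diff > tau
    dynamic = np.where(dynamic_mask, curr_feat, 0.0)
    static = np.where(dynamic_mask, 0.0, curr_feat)
    return dynamic, static, dynamic_mask
```

The dynamic branch can then be weighted more heavily during optimization so that small moving regions are not drowned out by large static backgrounds.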
|