| id | title | categories | abstract |
|---|---|---|---|
2502.02863
|
OceanChat: The Effect of Virtual Conversational AI Agents on Sustainable
Attitude and Behavior Change
|
cs.HC cs.AI
|
Marine ecosystems face unprecedented threats from climate change and plastic
pollution, yet traditional environmental education often struggles to translate
awareness into sustained behavioral change. This paper presents OceanChat, an
interactive system leveraging large language models to create conversational AI
agents represented as animated marine creatures -- specifically a beluga whale,
a jellyfish, and a seahorse -- designed to promote pro-environmental behavior (PEB)
and foster awareness through personalized dialogue. Through a between-subjects
experiment (N=900), we compared three conditions: (1) Static Scientific
Information, providing conventional environmental education through text and
images; (2) Static Character Narrative, featuring first-person storytelling
from 3D-rendered marine creatures; and (3) Conversational Character Narrative,
enabling real-time dialogue with AI-powered marine characters. Our analysis
revealed that the Conversational Character Narrative condition significantly
increased behavioral intentions and sustainable choice preferences compared to
static approaches. The beluga whale character demonstrated consistently
stronger emotional engagement across multiple measures, including perceived
anthropomorphism and empathy. However, impacts on deeper measures like climate
policy support and psychological distance were limited, highlighting the
complexity of shifting entrenched beliefs. Our work extends research on
sustainability interfaces facilitating PEB and offers design principles for
creating emotionally resonant, context-aware AI characters. By balancing
anthropomorphism with species authenticity, OceanChat demonstrates how
interactive narratives can bridge the gap between environmental knowledge and
real-world behavior change.
|
2502.02866
|
A Systematic Approach for Assessing Large Language Models' Test Case
Generation Capability
|
cs.SE cs.AI
|
Software testing ensures the quality and reliability of software products,
but manual test case creation is labor-intensive. With the rise of large
language models (LLMs), there is growing interest in unit test creation with
LLMs. However, effective assessment of LLM-generated test cases is limited by
the lack of standardized benchmarks that comprehensively cover diverse
programming scenarios. To address the assessment of LLMs' test case generation
ability and the lack of evaluation datasets, we propose the Generated Benchmark
from Control-Flow Structure and Variable Usage Composition (GBCV) approach,
which systematically generates programs used for evaluating LLMs' test
generation capabilities. By leveraging basic control-flow structures and
variable usage, GBCV provides a flexible framework to create a spectrum of
programs ranging from simple to complex. Because GPT-4o and GPT-3-Turbo are
publicly accessible models that reflect a typical real-world user's use case, we
use GBCV to assess their performance. Our findings indicate that GPT-4o
performs better on complex program structures, while all models effectively
detect boundary values in simple conditions but face challenges with arithmetic
computations. This study highlights the strengths and limitations of LLMs in
test generation, provides a benchmark framework, and suggests directions for
future improvement.
|
2502.02867
|
Domain-Invariant Per-Frame Feature Extraction for Cross-Domain Imitation
Learning with Visual Observations
|
cs.CV cs.AI cs.LG
|
Imitation learning (IL) enables agents to mimic expert behavior without
reward signals but faces challenges in cross-domain scenarios with
high-dimensional, noisy, and incomplete visual observations. To address this,
we propose Domain-Invariant Per-Frame Feature Extraction for Imitation Learning
(DIFF-IL), a novel IL method that extracts domain-invariant features from
individual frames and adapts them into sequences to isolate and replicate
expert behaviors. We also introduce a frame-wise time labeling technique to
segment expert behaviors by timesteps and assign rewards aligned with temporal
contexts, enhancing task performance. Experiments across diverse visual
environments demonstrate the effectiveness of DIFF-IL in addressing complex
visual tasks.
|
2502.02869
|
OmniRL: In-Context Reinforcement Learning by Large-Scale Meta-Training
in Randomized Worlds
|
cs.LG cs.AI
|
We introduce OmniRL, a highly generalizable in-context reinforcement learning
(ICRL) model that is meta-trained on hundreds of thousands of diverse tasks.
These tasks are procedurally generated by randomizing state transitions and
rewards within Markov Decision Processes. To facilitate this extensive
meta-training, we propose two key innovations: 1. An efficient data synthesis
pipeline for ICRL, which leverages the interaction histories of diverse
behavior policies; and 2. A novel modeling framework that integrates both
imitation learning and reinforcement learning (RL) within the context, by
incorporating prior knowledge. For the first time, we demonstrate that
in-context learning (ICL) alone, without any gradient-based fine-tuning, can
successfully tackle unseen Gymnasium tasks through imitation learning, online
RL, or offline RL. Additionally, we show that achieving generalized ICRL
capabilities, unlike task identification-oriented few-shot learning, critically
depends on long trajectories generated by varied tasks and diverse behavior
policies. By emphasizing the potential of ICL and departing from pre-training
focused on acquiring specific skills, we further underscore the significance of
meta-training aimed at cultivating the ability of ICL itself.
|
2502.02870
|
Uncertainty Quantification with the Empirical Neural Tangent Kernel
|
stat.ML cs.LG
|
While neural networks have demonstrated impressive performance across various
tasks, accurately quantifying uncertainty in their predictions is essential to
ensure their trustworthiness and enable widespread adoption in critical
systems. Several Bayesian uncertainty quantification (UQ) methods exist that
are either cheap or reliable, but not both. We propose a post-hoc,
sampling-based UQ method for over-parameterized networks at the end of
training. Our approach constructs efficient and meaningful deep ensembles by
employing a (stochastic) gradient-descent sampling process on appropriately
linearized networks. We demonstrate that our method effectively approximates
the posterior of a Gaussian process using the empirical Neural Tangent Kernel.
Through a series of numerical experiments, we show that our method not only
outperforms competing approaches in computational efficiency (often reducing
costs by multiple factors) but also maintains state-of-the-art performance
across a variety of UQ metrics for both regression and classification tasks.
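The empirical Neural Tangent Kernel at the end of training is simply the Gram matrix of parameter gradients. A minimal pure-Python sketch of that kernel for a toy one-hidden-layer network with hand-derived gradients (an illustration of the kernel itself, not the paper's posterior-sampling procedure):

```python
import math

# Tiny network f(x) = sum_i v[i] * tanh(w[i] * x).
# The empirical NTK is k(x, x') = <grad_theta f(x), grad_theta f(x')>,
# the inner product of parameter gradients at two inputs.
w = [0.3, -0.7, 1.1]
v = [0.5, 0.2, -0.4]

def grad(x):
    """Gradient of f(x) with respect to all parameters (w first, then v)."""
    g_w = [vi * x * (1.0 / math.cosh(wi * x)) ** 2 for wi, vi in zip(w, v)]
    g_v = [math.tanh(wi * x) for wi in w]
    return g_w + g_v

def ntk(x1, x2):
    return sum(a * b for a, b in zip(grad(x1), grad(x2)))

xs = [-1.0, 0.5, 2.0]
K = [[ntk(a, b) for b in xs] for a in xs]
# K is symmetric, and its diagonal entries are squared gradient norms (> 0).
```

The kernel matrix `K` is what a Gaussian-process approximation of the trained network would be built on.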
|
2502.02871
|
Position: Multimodal Large Language Models Can Significantly Advance
Scientific Reasoning
|
cs.CL cs.AI
|
Scientific reasoning, the process through which humans apply logic, evidence,
and critical thinking to explore and interpret scientific phenomena, is
essential for advancing knowledge across diverse fields. However,
despite significant progress, current scientific reasoning models still
struggle to generalize across domains and often fall short in multimodal
perception. Multimodal Large Language Models (MLLMs), which integrate text,
images, and other modalities, present an exciting opportunity to overcome these
limitations and enhance scientific reasoning. Therefore, this position paper
argues that MLLMs can significantly advance scientific reasoning across
disciplines such as mathematics, physics, chemistry, and biology. First, we
propose a four-stage research roadmap of scientific reasoning capabilities, and
highlight the current state of MLLM applications in scientific reasoning,
noting their ability to integrate and reason over diverse data types. Second,
we summarize the key challenges that remain obstacles to achieving MLLM's full
potential. To address these challenges, we propose actionable insights and
suggestions for the future. Overall, our work offers a novel perspective on
MLLM integration with scientific reasoning, providing the LLM community with a
valuable vision for achieving Artificial General Intelligence (AGI).
|
2502.02872
|
Achieving Operational Universality through a Turing Complete Chemputer
|
cs.CL
|
The most fundamental abstraction underlying all modern computers is the
Turing Machine. If a modern computer can simulate a Turing Machine (an
equivalence called Turing completeness), it is theoretically possible to
achieve any task that can be algorithmically described by executing a series
of discrete unit operations. In chemistry, programming chemical processes is
demanding because it is hard to ensure that a process can be understood at a
high level of abstraction and then reduced to practice.
Herein we exploit the concept of Turing completeness applied to robotic
platforms for chemistry that can be used to synthesise complex molecules
through unit operations that execute chemical processes using a
chemically-aware programming language, XDL. We map the concept of
computability by computers onto the synthesizability of chemical compounds by
automated synthesis machines. The results of an interactive demonstration of
Turing completeness using the colour gamut and conditional logic are presented
and examples of chemical use-cases are discussed. Over 16.7 million
combinations of Red, Green, Blue (RGB) colour space were binned into 5 discrete
values and measured over 10 regions of interest (ROIs), affording 78 million
possible states per step, serving as a proxy for conceptual chemical-space
exploration. This establishes a formal framework for future
chemical programming languages to ensure complex logic operations are expressed
and executed correctly, with the possibility of error correction, in the
automated and autonomous pursuit of increasingly complex molecules.
|
2502.02874
|
Vertical Federated Learning for Failure-Cause Identification in
Disaggregated Microwave Networks
|
cs.NI cs.AI cs.DC cs.LG
|
Machine Learning (ML) has proven to be a promising solution to provide novel
scalable and efficient fault management solutions in modern 5G-and-beyond
communication networks. In the context of microwave networks, ML-based
solutions have received significant attention. However, current solutions can
only be applied to monolithic scenarios in which a single entity (e.g., an
operator) manages the entire network. As current network architectures move
towards disaggregated communication platforms in which multiple operators and
vendors collaborate to achieve cost-efficient and reliable network management,
new ML-based approaches for fault management must tackle the challenges of
sharing business-critical information due to potential conflicts of interest.
In this study, we explore the application of Federated Learning in
disaggregated microwave networks for failure-cause identification using a real
microwave hardware failure dataset. In particular, we investigate two Vertical
Federated Learning (VFL) approaches, namely Split Neural Networks (SplitNNs)
and Federated Learning based on Gradient Boosting Decision Trees (FedTree), on
different multi-vendor deployment scenarios, and we compare
them to a centralized scenario where data is managed by a single entity. Our
experimental results show that VFL-based scenarios can achieve F1-Scores
consistently within at most a 1% gap with respect to a centralized scenario,
regardless of the deployment strategies or model types, while also ensuring
minimal leakage of sensitive data.
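For readers unfamiliar with the SplitNN idea mentioned above, a minimal sketch (toy weights and dimensions, not the paper's architecture): each party applies a local bottom model to its own feature slice and shares only the resulting activation with a server-side top model, never the raw features.

```python
import math

def party_embed(features, weights):
    """Local bottom model: one linear unit + tanh, run privately by a party."""
    z = sum(f * w for f, w in zip(features, weights))
    return math.tanh(z)

def server_head(embeddings, head_w, bias):
    """Server-side top model fuses the shared embeddings into a score."""
    z = sum(e * w for e, w in zip(embeddings, head_w)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid score

# Party A holds features 0-1, party B holds features 2-3 of the same sample.
sample = [0.2, -1.0, 0.7, 0.4]
emb_a = party_embed(sample[:2], [0.5, -0.3])   # only emb_a leaves party A
emb_b = party_embed(sample[2:], [0.8, 0.1])    # only emb_b leaves party B
score = server_head([emb_a, emb_b], [1.2, -0.7], 0.05)
```

Training would additionally exchange gradients at the cut layer; this forward pass shows only the vertical data partition.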
|
2502.02875
|
Heterogeneous Value Decomposition Policy Fusion for Multi-Agent
Cooperation
|
cs.MA
|
Value decomposition (VD) has become one of the most prominent solutions in
cooperative multi-agent reinforcement learning. Most existing methods generally
explore how to factorize the joint value and minimize the discrepancies between
agent observations and characteristics of environmental states. However, direct
decomposition may result in limited representation or difficulty in
optimization. Orthogonal to designing a new factorization scheme, in this
paper, we propose Heterogeneous Policy Fusion (HPF) to integrate the strengths
of various VD methods. We construct a composite policy set to select policies
for interaction adaptively. Specifically, this adaptive mechanism allows
agents' trajectories to benefit from diverse policy transitions while
incorporating the advantages of each factorization method. Additionally, HPF
introduces a constraint between these heterogeneous policies to rectify
misleading updates caused by unexpected exploratory behavior or suboptimal
non-cooperation. Experimental results on cooperative tasks show HPF's superior
performance over multiple baselines, proving its effectiveness and ease of
implementation.
|
2502.02883
|
SensorChat: Answering Qualitative and Quantitative Questions during
Long-Term Multimodal Sensor Interactions
|
cs.AI cs.HC
|
Natural language interaction with sensing systems is crucial for enabling all
users to comprehend sensor data and its impact on their everyday lives.
However, existing systems, which typically operate in a Question Answering (QA)
manner, are significantly limited in terms of the duration and complexity of
sensor data they can handle. In this work, we introduce SensorChat, the first
end-to-end QA system designed for long-term sensor monitoring with multimodal
and high-dimensional data including time series. SensorChat effectively answers
both qualitative (requiring high-level reasoning) and quantitative (requiring
accurate responses derived from sensor data) questions in real-world scenarios.
To achieve this, SensorChat uses an innovative three-stage pipeline that
includes question decomposition, sensor data query, and answer assembly. The
first and third stages leverage Large Language Models (LLMs) for intuitive
human interactions and to guide the sensor data query process. Unlike existing
multimodal LLMs, SensorChat incorporates an explicit query stage to precisely
extract factual information from long-duration sensor data. We implement
SensorChat and demonstrate its capability for real-time interactions on a cloud
server while also being able to run entirely on edge platforms after
quantization. Comprehensive QA evaluations show that SensorChat achieves up to
26% higher answer accuracy than state-of-the-art systems on quantitative
questions. Additionally, a user study with eight volunteers highlights
SensorChat's effectiveness in handling qualitative and open-ended questions.
|
2502.02885
|
Expertized Caption Auto-Enhancement for Video-Text Retrieval
|
cs.CV cs.AI cs.LG
|
The burgeoning field of video-text retrieval has witnessed significant
advancements with the advent of deep learning. However, the challenge of
matching text and video persists due to inadequate textual descriptions of
videos. The substantial information gap between the two modalities hinders a
comprehensive understanding of videos, resulting in ambiguous retrieval
results. While rewriting methods based on large language models have been
proposed to broaden text expressions, carefully crafted prompts are essential
to ensure the reasonableness and completeness of the rewritten texts. This
paper proposes an automatic caption enhancement method that enhances expression
quality and mitigates empiricism in augmented captions through self-learning.
Additionally, an expertized caption selection mechanism is designed and
introduced to customize augmented captions for each video, facilitating
video-text matching. Our method is entirely data-driven, which not only
dispenses with heavy data collection and computation workload but also improves
self-adaptability by circumventing lexicon dependence and introducing
personalized matching. The superiority of our method is validated by
state-of-the-art results on various benchmarks, specifically achieving Top-1
recall accuracy of 68.5% on MSR-VTT, 68.1% on MSVD, and 62.0% on DiDeMo.
|
2502.02887
|
Variations on the Expectation Due to Changes in the Probability Measure
|
cs.IT cs.LG math.IT math.PR math.ST stat.TH
|
Closed-form expressions are presented for the variation of the expectation of
a given function due to changes in the probability measure used for the
expectation. They unveil interesting connections with Gibbs probability
measures, the mutual information, and the lautum information.
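On a finite alphabet the quantity in question is easy to sanity-check numerically; the sketch below verifies the elementary identity E_P[f] - E_Q[f] = E_Q[f (dP/dQ - 1)], a generic change-of-measure identity rather than one of the paper's specific closed forms.

```python
# Variation of the expectation of f when the measure changes from Q to P.
# On a finite set the Radon-Nikodym derivative dP/dQ is the ratio of masses.
f = [1.0, 4.0, 9.0]          # arbitrary function values f(x)
Q = [0.5, 0.3, 0.2]          # reference measure
P = [0.2, 0.3, 0.5]          # new measure (same support)

direct = sum(p * fx for p, fx in zip(P, f)) - sum(q * fx for q, fx in zip(Q, f))
via_rn = sum(q * fx * (p / q - 1.0) for p, q, fx in zip(P, Q, f))
# Both routes give the same variation of the expectation.
```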
|
2502.02891
|
INST-Sculpt: Interactive Stroke-based Neural SDF Sculpting
|
cs.GR cs.CV
|
Recent advances in implicit neural representations have made them a popular
choice for modeling 3D geometry, achieving impressive results in tasks such as
shape representation, reconstruction, and learning priors. However, directly
editing these representations poses challenges due to the complex relationship
between model weights and surface regions they influence. Among such editing
tools, sculpting, which allows users to interactively carve or extrude the
surface, is a valuable editing operation for the graphics and modeling
community. While traditional mesh-based tools like ZBrush facilitate fast and
intuitive edits, a comparable toolkit for sculpting neural SDFs is currently
lacking. We introduce a framework that enables interactive surface sculpting
edits directly on neural implicit representations. Unlike previous works
limited to spot edits, our approach allows users to perform stroke-based
modifications on the fly, ensuring intuitive shape manipulation without
switching representations. By employing tubular neighborhoods to sample strokes
and custom brush profiles, we achieve smooth deformations along user-defined
curves, providing precise control over the sculpting process. Our method
demonstrates that intricate and versatile edits can be made while preserving
the smooth nature of implicit representations.
|
2502.02893
|
Lowering the Barrier of Machine Learning: Achieving Zero Manual Labeling
in Review Classification Using LLMs
|
cs.CL
|
With the internet's evolution, consumers increasingly rely on online reviews
for service or product choices, necessitating that businesses analyze extensive
customer feedback to enhance their offerings. While machine learning-based
sentiment classification shows promise in this realm, its technical complexity
often bars small businesses and individuals from leveraging such advancements,
which may widen the competitive gap between small and large businesses
in improving customer satisfaction. This paper introduces
an approach that integrates large language models (LLMs), specifically
Generative Pre-trained Transformer (GPT) and Bidirectional Encoder
Representations from Transformers (BERT)-based models, making it accessible to
a wider audience. Our experiments across various datasets confirm that our
approach retains high classification accuracy without the need for manual
labeling, expert knowledge in tuning and data annotation, or substantial
computational power. By significantly lowering the barriers to applying
sentiment classification techniques, our methodology enhances competitiveness
and paves the way for making machine learning technology accessible to a
broader audience.
|
2502.02895
|
Enhancing Quantum-ready QUBO-based Suppression for Object Detection with
Appearance and Confidence Features
|
cs.CV
|
Quadratic Unconstrained Binary Optimization (QUBO)-based suppression in
object detection is known to have superiority to conventional Non-Maximum
Suppression (NMS), especially for crowded scenes where NMS possibly suppresses
the (partially-) occluded true positives with low confidence scores. Whereas
existing QUBO formulations are less likely to miss occluded objects than NMS,
there is room for improvement because existing QUBO formulations naively
consider confidence scores and pairwise scores based on spatial overlap between
predictions. This study proposes new QUBO formulations that aim to distinguish
whether the overlap between predictions is due to the occlusion of objects or
due to redundancy in prediction, i.e., multiple predictions for a single
object. The proposed QUBO formulation integrates two features into the pairwise
score of the existing QUBO formulation: i) the appearance feature calculated by
the image similarity metric and ii) the product of confidence scores. These
features are derived from the hypothesis that redundant predictions share a
similar appearance feature and (partially-) occluded objects have low
confidence scores, respectively. The proposed methods demonstrate significant
advancement over state-of-the-art QUBO-based suppression without a notable
increase in runtime, achieving up to 4.54 points improvement in mAP and 9.89
points gain in mAR.
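A toy version of such a QUBO (illustrative weights and scores, not the paper's exact formulation) can be brute-forced for three detections: the unary term rewards confident detections and the pairwise penalty combines spatial overlap with an appearance-similarity term and the product of confidences.

```python
from itertools import product

# Keep a subset x in {0,1}^n of detections minimizing
#   E(x) = -sum_i c_i x_i + sum_{i<j} w_ij x_i x_j.
conf = [0.9, 0.85, 0.4]                            # detection confidences
iou = {(0, 1): 0.8, (0, 2): 0.6, (1, 2): 0.1}      # spatial overlap
app_sim = {(0, 1): 0.9, (0, 2): 0.2, (1, 2): 0.1}  # appearance similarity

def energy(x, alpha=1.0):
    unary = -sum(c * xi for c, xi in zip(conf, x))
    pair = sum(
        alpha * iou[i, j] * (app_sim[i, j] + conf[i] * conf[j]) * x[i] * x[j]
        for (i, j) in iou
    )
    return unary + pair

# Exhaustive search stands in for a quantum or classical QUBO solver.
best = min(product([0, 1], repeat=3), key=energy)
```

Here detection 0 is suppressed despite its high confidence because it both overlaps and looks like detection 1, i.e., it is treated as redundant rather than occluded.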
|
2502.02896
|
A Benchmark for the Detection of Metalinguistic Disagreements between
LLMs and Knowledge Graphs
|
cs.CL cs.AI
|
Evaluating large language models (LLMs) for tasks like fact extraction in
support of knowledge graph construction frequently involves computing accuracy
metrics using a ground truth benchmark based on a knowledge graph (KG). These
evaluations assume that errors represent factual disagreements. However, human
discourse frequently features metalinguistic disagreement, where agents differ
not on facts but on the meaning of the language used to express them. Given the
complexity of natural language processing and generation using LLMs, we ask: do
metalinguistic disagreements occur between LLMs and KGs? Based on an
investigation using the T-REx knowledge alignment dataset, we hypothesize that
metalinguistic disagreement does in fact occur between LLMs and KGs, with
potential relevance for the practice of knowledge graph engineering. We propose
a benchmark for evaluating the detection of factual and metalinguistic
disagreements between LLMs and KGs. An initial proof of concept of such a
benchmark is available on Github.
|
2502.02901
|
Policy Abstraction and Nash Refinement in Tree-Exploiting PSRO
|
cs.GT cs.AI
|
Policy Space Response Oracles (PSRO) interleaves empirical game-theoretic
analysis with deep reinforcement learning (DRL) to solve games too complex for
traditional analytic methods. Tree-exploiting PSRO (TE-PSRO) is a variant of
this approach that iteratively builds a coarsened empirical game model in
extensive form using data obtained from querying a simulator that represents a
detailed description of the game. We make two main methodological advances to
TE-PSRO that enhance its applicability to complex games of imperfect
information. First, we introduce a scalable representation for the empirical
game tree where edges correspond to implicit policies learned through DRL.
These policies cover conditions in the underlying game abstracted in the game
model, supporting sustainable growth of the tree over epochs. Second, we
leverage extensive form in the empirical model by employing refined Nash
equilibria to direct strategy exploration. To enable this, we give a modular
and scalable algorithm based on generalized backward induction for computing a
subgame perfect equilibrium (SPE) in an imperfect-information game. We
experimentally evaluate our approach on a suite of games including an
alternating-offer bargaining game with outside offers; our results demonstrate
that TE-PSRO converges toward equilibrium faster when new strategies are
generated based on SPE rather than Nash equilibrium, and with reasonable
time/memory requirements for the growing empirical model.
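Backward induction in the classic perfect-information special case can be sketched as follows; the paper's contribution is the generalization to imperfect-information games, and this ultimatum-style toy tree is invented purely for illustration.

```python
# Nodes are either terminal payoff tuples or {player, actions} decision nodes.
def spe(node):
    """Return (payoffs, strategy): subgame perfect play by backward induction."""
    if isinstance(node, tuple):          # terminal: (payoff_p0, payoff_p1)
        return node, {}
    player, actions = node["player"], node["actions"]
    best_action, best_pay, strat = None, None, {}
    for a, child in actions.items():
        pay, sub = spe(child)            # solve the subgame first
        strat.update(sub)
        if best_pay is None or pay[player] > best_pay[player]:
            best_action, best_pay = a, pay
    strat[id(node)] = best_action
    return best_pay, strat

# Toy ultimatum game: player 0 offers fair/greedy; player 1 accepts/rejects.
game = {"player": 0, "actions": {
    "fair":   {"player": 1, "actions": {"accept": (5, 5), "reject": (0, 0)}},
    "greedy": {"player": 1, "actions": {"accept": (8, 2), "reject": (0, 0)}},
}}
payoffs, strategy = spe(game)
```

Since player 1 accepts any positive payoff in the subgames, the SPE has player 0 offer "greedy", yielding payoffs (8, 2).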
|
2502.02903
|
What is in a name? Mitigating Name Bias in Text Embeddings via
Anonymization
|
cs.CL cs.AI cs.LG
|
Text-embedding models often exhibit biases arising from the data on which
they are trained. In this paper, we examine a hitherto unexplored bias in
text-embeddings: bias arising from the presence of $\textit{names}$ of
persons, locations, organizations, etc. in the text. Our study shows how the
presence of $\textit{name-bias}$ in text-embedding models can potentially lead
to erroneous conclusions in the assessment of thematic similarity.
Text-embeddings can mistakenly indicate similarity between texts based on
shared names even when their actual semantic content is unrelated, or indicate
dissimilarity simply because of differing names even when the texts match
semantically. We first demonstrate the presence of name bias in different
text-embedding models and then propose $\textit{text-anonymization}$ during
inference which involves removing references to names, while preserving the
core theme of the text. The efficacy of the anonymization approach is
demonstrated on two downstream NLP tasks, achieving significant performance
gains. Our simple and training-optimization-free approach offers a practical
and easily implementable solution to mitigate name bias.
|
2502.02904
|
ScholaWrite: A Dataset of End-to-End Scholarly Writing Process
|
cs.HC cs.CL q-bio.NC
|
Writing is a cognitively demanding task involving continuous decision-making,
heavy use of working memory, and frequent switching between multiple
activities. Scholarly writing is particularly complex as it requires authors to
coordinate many pieces of multiform knowledge. To fully understand writers'
cognitive thought process, one should fully decode the end-to-end writing data
(from individual ideas to final manuscript) and understand their complex
cognitive mechanisms in scholarly writing. We introduce the ScholaWrite dataset, a
first-of-its-kind keystroke corpus of an end-to-end scholarly writing process
for complete manuscripts, with thorough annotations of cognitive writing
intentions behind each keystroke. Our dataset includes LaTeX-based keystroke
data from five preprints with nearly 62K total text changes and annotations
across 4 months of paper writing. ScholaWrite shows promising usability and
applications (e.g., iterative self-writing), demonstrating the importance of
collection of end-to-end writing data, rather than the final manuscript, for
the development of future writing assistants to support the cognitive thinking
process of scientists. Our de-identified data examples and code are available
on our project page.
|
2502.02905
|
AI-driven materials design: a mini-review
|
cond-mat.mtrl-sci cs.LG
|
Materials design is an important component of modern science and technology,
yet traditional approaches rely heavily on trial-and-error and can be
inefficient. Computational techniques, enhanced by modern artificial
intelligence (AI), have greatly accelerated the design of new materials. Among
these approaches, inverse design has shown great promise in designing materials
that meet specific property requirements. In this mini-review, we summarize key
computational advancements for materials design over the past few decades. We
follow the evolution of relevant materials design techniques, from
high-throughput forward machine learning (ML) methods and evolutionary
algorithms, to advanced AI strategies like reinforcement learning (RL) and deep
generative models. We highlight the paradigm shift from conventional screening
approaches to inverse generation driven by deep generative models. Finally, we
discuss current challenges and future perspectives of materials inverse design.
This review may serve as a brief guide to the approaches, progress, and outlook
of designing future functional materials with technological relevance.
|
2502.02907
|
PoleStack: Robust Pole Estimation of Irregular Objects from Silhouette
Stacking
|
cs.CV
|
We present an algorithm to estimate the rotation pole of a principal-axis
rotator using silhouette images collected from multiple camera poses. First, a
set of images is stacked to form a single silhouette-stack image, where the
object's rotation introduces reflective symmetry about the imaged pole
direction. We estimate this projected-pole direction by identifying maximum
symmetry in the silhouette stack. To handle unknown center-of-mass image
location, we apply the Discrete Fourier Transform to produce the
silhouette-stack amplitude spectrum, achieving translation invariance and
increased robustness to noise. Second, the 3D pole orientation is estimated by
combining two or more projected-pole measurements collected from different
camera orientations. We demonstrate degree-level pole estimation accuracy using
low-resolution imagery, showing robustness to severe surface shadowing and
centroid-based image-registration errors. The proposed approach could be
suitable for pole estimation during both the approach phase toward a target
object and while hovering.
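The translation-invariance trick can be checked in one dimension: circularly shifting a signal changes its samples but not its DFT amplitude spectrum, so an unknown center-of-mass offset drops out. A stdlib-only sketch:

```python
import cmath

def dft_amplitudes(x):
    """Magnitudes of the discrete Fourier transform of a real sequence."""
    n = len(x)
    return [
        abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n)))
        for k in range(n)
    ]

signal = [0.0, 1.0, 3.0, 1.0, 0.0, 0.0]
shifted = signal[2:] + signal[:2]        # circular translation by 2 samples

a1 = dft_amplitudes(signal)
a2 = dft_amplitudes(shifted)
# The amplitude spectra match even though the raw samples differ.
```

The same property in 2-D is what makes the silhouette-stack amplitude spectrum robust to image-registration errors.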
|
2502.02908
|
COSMosFL: Ensemble of Small Language Models for Fault Localisation
|
cs.SE cs.LG
|
LLMs are rapidly being adopted to build powerful tools and agents for
software engineering, but most of them rely heavily on extremely large
closed-source models. This, in turn, can hinder wider adoption due to security
issues as well as financial cost and environmental impact. Recently, a number
of open source Small Language Models (SLMs) are being released and gaining
traction. While SLMs are smaller, more energy-efficient, and therefore easier
to locally deploy, they tend to show worse performance when compared to larger
closed LLMs. We present COSMos, a task-level LLM ensemble technique that uses a
voting mechanism to provide a broader range of choices between SLMs and LLMs.
We instantiate COSMos with an LLM-based Fault Localisation technique, AutoFL,
and report the cost-benefit trade-off between LLM accuracy and various costs
such as energy consumption, inference time, and the number of tokens used. An
empirical evaluation using Defects4J shows that COSMos can build effective
ensembles that can achieve Pareto-optimality in terms of FL accuracy and
inference cost, when compared to individual models.
|
2502.02909
|
SPARC: Subspace-Aware Prompt Adaptation for Robust Continual Learning in
LLMs
|
cs.LG cs.AI cs.CL
|
We propose SPARC, a lightweight continual learning framework for large
language models (LLMs) that enables efficient task adaptation through prompt
tuning in a lower-dimensional space. By leveraging principal component analysis
(PCA), we identify a compact subspace of the training data. Optimizing prompts
in this lower-dimensional space enhances training efficiency, as it focuses
updates on the most relevant features while reducing computational overhead.
Furthermore, since the model's internal structure remains unaltered, the
extensive knowledge gained from pretraining is fully preserved, ensuring that
previously learned information is not compromised during adaptation. Our method
achieves high knowledge retention in both task-incremental and
domain-incremental continual learning setups while fine-tuning only 0.04% of
the model's parameters. Additionally, by integrating LoRA, we enhance
adaptability to computational constraints, allowing for a tradeoff between
accuracy and training cost. Experiments on the SuperGLUE benchmark demonstrate
that our PCA-based prompt tuning combined with LoRA maintains full knowledge
retention while improving accuracy, utilizing only 1% of the model's
parameters. These results establish our approach as a scalable and
resource-efficient solution for continual learning in LLMs.
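The subspace idea can be sketched with a toy 2-D dataset: power iteration recovers the leading principal direction of the data, and "prompt tuning" then reduces to optimizing a single scalar along it. The data and dimensions below are invented and are not SPARC's actual pipeline.

```python
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def power_iteration(M, iters=200):
    """Leading eigenvector of a 2x2 symmetric matrix by power iteration."""
    v = [1.0, 0.0]
    for _ in range(iters):
        w = matvec(M, v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Toy 2-D features whose variance lies mostly along the (1, 1) direction.
data = [(2.0, 1.9), (-1.0, -1.1), (0.5, 0.4), (-1.5, -1.2)]
mean = [sum(c) / len(data) for c in zip(*data)]
centered = [[a - m for a, m in zip(p, mean)] for p in data]
cov = [[sum(p[i] * p[j] for p in centered) / len(data) for j in range(2)]
       for i in range(2)]
direction = power_iteration(cov)

def prompt(t):
    """Prompt constrained to the 1-D PCA subspace: one scalar to tune."""
    return [m + t * d for m, d in zip(mean, direction)]
```

Restricting updates to the subspace is what keeps the number of trainable parameters tiny while staying aligned with the data's dominant features.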
|
2502.02910
|
DANDI: Diffusion as Normative Distribution for Deep Neural Network Input
|
cs.SE cs.LG
|
Surprise Adequacy (SA) has been widely studied as a test adequacy metric that
can effectively guide software engineers towards inputs that are more likely to
reveal unexpected behaviour of Deep Neural Networks (DNNs). Intuitively, SA is
an out-of-distribution metric that quantifies the dissimilarity between the
given input and the training data: if a new input is very different from those
seen during training, the DNN is more likely to behave unexpectedly against the
input. While SA has been widely adopted as a test prioritization method, its
major weakness is the fact that the computation of the metric requires access
to the training dataset, which is often not allowed in real-world use cases. We
present DANDI, a technique that generates a surrogate input distribution using
Stable Diffusion to compute SA values without requiring the original training
data. An empirical evaluation of DANDI applied to image classifiers for CIFAR10
and ImageNet-1K shows that SA values computed against synthetic data are highly
correlated with the values computed against the training data, with Spearman
rank correlations of 0.852 for ImageNet-1K and 0.881 for CIFAR-10.
Further, we show that SA values computed by DANDI can prioritize inputs
as effectively as those computed using the training data, when testing DNN
models mutated by DeepMutation. We believe that DANDI can significantly improve
the usability of SA for practical DNN testing.
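The core claim, that SA computed against a surrogate distribution tracks SA computed against the real training data, can be illustrated with a toy distance-based surprise score and a rank correlation. Everything below (the nearest-neighbour surprise score, the synthetic stand-ins for the training and surrogate sets) is illustrative, not the paper's actual setup.

```python
import numpy as np

def surprise(x, reference):
    """Toy surprise adequacy: distance from x to its nearest reference point."""
    return float(np.min(np.linalg.norm(reference - x, axis=1)))

def spearman(a, b):
    """Spearman rank correlation for tie-free samples."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean(); rb -= rb.mean()
    return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))

rng = np.random.default_rng(0)
train = rng.normal(size=(500, 8))        # stand-in for the (unavailable) training set
surrogate = rng.normal(size=(500, 8))    # stand-in for diffusion-generated data
tests = rng.normal(scale=2.0, size=(100, 8))

sa_real = np.array([surprise(x, train) for x in tests])
sa_surr = np.array([surprise(x, surrogate) for x in tests])
rho = spearman(sa_real, sa_surr)
```

A high `rho` would indicate that the surrogate set can replace the training data for prioritization, which is the paper's empirical finding at much larger scale.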
|
2502.02912
|
MobiCLR: Mobility Time Series Contrastive Learning for Urban Region
Representations
|
cs.LG cs.AI
|
Recently, learning effective representations of urban regions has gained
significant attention as a key approach to understanding urban dynamics and
advancing smarter cities. Existing approaches have demonstrated the potential
of leveraging mobility data to generate latent representations, providing
valuable insights into the intrinsic characteristics of urban areas. However,
incorporating the temporal dynamics and detailed semantics inherent in human
mobility patterns remains underexplored. To address this gap, we propose a
novel urban region representation learning model, Mobility Time Series
Contrastive Learning for Urban Region Representations (MobiCLR), designed to
capture semantically meaningful embeddings from inflow and outflow mobility
patterns. MobiCLR uses contrastive learning to enhance the discriminative power
of its representations, applying an instance-wise contrastive loss to capture
distinct flow-specific characteristics. Additionally, we develop a regularizer
to align output features with these flow-specific representations, enabling a
more comprehensive understanding of mobility dynamics. To validate our model,
we conduct extensive experiments in Chicago, New York, and Washington, D.C. to
predict income, educational attainment, and social vulnerability. The results
demonstrate that our model outperforms state-of-the-art models.
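The instance-wise contrastive loss mentioned above is, in spirit, an InfoNCE objective over two views of the same region's flow series. A minimal NumPy sketch under that assumption; MobiCLR's actual loss, encoder, and temperature may differ.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """Instance-wise InfoNCE: row i of z1 should match row i of z2, not other rows.

    z1, z2: (n, d) L2-normalised embeddings of two views of the same n instances.
    """
    logits = (z1 @ z2.T) / temperature
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))

rng = np.random.default_rng(2)
z = rng.normal(size=(32, 16))
z /= np.linalg.norm(z, axis=1, keepdims=True)
z_neg = rng.normal(size=(32, 16))
z_neg /= np.linalg.norm(z_neg, axis=1, keepdims=True)
```

The loss is low when paired views align and high when the second view carries no instance-specific signal, which is what gives the representations their discriminative power.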
|
2502.02913
|
Real-Time Privacy Risk Measurement with Privacy Tokens for Gradient
Leakage
|
cs.LG cs.CR
|
The widespread deployment of deep learning models in privacy-sensitive
domains has amplified concerns regarding privacy risks, particularly those
stemming from gradient leakage during training. Current privacy assessments
primarily rely on post-training attack simulations. However, these methods are
inherently reactive, unable to encompass all potential attack scenarios, and
often based on idealized adversarial assumptions. These limitations underscore
the need for proactive approaches to privacy risk assessment during the
training process. To address this gap, we propose the concept of privacy
tokens, which are derived directly from private gradients during training.
Privacy tokens encapsulate gradient features and, when combined with data
features, offer valuable insights into the extent of private information
leakage from training data, enabling real-time measurement of privacy risks
without relying on adversarial attack simulations. Additionally, we employ
Mutual Information (MI) as a robust metric to quantify the relationship between
training data and gradients, providing precise and continuous assessments of
privacy leakage throughout the training process. Extensive experiments validate
our framework, demonstrating the effectiveness of privacy tokens and MI in
identifying and quantifying privacy risks. This proactive approach marks a
significant advancement in privacy monitoring, promoting the safer deployment
of deep learning models in sensitive applications.
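The MI-based quantification can be sketched with a simple histogram (plug-in) estimator. The paper's estimator and the construction of privacy tokens are not specified in the abstract, so the code below only illustrates how an MI estimate separates dependent from independent pairs of quantities.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram (plug-in) estimate of I(X;Y) in nats from paired 1-D samples."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)      # marginal of x
    py = pxy.sum(axis=0, keepdims=True)      # marginal of y
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y_independent = rng.normal(size=5000)          # shares no information with x
y_dependent = x + 0.1 * rng.normal(size=5000)  # a noisy copy of x

mi_dep = mutual_information(x, y_dependent)
mi_ind = mutual_information(x, y_independent)
```

In the paper's setting, `x` would play the role of a data feature and `y` a gradient-derived feature; a rising MI during training would flag growing leakage.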
|
2502.02917
|
Interactive Symbolic Regression through Offline Reinforcement Learning:
A Co-Design Framework
|
cs.LG cs.AI cs.SC
|
Symbolic Regression (SR) holds great potential for uncovering underlying
mathematical and physical relationships from observed data. However, the vast
combinatorial space of possible expressions poses significant challenges for
both online search methods and pre-trained transformer models. Additionally,
current state-of-the-art approaches typically do not consider the integration
of domain experts' prior knowledge and do not support iterative interactions
with the model during the equation discovery process. To address these
challenges, we propose the Symbolic Q-network (Sym-Q), an advanced interactive
framework for large-scale symbolic regression. Unlike previous large-scale
transformer-based SR approaches, Sym-Q leverages reinforcement learning without
relying on a transformer-based decoder. This formulation allows the agent to
learn through offline reinforcement learning using any type of tree encoder,
enabling more efficient training and inference. Furthermore, we propose a
co-design mechanism, where the reinforcement learning-based Sym-Q facilitates
effective interaction with domain experts at any stage of the equation
discovery process. Users can dynamically modify generated nodes of the
expression, collaborating with the agent to tailor the mathematical expression
to best fit the problem and align with the assumed physical laws, particularly
when there is prior partial knowledge of the expected behavior. Our experiments
demonstrate that the pre-trained Sym-Q surpasses existing SR algorithms on the
challenging SSDNC benchmark. Moreover, we experimentally show on real-world
cases that its performance can be further enhanced by the interactive co-design
mechanism, with Sym-Q achieving greater performance gains than other
state-of-the-art models. Our reproducible code is available at
https://github.com/EPFL-IMOS/Sym-Q.
|
2502.02919
|
Maximizing the Position Embedding for Vision Transformers with Global
Average Pooling
|
cs.CV cs.LG
|
In vision transformers, position embedding (PE) plays a crucial role in
capturing the order of tokens. However, in vision transformer structures, there
is a limitation in the expressiveness of PE due to the structure where position
embedding is simply added to the token embedding. A layer-wise method that
delivers PE to each layer and applies independent Layer Normalizations for
token embedding and PE has been adopted to overcome this limitation. In this
paper, we identify a conflicting result that arises in a layer-wise structure
when using the global average pooling (GAP) method instead of the class token.
To overcome this problem, we propose MPVG, which maximizes the effectiveness of
PE in a layer-wise structure with GAP. Specifically, we identify that PE
counterbalances token embedding values at each layer in a layer-wise structure.
Furthermore, we recognize that the counterbalancing role of PE is insufficient
in the layer-wise structure, and we address this by maximizing the
effectiveness of PE through MPVG. Through experiments, we demonstrate that PE
performs a counterbalancing role and that maintaining this counterbalancing
directionality significantly impacts vision transformers. As a result, the
experimental results show that MPVG outperforms existing methods across vision
transformers on various tasks.
|
2502.02920
|
Adaptive Budget Optimization for Multichannel Advertising Using
Combinatorial Bandits
|
cs.LG cs.AI
|
Effective budget allocation is crucial for optimizing the performance of
digital advertising campaigns. However, the development of practical budget
allocation algorithms remains limited, primarily due to the lack of public
datasets and comprehensive simulation environments capable of verifying the
intricacies of real-world advertising. While multi-armed bandit (MAB)
algorithms have been extensively studied, their efficacy diminishes in
non-stationary environments where quick adaptation to changing market dynamics
is essential. In this paper, we advance the field of budget allocation in
digital advertising by introducing three key contributions. First, we develop a
simulation environment designed to mimic multichannel advertising campaigns
over extended time horizons, incorporating logged real-world data. Second, we
propose an enhanced combinatorial bandit budget allocation strategy that
leverages a saturating mean function and a targeted exploration mechanism with
change-point detection. This approach dynamically adapts to changing market
conditions, improving allocation efficiency by filtering target regions based
on domain knowledge. Finally, we present both theoretical analysis and
empirical results, demonstrating that our method consistently outperforms
baseline strategies, achieving higher rewards and lower regret across multiple
real-world campaigns.
|
2502.02921
|
Robust Reward Alignment via Hypothesis Space Batch Cutting
|
cs.LG
|
Reward design for reinforcement learning and optimal control agents is
challenging. Preference-based alignment addresses this by enabling agents to
learn rewards from ranked trajectory pairs provided by humans. However,
existing methods often suffer from poor robustness to unknown false human
preferences. In this work, we propose a robust and efficient reward alignment
method based on a novel and geometrically interpretable perspective: hypothesis
space batched cutting. Our method iteratively refines the reward hypothesis
space through "cuts" based on batches of human preferences. Within each batch,
human preferences, queried based on disagreement, are grouped using a voting
function to determine the appropriate cut, ensuring a bounded human query
complexity. To handle unknown erroneous preferences, we introduce a
conservative cutting method within each batch, preventing erroneous human
preferences from making overly aggressive cuts to the hypothesis space. This
guarantees provable robustness against false preferences. We evaluate our
method in a model predictive control setting across diverse tasks, including
DM-Control, dexterous in-hand manipulation, and locomotion. The results
demonstrate that our framework achieves comparable or superior performance to
state-of-the-art methods in error-free settings while significantly
outperforming existing methods when handling a high percentage of erroneous human
preferences.
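A toy version of conservative batched cutting, assuming linear rewards r(τ) = w·φ(τ) and a sampled hypothesis space. Both are assumptions for illustration; the paper's voting function and cut geometry may differ.

```python
import numpy as np

def conservative_batch_cut(W, batch, tol=1):
    """Keep reward hypotheses (rows of W) that violate at most `tol` preferences.

    Each preference (phi_a, phi_b) says trajectory a is preferred to b, cutting
    the hypothesis space with the halfspace w @ (phi_a - phi_b) > 0.  Allowing
    up to `tol` violated cuts per batch retains hypotheses that a few erroneous
    preferences would otherwise eliminate.
    """
    votes = np.zeros(len(W), dtype=int)
    for phi_a, phi_b in batch:
        votes += W @ (phi_a - phi_b) <= 0    # one vote against each violating w
    return W[votes <= tol]

rng = np.random.default_rng(1)
true_w = np.array([1.0, 0.5, -1.0])
W = np.vstack([true_w, rng.normal(size=(500, 3))])   # sampled hypothesis space

batch = []
for _ in range(5):
    pa, pb = rng.normal(size=3), rng.normal(size=3)
    if true_w @ (pa - pb) < 0:                # label by the "true" reward
        pa, pb = pb, pa
    batch.append((pa, pb))
batch[0] = (batch[0][1], batch[0][0])         # one erroneous (flipped) preference

kept = conservative_batch_cut(W, batch, tol=1)
```

The true reward survives the batch despite the flipped label, while an aggressive cut (`tol=0`) would have discarded it, which is the robustness property the abstract describes.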
|
2502.02922
|
Elucidating the Preconditioning in Consistency Distillation
|
cs.LG cs.CV
|
Consistency distillation is a prevalent way for accelerating diffusion models
adopted in consistency (trajectory) models, in which a student model is trained
to traverse backward on the probability flow (PF) ordinary differential
equation (ODE) trajectory determined by the teacher model. Preconditioning is a
vital technique for stabilizing consistency distillation, by linearly combining
the input data and the network output with pre-defined coefficients as the
consistency function. It imposes the boundary condition of consistency
functions without restricting the form and expressiveness of the neural
network. However, previous preconditionings are hand-crafted and may be
suboptimal choices. In this work, we offer the first theoretical insights into
the preconditioning in consistency distillation, by elucidating its design
criteria and the connection to the teacher ODE trajectory. Based on these
analyses, we further propose a principled way dubbed \textit{Analytic-Precond}
to analytically optimize the preconditioning according to the consistency gap
(defined as the gap between the teacher denoiser and the optimal student
denoiser) on a generalized teacher ODE. We demonstrate that Analytic-Precond
can facilitate the learning of trajectory jumpers, enhance the alignment of the
student trajectory with the teacher's, and achieve $2\times$ to $3\times$
training acceleration of consistency trajectory models in multi-step generation
across various datasets.
|
2502.02924
|
TopoCL: Topological Contrastive Learning for Time Series
|
cs.LG cs.AI
|
Universal time series representation learning is challenging but valuable in
real-world applications such as classification, anomaly detection, and
forecasting. Recently, contrastive learning (CL) has been actively explored to
tackle time series representation. However, a key challenge is that the data
augmentation process in CL can distort seasonal patterns or temporal
dependencies, inevitably leading to a loss of semantic information. To address
this challenge, we propose Topological Contrastive Learning for time series
(TopoCL). TopoCL mitigates such information loss by incorporating persistent
homology, which captures the topological characteristics of data that remain
invariant under transformations. In this paper, we treat the temporal and
topological properties of time series data as distinct modalities.
Specifically, we compute persistent homology to construct topological features
of time series data, representing them in persistence diagrams. We then design
a neural network to encode these persistence diagrams. Our approach jointly
optimizes CL within the time modality and time-topology correspondence,
promoting a comprehensive understanding of both temporal semantics and
topological properties of time series. We conduct extensive experiments on four
downstream tasks-classification, anomaly detection, forecasting, and transfer
learning. The results demonstrate that TopoCL achieves state-of-the-art
performance.
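The persistent-homology step can be illustrated without any topology library: the 0-dimensional persistence pairs of a 1-D series under the sublevel-set filtration pair each local minimum (birth) with the value at which its component merges into an older one (death). A minimal union-find sketch, not TopoCL's actual pipeline, which presumably relies on a standard persistence package:

```python
import math

def sublevel_persistence(series):
    """0-dimensional persistence pairs of a 1-D series, sublevel filtration."""
    n = len(series)
    order = sorted(range(n), key=lambda i: series[i])
    parent = [None] * n                 # None: vertex not yet in the filtration
    birth = [None] * n

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    pairs = []
    for i in order:                     # add vertices in increasing value order
        parent[i], birth[i] = i, series[i]
        for j in (i - 1, i + 1):        # try to merge with active neighbours
            if 0 <= j < n and parent[j] is not None:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                if birth[ri] > birth[rj]:   # elder rule: younger component dies
                    ri, rj = rj, ri
                if series[i] > birth[rj]:   # skip zero-persistence pairs
                    pairs.append((birth[rj], series[i]))
                parent[rj] = ri
    root = find(order[0])
    pairs.append((birth[root], math.inf))   # global minimum never dies
    return pairs
```

The resulting (birth, death) pairs form the persistence diagram; they are invariant under the reparametrizations that distort raw augmented series, which is exactly the information TopoCL feeds to its topological encoder.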
|
2502.02925
|
Data denoising with self consistency, variance maximization, and the
Kantorovich dominance
|
stat.ME cs.LG math.PR math.ST stat.TH
|
We introduce a new framework for data denoising, partially inspired by
martingale optimal transport. For a given noisy distribution (the data), our
approach involves finding the closest distribution to it among all
distributions which 1) have a particular prescribed structure (expressed by
requiring they lie in a particular domain), and 2) are self-consistent with the
data. We show that this amounts to maximizing the variance among measures in
the domain which are dominated in convex order by the data. For particular
choices of the domain, this problem and a relaxed version of it, in which the
self-consistency condition is removed, are intimately related to various
classical approaches to denoising. We prove that our general problem has
certain desirable features: solutions exist under mild assumptions, have
certain robustness properties, and, for very simple domains, coincide with
solutions to the relaxed problem.
We also introduce a novel relationship between distributions, termed
Kantorovich dominance, which retains certain aspects of the convex order while
being a weaker, more robust, and easier-to-verify condition. Building on this,
we propose and analyze a new denoising problem by substituting the convex order
in the previously described framework with Kantorovich dominance. We
demonstrate that this revised problem shares some characteristics with the full
convex order problem but offers enhanced stability, greater computational
efficiency, and, in specific domains, more meaningful solutions. Finally, we
present simple numerical examples illustrating solutions for both the full
convex order problem and the Kantorovich dominance problem.
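In symbols, the variance-maximization characterization described above can be written as follows. The notation is assumed here, since the abstract fixes none: $\mu$ is the noisy data distribution, $\mathcal{D}$ the structural domain, and $\preceq_c$ the convex order.

```latex
\[
\mu \preceq_c \nu
\;\Longleftrightarrow\;
\int \varphi \, d\mu \;\le\; \int \varphi \, d\nu
\quad \text{for all convex } \varphi,
\]
\[
\theta^\star \;\in\;
\operatorname*{arg\,max}_{\theta \in \mathcal{D},\; \theta \preceq_c \mu}
\operatorname{Var}(\theta).
\]
```

The Kantorovich-dominance variant substitutes a weaker relation for $\preceq_c$ in the constraint while keeping the same variance objective.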
|
2502.02928
|
Large Language Model Guided Self-Debugging Code Generation
|
cs.SE cs.AI
|
Automated code generation is gaining significant importance in intelligent
computer programming and system deployment. However, current approaches often
face challenges in computational efficiency and lack robust mechanisms for code
parsing and error correction. In this work, we propose a novel framework,
PyCapsule, with a simple yet effective two-agent pipeline and efficient
self-debugging modules for Python code generation. PyCapsule features
sophisticated prompt inference, iterative error handling, and case testing,
ensuring high generation stability, safety, and correctness. Empirically,
PyCapsule achieves up to 5.7% improvement of success rate on HumanEval, 10.3%
on HumanEval-ET, and 24.4% on BigCodeBench compared to the state-of-art
methods. We also observe a decrease in normalized success rate given more
self-debugging attempts, potentially affected by limited and noisy error
feedback in retention. PyCapsule demonstrates broader impacts on advancing
lightweight and efficient code generation for artificial intelligence systems.
|
2502.02932
|
Dominance Regions of Pursuit-evasion Games in Non-anticipative
Information Patterns
|
math.OC cs.GT cs.SY eess.SY
|
The evader's dominance region is an important concept and the foundation of
geometric methods for pursuit-evasion games. This article mainly reveals the
relevant properties of the evader's dominance region, especially in
non-anticipative information patterns. These properties can be used to study
pursuit-evasion games under such information patterns. The core problem is
under what conditions the pursuer has a non-anticipative strategy to prevent
the evader from leaving its initial dominance region before being captured
regardless of the evader's strategy. We first define the evader's dominance
region by the shortest path distance, and we rigorously prove for the first
time that the initial dominance region of the evader is the reachable region of
the evader in the open-loop sense. Subsequently, we prove that there exists a
non-anticipative strategy by which the pursuer can capture the evader before
the evader leaves its initial dominance region's closure in the absence of
obstacles. For cases with obstacles, we provide a counter example to illustrate
that such a non-anticipative strategy does not always exist, and provide a
necessary condition for the existence of such a strategy. Finally, we consider a
scenario with a single corner obstacle and provide a sufficient condition for
the existence of such a non-anticipative strategy. At the end of this article,
we discuss the application of the evader's dominance region in target defense
games. This article provides an important reference for the design of
non-anticipative strategies in pursuit-evasion games with obstacles.
|
2502.02934
|
Gait-Net-augmented Implicit Kino-dynamic MPC for Dynamic
Variable-frequency Humanoid Locomotion over Discrete Terrains
|
cs.RO cs.SY eess.SY
|
Current optimization-based control techniques for humanoid locomotion
struggle to adapt step duration and placement simultaneously in dynamic walking
gaits due to their reliance on fixed-time discretization, which limits
responsiveness to terrain conditions and results in suboptimal performance in
challenging environments. In this work, we propose a Gait-Net-augmented
implicit kino-dynamic model-predictive control (MPC) to simultaneously optimize
step location, step duration, and contact forces for natural variable-frequency
locomotion. The proposed method incorporates a Gait-Net-augmented Sequential
Convex MPC algorithm to solve multi-linearly constrained variables by iterative
quadratic programs. At its core, a lightweight Gait-frequency Network
(Gait-Net) determines the preferred step duration in terms of variable MPC
sampling times, simplifying step duration optimization to the parameter level.
Additionally, it enhances and updates the spatial reference trajectory within
each sequential iteration by incorporating local solutions, allowing the
projection of kinematic constraints to the design of reference trajectories. We
validate the proposed algorithm in high-fidelity simulations and on small-size
humanoid hardware, demonstrating its capability for variable-frequency and 3-D
discrete terrain locomotion with only a one-step preview of terrain data.
|
2502.02936
|
Every Angle Is Worth A Second Glance: Mining Kinematic Skeletal
Structures from Multi-view Joint Cloud
|
cs.CV
|
Multi-person motion capture over sparse angular observations is a challenging
problem under interference from both self- and mutual-occlusions. Existing
works produce accurate 2D joint detection, however, when these are triangulated
and lifted into 3D, available solutions all struggle in selecting the most
accurate candidates and associating them to the correct joint type and target
identity. As such, in order to fully utilize all accurate 2D joint location
information, we propose to independently triangulate between all same-typed 2D
joints from all camera views regardless of their target ID, forming the Joint
Cloud. The Joint Cloud consists of both valid joints, lifted from the same
joint type and target ID, and falsely constructed ones from mismatched 2D
sources. These redundant and inaccurate candidates are processed by the
proposed Joint Cloud Selection and Aggregation Transformer (JCSAT) involving
three cascaded encoders which deeply explore the trajectile, skeletal
structural, and view-dependent correlations among all 3D point candidates in
the cross-embedding space. An Optimal Token Attention Path (OTAP) module is
proposed which subsequently selects and aggregates informative features from
these redundant observations for the final prediction of human motion. To
demonstrate the effectiveness of JCSAT, we build and publish a new multi-person
motion capture dataset BUMocap-X with complex interactions and severe
occlusions. Comprehensive experiments over the newly presented as well as
benchmark datasets validate the effectiveness of the proposed framework, which
outperforms all existing state-of-the-art methods, especially under challenging
occlusion scenarios.
|
2502.02938
|
LLaVAC: Fine-tuning LLaVA as a Multimodal Sentiment Classifier
|
cs.CL
|
We present LLaVAC, a method for constructing a classifier for multimodal
sentiment analysis. This method leverages fine-tuning of the Large Language and
Vision Assistant (LLaVA) to predict sentiment labels across both image and text
modalities. Our approach involves designing a structured prompt that
incorporates both unimodal and multimodal labels to fine-tune LLaVA, enabling
it to perform sentiment classification effectively. Experiments on the
MVSA-Single dataset demonstrate that LLaVAC outperforms existing methods in
multimodal sentiment analysis across three data processing procedures. The
implementation of LLaVAC is publicly available at
https://github.com/tchayintr/llavac.
|
2502.02941
|
Fast T2T: Optimization Consistency Speeds Up Diffusion-Based
Training-to-Testing Solving for Combinatorial Optimization
|
cs.LG
|
Diffusion models have recently advanced Combinatorial Optimization (CO) as a
powerful backbone for neural solvers. However, their iterative sampling process
requiring denoising across multiple noise levels incurs substantial overhead.
We propose to learn direct mappings from different noise levels to the optimal
solution for a given instance, facilitating high-quality generation with
minimal shots. This is achieved through an optimization consistency training
protocol, which, for a given instance, minimizes the difference among samples
originating from varying generative trajectories and time steps relative to the
optimal solution. The proposed model enables fast single-step solution
generation while retaining the option of multi-step sampling to trade for
sampling quality, which offers a more effective and efficient alternative
backbone for neural solvers. In addition, within the training-to-testing (T2T)
framework, to bridge the gap between training on historical instances and
solving new instances, we introduce a novel consistency-based gradient search
scheme during the test stage, enabling more effective exploration of the
solution space learned during training. It is achieved by updating the latent
solution probabilities under objective gradient guidance during the alternation
of noise injection and denoising steps. We refer to this model as Fast T2T.
Extensive experiments on two popular tasks, the Traveling Salesman Problem
(TSP) and Maximal Independent Set (MIS), demonstrate the superiority of Fast
T2T regarding both solution quality and efficiency, even outperforming LKH
given limited time budgets. Notably, Fast T2T with merely one-step generation
and one-step gradient search can mostly outperform the SOTA diffusion-based
counterparts that require hundreds of steps, while achieving tens of times
speedup.
|
2502.02943
|
Behavioral Homophily in Social Media via Inverse Reinforcement Learning:
A Reddit Case Study
|
cs.SI cs.LG
|
Online communities play a critical role in shaping societal discourse and
influencing collective behavior in the real world. The tendency for people to
connect with others who share similar characteristics and views, known as
homophily, plays a key role in the formation of echo chambers which further
amplify polarization and division. Existing works examining homophily in online
communities traditionally infer it using content- or adjacency-based
approaches, such as constructing explicit interaction networks or performing
topic analysis. These methods fall short for platforms where interaction
networks cannot be easily constructed and fail to capture the complex nature of
user interactions across the platform. This work introduces a novel approach
for quantifying user homophily. We first use an Inverse Reinforcement Learning
(IRL) framework to infer users' policies, then use these policies as a measure
of behavioral homophily. We apply our method to Reddit, conducting a case study
across 5.9 million interactions over six years, demonstrating how this approach
uncovers distinct behavioral patterns and user roles that vary across different
communities. We further validate our behavioral homophily measure against
traditional content-based homophily, offering a powerful method for analyzing
social media dynamics and their broader societal implications. We find, among
other things, that users can behave very similarly (high behavioral homophily) when
discussing entirely different topics like soccer vs e-sports (low topical
homophily), and that there is an entire class of users on Reddit whose purpose
seems to be to disagree with others.
|
2502.02945
|
LLM-KT: Aligning Large Language Models with Knowledge Tracing using a
Plug-and-Play Instruction
|
cs.CL cs.AI
|
The knowledge tracing (KT) problem is an extremely important topic in
personalized education, which aims to predict whether students can correctly
answer the next question based on their past question-answer records. Prior
work on this task mainly focused on learning the sequence of behaviors based on
the IDs or textual information. However, these studies usually fail to capture
sufficient student behavioral patterns, as they do not reason with rich world
knowledge about the questions. In this paper, we propose a large language model
(LLM)-based framework for KT, named \texttt{\textbf{LLM-KT}}, to integrate the
strengths of LLMs and traditional sequence interaction models. For task-level
alignment, we design Plug-and-Play instruction to align LLMs with KT,
leveraging LLMs' rich knowledge and powerful reasoning capacity. For
modality-level alignment, we design the plug-in context and sequence to
integrate multiple modalities learned by traditional methods. To capture the
long context of history records, we present a plug-in context to flexibly
insert the compressed context embedding into LLMs using question-specific and
concept-specific tokens. Furthermore, we introduce a plug-in sequence to
enhance LLMs with sequence interaction behavior representation learned by
traditional sequence models using a sequence adapter. Extensive experiments
show that \texttt{\textbf{LLM-KT}} obtains state-of-the-art performance on four
typical datasets by comparing it with approximately 20 strong baselines.
|
2502.02951
|
VQA-Levels: A Hierarchical Approach for Classifying Questions in VQA
|
cs.CV cs.AI cs.LG
|
Designing datasets for Visual Question Answering (VQA) is a difficult and
complex task that requires NLP for parsing and computer vision for analysing
the relevant aspects of the image for answering the question asked. Several
benchmark datasets have been developed by researchers but there are many issues
with using them for methodical performance tests. This paper proposes a new
benchmark dataset -- a pilot version called VQA-Levels is ready now -- for
testing VQA systems systematically and assisting researchers in advancing the
field. The questions are classified into seven levels ranging from direct
answers based on low-level image features (without needing even a classifier)
to those requiring high-level abstraction of the entire image content. The
questions in the dataset exhibit one or many of ten properties. Each is
categorised into a specific level from 1 to 7. Levels 1-3 are based directly on
the visual content, while the remaining levels require extra knowledge about the
objects in the image. Each question generally has a unique one or two-word
answer. The questions are 'natural' in the sense that a human is likely to ask
such a question when seeing the images. An example question at Level 1 is,
``What is the shape of the red colored region in the image?" while at Level 7,
it is, ``Why is the man cutting the paper?". Initial testing of the proposed
dataset on some of the existing VQA systems reveals that their success is high
on Level 1 (low level features) and Level 2 (object classification) questions,
least on Level 3 (scene text) followed by Level 6 (extrapolation) and Level 7
(whole scene analysis) questions. The work in this paper will go a long way
toward systematically analyzing VQA systems.
|
2502.02954
|
Direct Distributional Optimization for Provable Alignment of Diffusion
Models
|
cs.LG
|
We introduce a novel alignment method for diffusion models from distribution
optimization perspectives while providing rigorous convergence guarantees. We
first formulate the problem as a generic regularized loss minimization over
probability distributions and directly optimize the distribution using the Dual
Averaging method. Next, we enable sampling from the learned distribution by
approximating its score function via Doob's $h$-transform technique. The
proposed framework is supported by rigorous convergence guarantees and an
end-to-end bound on the sampling error, which imply that when the original
distribution's score is known accurately, the complexity of sampling from
shifted distributions is independent of isoperimetric conditions. This
framework is broadly applicable to general distribution optimization problems,
including alignment tasks in Reinforcement Learning with Human Feedback (RLHF),
Direct Preference Optimization (DPO), and Kahneman-Tversky Optimization (KTO).
We empirically validate its performance on synthetic and image datasets using
the DPO objective.
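The Doob $h$-transform step admits a compact sketch. Under the usual exponential-tilt formulation of alignment (an assumption here; the paper's exact regularized objective may differ), with reward $r$ and temperature $\beta$:

```latex
% pi is the aligned distribution, p the pretrained one; X_t follows the
% forward diffusion. r and beta are assumed, not taken from the abstract.
\[
\pi(x_0) \;\propto\; p(x_0)\, e^{r(x_0)/\beta},
\qquad
h_t(x) \;=\; \mathbb{E}\!\left[ e^{r(X_0)/\beta} \,\middle|\, X_t = x \right],
\]
\[
\nabla_x \log \pi_t(x) \;=\; \nabla_x \log p_t(x) \;+\; \nabla_x \log h_t(x).
\]
```

Approximating $\nabla_x \log h_t$ is what turns the optimized distribution into something a diffusion sampler can draw from.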
|
2502.02955
|
ReachAgent: Enhancing Mobile Agent via Page Reaching and Operation
|
cs.CL cs.AI
|
Recently, mobile AI agents have gained increasing attention. Given a task,
mobile AI agents can interact with mobile devices in multiple steps and finally
form a GUI flow that solves the task. However, existing agents tend to focus on
the most task-relevant elements at each step, leading to locally optimal
solutions and ignoring the overall GUI flow. To address this issue, we constructed a
training dataset called MobileReach, which breaks the task into page reaching
and operation subtasks. Furthermore, we propose ReachAgent, a two-stage
framework that focuses on improving its task-completion abilities. It utilizes
the page reaching and page operation subtasks, along with reward-based
preference GUI flows, to further enhance the agent. Experimental results show
that ReachAgent significantly improves the IoU Acc and Text Acc by 7.12% and
7.69% on the step-level and 4.72% and 4.63% on the task-level compared to the
SOTA agent. Our data and code will be released upon acceptance.
|
2502.02957
|
Control Search Rankings, Control the World: What is a Good Search
Engine?
|
cs.IR cs.CY
|
This paper examines the ethical question, 'What is a good search engine?'
Since search engines are gatekeepers of global online information, it is vital
that they do their job ethically well. While the Internet is now several decades
old, the topic remains under-explored from interdisciplinary perspectives. This
paper presents a novel role-based approach involving four ethical models of
types of search engine behavior: Customer Servant, Librarian, Journalist, and
Teacher. It explores these ethical models with reference to the research field
of information retrieval, and by means of a case study involving the COVID-19
global pandemic. It also reflects on the four ethical models in terms of the
history of search engine development, from earlier crude efforts in the 1990s,
to the very recent prospect of Large Language Model-based conversational
information seeking systems taking on the roles of established web search
engines like Google. Finally, the paper outlines considerations that inform
present and future regulation and accountability for search engines as they
continue to evolve. The paper should interest information retrieval researchers
and others interested in the ethics of search engines.
|
2502.02958
|
Position: Editing Large Language Models Poses Serious Safety Risks
|
cs.CL
|
Large Language Models (LLMs) contain large amounts of facts about the world.
These facts can become outdated over time, which has led to the development of
knowledge editing methods (KEs) that can change specific facts in LLMs with
limited side effects. This position paper argues that editing LLMs poses
serious safety risks that have been largely overlooked. First, we note that
because KEs are widely available, computationally inexpensive, highly
performant, and stealthy, they are an attractive tool for malicious actors. Second, we
discuss malicious use cases of KEs, showing how KEs can be easily adapted for a
variety of malicious purposes. Third, we highlight vulnerabilities in the AI
ecosystem that allow unrestricted uploading and downloading of updated models
without verification. Fourth, we argue that a lack of social and institutional
awareness exacerbates this risk, and discuss the implications for different
stakeholders. We call on the community to (i) research tamper-resistant models
and countermeasures against malicious model editing, and (ii) actively engage
in securing the AI ecosystem.
|
2502.02963
|
(Neural-Symbolic) Machine Learning for Inconsistency Measurement
|
cs.AI
|
We present machine-learning-based approaches for determining the
\emph{degree} of inconsistency -- which is a numerical value -- for
propositional logic knowledge bases. Specifically, we present regression- and
neural-based models that learn to predict the values that the inconsistency
measures $\incmi$ and $\incat$ would assign to propositional logic knowledge
bases. Our main motivation is that computing these values conventionally can be
computationally hard. As an important addition, we use specific postulates,
that is, properties, of the underlying inconsistency measures to infer symbolic
rules, which we combine with the learning-based models in the form of
constraints. We perform various experiments and show that a) predicting the
degree values is feasible in many situations, and b) including the symbolic
constraints deduced from the rationality postulates increases the prediction
quality.
|
2502.02966
|
FACTER: Fairness-Aware Conformal Thresholding and Prompt Engineering for
Enabling Fair LLM-Based Recommender Systems
|
cs.IR cs.AI cs.CY cs.LG
|
We propose FACTER, a fairness-aware framework for LLM-based recommendation
systems that integrates conformal prediction with dynamic prompt engineering.
By introducing an adaptive semantic variance threshold and a
violation-triggered mechanism, FACTER automatically tightens fairness
constraints whenever biased patterns emerge. We further develop an adversarial
prompt generator that leverages historical violations to reduce repeated
demographic biases without retraining the LLM. Empirical results on MovieLens
and Amazon show that FACTER substantially reduces fairness violations (up to
95.5%) while maintaining strong recommendation accuracy, revealing semantic
variance as a potent proxy of bias.
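The violation-triggered mechanism described above can be sketched in a few lines; the multiplicative tighten/relax rule and its constants below are hypothetical illustrations, not FACTER's exact update.

```python
def update_threshold(tau, violation, tighten=0.9, relax=1.01, tau_max=1.0):
    """Tighten the semantic-variance threshold after a fairness violation,
    otherwise relax it slowly back toward a ceiling (hypothetical rule)."""
    return tau * tighten if violation else min(tau_max, tau * relax)
```

Repeated violations shrink the threshold geometrically, so the fairness constraint becomes stricter precisely when biased patterns emerge.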
|
2502.02967
|
Demonstrating a Control Framework for Physical Human-Robot Interaction
Toward Industrial Applications
|
cs.RO cs.SY eess.SY
|
Physical Human-Robot Interaction (pHRI) is critical for implementing Industry
5.0, which focuses on human-centric approaches. However, few studies explore the
practical alignment of pHRI to industrial grade performance. This paper
introduces a versatile control framework designed to bridge this gap by
incorporating torque-based control modes: compliance control, null-space
compliance, and dual compliance, all in static and dynamic scenarios. Thanks to our
second-order Quadratic Programming (QP) formulation, strict kinematic and
collision constraints are integrated into the system as safety features, and a
weighted hierarchy guarantees singularity-robust task tracking performance. The
framework is implemented on a Kinova Gen3 collaborative robot (cobot) equipped
with a Bota force/torque sensor. A DualShock 4 game controller is attached at
the robot's end-effector to demonstrate the framework's capabilities. This
setup enables seamless dynamic switching between the modes, and real-time
adjustment of parameters, such as transitioning between position and torque
control or selecting a more robust custom-developed low-level torque controller
over the default one. Built on the open-source robotic control software mc_rtc
to ensure reproducibility for both research and industrial deployment, this
framework demonstrates industrial-grade performance and repeatability,
showcasing its potential as a robust pHRI control system for industrial
environments.
|
2502.02970
|
Membership Inference Attack Should Move On to Distributional Statistics
for Distilled Generative Models
|
cs.LG
|
Membership inference attacks (MIAs) determine whether certain data instances
were used to train a model by exploiting the differences in how the model
responds to seen versus unseen instances. This capability makes MIAs important
in assessing privacy leakage within modern generative AI systems. However, this
paper reveals an oversight in existing MIAs against \emph{distilled generative
models}: attackers can no longer detect a teacher model's training instances
individually when targeting the distilled student model, as the student learns
from the teacher-generated data rather than its original member data,
preventing direct instance-level memorization. Nevertheless, we find that
student-generated samples exhibit a significantly stronger distributional
alignment with teacher's member data than non-member data. This leads us to
posit that MIAs \emph{on distilled generative models should shift from
instance-level to distribution-level statistics}. We thereby introduce a
\emph{set-based} MIA framework that measures \emph{relative} distributional
discrepancies between student-generated data\emph{sets} and potential
member/non-member data\emph{sets}. Empirically, distributional statistics
reliably distinguish a teacher's member data from non-member data through the
distilled model. Finally, we discuss scenarios in which our setup faces
limitations.
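A set-level statistic such as the maximum mean discrepancy (MMD) illustrates the shift from instance-level to distribution-level tests; the Gaussian kernel and scalar toy samples below are illustrative choices, not the paper's estimator.

```python
import math

def mmd2(xs, ys, sigma=1.0):
    """Biased squared MMD between two scalar sample sets under a Gaussian kernel."""
    k = lambda a, b: math.exp(-((a - b) ** 2) / (2 * sigma ** 2))
    mean_k = lambda s, t: sum(k(a, b) for a in s for b in t) / (len(s) * len(t))
    return mean_k(xs, xs) + mean_k(ys, ys) - 2 * mean_k(xs, ys)

# Student-generated samples sit closer in MMD to member-like data than to
# non-member data, even when no individual instance is memorized.
student = [0.00, 0.10, -0.10, 0.05]
members = [0.05, -0.05, 0.00, 0.10]
nonmembers = [5.0, 5.1, 4.9, 5.05]
```

Comparing `mmd2(student, members)` against `mmd2(student, nonmembers)` then yields a relative, set-based membership signal.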
|
2502.02972
|
Label Anything: An Interpretable, High-Fidelity and Prompt-Free
Annotator
|
cs.RO cs.LG
|
Learning-based street scene semantic understanding in autonomous driving (AD)
has advanced significantly recently, but the performance of the AD model is
heavily dependent on the quantity and quality of the annotated training data.
However, traditional manual labeling incurs a high cost to annotate the vast
amount of data required to train a robust model. To mitigate this cost of
manual labeling, we propose a Label Anything Model (denoted as LAM), serving as
an interpretable, high-fidelity, and prompt-free data annotator. Specifically,
we first incorporate a pretrained Vision Transformer (ViT) to extract the
latent features. On top of ViT, we propose a semantic class adapter (SCA) and
an optimization-oriented unrolling algorithm (OptOU), both with a quite small
number of trainable parameters. SCA is proposed to fuse ViT-extracted features
to consolidate the basis of the subsequent automatic annotation. OptOU consists
of multiple cascading layers, each containing an optimization
formulation that aligns its output with the ground truth as closely as possible,
through which OptOU is interpretable by design rather than a learning-based
black box. In addition, training SCA and OptOU requires only a single
pre-annotated RGB seed image, owing to their small volume of learnable
parameters. Extensive experiments clearly demonstrate that the proposed LAM can
generate high-fidelity annotations (almost 100% in mIoU) for multiple
real-world datasets (i.e., Camvid, Cityscapes, and Apolloscapes) and CARLA
simulation dataset.
|
2502.02975
|
TGB-Seq Benchmark: Challenging Temporal GNNs with Complex Sequential
Dynamics
|
cs.LG cs.AI
|
Future link prediction is a fundamental challenge in various real-world
dynamic systems. To address this, numerous temporal graph neural networks
(temporal GNNs) and benchmark datasets have been developed. However, these
datasets often feature excessive repeated edges and lack complex sequential
dynamics, a key characteristic inherent in many real-world applications such as
recommender systems and ``Who-To-Follow'' on social networks. This oversight
has led existing methods to inadvertently downplay the importance of learning
sequential dynamics, focusing primarily on predicting repeated edges.
In this study, we demonstrate that existing methods, such as GraphMixer and
DyGFormer, are inherently incapable of learning simple sequential dynamics,
such as ``a user who has followed OpenAI and Anthropic is more likely to follow
AI at Meta next.'' Motivated by this issue, we introduce the Temporal Graph
Benchmark with Sequential Dynamics (TGB-Seq), a new benchmark carefully curated
to minimize repeated edges, challenging models to learn sequential dynamics and
generalize to unseen edges. TGB-Seq comprises large real-world datasets
spanning diverse domains, including e-commerce interactions, movie ratings,
business reviews, social networks, citation networks and web link networks.
Benchmarking experiments reveal that current methods usually suffer significant
performance degradation and incur substantial training costs on TGB-Seq, posing
new challenges and opportunities for future research. TGB-Seq datasets,
leaderboards, and example codes are available at https://tgb-seq.github.io/.
|
2502.02977
|
Disentangling CLIP Features for Enhanced Localized Understanding
|
cs.CV
|
Vision-language models (VLMs) demonstrate impressive capabilities in
coarse-grained tasks like image classification and retrieval. However, they
struggle with fine-grained tasks that require localized understanding. To
investigate this weakness, we comprehensively analyze CLIP features and
identify an important issue: semantic features are highly correlated.
Specifically, the features of a class encode information about other classes,
which we call mutual feature information (MFI). This mutual information becomes
evident when we query a specific class and unrelated objects are activated
along with the target class. To address this issue, we propose Unmix-CLIP, a
novel framework designed to reduce MFI and improve feature disentanglement. We
introduce MFI loss, which explicitly separates text features by projecting them
into a space where inter-class similarity is minimized. To ensure a
corresponding separation in image features, we use multi-label recognition
(MLR) to align the image features with the separated text features. This
ensures that both image and text features are disentangled and aligned across
modalities, improving feature separation for downstream tasks. For the COCO-14
dataset, Unmix-CLIP reduces feature similarity by 24.9%. We demonstrate its
effectiveness through extensive evaluations of MLR and zero-shot semantic
segmentation (ZS3). In MLR, our method performs competitively on VOC2007
and surpasses SOTA approaches on the COCO-14 dataset, using fewer training
parameters. Additionally, Unmix-CLIP consistently outperforms existing ZS3
methods on COCO and VOC.
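The core idea of the MFI loss, penalizing similarity among class (text) embeddings, can be sketched as a mean pairwise cosine-similarity penalty; this plain-Python stand-in is an assumption about the general shape of such a loss, not the paper's exact projection objective.

```python
def mfi_penalty(class_feats):
    """Mean absolute pairwise cosine similarity between distinct class
    embeddings; minimizing it pushes classes toward mutual orthogonality."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / ((sum(a * a for a in u) ** 0.5) * (sum(b * b for b in v) ** 0.5))
    n = len(class_feats)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(abs(cos(class_feats[i], class_feats[j])) for i, j in pairs) / len(pairs)
```

Orthogonal class embeddings drive the penalty to zero, while highly correlated ones (the MFI failure mode) score near one.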
|
2502.02982
|
FedMobileAgent: Training Mobile Agents Using Decentralized Self-Sourced
Data from Diverse Users
|
cs.AI
|
The advancement of mobile agents has opened new opportunities for automating
tasks on mobile devices. Training these agents requires large-scale,
high-quality data, which is costly to collect with human labor. Given the vast number of
mobile phone users worldwide, if automated data collection from them is
feasible, the resulting data volume and the subsequently trained mobile agents
could reach unprecedented levels. Nevertheless, two major challenges arise: (1)
extracting high-level and low-level user instructions without involving humans,
and (2) utilizing distributed data from diverse users while preserving privacy.
To tackle these challenges, we propose FedMobileAgent, a collaborative
framework that trains mobile agents using self-sourced data from diverse users.
Specifically, it includes two techniques. First, we propose Auto-Annotation,
which enables the automatic collection of high-quality datasets during users'
routine phone usage with minimal cost. Second, we introduce adapted aggregation
to improve federated training of mobile agents on non-IID user data, by
incorporating both episode- and step-level distributions. In distributed
settings, FedMobileAgent achieves performance comparable to centralized
human-annotated models at less than 0.02\% of the cost, highlighting its
potential for real-world applications.
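The adapted aggregation step can be illustrated as federated averaging whose client weights mix episode- and step-level proportions; the linear mixing rule and the `beta` parameter below are assumptions for illustration, not the paper's exact scheme.

```python
def adapted_aggregate(client_params, episodes, steps, beta=0.5):
    """Average client parameter vectors with weights that mix each client's
    share of total episodes and of total steps (hypothetical mixing rule)."""
    e_tot, s_tot = sum(episodes), sum(steps)
    mix = [beta * e / e_tot + (1 - beta) * s / s_tot
           for e, s in zip(episodes, steps)]
    dim = len(client_params[0])
    return [sum(m * p[i] for m, p in zip(mix, client_params)) for i in range(dim)]
```

Because the mixing weights sum to one, a client with many short episodes and a client with few long ones both contribute in proportion to the two distributions.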
|
2502.02984
|
Learning Efficient Flocking Control based on Gibbs Random Fields
|
cs.RO cs.LG cs.SY eess.SY
|
Flocking control is essential for multi-robot systems in diverse
applications, yet achieving efficient flocking in congested environments poses
challenges regarding computation burdens, performance optimality, and motion
safety. This paper addresses these challenges through a multi-agent
reinforcement learning (MARL) framework built on Gibbs Random Fields (GRFs).
With GRFs, a multi-robot system is represented by a set of random variables
conforming to a joint probability distribution, thus offering a fresh
perspective on flocking reward design. A decentralized training and execution
mechanism, which enhances the scalability of MARL concerning robot quantity, is
realized using a GRF-based credit assignment method. An action attention module
is introduced to implicitly anticipate the motion intentions of neighboring
robots, consequently mitigating potential non-stationarity issues in MARL. The
proposed framework enables learning an efficient distributed control policy for
multi-robot systems in challenging environments with a success rate of around
$99\%$, as demonstrated through thorough comparisons with state-of-the-art
solutions in simulations and experiments. Ablation studies are also performed
to validate the efficiency of different framework modules.
|
2502.02988
|
Training an LLM-as-a-Judge Model: Pipeline, Insights, and Practical
Lessons
|
cs.CL cs.AI cs.LG
|
The rapid advancement of large language models (LLMs) has opened new
possibilities for their adoption as evaluative judges. This paper introduces
Themis, a fine-tuned LLM judge that delivers sophisticated context-aware
evaluations. We provide a comprehensive overview of the development pipeline
for Themis, highlighting its scenario-dependent evaluation prompts and two
novel methods for controlled instruction generation. These designs enable
Themis to effectively distill evaluative skills from teacher models, while
retaining flexibility for continuous development. We introduce two
human-labeled benchmarks for meta-evaluation, demonstrating that Themis can
achieve high alignment with human preferences in an economical manner.
Additionally, we explore insights into the LLM-as-a-judge paradigm, revealing
nuances in performance and the varied effects of reference answers. Notably, we
observe that pure knowledge distillation from strong LLMs, though common, does
not guarantee performance improvement through scaling. We propose a mitigation
strategy based on instruction-following difficulty. Furthermore, we provide
practical guidelines covering data balancing, prompt customization,
multi-objective training, and metric aggregation. We aim for our method and
findings, along with the fine-tuning data, benchmarks, and model checkpoints,
to support future research and development in this area.
|
2502.02996
|
Building Bridges between Regression, Clustering, and Classification
|
stat.ML cs.LG
|
Regression, the task of predicting a continuous scalar target y based on some
features x, is one of the most fundamental tasks in machine learning and
statistics. It has been observed and theoretically analyzed that the classical
approach, mean-squared error minimization, can lead to suboptimal results when
training neural networks. In this work, we propose a new method to improve the
training of these models on regression tasks, with continuous scalar targets.
Our method is based on casting this task in a different fashion, using a target
encoder, and a prediction decoder, inspired by approaches in classification and
clustering. We showcase the performance of our method on a wide range of
real-world datasets.
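Casting regression through a target encoder and prediction decoder can be sketched as classifying over a discretized target and decoding class probabilities back to a scalar; the equal-width binning and expectation decoder below are generic choices, not necessarily the paper's.

```python
def encode_target(y, edges):
    """Return the bin index of scalar target y given sorted bin edges
    (a simple discretization-based target encoder)."""
    for i in range(len(edges) - 1):
        if y < edges[i + 1]:
            return i
    return len(edges) - 2  # clamp values at or above the last edge

def decode_prediction(probs, edges):
    """Map predicted bin probabilities back to a scalar via the expected
    bin center (a simple prediction decoder)."""
    centers = [(edges[i] + edges[i + 1]) / 2 for i in range(len(edges) - 1)]
    return sum(p * c for p, c in zip(probs, centers))
```

A classifier trained on the encoded bins, composed with the decoder, then solves the original regression task.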
|
2502.02997
|
Assessing Research Impact in Indian Conference Proceedings: Insights
from Collaboration and Citations
|
cs.IR
|
Conferences serve as a crucial avenue for scientific communication. However,
the increase in conferences and the subsequent publication of proceedings have
prompted inquiries regarding the research quality being showcased at such
events. This investigation delves into the conference publications indexed by
Springer's Lecture Notes in Networks and Systems Series. Among the 570
international conferences held worldwide in this series, 177 were exclusively
hosted in India. These 177 conferences collectively published 11,066 papers as
conference proceedings. All these publications, along with conference details,
were sourced from the Scopus database. The study aims to evaluate the research
impact of these conference proceedings and identify the primary contributors.
The results reveal a downward trend in the average number of citations per
year. The collective average citation for all publications is 1.01. Papers
co-authored by Indian and international authors (5.6%) exhibit a higher average
impact of 1.44, in contrast to those authored solely by Indian authors (84.9%),
which have an average impact of 0.97. Notably, Indian-collaborated papers,
among the largest contributors, predominantly originate from private colleges
and universities. Only 19% of papers exhibit collaboration with institutes of
different prestige, yet their impact is considerably higher compared to
collaboration with institutes of similar prestige. This study highlights the
importance of improving research quality in academic forums.
|
2502.02998
|
Conformal Uncertainty Indicator for Continual Test-Time Adaptation
|
cs.LG
|
Continual Test-Time Adaptation (CTTA) aims to adapt models to sequentially
changing domains during testing, relying on pseudo-labels for self-adaptation.
However, incorrect pseudo-labels can accumulate, leading to performance
degradation. To address this, we propose a Conformal Uncertainty Indicator
(CUI) for CTTA, leveraging Conformal Prediction (CP) to generate prediction
sets that include the true label with a specified coverage probability. Since
domain shifts can lower the coverage below the expected level, making CP unreliable, we
dynamically compensate for the coverage by measuring both domain and data
differences. Reliable pseudo-labels from CP are then selectively utilized to
enhance adaptation. Experiments confirm that CUI effectively estimates
uncertainty and improves adaptation performance across various existing CTTA
methods.
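The conformal-prediction backbone of this approach, building label sets from a calibration quantile, can be sketched as standard split conformal classification; the CUI coverage compensation itself is not reproduced here.

```python
def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Finite-sample-corrected (1 - alpha) quantile of the nonconformity
    scores 1 - p(true label) on a held-out calibration set."""
    scores = sorted(1.0 - p[y] for p, y in zip(cal_probs, cal_labels))
    n = len(scores)
    k = min(n - 1, int((n + 1) * (1 - alpha)))
    return scores[k]

def prediction_set(probs, qhat):
    """All labels whose nonconformity score stays within the threshold."""
    return {c for c, p in enumerate(probs) if 1.0 - p <= qhat}
```

Under exchangeability, the returned set contains the true label with probability at least 1 - alpha; domain shift breaks this guarantee, which is what the proposed compensation addresses.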
|
2502.03004
|
MedBioLM: Optimizing Medical and Biological QA with Fine-Tuned Large
Language Models and Retrieval-Augmented Generation
|
cs.CL cs.AI
|
Large Language Models (LLMs) have demonstrated impressive capabilities across
natural language processing tasks. However, their application to specialized
domains such as medicine and biology requires further optimization to ensure
factual accuracy, reliability, and contextual depth. We introduce MedBioLM, a
domain-adapted biomedical question-answering model designed to enhance both
short-form and long-form queries. By integrating fine-tuning and
retrieval-augmented generation (RAG), MedBioLM dynamically incorporates
domain-specific knowledge, improving reasoning abilities and factual accuracy.
To evaluate its effectiveness, we fine-tuned the model on diverse biomedical QA
datasets, covering structured multiple-choice assessments and complex clinical
reasoning tasks. Fine-tuning significantly improves accuracy on benchmark
datasets, while RAG enhances factual consistency. These results highlight the
potential of domain-optimized LLMs in advancing biomedical research, medical
education, and clinical decision support.
|
2502.03005
|
Driver Assistance System Based on Multimodal Data Hazard Detection
|
cs.CV cs.LG
|
Autonomous driving technology has advanced significantly, yet detecting
driving anomalies remains a major challenge due to the long-tailed distribution
of driving events. Existing methods primarily rely on single-modal road
condition video data, which limits their ability to capture rare and
unpredictable driving incidents. This paper proposes a multimodal driver
assistance detection system that integrates road condition video, driver facial
video, and audio data to enhance incident recognition accuracy. Our model
employs an attention-based intermediate fusion strategy, enabling end-to-end
learning without separate feature extraction. To support this approach, we
develop a new three-modality dataset using a driving simulator. Experimental
results demonstrate that our method effectively captures cross-modal
correlations, reducing misjudgments and improving driving safety.
|
2502.03006
|
An Augmented Backward-Corrected Projector Splitting Integrator for
Dynamical Low-Rank Training
|
math.NA cs.LG cs.NA
|
Layer factorization has emerged as a widely used technique for training
memory-efficient neural networks. However, layer factorization methods face
several challenges, particularly a lack of robustness during the training
process. To overcome this limitation, dynamical low-rank training methods have
been developed, utilizing robust time integration techniques for low-rank
matrix differential equations. Although these approaches facilitate efficient
training, they still depend on computationally intensive QR and singular value
decompositions of matrices with small rank. In this work, we introduce a novel
low-rank training method that reduces the number of required QR decompositions.
Our approach integrates an augmentation step into a projector-splitting scheme,
ensuring convergence to a locally optimal solution. We provide a rigorous
theoretical analysis of the proposed method and demonstrate its effectiveness
across multiple benchmarks.
|
2502.03009
|
Scaling Laws for Upcycling Mixture-of-Experts Language Models
|
cs.LG cs.CL
|
Pretraining large language models (LLMs) is resource-intensive, often
requiring months of training time even with high-end GPU clusters. There are
two approaches to mitigating such computational demands: reusing smaller models
to train larger ones (upcycling), and training computationally efficient models
like mixture-of-experts (MoE). In this paper, we study the upcycling of LLMs to
MoE models, of which the scaling behavior remains underexplored. Through
extensive experiments, we identify empirical scaling laws that describe how
performance depends on dataset size and model configuration. Particularly, we
show that, while scaling these factors improves performance, there is a novel
interaction term between the dense and upcycled training dataset that limits
the efficiency of upcycling at large computational budgets. Based on these
findings, we provide guidance to scale upcycling, and establish conditions
under which upcycling outperforms from-scratch training within budget
constraints.
|
2502.03014
|
xai_evals : A Framework for Evaluating Post-Hoc Local Explanation
Methods
|
cs.LG cs.AI cs.ET
|
The growing complexity of machine learning and deep learning models has led
to an increased reliance on opaque "black box" systems, making it difficult to
understand the rationale behind predictions. This lack of transparency is
particularly challenging in high-stakes applications where interpretability is
as important as accuracy. Post-hoc explanation methods are commonly used to
interpret these models, but they are seldom rigorously evaluated, raising
concerns about their reliability. The Python package xai_evals addresses this
by providing a comprehensive framework for generating, benchmarking, and
evaluating explanation methods across both tabular and image data modalities.
It integrates popular techniques like SHAP, LIME, Grad-CAM, Integrated
Gradients (IG), and Backtrace, while supporting evaluation metrics such as
faithfulness, sensitivity, and robustness. xai_evals enhances the
interpretability of machine learning models, fostering transparency and trust
in AI systems. The library is open-sourced at
https://pypi.org/project/xai-evals/ .
|
2502.03016
|
An analysis of optimization problems involving ReLU neural networks
|
math.OC cs.LG
|
Solving mixed-integer optimization problems with embedded neural networks
with ReLU activation functions is challenging. Big-M coefficients that arise in
relaxing binary decisions related to these functions grow exponentially with
the number of layers. We survey and propose different approaches to analyze and
improve the run time behavior of mixed-integer programming solvers in this
context. Among them are clipped variants and regularization techniques applied
during training as well as optimization-based bound tightening and a novel
scaling for given ReLU networks. We numerically compare these approaches for
three benchmark problems from the literature. We use the number of linear
regions, the percentage of stable neurons, and overall computational effort as
indicators. As a major takeaway we observe and quantify a trade-off between the
often desired redundancy of neural network models versus the computational
costs for solving related optimization problems.
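Big-M coefficients for the ReLU relaxation come from pre-activation bounds; the simplest baseline is interval arithmetic, sketched below (the optimization-based bound tightening the paper surveys would solve an LP or MILP per neuron instead of this one-pass propagation).

```python
def relu_layer_bounds(lb, ub, W, b):
    """Propagate elementwise input bounds [lb, ub] through y = ReLU(Wx + b)
    via interval arithmetic; the pre-activation bounds double as big-M values."""
    post = []
    for w_row, bias in zip(W, b):
        lo = bias + sum(w * (lb[j] if w >= 0 else ub[j]) for j, w in enumerate(w_row))
        hi = bias + sum(w * (ub[j] if w >= 0 else lb[j]) for j, w in enumerate(w_row))
        post.append((max(0.0, lo), max(0.0, hi)))
    return post
```

A neuron whose upper bound collapses to zero is stable (always inactive), so its binary variable can be fixed, directly linking tighter bounds to the stable-neuron percentage used as an indicator.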
|
2502.03020
|
Higher-order shortest paths in hypergraphs
|
physics.soc-ph cs.SI
|
One of the defining features of complex networks is the connectivity
properties that we observe emerging from local interactions. Recently,
hypergraphs have emerged as a versatile tool to model networks with non-dyadic,
higher-order interactions. Nevertheless, the connectivity properties of
real-world hypergraphs remain largely understudied. In this work we introduce
path size as a measure to characterise higher-order connectivity and quantify
the relevance of non-dyadic ties for efficient shortest paths in a diverse set
of empirical networks with and without temporal information. By comparing our
results with simple randomised null models, our analysis presents a nuanced
picture, suggesting that non-dyadic ties are often central and are vital for
system connectivity, while dyadic edges remain essential to connect more
peripheral nodes, an effect which is particularly pronounced for time-varying
systems. Our work contributes to a better understanding of the structural
organisation of systems with higher-order interactions.
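Shortest paths in which a single hop traverses any shared hyperedge can be computed with a plain BFS; the paper's path-size measure additionally records the sizes of the hyperedges used, which this sketch omits.

```python
from collections import deque

def shortest_hyperpath_length(hyperedges, src, dst):
    """Number of hyperedges on a shortest path from src to dst, where one
    step moves between any two nodes sharing a hyperedge; None if unreachable."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            return dist[u]
        for edge in hyperedges:
            if u in edge:
                for v in edge:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
    return None
```

Adding one large hyperedge can collapse a long dyadic chain to a single hop, which is exactly why non-dyadic ties matter for efficient shortest paths.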
|
2502.03023
|
Parametric Scaling Law of Tuning Bias in Conformal Prediction
|
cs.LG math.ST stat.ME stat.TH
|
Conformal prediction is a popular framework of uncertainty quantification
that constructs prediction sets with coverage guarantees. To uphold the
exchangeability assumption, many conformal prediction methods necessitate an
additional holdout set for parameter tuning. Yet, the impact of violating this
principle on coverage remains underexplored, making it ambiguous in practical
applications. In this work, we empirically find that the tuning bias, i.e., the
coverage gap introduced by leveraging the same dataset for tuning and
calibration, is negligible for simple parameter tuning in many conformal
prediction methods. In particular, we observe the scaling law of the tuning
bias: this bias increases with parameter space complexity and decreases with
calibration set size. Formally, we establish a theoretical framework to
quantify the tuning bias and provide rigorous proof for the scaling law of the
tuning bias by deriving its upper bound. In the end, we discuss how to reduce
the tuning bias, guided by the theories we developed.
|
2502.03029
|
On Zero-Initialized Attention: Optimal Prompt and Gating Factor
Estimation
|
cs.LG
|
The LLaMA-Adapter has recently emerged as an efficient fine-tuning technique
for LLaMA models, leveraging zero-initialized attention to stabilize training
and enhance performance. However, despite its empirical success, the
theoretical foundations of zero-initialized attention remain largely
unexplored. In this paper, we provide a rigorous theoretical analysis,
establishing a connection between zero-initialized attention and
mixture-of-expert models. We prove that both linear and non-linear prompts,
along with gating functions, can be optimally estimated, with non-linear
prompts offering greater flexibility for future applications. Empirically, we
validate our findings on the open LLM benchmarks, demonstrating that non-linear
prompts outperform linear ones. Notably, even with limited training data, both
prompt types consistently surpass vanilla attention, highlighting the
robustness and adaptability of zero-initialized attention.
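Zero-initialized attention can be sketched as a learnable gate, initialized to zero, that scales the prompt-conditioned branch so training starts exactly from the frozen model's behavior; the scalar gate and vector shapes below are illustrative simplifications.

```python
import math

def zero_init_attention(base_out, prompt_out, gate):
    """Blend the frozen attention output with the prompt branch through a
    tanh gate; gate = 0 reproduces the base model exactly."""
    g = math.tanh(gate)
    return [b + g * p for b, p in zip(base_out, prompt_out)]
```

Starting from gate = 0 means early gradients cannot destabilize the pretrained model, while training gradually opens the gate to admit the prompt signal.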
|
2502.03032
|
Analyze Feature Flow to Enhance Interpretation and Steering in Language
Models
|
cs.LG cs.CL
|
We introduce a new approach to systematically map features discovered by
sparse autoencoders across consecutive layers of large language models,
extending earlier work that examined inter-layer feature links. By using a
data-free cosine similarity technique, we trace how specific features persist,
transform, or first appear at each stage. This method yields granular flow
graphs of feature evolution, enabling fine-grained interpretability and
mechanistic insights into model computations. Crucially, we demonstrate how
these cross-layer feature maps facilitate direct steering of model behavior by
amplifying or suppressing chosen features, achieving targeted thematic control
in text generation. Together, our findings highlight the utility of a causal,
cross-layer interpretability framework that not only clarifies how features
develop through forward passes but also provides new means for transparent
manipulation of large language models.
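Data-free matching of features across layers by cosine similarity of their decoder directions can be sketched as a nearest-neighbor search; the toy two-dimensional vectors are illustrative, and real SAE decoder matrices would be much wider.

```python
def match_features(layer_a, layer_b):
    """For each feature direction in layer_a, return the index of the most
    cosine-similar direction in layer_b (data-free cross-layer matching)."""
    def cos(u, v):
        dot = sum(x * y for x, y in zip(u, v))
        return dot / ((sum(x * x for x in u) ** 0.5) * (sum(y * y for y in v) ** 0.5))
    return [max(range(len(layer_b)), key=lambda j: cos(f, layer_b[j]))
            for f in layer_a]
```

Chaining these matches layer by layer yields the flow graphs of feature persistence and transformation described above.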
|
2502.03033
|
Aggregate to Adapt: Node-Centric Aggregation for Multi-Source-Free Graph
Domain Adaptation
|
cs.LG
|
Unsupervised graph domain adaptation (UGDA) focuses on transferring knowledge
from labeled source graph to unlabeled target graph under domain discrepancies.
Most existing UGDA methods are designed to adapt information from a single
source domain, which cannot effectively exploit the complementary knowledge
from multiple source domains. Furthermore, their assumptions that the labeled
source graphs are accessible throughout the training procedure might not be
practical due to privacy, regulation, and storage concerns. In this paper, we
investigate multi-source-free unsupervised graph domain adaptation, i.e.,
adapting knowledge from multiple source domains to an unlabeled target domain
without utilizing labeled source graphs but relying solely on source
pre-trained models. Unlike previous multi-source domain adaptation approaches
that aggregate predictions at model level, we introduce a novel model named
GraphATA which conducts adaptation at node granularity. Specifically, we
parameterize each node with its own graph convolutional matrix by automatically
aggregating weight matrices from multiple source models according to its local
context, thus realizing dynamic adaptation over graph structured data. We also
demonstrate the capability of GraphATA to generalize to both model-centric and
layer-centric methods. Comprehensive experiments on various public datasets
show that our GraphATA can consistently surpass recent state-of-the-art
baselines with different gains.
|
2502.03034
|
Knowledge Distillation from Large Language Models for Household Energy
Modeling
|
cs.CL cs.LG
|
Machine learning (ML) is increasingly vital for smart-grid research, yet
restricted access to realistic, diverse data - often due to privacy concerns -
slows progress and fuels doubts within the energy sector about adopting
ML-based strategies. We propose integrating Large Language Models (LLMs) in
energy modeling to generate realistic, culturally sensitive, and
behavior-specific data for household energy usage across diverse geographies.
In this study, we employ and compare five different LLMs to systematically
produce family structures, weather patterns, and daily consumption profiles for
households in six distinct countries. A four-stage methodology synthesizes
contextual daily data, including culturally nuanced activities, realistic
weather ranges, HVAC operations, and distinct `energy signatures' that capture
unique consumption footprints. Additionally, we explore an alternative strategy
where external weather datasets can be directly integrated, bypassing
intermediate weather modeling stages while ensuring physically consistent data
inputs. The resulting dataset provides insights into how cultural, climatic,
and behavioral factors converge to shape carbon emissions, offering a
cost-effective avenue for scenario-based energy optimization. This approach
underscores how prompt engineering, combined with knowledge distillation, can
advance sustainable energy research and climate mitigation efforts. Source code
is available at
https://github.com/Singularity-AI-Lab/LLM-Energy-Knowledge-Distillation .
|
2502.03035
|
UMC: Unified Resilient Controller for Legged Robots with Joint
Malfunctions
|
cs.RO
|
Adaptation to unpredictable damages is crucial for autonomous legged robots,
yet existing methods based on multi-policy or meta-learning frameworks face
challenges like limited generalization and complex maintenance. To address this
issue, we first analyze and summarize eight types of damage scenarios,
including sensor failures and joint malfunctions. Then, we propose a novel,
model-free, two-stage training framework, Unified Malfunction Controller (UMC),
incorporating a masking mechanism to enhance damage resilience. Specifically,
the model is first trained in normal environments to ensure robust
performance under standard conditions. In the second stage, we use masks to
prevent the legged robot from relying on malfunctioning limbs, enabling
adaptive gait and movement adjustments upon malfunction. Experimental results
demonstrate that our approach improves the task completion capability by an
average of 36% for the transformer and 39% for the MLP across three locomotion
tasks. The source code and trained models will be made available to the public.
|
2502.03036
|
FuXi-$\alpha$: Scaling Recommendation Model with Feature Interaction
Enhanced Transformer
|
cs.IR
|
Inspired by scaling laws and large language models, research on large-scale
recommendation models has gained significant attention. Recent advancements
have shown that expanding sequential recommendation models to large-scale
recommendation models can be an effective strategy. Current state-of-the-art
sequential recommendation models primarily use self-attention mechanisms for
explicit feature interactions among items, while implicit interactions are
managed through Feed-Forward Networks (FFNs). However, these models often
inadequately integrate temporal and positional information, either by adding
them to attention weights or by blending them with latent representations,
which limits their expressive power. A recent model, HSTU, further reduces the
focus on implicit feature interactions, constraining its performance. We
propose a new model called FuXi-$\alpha$ to address these issues. This model
introduces an Adaptive Multi-channel Self-attention mechanism that distinctly
models temporal, positional, and semantic features, along with a Multi-stage
FFN to enhance implicit feature interactions. Our offline experiments
demonstrate that our model outperforms existing models, with its performance
continuously improving as the model size increases. Additionally, we conducted
an online A/B test within the Huawei Music app, which showed a $4.76\%$
increase in the average number of songs played per user and a $5.10\%$ increase
in the average listening duration per user. Our code has been released at
https://github.com/USTC-StarTeam/FuXi-alpha.
|
2502.03038
|
The Cake that is Intelligence and Who Gets to Bake it: An AI Analogy and
its Implications for Participation
|
cs.AI cs.CY cs.LG
|
In a widely popular analogy by Turing Award Laureate Yann LeCun, machine
intelligence has been compared to cake - where unsupervised learning forms the
base, supervised learning adds the icing, and reinforcement learning is the
cherry on top. We expand this 'cake that is intelligence' analogy from a simple
structural metaphor to the full life-cycle of AI systems, extending it to
sourcing of ingredients (data), conception of recipes (instructions), the
baking process (training), and the tasting and selling of the cake (evaluation
and distribution). Leveraging our re-conceptualization, we describe each step's
entailed social ramifications and how they are bounded by statistical
assumptions within machine learning. Whereas these technical foundations and
social impacts are deeply intertwined, they are often studied in isolation,
creating barriers that restrict meaningful participation. Our
re-conceptualization paves the way to bridge this gap by mapping where
technical foundations interact with social outcomes, highlighting opportunities
for cross-disciplinary dialogue. Finally, we conclude with actionable
recommendations at each stage of the metaphorical AI cake's life-cycle,
empowering prospective AI practitioners, users, and researchers with increased
awareness and the ability to engage in broader AI discourse.
|
2502.03041
|
Large Language Models Are Universal Recommendation Learners
|
cs.IR cs.LG
|
In real-world recommender systems, different tasks are typically addressed
using supervised learning on task-specific datasets with carefully designed
model architectures. We demonstrate that large language models (LLMs) can
function as universal recommendation learners, capable of handling multiple
tasks within a unified input-output framework, eliminating the need for
specialized model designs. To improve the recommendation performance of LLMs,
we introduce a multimodal fusion module for item representation and a
sequence-in-set-out approach for efficient candidate generation. When applied
to industrial-scale data, our LLM achieves competitive results with expert
models elaborately designed for different recommendation tasks. Furthermore,
our analysis reveals that recommendation outcomes are highly sensitive to text
input, highlighting the potential of prompt engineering in optimizing
industrial-scale recommender systems.
|
2502.03044
|
RepLoRA: Reparameterizing Low-Rank Adaptation via the Perspective of
Mixture of Experts
|
cs.LG
|
Low-rank adaptation (LoRA) has emerged as a powerful method for fine-tuning
large-scale foundation models. Despite its popularity, the theoretical
understanding of LoRA has remained limited. This paper presents a theoretical
analysis of LoRA by examining its connection to the Mixture of Experts models.
Under this framework, we show that simple reparameterizations of the LoRA
matrices can notably accelerate the low-rank matrix estimation process. In
particular, we prove that reparameterization can reduce the data needed to
achieve a desired estimation error from an exponential to a polynomial scale.
Motivated by this insight, we propose Reparameterized Low-rank Adaptation
(RepLoRA), which incorporates lightweight MLPs to reparameterize the LoRA
matrices. Extensive experiments across multiple domains demonstrate that
RepLoRA consistently outperforms vanilla LoRA. Notably, with limited data,
RepLoRA surpasses LoRA by a margin of up to 40.0% and achieves LoRA's
performance with only 30.0% of the training data, highlighting both the
theoretical and empirical robustness of our PEFT method.
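As a rough illustration of the abstract's core idea (all shapes, names, and the per-factor MLP placement below are assumptions for the sketch, not the paper's implementation): vanilla LoRA learns a low-rank update $\Delta W = BA$, while RepLoRA passes each factor through a lightweight MLP before forming the update.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r, h = 64, 32, 4, 8  # output dim, input dim, LoRA rank, MLP width (illustrative)

# Vanilla LoRA factors: B is zero-initialized so the adapter starts as a no-op.
A = 0.01 * rng.standard_normal((r, k))
B = np.zeros((d, r))

def reparam(M, W1, W2):
    """Pass a LoRA factor through a lightweight two-layer MLP (no biases)."""
    return W2 @ np.tanh(W1 @ M)

# One small MLP per factor; shapes chosen so each output matches its input factor.
W1a, W2a = rng.standard_normal((h, r)), rng.standard_normal((r, h))
W1b, W2b = rng.standard_normal((h, r)), rng.standard_normal((r, h))

A_rep = reparam(A, W1a, W2a)          # still (r, k)
B_rep = reparam(B.T, W1b, W2b).T      # still (d, r)
delta_W = B_rep @ A_rep               # low-rank update, as in vanilla LoRA

# Because B is zero-initialized and the MLPs have no biases, the
# reparameterized update is also zero at initialization, so the frozen
# base model's behavior is preserved before fine-tuning.
W0 = rng.standard_normal((d, k))
x = rng.standard_normal(k)
y = (W0 + delta_W) @ x
```

At merge time the reparameterized factors collapse back into a single low-rank matrix, so inference cost matches vanilla LoRA.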
|
2502.03047
|
Kozax: Flexible and Scalable Genetic Programming in JAX
|
cs.NE cs.AI
|
Genetic programming is an optimization algorithm inspired by natural
selection which automatically evolves the structure of computer programs. The
resulting computer programs are interpretable and efficient compared to
black-box models with fixed structure. The fitness evaluation in genetic
programming suffers from high computational requirements, limiting the
performance on difficult problems. To reduce runtime, many genetic programming
implementations require a specific data format, limiting their applicability to
specific problem classes. Consequently, there is no efficient
genetic programming framework that is usable for a wide range of tasks. To this
end, we developed Kozax, a genetic programming framework that evolves symbolic
expressions for arbitrary problems. We implemented Kozax using JAX, a framework
for high-performance and scalable machine learning, which allows the fitness
evaluation to scale efficiently to large populations or datasets on GPU.
Furthermore, Kozax offers constant optimization, custom operator definition and
simultaneous evolution of multiple trees. We demonstrate successful
applications of Kozax to discover equations of natural laws, recover equations
of hidden dynamic variables and evolve a control policy. Overall, Kozax
provides a general, fast, and scalable library to optimize white-box solutions
in the realm of scientific computing.
|
2502.03048
|
The Ensemble Kalman Update is an Empirical Matheron Update
|
cs.LG stat.ML
|
The Ensemble Kalman Filter (EnKF) is a widely used method for data
assimilation in high-dimensional systems. In this paper, we show that the
ensemble update step of the EnKF is equivalent to an empirical version of the
Matheron update popular in the study of Gaussian process regression. While this
connection is simple, it does not seem to be widely known: the literatures on
the two techniques are largely distinct, and the connection between the methods
is not exploited. This paper provides an informal introduction to the
connection, with the necessary definitions so that it is intelligible to as
broad an audience as possible.
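The claimed equivalence can be stated compactly in standard notation (the symbols below are conventional choices, not taken from the paper). The Matheron update converts a joint-prior draw into a posterior draw, and the EnKF analysis step with perturbed observations has the same form with ensemble-estimated covariances:

```latex
% Matheron update: for jointly Gaussian (x, y) and observation y^*,
% a prior draw (x', y') yields a posterior draw of x | y = y^*:
x_{\mathrm{post}} = x' + \mathrm{Cov}(x, y)\,\mathrm{Cov}(y, y)^{-1}\,(y^* - y')

% EnKF analysis step with perturbed observations, ensemble member i:
x_i^{a} = x_i^{f}
  + \widehat{C}_{xy}\,\bigl(\widehat{C}_{yy} + R\bigr)^{-1}
    \bigl(y^* + \varepsilon_i - H x_i^{f}\bigr),
\qquad \varepsilon_i \sim \mathcal{N}(0, R)

% Substituting the ensemble estimates \widehat{C}_{xy}, \widehat{C}_{yy}
% for the exact covariances recovers the EnKF update as an empirical
% Matheron update.
```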
|
2502.03052
|
Understanding and Enhancing the Transferability of Jailbreaking Attacks
|
cs.LG cs.CR
|
Jailbreaking attacks can effectively manipulate open-source large language
models (LLMs) to produce harmful responses. However, these attacks exhibit
limited transferability, failing to disrupt proprietary LLMs consistently. To
reliably identify vulnerabilities in proprietary LLMs, this work investigates
the transferability of jailbreaking attacks by analysing their impact on the
model's intent perception. By incorporating adversarial sequences, these
attacks can redirect the source LLM's focus away from malicious-intent tokens
in the original input, thereby obstructing the model's intent recognition and
eliciting harmful responses. Nevertheless, these adversarial sequences fail to
mislead the target LLM's intent perception, allowing the target LLM to refocus
on malicious-intent tokens and abstain from responding. Our analysis further
reveals the inherent distributional dependency within the generated adversarial
sequences, whose effectiveness stems from overfitting the source LLM's
parameters, resulting in limited transferability to target LLMs. To this end,
we propose the Perceived-importance Flatten (PiF) method, which uniformly
disperses the model's focus across neutral-intent tokens in the original input,
thus obscuring malicious-intent tokens without relying on overfitted
adversarial sequences. Extensive experiments demonstrate that PiF provides an
effective and efficient red-teaming evaluation for proprietary LLMs.
|
2502.03053
|
DOLFIN -- Document-Level Financial test set for Machine Translation
|
cs.CL
|
Despite the strong research interest in document-level Machine Translation
(MT), the test sets dedicated to this task are still scarce. The existing test
sets mainly cover topics from the general domain and fall short on specialised
domains, such as legal and financial. Also, in spite of their document-level
aspect, they still follow a sentence-level logic that does not allow for
including certain linguistic phenomena such as information reorganisation. In
this work, we aim to fill this gap by proposing a novel test set: DOLFIN. The
dataset is built from specialised financial documents, and it makes a step
towards true document-level MT by abandoning the paradigm of perfectly aligned
sentences, presenting data in units of sections rather than sentences. The test
set consists of an average of 1950 aligned sections for five language pairs. We
present a detailed data collection pipeline that can serve as inspiration for
aligning new document-level datasets. We demonstrate the usefulness and quality
of this test set by evaluating a number of models. Our results show that the
test set is able to discriminate between context-sensitive and context-agnostic
models and exposes model weaknesses in accurately translating financial texts.
The test set is made public for the community.
|
2502.03057
|
High-frequency near-eye ground truth for event-based eye tracking
|
cs.CV
|
Event-based eye tracking is a promising solution for efficient and low-power
eye tracking in smart eyewear technologies. However, the novelty of event-based
sensors has resulted in a limited number of available datasets, particularly
those with eye-level annotations, crucial for algorithm validation and
deep-learning training. This paper addresses this gap by presenting an improved
version of a popular event-based eye-tracking dataset. We introduce a
semi-automatic annotation pipeline specifically designed for event-based data
annotation. Additionally, we provide the scientific community with the computed
annotations for pupil detection at 200 Hz.
|
2502.03061
|
Optimal Best Arm Identification with Post-Action Context
|
cs.LG
|
We introduce the problem of best arm identification (BAI) with post-action
context, a new BAI problem in a stochastic multi-armed bandit environment and
the fixed-confidence setting. The problem addresses the scenarios in which the
learner receives a $\textit{post-action context}$ in addition to the reward
after playing each action. This post-action context provides additional
information that can significantly facilitate the decision process. We analyze
two types of post-action context: (i) $\textit{non-separator}$,
where the reward depends on both the action and the context, and (ii)
$\textit{separator}$, where the reward depends solely on the context. For both
cases, we derive instance-dependent lower bounds on the sample complexity and
propose algorithms that asymptotically achieve the optimal sample complexity.
For the non-separator setting, we do so by demonstrating that the
Track-and-Stop algorithm can be extended to this setting. For the separator
setting, we propose a novel sampling rule called $\textit{G-tracking}$, which
uses the geometry of the context space to directly track the contexts rather
than the actions. Finally, our empirical results showcase the advantage of our
approaches compared to the state of the art.
|
2502.03062
|
Time Series Anomaly Detection in the Frequency Domain with Statistical
Reliability
|
stat.ML cs.LG
|
Effective anomaly detection in complex systems requires identifying change
points (CPs) in the frequency domain, as abnormalities often arise across
multiple frequencies. This paper extends recent advancements in statistically
significant CP detection, based on Selective Inference (SI), to the frequency
domain. The proposed SI method quantifies the statistical significance of
detected CPs in the frequency domain using $p$-values, ensuring that the
detected changes reflect genuine structural shifts in the target system. We
address two major technical challenges to achieve this. First, we extend the
existing SI framework to the frequency domain by appropriately utilizing the
properties of discrete Fourier transform (DFT). Second, we develop an SI method
that provides valid $p$-values for CPs where changes occur across multiple
frequencies. Experimental results demonstrate that the proposed method reliably
identifies genuine CPs with strong statistical guarantees, enabling more
accurate root-cause analysis in the frequency domain of complex systems.
|
2502.03065
|
Scientometric Analysis of the German IR Community within TREC & CLEF
|
cs.IR cs.DL
|
Within this study, the influence of the German Information Retrieval
community on the retrieval campaigns Text Retrieval Conference (TREC) and
Conference and Labs of the Evaluation Forum (CLEF) between 2000 and 2022 was
analyzed based on metadata provided by OpenAlex and further metadata extracted
with the GROBID framework from the publications' full texts. The analysis was
conducted at the institutional and researcher levels. It was found that the
German IR community, both on the author and institution level, mainly
contributed to CLEF. Furthermore, author productivity was shown to follow
Lotka's Law.
|
2502.03067
|
Optimizing Electric Vehicles Charging using Large Language Models and
Graph Neural Networks
|
eess.SY cs.LG cs.SY
|
Maintaining grid stability amid widespread electric vehicle (EV) adoption is
vital for sustainable transportation. Traditional optimization methods and
Reinforcement Learning (RL) approaches often struggle with the high
dimensionality and dynamic nature of real-time EV charging, leading to
sub-optimal solutions. To address these challenges, this study demonstrates
that combining Large Language Models (LLMs), for sequence modeling, with Graph
Neural Networks (GNNs), for relational information extraction, not only
outperforms conventional EV smart charging methods, but also paves the way for
entirely new research directions and innovative solutions.
|
2502.03072
|
RoboGrasp: A Universal Grasping Policy for Robust Robotic Control
|
cs.RO cs.CV
|
Imitation learning and world models have shown significant promise in
advancing generalizable robotic learning, with robotic grasping remaining a
critical challenge for achieving precise manipulation. Existing methods often
rely heavily on robot arm state data and RGB images, leading to overfitting to
specific object shapes or positions. To address these limitations, we propose
RoboGrasp, a universal grasping policy framework that integrates pretrained
grasp detection models with robotic learning. By leveraging robust visual
guidance from object detection and segmentation tasks, RoboGrasp significantly
enhances grasp precision, stability, and generalizability, achieving up to 34%
higher success rates in few-shot learning and grasping box prompt tasks. Built
on diffusion-based methods, RoboGrasp is adaptable to various robotic learning
paradigms, enabling precise and reliable manipulation across diverse and
complex scenarios. This framework represents a scalable and versatile solution
for tackling real-world challenges in robotic grasping.
|
2502.03078
|
Automatic Prompt Optimization Techniques: Exploring the Potential for
Synthetic Data Generation
|
cs.HC cs.LG
|
Artificial Intelligence (AI) advancement is heavily dependent on access to
large-scale, high-quality training data. However, in specialized domains such
as healthcare, data acquisition faces significant constraints due to privacy
regulations, ethical considerations, and limited availability. While synthetic
data generation offers a promising solution, conventional approaches typically
require substantial real data for training generative models. The emergence of
large-scale prompt-based models presents new opportunities for synthetic data
generation without direct access to protected data. However, crafting effective
prompts for domain-specific data generation remains challenging, and manual
prompt engineering often fails to achieve the required precision and
authenticity. We review recent developments in automatic prompt
optimization, following PRISMA guidelines. We analyze six peer-reviewed studies
published between 2020 and 2024 that focus on automatic data-free prompt
optimization methods. Our analysis reveals three approaches: feedback-driven,
error-based, and control-theoretic. Although all approaches demonstrate
promising capabilities in prompt refinement and adaptation, our findings
suggest the need for an integrated framework that combines complementary
optimization techniques to enhance synthetic data generation while minimizing
manual intervention. We propose future research directions toward developing
robust, iterative prompt optimization frameworks capable of improving the
quality of synthetic data. This advancement can be particularly crucial for
sensitive fields and in specialized domains where data access is restricted,
potentially transforming how we approach synthetic data generation for AI
development.
|
2502.03080
|
IAO Prompting: Making Knowledge Flow Explicit in LLMs through Structured
Reasoning Templates
|
cs.CL
|
While Large Language Models (LLMs) demonstrate impressive reasoning
capabilities, understanding and validating their knowledge utilization remains
challenging. Chain-of-thought (CoT) prompting partially addresses this by
revealing intermediate reasoning steps, but the knowledge flow and application
remain implicit. We introduce IAO (Input-Action-Output) prompting, a structured
template-based method that explicitly models how LLMs access and apply their
knowledge during complex reasoning tasks. IAO decomposes problems into
sequential steps, each clearly identifying the input knowledge being used, the
action being performed, and the resulting output. This structured decomposition
enables us to trace knowledge flow, verify factual consistency, and identify
potential knowledge gaps or misapplications. Through experiments across diverse
reasoning tasks, we demonstrate that IAO not only improves zero-shot
performance but also provides transparency in how LLMs leverage their stored
knowledge. Human evaluation confirms that this structured approach enhances our
ability to verify knowledge utilization and detect potential hallucinations or
reasoning errors. Our findings provide insights into both knowledge
representation within LLMs and methods for more reliable knowledge application.
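A minimal sketch of the kind of structured template the abstract describes (the exact field wording, function names, and worked example are assumptions, not the paper's template):

```python
# IAO (Input-Action-Output) step template: each reasoning step names the
# knowledge it consumes, the operation performed, and the result produced.
IAO_STEP = (
    "Step {n}:\n"
    "  Input:  {input}\n"
    "  Action: {action}\n"
    "  Output: {output}\n"
)

def format_iao_prompt(question, steps):
    """Render a question plus a sequence of (input, action, output) steps."""
    body = "".join(
        IAO_STEP.format(n=i + 1, input=s[0], action=s[1], output=s[2])
        for i, s in enumerate(steps)
    )
    return f"Question: {question}\n{body}Final answer: {steps[-1][2]}"

prompt = format_iao_prompt(
    "What is 15% of 80?",
    [("15% of 80", "convert the percentage to a fraction and multiply",
      "0.15 * 80 = 12")],
)
```

Because each step's input knowledge is stated explicitly, a verifier (human or automated) can check every intermediate fact rather than only the final answer.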
|
2502.03081
|
Human-Aligned Image Models Improve Visual Decoding from the Brain
|
cs.CV cs.LG
|
Decoding visual images from brain activity has significant potential for
advancing brain-computer interaction and enhancing the understanding of human
perception. Recent approaches align the representation spaces of images and
brain activity to enable visual decoding. In this paper, we introduce the use
of human-aligned image encoders to map brain signals to images. We hypothesize
that these models more effectively capture perceptual attributes associated
with the rapid visual stimuli presentations commonly used in visual brain data
recording experiments. Our empirical results support this hypothesis,
demonstrating that this simple modification improves image retrieval accuracy
by up to 21% compared to state-of-the-art methods. Comprehensive experiments
confirm consistent performance improvements across diverse EEG architectures,
image encoders, alignment methods, participants, and brain imaging modalities.
|
2502.03086
|
Implementing Large Quantum Boltzmann Machines as Generative AI Models
for Dataset Balancing
|
cs.ET cs.AI cs.LG cs.NE quant-ph
|
This study explores the implementation of large Quantum Restricted Boltzmann
Machines (QRBMs), a key advancement in Quantum Machine Learning (QML), as
generative models on D-Wave's Pegasus quantum hardware to address dataset
imbalance in Intrusion Detection Systems (IDS). By leveraging Pegasus's
enhanced connectivity and computational capabilities, a QRBM with 120 visible
and 120 hidden units was successfully embedded, surpassing the limitations of
default embedding tools. The QRBM synthesized over 1.6 million attack samples,
achieving a balanced dataset of over 4.2 million records. Comparative
evaluations with traditional balancing methods, such as SMOTE and
RandomOversampler, revealed that QRBMs produced higher-quality synthetic
samples, significantly improving detection rates, precision, recall, and F1
score across diverse classifiers. The study underscores the scalability and
efficiency of QRBMs, completing balancing tasks in milliseconds. These findings
highlight the transformative potential of QML and QRBMs as next-generation
tools in data preprocessing, offering robust solutions for complex
computational challenges in modern information systems.
|
2502.03092
|
E-3SFC: Communication-Efficient Federated Learning with Double-way
Features Synthesizing
|
cs.LG cs.AI cs.DC
|
The exponential growth in model sizes has significantly increased the
communication burden in Federated Learning (FL). Existing methods to alleviate
this burden by transmitting compressed gradients often face high compression
errors, which slow down the model's convergence. To simultaneously achieve high
compression effectiveness and lower compression errors, we study the gradient
compression problem from a novel perspective. Specifically, we propose a
systematical algorithm termed Extended Single-Step Synthetic Features
Compressing (E-3SFC), which consists of three sub-components, i.e., the
Single-Step Synthetic Features Compressor (3SFC), a double-way compression
algorithm, and a communication budget scheduler. First, we regard the process
of gradient computation of a model as decompressing gradients from
corresponding inputs, while the inverse process is considered as compressing
the gradients. Based on this, we introduce a novel gradient compression method
termed 3SFC, which utilizes the model itself as a decompressor, leveraging
training priors such as model weights and objective functions. 3SFC compresses
raw gradients into tiny synthetic features in a single-step simulation,
incorporating error feedback to minimize overall compression errors. To further
reduce communication overhead, 3SFC is extended to E-3SFC, allowing double-way
compression and dynamic communication budget scheduling. Our theoretical
analysis under both strongly convex and non-convex conditions demonstrates that
3SFC achieves linear and sub-linear convergence rates with aggregation noise.
Extensive experiments across six datasets and six models reveal that 3SFC
outperforms state-of-the-art methods by up to 13.4% while reducing
communication costs by a factor of up to 111.6. These findings suggest that 3SFC can
significantly enhance communication efficiency in FL without compromising model
performance.
|
2502.03095
|
Reveal the Mystery of DPO: The Connection between DPO and RL Algorithms
|
cs.LG
|
With the rapid development of Large Language Models (LLMs), numerous
Reinforcement Learning from Human Feedback (RLHF) algorithms have been
introduced to improve model safety and alignment with human preferences. These
algorithms can be divided into two main frameworks based on whether they
require an explicit reward (or value) function for training: actor-critic-based
Proximal Policy Optimization (PPO) and alignment-based Direct Preference
Optimization (DPO). The mismatch between DPO and PPO, such as DPO's use of a
classification loss driven by human-preferred data, has raised confusion about
whether DPO should be classified as a Reinforcement Learning (RL) algorithm. To
address these ambiguities, we focus on three key aspects related to DPO, RL,
and other RLHF algorithms: (1) the construction of the loss function; (2) the
target distribution at which the algorithm converges; (3) the impact of key
components within the loss function. Specifically, we first establish a unified
framework named UDRRA connecting these algorithms based on the construction of
their loss functions. Next, we uncover their target policy distributions within
this framework. Finally, we investigate the critical components of DPO to
understand their impact on the convergence rate. Our work provides a deeper
understanding of the relationship between DPO, RL, and other RLHF algorithms,
offering new insights for improving existing algorithms.
|
2502.03100
|
A Bayesian perspective on single-shot laser characterization
|
physics.optics cs.LG physics.ins-det
|
We introduce a Bayesian framework for measuring spatio-temporal couplings
(STCs) in ultra-intense lasers that reconceptualizes what constitutes a
'single-shot' measurement. Moving beyond traditional distinctions between
single- and multi-shot devices, our approach provides rigorous criteria for
determining when measurements can truly resolve individual laser shots rather
than statistical averages. This framework shows that single-shot capability is
not an intrinsic device property but emerges from the relationship between
measurement precision and inherent parameter variability. Implementing this
approach with a new measurement device at the ATLAS-3000 petawatt laser, we
provide the first quantitative uncertainty bounds on pulse front tilt and
curvature. Notably, we observe that our Bayesian method reduces uncertainty by
up to 60% compared to traditional approaches. Through this analysis, we reveal
how the interplay between measurement precision and intrinsic system
variability defines achievable resolution -- insights that have direct
implications for applications where precise control of laser-matter interaction
is critical.
|
2502.03102
|
Structured Token Retention and Computational Memory Paths in Large
Language Models
|
cs.CL
|
Memory retention mechanisms play a central role in determining the efficiency
of computational architectures designed for processing extended sequences.
Conventional methods for token management often impose fixed retention
thresholds or rely on uniform attention weight distributions, leading to
inefficient memory utilization and premature information loss in extended
sequence modeling. Structured Token Retention (STR) introduces a probabilistic
selection framework that dynamically adjusts token persistence based on
contextual significance, ensuring that computational resources are allocated to
semantically relevant elements. Computational Memory Paths (CMP) extend this
framework through hierarchical memory allocation, refining retention efficiency
through structured reallocation of token embeddings. Comparative assessments
against baseline models demonstrate that STR and CMP improve token survival
rates across long input sequences while reducing cumulative error propagation
across processing layers. Experimental results further indicate reductions in
computational overhead, improving inference speed without degrading contextual
coherence. Token distribution analyses reveal that structured memory allocation
prevents excessive redundancy in attention weight calculations, optimizing
information retrieval efficiency in large-scale generative architectures. The
integration of STR and CMP into an open-source model illustrates the
adaptability of structured memory retention methodologies, highlighting their
applicability in generative text processing, long-context comprehension, and
scalable sequence modeling.
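As a rough illustration of the probabilistic-selection idea behind STR, the sketch below samples which tokens to retain in proportion to a softmax over contextual-significance scores. The function name, the scoring interface, and the temperature parameter are all hypothetical; the abstract does not specify STR's actual scoring or allocation rules.

```python
import numpy as np

def structured_token_retention(scores, budget, temperature=1.0, rng=None):
    """Sample `budget` token indices to retain, biased toward tokens with
    high contextual-significance scores (illustrative interface only)."""
    rng = rng or np.random.default_rng(0)
    logits = np.asarray(scores, dtype=float) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # Distinct indices, sampled without replacement under the softmax weights.
    kept = rng.choice(len(scores), size=budget, replace=False, p=probs)
    return np.sort(kept)
```

Lowering the temperature makes retention nearly deterministic (top-k by score), while raising it spreads memory across more of the sequence.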
|
2502.03103
|
Edge Attention Module for Object Classification
|
cs.CV cs.LG
|
A novel ``edge attention-based Convolutional Neural Network (CNN)'' is
proposed in this research for the object classification task. With the advent
of advanced computing technology, CNN models have achieved remarkable success,
particularly in computer vision applications. Nevertheless, the efficacy of the
conventional CNN is often hindered due to class imbalance and inter-class
similarity problems, which are particularly prominent in the computer vision
field. In this research, we introduce for the first time an ``Edge Attention
Module (EAM)'' consisting of a Max-Min pooling layer, followed by convolutional
layers. This Max-Min pooling is an entirely novel pooling technique,
specifically designed to capture only the edge information that is crucial for
any object classification task. By integrating this pooling technique into the
attention module, the CNN inherently prioritizes essential edge features,
significantly boosting the model's accuracy and F1-score. We have implemented
our proposed EAM or 2EAMs on several
standard pre-trained CNN models for Caltech-101, Caltech-256, CIFAR-100 and
Tiny ImageNet-200 datasets. The extensive experiments reveal that our proposed
framework (that is, EAM with CNN and 2EAMs with CNN) outperforms all
pre-trained CNN models as well as recent models such as the ``Pooling-based
Vision Transformer (PiT)'', the ``Convolutional Block Attention Module
(CBAM)'', and ConvNeXt, by substantial margins. The proposed framework
achieves accuracies of 95.5% and 86% on the Caltech-101 and Caltech-256
datasets, respectively. To the best of our knowledge, these are the best
results reported on these datasets.
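The core operation the abstract describes, taking the window-wise max minus the window-wise min, can be sketched directly: flat regions pool to zero while high-contrast (edge) regions survive. This is a minimal numpy sketch of the pooling step only, not the full EAM (the kernel size, stride, and surrounding convolutional layers are assumptions).

```python
import numpy as np

def max_min_pool2d(x, k=2, s=2):
    """Max-Min pooling over a 2D map: each output is window max minus
    window min, so uniform regions give ~0 and edges give large values."""
    h, w = x.shape
    oh, ow = (h - k) // s + 1, (w - k) // s + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            win = x[i * s:i * s + k, j * s:j * s + k]
            out[i, j] = win.max() - win.min()
    return out
```

On a step-edge image, only the windows straddling the edge produce nonzero responses, which is precisely the edge-emphasis behaviour the module relies on.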
|
2502.03104
|
Bellman Error Centering
|
cs.LG cs.AI
|
This paper revisits the recently proposed reward centering algorithms
including simple reward centering (SRC) and value-based reward centering
(VRC), and points out that SRC is indeed reward centering, while VRC is
essentially Bellman error centering (BEC). Based on BEC, we provide the
centered fixpoint for tabular value functions, as well as the centered TD
fixpoint for linear value function approximation. We design the on-policy CTD
algorithm and the off-policy CTDC algorithm, and prove the convergence of both
algorithms. Finally, we experimentally validate the stability of our proposed
algorithms. Bellman error centering facilitates the extension to various
reinforcement learning algorithms.
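A single step of the on-policy centered-TD idea can be sketched as follows: the TD error is computed against the reward minus a learned average-reward estimate, and that same centered error drives both the value and average-reward updates. The step sizes and the exact form of the update are illustrative assumptions, not the paper's CTD algorithm verbatim.

```python
import numpy as np

def ctd_update(V, r_bar, s, r, s2, alpha=0.1, beta=0.05, gamma=1.0):
    """One tabular centered-TD step on transition (s, r, s2):
    the Bellman error is centered by subtracting r_bar from the reward."""
    delta = (r - r_bar) + gamma * V[s2] - V[s]  # centered Bellman error
    V[s] += alpha * delta                       # value update
    r_bar += beta * delta                       # average-reward estimate update
    return V, r_bar
```

Centering keeps the value estimates bounded around their differential values even when the average reward is large, which is the stability benefit the paper validates experimentally.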
|
2502.03108
|
Multi-objective methods in Federated Learning: A survey and taxonomy
|
cs.LG cs.DC
|
The Federated Learning paradigm facilitates effective distributed machine
learning in settings where training data is decentralized across multiple
clients. As the popularity of the strategy grows, increasingly complex
real-world problems emerge, many of which require balancing conflicting demands
such as fairness, utility, and resource consumption. Recent works have begun to
recognise the use of a multi-objective perspective in response to this
challenge.
However, this novel approach of combining federated methods with
multi-objective optimisation has never been discussed in the broader context of
both fields. In this work, we offer a first clear and systematic overview of
the different ways the two fields can be integrated. We propose a first
taxonomy on the use of multi-objective methods in connection with Federated
Learning, providing a targeted survey of the state-of-the-art and proposing
unambiguous labels to categorise contributions. Given the developing nature of
this field, our taxonomy is designed to provide a solid basis for further
research, capturing existing works while anticipating future additions.
Finally, we outline open challenges and possible directions for further
research.
|
2502.03111
|
Policies and Evaluation for Online Meeting Summarization
|
cs.CL cs.AI cs.LG
|
With more and more meetings moving to a digital domain, meeting summarization
has recently gained interest in both academic and commercial research. However,
prior academic research focuses on meeting summarization as an offline task,
performed after the meeting concludes. In this paper, we perform the first
systematic study of online meeting summarization. For this purpose, we propose
several policies for conducting online summarization. We discuss the unique
challenges of this task compared to the offline setting and define novel
metrics to evaluate latency and partial summary quality. The experiments on the
AutoMin dataset show that 1) online models can produce strong summaries, 2) our
metrics allow a detailed analysis of different systems' quality-latency
trade-off, also taking into account intermediate outputs and 3) adaptive
policies perform better than fixed scheduled ones. These findings provide a
starting point for the wider research community to explore this important task.
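To make the contrast between fixed-schedule and adaptive policies concrete, here is a toy adaptive triggering rule: emit a partial summary whenever enough utterances carrying unseen content have accumulated. This is purely illustrative; the paper's actual policies and its latency/quality metrics are defined on the AutoMin data, and the novelty heuristic below is an assumption.

```python
def adaptive_policy(utterances, novelty_threshold=3):
    """Return the utterance indices at which a partial summary is triggered:
    fire whenever `novelty_threshold` utterances with unseen words accrue."""
    seen, buffered_novel, trigger_points = set(), 0, []
    for t, utt in enumerate(utterances):
        words = set(utt.lower().split())
        if words - seen:           # utterance introduces new content
            buffered_novel += 1
        seen |= words
        if buffered_novel >= novelty_threshold:
            trigger_points.append(t)  # summarize everything up to t
            buffered_novel = 0
    return trigger_points
```

A fixed-schedule policy would instead trigger every N utterances regardless of content; the adaptive rule delays summarization through repetitive stretches and fires sooner when the discussion moves on.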
|
2502.03117
|
Meta-Learning-Based People Counting and Localization Models Employing
CSI from Commodity WiFi NICs
|
cs.IT eess.SP math.IT
|
In this paper, we consider people counting and localization systems
exploiting channel state information (CSI) measured from commodity WiFi network
interface cards (NICs). While CSI has useful information of amplitude and phase
to describe signal propagation in a measurement environment of interest, CSI
measurement suffers from offsets due to various uncertainties. Moreover, an
uncontrollable external environment, where other WiFi devices communicate with
each other, induces interfering signals, resulting in erroneous CSI captured at
a receiver. In this paper, preprocessing of CSI is first proposed for offset
removal, and it guarantees low-latency operation without any filtering process.
Afterwards, we design people counting and localization models based on
pre-training. To be adaptive to different measurement environments,
meta-learning-based people counting and localization models are also proposed.
Numerical results show that the proposed meta-learning-based people counting
and localization models can achieve high sensing accuracy, compared to other
learning schemes that follow simple training and test procedures.
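One standard filter-free way to remove CSI phase offsets, in the spirit of the preprocessing described here, is to least-squares fit the unwrapped phase against subcarrier index and subtract the linear trend (sampling-time offset appears as slope, carrier-phase offset as intercept). The paper's exact preprocessing may differ; this is a common sanitisation step, not a reproduction of it.

```python
import numpy as np

def remove_linear_phase_offset(csi, subcarrier_idx):
    """Subtract the best-fit linear phase (slope + intercept) across
    subcarriers from a complex CSI vector, keeping amplitudes intact."""
    phase = np.unwrap(np.angle(csi))
    k = np.asarray(subcarrier_idx, dtype=float)
    slope, intercept = np.polyfit(k, phase, 1)   # least-squares linear fit
    clean_phase = phase - (slope * k + intercept)
    return np.abs(csi) * np.exp(1j * clean_phase)
```

Because it is a closed-form fit rather than a temporal filter, this kind of correction adds essentially no latency, consistent with the low-latency requirement stated above.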
|
2502.03118
|
Tell2Reg: Establishing spatial correspondence between images by the same
language prompts
|
cs.CV cs.AI eess.IV
|
Spatial correspondence can be represented by pairs of segmented regions, such
that image registration networks aim to segment corresponding regions rather
than predict displacement fields or transformation parameters. In
this work, we show that such a corresponding region pair can be predicted by
the same language prompt on two different images using the pre-trained large
multimodal models based on GroundingDINO and SAM. This enables a fully
automated and training-free registration algorithm, potentially generalisable
to a wide range of image registration tasks. In this paper, we present
experimental results using one of the challenging tasks, registering
inter-subject prostate MR images, which involves both highly variable intensity
and morphology between patients. Tell2Reg is training-free, eliminating the
need for costly and time-consuming data curation and labelling that was
previously required for this registration task. This approach outperforms the
unsupervised learning-based registration methods tested, and achieves
performance comparable to weakly-supervised methods. Additional qualitative
results are
also presented to suggest that, for the first time, there is a potential
correlation between language semantics and spatial correspondence, including
the spatial invariance in language-prompted regions and the difference in
language prompts between the obtained local and global correspondences. Code is
available at https://github.com/yanwenCi/Tell2Reg.git.
|