| id | title | categories | abstract |
|---|---|---|---|
| 2502.07851 | Fast and Safe Scheduling of Robots | cs.RO | In this paper, we present an experimental analysis of a fast heuristic algorithm that was designed to generate a fast, collision-free schedule for a set of robots on a path graph. The experiments confirm the algorithm's effectiveness in producing collision-free schedules as well as achieving the optimal solution when all tasks assigned to the robots are of equal duration. Additionally, we provide an integer linear programming formulation that guarantees an optimal solution for this scheduling problem on any input graph, at the expense of significantly greater computational resources. We prove the correctness of our integer linear program. By comparing the solutions of these two algorithms, including the time required by the schedule itself and the run time of each algorithm, we show that the heuristic algorithm is optimal or near optimal in nearly all cases, with a far faster run time than the integer linear program. |
| 2502.07852 | Fresh2comm: Information Freshness Optimized Collaborative Perception | cs.MA | Collaborative perception is a cornerstone of intelligent connected vehicles, enabling them to share and integrate sensory data to enhance situational awareness. However, measuring the impact of high and inconsistent transmission delay on collaborative perception in real communication scenarios, as well as improving the effectiveness of collaborative perception under such conditions, remain significant challenges in the field. To address these challenges, we incorporate the key factor of information freshness into the collaborative perception mechanism and develop a model that systematically measures and analyzes the impacts of real-world communication on collaborative perception performance. This provides a new perspective for accurately evaluating and optimizing collaborative perception performance. We propose and validate an Age of Information (AoI)-based optimization framework that strategically allocates communication resources to effectively control the system's AoI, thereby significantly enhancing the freshness of information transmission and the accuracy of perception. Additionally, we introduce a novel experimental approach that comprehensively assesses the varying impacts of different types of delay on perception results, offering valuable insights for perception performance optimization under real-world communication scenarios. |
| 2502.07853 | PolicySimEval: A Benchmark for Evaluating Policy Outcomes through Agent-Based Simulation | cs.MA cs.CY | With the growing adoption of agent-based models in policy evaluation, a pressing question arises: Can such systems effectively simulate and analyze complex social scenarios to inform policy decisions? Addressing this challenge could significantly enhance the policy-making process, offering researchers and practitioners a systematic way to validate, explore, and refine policy outcomes. To advance this goal, we introduce PolicySimEval, the first benchmark designed to evaluate the capability of agent-based simulations in policy assessment tasks. PolicySimEval aims to reflect the real-world complexities faced by social scientists and policymakers. The benchmark is composed of three categories of evaluation tasks: (1) 20 comprehensive scenarios that replicate end-to-end policy modeling challenges, complete with annotated expert solutions; (2) 65 targeted sub-tasks that address specific aspects of agent-based simulation (e.g., agent behavior calibration); and (3) 200 auto-generated tasks to enable large-scale evaluation and method development. Experiments show that current state-of-the-art frameworks struggle to tackle these tasks effectively, with the highest-performing system achieving only a 24.5% coverage rate on comprehensive scenarios, 15.04% on sub-tasks, and 14.5% on auto-generated tasks. These results highlight the difficulty of the task and the gap between current capabilities and the requirements for real-world policy evaluation. |
| 2502.07854 | Advancing Heat Demand Forecasting with Attention Mechanisms: Opportunities and Challenges | cs.LG cs.CV | Global leaders and policymakers are unified in their unequivocal commitment to decarbonization efforts in support of Net-Zero agreements. District Heating Systems (DHS), while contributing to carbon emissions due to the continued reliance on fossil fuels for heat production, are embracing more sustainable practices, albeit with some sense of vulnerability, as this could constrain their ability to adapt to dynamic demand and production scenarios. As demographic demands grow and renewables become the central strategy in decarbonizing the heating sector, the need for accurate demand forecasting has intensified. Advances in digitization have paved the way for Machine Learning (ML) based solutions to become the industry standard for modeling complex time series patterns. In this paper, we focus on building a Deep Learning (DL) model that uses deconstructed components of independent and dependent variables that affect heat demand as features to perform multi-step ahead forecasting of heat demand. The model represents the input features in a time-frequency space and uses an attention mechanism to generate accurate forecasts. The proposed method is evaluated on a real-world dataset and the forecasting performance is assessed against LSTM and CNN-based forecasting models. Across different supply zones, the attention-based model outperforms the baselines quantitatively and qualitatively, with a Mean Absolute Error (MAE) of 0.105 with a standard deviation of 0.06 kWh and a Mean Absolute Percentage Error (MAPE) of 5.4% with a standard deviation of 2.8%, compared to the second-best model with an MAE of 0.10 with a standard deviation of 0.06 kWh and a MAPE of 5.6% with a standard deviation of 3%. |
| 2502.07855 | Vision-Language Models for Edge Networks: A Comprehensive Survey | cs.CV cs.AI cs.CL | Vision Large Language Models (VLMs) combine visual understanding with natural language processing, enabling tasks like image captioning, visual question answering, and video analysis. While VLMs show impressive capabilities across domains such as autonomous vehicles, smart surveillance, and healthcare, their deployment on resource-constrained edge devices remains challenging due to processing power, memory, and energy limitations. This survey explores recent advancements in optimizing VLMs for edge environments, focusing on model compression techniques, including pruning, quantization, knowledge distillation, and specialized hardware solutions that enhance efficiency. We provide a detailed discussion of efficient training and fine-tuning methods, edge deployment challenges, and privacy considerations. Additionally, we discuss the diverse applications of lightweight VLMs across healthcare, environmental monitoring, and autonomous systems, illustrating their growing impact. By highlighting key design strategies, current challenges, and offering recommendations for future directions, this survey aims to inspire further research into the practical deployment of VLMs, ultimately making advanced AI accessible in resource-limited settings. |
| 2502.07856 | MRS: A Fast Sampler for Mean Reverting Diffusion based on ODE and SDE Solvers | cs.CV cs.AI cs.LG | In applications of diffusion models, controllable generation is of practical significance, but is also challenging. Current methods for controllable generation primarily focus on modifying the score function of diffusion models, while Mean Reverting (MR) Diffusion directly modifies the structure of the stochastic differential equation (SDE), making the incorporation of image conditions simpler and more natural. However, current training-free fast samplers are not directly applicable to MR Diffusion, and thus MR Diffusion requires hundreds of NFEs (number of function evaluations) to obtain high-quality samples. In this paper, we propose a new algorithm named MRS (MR Sampler) to reduce the sampling NFEs of MR Diffusion. We solve the reverse-time SDE and the probability flow ordinary differential equation (PF-ODE) associated with MR Diffusion, and derive semi-analytical solutions. The solutions consist of an analytical function and an integral parameterized by a neural network. Based on this solution, we can generate high-quality samples in fewer steps. Our approach does not require training and supports all mainstream parameterizations, including noise prediction, data prediction and velocity prediction. Extensive experiments demonstrate that MR Sampler maintains high sampling quality with a speedup of 10 to 20 times across ten different image restoration tasks. Our algorithm accelerates the sampling procedure of MR Diffusion, making it more practical in controllable generation. |
| 2502.07857 | SNAP: Sequential Non-Ancestor Pruning for Targeted Causal Effect Estimation With an Unknown Graph | stat.ML cs.AI cs.LG | Causal discovery can be computationally demanding for large numbers of variables. If we only wish to estimate the causal effects on a small subset of target variables, we might not need to learn the causal graph for all variables, but only a small subgraph that includes the targets and their adjustment sets. In this paper, we focus on identifying causal effects between target variables in a computationally and statistically efficient way. This task combines causal discovery and effect estimation, aligning the discovery objective with the effects to be estimated. We show that definite non-ancestors of the targets are unnecessary to learn causal relations between the targets and to identify efficient adjustment sets. We sequentially identify and prune these definite non-ancestors with our Sequential Non-Ancestor Pruning (SNAP) framework, which can be used either as a preprocessing step to standard causal discovery methods, or as a standalone sound and complete causal discovery algorithm. Our results on synthetic and real data show that both approaches substantially reduce the number of independence tests and the computation time without compromising the quality of causal effect estimations. |
| 2502.07858 | MAAT: Mamba Adaptive Anomaly Transformer with association discrepancy for time series | cs.LG | Anomaly detection in time series is essential for industrial monitoring and environmental sensing, yet distinguishing anomalies from complex patterns remains challenging. Existing methods like the Anomaly Transformer and DCdetector have progressed, but they face limitations such as sensitivity to short-term contexts and inefficiency in noisy, non-stationary environments. To overcome these issues, we introduce MAAT, an improved architecture that enhances association discrepancy modeling and reconstruction quality. MAAT features Sparse Attention, efficiently capturing long-range dependencies by focusing on relevant time steps, thereby reducing computational redundancy. Additionally, a Mamba-Selective State Space Model is incorporated into the reconstruction module, utilizing a skip connection and Gated Attention to improve anomaly localization and detection performance. Extensive experiments show that MAAT significantly outperforms previous methods, achieving better anomaly distinguishability and generalization across various time series applications, setting a new standard for unsupervised time series anomaly detection in real-world scenarios. |
| 2502.07859 | Automatic Prostate Volume Estimation in Transabdominal Ultrasound Images | eess.IV cs.CV | Prostate cancer is a leading health concern among men, requiring accurate and accessible methods for early detection and risk stratification. Prostate volume (PV) is a key parameter in multivariate risk stratification for early prostate cancer detection, commonly estimated using transrectal ultrasound (TRUS). While TRUS provides precise prostate volume measurements, its invasive nature often compromises patient comfort. Transabdominal ultrasound (TAUS) provides a non-invasive alternative but faces challenges such as lower image quality, complex interpretation, and reliance on operator expertise. This study introduces a new deep-learning-based framework for automatic PV estimation using TAUS, emphasizing its potential to enable accurate and non-invasive prostate cancer risk stratification. A dataset of TAUS videos from 100 individual patients was curated, with prostate boundaries manually delineated and diameters calculated by an expert clinician as ground truth. The introduced framework integrates deep-learning models for prostate segmentation in both axial and sagittal planes, automatic prostate diameter estimation, and PV calculation. Segmentation performance was evaluated using the Dice coefficient (%) and Hausdorff distance (mm). The framework's volume estimation capabilities were evaluated using volumetric error (mL). The framework demonstrates that it can estimate PV from TAUS videos with a mean volumetric error of -5.5 mL, which results in an average relative error between 5 and 15%. The introduced framework for automatic PV estimation from TAUS images, utilizing deep learning models for prostate segmentation, shows promising results. It effectively segments the prostate and estimates its volume, offering potential for reliable, non-invasive risk stratification for early prostate cancer detection. |
| 2502.07861 | BalanceKV: KV Cache Compression through Discrepancy Theory | cs.LG cs.AI cs.DS | Large language models (LLMs) have achieved impressive success, but their high memory requirements present challenges for long-context token generation. The memory complexity of long-context LLMs is primarily due to the need to store Key-Value (KV) embeddings in their KV cache. We present BalanceKV, a KV cache compression method based on a geometric sampling process stemming from Banaszczyk's vector balancing theory, which introduces dependencies informed by the geometry of key and value tokens and improves precision. BalanceKV offers both theoretically proven and empirically validated performance improvements over existing methods. |
| 2502.07862 | ADMN: A Layer-Wise Adaptive Multimodal Network for Dynamic Input Noise and Compute Resources | cs.LG cs.AI cs.CV | Multimodal deep learning systems are deployed in dynamic scenarios due to the robustness afforded by multiple sensing modalities. Nevertheless, they struggle with varying compute resource availability (due to multi-tenancy, device heterogeneity, etc.) and fluctuating quality of inputs (from sensor feed corruption, environmental noise, etc.). Current multimodal systems employ static resource provisioning and cannot easily adapt when compute resources change over time. Additionally, their reliance on processing sensor data with fixed feature extractors is ill-equipped to handle variations in modality quality. Consequently, uninformative modalities, such as those with high noise, needlessly consume resources better allocated towards other modalities. We propose ADMN, a layer-wise Adaptive Depth Multimodal Network capable of tackling both challenges: it adjusts the total number of active layers across all modalities to meet compute resource constraints, and continually reallocates layers across input modalities according to their modality quality. Our evaluations showcase that ADMN can match the accuracy of state-of-the-art networks while reducing up to 75% of their floating-point operations. |
| 2502.07864 | TransMLA: Multi-Head Latent Attention Is All You Need | cs.LG cs.AI | Modern large language models (LLMs) often encounter communication bottlenecks on current hardware, rather than purely computational constraints. Multi-head Latent Attention (MLA) tackles this challenge by using low-rank matrices in the key-value (KV) layers, thereby allowing compressed latent KV states to be cached. This approach significantly reduces the KV cache size relative to traditional multi-head attention, leading to faster inference. Moreover, MLA employs an up-projection matrix to increase expressiveness, trading additional computation for reduced communication overhead. Although MLA has demonstrated efficiency and effectiveness in Deepseek V2/V3/R1, many major model providers still rely on Group Query Attention (GQA) and have not announced any plans to adopt MLA. In this paper, we show that GQA can always be represented by MLA while maintaining the same KV cache overhead, but the converse does not hold. To encourage broader use of MLA, we introduce TransMLA, a post-training method that converts widely used GQA-based pre-trained models (e.g., LLaMA, Qwen, Mixtral) into MLA-based models. After conversion, the model can undergo additional training to boost expressiveness without increasing the KV cache size. Furthermore, we plan to develop MLA-specific inference acceleration techniques to preserve low latency in transformed models, thus enabling more efficient distillation of Deepseek R1. |
| 2502.07866 | Design and Implementation of Scalable Communication Interfaces for Reliable and Stable Real-time Co-Simulation of Power Systems | eess.SY cs.SY | Co-simulation offers an integrated approach for modeling the large-scale integration of inverter-based resources (IBRs) into transmission and distribution grids. This paper presents a scalable communication interface design and implementation to enable reliable and stable real-time co-simulation of power systems with high IBR penetration. The communication interface is categorized into two types: local and remote. In local scenarios, where subsystems are connected within a single local area network (LAN), low-latency communication facilitates the seamless integration of electromagnetic transient (EMT) and phasor-domain models, enabling efficient interactions with power and energy management algorithms. For remote scenarios, data exchange is achieved via internet-based file sharing or VPN-enabled communication. The performance of both methods is evaluated using OPAL-RT as a real-time simulator, demonstrating scalability, effectiveness, and challenges specific to real-time co-simulation applications. To mitigate instability arising from data resolution mismatches in time-sensitive co-simulations, a real-time data extrapolation method is proposed. This approach significantly enhances stability and reliability, ensuring more accurate simulation outcomes. The implementation code is available on GitHub, providing researchers the tools to replicate and expand upon this work. |
| 2502.07869 | EventEgo3D++: 3D Human Motion Capture from a Head-Mounted Event Camera | cs.CV | Monocular egocentric 3D human motion capture remains a significant challenge, particularly under conditions of low lighting and fast movements, which are common in head-mounted device applications. Existing methods that rely on RGB cameras often fail under these conditions. To address these limitations, we introduce EventEgo3D++, the first approach that leverages a monocular event camera with a fisheye lens for 3D human motion capture. Event cameras excel in high-speed scenarios and varying illumination due to their high temporal resolution, providing reliable cues for accurate 3D human motion capture. EventEgo3D++ leverages the LNES representation of event streams to enable precise 3D reconstructions. We have also developed a mobile head-mounted device (HMD) prototype equipped with an event camera, capturing a comprehensive dataset that includes real event observations from both controlled studio environments and in-the-wild settings, in addition to a synthetic dataset. Additionally, to provide a more holistic dataset, we include allocentric RGB streams that offer different perspectives of the HMD wearer, along with their corresponding SMPL body model. Our experiments demonstrate that EventEgo3D++ achieves superior 3D accuracy and robustness compared to existing solutions, even in challenging conditions. Moreover, our method supports real-time 3D pose updates at a rate of 140 Hz. This work is an extension of the EventEgo3D approach (CVPR 2024) and further advances the state of the art in egocentric 3D human motion capture. For more details, visit the project page at https://eventego3d.mpi-inf.mpg.de. |
| 2502.07870 | TextAtlas5M: A Large-scale Dataset for Dense Text Image Generation | cs.CV | Text-conditioned image generation has gained significant attention in recent years, and models now process increasingly long and comprehensive text prompts. In everyday life, dense and intricate text appears in contexts like advertisements, infographics, and signage, where the integration of both text and visuals is essential for conveying complex information. However, despite these advances, the generation of images containing long-form text remains a persistent challenge, largely due to the limitations of existing datasets, which often focus on shorter and simpler text. To address this gap, we introduce TextAtlas5M, a novel dataset specifically designed to evaluate long-text rendering in text-conditioned image generation. Our dataset consists of 5 million long-text generated and collected images across diverse data types, enabling comprehensive evaluation of large-scale generative models on long-text image generation. We further curate TextAtlasEval, a 3000-sample human-improved test set spanning 3 data domains, establishing one of the most extensive benchmarks for text-conditioned generation. Evaluations suggest that the TextAtlasEval benchmark presents significant challenges even for the most advanced proprietary models (e.g., GPT-4o with DALL-E 3), while their open-source counterparts show an even larger performance gap. This evidence positions TextAtlas5M as a valuable dataset for training and evaluating future-generation text-conditioned image generation models. |
| 2502.07889 | A unifying account of warm start guarantees for patches of quantum landscapes | quant-ph cs.LG stat.ML | Barren plateaus are fundamentally a statement about quantum loss landscapes on average, but there can, and generally will, exist patches of barren plateau landscapes with substantial gradients. Previous work has studied certain classes of parameterized quantum circuits and found example regions where gradients vanish at worst polynomially in system size. Here we present a general bound that unifies all these previous cases and that can tackle physically motivated ansätze that could not be analyzed previously. Concretely, we analytically prove a lower bound on the variance of the loss that can be used to show that, in a non-exponentially narrow region around a point with curvature, the loss variance cannot decay exponentially fast. This result is complemented by numerics and an upper bound that suggest that any loss function with a barren plateau will have exponentially vanishing gradients in any constant-radius subregion. Our work thus suggests that while there are hopes to be able to warm-start variational quantum algorithms, any initialization strategy that cannot get increasingly close to the region of attraction with increasing problem size is likely inadequate. |
| 2502.07891 | The Observational Partial Order of Causal Structures with Latent Variables | stat.ML cs.LG quant-ph | For two causal structures with the same set of visible variables, one is said to observationally dominate the other if the set of distributions over the visible variables realizable by the first contains the set of distributions over the visible variables realizable by the second. Knowing such dominance relations is useful for adjudicating between these structures given observational data. We here consider the problem of determining the partial order of equivalence classes of causal structures with latent variables relative to observational dominance. We provide a complete characterization of the dominance order in the case of three visible variables, and a partial characterization in the case of four visible variables. Our techniques also help to identify which observational equivalence classes have a set of realizable distributions that is characterized by nontrivial inequality constraints, analogous to Bell inequalities and instrumental inequalities. We find evidence that as one increases the number of visible variables, the equivalence classes satisfying nontrivial inequality constraints become ubiquitous. (Because such classes are the ones for which there can be a difference in the distributions that are quantumly and classically realizable, this implies that the potential for quantum-classical gaps is also ubiquitous.) Furthermore, we find evidence that constraint-based causal discovery algorithms that rely solely on conditional independence constraints have a significantly weaker distinguishing power among observational equivalence classes than algorithms that go beyond these (i.e., algorithms that also leverage nested Markov constraints and inequality constraints). |
| 2502.07904 | Intelligent Legal Assistant: An Interactive Clarification System for Legal Question Answering | cs.CL | The rise of large language models has opened new avenues for users seeking legal advice. However, users often lack professional legal knowledge, which can lead to questions that omit critical information. This deficiency makes it challenging for traditional legal question-answering systems to accurately identify users' actual needs, often resulting in imprecise or generalized advice. In this work, we develop a legal question-answering system called Intelligent Legal Assistant, which interacts with users to precisely capture their needs. When a user poses a question, the system requests that the user select their geographical location to pinpoint the applicable laws. It then generates clarifying questions and options based on the key information missing from the user's initial question. This allows the user to select and provide the necessary details. Once all necessary information is provided, the system produces an in-depth legal analysis encompassing three aspects: overall conclusion, jurisprudential analysis, and resolution suggestions. |
| 2502.07905 | DeepSeek on a Trip: Inducing Targeted Visual Hallucinations via Representation Vulnerabilities | cs.CV cs.LG | Multimodal Large Language Models (MLLMs) represent the cutting edge of AI technology, with DeepSeek models emerging as a leading open-source alternative offering competitive performance to closed-source systems. While these models demonstrate remarkable capabilities, their vision-language integration mechanisms introduce specific vulnerabilities. We implement an adapted embedding manipulation attack on DeepSeek Janus that induces targeted visual hallucinations through systematic optimization of image embeddings. Through extensive experimentation across COCO, DALL-E 3, and SVIT datasets, we achieve hallucination rates of up to 98.0% while maintaining high visual fidelity (SSIM > 0.88) of the manipulated images on open-ended questions. Our analysis demonstrates that both 1B and 7B variants of DeepSeek Janus are susceptible to these attacks, with closed-form evaluation showing consistently higher hallucination rates compared to open-ended questioning. We introduce a novel multi-prompt hallucination detection framework using LLaMA-3.1 8B Instruct for robust evaluation. The implications of these findings are particularly concerning given DeepSeek's open-source nature and widespread deployment potential. This research emphasizes the critical need for embedding-level security measures in MLLM deployment pipelines and contributes to the broader discussion of responsible AI implementation. |
| 2502.07912 | Elevating Legal LLM Responses: Harnessing Trainable Logical Structures and Semantic Knowledge with Legal Reasoning | cs.CL | Large Language Models (LLMs) have achieved impressive results across numerous domains, yet they experience notable deficiencies in legal question-answering tasks. LLMs often generate generalized responses that lack the logical specificity required for expert legal advice and are prone to hallucination, providing answers that appear correct but are unreliable. Retrieval-Augmented Generation (RAG) techniques offer partial solutions to address this challenge, but existing approaches typically focus only on semantic similarity, neglecting the logical structure essential to legal reasoning. In this paper, we propose the Logical-Semantic Integration Model (LSIM), a novel supervised framework that bridges semantic and logical coherence. LSIM comprises three components: reinforcement learning predicts a structured fact-rule chain for each question, a trainable Deep Structured Semantic Model (DSSM) retrieves the most relevant candidate questions by integrating semantic and logical features, and in-context learning generates the final answer using the retrieved content. Our experiments on a real-world legal QA dataset, validated through both automated metrics and human evaluation, demonstrate that LSIM significantly enhances accuracy and reliability compared to existing methods. |
| 2502.07922 | Visual-Haptic Model Mediated Teleoperation for Remote Ultrasound | cs.RO cs.HC | Tele-ultrasound has the potential to greatly improve health equity for countless remote communities. However, practical scenarios involve potentially large time delays which cause current implementations of telerobotic ultrasound (US) to fail. Using a local model of the remote environment to provide haptics to the expert operator can decrease teleoperation instability, but the delayed visual feedback remains problematic. This paper introduces a robotic tele-US system in which the local model is not only haptic, but also visual, by re-slicing and rendering a pre-acquired US sweep in real time to provide the operator a preview of what the delayed image will resemble. A prototype system is presented and tested with 15 volunteer operators. It is found that visual-haptic model-mediated teleoperation (MMT) compensates completely for time delays up to 1000 ms round trip in terms of operator effort and completion time, while conventional MMT does not. Visual-haptic MMT also significantly outperforms conventional MMT for longer time delays in terms of motion accuracy and force control. This proof-of-concept study suggests that visual-haptic MMT may facilitate remote robotic tele-US. |
2502.07923
|
Sign Operator for Coping with Heavy-Tailed Noise: High Probability
Convergence Bounds with Extensions to Distributed Optimization and Comparison
Oracle
|
math.OC cs.LG
|
The growing popularity of AI optimization problems involving severely
corrupted data has increased the demand for methods capable of handling
heavy-tailed noise, i.e., noise with bounded $\kappa$-th moment, $\kappa \in
(1,2]$. For the widely used clipping technique, effectiveness heavily depends
on the careful tuning of clipping levels throughout training. In this paper, we
demonstrate that using only the sign of the input, without introducing
additional hyperparameters, is sufficient to cope with heavy-tailed noise
effectively. For smooth non-convex functions, we prove that SignSGD achieves
optimal sample complexity $\tilde{O}\left(\varepsilon^{-\frac{3\kappa -
2}{\kappa - 1}}\right)$ with high probability for attaining an average gradient
norm accuracy of $\varepsilon$. Under the assumption of symmetric noise, we use
SignSGD with Majority Voting to extend this bound to the distributed
optimization or reduce the sample complexity to $\tilde{O}(\varepsilon^{-4})$
in the case of a single worker with arbitrary parameters. Furthermore, we
explore the application of the sign operator in zeroth-order optimization with
an oracle that can only compare function values at two different points. We
propose a novel method, MajorityVote-CompsSGD, and provide the first-known
high-probability bound $\tilde{O}(\varepsilon^{-6})$ for the number of
comparisons under symmetric noise assumption. Our theoretical findings are
supported by the superior performance of sign-based methods in training Large
Language Models.
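A minimal sketch of the sign-based update the abstract describes, under heavy-tailed (infinite-variance) gradient noise. This is illustrative, not the authors' implementation; the test function, step size, and noise model are assumptions.

```python
import numpy as np

def signsgd_step(params, grad, lr):
    """One SignSGD update: move by the sign of the (possibly noisy) gradient.

    Using only the sign discards the gradient's magnitude, which is what makes
    the method insensitive to heavy-tailed noise without extra hyperparameters
    such as a clipping level.
    """
    return params - lr * np.sign(grad)

# Toy run: minimize f(x) = ||x||^2 / 2 under heavy-tailed gradient noise.
rng = np.random.default_rng(0)
x = np.ones(10)
for t in range(2000):
    noise = rng.standard_t(df=1.5, size=x.shape)  # bounded kappa-th moment only
    g = x + 0.1 * noise                           # true gradient is x itself
    x = signsgd_step(x, g, lr=0.01)
print(float(np.linalg.norm(x)))  # final distance to the optimum (started at ~3.16)
```

Because each outlier moves the iterate by at most `lr` per coordinate, a single huge noise sample cannot derail the trajectory, which is the intuition behind the robustness claims.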
|
2502.07924
|
NDAI Agreements
|
econ.TH cs.AI
|
We study a fundamental challenge in the economics of innovation: an inventor
must reveal details of a new idea to secure compensation or funding, yet such
disclosure risks expropriation. We present a model in which a seller (inventor)
and buyer (investor) bargain over an information good under the threat of
hold-up. In the classical setting, the seller withholds disclosure to avoid
misappropriation, leading to inefficiency. We show that trusted execution
environments (TEEs) combined with AI agents can mitigate and even fully
eliminate this hold-up problem. By delegating the disclosure and payment
decisions to tamper-proof programs, the seller can safely reveal the invention
without risking expropriation, achieving full disclosure and an efficient ex
post transfer. Moreover, even if the invention's value exceeds a threshold that
TEEs can fully secure, partial disclosure still improves outcomes compared to
no disclosure. Recognizing that real AI agents are imperfect, we model "agent
errors" in payments or disclosures and demonstrate that budget caps and
acceptance thresholds suffice to preserve most of the efficiency gains.
Our results imply that cryptographic or hardware-based solutions can function
as an "ironclad NDA," substantially mitigating the fundamental
disclosure-appropriation paradox first identified by Arrow (1962) and Nelson
(1959). This has far-reaching policy implications for fostering R&D, technology
transfer, and collaboration.
|
2502.07931
|
Educating a Responsible AI Workforce: Piloting a Curricular Module on AI
Policy in a Graduate Machine Learning Course
|
cs.CY cs.AI
|
As artificial intelligence (AI) technologies begin to permeate diverse
fields -- from healthcare to education -- consumers, researchers and policymakers are
increasingly raising concerns about whether and how AI is regulated. It is
therefore reasonable to anticipate that alignment with principles of 'ethical'
or 'responsible' AI, as well as compliance with law and policy, will form an
increasingly important part of AI development. Yet, for the most part, the
conventional computer science curriculum is ill-equipped to prepare students
for these challenges. To this end, we seek to explore how new educational
content related to AI ethics and AI policy can be integrated into both ethics-
and technical-focused courses. This paper describes a two-lecture 'AI policy
module' that was piloted in a graduate-level introductory machine learning
course in 2024. The module, which includes an in-class active learning game, is
evaluated using data from student surveys before and after the lectures, and
pedagogical motivations and considerations are discussed. We find that the
module is successful in engaging otherwise technically-oriented students on the
topic of AI policy, increasing student awareness of the social impacts of a
variety of AI technologies and developing student interest in the field of AI
regulation.
|
2502.07934
|
Age of Information Optimization with Preemption Strategies for
Correlated Systems
|
cs.IT math.IT
|
In this paper, we examine a multi-sensor system where each sensor monitors
multiple dynamic information processes and transmits updates over a shared
communication channel. These updates may include correlated information across
the various processes. In this type of system, we analyze the impact of
preemption, where ongoing transmissions are replaced by newer updates, on
minimizing the Age of Information (AoI). While preemption is optimal in some
scenarios, its effectiveness in multi-sensor correlated systems remains an open
question. To address this, we introduce a probabilistic preemption policy,
where the source sensor's preemption decision is stochastic. We derive
closed-form expressions for the AoI and frame its optimization as a sum of
linear ratios problem, a well-known NP-hard problem. To navigate this
complexity, we establish an upper bound on the iterations using a
branch-and-bound algorithm by leveraging a reformulation of the problem. This
analysis reveals linear scalability with the number of processes and a
logarithmic dependency on the reciprocal of the error, showing that the optimal
solution can be found efficiently. Building on these findings, we show how
different correlation matrices can lead to distinct optimal preemption
strategies. Interestingly, we demonstrate that the diversity of processes
within the sensors' packets, as captured by the correlation matrix, plays a
more significant role in preemption priority than the number of updates.
|
2502.07937
|
Active Advantage-Aligned Online Reinforcement Learning with Offline Data
|
cs.LG stat.ML
|
Online reinforcement learning (RL) enhances policies through direct
interactions with the environment, but faces challenges related to sample
efficiency. In contrast, offline RL leverages extensive pre-collected data to
learn policies, but often produces suboptimal results due to limited data
coverage. Recent efforts have sought to integrate offline and online RL in
order to harness the advantages of both approaches. However, effectively
combining online and offline RL remains challenging due to issues that include
catastrophic forgetting, lack of robustness and sample efficiency. To address
these challenges, we introduce A3RL, a novel method that actively
selects data from combined online and offline sources to optimize policy
improvement. We provide a theoretical guarantee that validates the effectiveness
of our active sampling strategy and conduct thorough empirical experiments showing
that our method outperforms existing state-of-the-art online RL techniques that
utilize offline data. Our code will be publicly available at:
https://github.com/xuefeng-cs/A3RL.
|
2502.07938
|
Adapting Multilingual Embedding Models to Historical Luxembourgish
|
cs.CL
|
The growing volume of digitized historical texts requires effective semantic
search using text embeddings. However, pre-trained multilingual models,
typically evaluated on contemporary texts, face challenges with historical
digitized content due to OCR noise and outdated spellings. We explore the use
of multilingual embeddings for cross-lingual semantic search on historical
Luxembourgish, a low-resource language. We collect historical Luxembourgish
news articles spanning various time periods and use GPT-4o to segment and
translate them into closely related languages, creating 20,000 parallel
training sentences per language pair. We further create a historical bitext
mining evaluation set and find that these models struggle to perform
cross-lingual search on historical Luxembourgish. To address this, we propose a
simple adaptation method using in-domain training data, achieving up to 98\%
accuracy in cross-lingual evaluations. We release our adapted models and
historical Luxembourgish-German/French bitexts to support further research.
|
2502.07939
|
Discrete Markov Probabilistic Models
|
stat.ML cs.LG
|
This paper introduces the Discrete Markov Probabilistic Model (DMPM), a novel
algorithm for discrete data generation. The algorithm operates in the space of
bits $\{0,1\}^d$, where the noising process is a continuous-time Markov chain
that can be sampled exactly via a Poissonian clock that flips labels uniformly
at random. The time-reversal process, like the forward noise process, is a jump
process, with its intensity governed by a discrete analogue of the classical
score function. Crucially, this intensity is proven to be the conditional
expectation of a function of the forward process, strengthening its theoretical
alignment with score-based generative models while ensuring robustness and
efficiency. We further establish convergence bounds for the algorithm under
minimal assumptions and demonstrate its effectiveness through experiments on
low-dimensional Bernoulli-distributed datasets and high-dimensional binary
MNIST data. The results highlight its strong performance in generating discrete
structures. This work bridges theoretical foundations and practical
applications, advancing the development of effective and theoretically grounded
discrete generative modeling.
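A sketch of the forward noising process described above, using one independent Poisson clock per bit (an equivalent parameterization of flipping uniformly chosen labels); the rate and example values are assumptions, and this is not the authors' code.

```python
import numpy as np

def forward_noise(x0, t, rate=1.0, rng=None):
    """Sample a DMPM-style forward process at time t.

    Each of the d bits is flipped at the jump times of an independent Poisson
    clock; a bit's value at time t depends only on the parity of its flip
    count, so the marginal can be sampled exactly without simulating jumps.
    """
    if rng is None:
        rng = np.random.default_rng()
    flips = rng.poisson(rate * t, size=x0.shape)  # number of flips per bit
    return (x0 + flips) % 2                       # odd flip count toggles the bit

x0 = np.array([0, 1, 1, 0, 1, 0, 0, 1])
rng = np.random.default_rng(0)
xt = forward_noise(x0, t=5.0, rng=rng)
print(xt)  # for large t, xt approaches the uniform distribution on {0,1}^d
```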
|
2502.07942
|
Symbiotic Cooperation for Web Agents: Harnessing Complementary Strengths
of Large and Small LLMs
|
cs.MA cs.LG
|
Web browsing agents powered by large language models (LLMs) have shown
tremendous potential in automating complex web-based tasks. Existing approaches
typically rely on large LLMs (e.g., GPT-4o) to explore web environments and
generate trajectory data, which is then used either for demonstration retrieval
(for large LLMs) or to distill small LLMs (e.g., Llama3) in a process that
remains decoupled from the exploration. In this paper, we propose
AgentSymbiotic, an iterative framework that couples data synthesis with
task performance, yielding a "symbiotic improvement" for both large and small
LLMs. Our study uncovers a complementary dynamic between LLM types: while large
LLMs excel at generating high-quality trajectories for distillation, the
distilled small LLMs -- owing to their distinct reasoning capabilities -- often
choose actions that diverge from those of their larger counterparts. This
divergence drives the exploration of novel trajectories, thereby enriching the
synthesized data. However, we also observe that the performance of small LLMs
becomes a bottleneck in this iterative enhancement process. To address this, we
propose two innovations in LLM distillation: a speculative data synthesis
strategy that mitigates off-policy bias, and a multi-task learning approach
designed to boost the reasoning capabilities of the student LLM. Furthermore,
we introduce a Hybrid Mode for Privacy Preservation to address user privacy
concerns. Evaluated on the WEBARENA benchmark, AgentSymbiotic achieves SOTA
performance with both LLM types. Our best Large LLM agent reaches 52%,
surpassing the previous best of 45%, while our 8B distilled model demonstrates
a competitive 49%, exceeding the prior best of 28%. Code will be released upon
acceptance.
|
2502.07943
|
CREDAL: Close Reading of Data Models
|
cs.DB cs.AI cs.CY
|
Data models are necessary for the birth of data and of any data-driven
system. Indeed, every algorithm, every machine learning model, every
statistical model, and every database has an underlying data model without
which the system would not be usable. Hence, data models are excellent sites
for interrogating the (material, social, political, ...) conditions giving rise
to a data system. Towards this, drawing inspiration from literary criticism, we
propose to closely read data models in the same spirit as we closely read
literary artifacts. Close readings of data models reconnect us with, among
other things, the materiality, the genealogies, the techne, the closed nature,
and the design of technical systems.
While recognizing from literary theory that there is no one correct way to
read, it is nonetheless critical to have systematic guidance for those
unfamiliar with close readings. This is especially true for those trained in
the computing and data sciences, who too often are enculturated to set aside
the socio-political aspects of data work. A systematic methodology for reading
data models currently does not exist. To fill this gap, we present the CREDAL
methodology for close readings of data models. We detail our iterative
development process and present results of a qualitative evaluation of CREDAL
demonstrating its usability, usefulness, and effectiveness in the critical
study of data.
|
2502.07944
|
SHACL-SKOS Based Knowledge Representation of Material Safety Data Sheet
(SDS) for the Pharmaceutical Industry
|
cs.AI
|
We report the development of a knowledge representation and reasoning (KRR)
system built on hybrid SHACL-SKOS ontologies for globally harmonized system
(GHS) material Safety Data Sheets (SDS) to enhance chemical safety
communication and regulatory compliance. SDS are comprehensive documents
containing safety and handling information for chemical substances. Thus, they
are an essential part of workplace safety and risk management. However, the
vast number of Safety Data Sheets from multiple organizations, manufacturers,
and suppliers that produce and distribute chemicals makes it challenging to
centralize and access SDS documents through a single repository. To address
the underlying issues of data exchange related to chemical shipping and
handling, we construct an SDS-related controlled vocabulary with conditions
validated by SHACL, and knowledge systems of similar domains linked via SKOS.
The resulting hybrid ontologies aim to provide standardized yet adaptable
representations of SDS information, facilitating better data sharing,
retrieval, and integration across various platforms. This paper outlines our
SHACL-SKOS system architectural design and showcases our implementation for an
industrial application streamlining the generation of a composite shipping
cover sheet.
|
2502.07945
|
SurGrID: Controllable Surgical Simulation via Scene Graph to Image
Diffusion
|
cs.CV cs.LG
|
Surgical simulation offers a promising addition to conventional surgical
training. However, available simulation tools lack photorealism and rely on
hardcoded behaviour. Denoising Diffusion Models are a promising alternative for
high-fidelity image synthesis, but existing state-of-the-art conditioning
methods fall short in providing precise control or interactivity over the
generated scenes.
We introduce SurGrID, a Scene Graph to Image Diffusion Model, allowing for
controllable surgical scene synthesis by leveraging Scene Graphs. These graphs
encode the spatial and semantic information of a surgical scene's components,
which is then translated into an intermediate representation using our novel
pre-training step that explicitly captures local and global information.
Our proposed method improves the fidelity of generated images and their
coherence with the graph input over the state-of-the-art. Further, we
demonstrate the simulation's realism and controllability in a user assessment
study involving clinical experts.
Scene Graphs can be effectively used for precise and interactive conditioning
of Denoising Diffusion Models for simulating surgical scenes, enabling high
fidelity and interactive control over the generated content.
|
2502.07949
|
VSC-RL: Advancing Autonomous Vision-Language Agents with Variational
Subgoal-Conditioned Reinforcement Learning
|
cs.LG cs.AI
|
State-of-the-art (SOTA) reinforcement learning (RL) methods enable the
vision-language agents to learn from interactions with the environment without
human supervision. However, they struggle with learning inefficiencies in
tackling real-world complex sequential decision-making tasks, especially with
sparse reward signals and long-horizon dependencies. To effectively address the
issue, we introduce Variational Subgoal-Conditioned RL (VSC-RL), which
reformulates the vision-language sequential decision-making task as a
variational goal-conditioned RL problem, allowing us to leverage advanced
optimization methods to enhance learning efficiency. Specifically, VSC-RL
optimizes the SubGoal Evidence Lower BOund (SGC-ELBO), which consists of (a)
maximizing the subgoal-conditioned return via RL and (b) minimizing the
subgoal-conditioned difference with the reference policy. We theoretically
demonstrate that SGC-ELBO is equivalent to the original optimization objective,
ensuring improved learning efficiency without sacrificing performance
guarantees. Additionally, for real-world complex decision-making tasks, VSC-RL
leverages the vision-language model to autonomously decompose the goal into
feasible subgoals, enabling efficient learning. Across various benchmarks,
including challenging real-world mobile device control tasks, VSC-RL
significantly outperforms the SOTA vision-language agents, achieving superior
performance and remarkable improvement in learning efficiency.
|
2502.07951
|
Federated Self-supervised Domain Generalization for Label-efficient
Polyp Segmentation
|
cs.CV cs.DC cs.LG
|
Employing self-supervised learning (SSL) methodologies assumes paramount
significance in handling unlabeled polyp datasets when building deep
learning-based automatic polyp segmentation models. However, the intricate
privacy dynamics surrounding medical data often preclude seamless data sharing
among disparate medical centers. Federated learning (FL) emerges as a
formidable solution to this privacy conundrum, yet within the realm of FL,
optimizing model generalization stands as a pressing imperative. Robust
generalization capabilities are imperative to ensure the model's efficacy
across diverse geographical domains post-training on localized client datasets.
In this paper, a Federated self-supervised Domain Generalization method, named
LFDG, is proposed to enhance the generalization capacity of federated and
label-efficient intestinal polyp segmentation. Based on a classical
SSL method, DropPos, LFDG proposes an adversarial learning-based data
augmentation method (SSADA) to enhance the data diversity. LFDG further
proposes a relaxation module based on Source-reconstruction and
Augmentation-masking (SRAM) to maintain stability in feature learning. We have
validated LFDG on polyp images from six medical centers. Our method performs
3.80% and 3.92% better than the baseline and other recent FL and SSL methods,
respectively.
|
2502.07957
|
Intrinsic Bias is Predicted by Pretraining Data and Correlates with
Downstream Performance in Vision-Language Encoders
|
cs.AI
|
While recent work has found that vision-language models trained under the
Contrastive Language Image Pre-training (CLIP) framework contain intrinsic
social biases, the extent to which different upstream pre-training features of
the framework relate to these biases, and hence how intrinsic bias and
downstream performance are connected, has been unclear. In this work, we present
the largest comprehensive analysis to date of how the upstream pre-training
factors and downstream performance of CLIP models relate to their intrinsic
biases. Studying 131 unique CLIP models, trained on 26 datasets, using 55
architectures, and in a variety of sizes, we evaluate bias in each model using
26 well-established unimodal and cross-modal principled Embedding Association
Tests. We find that the choice of pre-training dataset is the most significant
upstream predictor of bias, whereas architectural variations have minimal
impact. Additionally, datasets curated using sophisticated filtering techniques
aimed at enhancing downstream model performance tend to be associated with
higher levels of intrinsic bias. Finally, we observe that intrinsic bias is
often significantly correlated with downstream performance ($0.3 \leq r \leq
0.8$), suggesting that models optimized for performance inadvertently learn to
amplify representational biases. Comparisons between unimodal and cross-modal
association tests reveal that social group bias depends heavily on the
modality. Our findings imply that more sophisticated strategies are needed to
address intrinsic model bias for vision-language models across the entire model
development pipeline.
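The Embedding Association Tests mentioned above follow the WEAT pattern (Caliskan et al., 2017). A minimal sketch of the WEAT-style effect size on synthetic embeddings; the exact 26 tests used in the paper differ, and the set sizes and dimensions here are illustrative assumptions.

```python
import numpy as np

def weat_effect_size(X, Y, A, B):
    """WEAT-style effect size: how differently two target concept sets X, Y
    associate with two attribute sets A, B, via mean cosine similarities,
    normalized by the pooled standard deviation (range roughly [-2, 2])."""
    def cos(u, V):
        V = V / np.linalg.norm(V, axis=1, keepdims=True)
        return V @ (u / np.linalg.norm(u))
    def s(w):  # differential association of one target embedding
        return cos(w, A).mean() - cos(w, B).mean()
    sx = np.array([s(x) for x in X])
    sy = np.array([s(y) for y in Y])
    pooled = np.concatenate([sx, sy])
    return (sx.mean() - sy.mean()) / pooled.std(ddof=1)

rng = np.random.default_rng(0)
d = 32
a, b = rng.normal(size=d), rng.normal(size=d)
A = a + 0.1 * rng.normal(size=(8, d))  # attribute set clustered near direction a
B = b + 0.1 * rng.normal(size=(8, d))
X = a + 0.3 * rng.normal(size=(8, d))  # targets aligned with A
Y = b + 0.3 * rng.normal(size=(8, d))
print(weat_effect_size(X, Y, A, B))  # large by construction, close to 2
```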
|
2502.07962
|
ESPFormer: Doubly-Stochastic Attention with Expected Sliced Transport
Plans
|
cs.LG
|
While self-attention has been instrumental in the success of Transformers, it
can lead to over-concentration on a few tokens during training, resulting in
suboptimal information flow. Enforcing doubly-stochastic constraints in
attention matrices has been shown to improve structure and balance in attention
distributions. However, existing methods rely on iterative Sinkhorn
normalization, which is computationally costly. In this paper, we introduce a
novel, fully parallelizable doubly-stochastic attention mechanism based on
sliced optimal transport, leveraging Expected Sliced Transport Plans (ESP).
Unlike prior approaches, our method enforces double stochasticity without
iterative Sinkhorn normalization, significantly enhancing efficiency. To ensure
differentiability, we incorporate a temperature-based soft sorting technique,
enabling seamless integration into deep learning models. Experiments across
multiple benchmark datasets, including image classification, point cloud
classification, sentiment analysis, and neural machine translation, demonstrate
that our enhanced attention regularization consistently improves performance
across diverse applications.
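For contrast, the iterative Sinkhorn normalization that the abstract identifies as the costly baseline can be sketched as follows; the matrix size and iteration count are illustrative, and ESP itself is a different, parallelizable construction.

```python
import numpy as np

def sinkhorn(logits, n_iters=50):
    """Project attention scores onto an (approximately) doubly-stochastic
    matrix by alternately normalizing rows and columns.

    This is the sequential, iterative procedure ESPFormer avoids; each sweep
    depends on the previous one, so it cannot be fully parallelized.
    """
    P = np.exp(logits - logits.max())
    for _ in range(n_iters):
        P /= P.sum(axis=1, keepdims=True)  # make rows sum to 1
        P /= P.sum(axis=0, keepdims=True)  # make columns sum to 1
    return P

rng = np.random.default_rng(0)
A = sinkhorn(rng.normal(size=(6, 6)))
print(A.sum(axis=0), A.sum(axis=1))  # both near the all-ones vector
```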
|
2502.07963
|
Caught in the Web of Words: Do LLMs Fall for Spin in Medical Literature?
|
cs.CL cs.AI
|
Medical research faces well-documented challenges in translating novel
treatments into clinical practice. Publishing incentives encourage researchers
to present "positive" findings, even when empirical results are equivocal.
Consequently, it is well-documented that authors often spin study results,
especially in article abstracts. Such spin can influence clinician
interpretation of evidence and may affect patient care decisions. In this
study, we ask whether the interpretation of trial results offered by Large
Language Models (LLMs) is similarly affected by spin. This is important since
LLMs are increasingly being used to trawl through and synthesize published
medical evidence. We evaluated 22 LLMs and found that they are across the board
more susceptible to spin than humans. They might also propagate spin into their
outputs: We find evidence, e.g., that LLMs implicitly incorporate spin into
plain language summaries that they generate. We also find, however, that LLMs
are generally capable of recognizing spin, and can be prompted in a way to
mitigate spin's impact on LLM outputs.
|
2502.07964
|
New tools for comparing classical and neural ODE models for tumor growth
|
cs.LG q-bio.QM
|
A new computational tool, TumorGrowth.jl, for modeling tumor growth is
introduced. The tool allows the comparison of standard textbook models, such as
General Bertalanffy and Gompertz, with some newer models, including, for the
first time, neural ODE models. As an application, we revisit a human meta-study
of non-small cell lung cancer and bladder cancer lesions, in patients
undergoing two different treatment options, to determine if previously reported
performance differences are statistically significant, and if newer, more
complex models perform any better. In a population of examples with at least
four time-volume measurements available for calibration, and an average of
about 6.3, our main conclusion is that the General Bertalanffy model has
superior performance, on average. However, where more measurements are
available, we argue that more complex models, capable of capturing rebound and
relapse behavior, may be better choices.
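The two classical models named above can be written as one-dimensional ODEs for tumor volume $v$. A sketch in one common parameterization (the package's exact forms and parameter names may differ), integrated with forward Euler for a qualitative comparison:

```python
import numpy as np

def gompertz(v, omega, lam):
    """Gompertz growth rate dv/dt = omega * v * log(lam / v)."""
    return omega * v * np.log(lam / v)

def general_bertalanffy(v, omega, lam, gamma):
    """General Bertalanffy rate dv/dt = (omega/gamma) * v * ((lam/v)**gamma - 1);
    recovers Gompertz in the limit gamma -> 0."""
    return omega / gamma * v * ((lam / v) ** gamma - 1.0)

def euler(f, v0, ts, *args):
    """Forward-Euler integration, sufficient for these smooth relaxations."""
    vs = [v0]
    for t0, t1 in zip(ts[:-1], ts[1:]):
        vs.append(vs[-1] + (t1 - t0) * f(vs[-1], *args))
    return np.array(vs)

ts = np.linspace(0.0, 10.0, 200)
vg = euler(gompertz, 0.1, ts, 1.0, 1.0)                  # saturates at lam = 1
vb = euler(general_bertalanffy, 0.1, ts, 1.0, 1.0, 1/3)  # same carrying capacity
print(vg[-1], vb[-1])  # both approach the carrying capacity lam
```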
|
2502.07968
|
Generative Risk Minimization for Out-of-Distribution Generalization on
Graphs
|
cs.LG cs.AI
|
Out-of-distribution (OOD) generalization on graphs aims at dealing with
scenarios where the test graph distribution differs from the training graph
distributions. Compared to i.i.d. data like images, the OOD generalization
problem on graph-structured data remains challenging due to the non-i.i.d.
property and complex structural information on graphs. Recently, several works
on graph OOD generalization have explored extracting invariant subgraphs that
share crucial classification information across different distributions.
Nevertheless, such a strategy could be suboptimal for entirely capturing the
invariant information, as the extraction of discrete structures could
potentially lead to the loss of invariant information or the involvement of
spurious information. In this paper, we propose an innovative framework, named
Generative Risk Minimization (GRM), designed to generate an invariant subgraph
for each input graph to be classified, instead of extraction. To address the
challenge of optimization in the absence of optimal invariant subgraphs (i.e.,
ground truths), we derive a tractable form of the proposed GRM objective by
introducing a latent causal variable, and its effectiveness is validated by our
theoretical analysis. We further conduct extensive experiments across a variety
of real-world graph datasets for both node-level and graph-level OOD
generalization, and the results demonstrate the superiority of our framework
GRM.
|
2502.07971
|
ReTreever: Tree-based Coarse-to-Fine Representations for Retrieval
|
cs.IR cs.AI cs.LG
|
Document retrieval is a core component of question-answering systems, as it
enables conditioning answer generation on new and large-scale corpora. While
effective, the standard practice of encoding documents into high-dimensional
embeddings for similarity search entails large memory and compute footprints,
and also makes it hard to inspect the inner workings of the system. In this
paper, we propose a tree-based method for organizing and representing reference
documents at various granular levels, which offers the flexibility to balance
cost and utility, and eases the inspection of the corpus content and retrieval
operations. Our method, called ReTreever, jointly learns a routing function per
internal node of a binary tree such that query and reference documents are
assigned to similar tree branches, hence directly optimizing for retrieval
performance. Our evaluations show that ReTreever generally preserves full
representation accuracy. Its hierarchical structure further provides strong
coarse representations and enhances transparency by indirectly learning
meaningful semantic groupings. Among hierarchical retrieval methods, ReTreever
achieves the best retrieval accuracy at the lowest latency, proving that this
family of techniques can be viable in practical applications.
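A sketch of the tree-routing idea: at each internal node a learned direction decides left or right, and the path prefix is a coarse code whose length trades cost for accuracy. The heap layout and the sign rule here are illustrative assumptions, not the paper's exact learned routing function.

```python
import numpy as np

def route(embedding, nodes, depth):
    """Walk a binary tree of routing directions, returning the bit path.

    `nodes` maps an internal-node index (heap layout: children of i are
    2i+1 and 2i+2) to a direction vector; the branch is chosen by the sign
    of a projection. Query and reference documents with similar embeddings
    tend to share path prefixes, enabling coarse-to-fine retrieval.
    """
    idx, path = 0, []
    for _ in range(depth):
        go_right = float(embedding @ nodes[idx]) > 0.0
        path.append(int(go_right))
        idx = 2 * idx + 1 + int(go_right)
    return path

rng = np.random.default_rng(0)
d, depth = 16, 3
nodes = {i: rng.normal(size=d) for i in range(2 ** depth - 1)}
q = rng.normal(size=d)
print(route(q, nodes, depth))  # a 3-bit coarse address for the query
```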
|
2502.07972
|
Training Sparse Mixture Of Experts Text Embedding Models
|
cs.CL cs.AI cs.IR
|
Transformer-based text embedding models have improved their performance on
benchmarks like MIRACL and BEIR by increasing their parameter counts. However,
this scaling approach introduces significant deployment challenges, including
increased inference latency and memory usage. These challenges are particularly
severe in retrieval-augmented generation (RAG) applications, where large
models' increased memory requirements constrain dataset ingestion capacity, and
their higher latency directly impacts query-time performance. While causal
language models have addressed similar efficiency challenges using Mixture of
Experts (MoE) architectures, this approach hasn't been successfully adapted to
the general text embedding setting. In this paper, we introduce Nomic Embed v2,
the first general purpose MoE text embedding model. Our model outperforms
models in the same parameter class on both monolingual and multilingual
benchmarks while also maintaining competitive performance with models twice its
size. We open-source all code, models, and evaluation data to ensure full
reproducibility of our training pipeline at
\href{https://github.com/nomic-ai/contrastors}{https://github.com/nomic-ai/contrastors}.
|
2502.07974
|
From Hazard Identification to Controller Design: Proactive and
LLM-Supported Safety Engineering for ML-Powered Systems
|
cs.SE cs.AI cs.LG
|
Machine learning (ML) components are increasingly integrated into software
products, yet their complexity and inherent uncertainty often lead to
unintended and hazardous consequences, both for individuals and society at
large. Despite these risks, practitioners seldom adopt proactive approaches to
anticipate and mitigate hazards before they occur. Traditional safety
engineering approaches, such as Failure Mode and Effects Analysis (FMEA) and
System Theoretic Process Analysis (STPA), offer systematic frameworks for early
risk identification but are rarely adopted. This position paper advocates for
integrating hazard analysis into the development of any ML-powered software
product and calls for greater support to make this process accessible to
developers. By using large language models (LLMs) to partially automate a
modified STPA process with human oversight at critical steps, we expect to
address two key challenges: the heavy dependency on highly experienced safety
engineering experts, and the time-consuming, labor-intensive nature of
traditional hazard analysis, which often impedes its integration into
real-world development workflows. We illustrate our approach with a running
example, demonstrating that many seemingly unanticipated issues can, in fact,
be anticipated.
|
2502.07975
|
Sink equilibria and the attractors of learning in games
|
cs.GT cs.LG
|
Characterizing the limit behavior -- that is, the attractors -- of learning
dynamics is one of the most fundamental open questions in game theory. In
recent work on this front, it was conjectured that the attractors of the
replicator dynamic are in one-to-one correspondence with the sink equilibria of
the game -- the sink strongly connected components of a game's preference graph
-- and it was established that they do stand in at least one-to-many
correspondence with them. We make threefold progress on the problem of
characterizing attractors. First, we show through a topological construction
that the one-to-one conjecture is false. Second, we make progress on the
attractor characterization problem for two-player games by establishing that
the one-to-one conjecture is true in the absence of a local pattern called a
weak local source -- a pattern that is absent from zero-sum games. Finally, we
look -- for the first time in this context -- at fictitious play, the
longest-studied learning dynamic, and examine to what extent the conjecture
generalizes there. We establish that under fictitious play, sink equilibria
always contain attractors (sometimes strictly), and every attractor corresponds
to a strongly connected set of nodes in the preference graph.
|
2502.07977
|
RESIST: Resilient Decentralized Learning Using Consensus Gradient
Descent
|
cs.LG math.OC stat.ML
|
Empirical risk minimization (ERM) is a cornerstone of modern machine learning
(ML), supported by advances in optimization theory that ensure efficient
solutions with provable algorithmic convergence rates, which measure the speed
at which optimization algorithms approach a solution, and statistical learning
rates, which characterize how well the solution generalizes to unseen data.
Privacy, memory, computational, and communications constraints increasingly
necessitate data collection, processing, and storage across network-connected
devices. In many applications, these networks operate in decentralized settings
where a central server cannot be assumed, requiring decentralized ML algorithms
that are both efficient and resilient. Decentralized learning, however, faces
significant challenges, including an increased attack surface for adversarial
interference during decentralized learning processes. This paper focuses on the
man-in-the-middle (MITM) attack, which can cause models to deviate
significantly from their intended ERM solutions. To address this challenge, we
propose RESIST (Resilient dEcentralized learning using conSensus gradIent
deScenT), an optimization algorithm designed to be robust against adversarially
compromised communication links. RESIST achieves algorithmic and statistical
convergence for strongly convex, Polyak-Lojasiewicz, and nonconvex ERM
problems. Experimental results demonstrate the robustness and scalability of
RESIST for real-world decentralized learning in adversarial environments.
|
2502.07978
|
A Survey of In-Context Reinforcement Learning
|
cs.LG
|
Reinforcement learning (RL) agents typically optimize their policies by
performing expensive backward passes to update their network parameters.
However, some agents can solve new tasks without updating any parameters by
simply conditioning on additional context such as their action-observation
histories. This paper surveys work on such behavior, known as in-context
reinforcement learning.
|
2502.07979
|
Joint Modelling Histology and Molecular Markers for Cancer
Classification
|
cs.CV
|
Cancers are characterized by remarkable heterogeneity and diverse prognosis.
Accurate cancer classification is essential for patient stratification and
clinical decision-making. Although digital pathology has been advancing cancer
diagnosis and prognosis, the paradigm in cancer pathology has shifted from
purely relying on histology features to incorporating molecular markers. There
is an urgent need for digital pathology methods that meet the demands of the new
paradigm. We introduce a novel digital pathology approach to jointly predict
molecular markers and histology features and model their interactions for
cancer classification. Firstly, to mitigate the challenge of
cross-magnification information propagation, we propose a multi-scale
disentangling module, enabling the extraction of multi-scale features from
high-magnification (cellular-level) to low-magnification (tissue-level) whole
slide images. Further, based on the multi-scale features, we propose an
attention-based hierarchical multi-task multi-instance learning framework to
simultaneously predict histology and molecular markers. Moreover, we propose a
co-occurrence probability-based label correlation graph network to model the
co-occurrence of molecular markers. Lastly, we design a cross-modal interaction
module with the dynamic confidence constraint loss and a cross-modal gradient
modulation strategy, to model the interactions of histology and molecular
markers. Our experiments demonstrate that our method outperforms other
state-of-the-art methods in classifying glioma and predicting histology features
and molecular markers. Our method promises to promote precise oncology with the
potential to advance biomedical research and clinical applications. The code is
available at https://github.com/LHY1007/M3C2
|
2502.07980
|
CIRCUIT: A Benchmark for Circuit Interpretation and Reasoning
Capabilities of LLMs
|
cs.LG cs.AI
|
The role of Large Language Models (LLMs) has not been extensively explored in
analog circuit design, which could benefit from a reasoning-based approach that
transcends traditional optimization techniques. In particular, despite their
growing relevance, there are no benchmarks to assess LLMs' reasoning capability
about circuits. Therefore, we created the CIRCUIT dataset consisting of 510
question-answer pairs spanning various levels of analog-circuit-related
subjects. The best-performing model on our dataset, GPT-4o, achieves 48.04%
accuracy when evaluated on the final numerical answer. To evaluate the
robustness of LLMs on our dataset, we introduced a unique feature that enables
unit-test-like evaluation by grouping questions into unit tests. In this case,
GPT-4o can only pass 27.45% of the unit tests, highlighting that the most
advanced LLMs still struggle with understanding circuits, which requires
multi-level reasoning, particularly when involving circuit topologies. This
circuit-specific benchmark highlights LLMs' limitations, offering valuable
insights for advancing their application in analog integrated circuit design.
|
2502.07982
|
Deep Semantic Graph Learning via LLM based Node Enhancement
|
cs.AI
|
Graph learning has attracted significant attention due to its widespread
real-world applications. Current mainstream approaches rely on text node
features and obtain initial node embeddings through shallow embedding learning
using GNNs, which shows limitations in capturing deep textual semantics. Recent
advances in Large Language Models (LLMs) have demonstrated superior
capabilities in understanding text semantics, transforming traditional text
feature processing. This paper proposes a novel framework that combines Graph
Transformer architecture with LLM-enhanced node features. Specifically, we
leverage LLMs to generate rich semantic representations of text nodes, which
are then processed by a multi-head self-attention mechanism in the Graph
Transformer to capture both local and global graph structural information. Our
model utilizes the Transformer's attention mechanism to dynamically aggregate
neighborhood information while preserving the semantic richness provided by LLM
embeddings. Experimental results demonstrate that the LLM-enhanced node
features significantly improve the performance of graph learning models on node
classification tasks. This approach shows promising results across multiple
graph learning tasks, offering a practical direction for combining graph
networks with language models.
|
2502.07985
|
MetaSC: Test-Time Safety Specification Optimization for Language Models
|
cs.CL cs.AI
|
We propose a novel dynamic safety framework that optimizes language model
(LM) safety reasoning at inference time without modifying model weights.
Building on recent advances in self-critique methods, our approach leverages a
meta-critique mechanism that iteratively updates safety prompts, termed
specifications, to drive the critique and revision process adaptively. This
test-time optimization improves performance not only against adversarial
jailbreak requests but also in diverse general safety-related tasks, such as
avoiding moral harm or pursuing honest responses. Our empirical evaluations
across several language models demonstrate that dynamically optimized safety
prompts yield significantly higher safety scores compared to fixed system
prompts and static self-critique defenses. Code to be released at
https://github.com/vicgalle/meta-self-critique.git .
|
2502.07987
|
Universal Adversarial Attack on Aligned Multimodal LLMs
|
cs.AI
|
We propose a universal adversarial attack on multimodal Large Language Models
(LLMs) that leverages a single optimized image to override alignment safeguards
across diverse queries and even multiple models. By backpropagating through the
vision encoder and language head, we craft a synthetic image that forces the
model to respond with a targeted phrase (e.g., ''Sure, here it is'') or
otherwise unsafe content, even for harmful prompts. In experiments on the
SafeBench benchmark, our method achieves significantly higher attack success
rates than existing baselines, including text-only universal prompts (e.g., up
to 93% on certain models). We further demonstrate cross-model transferability
by training on several multimodal LLMs simultaneously and testing on unseen
architectures. Additionally, a multi-answer variant of our approach produces
more natural-sounding (yet still malicious) responses. These findings
underscore critical vulnerabilities in current multimodal alignment and call
for more robust adversarial defenses. We will release code and datasets under
the Apache-2.0 license. Warning: some content generated by Multimodal LLMs in
this paper may be offensive to some readers.
|
2502.07990
|
Learning Effective Dynamics across Spatio-Temporal Scales of Complex
Flows
|
cs.LG physics.comp-ph physics.flu-dyn
|
Modeling and simulation of complex fluid flows with dynamics that span
multiple spatio-temporal scales is a fundamental challenge in many scientific
and engineering domains. Full-scale resolving simulations for systems such as
highly turbulent flows are not feasible in the foreseeable future, and
reduced-order models must capture dynamics that involve interactions across
scales. In the present work, we propose a novel framework, Graph-based Learning
of Effective Dynamics (Graph-LED), that leverages graph neural networks (GNNs),
as well as an attention-based autoregressive model, to extract the effective
dynamics from a small amount of simulation data. GNNs represent flow fields on
unstructured meshes as graphs and effectively handle complex geometries and
non-uniform grids. The proposed method combines a GNN-based dimensionality
reduction for variable-size unstructured meshes with an autoregressive temporal
attention model that can learn temporal dependencies automatically. We
evaluated the proposed approach on a suite of fluid dynamics problems,
including flow past a cylinder and flow over a backward-facing step over a
range of Reynolds numbers. The results demonstrate robust and effective
forecasting of spatio-temporal physics; in the case of the flow past a
cylinder, both the small-scale effects that occur close to the cylinder and its
wake are accurately captured.
|
2502.07993
|
What is a Sketch-and-Precondition Derivation for Low-Rank Approximation?
Inverse Power Error or Inverse Power Estimation?
|
math.NA cs.CC cs.LG cs.NA stat.CO stat.ML
|
Randomized sketching accelerates large-scale numerical linear algebra by
reducing computational complexity. While the traditional sketch-and-solve
approach reduces the problem size directly through sketching, the
sketch-and-precondition method leverages sketching to construct a
computationally friendly preconditioner. This preconditioner improves the convergence speed of
iterative solvers applied to the original problem, maintaining accuracy in the
full space. Furthermore, the convergence rate of the solver improves at least
linearly with the sketch size. Despite its potential, developing a
sketch-and-precondition framework for randomized algorithms in low-rank matrix
approximation remains an open challenge. We introduce the Error-Powered
Sketched Inverse Iteration (EPSI) method, which runs a sketched Newton iteration
on the Lagrange form, as a sketch-and-precondition variant for randomized low-rank
approximation. Our method achieves theoretical guarantees, including a
convergence rate that improves at least linearly with the sketch size.
|
2502.07998
|
Adaptive kernel predictors from feature-learning infinite limits of
neural networks
|
cs.LG cond-mat.dis-nn stat.ML
|
Previous influential work showed that infinite width limits of neural
networks in the lazy training regime are described by kernel machines. Here, we
show that neural networks trained in the rich, feature learning infinite-width
regime in two different settings are also described by kernel machines, but
with data-dependent kernels. For both cases, we provide explicit expressions
for the kernel predictors and prescriptions to numerically calculate them. To
derive the first predictor, we study the large-width limit of feature-learning
Bayesian networks, showing how feature learning leads to task-relevant
adaptation of layer kernels and preactivation densities. The saddle point
equations governing this limit result in a min-max optimization problem that
defines the kernel predictor. To derive the second predictor, we study gradient
flow training of randomly initialized networks trained with weight decay in the
infinite-width limit using dynamical mean field theory (DMFT). The fixed point
equations of the arising DMFT define the task-adapted internal representations
and the kernel predictor. We compare our kernel predictors to kernels derived
from the lazy regime and demonstrate that our adaptive kernels achieve lower test
loss on benchmark datasets.
|
2502.08001
|
Unveiling Client Privacy Leakage from Public Dataset Usage in Federated
Distillation
|
cs.CR cs.LG
|
Federated Distillation (FD) has emerged as a popular federated training
framework, enabling clients to collaboratively train models without sharing
private data. Public Dataset-Assisted Federated Distillation (PDA-FD), which
leverages public datasets for knowledge sharing, has become widely adopted.
Although PDA-FD enhances privacy compared to traditional Federated Learning, we
demonstrate that the use of public datasets still poses significant privacy
risks to clients' private training data. This paper presents the first
comprehensive privacy analysis of PDA-FD in the presence of an honest-but-curious
server. We show that the server can exploit clients' inference results on
public datasets to extract two critical types of private information: label
distributions and membership information of the private training dataset. To
quantify these vulnerabilities, we introduce two novel attacks specifically
designed for the PDA-FD setting: a label distribution inference attack and
innovative membership inference methods based on Likelihood Ratio Attack
(LiRA). Through extensive evaluation of three representative PDA-FD frameworks
(FedMD, DS-FL, and Cronus), our attacks achieve state-of-the-art performance,
with label distribution attacks reaching minimal KL-divergence and membership
inference attacks maintaining high True Positive Rates under low False Positive
Rate constraints. Our findings reveal significant privacy risks in current
PDA-FD frameworks and emphasize the need for more robust privacy protection
mechanisms in collaborative learning systems.
|
2502.08003
|
Heterogeneous Multi-agent Multi-armed Bandits on Stochastic Block Models
|
cs.LG
|
We study a novel heterogeneous multi-agent multi-armed bandit problem with a
cluster structure induced by stochastic block models, influencing not only
graph topology, but also reward heterogeneity. Specifically, agents are
distributed on random graphs based on stochastic block models - a generalized
Erdos-Renyi model with heterogeneous edge probabilities: agents are grouped
into clusters (known or unknown); edge probabilities for agents within the same
cluster differ from those across clusters. In addition, the cluster structure
in the stochastic block model also determines our heterogeneous rewards. Reward
distributions of the same arm vary across agents in different clusters but
remain consistent within a cluster, unifying homogeneous and heterogeneous
settings with varying degrees of heterogeneity; rewards are independent
samples from these distributions. The objective is to minimize system-wide
regret across all agents. To address this, we propose a novel algorithm
applicable to both known and unknown cluster settings. The algorithm combines
an averaging-based consensus approach with a newly introduced information
aggregation and weighting technique, resulting in a UCB-type strategy. It
accounts for graph randomness, leverages both intra-cluster (homogeneous) and
inter-cluster (heterogeneous) information from rewards and graphs, and
incorporates cluster detection for unknown cluster settings. We derive optimal
instance-dependent regret upper bounds of order $\log{T}$ under sub-Gaussian
rewards. Importantly, our regret bounds capture the degree of heterogeneity in
the system (an additional layer of complexity), exhibit smaller constants,
scale better for large systems, and impose significantly relaxed assumptions on
edge probabilities. In contrast, prior works have not accounted for this
refined problem complexity, rely on more stringent assumptions, and exhibit
limited scalability.
|
2502.08004
|
Optimizing Likelihoods via Mutual Information: Bridging Simulation-Based
Inference and Bayesian Optimal Experimental Design
|
stat.ML cs.LG
|
Simulation-based inference (SBI) is a method to perform inference on a
variety of complex scientific models with challenging inference (inverse)
problems. Bayesian Optimal Experimental Design (BOED) aims to efficiently use
experimental resources to make better inferences. Various stochastic
gradient-based BOED methods have been proposed as an alternative to Bayesian
optimization and other experimental design heuristics to maximize information
gain from an experiment. We demonstrate a link via mutual information bounds
between SBI and stochastic gradient-based variational inference methods that
permits BOED to be used in SBI applications as SBI-BOED. This link allows
simultaneous optimization of experimental designs and optimization of amortized
inference functions. We evaluate the pitfalls of naive design optimization
using this method in a standard SBI task and demonstrate the utility of a
well-chosen design distribution in BOED. We compare this approach on SBI-based
models in real-world simulators in epidemiology and biology, showing notable
improvements in inference.
|
2502.08005
|
Towards Training One-Step Diffusion Models Without Distillation
|
cs.LG cs.CV
|
Recent advances in one-step generative models typically follow a two-stage
process: first training a teacher diffusion model and then distilling it into a
one-step student model. This distillation process traditionally relies on both
the teacher model's score function to compute the distillation loss and its
weights for student initialization. In this paper, we explore whether one-step
generative models can be trained directly without this distillation process.
First, we show that the teacher's score function is not essential and propose a
family of distillation methods that achieve competitive results without relying
on score estimation. Next, we demonstrate that initialization from teacher
weights is indispensable for successful training. Surprisingly, we find that
this benefit is not due to an improved ``input-output'' mapping but rather to
the learned feature representations, which dominate distillation quality. Our
findings provide a better understanding of the role of initialization in
one-step model training and its impact on distillation quality.
|
2502.08006
|
Greed is Good: Guided Generation from a Greedy Perspective
|
cs.LG cs.AI stat.ML
|
Training-free guided generation is a widely used and powerful technique that
allows the end user to exert further control over the generative process of
diffusion models. In this work, we explore the guided generation from the
perspective of optimizing the solution trajectory of a neural differential
equation in a greedy manner. We present such a strategy as a unifying view on
training-free guidance by showing that the greedy strategy is a first-order
discretization of end-to-end optimization techniques. We show that a greedy
guidance strategy makes good decisions and compare it to a guidance strategy
using the ideal gradients found via the continuous adjoint equations. We then
show how other popular training-free guidance strategies can be viewed in a
unified manner from this perspective.
|
2502.08007
|
The Role of Randomness in Stability
|
cs.LG stat.ML
|
Stability is a central property in learning and statistics, promising that the
output of an algorithm $A$ does not change substantially when applied to
similar datasets $S$ and $S'$. It is an elementary fact that any sufficiently
stable algorithm (e.g.\ one returning the same result with high probability,
satisfying privacy guarantees, etc.) must be randomized. This raises a natural
question: can we quantify how much randomness is needed for algorithmic
stability?
We study the randomness complexity of two influential notions of stability in
learning: replicability, which promises $A$ usually outputs the same result
when run over samples from the same distribution (and shared random coins), and
differential privacy, which promises the output distribution of $A$ remains
similar under neighboring datasets. The randomness complexity of these notions
was studied recently in (Dixon et al. ICML 2024) and (Cannone et al. ITCS 2024)
for basic $d$-dimensional tasks (e.g. estimating the bias of $d$ coins), but
little is known about the measures more generally or in complex settings like
classification.
Toward this end, we prove a `weak-to-strong' boosting theorem for stability:
the randomness complexity of a task $M$ (either under replicability or DP) is
tightly controlled by the best replication probability of any deterministic
algorithm solving the task, a weak measure called `global stability' that is
universally capped at $\frac{1}{2}$ (Chase et al. FOCS 2023). Using this, we
characterize the randomness complexity of PAC Learning: a class has bounded
randomness complexity iff it has finite Littlestone dimension; moreover, this
complexity scales at worst logarithmically in the excess error of the learner. This
resolves a question of (Chase et al. STOC 2024) who asked for such a
characterization in the equivalent language of (error-dependent)
`list-replicability'.
|
2502.08008
|
An Interactive Framework for Implementing Privacy-Preserving Federated
Learning: Experiments on Large Language Models
|
cs.LG cs.CR
|
Federated learning (FL) enhances privacy by keeping user data on local
devices. However, emerging attacks have demonstrated that the updates shared by
users during training can reveal significant information about their data. This
has greatly hindered the adoption of FL methods for training robust AI models in
sensitive applications. Differential Privacy (DP) is considered the gold
standard for safeguarding user data. However, DP guarantees are highly
conservative, providing worst-case privacy guarantees. This can result in
overestimating privacy needs, which may compromise the model's accuracy.
Additionally, interpretations of these privacy guarantees have proven to be
challenging in different contexts. This is further exacerbated when other
factors, such as the number of training iterations, data distribution, and
specific application requirements, can add further complexity to this problem.
In this work, we propose a framework that integrates a human entity as a
privacy practitioner to determine an optimal trade-off between the model's
privacy and utility. Our framework is the first to address the variable memory
requirement of existing DP methods in FL settings, where resource-limited
devices (e.g., cell phones) can participate. To support such settings, we adopt
a recent DP method with fixed memory usage to ensure scalable private FL. We
evaluated our proposed framework by fine-tuning a BERT-based LLM model using
the GLUE dataset (a common approach in the literature), leveraging the new
accountant, and employing diverse data partitioning strategies to mimic
real-world conditions. As a result, we achieved stable memory usage, with an
average accuracy reduction of 1.33% for $\epsilon = 10$ and 1.9% for $\epsilon
= 6$, when compared to the state-of-the-art DP accountant which does not
support fixed memory usage.
|
2502.08009
|
The Geometry of Prompting: Unveiling Distinct Mechanisms of Task
Adaptation in Language Models
|
cs.CL
|
Decoder-only language models have the ability to dynamically switch between
various computational tasks based on input prompts. Despite many successful
applications of prompting, there is very limited understanding of the internal
mechanism behind such flexibility. In this work, we investigate how different
prompting methods affect the geometry of representations in these models.
Employing a framework grounded in statistical physics, we reveal that various
prompting techniques, while achieving similar performance, operate through
distinct representational mechanisms for task adaptation. Our analysis
highlights the critical role of input distribution samples and label semantics
in few-shot in-context learning. We also demonstrate evidence of synergistic
and interfering interactions between different tasks on the representational
level. Our work contributes to the theoretical understanding of large language
models and lays the groundwork for developing more effective,
representation-aware prompting strategies.
|
2502.08011
|
Training-Free Safe Denoisers for Safe Use of Diffusion Models
|
cs.AI
|
There is growing concern over the safety of powerful diffusion models (DMs),
as they are often misused to produce inappropriate, not-safe-for-work (NSFW)
content or generate copyrighted material or data of individuals who wish to be
forgotten. Many existing methods tackle these issues by heavily relying on
text-based negative prompts or extensively retraining DMs to eliminate certain
features or samples. In this paper, we take a radically different approach,
directly modifying the sampling trajectory by leveraging a negation set (e.g.,
unsafe images, copyrighted data, or datapoints that need to be excluded) to avoid
specific regions of data distribution, without needing to retrain or fine-tune
DMs. We formally derive the relationship between the expected denoised samples
that are safe and those that are not safe, leading to our $\textit{safe}$
denoiser which ensures its final samples are away from the area to be negated.
Inspired by the derivation, we develop a practical algorithm that successfully
produces high-quality samples while avoiding negation areas of the data
distribution in text-conditional, class-conditional, and unconditional image
generation scenarios. These results hint at the great potential of our
training-free safe denoiser for using DMs more safely.
|
2502.08020
|
Speculate, then Collaborate: Fusing Knowledge of Language Models during
Decoding
|
cs.CL cs.AI
|
Large Language Models (LLMs) often excel in specific domains but fall short
in others due to the limitations of their training. Thus, enabling LLMs to
solve problems collaboratively by integrating their complementary knowledge
promises to improve their performance across domains. To realize this
potential, we introduce a novel Collaborative Speculative Decoding (CoSD)
algorithm that enables efficient LLM knowledge fusion at test time without
requiring additional model training. CoSD employs a draft model to generate
initial sequences and an easy-to-learn rule or decision tree to decide when to
invoke an assistant model to improve these drafts. CoSD not only enhances
knowledge fusion but also improves inference efficiency, is transferable across
domains and models, and offers greater explainability. Experimental results
demonstrate that CoSD improves accuracy by up to 10\% across benchmarks
compared to existing methods, providing a scalable and effective solution for
LLM-based applications.
|
2502.08021
|
Model Selection for Off-policy Evaluation: New Algorithms and
Experimental Protocol
|
cs.LG cs.AI stat.ML
|
Holdout validation and hyperparameter tuning from data is a long-standing
problem in offline reinforcement learning (RL). A standard framework is to use
off-policy evaluation (OPE) methods to evaluate and select the policies, but
OPE either incurs exponential variance (e.g., importance sampling) or has
hyperparameters on their own (e.g., FQE and model-based). In this work we focus
on hyperparameter tuning for OPE itself, which is even more under-investigated.
Concretely, we select among candidate value functions ("model-free") or
dynamics ("model-based") to best assess the performance of a target policy. Our
contributions are twofold. We develop: (1) new model-free and model-based
selectors with theoretical guarantees, and (2) a new experimental protocol for
empirically evaluating them. Compared to the model-free protocol in prior
works, our new protocol allows for more stable generation of candidate value
functions, better control of misspecification, and evaluation of model-free and
model-based methods alike. We exemplify the protocol on a Gym environment, and
find that our new model-free selector, LSTD-Tournament, demonstrates promising
empirical performance.
|
2502.08023
|
Performance Analysis of Infrastructure Sharing Techniques in Cellular
Networks: A Percolation Theory Approach
|
eess.SY cs.SY
|
In the context of 5G, infrastructure sharing has been identified as a
potential solution to reduce the investment costs of cellular networks. In
particular, it can help low-income regions build 5G networks more affordably
and further bridge the digital divide. There are two main kinds of
infrastructure sharing: passive sharing (i.e. site sharing) and active sharing
(i.e. access sharing), which require mobile network operators (MNOs) to share
their non-electronic elements or electronic elements, respectively. Because
co-construction and sharing can achieve broader coverage with lower investment,
we use percolation theory to investigate how different sharing strategies can
deliver large-scale continuous services. First, we examine the percolation
characteristics in signal-to-interference-plus-noise ratio (SINR) coverage
graphs and the necessary conditions for percolation. Second, we propose an
'average coverage radius' to approximate the SINR graph with a low base station
(BS) density based on the Gilbert disk model. Finally, we estimate the critical
conditions of BS densities of MNOs for different sharing strategies and compare
the percolation probabilities under different infrastructure sharing
strategies.
|
2502.08024
|
Initialization Matters: Unraveling the Impact of Pre-Training on
Federated Learning
|
cs.LG cs.DC
|
Initializing with pre-trained models when learning on downstream tasks is
becoming standard practice in machine learning. Several recent works explore
the benefits of pre-trained initialization in a federated learning (FL)
setting, where the downstream training is performed at the edge clients with
heterogeneous data distribution. These works show that starting from a
pre-trained model can substantially reduce the adverse impact of data
heterogeneity on the test performance of a model trained in a federated
setting, with no changes to the standard FedAvg training algorithm. In this
work, we provide a deeper theoretical understanding of this phenomenon. To do
so, we study the class of two-layer convolutional neural networks (CNNs) and
provide bounds on the training error convergence and test error of such a
network trained with FedAvg. We introduce the notion of aligned and misaligned
filters at initialization and show that the data heterogeneity only affects
learning on misaligned filters. Starting with a pre-trained model typically
results in fewer misaligned filters at initialization, thus producing a lower
test error even when the model is trained in a federated setting with data
heterogeneity. Experiments in synthetic settings and practical FL training on
CNNs verify our theoretical findings.
|
2502.08025
|
From Brainwaves to Brain Scans: A Robust Neural Network for EEG-to-fMRI
Synthesis
|
cs.CV
|
While functional magnetic resonance imaging (fMRI) offers rich spatial
resolution, it is limited by high operational costs and significant
infrastructural demands. In contrast, electroencephalography (EEG) provides
millisecond-level precision in capturing electrical activity but lacks the
spatial resolution necessary for precise neural localization. To bridge these
gaps, we introduce E2fNet, a simple yet effective deep learning model for
synthesizing fMRI images from low-cost EEG data. E2fNet is specifically
designed to capture and translate meaningful features from EEG across electrode
channels into accurate fMRI representations. Extensive evaluations across three
datasets demonstrate that E2fNet consistently outperforms existing methods,
achieving state-of-the-art results in terms of the structural similarity index
measure (SSIM). Our findings suggest that E2fNet is a promising, cost-effective
solution for enhancing neuroimaging capabilities. The code is available at
https://github.com/kgr20/E2fNet.
|
2502.08026
|
Contextual Subspace Manifold Projection for Structural Refinement of
Large Language Model Representations
|
cs.CL
|
Internal representations within deep neural architectures encode
high-dimensional abstractions of linguistic structures, yet they often exhibit
inefficiencies in feature distribution, limiting expressiveness and
adaptability. Contextual Subspace Manifold Projection introduces a structured
refinement technique that selectively reconfigures token embeddings through
controlled subspace constraints, ensuring more stable and geometrically
well-defined feature distributions. Empirical evaluations demonstrated that the
structured intervention reduced anisotropy, leading to improved representation
compactness while preserving semantic fidelity across transformer layers.
Clustering analyses indicated that token embeddings exhibited greater feature
separability, reinforcing the hypothesis that structured projection techniques
enhance internal representation organization without sacrificing linguistic
coherence. Gradient magnitude distributions suggested that the method
introduced a smoother optimization trajectory, potentially contributing to more
stable parameter updates throughout training. Computational overhead associated
with the projection operations remained minimal, ensuring that the refinements
did not introduce significant trade-offs in model efficiency or inference
speed. Comparisons with standard embedding refinement techniques highlighted
that structured manifold constraints provided a direct mechanism for improving
representation quality without requiring additional gradient-based
optimization. Perplexity evaluations confirmed that the adjustments did not
negatively impact sequence coherence, further validating the effectiveness of
the proposed approach.
|
2502.08033
|
End-to-End Predictive Planner for Autonomous Driving with Consistency
Models
|
cs.RO cs.LG
|
Trajectory prediction and planning are fundamental components for autonomous
vehicles to navigate safely and efficiently in dynamic environments.
Traditionally, these components have often been treated as separate modules,
limiting the ability to perform interactive planning and leading to
computational inefficiency in multi-agent scenarios. In this paper, we present
a novel unified and data-driven framework that integrates prediction and
planning with a single consistency model. Trained on real-world human driving
datasets, our consistency model generates samples from high-dimensional,
multimodal joint trajectory distributions of the ego and multiple surrounding
agents, enabling end-to-end predictive planning. It effectively produces
interactive behaviors, such as proactive nudging and yielding to ensure both
safe and efficient interactions with other road users. To incorporate
additional planning constraints on the ego vehicle, we propose an alternating
direction method for multi-objective guidance in online guided sampling.
Compared to diffusion models, our consistency model achieves better performance
with fewer sampling steps, making it more suitable for real-time deployment.
Experimental results on Waymo Open Motion Dataset (WOMD) demonstrate our
method's superiority in trajectory quality, constraint satisfaction, and
interactive behavior compared to various existing approaches.
|
2502.08037
|
Franken-Adapter: Cross-Lingual Adaptation of LLMs by Embedding Surgery
|
cs.CL
|
The capabilities of Large Language Models (LLMs) in low-resource languages
lag far behind those in English, making their universal accessibility a
significant challenge. To alleviate this, we present
$\textit{Franken-Adapter}$, a modular language adaptation approach for
decoder-only LLMs with embedding surgery. Our method begins by creating
customized vocabularies for target languages and performing language adaptation
through embedding tuning on multilingual data. These pre-trained embeddings are
subsequently integrated with LLMs that have been instruction-tuned on English
alignment data to enable zero-shot cross-lingual transfer. Our experiments on
$\texttt{Gemma2}$ models with up to 27B parameters demonstrate improvements of
up to 20% across 96 languages, spanning both discriminative and generative
tasks, with minimal regressions ($<$1%) in English. Further in-depth analysis
reveals the critical role of customizing tokenizers in enhancing language
adaptation, while boosting inference efficiency. Additionally, we show the
versatility of our method by achieving a 14% improvement over a math-optimized
LLM across 20 languages, offering a modular solution to transfer reasoning
abilities across languages post hoc.
|
2502.08041
|
The Art of Misclassification: Too Many Classes, Not Enough Points
|
cs.LG cs.IT math.IT
|
Classification is a ubiquitous and fundamental problem in artificial
intelligence and machine learning, with extensive efforts dedicated to
developing more powerful classifiers and larger datasets. However, the
classification task is ultimately constrained by the intrinsic properties of
datasets, independently of computational power or model complexity. In this
work, we introduce a formal entropy-based measure of classificability, which
quantifies the inherent difficulty of a classification problem by assessing the
uncertainty in class assignments given feature representations. This measure
captures the degree of class overlap and aligns with human intuition, serving
as an upper bound on achievable classification performance.
Our results establish a theoretical limit beyond which no classifier can
improve the classification accuracy, regardless of the architecture or amount
of data, in a given problem. Our approach provides a principled framework for
understanding when classification is inherently fallible and fundamentally
ambiguous.
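The entropy-based ceiling described above can be illustrated with a generic sketch (not the paper's exact measure): given assumed class posteriors p(y|x) on a sample of points, the conditional entropy H(Y|X) quantifies class overlap, while the Bayes-optimal accuracy E_x[max_y p(y|x)] upper-bounds any classifier.

```python
import numpy as np

def classifiability(posteriors):
    """posteriors: (n_points, n_classes) array of p(y|x); rows sum to 1.

    Returns the empirical conditional entropy H(Y|X) in bits and the
    Bayes-optimal accuracy, an upper bound on any classifier's accuracy.
    """
    p = np.clip(posteriors, 1e-12, 1.0)
    h_y_given_x = -np.mean(np.sum(p * np.log2(p), axis=1))
    bayes_accuracy = np.mean(np.max(posteriors, axis=1))
    return h_y_given_x, bayes_accuracy

# Two perfectly separable points vs. two maximally overlapping ones
sep = np.array([[1.0, 0.0], [0.0, 1.0]])
ovl = np.array([[0.5, 0.5], [0.5, 0.5]])
h_sep, acc_sep = classifiability(sep)   # zero entropy, accuracy bound 1.0
h_ovl, acc_ovl = classifiability(ovl)   # one bit of entropy, bound 0.5
```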
|
2502.08045
|
Break the Checkbox: Challenging Closed-Style Evaluations of Cultural
Alignment in LLMs
|
cs.CL cs.AI cs.CY
|
A large number of studies rely on closed-style multiple-choice surveys to
evaluate cultural alignment in Large Language Models (LLMs). In this work, we
challenge this constrained evaluation paradigm and explore more realistic,
unconstrained approaches. Using the World Values Survey (WVS) and Hofstede
Cultural Dimensions as case studies, we demonstrate that LLMs exhibit stronger
cultural alignment in less constrained settings, where responses are not
forced. Additionally, we show that even minor changes, such as reordering
survey choices, lead to inconsistent outputs, exposing the limitations of
closed-style evaluations. Our findings advocate for more robust and flexible
evaluation frameworks that focus on specific cultural proxies, encouraging more
nuanced and accurate assessments of cultural alignment in LLMs.
|
2502.08047
|
WorldGUI: Dynamic Testing for Comprehensive Desktop GUI Automation
|
cs.AI cs.MA
|
Current GUI agents have achieved outstanding performance in GUI element
grounding. However, planning remains highly challenging, especially due to
sensitivity to the initial state of the environment. Specifically, slight
differences in the initial state, such as the target software not being open or
the interface not being in its default state, often lead to planning errors.
This issue is widespread in real user scenarios, but existing benchmarks fail
to evaluate it. In this paper, we present WorldGUI, a novel GUI benchmark that
designs GUI tasks with various initial states to simulate real computer-user
interactions. The benchmark spans a wide range of tasks across 10 popular
software applications, including PowerPoint, VSCode, and Adobe Acrobat. In
addition, to address the challenges of dynamic GUI automation tasks, we propose
GUI-Thinker, a holistic framework, leveraging a critique mechanism, that
effectively manages the unpredictability and complexity of GUI interactions.
Experimental results demonstrate that GUI-Thinker significantly outperforms
Claude-3.5 (Computer Use) by 14.9% in success rate on WorldGUI tasks. This
improvement underscores the effectiveness of our critical-thinking-based
framework in enhancing GUI automation. The code is available at
https://github.com/showlab/WorldGUI.
|
2502.08054
|
COMBO-Grasp: Learning Constraint-Based Manipulation for Bimanual
Occluded Grasping
|
cs.RO cs.LG
|
This paper addresses the challenge of occluded robot grasping, i.e. grasping
in situations where the desired grasp poses are kinematically infeasible due to
environmental constraints such as surface collisions. Traditional robot
manipulation approaches struggle with the complexity of non-prehensile or
bimanual strategies commonly used by humans in these circumstances.
State-of-the-art reinforcement learning (RL) methods are unsuitable due to the
inherent complexity of the task. In contrast, learning from demonstration
requires collecting a significant number of expert demonstrations, which is
often infeasible. Instead, inspired by human bimanual manipulation strategies,
where two hands coordinate to stabilise and reorient objects, we focus on a
bimanual robotic setup to tackle this challenge. In particular, we introduce
Constraint-based Manipulation for Bimanual Occluded Grasping (COMBO-Grasp), a
learning-based approach which leverages two coordinated policies: a constraint
policy trained using self-supervised datasets to generate stabilising poses and
a grasping policy trained using RL that reorients and grasps the target object.
A key contribution lies in value function-guided policy coordination.
Specifically, during RL training for the grasping policy, the constraint
policy's output is refined through gradients from a jointly trained value
function, improving bimanual coordination and task performance. Lastly,
COMBO-Grasp employs teacher-student policy distillation to effectively deploy
point cloud-based policies in real-world environments. Empirical evaluations
demonstrate that COMBO-Grasp significantly improves task success rates compared
to competitive baseline approaches, with successful generalisation to unseen
objects in both simulated and real-world environments.
|
2502.08055
|
SLVR: Securely Leveraging Client Validation for Robust Federated
Learning
|
cs.CR cs.LG
|
Federated Learning (FL) enables collaborative model training while keeping
client data private. However, exposing individual client updates makes FL
vulnerable to reconstruction attacks. Secure aggregation mitigates such privacy
risks but prevents the server from verifying the validity of each client
update, creating a privacy-robustness tradeoff. Recent efforts attempt to
address this tradeoff by enforcing checks on client updates using
zero-knowledge proofs, but they support limited predicates and often depend on
public validation data. We propose SLVR, a general framework that securely
leverages clients' private data through secure multi-party computation. By
utilizing clients' data, SLVR not only eliminates the need for public
validation data, but also enables a wider range of checks for robustness,
including cross-client accuracy validation. It also adapts naturally to
distribution shifts in client data, as it can securely keep its validation
data up to date. Our empirical evaluations show that SLVR improves robustness
against model poisoning attacks, particularly outperforming existing methods by
up to 50% under adaptive attacks. Additionally, SLVR demonstrates effective
adaptability and stable convergence under various distribution shift scenarios.
|
2502.08056
|
Cognify: Supercharging Gen-AI Workflows With Hierarchical Autotuning
|
cs.LG cs.AI cs.MA
|
Today's gen-AI workflows that involve multiple ML model calls, tool/API
calls, data retrieval, or generic code execution are often tuned manually in an
ad-hoc way that is both time-consuming and error-prone. In this paper, we
propose a systematic approach for automatically tuning gen-AI workflows. Our
key insight is that gen-AI workflows can benefit from structure, operator, and
prompt changes, but unique properties of gen-AI workflows require new
optimization techniques. We propose AdaSeek, an adaptive hierarchical search
algorithm for autotuning gen-AI workflows. AdaSeek organizes workflow tuning
methods into different layers based on the user-specified total search budget
and distributes the budget across different layers based on the complexity of
each layer. During its hierarchical search, AdaSeek redistributes the search
budget from less useful to more promising tuning configurations based on
workflow-level evaluation results. We implement AdaSeek in a workflow
autotuning framework called Cognify and evaluate Cognify using six types of
workflows such as RAG-based QA and text-to-SQL transformation. Overall, Cognify
improves these workflows' generation quality by up to 2.8x, reduces execution
monetary cost by up to 10x, and reduces end-to-end latency by 2.7x.
|
2502.08058
|
General Coded Computing: Adversarial Settings
|
cs.DC cs.LG
|
Conventional coded computing frameworks are predominantly tailored for
structured computations, such as matrix multiplication and polynomial
evaluation. Such tasks allow the reuse of tools and techniques from algebraic
coding theory to improve the reliability of distributed systems in the presence
of stragglers and adversarial servers.
This paper lays the foundation for general coded computing, which extends the
applicability of coded computing to handle a wide class of computations. In
addition, it particularly addresses the challenging problem of managing
adversarial servers. We demonstrate that, in the proposed scheme, for a system
with $N$ servers, where $\mathcal{O}(N^a)$, $a \in [0,1)$, are adversarial, the
supremum of the average approximation error over all adversarial strategies
decays at a rate of $N^{\frac{6}{5}(a-1)}$, under minimal assumptions on the
computing tasks. Furthermore, we show that within a general framework, the
proposed scheme achieves optimal adversarial robustness, in terms of maximum
number of adversarial servers it can tolerate. This marks a significant step
toward practical and reliable general coded computing. Implementation results
further validate the effectiveness of the proposed method in handling various
computations, including inference in deep neural networks.
|
2502.08059
|
On Mechanistic Circuits for Extractive Question-Answering
|
cs.CL cs.LG
|
Large language models are increasingly used to process documents and
facilitate question-answering on them. In our paper, we extract mechanistic
circuits for a real-world language modeling task, context-augmented language
modeling for extractive question-answering (QA), and examine the potential
benefits of circuits for downstream applications such as data attribution to
context information.
internal model components (e.g., attention heads, MLPs) using causal mediation
analysis techniques. Leveraging the extracted circuits, we first understand the
interplay between the model's usage of parametric memory and retrieved context
towards a better mechanistic understanding of context-augmented language
models. We then identify a small set of attention heads in our circuit which
performs reliable data attribution by default, thereby obtaining attribution
for free in just the model's forward pass. Using this insight, we then
introduce ATTNATTRIB, a fast data attribution algorithm which obtains
state-of-the-art attribution results across various extractive QA benchmarks.
Finally, we show that it is possible to steer the language model towards
answering from the context instead of from its parametric memory, by using the attribution
from ATTNATTRIB as an additional signal during the forward pass. Beyond
mechanistic understanding, our paper provides tangible applications of circuits
in the form of reliable data attribution and model steering.
|
2502.08063
|
Multi-Agent Performative Prediction Beyond the Insensitivity Assumption:
A Case Study for Mortgage Competition
|
cs.GT cs.LG
|
Performative prediction models account for feedback loops in decision-making
processes where predictions influence future data distributions. While existing
work largely assumes insensitivity of data distributions to small strategy
changes, this assumption usually fails in real-world competitive (i.e.
multi-agent) settings. For example, in Bertrand-type competitions, a small
reduction in one firm's price can lead that firm to capture the entire demand,
while all others sharply lose all of their customers.
We study a representative setting of multi-agent performative prediction in
which insensitivity assumptions do not hold, and investigate the convergence of
natural dynamics. To do so, we focus on a specific game that we call the ''Bank
Game'', where two lenders compete over interest rates and credit score
thresholds. Consumers behave as in a Bertrand competition, with each
consumer selecting the firm with the lowest interest rate that they are
eligible for based on the firms' credit thresholds. Our analysis characterizes
the equilibria of this game and demonstrates that when both firms use a common
and natural no-regret learning dynamic -- exponential weights -- with proper
initialization, the dynamics always converge to stable outcomes despite the
general-sum structure. Notably, our setting admits multiple stable equilibria,
with convergence dependent on initial conditions. We also provide theoretical
convergence results in the stochastic case when the utility matrix is not fully
known, but each learner can observe sufficiently many samples of consumers at
each time step to estimate it, showing robustness to slight mis-specifications.
Finally, we provide experimental results that validate our theoretical
findings.
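The exponential-weights dynamic referenced above can be sketched on a hypothetical 2x2 common-interest game (the payoffs are illustrative placeholders, not the Bank Game's utilities); with a slightly asymmetric initialization, the coupled updates converge to a pure stable outcome, mirroring the initialization-dependent convergence the abstract describes.

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 1.0]])  # row player's payoffs (coordination game)
B = A.copy()                            # column player's payoffs (common interest)

eta = 0.5                               # learning rate
wx = np.array([1.1, 1.0])               # slight initial tilt toward action 0
wy = np.array([1.0, 1.0])
for _ in range(200):
    x = wx / wx.sum()
    y = wy / wy.sum()
    wx = wx * np.exp(eta * (A @ y))     # reweight each action by its expected payoff
    wy = wy * np.exp(eta * (B.T @ x))
x = wx / wx.sum()
y = wy / wy.sum()                       # both concentrate on the tilted equilibrium
```

Starting instead from exactly uniform weights would leave the dynamics at the mixed point, which is one way to see why convergence here depends on initial conditions.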
|
2502.08071
|
Collaborative Filtering Meets Spectrum Shift: Connecting User-Item
Interaction with Graph-Structured Side Information
|
cs.IR
|
Graph Neural Networks (GNNs) have demonstrated their superiority in
collaborative filtering, where the user-item (U-I) interaction bipartite graph
serves as the fundamental data format. However, when graph-structured side
information (e.g., multimodal similarity graphs or social networks) is
integrated into the U-I bipartite graph, existing graph collaborative filtering
methods fall short of achieving satisfactory performance. We quantitatively
analyze this problem from a spectral perspective. Recall that a bipartite graph
possesses a full spectrum within the range of [-1, 1], with the highest
frequency exactly achievable at -1 and the lowest frequency at 1; however, we
observe that, as more side information is incorporated, the highest frequency of the
augmented adjacency matrix progressively shifts rightward. This spectrum shift
phenomenon has caused previous approaches built for the full spectrum [-1, 1]
to assign mismatched importance to different frequencies. To this end, we
propose Spectrum Shift Correction (dubbed SSC), incorporating shifting and
scaling factors to enable spectral GNNs to adapt to the shifted spectrum.
Unlike previous paradigms of leveraging side information, which necessitate
tailored designs for diverse data types, SSC directly connects traditional
graph collaborative filtering with any graph-structured side information.
Experiments on social and multimodal recommendation demonstrate the
effectiveness of SSC, achieving relative improvements of up to 23% without
incurring any additional computational overhead.
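The spectrum-shift phenomenon can be reproduced on a toy graph; the affine shift/scale correction below is a generic reconstruction of the idea, not necessarily the paper's exact SSC operator.

```python
import numpy as np

def norm_adj_spectrum(A):
    """Eigenvalues of the symmetrically normalized adjacency D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    D = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    return np.linalg.eigvalsh(D @ A @ D)

# 2 users x 2 items, fully connected bipartite block: spectrum spans [-1, 1]
A = np.zeros((4, 4))
A[:2, 2:] = 1.0
A[2:, :2] = 1.0
ev_bi = norm_adj_spectrum(A)            # minimum eigenvalue is exactly -1

A_side = A.copy()
A_side[2, 3] = A_side[3, 2] = 1.0       # add an item-item similarity edge
ev_shift = norm_adj_spectrum(A_side)    # graph is no longer bipartite: min > -1

# Shift and scale so the observed range [lmin, 1] maps back onto [-1, 1]
lmin = ev_shift.min()
corrected = (2 * ev_shift - (1 + lmin)) / (1 - lmin)
```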
|
2502.08075
|
Knowledge Swapping via Learning and Unlearning
|
cs.CV
|
We introduce \textbf{Knowledge Swapping}, a novel task designed to
selectively regulate knowledge of a pretrained model by enabling the forgetting
of user-specified information, retaining essential knowledge, and acquiring
new knowledge simultaneously. By delving into the analysis of knock-on feature
hierarchy, we find that incremental learning typically progresses from
low-level representations to higher-level semantics, whereas forgetting tends
to occur in the opposite direction, starting from high-level semantics and
moving down to low-level features. Building upon this, we propose to benchmark
the knowledge swapping task with the strategy of \textit{Learning Before
Forgetting}. Comprehensive experiments on various tasks like image
classification, object detection, and semantic segmentation validate the
effectiveness of the proposed strategy. The source code is available at
\href{https://github.com/xingmingyu123456/KnowledgeSwapping}{https://github.com/xingmingyu123456/KnowledgeSwapping}.
|
2502.08077
|
Cascading Bandits Robust to Adversarial Corruptions
|
cs.LG
|
Online learning to rank sequentially recommends a small list of items to
users from a large candidate set and receives the users' click feedback. In
many real-world scenarios, users browse the recommended list in order and click
the first attractive item without checking the rest. Such behaviors are usually
formulated as the cascade model. Many recent works study algorithms for
cascading bandits, an online learning to rank framework in the cascade model.
However, the performance of existing methods may drop significantly if part of
the user feedback is adversarially corrupted (e.g., click fraud). In this work,
we study how to resist adversarial corruptions in cascading bandits. We first
formulate the ``\textit{Cascading Bandits with Adversarial Corruptions}'' (CBAC)
problem, which assumes that there is an adaptive adversary that may manipulate
the user feedback. Then we propose two robust algorithms for this problem,
which assume the corruption level to be known or unknown (agnostic), respectively. We show
that both algorithms can achieve logarithmic regret when the algorithm is not
under attack, and the regret increases linearly with the corruption level. The
experimental results also verify the robustness of our methods.
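For readers unfamiliar with the cascade model underlying this work, a minimal simulation (with illustrative attraction probabilities) looks as follows: the user scans the ranked list top-down and clicks the first attractive item, ignoring the rest.

```python
import numpy as np

def cascade_feedback(attraction_probs, rng):
    """Return the clicked position, or -1 if the user clicks nothing."""
    for pos, p in enumerate(attraction_probs):
        if rng.random() < p:
            return pos          # examination stops at the first click
    return -1

rng = np.random.default_rng(1)
ranked_list = [0.8, 0.5, 0.3]   # attraction probability of each ranked item
clicks = [cascade_feedback(ranked_list, rng) for _ in range(10000)]
p_click_first = np.mean([c == 0 for c in clicks])
p_no_click = np.mean([c == -1 for c in clicks])
# Analytically, P(click pos 0) = 0.8 and P(no click) = 0.2 * 0.5 * 0.7 = 0.07
```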
|
2502.08079
|
MAA: Meticulous Adversarial Attack against Vision-Language Pre-trained
Models
|
cs.CV
|
Current adversarial attacks for evaluating the robustness of vision-language
pre-trained (VLP) models in multi-modal tasks suffer from limited
transferability, where attacks crafted for a specific model often struggle to
generalize effectively across different models, limiting their utility in
assessing robustness more broadly. This is mainly attributed to the
over-reliance on model-specific features and regions, particularly in the image
modality. In this paper, we propose an elegant yet highly effective method
termed Meticulous Adversarial Attack (MAA) to fully exploit model-independent
characteristics and vulnerabilities of individual samples, achieving enhanced
generalizability and reduced model dependence. MAA emphasizes fine-grained
optimization of adversarial images by developing a novel resizing and sliding
crop (RScrop) technique, incorporating a multi-granularity similarity
disruption (MGSD) strategy. Extensive experiments across diverse VLP models,
multiple benchmark datasets, and a variety of downstream tasks demonstrate that
MAA significantly enhances the effectiveness and transferability of adversarial
attacks. A large cohort of performance studies is conducted to generate
insights into the effectiveness of various model configurations, guiding future
advancements in this domain.
|
2502.08080
|
NLI under the Microscope: What Atomic Hypothesis Decomposition Reveals
|
cs.CL
|
Decomposition of text into atomic propositions is a flexible framework
allowing for the closer inspection of input and output text. We use atomic
decomposition of hypotheses in two natural language reasoning tasks,
traditional NLI and defeasible NLI, to form atomic sub-problems, or granular
inferences that models must weigh when solving the overall problem. These
atomic sub-problems serve as a tool to further understand the structure of both
NLI and defeasible reasoning, probe a model's consistency and understanding of
different inferences, and measure the diversity of examples in benchmark
datasets. Our results indicate that LLMs still struggle with logical
consistency on atomic NLI and defeasible NLI sub-problems. Lastly, we identify
critical atomic sub-problems of defeasible NLI examples, or those that most
contribute to the overall label, and propose a method to measure the
inferential consistency of a model, a metric designed to capture the degree to
which a model makes consistently correct or incorrect predictions about the
same fact under different contexts.
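A hypothetical reading of the proposed consistency metric: score the fraction of atomic facts on which a model is consistently right or consistently wrong across contexts. The helper below is a sketch under that assumption, not the paper's exact definition.

```python
def inferential_consistency(records):
    """records: dict mapping fact id -> list of booleans (correct per context).

    A fact is consistent if the model is right in every context or wrong in
    every context; the score is the fraction of consistent facts.
    """
    consistent = [all(v) or not any(v) for v in records.values()]
    return sum(consistent) / len(consistent)

# Toy predictions for three atomic facts under multiple contexts
records = {
    "f1": [True, True, True],    # consistently correct
    "f2": [False, False],        # consistently incorrect (still consistent)
    "f3": [True, False, True],   # inconsistent across contexts
}
score = inferential_consistency(records)
```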
|
2502.08083
|
Mixture of Decoupled Message Passing Experts with Entropy Constraint for
General Node Classification
|
cs.LG cs.SI
|
The varying degrees of homophily and heterophily in real-world graphs
persistently constrain the universality of graph neural networks (GNNs) for
node classification. Adopting a data-centric perspective, this work reveals an
inherent preference of different graphs towards distinct message encoding
schemes: homophilous graphs favor local propagation, while heterophilous graphs
exhibit preference for flexible combinations of propagation and transformation.
To address this, we propose GNNMoE, a universal node classification framework
based on the Mixture-of-Experts (MoE) mechanism. The framework first constructs
diverse message-passing experts through recombination of fine-grained encoding
operators, then designs soft and hard gating layers to allocate the most
suitable expert networks for each node's representation learning, thereby
enhancing both model expressiveness and adaptability to diverse graphs.
Furthermore, considering that soft gating might introduce encoding noise in
homophilous scenarios, we introduce an entropy constraint to guide sharpening
of soft gates, achieving organic integration of weighted combination and Top-K
selection. Extensive experiments demonstrate that GNNMoE significantly
outperforms mainstream GNNs, heterophilous GNNs, and graph transformers in both
node classification performance and universality across diverse graph datasets.
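The interplay between soft gating and entropy sharpening can be sketched with a temperature-scaled softmax; this is a stand-in for the paper's entropy constraint, which is applied during training rather than via a fixed temperature.

```python
import numpy as np

def gate(logits, temperature):
    """Softmax gate over experts; lower temperature yields a sharper mixture."""
    z = logits / temperature
    z = z - z.max()             # numerical stability
    p = np.exp(z)
    return p / p.sum()

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum()

logits = np.array([2.0, 1.0, 0.5, -1.0])
soft = gate(logits, temperature=1.0)    # smooth weighted combination of experts
sharp = gate(logits, temperature=0.1)   # near one-hot, approaching Top-1 selection
```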
|
2502.08089
|
A Cooperative Bearing-Rate Approach for Observability-Enhanced Target
Motion Estimation
|
cs.RO cs.SY eess.SY
|
Vision-based target motion estimation is a fundamental problem in many
robotic tasks. Existing methods suffer from low observability and hence face
challenges in tracking highly maneuverable targets. Motivated
by the aerial target pursuit task where a target may maneuver in 3D space, this
paper studies how to further enhance observability by incorporating the
\emph{bearing rate} information that has not been well explored in the
literature. The main contribution of this paper is to propose a new cooperative
estimator called STT-R (Spatial-Temporal Triangulation with bearing Rate),
which is designed under the framework of distributed recursive least squares.
The theoretical results are further verified by numerical simulations and
real-world experiments. It is shown that the proposed STT-R algorithm
generates more accurate estimates and effectively reduces the lag
in velocity estimation, enabling the tracking of more maneuverable targets.
|
2502.08092
|
GCoT: Chain-of-Thought Prompt Learning for Graphs
|
cs.CL cs.AI
|
Chain-of-thought (CoT) prompting has achieved remarkable success in natural
language processing (NLP). However, its vast potential remains largely
unexplored for graphs. This raises an interesting question: How can we design
CoT prompting for graphs to guide graph models to learn step by step? On one
hand, unlike natural languages, graphs are non-linear and characterized by
complex topological structures. On the other hand, many graphs lack textual
data, making it difficult to formulate language-based CoT prompting. In this
work, we propose the first CoT prompt learning framework for text-free graphs,
GCoT. Specifically, we decompose the adaptation process for each downstream
task into a series of inference steps, with each step consisting of
prompt-based inference, ``thought'' generation, and thought-conditioned prompt
learning. While the steps mimic CoT prompting in NLP, the exact mechanism
differs significantly. Specifically, at each step, an input graph, along with a
prompt, is first fed into a pre-trained graph encoder for prompt-based
inference. We then aggregate the hidden layers of the encoder to construct a
``thought'', which captures the working state of each node in the current step.
Conditioned on this thought, we learn a prompt specific to each node based on
the current state. These prompts are fed into the next inference step,
repeating the cycle. To evaluate and analyze the effectiveness of GCoT, we
conduct comprehensive experiments on eight public datasets, which demonstrate
the advantage of our approach.
|
2502.08093
|
Ground-Optimized 4D Radar-Inertial Odometry via Continuous Velocity
Integration using Gaussian Process
|
cs.RO
|
Radar ensures robust sensing capabilities in adverse weather conditions, yet
challenges remain due to its high inherent noise level. Existing radar odometry
has overcome these challenges with strategies such as filtering spurious
points, exploiting Doppler velocity, or integrating with inertial measurements.
This paper presents two novel improvements beyond the existing radar-inertial
odometry: ground-optimized noise filtering and continuous velocity
preintegration. Despite the widespread use of ground planes in LiDAR odometry,
imprecise ground point distributions of radar measurements cause naive plane
fitting to fail. Unlike plane fitting in LiDAR, we introduce a zone-based
uncertainty-aware ground modeling specifically designed for radar. Secondly, we
note that radar velocity measurements can be better combined with IMU for a
more accurate preintegration in radar-inertial odometry. Existing methods often
ignore temporal discrepancies between radar and IMU by simplifying the
complexities of asynchronous data streams with discretized propagation models.
To tackle this issue, we leverage Gaussian processes (GP) and formulate a continuous preintegration
method for tightly integrating 3-DOF linear velocity with IMU, facilitating
full 6-DOF motion directly from the raw measurements. Our approach demonstrates
remarkable performance (less than 1% vertical drift) in public datasets with
meticulous conditions, illustrating substantial improvement in elevation
accuracy. The code will be released as open source for the community:
https://github.com/wooseongY/Go-RIO.
|
2502.08097
|
ID-Cloak: Crafting Identity-Specific Cloaks Against Personalized
Text-to-Image Generation
|
cs.CV cs.CR
|
Personalized text-to-image models allow users to generate images of new
concepts from several reference photos, thereby leading to critical concerns
regarding civil privacy. Although several anti-personalization techniques have
been developed, these methods typically assume that defenders can afford to
design a privacy cloak corresponding to each specific image. However, given the
large number of personal images shared online, image-specific methods are of
limited use in real-world applications. To address this issue, we are the first to
investigate the creation of identity-specific cloaks (ID-Cloak) that safeguard
all images belonging to a specific identity. Specifically, we first model an
identity subspace that preserves personal commonalities and learns diverse
contexts to capture the image distribution to be protected. Then, we craft
identity-specific cloaks with the proposed novel objective that encourages the
cloak to guide the model away from its normal output within the subspace.
Extensive experiments show that the generated universal cloak can effectively
protect the images. We believe our method, along with the proposed
identity-specific cloak setting, marks a notable advance in realistic privacy
protection.
|
2502.08098
|
Unsupervised categorization of similarity measures
|
cs.LG cs.NE
|
In general, objects can be distinguished on the basis of their features, such
as color or shape. In particular, it is assumed that similarity judgments about
such features can be processed independently in different metric spaces.
However, the unsupervised categorization mechanism of metric spaces
corresponding to object features remains unknown. Here, we show that the
artificial neural network system can autonomously categorize metric spaces
through representation learning to satisfy the algebraic independence between
neural networks, and project sensory information onto multiple high-dimensional
metric spaces to independently evaluate the differences and similarities
between features. Conventional methods often constrain the axes of the latent
space to be mutually independent or orthogonal. However, the independent axes
are not suitable for categorizing metric spaces. High-dimensional metric spaces
that are independent of each other are not uniquely determined by the mutually
independent axes, because any combination of independent axes can form mutually
independent spaces. In other words, the mutually independent axes cannot be
used to naturally categorize different feature spaces, such as color space and
shape space. Therefore, constraining the axes to be mutually independent makes
it difficult to categorize high-dimensional metric spaces. To overcome this
problem, we developed a method that constrains only the spaces, and not their
constituent axes, to be mutually independent. Our theory provides
general conditions for the unsupervised categorization of independent metric
spaces, thus advancing the mathematical theory of functional differentiation of
neural networks.
|
2502.08101
|
Rethinking Tokenized Graph Transformers for Node Classification
|
cs.LG cs.AI
|
Node tokenized graph Transformers (GTs) have shown promising performance in
node classification. The generation of token sequences is the key module in
existing tokenized GTs which transforms the input graph into token sequences,
facilitating the node representation learning via Transformer. In this paper,
we observe that token sequence generation in existing GTs focuses only
on the first-order neighbors of the constructed similarity graphs, which leads
to the limited usage of nodes to generate diverse token sequences, further
restricting the potential of tokenized GTs for node classification. To this
end, we propose a new method termed SwapGT. SwapGT first introduces a novel
token swapping operation based on the characteristics of token sequences that
fully leverages the semantic relevance of nodes to generate more informative
token sequences. Then, SwapGT leverages a Transformer-based backbone to learn
node representations from the generated token sequences. Moreover, SwapGT
develops a center alignment loss to constrain the representation learning from
multiple token sequences, further enhancing the model performance. Extensive
empirical results on various datasets showcase the superiority of SwapGT for
node classification.
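The token swapping operation described above can be illustrated with a toy sketch (not the paper's algorithm; the threshold rule and similarity matrix here are hypothetical): two token sequences exchange entries at positions where the paired nodes are semantically similar, yielding more diverse sequences.

```python
import numpy as np

def swap_tokens(seq_a, seq_b, sim, threshold=0.9):
    """Toy token swapping for two equal-length token sequences
    (lists of node ids): swap position i whenever the node
    similarity sim[seq_a[i], seq_b[i]] exceeds a threshold,
    producing two new, more diverse sequences. `sim` is a
    precomputed node-similarity matrix."""
    new_a, new_b = list(seq_a), list(seq_b)
    for i in range(len(seq_a)):
        if sim[seq_a[i], seq_b[i]] >= threshold:
            new_a[i], new_b[i] = new_b[i], new_a[i]
    return new_a, new_b

# 3 nodes; nodes 1 and 2 are highly similar.
sim = np.array([[1.0, 0.2, 0.1],
                [0.2, 1.0, 0.95],
                [0.1, 0.95, 1.0]])
a, b = swap_tokens([0, 1], [0, 2], sim)
# position 1 is swapped because sim[1, 2] = 0.95 >= 0.9
```

The actual SwapGT operation is defined over Transformer token sequences with a learned notion of semantic relevance; this sketch only conveys the swap mechanic.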
|
2502.08105
|
Out-of-Distribution Detection on Graphs: A Survey
|
cs.LG
|
Graph machine learning has witnessed rapid growth, driving advancements
across diverse domains. However, the in-distribution assumption, where training
and testing data share the same distribution, often breaks in real-world
scenarios, leading to degraded model performance under distribution shifts.
This challenge has catalyzed interest in graph out-of-distribution (GOOD)
detection, which focuses on identifying graph data that deviates from the
distribution seen during training, thereby enhancing model robustness. In this
paper, we provide a rigorous definition of GOOD detection and systematically
categorize existing methods into four types: enhancement-based,
reconstruction-based, information propagation-based, and classification-based
approaches. We analyze the principles and mechanisms of each approach and
clarify the distinctions between GOOD detection and related fields, such as
graph anomaly detection, outlier detection, and GOOD generalization. Beyond
methodology, we discuss practical applications and theoretical foundations,
highlighting the unique challenges posed by graph data. Finally, we discuss the
primary challenges and propose future directions to advance this emerging
field. The repository of this survey is available at
https://github.com/ca1man-2022/Awesome-GOOD-Detection.
|
2502.08106
|
PoGDiff: Product-of-Gaussians Diffusion Models for Imbalanced
Text-to-Image Generation
|
cs.LG cs.AI cs.CV stat.ML
|
Diffusion models have made significant advancements in recent years. However,
their performance often deteriorates when trained or fine-tuned on imbalanced
datasets. This degradation is largely due to the disproportionate
representation of majority and minority data in image-text pairs. In this
paper, we propose a general fine-tuning approach, dubbed PoGDiff, to address
this challenge. Rather than directly minimizing the KL divergence between the
predicted and ground-truth distributions, PoGDiff replaces the ground-truth
distribution with a Product of Gaussians (PoG), which is constructed by
combining the original ground-truth targets with the predicted distribution
conditioned on a neighboring text embedding. Experiments on real-world datasets
demonstrate that our method effectively addresses the imbalance problem in
diffusion models, improving both generation accuracy and quality.
|
2502.08108
|
Generative AI and Empirical Software Engineering: A Paradigm Shift
|
cs.SE cs.AI
|
The widespread adoption of generative AI in software engineering marks a
paradigm shift, offering new opportunities to design and utilize software
engineering tools while influencing both developers and the artifacts they
create. Traditional empirical methods in software engineering, including
quantitative, qualitative, and mixed-method approaches, are well established.
However, this paradigm shift introduces novel data types and redefines many
concepts in the software engineering process. The roles of developers, users,
agents, and researchers increasingly overlap, blurring the distinctions between
these social and technical actors within the field.
This paper examines how integrating AI into software engineering challenges
traditional research paradigms. It focuses on the research phenomena that we
investigate, the methods and theories that we employ, the data we analyze, and
the threats to validity that emerge in this new context. Through this
exploration, our goal is to understand how AI adoption disrupts established
software development practices and creates new opportunities for empirical
software engineering research.
|
2502.08109
|
HuDEx: Integrating Hallucination Detection and Explainability for
Enhancing the Reliability of LLM responses
|
cs.CL cs.AI
|
Recent advances in large language models (LLMs) have shown promising
improvements, often surpassing existing methods across a wide range of
downstream tasks in natural language processing. However, these models still
face challenges, which may hinder their practical applicability. For example,
the phenomenon of hallucination is known to compromise the reliability of LLMs,
especially in fields that demand high factual precision. Current benchmarks
primarily focus on hallucination detection and factuality evaluation but do not
extend beyond identification. This paper proposes an explanation-enhanced
hallucination detection model, named HuDEx, aimed at enhancing the
reliability of LLM-generated responses by both detecting hallucinations and
providing detailed explanations. The proposed model offers a novel approach
that integrates detection with explanations, enabling both users and the LLM
itself to understand and reduce errors. Our measurement results demonstrate
that the proposed model surpasses larger LLMs, such as Llama3 70B and GPT-4, in
hallucination detection accuracy, while maintaining reliable explanations.
Furthermore, the proposed model performs well in both zero-shot and other test
environments, showcasing its adaptability across diverse benchmark datasets.
The proposed approach advances hallucination detection research by
integrating interpretability with detection, thereby improving the performance
and reliability of hallucination evaluation in language models.
|
2502.08115
|
Neuromorphic Digital-Twin-based Controller for Indoor Multi-UAV Systems
Deployment
|
cs.NE
|
This study introduces a novel distributed cloud-edge framework for
autonomous multi-UAV systems that combines the computational efficiency of
neuromorphic computing with nature-inspired control strategies. The proposed
architecture equips each UAV with an individual Spiking Neural Network (SNN)
that learns to reproduce optimal control signals generated by a cloud-based
controller, enabling robust operation even during communication interruptions.
By integrating spike coding with nature-inspired control principles inspired by
Tilapia fish territorial behavior, our system achieves sophisticated formation
control and obstacle avoidance in complex urban environments. The distributed
architecture leverages cloud computing for complex calculations while
maintaining local autonomy through edge-based SNNs, significantly reducing
energy consumption and computational overhead compared to traditional
centralized approaches. Our framework addresses critical limitations of
conventional methods, including the dependency on pre-modeled environments,
computational intensity of traditional methods, and local minima issues in
potential field approaches. Simulation results demonstrate the system's
effectiveness across two different scenarios: first, the indoor deployment of a
multi-UAV system made up of 15 UAVs; second, the collision-free formation
control of a moving flock of 6 UAVs with obstacle avoidance. Owing to the
sparsity of spiking patterns and the event-based nature of SNNs, the framework
achieves, on average across the whole group of UAVs, an almost 90% reduction
in computational burden compared to traditional von Neumann architectures
implementing conventional artificial neural networks.
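The spiking sparsity that drives the claimed computational savings can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron sketch (a generic model, not the paper's SNN; the leak factor and threshold are illustrative):

```python
def lif_spikes(inputs, tau=0.9, threshold=1.0):
    """Minimal leaky integrate-and-fire neuron: the membrane potential
    leaks by factor `tau` each step, integrates the input, and emits a
    spike (resetting to 0) when it crosses `threshold`. Computation is
    event-driven: downstream work is only needed at spike times."""
    v, spikes = 0.0, []
    for t, x in enumerate(inputs):
        v = tau * v + x
        if v >= threshold:
            spikes.append(t)
            v = 0.0
    return spikes

# Constant sub-threshold drive produces sparse, periodic spikes:
# only 3 events over 10 time steps need downstream processing.
spikes = lif_spikes([0.4] * 10)
```

Because downstream neurons only receive and process these sparse events rather than a dense activation at every step, event-based SNN hardware can skip most of the multiply-accumulate work a conventional ANN would perform.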
|
2502.08119
|
Generative AI-Enhanced Cooperative MEC of UAVs and Ground Stations for
Unmanned Surface Vehicles
|
cs.AI cs.RO
|
The increasing deployment of unmanned surface vehicles (USVs) requires
computational support and coverage in applications such as maritime search and
rescue. Unmanned aerial vehicles (UAVs) can offer low-cost, flexible aerial
services, and ground stations (GSs) can provide powerful supports, which can
cooperate to help the USVs in complex scenarios. However, the collaboration
between UAVs and GSs for USVs faces challenges of task uncertainties, USV
trajectory uncertainties, heterogeneities, and limited computational resources.
To address these issues, we propose a cooperative UAV and GS based robust
multi-access edge computing framework to assist USVs in completing
computational tasks. Specifically, we formulate the optimization problem of
joint task offloading and UAV trajectory to minimize the total execution time,
which is in the form of mixed integer nonlinear programming and NP-hard to
tackle. Therefore, we propose the algorithm of generative artificial
intelligence-enhanced heterogeneous agent proximal policy optimization
(GAI-HAPPO). The proposed algorithm integrates GAI models to enhance the actor
network ability to model complex environments and extract high-level features,
thereby allowing the algorithm to predict uncertainties and adapt to dynamic
conditions. Additionally, GAI stabilizes the critic network, addressing the
instability of multi-agent reinforcement learning approaches. Finally,
extensive simulations demonstrate that the proposed algorithm outperforms the
existing benchmark methods, thus highlighting its potential for tackling
intricate, cross-domain issues in the considered scenarios.
|
2502.08122
|
Hookpad Aria: A Copilot for Songwriters
|
cs.SD cs.AI cs.LG
|
We present Hookpad Aria, a generative AI system designed to assist musicians
in writing Western pop songs. Our system is seamlessly integrated into Hookpad,
a web-based editor designed for the composition of lead sheets: symbolic music
scores that describe melody and harmony. Hookpad Aria has numerous generation
capabilities designed to assist users in non-sequential composition workflows,
including: (1) generating left-to-right continuations of existing material, (2)
filling in missing spans in the middle of existing material, and (3) generating
harmony from melody and vice versa. Hookpad Aria is also a scalable data
flywheel for music co-creation -- since its release in March 2024, Aria has
generated 318k suggestions for 3k users who have accepted 74k into their songs.
More information about Hookpad Aria is available at
https://www.hooktheory.com/hookpad/aria
|
2502.08123
|
Provably Robust Federated Reinforcement Learning
|
cs.CR cs.DC cs.LG
|
Federated reinforcement learning (FRL) allows agents to jointly learn a
global decision-making policy under the guidance of a central server. While FRL
has advantages, its decentralized design makes it prone to poisoning attacks.
To mitigate this, Byzantine-robust aggregation techniques tailored for FRL have
been introduced. Yet, in our work, we reveal that these current
Byzantine-robust techniques are not immune to our newly introduced Normalized
attack. Unlike previous attacks, which aimed to enlarge the distance between
policy updates before and after an attack, our Normalized attack focuses on
maximizing the angle of deviation between these updates. To counter these
threats, we develop an ensemble FRL approach that is provably secure against
both known and our newly proposed attacks. Our ensemble method involves
training multiple global policies, where each is learnt by a group of agents
using any foundational aggregation rule. These well-trained global policies
then individually predict the action for a specific test state. The ultimate
action is chosen based on a majority vote for discrete action systems or the
geometric median for continuous ones. Our experimental results across different
settings show that the Normalized attack can greatly disrupt non-ensemble
Byzantine-robust methods, and our ensemble approach offers substantial
resistance against poisoning attacks.
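The two ensemble aggregation rules named in the abstract, majority vote for discrete actions and the geometric median for continuous ones, can be sketched generically (a standard construction, not the paper's exact implementation; the Weiszfeld iteration below is one common way to compute a geometric median):

```python
import numpy as np

def majority_vote(actions):
    """Discrete action systems: pick the most frequent action."""
    values, counts = np.unique(np.asarray(actions), return_counts=True)
    return values[np.argmax(counts)]

def geometric_median(points, iters=100, eps=1e-8):
    """Continuous action systems: Weiszfeld iteration for the point
    minimizing the sum of Euclidean distances to all policy outputs."""
    pts = np.asarray(points, dtype=float)
    x = pts.mean(axis=0)
    for _ in range(iters):
        d = np.linalg.norm(pts - x, axis=1)
        d = np.maximum(d, eps)          # avoid division by zero
        w = 1.0 / d
        x_new = (w[:, None] * pts).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < eps:
            break
        x = x_new
    return x

# One poisoned policy proposing an extreme action barely moves
# the geometric median away from the three honest policies.
acts = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [10.0, 10.0]]
med = geometric_median(acts)
```

The robustness intuition: unlike the mean, both aggregators bound the influence of any single poisoned policy, which is what makes the ensemble's final action hard to manipulate.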
|
2502.08125
|
Incremental Approximate Single-Source Shortest Paths with Predictions
|
cs.DS cs.LG
|
The algorithms-with-predictions framework has been used extensively to
develop online algorithms with improved beyond-worst-case competitive ratios.
Recently, there is growing interest in leveraging predictions for designing
data structures with improved beyond-worst-case running times. In this paper,
we study the fundamental data structure problem of maintaining approximate
shortest paths in incremental graphs in the algorithms-with-predictions model.
Given a sequence $\sigma$ of edges that are inserted one at a time, the goal is
to maintain approximate shortest paths from the source to each vertex in the
graph at each time step. Before any edges arrive, the data structure is given a
prediction of the online edge sequence $\hat{\sigma}$ which is used to ``warm
start'' its state.
As our main result, we design a learned algorithm that maintains
$(1+\epsilon)$-approximate single-source shortest paths, which runs in
$\tilde{O}(m \eta \log W/\epsilon)$ time, where $W$ is the weight of the
heaviest edge and $\eta$ is the prediction error. We show these techniques
immediately extend to the all-pairs shortest-path setting as well. Our
algorithms are consistent (performing nearly as fast as the offline algorithm)
when predictions are nearly perfect, have a smooth degradation in performance
with respect to the prediction error and, in the worst case, match the best
offline algorithm up to logarithmic factors.
As a building block, we study the offline incremental approximate
single-source shortest-paths problem. In this problem, the edge sequence
$\sigma$ is known a priori and the goal is to efficiently return the length of
the shortest paths in the intermediate graph $G_t$ consisting of the first $t$
edges, for all $t$. Note that the offline incremental problem is defined in the
worst-case setting (without predictions) and is of independent interest.
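The incremental setting can be made concrete with a generic exact baseline (not the paper's learned (1+epsilon)-approximate algorithm): on each edge insertion, relax the new edge and propagate any improvement with a Dijkstra-style heap, touching only vertices whose distance actually decreases.

```python
import heapq
import math

class IncrementalSSSP:
    """Exact incremental single-source shortest paths: on each edge
    insertion, relax and propagate improvements with a Dijkstra-style
    heap. Only vertices whose distance improves are touched, which is
    the behavior a warm start from a prediction aims to exploit."""

    def __init__(self, n, source):
        self.adj = [[] for _ in range(n)]
        self.dist = [math.inf] * n
        self.dist[source] = 0.0

    def insert_edge(self, u, v, w):
        self.adj[u].append((v, w))
        if self.dist[u] + w < self.dist[v]:
            self.dist[v] = self.dist[u] + w
            heap = [(self.dist[v], v)]
            while heap:
                d, x = heapq.heappop(heap)
                if d > self.dist[x]:
                    continue  # stale heap entry
                for y, wy in self.adj[x]:
                    if d + wy < self.dist[y]:
                        self.dist[y] = d + wy
                        heapq.heappush(heap, (d + wy, y))

g = IncrementalSSSP(4, source=0)
g.insert_edge(0, 1, 5.0)
g.insert_edge(1, 2, 1.0)
g.insert_edge(0, 2, 3.0)   # improves dist[2] from 6 to 3
g.insert_edge(2, 3, 1.0)
```

This baseline maintains exact distances but has poor worst-case update cost; the paper's contribution is achieving $(1+\epsilon)$-approximate distances in time parameterized by the prediction error $\eta$.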
|