| id | title | categories | abstract |
|---|---|---|---|
2501.19082
|
A Bias-Correction Decentralized Stochastic Gradient Algorithm with
Momentum Acceleration
|
cs.LG cs.DC math.OC stat.ML
|
Distributed stochastic optimization algorithms can simultaneously process
large-scale datasets, significantly accelerating model training. However, their
effectiveness is often hindered by the sparsity of distributed networks and
data heterogeneity. In this paper, we propose a momentum-accelerated
distributed stochastic gradient algorithm, termed Exact-Diffusion with Momentum
(EDM), which mitigates the bias from data heterogeneity and incorporates
momentum techniques commonly used in deep learning to enhance the convergence rate.
Our theoretical analysis demonstrates that the EDM algorithm converges
sub-linearly to a neighborhood of the optimal solution, the radius of which
is independent of data heterogeneity, when applied to non-convex objective
functions; under the Polyak-Lojasiewicz condition, which is a weaker assumption
than strong convexity, it converges linearly to the target region. The analysis
techniques we employ to handle momentum in complex distributed parameter-update
structures yield a tight convergence upper bound, offering a new
perspective for the theoretical analysis of other momentum-based distributed
algorithms.
|
2501.19083
|
MotionPCM: Real-Time Motion Synthesis with Phased Consistency Model
|
cs.CV
|
Diffusion models have become a popular choice for human motion synthesis due
to their powerful generative capabilities. However, their high computational
complexity and large sampling steps pose challenges for real-time applications.
Fortunately, the Consistency Model (CM) provides a solution to greatly reduce
the number of sampling steps from hundreds to a few, typically fewer than four,
significantly accelerating the synthesis of diffusion models. However, its
application to text-conditioned human motion synthesis in latent space remains
challenging. In this paper, we introduce \textbf{MotionPCM}, a phased
consistency model-based approach designed to improve the quality and efficiency
of real-time motion synthesis in latent space.
|
2501.19084
|
Laser: Efficient Language-Guided Segmentation in Neural Radiance Fields
|
cs.CV
|
In this work, we propose a method that leverages CLIP feature distillation,
achieving efficient 3D segmentation through language guidance. Unlike previous
methods that rely on multi-scale CLIP features and are limited by processing
speed and storage requirements, our approach aims to streamline the workflow by
directly and effectively distilling dense CLIP features, thereby achieving
precise segmentation of 3D scenes using text. To achieve this, we introduce an
adapter module and mitigate the noise issue in the dense CLIP feature
distillation process through a self-cross-training strategy. Moreover, to
enhance the accuracy of segmentation edges, this work presents a low-rank
transient query attention mechanism. To ensure the consistency of segmentation
for similar colors under different viewpoints, we convert the segmentation task
into a classification task through a label volume, which significantly improves
the consistency of segmentation in color-similar areas. We also propose a
simplified text augmentation strategy to alleviate the issue of ambiguity in
the correspondence between CLIP features and text. Extensive experimental
results show that our method surpasses current state-of-the-art technologies in
both training speed and performance. Our code is available at:
https://github.com/xingy038/Laser.git.
|
2501.19086
|
Fairness Analysis of CLIP-Based Foundation Models for X-Ray Image
Classification
|
cs.CV cs.AI
|
X-ray imaging is pivotal in medical diagnostics, offering non-invasive
insights into a range of health conditions. Recently, vision-language models,
such as the Contrastive Language-Image Pretraining (CLIP) model, have
demonstrated potential in improving diagnostic accuracy by leveraging
large-scale image-text datasets. However, since CLIP was not initially designed
for medical images, several CLIP-like models trained specifically on medical
images have been developed. Despite their enhanced performance, issues of
fairness - particularly regarding demographic attributes - remain largely
unaddressed. In this study, we perform a comprehensive fairness analysis of
CLIP-like models applied to X-ray image classification. We assess their
performance and fairness across diverse patient demographics and disease
categories using zero-shot inference and various fine-tuning techniques,
including Linear Probing, Multilayer Perceptron (MLP), Low-Rank Adaptation
(LoRA), and full fine-tuning. Our results indicate that while fine-tuning
improves model accuracy, fairness concerns persist, highlighting the need for
further fairness interventions in these foundational models.
|
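One of the fine-tuning techniques named in this abstract, Low-Rank Adaptation (LoRA), can be sketched in a few lines. This is a generic illustration of the LoRA update rule, not the paper's experimental setup; the dimensions and scaling factor are arbitrary choices.

```python
import numpy as np

# Generic LoRA sketch (illustrative; not the paper's setup): adapt a frozen
# weight W as W + (alpha / r) * B @ A, training only the low-rank A and B.
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 32, 4, 8

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection (zero init)

x = rng.standard_normal(d_in)
y = (W + (alpha / r) * B @ A) @ x          # adapted forward pass

# With B initialized to zero, the adapted layer starts as an exact no-op.
print(np.allclose(y, W @ x))               # True
trainable = r * (d_in + d_out)             # 384 trained params vs 2048 in W
print(trainable < d_in * d_out)            # True
```

The zero initialization of `B` is the standard LoRA choice: training begins from the pretrained model's behavior and only gradually departs from it.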
2501.19088
|
JGHand: Joint-Driven Animatable Hand Avatar via 3D Gaussian Splatting
|
cs.CV
|
Since hands are the primary interface in daily interactions, modeling
high-quality digital human hands and rendering realistic images is a critical
research problem. Furthermore, considering the requirements of interactive and
rendering applications, it is essential to achieve real-time rendering and
driveability of the digital model without compromising rendering quality. Thus,
we propose Jointly 3D Gaussian Hand (JGHand), a novel joint-driven 3D Gaussian
Splatting (3DGS)-based hand representation that renders high-fidelity hand
images in real-time for various poses and characters. Distinct from existing
articulated neural rendering techniques, we introduce a differentiable process
for spatial transformations based on 3D key points. This process supports
deformations from the canonical template to a mesh with arbitrary bone lengths
and poses. Additionally, we propose a real-time shadow simulation method based
on per-pixel depth to simulate self-occlusion shadows caused by finger
movements. Finally, we embed the hand prior and propose an animatable 3DGS
representation of the hand driven solely by 3D key points. We validate the
effectiveness of each component of our approach through comprehensive ablation
studies. Experimental results on public datasets demonstrate that JGHand
achieves real-time rendering speeds with enhanced quality, surpassing
state-of-the-art methods.
|
2501.19089
|
Understanding Oversmoothing in GNNs as Consensus in Opinion Dynamics
|
cs.LG
|
In contrast to classes of neural networks where the learned representations
become increasingly expressive with network depth, the learned representations
in graph neural networks (GNNs) tend to become increasingly similar. This
phenomenon, known as oversmoothing, is characterized by learned representations
that cannot be reliably differentiated, leading to reduced predictive
performance. In this paper, we propose an analogy between oversmoothing in GNNs
and consensus or agreement in opinion dynamics. Through this analogy, we show
that the message passing structure of recent continuous-depth GNNs is
equivalent to a special case of opinion dynamics (i.e., linear consensus
models) which has been theoretically proven to converge to consensus (i.e.,
oversmoothing) for all inputs. Using the understanding developed through this
analogy, we design a new continuous-depth GNN model based on nonlinear opinion
dynamics and prove that our model, which we call behavior-inspired message
passing neural network (BIMP), circumvents oversmoothing for general inputs.
Through extensive experiments, we show that BIMP is robust to oversmoothing and
adversarial attack, and consistently outperforms competitive baselines on
numerous benchmarks.
|
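The linear consensus dynamics that this abstract equates with oversmoothing can be illustrated directly. The sketch below is a generic Laplacian consensus simulation, not the paper's BIMP model; the graph, initial features, and step size are arbitrary.

```python
import numpy as np

# Generic linear consensus simulation (not the paper's BIMP model): discretized
# x' = -L x on a connected graph drives all node states to a common value,
# the analogue of oversmoothing described in the abstract.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)  # adjacency of a small connected graph
L = np.diag(A.sum(axis=1)) - A             # graph Laplacian

x = np.array([4.0, -2.0, 1.0, 7.0])        # initial node features
dt = 0.05
for _ in range(2000):                      # Euler steps of x' = -L x
    x = x - dt * (L @ x)

# All states collapse to the average of the initial features (consensus).
print(np.allclose(x, x.mean()))            # True
```

Since the Laplacian annihilates the all-ones vector, the feature mean is conserved while every other mode decays, which is exactly why deep linear message passing cannot keep node representations distinguishable.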
2501.19090
|
Pivoting Factorization: A Compact Meta Low-Rank Representation of
Sparsity for Efficient Inference in Large Language Models
|
cs.LG
|
The rapid growth of Large Language Models has driven demand for effective
model compression techniques to reduce memory and computation costs. Low-rank
pruning has gained attention for its tensor coherence and GPU compatibility
across all densities. However, low-rank pruning has struggled to match the
performance of semi-structured pruning, often doubling perplexity (PPL) at
similar densities. In this paper, we propose Pivoting Factorization (PIFA), a
novel lossless meta low-rank representation that unsupervisedly learns a
compact form of any low-rank representation, effectively eliminating redundant
information. PIFA identifies pivot rows (linearly independent rows) and
expresses non-pivot rows as linear combinations, achieving an additional 24.2\%
memory savings and 24.6\% faster inference over low-rank layers at r/d = 0.5,
thereby significantly enhancing performance at the same density. To mitigate
the performance degradation caused by low-rank pruning, we introduce a novel,
retraining-free low-rank reconstruction method that minimizes error
accumulation (M). MPIFA, combining M and PIFA into an end-to-end framework,
significantly outperforms existing low-rank pruning methods and, for the first
time, achieves performance comparable to semi-structured pruning, while
surpassing it in GPU efficiency and compatibility.
|
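The core pivoting idea described in this abstract (store pivot rows, rebuild non-pivot rows as linear combinations) can be illustrated numerically. This is a hedged sketch of the underlying linear-algebra principle, not the paper's PIFA implementation; the matrix sizes are arbitrary and the pivot selection is simplified.

```python
import numpy as np

# Hedged sketch of the pivoting principle (not the paper's PIFA code): a rank-r
# matrix can be stored losslessly as r pivot (linearly independent) rows plus
# the coefficients that rebuild every other row as a linear combination.
rng = np.random.default_rng(0)
r, n, d = 3, 8, 6
W = rng.standard_normal((n, r)) @ rng.standard_normal((r, d))  # rank-3 matrix

pivots = np.arange(r)      # for this generic W, the first r rows are independent
assert np.linalg.matrix_rank(W[pivots]) == r
others = np.arange(r, n)

Wp = W[pivots]             # only these r rows are stored explicitly
coef, *_ = np.linalg.lstsq(Wp.T, W[others].T, rcond=None)  # combination weights

W_rebuilt = np.vstack([Wp, coef.T @ Wp])   # non-pivot rows reconstructed
print(np.allclose(W, W_rebuilt))           # True: the representation is lossless
```

Stored this way, the r×d pivot rows plus the (n−r)×r coefficients replace the full n×d matrix, which is the source of the memory saving the abstract reports.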
2501.19091
|
FL-APU: A Software Architecture to Ease Practical Implementation of
Cross-Silo Federated Learning
|
cs.DC cs.LG
|
Federated Learning (FL) is an emerging technology that is increasingly
applied in real-world applications. Early applications focused on cross-device
scenarios, where many participants with limited resources train machine
learning (ML) models together, e.g., in the case of Google's GBoard.
In contrast, cross-silo scenarios have only a few participants but with many
resources, e.g., in the healthcare domain. Despite such early efforts, FL is
still rarely used in practice and best practices are, hence, missing. For new
applications, in our case inter-organizational cross-silo applications,
overcoming this lack of role models is a significant challenge.
In order to ease the use of FL in real-world cross-silo applications, we here
propose a scenario-based architecture for the practical use of FL in the
context of multiple companies collaborating to improve the quality of their ML
models. The architecture emphasizes the collaboration between the participants
and the FL server and extends basic interactions with domain-specific features.
First, it combines governance with authentication, creating an environment
where only trusted participants can join. Second, it offers traceability of
governance decisions and tracking of training processes, which are also crucial
in a production environment. Beyond presenting the architectural design, we
analyze requirements for the real-world use of FL and evaluate the architecture
with a scenario-based analysis method.
|
2501.19093
|
Improving Low-Resource Sequence Labeling with Knowledge Fusion and
Contextual Label Explanations
|
cs.CL
|
Sequence labeling remains a significant challenge in low-resource,
domain-specific scenarios, particularly for character-dense languages like
Chinese. Existing methods primarily focus on enhancing model comprehension and
improving data diversity to boost performance. However, these approaches still
struggle with inadequate model applicability and semantic distribution biases
in domain-specific contexts. To overcome these limitations, we propose a novel
framework that combines an LLM-based knowledge enhancement workflow with a
span-based Knowledge Fusion for Rich and Efficient Extraction (KnowFREE) model.
Our workflow employs explanation prompts to generate precise contextual
interpretations of target entities, effectively mitigating semantic biases and
enriching the model's contextual understanding. The KnowFREE model further
integrates extension label features, enabling efficient nested entity
extraction without relying on external knowledge during inference. Experiments
on multiple Chinese domain-specific sequence labeling datasets demonstrate that
our approach achieves state-of-the-art performance, effectively addressing the
challenges posed by low-resource settings.
|
2501.19094
|
Ambient Denoising Diffusion Generative Adversarial Networks for
Establishing Stochastic Object Models from Noisy Image Data
|
cs.CV eess.IV
|
It is widely accepted that medical imaging systems should be objectively
assessed via task-based image quality (IQ) measures that ideally account for
all sources of randomness in the measured image data, including the variation
in the ensemble of objects to be imaged. Stochastic object models (SOMs) that
can randomly draw samples from the object distribution can be employed to
characterize object variability. To establish realistic SOMs for task-based IQ
analysis, it is desirable to employ experimental image data. However,
experimental image data acquired from medical imaging systems are subject to
measurement noise. Previous work investigated the ability of deep generative
models (DGMs) that employ an augmented generative adversarial network (GAN),
AmbientGAN, for establishing SOMs from noisy measured image data. Recently,
denoising diffusion models (DDMs) have emerged as a leading DGM for image
synthesis and can produce image quality superior to that of GANs. However,
original DDMs suffer from a slow image-generation process because of the Gaussian
assumption in the denoising steps. More recently, the denoising diffusion GAN
(DDGAN) was proposed to permit fast image generation while maintaining
generated image quality comparable to that of the original DDMs. In this work, we propose an
augmented DDGAN architecture, Ambient DDGAN (ADDGAN), for learning SOMs from
noisy image data. Numerical studies that consider clinical computed tomography
(CT) images and digital breast tomosynthesis (DBT) images are conducted. The
ability of the proposed ADDGAN to learn realistic SOMs from noisy image data is
demonstrated. The ADDGAN is shown to significantly outperform the
advanced AmbientGAN models for synthesizing high-resolution medical images with
complex textures.
|
2501.19095
|
PathE: Leveraging Entity-Agnostic Paths for Parameter-Efficient
Knowledge Graph Embeddings
|
cs.AI cs.LG
|
Knowledge Graphs (KGs) store human knowledge in the form of entities (nodes)
and relations, and are used extensively in various applications. KG embeddings
are an effective approach to addressing tasks like knowledge discovery, link
prediction, and reasoning. This is often done by allocating and learning
embedding tables for all or a subset of the entities. As this scales linearly
with the number of entities, learning embedding models in real-world KGs with
millions of nodes can be computationally intractable. To address this
scalability problem, our model, PathE, only allocates embedding tables for
relations (which are typically orders of magnitude fewer than the entities) and
requires less than 25% of the parameters of previous parameter-efficient
methods. Rather than storing entity embeddings, we learn to compute them by
leveraging multiple entity-relation paths to contextualise individual entities
within triples. Evaluated on four benchmarks, PathE achieves state-of-the-art
performance in relation prediction, and remains competitive in link prediction
on path-rich KGs while training on consumer-grade hardware. We perform ablation
experiments to test our design choices and analyse the sensitivity of the model
to key hyper-parameters. PathE is efficient and cost-effective for relationally
diverse and well-connected KGs commonly found in real-world applications.
|
2501.19098
|
$\infty$-Video: A Training-Free Approach to Long Video Understanding via
Continuous-Time Memory Consolidation
|
cs.CV cs.LG
|
Current video-language models struggle with long-video understanding due to
limited context lengths and reliance on sparse frame subsampling, often leading
to information loss. This paper introduces $\infty$-Video, which can process
arbitrarily long videos through a continuous-time long-term memory (LTM)
consolidation mechanism. Our framework augments video Q-formers by allowing
them to process unbounded video contexts efficiently and without requiring
additional training. Through continuous attention, our approach dynamically
allocates higher granularity to the most relevant video segments, forming
"sticky" memories that evolve over time. Experiments with Video-LLaMA and
VideoChat2 demonstrate improved performance in video question-answering tasks,
showcasing the potential of continuous-time LTM mechanisms to enable scalable
and training-free comprehension of long videos.
|
2501.19099
|
Unraveling Zeroth-Order Optimization through the Lens of Low-Dimensional
Structured Perturbations
|
cs.LG
|
Zeroth-order (ZO) optimization has emerged as a promising alternative to
gradient-based backpropagation methods, particularly for black-box optimization
and large language model (LLM) fine-tuning. However, ZO methods suffer from
slow convergence due to high-variance stochastic gradient estimators. While
structured perturbations, such as sparsity and low-rank constraints, have been
explored to mitigate these issues, their effectiveness remains largely
under-explored. In this work, we develop a unified theoretical framework that
analyzes both the convergence and generalization properties of ZO optimization
under structured perturbations. We show that high dimensionality is the primary
bottleneck and introduce the notions of \textit{stable rank} and
\textit{effective overlap} to explain how structured perturbations reduce
gradient noise and accelerate convergence. Using the uniform stability under
our framework, we then provide the first theoretical justification for why
these perturbations enhance generalization. Additionally, through empirical
analysis, we identify \textbf{block coordinate descent} (BCD) as an
effective structured perturbation method. Extensive experiments show that,
compared to existing alternatives, memory-efficient ZO (MeZO) with BCD
(\textit{MeZO-BCD}) provides improved convergence, with wall-clock
time per iteration faster by up to $\textbf{2.09}\times$, while yielding similar or better
accuracy.
|
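The zeroth-order gradient estimator whose high variance motivates this abstract can be sketched with a standard two-point (SPSA-style) estimate, the same primitive MeZO builds on. This is a generic illustration, not MeZO-BCD; the objective and hyperparameters are arbitrary.

```python
import numpy as np

# Generic two-point zeroth-order gradient estimate (SPSA/MeZO-style primitive;
# not the paper's MeZO-BCD): g = [f(t + eps*z) - f(t - eps*z)] / (2*eps) * z,
# with z ~ N(0, I), so optimization uses only function values, no gradients.
def f(theta):                       # black-box objective: a simple quadratic
    return 0.5 * np.sum(theta ** 2)

rng = np.random.default_rng(0)
theta = np.array([3.0, -2.0, 1.0, 4.0])
eps, lr = 1e-3, 0.05

for _ in range(3000):
    z = rng.standard_normal(theta.shape)   # random perturbation direction
    g = (f(theta + eps * z) - f(theta - eps * z)) / (2 * eps) * z
    theta -= lr * g                        # update without backpropagation

print(f(theta) < 1e-3)                     # True: near the minimum
```

Structured variants such as BCD restrict `z` to one coordinate block per step; the estimator itself is unchanged, which is why its dimension-dependent variance is the quantity the paper's analysis targets.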
2501.19102
|
Reinforcement Learning on Reconfigurable Hardware: Overcoming Material
Variability in Laser Material Processing
|
cs.LG
|
Ensuring consistent processing quality is challenging in laser processes due
to varying material properties and surface conditions. Although some approaches
have shown promise in solving this problem via automation, they often rely on
predetermined targets or are limited to simulated environments. To address
these shortcomings, we propose a novel real-time reinforcement learning
approach for laser process control, implemented on a Field Programmable Gate
Array to achieve real-time execution. Our experimental results from laser
welding tests on stainless steel samples with a range of surface roughnesses
validated the method's ability to adapt autonomously, without relying on reward
engineering or prior setup information. Specifically, the algorithm learned the
correct power profile for each unique surface characteristic, demonstrating
significant improvements over hand-engineered optimal constant power strategies
-- up to 23% better performance on rougher surfaces and 7% on mixed surfaces.
This approach represents a significant advancement in automating and optimizing
laser processes, with potential applications across multiple industries.
|
2501.19104
|
Neural Collapse Beyond the Unconstrained Features Model: Landscape,
Dynamics, and Generalization in the Mean-Field Regime
|
cs.LG
|
Neural Collapse is a phenomenon where the last-layer representations of a
well-trained neural network converge to a highly structured geometry. In this
paper, we focus on its first (and most basic) property, known as NC1: the
within-class variability vanishes. While prior theoretical studies establish
the occurrence of NC1 via the data-agnostic unconstrained features model, our
work adopts a data-specific perspective, analyzing NC1 in a three-layer neural
network, with the first two layers operating in the mean-field regime and
followed by a linear layer. In particular, we establish a fundamental
connection between NC1 and the loss landscape: we prove that points with small
empirical loss and gradient norm (thus, close to being stationary)
approximately satisfy NC1, and the closeness to NC1 is controlled by the
residual loss and gradient norm. We then show that (i) gradient flow on the
mean squared error converges to NC1 solutions with small empirical loss, and
(ii) for well-separated data distributions, both NC1 and vanishing test loss
are achieved simultaneously. This aligns with the empirical observation that
NC1 emerges during training while models attain near-zero test error. Overall,
our results demonstrate that NC1 arises from gradient training due to the
properties of the loss landscape, and they show the co-occurrence of NC1 and
small test error for certain data distributions.
|
2501.19105
|
Relating Misfit to Gain in Weak-to-Strong Generalization Beyond the
Squared Loss
|
cs.LG math.PR
|
The paradigm of weak-to-strong generalization constitutes the training of a
strong AI model on data labeled by a weak AI model, with the goal that the
strong model nevertheless outperforms its weak supervisor on the target task of
interest. For the setting of real-valued regression with the squared loss,
recent work quantitatively characterizes the gain in performance of the strong
model over the weak model in terms of the misfit between the strong and weak
model. We generalize such a characterization to learning tasks whose loss
functions correspond to arbitrary Bregman divergences when the strong class is
convex. This extends the misfit-based characterization of performance gain in
weak-to-strong generalization to classification tasks, as the cross-entropy
loss can be expressed in terms of a Bregman divergence. In most practical
scenarios, however, the strong model class may not be convex. We therefore
weaken this assumption and study weak-to-strong generalization for convex
combinations of $k$ strong models in the strong class, in the concrete setting
of classification. This allows us to obtain a similar misfit-based
characterization of performance gain, up to an additional error term that
vanishes as $k$ gets large. Our theoretical findings are supported by thorough
experiments on synthetic as well as real-world datasets.
|
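The fact this abstract relies on (cross-entropy expressed via a Bregman divergence) can be checked numerically: the Bregman divergence generated by negative entropy is the KL divergence, and cross-entropy equals KL plus the entropy of the target. A small verification sketch, with the two distributions chosen arbitrarily:

```python
import numpy as np

# Numeric check: the Bregman divergence generated by negative entropy equals
# the KL divergence, and cross-entropy is that divergence plus H(p).
def phi(p):                                   # negative entropy
    return np.sum(p * np.log(p))

def grad_phi(p):
    return np.log(p) + 1.0

def bregman(p, q):  # D_phi(p, q) = phi(p) - phi(q) - <grad phi(q), p - q>
    return phi(p) - phi(q) - grad_phi(q) @ (p - q)

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.4, 0.4, 0.2])

kl = np.sum(p * np.log(p / q))
cross_entropy = -np.sum(p * np.log(q))
print(np.isclose(bregman(p, q), kl))          # True
print(np.isclose(cross_entropy, kl - phi(p))) # True: CE(p,q) = KL(p||q) + H(p)
```

Since the entropy term depends only on the target `p`, minimizing cross-entropy and minimizing this Bregman divergence are equivalent, which is what lets the squared-loss characterization carry over to classification.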
2501.19107
|
Brain-inspired sparse training enables Transformers and LLMs to perform
as fully connected
|
cs.LG
|
This study aims to enlarge our current knowledge on application of
brain-inspired network science principles for training artificial neural
networks (ANNs) with sparse connectivity. Dynamic sparse training (DST) can
reduce the computational demands in ANNs, but struggles to maintain peak
performance at high sparsity levels. The Cannistraci-Hebb training (CHT) is a
brain-inspired method for growing connectivity in DST. CHT leverages
gradient-free, topology-driven link regrowth, which has shown an ultra-sparse (1%
connectivity or lower) advantage across various tasks compared to fully
connected networks. Yet, CHT suffers from two main drawbacks: (i) its time
complexity is O(Nd^3), where N is the network size and d the node degree, so it applies
only to ultra-sparse networks; (ii) it selects the top link prediction scores,
which is inappropriate for the early training epochs, when the network presents
unreliable connections. We propose a GPU-friendly approximation of the CH link
predictor, which reduces the computational complexity to O(N^3), enabling a
fast implementation of CHT in large-scale models. We introduce the
Cannistraci-Hebb training soft rule (CHTs), which adopts a strategy for
sampling connections in both link removal and regrowth, balancing the
exploration and exploitation of network topology. To improve performance, we
integrate CHTs with a sigmoid gradual density decay (CHTss). Empirical results
show that, using 1% of connections, CHTs outperforms fully connected networks
in MLP on visual classification tasks, compressing some networks to < 30%
nodes. Using 5% of the connections, CHTss outperforms fully connected networks
in two Transformer-based machine translation tasks. Using 30% of the
connections, CHTss achieves superior performance compared to other dynamic
sparse training methods in language modeling, and it surpasses the fully
connected counterpart in zero-shot evaluations.
|
2501.19111
|
A Benchmark for Incremental Micro-expression Recognition
|
cs.CV cs.AI
|
Micro-expression recognition plays a pivotal role in understanding hidden
emotions and has applications across various fields. Traditional recognition
methods assume access to all training data at once, but real-world scenarios
involve continuously evolving data streams. To meet the need to adapt
to new data while retaining previously learned knowledge, we introduce
the first benchmark specifically designed for incremental micro-expression
recognition. Our contributions include: Firstly, we formulate the incremental
learning setting tailored for micro-expression recognition. Secondly, we
organize sequential datasets with carefully curated learning orders to reflect
real-world scenarios. Thirdly, we define two cross-evaluation-based testing
protocols, each targeting distinct evaluation objectives. Finally, we provide
six baseline methods and their corresponding evaluation results. This benchmark
lays the groundwork for advancing incremental micro-expression recognition
research. All source code used in this study will be publicly available at
https://github.com/ZhengQinLai/IMER-benchmark.
|
2501.19112
|
Logical Modalities within the European AI Act: An Analysis
|
cs.AI cs.CY cs.LO
|
The paper presents a comprehensive analysis of the European AI Act in terms
of its logical modalities, with the aim of preparing its formal representation,
for example, within the logic-pluralistic Knowledge Engineering Framework and
Methodology (LogiKEy). LogiKEy develops computational tools for normative
reasoning based on formal methods, employing Higher-Order Logic (HOL) as a
unifying meta-logic to integrate diverse logics through shallow semantic
embeddings. This integration is facilitated by Isabelle/HOL, a proof assistant
tool equipped with several automated theorem provers. The modalities within the
AI Act and the logics suitable for their representation are discussed. For a
selection of these logics, embeddings in HOL are created, which are then used
to encode sample paragraphs. Initial experiments evaluate the suitability of
these embeddings for automated reasoning, and highlight key challenges on the
way to more robust reasoning capabilities.
|
2501.19113
|
Genetic AI: Evolutionary Simulation for Data Analysis
|
cs.NE
|
We introduce Genetic AI, a novel method for data analysis by evolutionary
simulations. The method can be applied to data of any domain and allows for
data-less training of AI models. Without employing predefined rules or training
data, Genetic AI first converts the input data into genes and organisms. In a
simulation from first principles, these genes and organisms compete for
fitness, where their behavior is governed by universal evolutionary strategies.
Investigating evolutionarily stable equilibria, Genetic AI helps in understanding
correlations and symmetries in general input data. Several numerical
experiments demonstrate the dynamics of exemplary systems.
|
2501.19114
|
Principal Components for Neural Network Initialization
|
cs.LG cs.AI
|
Principal Component Analysis (PCA) is a commonly used tool for dimension
reduction and denoising. Therefore, it is also widely used on the data prior to
training a neural network. However, this approach can complicate the
explanation of explainable AI (XAI) methods for the decision of the model. In
this work, we analyze the potential issues with this approach and propose
Principal Components-based Initialization (PCsInit), a strategy that incorporates
PCA into a neural network by initializing its first layer
with the principal components, and its two variants,
PCsInit-Act and PCsInit-Sub. Explanations using these strategies are as direct
and straightforward as for neural networks and are simpler than using PCA prior
to training a neural network on the principal components. Moreover, as will be
illustrated in the experiments, such training strategies can also allow further
improvement of training via backpropagation.
|
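The initialization idea in this abstract can be sketched generically: set the first layer's weights to the top principal components of the training data, so the layer starts out computing the PCA projection and can then be refined by training. This is an illustration of the general principle, not the paper's exact PCsInit or its variants; the data and layer width are arbitrary.

```python
import numpy as np

# Generic sketch (not the paper's exact PCsInit): initialize the first linear
# layer with the top-k principal components of the training data, so the layer
# initially computes the PCA projection and remains trainable afterwards.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20)) @ rng.standard_normal((20, 20))  # toy data

k = 8                                     # width of the first layer
Xc = X - X.mean(axis=0)                   # PCA operates on centered data
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
W1, b1 = Vt[:k], np.zeros(k)              # first-layer weights = top-k PCs

h = Xc @ W1.T + b1                        # first-layer output = PCA scores

# The principal components form an orthonormal set of rows.
print(np.allclose(W1 @ W1.T, np.eye(k)))  # True
print(h.shape)                            # (500, 8)
```

Because the projection lives inside the network rather than in a preprocessing step, attribution methods can explain decisions directly in the original input coordinates, which is the motivation the abstract gives.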
2501.19116
|
A Theoretical Justification for Asymmetric Actor-Critic Algorithms
|
cs.LG stat.ML
|
In reinforcement learning for partially observable environments, many
successful algorithms were developed within the asymmetric learning paradigm.
This paradigm leverages additional state information available at training time
for faster learning. Although the proposed learning objectives are usually
theoretically sound, these methods still lack a theoretical justification for
their potential benefits. We propose such a justification for asymmetric
actor-critic algorithms with linear function approximators by adapting a
finite-time convergence analysis to this setting. The resulting finite-time
bound reveals that the asymmetric critic eliminates an error term arising from
aliasing in the agent state.
|
2501.19122
|
FedRTS: Federated Robust Pruning via Combinatorial Thompson Sampling
|
cs.LG cs.AI
|
Federated Learning (FL) enables collaborative model training across
distributed clients without data sharing, but its high computational and
communication demands strain resource-constrained devices. While existing
methods use dynamic pruning to improve efficiency by periodically adjusting
sparse model topologies while maintaining sparsity, these approaches suffer
from issues such as greedy adjustments, unstable topologies, and communication
inefficiency, resulting in less robust models and suboptimal performance under
data heterogeneity and partial client availability. To address these
challenges, we propose Federated Robust pruning via combinatorial Thompson
Sampling (FedRTS), a novel framework designed to develop robust sparse models.
FedRTS enhances robustness and performance through its Thompson Sampling-based
Adjustment (TSAdj) mechanism, which uses probabilistic decisions informed by
stable, farsighted information instead of deterministic decisions reliant on
unstable and myopic information in previous methods. Extensive experiments
demonstrate that FedRTS achieves state-of-the-art performance in computer
vision and natural language processing tasks while reducing communication
costs, particularly excelling in scenarios with heterogeneous data
distributions and partial client participation. Our codes are available at:
https://github.com/Little0o0/FedRTS
|
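The Thompson-sampling primitive underlying TSAdj can be illustrated with the classic Bernoulli-bandit version: maintain a Beta posterior per option, sample from each posterior, and act on the largest sample, so decisions are probabilistic rather than greedy. This generic sketch is not FedRTS's pruning mechanism; the arm count and reward rates are arbitrary.

```python
import numpy as np

# Classic Bernoulli-bandit Thompson sampling (illustrative; not FedRTS's
# TSAdj): sample each arm's Beta posterior and act on the highest sample.
rng = np.random.default_rng(0)
true_p = np.array([0.3, 0.5, 0.7])         # unknown Bernoulli reward rates
alpha = np.ones(3)                         # Beta(1, 1) priors per arm
beta = np.ones(3)

pulls = np.zeros(3, dtype=int)
for _ in range(5000):
    samples = rng.beta(alpha, beta)        # one posterior draw per arm
    a = int(np.argmax(samples))            # probabilistic, not greedy, choice
    reward = rng.random() < true_p[a]      # observe a Bernoulli reward
    alpha[a] += reward                     # Beta posterior update
    beta[a] += 1 - reward
    pulls[a] += 1

print(pulls.argmax() == 2)                 # the best arm dominates the pulls
```

The posterior sampling keeps some exploration alive even late in training, which is the property the abstract credits for more stable topology adjustments than deterministic, myopic rules.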
2501.19125
|
Upper Bounds on the Minimum Distance of Structured LDPC Codes
|
cs.IT math.IT
|
We investigate the minimum distance of structured binary Low-Density
Parity-Check (LDPC) codes whose parity-check matrices are of the form
$[\mathbf{C} \vert \mathbf{M}]$ where $\mathbf{C}$ is circulant and of column
weight $2$, and $\mathbf{M}$ has fixed column weight $r \geq 3$ and row weight
at least $1$. These codes are of interest because they are LDPC codes which
come with a natural linear-time encoding algorithm. We show that the minimum
distance of these codes is in $O(n^{\frac{r-2}{r-1} + \epsilon})$, where $n$ is
the code length and $\epsilon > 0$ is arbitrarily small. This improves the
previously known upper bound in $O(n^{\frac{r-1}{r}})$ on the minimum distance
of such codes.
|
2501.19128
|
Shaping Sparse Rewards in Reinforcement Learning: A Semi-supervised
Approach
|
cs.LG cs.AI
|
In many real-world scenarios, reward signals for agents are exceedingly
sparse, making it challenging to learn an effective reward function for reward
shaping. To address this issue, our approach performs reward shaping not only
by utilizing non-zero-reward transitions but also by employing the
Semi-Supervised Learning (SSL) technique combined with a novel data
augmentation to learn trajectory-space representations from the majority of
transitions, namely the zero-reward ones, thereby improving the efficacy of reward
shaping. Experimental results in Atari and robotic manipulation demonstrate
that our method effectively generalizes reward shaping to sparse reward
scenarios, achieving up to four times better performance in reaching higher
best scores compared to curiosity-driven methods. The proposed double entropy
data augmentation enhances performance, showcasing a 15.8\% increase in best
score over other augmentation methods.
|
2501.19129
|
RGB-Event ISP: The Dataset and Benchmark
|
cs.CV eess.IV
|
Event-guided imaging has received significant attention due to its potential
to revolutionize instant imaging systems. However, prior methods primarily
focus on enhancing RGB images in a post-processing manner, neglecting the
challenges an image signal processor (ISP) faces when dealing with event
sensors and the benefits events provide for reforming the ISP process. To
this end, we
conduct the first research on event-guided ISP. First, we present a new
event-RAW paired dataset, collected with a novel but still confidential sensor
that records pixel-level aligned events and RAW images. This dataset includes
3373 RAW images with 2248 x 3264 resolution and their corresponding events,
spanning 24 scenes with 3 exposure modes and 3 lenses. Second, we propose a
conventional ISP pipeline to generate good RGB frames as reference. This
conventional ISP pipeline performs basic ISP operations, e.g., demosaicing,
white balancing, denoising and color space transformation, with a ColorChecker as
reference. Third, we classify the existing learnable ISP methods into 3
classes, and select multiple methods to train and evaluate on our new dataset.
Lastly, since there is no prior work for reference, we propose a simple
event-guided ISP method and test it on our dataset. We further put forward key
technical challenges and future directions in RGB-Event ISP. In summary, to the
best of our knowledge, this is the very first research focusing on event-guided
ISP, and we hope it will inspire the community. The code and dataset are
available at: https://github.com/yunfanLu/RGB-Event-ISP.
|
2501.19133
|
Decorrelated Soft Actor-Critic for Efficient Deep Reinforcement Learning
|
cs.LG cs.AI
|
The effectiveness of credit assignment in reinforcement learning (RL) when
dealing with high-dimensional data is influenced by the success of
representation learning via deep neural networks, and has implications for the
sample efficiency of deep RL algorithms. Input decorrelation has been
previously introduced as a method to speed up optimization in neural networks,
and has proven impactful in both efficient deep learning and as a method for
effective representation learning for deep RL algorithms. We propose a novel
approach to online decorrelation in deep RL based on the decorrelated
backpropagation algorithm that seamlessly integrates the decorrelation process
into the RL training pipeline. Decorrelation matrices are added to each layer,
which are updated using a separate decorrelation learning rule that minimizes
the total decorrelation loss across all layers, in parallel to minimizing the
usual RL loss. We used our approach in combination with the soft actor-critic
(SAC) method, which we refer to as decorrelated soft actor-critic (DSAC).
Experiments on the Atari 100k benchmark with DSAC show, compared to the
regular SAC baseline, faster training in five out of the seven games tested and
improved reward performance in two games with around 50% reduction in
wall-clock time, while maintaining performance levels on the other games. These
results demonstrate the positive impact of network-wide decorrelation in deep
RL for speeding up its sample efficiency through more effective credit
assignment.
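The per-layer decorrelation update can be sketched as follows (a hedged reconstruction of the idea, with an illustrative learning rule and toy data, not the authors' exact algorithm):

```python
import numpy as np

class DecorrelationLayer:
    """Minimal sketch of an online decorrelation transform: a square matrix
    R is applied to a layer's input and nudged so the decorrelated outputs
    have vanishing off-diagonal covariance, in parallel to the usual loss."""

    def __init__(self, dim, lr=0.005):
        self.R = np.eye(dim)
        self.lr = lr

    def forward(self, x):              # x: (batch, dim)
        return x @ self.R.T

    def update(self, x):
        z = self.forward(x)
        C = (z.T @ z) / len(z)         # covariance of decorrelated outputs
        off = C - np.diag(np.diag(C))  # off-diagonal part = correlations
        self.R -= self.lr * off @ self.R
        return np.abs(off).sum()       # decorrelation loss, for monitoring

rng = np.random.default_rng(0)
mix = rng.normal(size=(4, 4))
x = rng.normal(size=(512, 4)) @ mix.T  # strongly correlated toy inputs
layer = DecorrelationLayer(4)
losses = [layer.update(x) for _ in range(500)]
```

Repeated updates drive the off-diagonal covariance of the layer's outputs toward zero, which is the condition the total decorrelation loss measures.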
|
2501.19134
|
Mixed Feelings: Cross-Domain Sentiment Classification of Patient
Feedback
|
cs.CL
|
Sentiment analysis of patient feedback from the public health domain can aid
decision makers in evaluating the provided services. The current paper focuses
on free-text comments in patient surveys about general practitioners and
psychiatric healthcare, annotated with four sentence-level polarity classes --
positive, negative, mixed and neutral -- while also attempting to alleviate
data scarcity by leveraging general-domain sources in the form of reviews. For
several different architectures, we compare in-domain and out-of-domain
effects, as well as the effects of training joint multi-domain models.
|
2501.19137
|
A Metric for the Balance of Information in Graph Learning
|
cs.LG cs.AI
|
Graph learning on molecules makes use of information from both the molecular
structure and the features attached to that structure. Much work has been
conducted on biasing either towards structure or features, with the aim that
bias bolsters performance. Identifying which information source a dataset
favours, and therefore how to approach learning that dataset, is an open issue.
Here we propose Noise-Noise Ratio Difference (NNRD), a quantitative metric for
whether there is more useful information in structure or features. By employing
iterative noising on features and structure independently, leaving the other
intact, NNRD measures the degradation of information in each. We employ NNRD
over a range of molecular tasks, and show that it corresponds well to a loss of
information, with intuitive results that are more expressive than simple
performance aggregates. Our future work will focus on expanding data domains,
tasks and types, as well as refining our choice of baseline model.
|
2501.19140
|
Transformation trees -- documentation of multimodal image registration
|
cs.CV
|
The paper proposes applying a tree structure to document the set of
transformations obtained from various registrations of multimodal images,
which are acquired in coordinate systems associated with the acquisition
devices and registered into a single patient-specific coordinate system. A
special file format .dpw (digital patient workspace) is introduced. Examples
of different registrations, drawn from orthodontic analysis and showing the
main aspects of using the tree structure, are illustrated in the dpVision
software.
|
2501.19143
|
Imitation Game for Adversarial Disillusion with Multimodal Generative
Chain-of-Thought Role-Play
|
cs.AI cs.CR cs.CV
|
As the cornerstone of artificial intelligence, machine perception confronts a
fundamental threat posed by adversarial illusions. These adversarial attacks
manifest in two primary forms: deductive illusion, where specific stimuli are
crafted based on the victim model's general decision logic, and inductive
illusion, where the victim model's general decision logic is shaped by specific
stimuli. The former exploits the model's decision boundaries to create a
stimulus that, when applied, interferes with its decision-making process. The
latter reinforces a conditioned reflex in the model, embedding a backdoor
during its learning phase that, when triggered by a stimulus, causes aberrant
behaviours. The multifaceted nature of adversarial illusions calls for a
unified defence framework, addressing vulnerabilities across various forms of
attack. In this study, we propose a disillusion paradigm based on the concept
of an imitation game. At the heart of the imitation game lies a multimodal
generative agent, steered by chain-of-thought reasoning, which observes,
internalises and reconstructs the semantic essence of a sample, liberated from
the classic pursuit of reversing the sample to its original state. As a proof
of concept, we conduct experimental simulations using a multimodal generative
dialogue agent and evaluate the methodology under a variety of attack
scenarios.
|
2501.19145
|
Improving Multi-Label Contrastive Learning by Leveraging Label
Distribution
|
cs.LG cs.AI cs.CV
|
In multi-label learning, leveraging contrastive learning to learn better
representations faces a key challenge: selecting positive and negative samples
and effectively utilizing label information. Previous studies selected positive
and negative samples based on the overlap between labels and used them for
label-wise loss balancing. However, these methods suffer from a complex
selection process and fail to account for the varying importance of different
labels. To address these problems, we propose a novel method that improves
multi-label contrastive learning through label distribution. Specifically, when
selecting positive and negative samples, we only need to consider whether there
is an intersection between labels. To model the relationships between labels,
we introduce two methods to recover label distributions from logical labels,
based on Radial Basis Function (RBF) and contrastive loss, respectively. We
evaluate our method on nine widely used multi-label datasets, including image
and vector datasets. The results demonstrate that our method outperforms
state-of-the-art methods in six evaluation metrics.
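The simplified positive-pair selection rule described above can be sketched directly (the contrastive loss and label-distribution recovery are not shown; the toy labels are illustrative):

```python
import numpy as np

def positive_mask(labels):
    """Positive-pair selection by label intersection: two samples are
    positives iff their multi-hot label vectors share at least one label."""
    inter = labels @ labels.T                # (N, N) counts of shared labels
    mask = (inter > 0).astype(int)
    np.fill_diagonal(mask, 0)                # a sample is not its own positive
    return mask

labels = np.array([
    [1, 0, 1],   # sample 0 carries labels {0, 2}
    [0, 1, 0],   # sample 1 carries label  {1}
    [1, 1, 0],   # sample 2 carries labels {0, 1}
])
mask = positive_mask(labels)   # pairs 0-2 and 1-2 intersect; 0-1 does not
```

A single matrix product replaces the overlap-counting and per-label balancing of earlier selection schemes.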
|
2501.19148
|
Constant-Factor Distortion Mechanisms for $k$-Committee Election
|
cs.GT cs.DS cs.MA
|
In the $k$-committee election problem, we wish to aggregate the preferences
of $n$ agents over a set of alternatives and select a committee of $k$
alternatives that minimizes the cost incurred by the agents. While we typically
assume that agent preferences are captured by a cardinal utility function, in
many contexts we only have access to ordinal information, namely the agents'
rankings over the outcomes. As preference rankings are not as expressive as
cardinal utilities, a loss of efficiency is inevitable, and is quantified by
the notion of \emph{distortion}.
We study the problem of electing a $k$-committee that minimizes the sum of
the $\ell$-largest costs incurred by the agents, when agents and candidates are
embedded in a metric space. This problem is called the $\ell$-centrum problem
and captures both the utilitarian and egalitarian objectives. When $k \geq 2$,
it is not possible to compute a bounded-distortion committee using purely
ordinal information. We develop the first algorithms (that we call mechanisms)
for the $\ell$-centrum problem (when $k \geq 2$), which achieve
$O(1)$-distortion while eliciting only a very limited amount of cardinal
information via value queries. We obtain two types of query-complexity
guarantees: $O(\log k \log n)$ queries \emph{per agent}, and $O(k^2 \log^2 n)$
queries \emph{in total} (while achieving $O(1)$-distortion in both cases). En
route, we give a simple adaptive-sampling algorithm for the $\ell$-centrum
$k$-clustering problem.
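The $\ell$-centrum objective itself is easy to state in code (the distance matrix below is an illustrative toy instance, not from the paper):

```python
import numpy as np

def l_centrum_cost(distances, committee, ell):
    """Cost of a committee under the l-centrum objective: every agent pays
    its distance to the nearest committee member, and we sum the ell largest
    of these costs. ell = n recovers the utilitarian objective and ell = 1
    the egalitarian (max-cost) one."""
    per_agent = distances[:, committee].min(axis=1)
    return float(np.sort(per_agent)[-ell:].sum())

# Toy metric instance: 4 agents x 3 candidates (distances are illustrative).
D = np.array([
    [0.0, 2.0, 5.0],
    [1.0, 0.0, 4.0],
    [6.0, 3.0, 0.0],
    [2.0, 2.0, 1.0],
])
cost = l_centrum_cost(D, committee=[0, 2], ell=2)  # agents' costs: 0, 1, 0, 1
```

The difficulty the paper addresses is that an ordinal mechanism sees only rankings, not the entries of `D`, and must approximate this cardinal cost with few value queries.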
|
2501.19149
|
On the inductive bias of infinite-depth ResNets and the bottleneck rank
|
cs.LG cs.AI stat.ML
|
We compute the minimum-norm weights of a deep linear ResNet, and find that
the inductive bias of this architecture lies between minimizing nuclear norm
and rank. This implies that, with appropriate hyperparameters, deep nonlinear
ResNets have an inductive bias towards minimizing bottleneck rank.
|
2501.19153
|
Test-Time Training Scaling for Chemical Exploration in Drug Design
|
cs.LG
|
Chemical language models for molecular design have the potential to find
solutions to multi-parameter optimization problems in drug discovery via
reinforcement learning (RL). A key requirement to achieve this is the capacity
to "search" chemical space to identify all molecules of interest. Here, we
propose a challenging new benchmark to discover dissimilar molecules that
possess similar bioactivity, a common scenario in drug discovery, but a hard
problem to optimize. We show that a population of RL agents can solve the
benchmark, while a single agent cannot. We also find that cooperative
strategies are not significantly better than independent agents. Moreover, the
performance on the benchmark scales log-linearly with the number of independent
agents, showing a test-time training scaling law for chemical language models.
|
2501.19155
|
SWAT: Sliding Window Adversarial Training for Gradual Domain Adaptation
|
cs.CV cs.AI
|
Domain shifts are critical issues that harm the performance of machine
learning. Unsupervised Domain Adaptation (UDA) mitigates this issue but suffers
when the domain shifts are steep and drastic. Gradual Domain Adaptation (GDA)
alleviates this problem in a mild way by gradually adapting from the source to
the target domain using multiple intermediate domains. In this paper, we
propose Sliding Window Adversarial Training (SWAT) for Gradual Domain
Adaptation. SWAT constructs adversarial streams to connect the
feature spaces of the source and target domains. In order to gradually narrow
the small gap between adjacent intermediate domains, a sliding window paradigm
is designed that moves along the adversarial stream. When the window moves to
the end of the stream, i.e., the target domain, the domain shift is drastically
reduced. Extensive experiments are conducted on public GDA benchmarks, and the
results demonstrate that the proposed SWAT significantly outperforms the
state-of-the-art approaches. The implementation is available at:
https://anonymous.4open.science/r/SWAT-8677.
|
2501.19158
|
A theoretical framework for overfitting in energy-based modeling
|
cs.LG cond-mat.dis-nn cond-mat.stat-mech
|
We investigate the impact of limited data on training pairwise energy-based
models for inverse problems aimed at identifying interaction networks.
Utilizing the Gaussian model as testbed, we dissect training trajectories
across the eigenbasis of the coupling matrix, exploiting the independent
evolution of eigenmodes and revealing that the learning timescales are tied to
the spectral decomposition of the empirical covariance matrix. We see that
optimal points for early stopping arise from the interplay between these
timescales and the initial conditions of training. Moreover, we show that
finite data corrections can be accurately modeled through asymptotic random
matrix theory calculations and provide the counterpart of generalized
cross-validation in the energy based model context. Our analytical framework
extends to binary-variable maximum-entropy pairwise models with minimal
variations. These findings offer strategies to control overfitting in
discrete-variable models through empirical shrinkage corrections, improving the
management of overfitting in energy-based generative models.
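For the Gaussian testbed, the decoupling of eigenmodes can be sketched as follows (the gradient-flow form and notation are a hedged reconstruction, not necessarily the paper's):

```latex
% Log-likelihood of a zero-mean Gaussian with precision (coupling) matrix J,
% given empirical covariance \hat C:
\mathcal{L}(J) = \tfrac{1}{2}\log\det J - \tfrac{1}{2}\operatorname{Tr}\!\big(J\hat{C}\big),
\qquad
\frac{dJ}{dt} = \eta\,\frac{\partial \mathcal{L}}{\partial J}
             = \frac{\eta}{2}\big(J^{-1} - \hat{C}\big).
% If J(0) commutes with \hat C, each eigenmode j_k evolves independently:
%   \dot{j}_k = \tfrac{\eta}{2}\big(j_k^{-1} - \hat{c}_k\big),
% with fixed point j_k^\ast = 1/\hat{c}_k and linearized relaxation time
%   \tau_k = 2/(\eta\,\hat{c}_k^{2}),
% so the learning timescales are set by the spectrum of \hat C, as the
% abstract states.
```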
|
2501.19159
|
GDO: Gradual Domain Osmosis
|
cs.CV
|
In this paper, we propose a new method called Gradual Domain Osmosis, which
aims to solve the problem of smooth knowledge migration from source domain to
target domain in Gradual Domain Adaptation (GDA). Traditional Gradual Domain
Adaptation methods mitigate domain bias by introducing intermediate domains and
self-training strategies, but often face the challenges of inefficient
knowledge migration or missing data in intermediate domains. In this paper, we
design an optimisation framework based on the hyperparameter $\lambda$ by
dynamically balancing the loss weights of the source and target domains, which
enables the model to progressively adjust the strength of knowledge migration
($\lambda$ incrementing from 0 to 1) during the training process, thus
achieving cross-domain generalisation more efficiently. Specifically, the
method incorporates self-training to generate pseudo-labels and iteratively
updates the model by minimising a weighted loss function to ensure stability
and robustness during progressive adaptation in the intermediate domain.
Experiments validate the effectiveness of the method on rotated MNIST,
colour-shifted MNIST, a portrait dataset and a forest cover type dataset, and
the results show that it outperforms existing baseline methods. The paper further
analyses the impact of the dynamic tuning strategy of the hyperparameter
$\lambda$ on the performance through ablation experiments, confirming the
advantages of progressive domain penetration in mitigating the domain bias and
enhancing the model generalisation capability. The study provides a theoretical
support and practical framework for asymptotic domain adaptation and expands
its application potential in dynamic environments.
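The $\lambda$-weighted loss at the core of the framework can be sketched in a few lines (the linear schedule and the loss values are illustrative choices; the paper studies the tuning of $\lambda$ in its ablations):

```python
def gdo_loss(source_loss, target_loss, step, total_steps):
    """Weighted loss with lambda ramping from 0 to 1, so training shifts
    smoothly from the source domain to the (pseudo-labelled) target domain."""
    lam = step / total_steps
    return (1.0 - lam) * source_loss + lam * target_loss

# At the start the source loss dominates; at the end the target loss does.
start = gdo_loss(source_loss=2.0, target_loss=5.0, step=0, total_steps=100)
mid = gdo_loss(2.0, 5.0, step=50, total_steps=100)
end = gdo_loss(2.0, 5.0, step=100, total_steps=100)
```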
|
2501.19160
|
RMDM: Radio Map Diffusion Model with Physics Informed
|
cs.CV
|
With the rapid development of wireless communication technology, the
efficient utilization of spectrum resources, optimization of communication
quality, and intelligent communication have become critical. Radio map
reconstruction is essential for enabling advanced applications, yet challenges
such as complex signal propagation and sparse data hinder accurate
reconstruction. To address these issues, we propose the **Radio Map Diffusion
Model (RMDM)**, a physics-informed framework that integrates **Physics-Informed
Neural Networks (PINNs)** to incorporate constraints like the **Helmholtz
equation**. RMDM employs a dual U-Net architecture: the first ensures physical
consistency by minimizing PDE residuals, boundary conditions, and source
constraints, while the second refines predictions via diffusion-based
denoising. By leveraging physical laws, RMDM significantly enhances accuracy,
robustness, and generalization. Experiments demonstrate that RMDM outperforms
state-of-the-art methods, achieving **NMSE of 0.0031** and **RMSE of 0.0125**
under the Static RM (SRM) setting, and **NMSE of 0.0047** and **RMSE of
0.0146** under the Dynamic RM (DRM) setting. These results establish a novel
paradigm for integrating physics-informed and data-driven approaches in radio
map reconstruction, particularly under sparse data conditions.
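The kind of physics residual the first U-Net minimizes can be illustrated with a discrete Helmholtz operator (the five-point stencil, periodic grid, and wavenumber below are my illustrative choices, not the paper's discretisation):

```python
import numpy as np

def helmholtz_residual(U, k, h):
    """Discrete residual of the homogeneous Helmholtz equation
    (Laplacian + k^2) u = 0 on a uniform periodic grid with spacing h."""
    lap = (np.roll(U, 1, 0) + np.roll(U, -1, 0)
           + np.roll(U, 1, 1) + np.roll(U, -1, 1) - 4.0 * U) / h**2
    return lap + k**2 * U

# A plane wave u(x, y) = sin(k x) solves the equation exactly, so its
# discrete residual is only O(h^2).
n, k = 64, 1.0
h = 2 * np.pi / n
x = np.arange(n) * h
U = np.sin(k * x)[:, None] * np.ones((1, n))
res = helmholtz_residual(U, k, h)
```

A PINN-style training term would simply penalize `res**2` over the grid, alongside boundary and source constraints.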
|
2501.19161
|
Locality-aware Surrogates for Gradient-based Black-box Optimization
|
cs.LG
|
In physics and engineering, many processes are modeled using
non-differentiable black-box simulators, making the optimization of such
functions particularly challenging. To address such cases, inspired by the
Gradient Theorem, we propose locality-aware surrogate models for active
model-based black-box optimization. We first establish a theoretical connection
between gradient alignment and the minimization of a Gradient Path Integral
Equation (GradPIE) loss, which enforces consistency of the surrogate's
gradients in local regions of the design space. Leveraging this theoretical
insight, we develop a scalable training algorithm that minimizes the GradPIE
loss, enabling both offline and online learning while maintaining computational
efficiency. We evaluate our approach on three real-world tasks - spanning
automated in silico experiments such as coupled nonlinear oscillators, analog
circuits, and optical systems - and demonstrate consistent improvements in
optimization efficiency under limited query budgets. Our results offer
dependable solutions for both offline and online optimization tasks where
reliable gradient estimation is needed.
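One reading of the GradPIE idea can be sketched as follows (a hedged interpretation, not the paper's exact loss: by the Gradient Theorem, the surrogate's gradient integrated along a straight path between two nearby designs should match the black-box value difference, with a midpoint rule standing in for the path integral):

```python
import numpy as np

def gradpie_loss(grad_s, f, pairs, X):
    """Penalize mismatch between the surrogate-gradient path integral and
    the black-box value difference over pairs of nearby designs."""
    loss = 0.0
    for i, j in pairs:
        mid = 0.5 * (X[i] + X[j])
        path_integral = grad_s(mid) @ (X[j] - X[i])  # midpoint quadrature
        loss += (path_integral - (f(X[j]) - f(X[i]))) ** 2
    return loss / len(pairs)

# Sanity check: for a quadratic black box, the exact gradient field makes
# the midpoint rule exact, so the loss vanishes (up to float error).
f = lambda x: x @ x
grad_exact = lambda x: 2.0 * x
X = np.random.default_rng(0).normal(size=(10, 3))
pairs = [(i, i + 1) for i in range(9)]
loss = gradpie_loss(grad_exact, f, pairs, X)
```

Minimizing such a loss aligns the surrogate's gradients with the black box locally, which is exactly what a gradient-based optimizer needs from the surrogate.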
|
2501.19164
|
Poison as Cure: Visual Noise for Mitigating Object Hallucinations in
LVMs
|
cs.CV
|
Large vision-language models (LVMs) extend large language models (LLMs) with
visual perception capabilities, enabling them to process and interpret visual
information. A major challenge compromising their reliability is object
hallucination, in which LVMs generate plausible but factually inaccurate
information. We propose a novel visual adversarial perturbation (VAP) method to
mitigate this hallucination issue. VAP alleviates LVM hallucination by applying
strategically optimized visual noise without altering the base model. Our
approach formulates hallucination suppression as an optimization problem,
leveraging adversarial strategies to generate beneficial visual perturbations
that enhance the model's factual grounding and reduce parametric knowledge
bias. Extensive experimental results demonstrate that our method consistently
reduces object hallucinations across 8 state-of-the-art LVMs, validating its
efficacy across diverse evaluations.
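A generic projected-gradient sketch of the idea (the grounding score, the finite-difference gradient, and all numbers are stand-ins; the paper's objective is model-specific):

```python
import numpy as np

def numerical_grad(f, x, h=1e-4):
    # Central-difference gradient of a scalar function f at x.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e.flat[i] = h
        g.flat[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def beneficial_perturbation(image, score, steps=50, alpha=0.01, eps=0.05):
    """Projected gradient *ascent* on a factual-grounding score: the mirror
    image of an adversarial attack, producing visual noise that helps
    rather than harms, kept small by an L-infinity budget eps."""
    delta = np.zeros_like(image)
    for _ in range(steps):
        g = numerical_grad(lambda d: score(image + d), delta)
        delta = np.clip(delta + alpha * np.sign(g), -eps, eps)
    return delta

image = np.zeros(4)
target = np.full(4, 0.03)
score = lambda x: -np.sum((x - target) ** 2)  # toy stand-in for grounding
delta = beneficial_perturbation(image, score)
```

Crucially, only the input is perturbed; the base model's weights are untouched, as the abstract emphasizes.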
|
2501.19168
|
Implications of zero-growth economics analysed with an agent-based model
|
econ.GN cs.MA q-fin.EC
|
The ever-approaching limits of the Earth's biosphere and the potentially
catastrophic consequences caused by climate change have begun to call into
question the endless growth of the economy. There is increasing interest in the
prospects of zero economic growth from the degrowth and post-growth literature.
In particular, the question arises as to whether a zero-growth trajectory in a
capitalist system with interest-bearing debt can be economically stable. There
have been several answers to this question using macroeconomic models; some
find a zero-growth trajectory is stable, while other models show an economic
breakdown. However, the capitalist system in a period of growth is not
guaranteed to be stable. Hence, a more appropriate methodology is to compare
the relative stability between a growth and zero-growth scenario on the same
model. Such a question has not yet been answered at any disaggregated level.
It's important to investigate the consequences of zero-growth on market share
instability and concentration, bankruptcy rates, income distribution, and
credit network risk. To answer such questions, we develop a macroeconomic
agent-based model incorporating Minskyan financial dynamics. The growth and
zero-growth scenarios are accomplished by changing an average productivity
growth parameter for the firms in the model. The results showed that, in the
zero-growth scenario, real GDP growth rates were more stable, there were fewer
economic crises, unemployment rates were lower, workers received a higher wage
share of output, and capital-firm and bank market shares were relatively more stable.
Some of the consequences of zero-growth were a higher rate of inflation than in
the growth scenario, increased market concentration for both firms and banks,
and a higher level of financial risk in the credit network.
|
2501.19172
|
PSyDUCK: Training-Free Steganography for Latent Diffusion
|
cs.LG cs.CR
|
Recent advances in AI-generated steganography highlight its potential for
safeguarding the privacy of vulnerable democratic actors, including aid
workers, journalists, and whistleblowers operating in oppressive regimes. In
this work, we address current limitations and establish the foundations for
large-throughput generative steganography. We introduce a novel approach that
enables secure and efficient steganography within latent diffusion models. We
show empirically that our methods perform well across a variety of open-source
latent diffusion models, particularly in generative image and video tasks.
|
2501.19176
|
Augmented Intelligence for Multimodal Virtual Biopsy in Breast Cancer
Using Generative Artificial Intelligence
|
eess.IV cs.AI cs.CV
|
Full-Field Digital Mammography (FFDM) is the primary imaging modality for
routine breast cancer screening; however, its effectiveness is limited in
patients with dense breast tissue or fibrocystic conditions. Contrast-Enhanced
Spectral Mammography (CESM), a second-level imaging technique, offers enhanced
accuracy in tumor detection. Nonetheless, its application is restricted due to
higher radiation exposure, the use of contrast agents, and limited
accessibility. As a result, CESM is typically reserved for select cases,
leaving many patients to rely solely on FFDM despite the superior diagnostic
performance of CESM. While biopsy remains the gold standard for definitive
diagnosis, it is an invasive procedure that can cause discomfort for patients.
We introduce a multimodal, multi-view deep learning approach for virtual
biopsy, integrating FFDM and CESM modalities in craniocaudal and mediolateral
oblique views to classify lesions as malignant or benign. To address the
challenge of missing CESM data, we leverage generative artificial intelligence
to impute CESM images from FFDM scans. Experimental results demonstrate that
incorporating the CESM modality is crucial to enhance the performance of
virtual biopsy. When real CESM data is missing, synthetic CESM images proved
effective, outperforming the use of FFDM alone, particularly in multimodal
configurations that combine FFDM and CESM modalities. The proposed approach has
the potential to improve diagnostic workflows, providing clinicians with
augmented intelligence tools to improve diagnostic accuracy and patient care.
Additionally, as a contribution to the research community, we publicly release
the dataset used in our experiments, facilitating further advancements in this
field.
|
2501.19178
|
No Foundations without Foundations -- Why semi-mechanistic models are
essential for regulatory biology
|
cs.LG
|
Despite substantial efforts, deep learning has not yet delivered a
transformative impact on elucidating regulatory biology, particularly in the
realm of predicting gene expression profiles. Here, we argue that genuine
"foundation models" of regulatory biology will remain out of reach unless
guided by frameworks that integrate mechanistic insight with principled
experimental design. We present one such ground-up, semi-mechanistic framework
that unifies perturbation-based experimental designs across both in vitro and
in vivo CRISPR screens, accounting for differentiating and non-differentiating
cellular systems. By revealing previously unrecognised assumptions in published
machine learning methods, our approach clarifies links with popular techniques
such as variational autoencoders and structural causal models. In practice,
this framework suggests a modified loss function that we demonstrate can
improve predictive performance, and further suggests an error analysis that
informs batching strategies. Ultimately, since cellular regulation emerges from
innumerable interactions amongst largely uncharted molecular components, we
contend that systems-level understanding cannot be achieved through structural
biology alone. Instead, we argue that real progress will require a
first-principles perspective on how experiments capture biological phenomena,
how data are generated, and how these processes can be reflected in more
faithful modelling architectures.
|
2501.19179
|
Learning Non-Local Molecular Interactions via Equivariant Local
Representations and Charge Equilibration
|
physics.chem-ph cs.LG physics.comp-ph
|
Graph Neural Network (GNN) potentials relying on chemical locality offer
near-quantum mechanical accuracy at significantly reduced computational costs.
By propagating local information to distant particles, message-passing neural
networks (MPNNs) extend the locality concept to model interactions beyond their
local neighborhood. Still, this locality precludes modeling long-range effects,
such as charge transfer, electrostatic interactions, and dispersion effects,
which are critical to adequately describe many real-world systems. In this
work, we propose the Charge Equilibration Layer for Long-range Interactions
(CELLI) to address the challenging modeling of non-local interactions and the
high computational cost of MPNNs. This novel architecture generalizes the
fourth-generation high-dimensional neural network (4GHDNN) concept, integrating
the charge equilibration (Qeq) method into a model-agnostic building block for
modern equivariant GNN potentials. A series of benchmarks show that CELLI can
extend the strictly local Allegro architecture to model highly non-local
interactions and charge transfer. Our architecture generalizes to diverse
datasets and large structures, achieving an accuracy comparable to MPNNs at
about twice the computational efficiency.
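The classical charge equilibration (Qeq) solve that CELLI builds on can be sketched as a small linear system (this is the textbook method, not the learned CELLI layer; the two-atom numbers are illustrative):

```python
import numpy as np

def charge_equilibration(chi, hardness, coulomb, total_charge=0.0):
    """Classical Qeq: charges minimise
    E(q) = chi.q + 0.5 q^T (diag(hardness) + J) q  subject to sum(q) = Q,
    solved via the KKT system with a single Lagrange multiplier."""
    n = len(chi)
    A = coulomb + np.diag(hardness)
    K = np.block([[A, np.ones((n, 1))],
                  [np.ones((1, n)), np.zeros((1, 1))]])
    rhs = np.concatenate([-np.asarray(chi), [total_charge]])
    return np.linalg.solve(K, rhs)[:n]

# Two-atom toy system: the atom with higher electronegativity chi acquires
# the negative partial charge.
q = charge_equilibration(chi=[1.0, 2.0], hardness=[4.0, 4.0],
                         coulomb=np.array([[0.0, 0.5], [0.5, 0.0]]))
```

Because the solve is global over all atoms, it injects exactly the non-local charge-transfer information that a strictly local GNN potential cannot capture.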
|
2501.19180
|
Enhancing Model Defense Against Jailbreaks with Proactive Safety
Reasoning
|
cs.CR cs.AI
|
Large language models (LLMs) are vital for a wide range of applications yet
remain susceptible to jailbreak threats, which could lead to the generation of
inappropriate responses. Conventional defenses, such as refusal and adversarial
training, often fail to cover corner cases or rare domains, leaving LLMs still
vulnerable to more sophisticated attacks. We propose a novel defense strategy,
Safety Chain-of-Thought (SCoT), which harnesses the enhanced \textit{reasoning
capabilities} of LLMs for proactive assessment of harmful inputs, rather than
simply blocking them. SCoT augments any refusal training datasets to critically
analyze the intent behind each request before generating answers. By employing
proactive reasoning, SCoT enhances the generalization of LLMs across varied
harmful queries and scenarios not covered in the safety alignment corpus.
Additionally, it generates detailed refusals specifying the rules violated.
Comparative evaluations show that SCoT significantly surpasses existing
defenses, reducing vulnerability to out-of-distribution issues and adversarial
manipulations while maintaining strong general capabilities.
|
2501.19182
|
A Communication Framework for Compositional Generation
|
cs.LG
|
Compositionality and compositional generalization--the ability to understand
novel combinations of known concepts--are central characteristics of human
language and are hypothesized to be essential for human cognition. In machine
learning, the emergence of this property has been studied in a communication
game setting, where independent agents (a sender and a receiver) converge to a
shared encoding policy from a set of states to a space of discrete messages,
where the receiver can correctly reconstruct the states observed by the sender
using only the sender's messages. The use of communication games in generation
tasks is still largely unexplored, with recent methods for compositional
generation focusing mainly on the use of supervised guidance (either through
class labels or text). In this work, we take the first steps to fill this gap,
and we present a self-supervised generative communication game-based framework
for creating compositional encodings in learned representations from
pre-trained encoder-decoder models. In an Iterated Learning (IL) protocol
involving a sender and a receiver, we apply alternating pressures for
compression and diversity of encoded discrete messages, so that the protocol
converges to an efficient but unambiguous encoding. Approximate message entropy
regularization is used to favor compositional encodings. Our framework rests
on rigorous justifications and proofs for defining and balancing the concepts
of Efficiency, Unambiguity and Non-Holisticity in encoding. We test our method on
the compositional image dataset Shapes3D, demonstrating robust performance in
both reconstruction and compositionality metrics, surpassing other tested
discrete message frameworks.
|
2501.19183
|
Position: Curvature Matrices Should Be Democratized via Linear Operators
|
cs.LG
|
Structured large matrices are prevalent in machine learning. A particularly
important class is curvature matrices like the Hessian, which are central to
understanding the loss landscape of neural nets (NNs), and enable second-order
optimization, uncertainty quantification, model pruning, data attribution, and
more. However, curvature computations can be challenging due to the complexity
of automatic differentiation, and the variety and structural assumptions of
curvature proxies, like sparsity and Kronecker factorization. In this position
paper, we argue that linear operators -- an interface for performing
matrix-vector products -- provide a general, scalable, and user-friendly
abstraction to handle curvature matrices. To support this position, we
developed $\textit{curvlinops}$, a library that provides curvature matrices
through a unified linear operator interface. We demonstrate with
$\textit{curvlinops}$ how this interface can hide complexity, simplify
applications, be extensible and interoperable with other libraries, and scale
to large NNs.
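The matrix-vector-product interface the position advocates can be illustrated without any library (the finite-difference HVP and toy quadratic below are my illustration, unrelated to the actual curvlinops API):

```python
import numpy as np

def hvp(grad, theta, v, eps=1e-6):
    """Hessian-vector product via central differences on a gradient
    function -- curvature exposed only through matvecs, without ever
    materialising the Hessian."""
    return (grad(theta + eps * v) - grad(theta - eps * v)) / (2.0 * eps)

A = np.diag([1.0, 2.0, 3.0])
grad = lambda t: A @ t                # gradient of the quadratic 0.5 t^T A t
theta = np.zeros(3)

# Power iteration touches the curvature only through matvecs, so it works
# for any linear operator: here it recovers the largest Hessian eigenvalue.
v = np.ones(3) / np.sqrt(3.0)
for _ in range(100):
    w = hvp(grad, theta, v)
    v = w / np.linalg.norm(w)
top_eig = v @ hvp(grad, theta, v)
```

Downstream algorithms (eigensolvers, conjugate gradients, trace estimators) need nothing beyond this matvec, which is why a unified linear-operator interface can hide the complexity of the underlying curvature proxy.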
|
2501.19184
|
A Survey on Class-Agnostic Counting: Advancements from Reference-Based
to Open-World Text-Guided Approaches
|
cs.CV
|
Visual object counting has recently shifted towards class-agnostic counting
(CAC), which addresses the challenge of counting objects across arbitrary
categories -- a crucial capability for flexible and generalizable counting
systems. Unlike humans, who effortlessly identify and count objects from
diverse categories without prior knowledge, most existing counting methods are
restricted to enumerating instances of known classes, requiring extensive
labeled datasets for training and struggling in open-vocabulary settings. In
contrast, CAC aims to count objects belonging to classes never seen during
training, operating in a few-shot setting. In this paper, we present the first
comprehensive review of CAC methodologies. We propose a taxonomy to categorize
CAC approaches into three paradigms based on how target object classes can be
specified: reference-based, reference-less, and open-world text-guided.
Reference-based approaches achieve state-of-the-art performance by relying on
exemplar-guided mechanisms. Reference-less methods eliminate exemplar
dependency by leveraging inherent image patterns. Finally, open-world
text-guided methods use vision-language models, enabling object class
descriptions via textual prompts, offering a flexible and promising solution.
Based on this taxonomy, we provide an overview of the architectures of 29 CAC
approaches and report their results on gold-standard benchmarks. We compare
their performance and discuss their strengths and limitations. Specifically, we
present results on the FSC-147 dataset, setting a leaderboard using
gold-standard metrics, and on the CARPK dataset to assess generalization
capabilities. Finally, we offer a critical discussion of persistent challenges,
such as annotation dependency and generalization, alongside future directions.
We believe this survey will be a valuable resource, showcasing CAC advancements
and guiding future research.
|
2501.19191
|
Secured Communication Schemes for UAVs in 5G: CRYSTALS-Kyber and IDS
|
cs.CR cs.AI
|
This paper introduces a secure communication architecture for Unmanned Aerial
Vehicles (UAVs) and ground stations in 5G networks, addressing critical
challenges in network security. The proposed solution integrates the Advanced
Encryption Standard (AES) with Elliptic Curve Cryptography (ECC) and
CRYSTALS-Kyber for key encapsulation, offering a hybrid cryptographic approach.
By incorporating CRYSTALS-Kyber, the framework mitigates vulnerabilities in ECC
against quantum attacks, positioning it as a quantum-resistant alternative. The
architecture is based on a server-client model, with UAVs functioning as
clients and the ground station acting as the server. The system was rigorously
evaluated in both VPN and 5G environments. Experimental results confirm that
CRYSTALS-Kyber delivers strong protection against quantum threats with minimal
performance overhead, making it highly suitable for UAVs with resource
constraints. Moreover, the proposed architecture integrates an Artificial
Intelligence (AI)-based Intrusion Detection System (IDS) to further enhance
security. In performance evaluations, the IDS demonstrated strong results
across multiple models, with XGBoost outperforming the others, particularly in
more demanding scenarios, achieving an accuracy of 97.33% and an AUC of 0.94. These
findings underscore the potential of combining quantum-resistant encryption
mechanisms with AI-driven IDS to create a robust, scalable, and secure
communication framework for UAV networks, particularly within the
high-performance requirements of 5G environments.
|
2501.19194
|
APEX: Automated Parameter Exploration for Low-Power Wireless Protocols
|
cs.NI cs.SY eess.SY
|
Careful parametrization of networking protocols is crucial to maximize the
performance of low-power wireless systems and ensure that stringent application
requirements can be met. This is a non-trivial task involving thorough
characterization on testbeds and requiring expert knowledge. Unfortunately, the
community still lacks a tool to facilitate parameter exploration while
minimizing the necessary experimentation time on testbeds. Such a tool would be
invaluable, as exhaustive parameter searches can be time-prohibitive or
unfeasible given the limited availability of testbeds, whereas non-exhaustive
unguided searches rarely deliver satisfactory results. In this paper, we
present APEX, a framework enabling an automated and informed parameter
exploration for low-power wireless protocols, allowing convergence to an
optimal parameter set within a limited number of testbed trials. We design APEX
using Gaussian processes to effectively handle noisy experimental data and
estimate the optimality of a certain parameter combination. After developing a
prototype of APEX, we demonstrate its effectiveness by parametrizing two IEEE
802.15.4 protocols for a wide range of application requirements. Our results
show that APEX can return the best parameter set with up to 10.6x, 4.5x and
3.25x fewer testbed trials than traditional solutions based on exhaustive
search, greedy approaches, and reinforcement learning, respectively.
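The GP-guided exploration loop can be sketched as follows (an illustrative toy, not APEX's implementation: the synthetic performance metric, the RBF kernel length-scale, and the upper-confidence-bound acquisition rule are all assumptions). A Gaussian process fitted to the trials so far scores untried parameter values, and the most promising one is run next.

```python
import numpy as np

def rbf(a, b, ls=0.5):
    # squared-exponential kernel over scalar parameter values
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_tr, y_tr, x_cand, noise=1e-2):
    # standard GP regression posterior mean/variance on candidate points
    K = rbf(x_tr, x_tr) + noise * np.eye(len(x_tr))
    Ks = rbf(x_tr, x_cand)
    mu = Ks.T @ np.linalg.solve(K, y_tr)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.maximum(var, 1e-12)

f = lambda x: -(x - 0.7) ** 2      # unknown performance metric; peak at 0.7
grid = np.linspace(0, 1, 101)      # candidate parameter values
tried = [0.0, 1.0]
for _ in range(10):                # limited budget of testbed trials
    x_tr = np.array(tried)
    mu, var = gp_posterior(x_tr, f(x_tr), grid)
    ucb = mu + 2.0 * np.sqrt(var)  # explore/exploit trade-off
    tried.append(float(grid[np.argmax(ucb)]))

best = max(tried, key=f)
print(best)
```

The GP's noise term is what lets the loop tolerate noisy experimental measurements: repeated or nearby trials shrink the posterior variance instead of being trusted individually.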
|
2501.19195
|
Rethinking Early Stopping: Refine, Then Calibrate
|
cs.LG cs.AI
|
Machine learning classifiers often produce probabilistic predictions that are
critical for accurate and interpretable decision-making in various domains. The
quality of these predictions is generally evaluated with proper losses like
cross-entropy, which decompose into two components: calibration error assesses
general under/overconfidence, while refinement error measures the ability to
distinguish different classes. In this paper, we provide theoretical and
empirical evidence that these two errors are not minimized simultaneously
during training. Selecting the best training epoch based on validation loss
thus leads to a compromise point that is suboptimal for both calibration error
and, most importantly, refinement error. To address this, we introduce a new
metric for early stopping and hyperparameter tuning that makes it possible to
minimize refinement error during training. The calibration error is minimized
after training, using standard techniques. Our method integrates seamlessly
with any architecture and consistently improves performance across diverse
classification tasks.
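The post-training calibration step can be illustrated with temperature scaling, one standard technique (a hedged sketch; the abstract does not prescribe this exact recipe): a single scalar T is fitted on validation logits to minimize negative log-likelihood.

```python
import numpy as np

def nll(logits, labels, T):
    # negative log-likelihood of temperature-scaled logits
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)          # stabilized softmax
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

rng = np.random.default_rng(0)
n, k = 2000, 5
labels = rng.integers(0, k, n)
clean = rng.standard_normal((n, k))
clean[np.arange(n), labels] += 2.0                # informative logits
overconfident = 3.0 * clean                       # same argmax, worse calibration

# Fit T by a simple grid search on held-out data.
ts = np.linspace(0.2, 5.0, 97)
best_T = min(ts, key=lambda T: nll(overconfident, labels, T))
print(float(best_T))
```

Dividing logits by a scalar T > 0 leaves the argmax, and hence the refinement error, unchanged, which is why calibration can safely be deferred to after training as the abstract proposes.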
|
2501.19196
|
RaySplats: Ray Tracing based Gaussian Splatting
|
cs.CV
|
3D Gaussian Splatting (3DGS) is a process that enables the direct creation of
3D objects from 2D images. This representation offers numerous advantages,
including rapid training and rendering. However, a significant limitation of
3DGS is the challenge of incorporating light and shadow reflections, primarily
due to the utilization of rasterization rather than ray tracing for rendering.
This paper introduces RaySplats, a model that employs ray-tracing based
Gaussian Splatting. Rather than utilizing the projection of Gaussians, our
method employs a ray-tracing mechanism, operating directly on Gaussian
primitives represented by confidence ellipses with RGB colors. In practice, we
compute the intersection between ellipses and rays to construct ray-tracing
algorithms, facilitating the incorporation of meshes with Gaussian Splatting
models and the addition of lights, shadows, and other related effects.
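The intersection test at the core of such a ray tracer reduces to a scalar quadratic. A hedged geometric sketch (the abstract speaks of confidence ellipses; here a 3D ellipsoid form with an assumed scale parameter r is used, not the paper's exact formulation): substituting the ray o + t*d into (x - c)^T S^{-1} (x - c) = r^2 gives a quadratic in t.

```python
import numpy as np

def ray_ellipsoid(o, d, c, S_inv, r=1.0):
    # Solve a t^2 + b t + cc = 0 for the ray-ellipsoid intersection.
    p = o - c
    a = d @ S_inv @ d
    b = 2.0 * (p @ S_inv @ d)
    cc = p @ S_inv @ p - r * r
    disc = b * b - 4.0 * a * cc
    if disc < 0:
        return None                        # ray misses the ellipsoid
    s = np.sqrt(disc)
    return (-b - s) / (2.0 * a), (-b + s) / (2.0 * a)   # entry/exit t

# Unit sphere at the origin, ray along +z starting from z = -5:
hit = ray_ellipsoid(np.array([0.0, 0.0, -5.0]), np.array([0.0, 0.0, 1.0]),
                    np.zeros(3), np.eye(3))
print(hit[0], hit[1])  # 4.0 6.0 (enters at z = -1, exits at z = +1)
```

With entry/exit parameters per Gaussian in hand, standard ray-tracing machinery (sorting hits, shadow rays toward lights, mesh primitives in the same acceleration structure) applies directly, which is the flexibility the abstract highlights.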
|
2501.19200
|
A Variational Perspective on Generative Protein Fitness Optimization
|
cs.LG
|
The goal of protein fitness optimization is to discover new protein variants
with enhanced fitness for a given use. The vast search space and the sparsely
populated fitness landscape, along with the discrete nature of protein
sequences, pose significant challenges when trying to determine the gradient
towards configurations with higher fitness. We introduce Variational Latent
Generative Protein Optimization (VLGPO), a variational perspective on fitness
optimization. Our method embeds protein sequences in a continuous latent space
to enable efficient sampling from the fitness distribution and combines a
(learned) flow matching prior over sequence mutations with a fitness predictor
to guide optimization towards sequences with high fitness. VLGPO achieves
state-of-the-art results on two different protein benchmarks of varying
complexity. Moreover, the variational design with explicit prior and likelihood
functions offers a flexible plug-and-play framework that can be easily
customized to suit various protein design tasks.
|
2501.19201
|
Efficient Reasoning with Hidden Thinking
|
cs.CL cs.AI cs.LG
|
Chain-of-Thought (CoT) reasoning has become a powerful framework for
improving complex problem-solving capabilities in Multimodal Large Language
Models (MLLMs). However, the verbose nature of textual reasoning introduces
significant inefficiencies. In this work, we propose $\textbf{Heima}$ (as
hidden llama), an efficient reasoning framework that leverages reasoning CoTs
at hidden latent space. We design the Heima Encoder to condense each
intermediate CoT into a compact, higher-level hidden representation using a
single thinking token, effectively minimizing verbosity and reducing the
overall number of tokens required during the reasoning process. Meanwhile, we
design a corresponding Heima Decoder with traditional Large Language Models
(LLMs) to adaptively interpret the hidden representations into variable-length
textual sequences, reconstructing reasoning processes that closely resemble the
original CoTs. Experimental results across diverse reasoning MLLM benchmarks
demonstrate that the Heima model achieves higher generation efficiency while
maintaining, or even improving, zero-shot task accuracy. Moreover, the effective
reconstruction of multimodal reasoning processes with Heima Decoder validates
both the robustness and interpretability of our approach.
|
2501.19202
|
Improving the Robustness of Representation Misdirection for Large
Language Model Unlearning
|
cs.CL
|
Representation Misdirection (RM) and variants are established large language
model (LLM) unlearning methods with state-of-the-art performance. In this
paper, we show that RM methods inherently reduce models' robustness, causing
them to misbehave even when a single non-adversarial forget-token is in the
retain-query. Toward understanding underlying causes, we reframe the unlearning
process as backdoor attacks and defenses: forget-tokens act as backdoor
triggers that, when activated in retain-queries, cause disruptions in RM
models' behaviors, similar to successful backdoor attacks. To mitigate this
vulnerability, we propose Random Noise Augmentation (RNA) -- a model- and
method-agnostic approach with theoretical guarantees for improving the robustness of
RM methods. Extensive experiments demonstrate that RNA significantly improves
the robustness of RM models while enhancing unlearning performance.
|
2501.19203
|
Single cell resolution 3D imaging and segmentation within intact live
tissues
|
q-bio.QM cs.AI cs.CV q-bio.CB q-bio.TO
|
Epithelial cells form diverse structures from squamous spherical organoids to
densely packed pseudostratified tissues. Quantification of cellular properties
in these contexts requires high-resolution deep imaging and computational
techniques to achieve truthful three-dimensional (3D) structural features.
Here, we describe a detailed step-by-step protocol for sample preparation,
imaging and deep-learning-assisted cell segmentation to achieve accurate
quantification of fluorescently labelled individual cells in 3D within live
tissues. We share the lessons learned through troubleshooting 3D imaging of
Drosophila wing discs, including considerations on the choice of microscopy
modality and settings (objective, sample mounting) and available segmentation
methods. In addition, we include a computational pipeline alongside custom code
to assist replication of the protocol. While we focus on the segmentation of
cell outlines from membrane labelling, this protocol applies to a wide variety
of samples, and we believe it will be valuable for studying other tissues that
demand complex analysis in 3D.
|
2501.19205
|
RIGNO: A Graph-based framework for robust and accurate operator learning
for PDEs on arbitrary domains
|
cs.LG
|
Learning the solution operators of PDEs on arbitrary domains is challenging
due to the diversity of possible domain shapes, in addition to the often
intricate underlying physics. We propose an end-to-end graph neural network
(GNN) based neural operator to learn PDE solution operators from data on point
clouds in arbitrary domains. Our multi-scale model maps data between
input/output point clouds by passing it through a downsampled regional mesh.
Many novel elements are also incorporated to ensure resolution invariance and
temporal continuity. Our model, termed RIGNO, is tested on a challenging suite
of benchmarks, composed of various time-dependent and steady PDEs defined on a
diverse set of domains. We demonstrate that RIGNO is significantly more
accurate than neural operator baselines and robustly generalizes to unseen
spatial resolutions and time instances.
|
2501.19206
|
An Empirical Game-Theoretic Analysis of Autonomous Cyber-Defence Agents
|
cs.AI cs.CR cs.GT
|
The recent rise in increasingly sophisticated cyber-attacks raises the need
for robust and resilient autonomous cyber-defence (ACD) agents. Given the
variety of cyber-attack tactics, techniques and procedures (TTPs) employed,
learning approaches that can return generalisable policies are desirable.
Meanwhile, the assurance of ACD agents remains an open challenge. We address
both challenges via an empirical game-theoretic analysis of deep reinforcement
learning (DRL) approaches for ACD using the principled double oracle (DO)
algorithm. This algorithm relies on adversaries iteratively learning
(approximate) best responses against each other's policies; a computationally
expensive endeavour for autonomous cyber operations agents. In this work we
introduce and evaluate a theoretically-sound, potential-based reward shaping
approach to expedite this process. In addition, given the increasing number of
open-source ACD-DRL approaches, we extend the DO formulation to allow for
multiple response oracles (MRO), providing a framework for a holistic
evaluation of ACD approaches.
|
2501.19207
|
Learning Sheaf Laplacian Optimizing Restriction Maps
|
eess.SP cs.LG
|
The aim of this paper is to propose a novel framework to infer the sheaf
Laplacian, including the topology of a graph and the restriction maps, from a
set of data observed over the nodes of a graph. The proposed method is based on
sheaf theory, which represents an important generalization of graph signal
processing. The learning problem aims to find the sheaf Laplacian that
minimizes the total variation of the observed data, where the variation over
each edge is also locally minimized by optimizing the associated restriction
maps. Compared to alternative methods based on semidefinite programming, our
solution is significantly more numerically efficient, as all its fundamental
steps are resolved in closed form. The method is numerically tested on data
consisting of vectors defined over subspaces of varying dimensions at each
node. We demonstrate how the resulting graph is influenced by two key factors:
the cross-correlation and the dimensionality difference of the data residing on
the graph's nodes.
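The standard sheaf-Laplacian construction that this learning problem targets can be sketched as background (an illustrative assembly with random restriction maps, not the paper's inference algorithm): with coboundary (Bx)_e = F_ue x_u - F_ve x_v per edge e = (u, v), the sheaf Laplacian is L = B^T B, and the total variation x^T L x sums the edge-wise disagreements.

```python
import numpy as np

d = 2                                    # stalk dimension at every node
edges = [(0, 1), (1, 2)]
rng = np.random.default_rng(1)
# One restriction map per (node, edge) incidence; kept near identity so
# they are safely invertible in this toy.
F = {(u, e): np.eye(d) + 0.3 * rng.standard_normal((d, d))
     for e, (u, _) in enumerate(edges)}
F.update({(v, e): np.eye(d) + 0.3 * rng.standard_normal((d, d))
          for e, (_, v) in enumerate(edges)})

n_nodes, n_edges = 3, len(edges)
B = np.zeros((n_edges * d, n_nodes * d))
for e, (u, v) in enumerate(edges):
    B[e*d:(e+1)*d, u*d:(u+1)*d] = F[(u, e)]
    B[e*d:(e+1)*d, v*d:(v+1)*d] = -F[(v, e)]
L = B.T @ B                              # sheaf Laplacian

# A global section (F_ue x_u = F_ve x_v on every edge) has zero variation:
x = np.zeros(n_nodes * d)
x[0:d] = rng.standard_normal(d)
x[d:2*d] = np.linalg.solve(F[(1, 0)], F[(0, 0)] @ x[0:d])
x[2*d:3*d] = np.linalg.solve(F[(2, 1)], F[(1, 1)] @ x[d:2*d])
print(float(x @ L @ x))                  # ~0 for a global section
```

The paper's learning problem runs this construction in reverse: given observed signals, it seeks the topology and restriction maps whose induced L makes the observed total variation small.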
|
2501.19208
|
Learning While Repositioning in On-Demand Vehicle Sharing Networks
|
stat.ML cs.LG math.OC
|
We consider a network inventory problem motivated by one-way, on-demand
vehicle sharing services. Due to uncertainties in both demand and returns, as
well as a fixed number of rental units across an $n$-location network, the
service provider must periodically reposition vehicles to match supply with
demand spatially while minimizing costs. The optimal repositioning policy under
a general $n$-location network is intractable without knowing the optimal value
function. We introduce the best base-stock repositioning policy as a
generalization of the classical inventory control policy to $n$ dimensions, and
establish its asymptotic optimality in two distinct limiting regimes under
general network structures. We present reformulations to efficiently compute
this best base-stock policy in an offline setting with pre-collected data.
In the online setting, we show that a natural Lipschitz-bandit approach
achieves a regret guarantee of $\widetilde{O}(T^{\frac{n}{n+1}})$, which
suffers from the exponential dependence on $n$. We illustrate the challenges of
learning with censored data in networked systems through a regret lower bound
analysis and by demonstrating the suboptimality of alternative algorithmic
approaches. Motivated by these challenges, we propose an Online Gradient
Repositioning algorithm that relies solely on censored demand. Under a mild
cost-structure assumption, we prove that it attains an optimal regret of
$O(n^{2.5} \sqrt{T})$, which matches the regret lower bound in $T$ and achieves
only polynomial dependence on $n$. The key algorithmic innovation involves
proposing surrogate costs to disentangle intertemporal dependencies and
leveraging dual solutions to find the gradient of policy change. Numerical
experiments demonstrate the effectiveness of our proposed methods.
|
2501.19214
|
A single-loop SPIDER-type stochastic subgradient method for
expectation-constrained nonconvex nonsmooth optimization
|
math.OC cs.CC cs.LG cs.NA math.NA
|
Many real-world problems, such as those with fairness constraints, involve
complex expectation constraints and large datasets, necessitating the design of
efficient stochastic methods to solve them. Most existing research focuses on
cases with no constraints, easy-to-project constraints, or deterministic
constraints. In this paper, we consider nonconvex nonsmooth stochastic
optimization problems with expectation constraints, for which we build a novel
exact penalty model. We first show the relationship between the penalty model
and the original problem. Then on solving the penalty problem, we present a
single-loop SPIDER-type stochastic subgradient method, which utilizes the
subgradients of both the objective and constraint functions, as well as the
constraint function value at each iteration. Under certain regularity
conditions (weaker than Slater-type constraint qualification or strong
feasibility assumed in existing works), we establish an iteration complexity
result of $O(\epsilon^{-4})$ to reach a near-$\epsilon$ stationary point of the
penalized problem in expectation, matching the lower bound for such tasks.
Building on the exact penalization, an $(\epsilon,\epsilon)$-KKT point of the
original problem is obtained. For a few scenarios, our complexity of either the
objective sample subgradient or the constraint sample function values can be
lower than the state-of-the-art results by a factor of $\epsilon^{-2}$.
Moreover, on solving two fairness-constrained problems, our method is
significantly (up to 466 times) faster than the state-of-the-art algorithms,
including switching subgradient method and inexact proximal point methods.
|
2501.19215
|
Strassen Attention: Unlocking Compositional Abilities in Transformers
Based on a New Lower Bound Method
|
cs.LG cs.AI
|
We propose a novel method to evaluate the theoretical limits of Transformers,
allowing us to prove the first lower bounds against one-layer softmax
Transformers with infinite precision. We establish those bounds for three tasks
that require advanced reasoning. The first task, Match3 (Sanford et al., 2023),
requires looking at all triples of positions. The second and third tasks
address compositionality-based reasoning: one is composition of functions (Peng
et al., 2024) and the other is composition of binary relations. We formally
prove the inability of one-layer softmax Transformers to solve any of these
tasks. In an attempt to overcome these limitations, we introduce Strassen
attention and prove that with this mechanism a one-layer Transformer can in
principle solve all these tasks. We also show that it enjoys sub-cubic
running-time complexity, making it more scalable than similar previously
proposed mechanisms, such as higher-order attention (Sanford et al., 2023). To
complement our theoretical findings, we experimentally studied Strassen
attention and compared it against standard attention (Vaswani et al., 2017),
higher-order attention (Sanford et al., 2023) and triangular attention (Bergen
et al., 2021).
Our results help to disentangle all these attention mechanisms, highlighting
their strengths and limitations. In particular, Strassen attention outperforms
standard attention significantly on all the tasks. Altogether, understanding
the theoretical limitations can guide research towards scalable attention
mechanisms that improve the reasoning abilities of Transformers.
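The Match3 task mentioned above can be stated as a brute-force check over all triples of positions (following the abstract's informal description; the precise formulation is in Sanford et al., 2023): decide whether some triple of tokens sums to zero modulo p, a property that genuinely couples three positions at once.

```python
from itertools import combinations

def match3(xs, p):
    # Naive O(n^3) check over all triples of distinct positions.
    return any((a + b + c) % p == 0 for a, b, c in combinations(xs, 3))

print(match3([1, 2, 4], 7))   # 1 + 2 + 4 = 7 = 0 (mod 7) -> True
print(match3([1, 1, 1], 7))   # 3 mod 7 != 0              -> False
```

The cubic cost of this check is exactly what standard pairwise attention struggles to express in one layer, and what the proposed Strassen attention is designed to capture at sub-cubic cost.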
|
2501.19216
|
E2Former: A Linear-time Efficient and Equivariant Transformer for
Scalable Molecular Modeling
|
cs.LG cond-mat.mtrl-sci
|
Equivariant Graph Neural Networks (EGNNs) have demonstrated significant
success in modeling microscale systems, including those in chemistry, biology
and materials science. However, EGNNs face substantial computational challenges
due to the high cost of constructing edge features via spherical tensor
products, making them impractical for large-scale systems. To address this
limitation, we introduce E2Former, an equivariant and efficient transformer
architecture that incorporates the Wigner $6j$ convolution (Wigner $6j$ Conv).
By shifting the computational burden from edges to nodes, the Wigner $6j$ Conv
reduces the complexity from $O(|\mathcal{E}|)$ to $O(|\mathcal{V}|)$ while
preserving both the model's expressive power and rotational equivariance. We
show that this approach achieves a 7x-30x speedup compared to conventional
$\mathrm{SO}(3)$ convolutions. Furthermore, our empirical results demonstrate
that the derived E2Former mitigates the computational challenges of existing
approaches without compromising the ability to capture detailed geometric
information. This development could suggest a promising direction for scalable
and efficient molecular modeling.
|
2501.19218
|
A parallelizable variant of HCA*
|
eess.SY cs.SY
|
This paper presents a parallelizable variant of the well-known Hierarchical
Cooperative A* algorithm (HCA*) for the multi-agent path finding (MAPF)
problem. In this variant, all agents initially find their shortest paths
disregarding the presence of others. This is done using A*. Then an
intersection graph (IG) is constructed; each agent is a node, and two nodes
have an edge between them if the paths of the corresponding agents collide.
Thereafter, an independent set is extracted with the aid of an approximation
algorithm for the maximum independent set problem. The paths of the agents
belonging to the independent set are fixed. The remaining agents then find
their shortest paths again, this time ensuring no collision with the prior
agents. Space-time A*,
which is a crucial component of HCA*, is used here. These iterations continue
until no agents are left. Since the tasks of finding shortest paths for the
agents in any iteration are independent of each other, the proposed algorithm
can be parallelized to a large extent. In addition to this, the task of
determining the IG can also be done in parallel by dividing the map into
sections and with each agent focusing on a particular section. The parallelism
does come at a cost of communication between the agents and the server. This is
accounted for in the simulations. As an added advantage, the user need not
choose a priority order. It is observed, empirically, that the proposed
algorithm outperforms HCA* in terms of the computation time and the cost value
in many cases. Simulations are provided for corroboration.
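The iteration described above can be sketched as follows (a toy illustration with a BFS-based space-time planner that handles vertex conflicts only, not the paper's full Space-time A*; the grid, agents, and horizon are assumptions): plan all agents independently, build the intersection graph from colliding drafts, fix a greedy independent set, and replan the rest against the fixed space-time reservations.

```python
from collections import deque
from itertools import combinations

GRID = {(x, y) for x in range(3) for y in range(3)}   # open 3x3 grid
HORIZON = 12

def plan(start, goal, reserved):
    # BFS over (cell, time); agents may wait; avoid reserved (cell, time).
    q, seen = deque([(start, 0, [start])]), {(start, 0)}
    while q:
        cell, t, path = q.popleft()
        if cell == goal and all((goal, s) not in reserved
                                for s in range(t, HORIZON + 1)):
            return path                    # goal reachable and holdable
        if t >= HORIZON:
            continue
        x, y = cell
        for nxt in [(x, y), (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if nxt in GRID and (nxt, t + 1) not in reserved \
                    and (nxt, t + 1) not in seen:
                seen.add((nxt, t + 1))
                q.append((nxt, t + 1, path + [nxt]))
    return None

def collide(p, q):
    # Vertex conflicts only; swap (edge) conflicts are ignored in this toy.
    n = max(len(p), len(q))
    pad = lambda r: r + [r[-1]] * (n - len(r))
    return any(a == b for a, b in zip(pad(p), pad(q)))

agents = {0: ((0, 0), (2, 2)), 1: ((2, 0), (0, 0))}
paths, remaining, reserved = {}, set(agents), set()
while remaining:
    drafts = {i: plan(*agents[i], reserved) for i in remaining}
    ig = {i: set() for i in remaining}     # intersection graph over drafts
    for i, j in combinations(sorted(remaining), 2):
        if collide(drafts[i], drafts[j]):
            ig[i].add(j); ig[j].add(i)
    indep, blocked = set(), set()          # greedy maximal independent set
    for i in sorted(remaining, key=lambda k: len(ig[k])):
        if i not in blocked:
            indep.add(i); blocked |= ig[i]
    for i in sorted(indep):                # fix these agents' paths
        paths[i] = drafts[i]
        reserved |= {(c, t) for t, c in enumerate(drafts[i])}
        reserved |= {(drafts[i][-1], t)
                     for t in range(len(drafts[i]), HORIZON + 1)}
    remaining -= indep
print({i: len(p) - 1 for i, p in paths.items()})   # per-agent path costs
```

In the paper's variant, the per-agent `plan` calls inside each round are independent and hence parallelizable; this sequential sketch only shows the round structure.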
|
2501.19220
|
Analysis and predictability of centrality measures in competition
networks
|
cs.SI
|
The Common Out-Neighbor (or CON) score quantifies shared influence through
outgoing links in competitive contexts. A dynamic analysis of competition
networks reveals the CON score as a powerful predictor of node rankings.
Defined in first-order and second-order forms, the CON score captures both
direct and indirect competitive interactions, offering a comprehensive metric
for evaluating node influence. Using datasets from Survivor, Chess.com, and
Dota 2 online gaming competitions, directed competition networks are
constructed, and the dynamic CON score is integrated into supervised machine
learning models. Empirical results show that the CON score consistently
outperforms traditional centrality measures such as PageRank, closeness, and
betweenness centrality in classification tasks.
By integrating dynamic centrality measures with machine learning, our
proposed methodology accurately predicts outcomes in competition networks. The
findings underline the CON score's robustness as a feature in node
classification, offering a significant advancement in understanding and
analyzing competitive interactions.
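The first-order form can be illustrated on a toy win graph (hypothetical data; the paper's second-order form and exact definition are not reproduced here): two competitors' CON score counts the out-neighbors, e.g. defeated opponents, that they share.

```python
# Hypothetical toy competition network: u -> set of players u has beaten.
wins = {
    "alice": {"bob", "carol", "dan"},
    "bob":   {"carol", "dan"},
    "carol": {"dan"},
    "dan":   set(),
}

def con1(u, v):
    # first-order Common Out-Neighbor score: shared outgoing links
    return len(wins[u] & wins[v])

print(con1("alice", "bob"))   # both beat carol and dan -> 2
```

Scores like this, computed per snapshot of the evolving network, are the kind of dynamic features the abstract feeds into supervised ranking models.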
|
2501.19223
|
Through the Looking Glass: LLM-Based Analysis of AR/VR Android
Applications Privacy Policies
|
cs.CR cs.LG
|
This paper comprehensively analyzes privacy policies in
AR/VR applications, leveraging BERT, a state-of-the-art text classification
model, to evaluate the clarity and thoroughness of these policies. By comparing
the privacy policies of AR/VR applications with those of free and premium
websites, this study provides a broad perspective on the current state of
privacy practices within the AR/VR industry. Our findings indicate that AR/VR
applications generally offer a higher percentage of positive segments than free
content but lower than premium websites. The analysis of highlighted segments
and words revealed that AR/VR applications strategically emphasize critical
privacy practices and key terms, enhancing the clarity and effectiveness of
their privacy policies.
|
2501.19224
|
Fast exact recovery of noisy matrix from few entries: the infinity norm
approach
|
math.ST cs.LG math.CO math.PR stat.AP stat.TH
|
The matrix recovery (completion) problem, a central problem in data science
and theoretical computer science, is to recover a matrix $A$ from a relatively
small sample of entries.
While such a task is impossible in general, it has been shown that one can
recover $A$ exactly in polynomial time, with high probability, from a random
subset of entries, under three (basic and necessary) assumptions: (1) the rank
of $A$ is very small compared to its dimensions (low rank), (2) $A$ has
delocalized singular vectors (incoherence), and (3) the sample size is
sufficiently large.
There are many different algorithms for the task, including convex
optimization by Candes, Tao and Recht (2009), alternating projection by Hardt
and Wooters (2014) and low rank approximation with gradient descent by
Keshavan, Montanari and Oh (2009, 2010).
In applications, it is more realistic to assume that data is noisy. In this
case, these approaches provide an approximate recovery with small root mean
square error. However, it is hard to transform such approximate recovery to an
exact one.
Recently, results by Abbe et al. (2017) and Bhardwaj et al. (2023) concerning
approximation in the infinity norm showed that we can achieve exact recovery
even in the noisy case, given that the ground matrix has bounded precision.
Beyond the three basic assumptions above, they required either the condition
number of $A$ is small (Abbe et al.) or the gap between consecutive singular
values is large (Bhardwaj et al.).
In this paper, we remove these extra spectral assumptions. As a result, we
obtain a simple algorithm for exact recovery in the noisy case, under only
three basic assumptions. This is the first such algorithm. To analyse the
algorithm, we introduce a contour integration argument which is totally
different from all previous methods and may be of independent interest.
|
2501.19227
|
Integrating Semi-Supervised and Active Learning for Semantic
Segmentation
|
cs.CV cs.AI
|
In this paper, we propose a novel active learning approach integrated with an
improved semi-supervised learning framework to reduce the cost of manual
annotation and enhance model performance. Our proposed approach effectively
leverages both the labelled data selected through active learning and the
unlabelled data excluded from the selection process. The proposed active
learning approach pinpoints areas where the pseudo-labels are likely to be
inaccurate. Then, an automatic and efficient pseudo-label auto-refinement
(PLAR) module is proposed to correct pixels with potentially erroneous
pseudo-labels by comparing their feature representations with those of labelled
regions. This approach operates without increasing the labelling budget and is
based on the cluster assumption, which states that pixels belonging to the same
class should exhibit similar representations in feature space. Furthermore,
manual labelling is only applied to the most difficult and uncertain areas in
unlabelled data, where insufficient information prevents the PLAR module from
making a decision. We evaluated the proposed hybrid semi-supervised active
learning framework on two benchmark datasets, one from natural and the other
from remote sensing imagery domains. In both cases, it outperformed
state-of-the-art methods in the semantic segmentation task.
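The cluster assumption behind the refinement step can be illustrated with a nearest-prototype sketch (hypothetical features and class prototypes; PLAR's actual mechanism is more involved): pixels are reassigned to the class whose labelled-region feature prototype is closest.

```python
import numpy as np

rng = np.random.default_rng(0)
# Mean feature of labelled regions per class (assumed prototypes).
protos = {0: np.array([0.0, 0.0]), 1: np.array([1.0, 1.0])}
feats = rng.normal(0.0, 0.1, (5, 2)) + 1.0   # five pixels clustered near class 1
pseudo = np.array([0, 0, 1, 1, 0])           # noisy pseudo-labels

# Relabel each pixel to the class of its nearest prototype in feature space.
refined = np.array([
    min(protos, key=lambda c: np.linalg.norm(f - protos[c])) for f in feats
])
print(int((refined != pseudo).sum()))        # 3 pseudo-labels corrected
```

No extra annotation budget is consumed: the correction uses only feature distances to already-labelled regions, which is the property the abstract emphasizes.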
|
2501.19232
|
A Zero-Shot Generalization Framework for LLM-Driven Cross-Domain
Sequential Recommendation
|
cs.IR cs.AI
|
Zero-shot cross-domain sequential recommendation (ZCDSR) enables predictions
in unseen domains without the need for additional training or fine-tuning,
making it particularly valuable in data-sparse environments where traditional
models struggle. Recent advancements in large language models (LLMs) have
greatly improved ZCDSR by leveraging rich pretrained representations to
facilitate cross-domain knowledge transfer. However, a key challenge persists:
domain semantic bias, which arises from variations in vocabulary and content
focus across domains. This misalignment leads to inconsistencies in item
embeddings and hinders generalization.
To address this issue, we propose a novel framework designed to enhance
LLM-based ZCDSR by improving cross-domain alignment at both the item and
sequential levels. At the item level, we introduce a generalization loss that
promotes inter-domain compactness by aligning embeddings of similar items
across domains while maintaining intra-domain diversity to preserve unique item
characteristics. This prevents embeddings from becoming overly generic while
ensuring effective transferability. At the sequential level, we develop a
method for transferring user behavioral patterns by clustering user sequences
in the source domain and applying attention-based aggregation for target domain
inference. This dynamic adaptation of user embeddings allows effective
zero-shot recommendations without requiring target-domain interactions.
Comprehensive experiments across multiple datasets and domains demonstrate
that our framework significantly improves sequential recommendation performance
in the ZCDSR setting. By mitigating domain bias and enhancing the
transferability of sequential patterns, our method provides a scalable and
robust approach for achieving more effective zero-shot recommendations across
domains.
|
2501.19234
|
Hourly Short Term Load Forecasting for Residential Buildings and Energy
Communities
|
cs.LG
|
Electricity load consumption may be extremely complex in terms of profile
patterns, as it depends on a wide range of human factors, and it is often
correlated with several exogenous factors, such as the availability of
renewable energy and the weather conditions. The first goal of this paper is to
investigate the performance of a large selection of different types of
forecasting models in predicting the electricity load consumption within the
short time horizon of a day or few hours ahead. Such forecasts may be rather
useful for the energy management of individual residential buildings or small
energy communities. In particular, we introduce persistence models, standard
auto-regressive-based machine learning models, and more advanced deep learning
models. The second goal of this paper is to introduce two alternative modeling
approaches that are simpler in structure while they take into account domain
specific knowledge, as compared to the previously mentioned black-box modeling
techniques. In particular, we consider the persistence-based auto-regressive
model (PAR) and the seasonal persistence-based regressive model (SPR), previously
introduced by the authors. In this paper, we specifically tailor these models
to accommodate the generation of hourly forecasts. The introduced models and
the induced comparative analysis extend the authors' prior work, which was
restricted to day-ahead forecasts. We observed a 15-30% increase in the
prediction accuracy of the newly introduced hourly-based forecasting models
over existing approaches.
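The persistence idea underlying these models can be sketched in a few lines. This is an illustrative toy, not the paper's actual PAR/SPR formulations: a plain persistence model repeats the last observed value, and a seasonal persistence model repeats the value from one seasonal period (24 hours) earlier.

```python
# Minimal sketch of persistence-style hourly load forecasting.
# The paper's PAR/SPR models are richer; this only illustrates the core idea.

def persistence_forecast(history, horizon):
    """Repeat the last observed load for each step of the horizon."""
    return [history[-1]] * horizon

def seasonal_persistence_forecast(history, horizon, period=24):
    """Repeat the load observed one period (e.g. 24 hours) earlier."""
    return [history[-period + h] for h in range(horizon)]

# Three days of synthetic hourly load with an evening peak.
hourly_load = [1.0 + 0.5 * (h % 24 >= 18) for h in range(72)]
print(persistence_forecast(hourly_load, 3))          # repeats the 23:00 value
print(seasonal_persistence_forecast(hourly_load, 3)) # repeats yesterday's 00:00-02:00
```

Despite their simplicity, such baselines are a standard yardstick for short-term load forecasting because residential load is strongly periodic.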
|
2501.19237
|
DINAMO: Dynamic and INterpretable Anomaly MOnitoring for Large-Scale
Particle Physics Experiments
|
hep-ex cs.LG
|
Ensuring reliable data collection in large-scale particle physics experiments
demands Data Quality Monitoring (DQM) procedures to detect possible detector
malfunctions and preserve data integrity. Traditionally, this
resource-intensive task has been handled by human shifters who struggle with
frequent changes in operational conditions. We present novel, interpretable,
robust, and scalable DQM algorithms designed to automate anomaly detection in
time-dependent settings. Our approach constructs evolving histogram templates
with built-in uncertainties, featuring both a statistical variant - extending
the classical Exponentially Weighted Moving Average (EWMA) - and a machine
learning (ML)-enhanced version that leverages a transformer encoder for
improved adaptability. Experimental validations on synthetic datasets
demonstrate the high accuracy, adaptability, and interpretability of these
methods, with the statistical variant being commissioned in the LHCb experiment
at the Large Hadron Collider, underscoring its real-world impact. The code used
in this study is available at https://github.com/ArseniiGav/DINAMO.
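The statistical variant's core idea can be sketched as follows. This is a hedged illustration, not DINAMO's exact algorithm: an EWMA histogram template with a per-bin uncertainty, used to flag histograms whose largest per-bin deviation (in units of the template's spread) is anomalously large. The pull definition and thresholds here are illustrative.

```python
# Toy EWMA histogram template with per-bin uncertainty for anomaly flagging.
# The update rules and pull definition are illustrative stand-ins.

class EWMATemplate:
    def __init__(self, n_bins, alpha=0.1):
        self.alpha = alpha
        self.mean = [0.0] * n_bins
        self.var = [1.0] * n_bins   # crude per-bin variance estimate

    def update(self, hist):
        a = self.alpha
        for i, x in enumerate(hist):
            d = x - self.mean[i]
            self.mean[i] += a * d
            self.var[i] = (1 - a) * (self.var[i] + a * d * d)

    def max_pull(self, hist):
        """Largest per-bin deviation in units of the template's std."""
        return max(abs(x - m) / (v ** 0.5 + 1e-12)
                   for x, m, v in zip(hist, self.mean, self.var))

tmpl = EWMATemplate(n_bins=4)
for _ in range(200):
    tmpl.update([10.0, 20.0, 20.0, 10.0])       # nominal runs
ok = tmpl.max_pull([10.0, 20.0, 20.0, 10.0])    # nominal histogram
bad = tmpl.max_pull([10.0, 20.0, 40.0, 10.0])   # one bin doubled
print(ok < bad)
```

Because the template evolves with each run, slow drifts in operating conditions are absorbed while abrupt per-bin deviations remain detectable.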
|
2501.19239
|
Multi-agent Multi-armed Bandit with Fully Heavy-tailed Dynamics
|
cs.LG stat.ML
|
We study decentralized multi-agent multi-armed bandits in fully heavy-tailed
settings, where clients communicate over sparse random graphs with heavy-tailed
degree distributions and observe heavy-tailed (homogeneous or heterogeneous)
reward distributions with potentially infinite variance. The objective is to
maximize system performance by pulling the globally optimal arm with the
highest global reward mean across all clients. We are the first to address such
fully heavy-tailed scenarios, which capture the dynamics and challenges in
communication and inference among multiple clients in real-world systems. In
homogeneous settings, our algorithmic framework exploits hub-like structures
unique to heavy-tailed graphs, allowing clients to aggregate rewards and reduce
noise via hub estimators when constructing UCB indices; under $M$ clients and
degree distributions with power-law index $\alpha > 1$, our algorithm attains a
regret bound of (almost) order $O(M^{1 -\frac{1}{\alpha}} \log{T})$. Under
heterogeneous rewards, clients synchronize by communicating with neighbors,
aggregating exchanged estimators in UCB indices; with our newly established
information delay bounds on sparse random graphs, we prove a regret bound of
$O(M \log{T})$. Our results improve upon existing work, which only addresses
time-invariant connected graphs, or light-tailed dynamics in dense graphs and
rewards.
|
2501.19241
|
Emancipatory Information Retrieval
|
cs.IR cs.HC
|
Our world today is facing a confluence of several mutually reinforcing crises
each of which intersects with concerns of social justice and emancipation. This
paper is a provocation for the role of computer-mediated information access in
our emancipatory struggles. We define emancipatory information retrieval as the
study and development of information access methods that challenge various
forms of human oppression, situating its activities within broader
collective emancipatory praxis. The term "emancipatory" here signifies the
moral concerns of universal humanization of all peoples and the elimination of
oppression to create the conditions under which we can collectively flourish.
To develop an emancipatory research agenda for information retrieval (IR), in
this paper we speculate about the practices that the community can adopt,
enumerate some of the projects that the field should undertake, and discuss
provocations to spark new ideas and directions for research. We challenge the
field of IR research to embrace humanistic values and commit to universal
emancipation and social justice. We also invite scholars from fields such as
human-computer interaction, information sciences, media studies, design, social
sciences, humanities, democratic theory, and critical theory, as well as legal
and policy experts, civil rights and social justice activists, and artists to
join us in realizing this transformation. In this process, we must both imagine
post-oppressive worlds, and reimagine the role of IR in that world and in the
journey that leads us there.
|
2501.19243
|
Accelerating Diffusion Transformer via Error-Optimized Cache
|
cs.CV
|
Diffusion Transformer (DiT) is a crucial method for content generation.
However, its sampling process is time-consuming. Many studies have attempted to
use caching to reduce sampling time. Existing caching methods
accelerate generation by reusing DiT features from the previous time step and
skipping calculations in the next, but they tend to locate and cache low-error
modules without focusing on reducing caching-induced errors, resulting in a
sharp decline in generated content quality when increasing caching intensity.
To solve this problem, we propose the Error-Optimized Cache (EOC). This method
introduces three key improvements: (1) Prior knowledge extraction: Extract and
process the caching differences; (2) A judgment method for cache optimization:
Determine whether certain caching steps need to be optimized; (3) Cache
optimization: reduce caching errors. Experiments show that this algorithm
significantly reduces the error accumulation caused by caching (especially
over-caching). On the ImageNet dataset, without significantly increasing the
computational burden, this method improves the quality of the generated images
under the over-caching, rule-based, and training-based methods. Specifically,
the Fr\'echet Inception Distance (FID) values are improved as follows: from
6.857 to 5.821, from 3.870 to 3.692, and from 3.539 to 3.451, respectively.
|
2501.19245
|
SHARPIE: A Modular Framework for Reinforcement Learning and Human-AI
Interaction Experiments
|
cs.AI cs.HC
|
Reinforcement learning (RL) offers a general approach for modeling and
training AI agents, including human-AI interaction scenarios. In this paper, we
propose SHARPIE (Shared Human-AI Reinforcement Learning Platform for
Interactive Experiments) to address the need for a generic framework to support
experiments with RL agents and humans. Its modular design consists of a
versatile wrapper for RL environments and algorithm libraries, a
participant-facing web interface, logging utilities, and deployment on popular
cloud and participant recruitment platforms. It empowers researchers to study a
wide variety of research questions related to the interaction between humans
and RL agents, including those related to interactive reward specification and
learning, learning from human feedback, action delegation, preference
elicitation, user-modeling, and human-AI teaming. The platform is based on a
generic interface for human-RL interactions that aims to standardize the field
of study on RL in human contexts.
|
2501.19247
|
Clustering in hyperbolic balls
|
cs.LG
|
The idea of representing data in negatively curved manifolds has recently
attracted considerable attention and given rise to a new research direction
named {\it hyperbolic machine learning} (ML). In order to unveil the full
potential of this new paradigm, efficient techniques for data analysis and
statistical modeling in hyperbolic spaces are necessary. In the present paper,
a rigorous mathematical framework for clustering in hyperbolic spaces is
established. First, we introduce $k$-means clustering in hyperbolic balls,
based on a novel definition of the barycenter. Second, we present the
expectation-maximization (EM) algorithm for learning mixtures of novel
probability distributions in hyperbolic balls. In such a way we lay the
foundation of unsupervised learning in hyperbolic spaces.
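The assignment step of hyperbolic $k$-means can be sketched with the standard Poincaré-ball distance. This is a hedged illustration only: the paper's novel barycenter definition is not reproduced here, and the example stops at the cluster-assignment step.

```python
import math

# Cluster assignment in the Poincare ball using the standard hyperbolic
# distance d(u,v) = arcosh(1 + 2||u-v||^2 / ((1-||u||^2)(1-||v||^2))).
# The barycenter update (the paper's novelty) is omitted.

def poincare_dist(u, v):
    uu = sum(x * x for x in u)
    vv = sum(x * x for x in v)
    duv = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.acosh(1.0 + 2.0 * duv / ((1.0 - uu) * (1.0 - vv)))

def assign(points, centers):
    """Assign each point to its nearest center in hyperbolic distance."""
    return [min(range(len(centers)), key=lambda k: poincare_dist(p, centers[k]))
            for p in points]

points = [(0.1, 0.0), (0.15, 0.05), (-0.6, 0.0), (-0.55, -0.05)]
centers = [(0.1, 0.0), (-0.6, 0.0)]
print(assign(points, centers))  # two tight groups, one per center
```

Note that points must lie strictly inside the unit ball for the distance to be defined; points near the boundary are exponentially far from the origin, which is precisely what makes the geometry attractive for hierarchical data.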
|
2501.19252
|
Inference-Time Text-to-Video Alignment with Diffusion Latent Beam Search
|
cs.CV
|
The remarkable progress in text-to-video diffusion models enables
photorealistic generations, although the contents of the generated video often
include unnatural movement or deformation, reverse playback, and motionless
scenes. Recently, an alignment problem has attracted huge attention, where we
steer the output of diffusion models based on some measure of the goodness of
the content. Because there is considerable room for improvement of perceptual
quality along the frame direction, we must determine which metrics to optimize
and how to optimize them in video generation. In this paper,
we propose diffusion latent beam search with lookahead estimator, which can
select better diffusion latent to maximize a given alignment reward, at
inference time. We then point out that the improvement of perceptual video
quality considering the alignment to prompts requires reward calibration by
weighting existing metrics. When outputs are evaluated using vision-language
models as a proxy for humans, many previous metrics for quantifying the
naturalness of video do not always correlate with these evaluations and also
depend on the degree of dynamic description in the evaluation prompts. We
demonstrate that our method
improves the perceptual quality based on the calibrated reward, without model
parameter update, and outputs the best generation compared to greedy search and
best-of-N sampling. We provide practical guidelines on which axes, among search
budget, lookahead steps for reward estimate, and denoising steps, in the
reverse diffusion process, we should allocate the inference-time computation.
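The beam-search skeleton can be sketched abstractly. Everything below is an illustrative stand-in: the denoiser is a toy contraction, the reward is a toy alignment score standing in for the lookahead estimator, and none of the names correspond to the paper's components. The structure is what matters: expand each beam latent with several candidate noise draws, score candidates, and keep the top-k.

```python
import random

# Toy beam search over diffusion latents: expand, score, keep top-k.
# denoise_step and reward are illustrative stand-ins.

def denoise_step(latent, noise):
    return 0.9 * latent + 0.1 * noise        # stand-in for one reverse step

def reward(latent):
    return -abs(latent - 1.0)                # toy alignment reward, peak at 1.0

def latent_beam_search(init_latents, steps, beam_width, n_candidates, rng):
    beams = list(init_latents)
    for _ in range(steps):
        # Expand every beam latent with several candidate noise draws.
        candidates = [denoise_step(z, rng.gauss(1.0, 0.5))
                      for z in beams for _ in range(n_candidates)]
        # Score with the (lookahead) reward and keep the top beam_width.
        candidates.sort(key=reward, reverse=True)
        beams = candidates[:beam_width]
    return max(beams, key=reward)

rng = random.Random(0)
best = latent_beam_search([0.0, 2.0], steps=30, beam_width=2,
                          n_candidates=4, rng=rng)
print(abs(best - 1.0) < 0.2)
```

Greedy search is the special case beam_width = 1 with no candidate expansion, and best-of-N sampling is the case of one step with N candidates, which situates the method between the two baselines compared in the paper.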
|
2501.19254
|
Linear $Q$-Learning Does Not Diverge: Convergence Rates to a Bounded Set
|
cs.LG cs.AI stat.ML
|
$Q$-learning is one of the most fundamental reinforcement learning
algorithms. Previously, it was widely believed that $Q$-learning with linear
function approximation (i.e., linear $Q$-learning) suffers from possible
divergence. This paper instead establishes the first $L^2$ convergence rate of
linear $Q$-learning to a bounded set. Notably, we do not make any modification
to the original linear $Q$-learning algorithm, do not make any Bellman
completeness assumption, and do not make any near-optimality assumption on the
behavior policy. All we need is an $\epsilon$-softmax behavior policy with an
adaptive temperature. The key to our analysis is the general result of
stochastic approximations under Markovian noise with fast-changing transition
functions. As a side product, we also use this general result to establish the
$L^2$ convergence rate of tabular $Q$-learning with an $\epsilon$-softmax
behavior policy, for which we rely on a novel pseudo-contraction property of
the weighted Bellman optimality operator.
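The behavior policy the analysis requires can be sketched concretely. This is a hedged illustration: with probability $\epsilon$ the agent acts uniformly, otherwise it samples from a softmax over the Q-values; the temperature schedule shown is illustrative, not the paper's specific adaptive choice.

```python
import math
import random

# Epsilon-softmax behavior policy with an (illustrative) adaptive temperature.

def eps_softmax_action(q_values, epsilon, temperature, rng):
    n = len(q_values)
    if rng.random() < epsilon:
        return rng.randrange(n)               # uniform exploration
    logits = [q / temperature for q in q_values]
    m = max(logits)                           # stabilized softmax
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    r, acc = rng.random() * z, 0.0
    for a, e in enumerate(exps):
        acc += e
        if r <= acc:
            return a
    return n - 1

rng = random.Random(0)
q = [0.0, 5.0, 1.0]
counts = [0, 0, 0]
for t in range(1, 2001):
    temperature = 1.0 / math.log(t + 1.0)     # illustrative cooling schedule
    a = eps_softmax_action(q, epsilon=0.1, temperature=temperature, rng=rng)
    counts[a] += 1
print(counts)  # the high-value action dominates, but all actions keep mass
```

The key property is that every action retains probability at least $\epsilon / n$ at all times, while the softmax increasingly concentrates on greedy actions as the temperature cools.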
|
2501.19255
|
ContextFormer: Redefining Efficiency in Semantic Segmentation
|
cs.CV
|
Semantic segmentation assigns labels to pixels in images, a critical yet
challenging task in computer vision. Convolutional methods, although capturing
local dependencies well, struggle with long-range relationships. Vision
Transformers (ViTs) excel in global context capture but are hindered by high
computational demands, especially for high-resolution inputs. Most research
optimizes the encoder architecture, leaving the bottleneck underexplored - a
key area for enhancing performance and efficiency. We propose ContextFormer, a
hybrid framework leveraging the strengths of CNNs and ViTs in the bottleneck to
balance efficiency, accuracy, and robustness for real-time semantic
segmentation. The framework's efficiency is driven by three synergistic
modules: the Token Pyramid Extraction Module (TPEM) for hierarchical
multi-scale representation, the Transformer and Modulating DepthwiseConv
(Trans-MDC) block for dynamic scale-aware feature modeling, and the Feature
Merging Module (FMM) for robust integration with enhanced spatial and
contextual consistency. Extensive experiments on ADE20K, Pascal Context,
CityScapes, and COCO-Stuff datasets show ContextFormer significantly
outperforms existing models, achieving state-of-the-art mIoU scores, setting a
new benchmark for efficiency and performance. The codes will be made publicly
available.
|
2501.19256
|
Objective Metrics for Human-Subjects Evaluation in Explainable
Reinforcement Learning
|
cs.AI cs.HC cs.RO
|
Explanation is a fundamentally human process. Understanding the goal and
audience of the explanation is vital, yet existing work on explainable
reinforcement learning (XRL) routinely does not consult humans in their
evaluations. Even when they do, they routinely resort to subjective metrics,
such as confidence or understanding, that can only inform researchers of users'
opinions, not their practical effectiveness for a given problem. This paper
calls on researchers to use objective human metrics for explanation evaluations
based on observable and actionable behaviour to build more reproducible,
comparable, and epistemically grounded research. To this end, we curate,
describe, and compare several objective evaluation methodologies for applying
explanations to debugging agent behaviour and supporting human-agent teaming,
illustrating our proposed methods using a novel grid-based environment. We
discuss how subjective and objective metrics complement each other to provide
holistic validation and how future work needs to utilise standardised
benchmarks for testing to enable greater comparisons between research.
|
2501.19258
|
VisualSpeech: Enhance Prosody with Visual Context in TTS
|
cs.CL
|
Text-to-Speech (TTS) synthesis faces the inherent challenge of producing
multiple speech outputs with varying prosody from a single text input. While
previous research has addressed this by predicting prosodic information from
both text and speech, additional contextual information, such as visual
features, remains underutilized. This paper investigates the potential of
integrating visual context to enhance prosody prediction. We propose a novel
model, VisualSpeech, which incorporates both visual and textual information for
improved prosody generation. Empirical results demonstrate that visual features
provide valuable prosodic cues beyond the textual input, significantly
enhancing the naturalness and accuracy of the synthesized speech. Audio samples
are available at https://ariameetgit.github.io/VISUALSPEECH-SAMPLES/.
|
2501.19259
|
Neuro-LIFT: A Neuromorphic, LLM-based Interactive Framework for
Autonomous Drone FlighT at the Edge
|
cs.RO cs.CV cs.LG cs.NE cs.SY eess.SY
|
The integration of human-intuitive interactions into autonomous systems has
been limited. Traditional Natural Language Processing (NLP) systems struggle
with context and intent understanding, severely restricting human-robot
interaction. Recent advancements in Large Language Models (LLMs) have
transformed this dynamic, allowing for intuitive and high-level communication
through speech and text, and bridging the gap between human commands and
robotic actions. Additionally, autonomous navigation has emerged as a central
focus in robotics research, with artificial intelligence (AI) increasingly
being leveraged to enhance these systems. However, existing AI-based navigation
algorithms face significant challenges in latency-critical tasks where rapid
decision-making is essential. Traditional frame-based vision systems, while
effective for high-level decision-making, suffer from high energy consumption
and latency, limiting their applicability in real-time scenarios. Neuromorphic
vision systems, combining event-based cameras and spiking neural networks
(SNNs), offer a promising alternative by enabling energy-efficient, low-latency
navigation. Despite their potential, real-world implementations of these
systems, particularly on physical platforms such as drones, remain scarce. In
this work, we present Neuro-LIFT, a real-time neuromorphic navigation framework
implemented on a Parrot Bebop2 quadrotor. Leveraging an LLM for natural
language processing, Neuro-LIFT translates human speech into high-level
planning commands which are then autonomously executed using event-based
neuromorphic vision and physics-driven planning. Our framework demonstrates its
capabilities in navigating in a dynamic environment, avoiding obstacles, and
adapting to human instructions in real-time.
|
2501.19264
|
mFollowIR: a Multilingual Benchmark for Instruction Following in
Retrieval
|
cs.IR cs.CL cs.LG
|
Retrieval systems generally focus on web-style queries that are short and
underspecified. However, advances in language models have facilitated the
nascent rise of retrieval models that can understand more complex queries with
diverse intents. Yet these efforts have focused exclusively on English;
therefore, we do not yet understand how they work across languages. We
introduce mFollowIR, a multilingual benchmark for measuring
instruction-following ability in retrieval models. mFollowIR builds upon the
TREC NeuCLIR narratives (or instructions) that span three diverse languages
(Russian, Chinese, Persian) giving both query and instruction to the retrieval
models. We make small changes to the narratives and isolate how well retrieval
models can follow these nuanced changes. We present results for both
multilingual (XX-XX) and cross-lingual (En-XX) performance. We see strong
cross-lingual performance with English-based retrievers that were trained using
instructions, but find a notable drop in performance in the multilingual
setting, indicating that more work is needed in developing data for
instruction-based multilingual retrievers.
|
2501.19265
|
Medical Semantic Segmentation with Diffusion Pretrain
|
cs.CV cs.LG
|
Recent advances in deep learning have shown that learning robust feature
representations is critical for the success of many computer vision tasks,
including medical image segmentation. In particular, both transformer and
convolutional-based architectures have benefited from leveraging pretext tasks
for pretraining. However, the adoption of pretext tasks in 3D medical imaging
has been less explored and remains a challenge, especially in the context of
learning generalizable feature representations.
We propose a novel pretraining strategy using diffusion models with
anatomical guidance, tailored to the intricacies of 3D medical image data. We
introduce an auxiliary diffusion process to pretrain a model that produces
generalizable feature representations, useful for a variety of downstream
segmentation tasks. We employ an additional model that predicts 3D universal
body-part coordinates, providing guidance during the diffusion process and
improving spatial awareness in generated representations. This approach not
only aids in resolving localization inaccuracies but also enriches the model's
ability to understand complex anatomical structures.
Empirical validation on a 13-class organ segmentation task demonstrates the
effectiveness of our pretraining technique. It surpasses existing restorative
pretraining methods in 3D medical image segmentation by $7.5\%$, and is
competitive with the state-of-the-art contrastive pretraining approach,
achieving an average Dice coefficient of 67.8 in a non-linear evaluation
scenario.
|
2501.19266
|
Jackpot! Alignment as a Maximal Lottery
|
cs.AI cs.LG econ.TH
|
Reinforcement Learning from Human Feedback (RLHF), the standard for aligning
Large Language Models (LLMs) with human values, is known to fail to satisfy
properties that are intuitively desirable, such as respecting the preferences
of the majority \cite{ge2024axioms}. To overcome these issues, we propose the
use of a probabilistic Social Choice rule called \emph{maximal lotteries} as a
replacement for RLHF. We show that a family of alignment techniques, namely
Nash Learning from Human Feedback (NLHF) \cite{munos2023nash} and variants,
approximate maximal lottery outcomes and thus inherit its beneficial
properties.
We confirm experimentally that our proposed methodology handles situations
that arise when working with preferences more robustly than standard RLHF,
including supporting the preferences of the majority, providing principled ways
of handling non-transitivities in the preference data, and robustness to
irrelevant alternatives. This results in systems that better incorporate human
values and respect human intentions.
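The maximal-lottery concept can be made concrete. A maximal lottery is an optimal mixed strategy of the symmetric zero-sum game whose payoff matrix is the majority-margin matrix. The sketch below approximates it by fictitious play, which converges in zero-sum games; a real implementation would solve the linear program exactly, and all details here are illustrative.

```python
# Approximate a maximal lottery by fictitious play on the majority-margin
# matrix M[i][j] = (preferences for i over j) - (preferences for j over i).

def maximal_lottery_fp(margin, iters=20000):
    n = len(margin)
    counts = [0] * n
    counts[0] = 1
    for _ in range(iters):
        # Best response to the opponent's empirical mixture so far.
        payoff = [sum(margin[i][j] * counts[j] for j in range(n))
                  for i in range(n)]
        counts[max(range(n), key=lambda i: payoff[i])] += 1
    total = sum(counts)
    return [c / total for c in counts]

# Condorcet cycle a > b > c > a with equal margins: the maximal lottery is
# uniform, whereas any deterministic rule must break the cycle arbitrarily.
margin = [[0, 1, -1],
          [-1, 0, 1],
          [1, -1, 0]]
lottery = maximal_lottery_fp(margin)
print(max(abs(p - 1 / 3) for p in lottery) < 0.05)
```

The cycle example shows the robustness to non-transitive preference data mentioned above: the randomized outcome treats the three cyclically ranked alternatives symmetrically.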
|
2501.19267
|
Transformer-Based Financial Fraud Detection with Cloud-Optimized
Real-Time Streaming
|
cs.CE
|
As the financial industry becomes more interconnected and reliant on digital
systems, fraud detection systems must evolve to meet growing threats.
Cloud-enabled Transformer models present a transformative opportunity to
address these challenges. By leveraging the scalability, flexibility, and
advanced AI capabilities of cloud platforms, companies can deploy fraud
detection solutions that adapt to real-time data patterns and proactively
respond to evolving threats. Using a graph self-attention Transformer neural
network module, we can directly extract gang fraud features from the
transaction network without complicated feature engineering.
Finally, the fraud prediction network is combined to optimize the topological
pattern and the temporal transaction pattern to realize the high-precision
detection of fraudulent transactions. The results of antifraud experiments on
credit card transaction data show that the proposed model outperforms the 7
baseline models on all evaluation indicators: In the transaction fraud
detection task, the average precision (AP) increased by 20% and the area under
the ROC curve (AUC) increased by 2.7% on average compared with the benchmark
graph attention neural network (GAT), which verified the effectiveness of the
proposed model in the detection of credit card fraud transactions.
|
2501.19270
|
Imagine with the Teacher: Complete Shape in a Multi-View Distillation
Way
|
cs.CV
|
Point cloud completion aims to recover the complete 3D shape of an object
from a partial observation caused by occlusion, sensor limitations, noise,
etc. When some key semantic information is lost in the incomplete point cloud,
the neural network needs to infer the missing part from the input
information. Intuitively, we would apply an autoencoder architecture to solve
this kind of problem, which takes the incomplete point cloud as input and is
supervised by the ground truth. This process, which develops the model's
imagination from incomplete to complete shape, happens automatically in the
latent space. But the knowledge for mapping from incomplete to complete
remains opaque and could be further explored. Motivated by knowledge
distillation's teacher-student learning strategy, we design a knowledge
transfer approach for completing 3D shapes. In this work, we propose a novel
View Distillation Point Completion Network (VD-PCN), which solves the
completion problem via multi-view distillation. The design methodology fully
leverages the orderliness of 2D pixels, the flexibility of 2D processing, and
the power of 2D networks. Extensive evaluations on PCN, ShapeNet55/34, and MVP
datasets confirm the effectiveness of our design and knowledge transfer
strategy, both quantitatively and qualitatively. To facilitate ongoing
research, we will make our code publicly available.
|
2501.19271
|
Concept-Based Explainable Artificial Intelligence: Metrics and
Benchmarks
|
cs.AI cs.LG
|
Concept-based explanation methods, such as concept bottleneck models (CBMs),
aim to improve the interpretability of machine learning models by linking their
decisions to human-understandable concepts, under the critical assumption that
such concepts can be accurately attributed to the network's feature space.
However, this foundational assumption has not been rigorously validated, mainly
because the field lacks standardised metrics and benchmarks to assess the
existence and spatial alignment of such concepts. To address this, we propose
three metrics: the concept global importance metric, the concept existence
metric, and the concept location metric, along with a technique for visualising
concept activations, i.e., concept activation mapping. We benchmark post-hoc
CBMs to illustrate their capabilities and challenges. Through qualitative and
quantitative experiments, we demonstrate that, in many cases, even the most
important concepts determined by post-hoc CBMs are not present in input images;
moreover, when they are present, their saliency maps fail to align with the
expected regions by either activating across an entire object or misidentifying
relevant concept-specific regions. We analyse the root causes of these
limitations, such as the natural correlation of concepts. Our findings
underscore the need for more careful application of concept-based explanation
techniques, especially in settings where spatial interpretability is critical.
|
2501.19273
|
Minimax discrete distribution estimation with self-consumption
|
cs.IT math.IT math.ST stat.TH
|
Learning distributions from i.i.d. samples is a well-understood problem.
However, advances in generative machine learning prompt an interesting new,
non-i.i.d. setting: after receiving a certain number of samples, an estimated
distribution is fixed, and samples from this estimate are drawn and introduced
into the sample corpus, undifferentiated from real samples. Subsequent
generations of estimators now face contaminated environments, an effect
referred to in the machine learning literature as self-consumption. In this
paper, we study the effect of such contamination from previous estimates on the
minimax loss of multi-stage discrete distribution estimation.
In the data accumulation setting, where all batches of samples are available
for estimation, we provide minimax bounds for the expected $\ell_2^2$ and
$\ell_1$ losses at every stage. We show examples where our bounds match under
mild conditions, and there is a strict gap with the corresponding
oracle-assisted minimax loss where real and synthetic samples are
differentiated. We also provide a lower bound on the minimax loss in the data
replacement setting, where only the latest batch of samples is available, and
use it to find a lower bound for the worst-case loss for bounded estimate
trajectories.
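The self-consumption setting can be simulated in a few lines. This toy illustrates the data-accumulation setting described above, not the paper's minimax bounds: after each stage the empirical estimate is frozen, synthetic samples drawn from it are mixed into the corpus undifferentiated from real ones, and the next stage re-estimates from the accumulated corpus.

```python
import random

# Toy simulation of multi-stage discrete distribution estimation with
# self-consumption in the data-accumulation setting.

def empirical(samples, k):
    counts = [0] * k
    for s in samples:
        counts[s] += 1
    n = len(samples)
    return [c / n for c in counts]

def run_stages(true_p, n_per_stage, stages, rng):
    k = len(true_p)
    corpus, estimates = [], []
    for t in range(stages):
        # Only the first batch is real; later batches are drawn from the
        # previous (frozen) estimate and mixed in undifferentiated.
        source = true_p if t == 0 else estimates[-1]
        corpus += rng.choices(range(k), weights=source, k=n_per_stage)
        estimates.append(empirical(corpus, k))
    return estimates

rng = random.Random(0)
true_p = [0.5, 0.3, 0.2]
ests = run_stages(true_p, n_per_stage=5000, stages=4, rng=rng)
final_err = sum(abs(p - q) for p, q in zip(true_p, ests[-1]))
print(final_err < 0.1)
```

Swapping the `corpus +=` accumulation for a replacement of the latest batch yields the data-replacement setting, where the drift away from the true distribution compounds more quickly.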
|
2501.19274
|
GO: The Great Outdoors Multimodal Dataset
|
cs.RO
|
The Great Outdoors (GO) dataset is a multi-modal annotated data resource
aimed at advancing ground robotics research in unstructured environments. This
dataset provides the most comprehensive set of data modalities and annotations
compared to existing off-road datasets. In total, the GO dataset includes six
unique sensor types with high-quality semantic annotations and GPS traces to
support tasks such as semantic segmentation, object detection, and SLAM. The
diverse environmental conditions represented in the dataset present significant
real-world challenges that provide opportunities to develop more robust
solutions to support the continued advancement of field robotics, autonomous
exploration, and perception systems in natural environments. The dataset can be
downloaded at: https://www.unmannedlab.org/the-great-outdoors-dataset/
|
2501.19277
|
On Pareto Optimality for the Multinomial Logistic Bandit
|
stat.ML cs.LG
|
We provide a new online learning algorithm for tackling the Multinomial Logit
Bandit (MNL-Bandit) problem. Despite the challenges posed by the combinatorial
nature of the MNL model, we develop a novel Upper Confidence Bound (UCB)-based
method that achieves Pareto optimality by balancing regret minimization and
estimation error of the assortment revenues and the MNL parameters. We develop
theoretical guarantees characterizing the tradeoff between regret and
estimation error for the MNL-Bandit problem through information-theoretic
bounds, and propose a modified UCB algorithm that incorporates forced
exploration to improve parameter estimation accuracy while maintaining low
regret. Our analysis sheds critical insights into how to optimally balance the
collected revenues and the treatment estimation in dynamic assortment
optimization.
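The choice model at the heart of the MNL-Bandit can be sketched directly: offered an assortment $S$ with attraction parameters $v$, the customer picks item $i \in S$ with probability $v_i / (1 + \sum_{j \in S} v_j)$, with the residual mass going to no-purchase. The parameter and revenue values below are illustrative, and the bandit machinery (UCB indices, forced exploration) is omitted.

```python
# Multinomial Logit choice probabilities and expected assortment revenue.

def choice_probs(assortment, v):
    z = 1.0 + sum(v[i] for i in assortment)
    probs = {i: v[i] / z for i in assortment}
    probs[None] = 1.0 / z          # no-purchase option
    return probs

def expected_revenue(assortment, v, r):
    probs = choice_probs(assortment, v)
    return sum(probs[i] * r[i] for i in assortment)

v = {0: 1.0, 1: 0.5, 2: 0.25}      # attraction parameters (illustrative)
r = {0: 1.0, 1: 2.0, 2: 4.0}       # per-item revenues (illustrative)
best = max([(0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)],
           key=lambda s: expected_revenue(s, v, r))
print(sorted(best))
```

Note the combinatorial coupling mentioned in the abstract: adding a cheap, attractive item to an assortment cannibalizes demand for expensive items, so the revenue-optimal assortment is not simply the set of highest-revenue items.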
|
2501.19278
|
Pheromone-based Learning of Optimal Reasoning Paths
|
cs.CL
|
Large Language Models (LLMs) have demonstrated remarkable reasoning
capabilities through chain-of-thought prompting, yet discovering effective
reasoning methods for complex problems remains challenging due to the vast
space of possible intermediate steps. We introduce Ant Colony
Optimization-guided Tree of Thought (ACO-ToT), a novel algorithm that combines
ACO with LLMs to discover optimal reasoning paths for complex problems
efficiently. Drawing inspiration from Hebbian learning in neurological systems,
our method employs a collection of distinctly fine-tuned LLM "ants" to traverse
and lay pheromone trails through a centralized tree of thought, with each ant's
movement governed by a weighted combination of existing pheromone trails and
its own specialized expertise. The algorithm evaluates complete reasoning paths
using a mixture-of-experts-based scoring function, with pheromones reinforcing
productive reasoning paths across iterations. Experiments on three challenging
reasoning tasks (GSM8K, ARC-Challenge, and MATH) demonstrate that ACO-ToT
performs significantly better than existing chain-of-thought optimization
approaches, suggesting that incorporating biologically inspired collective
search mechanisms into LLM inference can substantially enhance reasoning
capabilities.
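The pheromone mechanism can be illustrated on a toy tree. Everything here is an illustrative stand-in for the LLM components: "ants" repeatedly walk a small tree of candidate reasoning steps, choosing children with probability proportional to pheromone weight, and complete paths deposit pheromone in proportion to a scoring function (standing in for the mixture-of-experts scorer).

```python
import random

# Toy ant-colony search over a tree of reasoning steps: weighted walks,
# pheromone evaporation, and score-proportional reinforcement.

def aco_paths(children, scores, n_ants=300, evaporation=0.1, rng=None):
    rng = rng or random.Random(0)
    pher = {(n, c): 1.0 for n in children for c in children[n]}
    for _ in range(n_ants):
        node, path = "root", []
        while node in children:                      # walk until a leaf
            opts = children[node]
            weights = [pher[(node, c)] for c in opts]
            node = rng.choices(opts, weights=weights)[0]
            path.append(node)
        score = scores[node]                          # leaf quality
        for e in pher:                                # evaporation
            pher[e] *= (1 - evaporation)
        prev = "root"
        for c in path:                                # reinforcement
            pher[(prev, c)] += score
            prev = c
    return pher

children = {"root": ["step_a", "step_b"],
            "step_a": ["leaf_good"], "step_b": ["leaf_bad"]}
scores = {"leaf_good": 1.0, "leaf_bad": 0.1}
pher = aco_paths(children, scores)
print(pher[("root", "step_a")] > pher[("root", "step_b")])
```

The positive-feedback loop is the point: productive branches accumulate pheromone, biasing later ants toward them while evaporation prevents early choices from locking in permanently.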
|
2501.19279
|
S-VOTE: Similarity-based Voting for Client Selection in Decentralized
Federated Learning
|
cs.LG cs.DC
|
Decentralized Federated Learning (DFL) enables collaborative,
privacy-preserving model training without relying on a central server. This
decentralized approach reduces bottlenecks and eliminates single points of
failure, enhancing scalability and resilience. However, DFL also introduces
challenges such as suboptimal models with non-IID data distributions, increased
communication overhead, and resource usage. Thus, this work proposes S-VOTE, a
voting-based client selection mechanism that optimizes resource usage and
enhances model performance in federations with non-IID data conditions. S-VOTE
considers an adaptive strategy for spontaneous local training that addresses
participation imbalance, allowing underutilized clients to contribute without
significantly increasing resource costs. Extensive experiments on benchmark
datasets demonstrate the effectiveness of S-VOTE. In detail, it reduces
communication costs by up to 21%, converges 4-6% faster, and improves
local performance by 9-17% compared to baseline methods in some configurations,
all while achieving a 14-24% reduction in energy consumption. These results
highlight the potential of S-VOTE to address DFL challenges in heterogeneous
environments.
|
2501.19281
|
Statistical Physics of Deep Neural Networks: Generalization Capability,
Beyond the Infinite Width, and Feature Learning
|
cond-mat.dis-nn cs.LG
|
Deep Neural Networks (DNNs) excel at many tasks, often rivaling or surpassing
human performance. Yet their internal processes remain elusive, frequently
described as "black boxes." While performance can be refined experimentally,
achieving a fundamental grasp of their inner workings is still a challenge.
Statistical Mechanics has long tackled computational problems, and this
thesis applies physics-based insights to understand DNNs via three
complementary approaches.
First, by averaging over data, we derive an asymptotic bound on
generalization that depends solely on the size of the last layer, rather than
on the total number of parameters -- revealing how deep architectures process
information differently across layers.
Second, adopting a data-dependent viewpoint, we explore a finite-width
thermodynamic limit beyond the infinite-width regime. This leads to: (i) a
closed-form expression for the generalization error in a finite-width
one-hidden-layer network (regression task); (ii) an approximate partition
function for deeper architectures; and (iii) a link between deep networks in
this thermodynamic limit and Student's t-processes.
Finally, from a task-explicit perspective, we present a preliminary analysis
of how DNNs interact with a controlled dataset, investigating whether they
truly internalize its structure -- collapsing to the teacher -- or merely
memorize it. By understanding when a network must learn data structure rather
than just memorize, it sheds light on fostering meaningful internal
representations.
In essence, this thesis leverages the synergy between Statistical Physics and
Machine Learning to illuminate the inner behavior of DNNs.
|
2501.19283
|
Application of Generative Adversarial Network (GAN) for Synthetic
Training Data Creation to improve performance of ANN Classifier for
extracting Built-Up pixels from Landsat Satellite Imagery
|
cs.CV cs.LG
|
Training a neural network for a pixel-based classification task using low
resolution Landsat images is difficult, as the training dataset is usually
small owing to the limited number of available pixels that represent a single
class without any mixing with other classes. Due to this scarcity of training
data, the neural network may not attain the expected level of accuracy. This
limitation could be overcome using a generative network that aims to generate
synthetic data having the same distribution as the sample data with which it is
trained. In this work, we propose a methodology for improving the
performance of an ANN classifier in identifying built-up pixels in a Landsat 7
image by developing a simple GAN architecture that generates
synthetic training pixels when trained on the original set of sample built-up
pixels. To ensure that the marginal and joint distributions of all the bands
corresponding to the generated and original set of pixels are
indistinguishable, non-parametric Kolmogorov Smirnov Test and Ball Divergence
based Equality of Distributions Test have been performed, respectively. We
observe that the overall accuracy and kappa coefficient of the ANN model
for built-up classification improve steadily from $0.9331$ to
$0.9983$ and from $0.8277$ to $0.9958$, respectively, as generated
sets of built-up pixels are added to the original one.
|
2501.19285
|
OneBatchPAM: A Fast and Frugal K-Medoids Algorithm
|
cs.LG
|
This paper proposes a novel k-medoids approximation algorithm to handle
large-scale datasets with reasonable computational time and memory complexity.
We develop a local-search algorithm that iteratively improves the medoid
selection based on the estimation of the k-medoids objective. A single batch of
size m << n provides the estimation, which reduces the required memory size and
the number of pairwise dissimilarity computations to O(mn), instead of the
O(n^2) required by most k-medoids baselines. We obtain theoretical results
highlighting that a batch of size m = O(log(n)) is sufficient to guarantee,
with high probability, the same performance as the original local-search
algorithm. Multiple experiments conducted on real datasets of various sizes and
dimensions show that our algorithm provides similar performances as
state-of-the-art methods such as FasterPAM and BanditPAM++ with a drastically
reduced running time.
|
2501.19287
|
Differentially Private In-context Learning via Sampling Few-shot Mixed
with Zero-shot Outputs
|
cs.LG
|
In-context learning (ICL) has shown promising improvement in downstream task
adaptation of LLMs by augmenting prompts with relevant input-output examples
(demonstrations). However, the ICL demonstrations can contain privacy-sensitive
information, which can be leaked and/or regurgitated by the LLM output.
Differential Privacy (DP), a widely adopted privacy safeguard, has emerged to
mitigate this privacy leakage, with recent work demonstrating strong
privacy-utility tradeoffs in classification tasks for ICL. However, generation
tasks for ICL are challenging due to the high-dimensional output space of
open-ended generation. To this end, we propose $\texttt{dps-mozo}$,
Differentially Private Sampling by Mixing One-shot with Zero-shot Outputs, a
decoding framework that generates DP text by sampling from the product of
multiple one-shot outputs mixed with a zero-shot output. This mixing
effectively reduces the amount of information that can be leaked by each
demonstration. By utilizing the inherent randomness in sampling from the mixed
distributions, we can achieve DP without adding noise, thereby improving the
privacy-utility tradeoff. Our experimental evaluations show $\texttt{dps-mozo}$
can achieve a strong privacy guarantee, $\epsilon=2$, with minimal utility
degradation compared to non-private few-shot learning: only a $\textbf{0.3}$%
ROUGE-L F1 score decrease on the SAMSum dataset with Gemma 2 2B.
|
2501.19297
|
Analysis of LLMs vs Human Experts in Requirements Engineering
|
cs.SE cs.AI
|
The majority of research around Large Language Models (LLM) application to
software development has been on the subject of code generation. There is
little literature on LLMs' impact on requirements engineering (RE), which deals
with the process of developing and verifying the system requirements. Within
RE, there is a subdiscipline of requirements elicitation, which is the practice
of discovering and documenting requirements for a system from users, customers,
and other stakeholders. In this analysis, we compare an LLM's ability to elicit
the requirements of a software system with that of a human expert in a
time-boxed and prompt-boxed study. We found LLM-generated requirements were
evaluated as more aligned (+1.12) than human-generated requirements with a
trend of being more complete (+10.2%). Conversely, we found users tended to
believe that solutions they perceived as more aligned had been generated by
human experts. Furthermore, LLM-generated documents scored higher, were
produced at 720x the speed, and cost, on average, only 0.06% that of a
human expert. Overall, these findings indicate that LLMs will play an
increasingly important role in requirements engineering by improving
requirements definitions, enabling more efficient resource allocation, and
reducing overall project timelines.
|