| id | title | categories | abstract |
|---|---|---|---|
2502.04746
|
On $(\mathcal{L},\mathcal{P})$-Twisted Generalized Reed-Solomon Codes
|
cs.IT math.IT
|
Twisted generalized Reed-Solomon (TGRS) codes extend generalized
Reed-Solomon (GRS) codes by adding specific twists and have attracted much
attention recently. This paper presents an in-depth and comprehensive
investigation of TGRS codes in their most general form using a universal
method. First, we propose a more precise definition of TGRS codes,
namely $(\mathcal{L},\mathcal{P})$-TGRS codes, and provide a concise necessary
and sufficient condition for $(\mathcal{L},\mathcal{P})$-TGRS codes to be MDS,
which extends related results in previous works. Second, we
explicitly characterize the parity-check matrices of
$(\mathcal{L},\mathcal{P})$-TGRS codes and provide a sufficient condition for
$(\mathcal{L},\mathcal{P})$-TGRS codes to be self-dual. Finally, we conduct an
in-depth study of the non-GRS property of $(\mathcal{L},\mathcal{P})$-TGRS
codes via Schur squares and combinatorial techniques, respectively. As a
result, we obtain large infinite families of non-GRS MDS codes.
|
2502.04747
|
Every Software as an Agent: Blueprint and Case Study
|
cs.SE cs.AI
|
The rise of (multimodal) large language models (LLMs) has shed light on
software agents, where software can understand and follow user instructions in
natural language. However, existing approaches such as API-based and GUI-based
agents are far from satisfactory in terms of accuracy and efficiency. Instead,
we advocate endowing LLMs with access to the software internals (source code
and runtime context) and the permission to dynamically inject generated code
into the software for execution. In such a whitebox setting, one may better
leverage the software context and the coding ability of LLMs. We then present
an overall design architecture and case studies on two popular web-based
desktop applications. We also give an in-depth discussion of the challenges and
future directions. We deem that such a new paradigm has the potential to
fundamentally overturn existing software agent design, ultimately creating
a digital world in which software can comprehend, operate, collaborate, and
even think to meet complex user needs.
|
2502.04748
|
Self-Supervised Learning for Pre-training Capsule Networks: Overcoming
Medical Imaging Dataset Challenges
|
cs.CV cs.LG
|
Deep learning techniques are increasingly being adopted in diagnostic medical
imaging. However, the limited availability of high-quality, large-scale medical
datasets presents a significant challenge, often necessitating the use of
transfer learning approaches. This study investigates self-supervised learning
methods for pre-training capsule networks in polyp diagnostics for colon
cancer. We used the PICCOLO dataset, comprising 3,433 samples, which
exemplifies typical challenges in medical datasets: small size, class
imbalance, and distribution shifts between data splits. Capsule networks offer
inherent interpretability due to their architecture and inter-layer information
routing mechanism. However, their limited native implementation in mainstream
deep learning frameworks and the lack of pre-trained versions pose a
significant challenge, particularly when training them on small medical
datasets, where leveraging pre-trained weights as initial
parameters would be beneficial. We explored two auxiliary self-supervised
learning tasks, colourisation and contrastive learning, for capsule network
pre-training. We compared self-supervised pre-trained models against
alternative initialisation strategies. Our findings suggest that contrastive
learning and in-painting techniques are suitable auxiliary tasks for
self-supervised learning in the medical domain. These techniques helped guide
the model to capture important visual features that are beneficial for the
downstream task of polyp classification, increasing its accuracy by 5.26%
compared to other weight initialisation methods.
|
2502.04749
|
Bounding User Contributions in the Worst-Case for User-Level
Differentially Private Mean Estimation
|
cs.IT math.IT
|
In this article, we revisit the well-studied problem of mean estimation under
user-level $\varepsilon$-differential privacy (DP). While user-level
$\varepsilon$-DP mechanisms for mean estimation, which typically bound (or
clip) user contributions to reduce sensitivity, are well-known, an analysis of
their estimation errors usually assumes that the data samples are independent
and identically distributed (i.i.d.), and sometimes also that all participating
users contribute the same number of samples (data homogeneity). Our main result
is a precise characterization of the \emph{worst-case} estimation error under
general clipping strategies, for heterogeneous data, and as a by-product, the
clipping strategy that gives rise to the smallest worst-case error.
Interestingly, we show via experimental studies that even for i.i.d. samples,
our clipping strategy performs uniformly better than the well-known clipping
strategy of Amin et al. (2019), which involves additional, private parameter
estimation.
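As background for the setting above, the standard user-level DP recipe that the abstract builds on can be sketched as follows: cap each user's contribution at a clipping level, average, and add Laplace noise calibrated to the resulting sensitivity. This is an illustrative sketch, not the paper's optimized worst-case clipping strategy; the function name, the fixed-count sensitivity simplification, and the bounded value range are all assumptions for the example.

```python
import numpy as np

def dp_user_level_mean(user_samples, clip_m, epsilon, value_range=(0.0, 1.0), rng=None):
    """User-level eps-DP mean estimate: each user contributes at most clip_m samples.

    Simplified sketch: sensitivity is computed as if the total sample count were
    fixed, which is the usual textbook approximation, not the paper's analysis.
    """
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = value_range
    # Keep at most clip_m samples per user to bound any single user's influence.
    clipped = [np.clip(np.asarray(s)[:clip_m], lo, hi) for s in user_samples]
    total = sum(c.sum() for c in clipped)
    count = sum(len(c) for c in clipped)
    mean = total / count
    # Changing one user's data moves the sum by at most clip_m * (hi - lo).
    sensitivity = clip_m * (hi - lo) / count
    return mean + rng.laplace(scale=sensitivity / epsilon)
```

Larger `clip_m` reduces clipping bias but inflates the noise scale; the paper's contribution is characterizing this trade-off in the worst case.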
|
2502.04750
|
Tighter sparse variational Gaussian processes
|
stat.ML cs.LG
|
Sparse variational Gaussian process (GP) approximations based on inducing
points have become the de facto standard for scaling GPs to large datasets,
owing to their theoretical elegance, computational efficiency, and ease of
implementation. This paper introduces a provably tighter variational
approximation by relaxing the standard assumption that the conditional
approximate posterior given the inducing points must match that in the prior.
The key innovation is to modify the conditional posterior to have smaller
variances than those of the prior at the training points. We derive the
collapsed bound for the regression case, describe how to use the proposed
approximation in large data settings, and discuss its application to handle
orthogonally structured inducing points and GP latent variable models.
Extensive experiments on regression benchmarks, classification, and latent
variable models demonstrate that the proposed approximation consistently
matches or outperforms standard sparse variational GPs while maintaining the
same computational cost. An implementation will be made available in all
popular GP packages.
|
2502.04751
|
Holistically Guided Monte Carlo Tree Search for Intricate Information
Seeking
|
cs.IR cs.CL
|
In the era of vast digital information, the sheer volume and heterogeneity of
available information present significant challenges for intricate information
seeking. Users frequently face multistep web search tasks that involve
navigating vast and varied data sources. This complexity demands that every
step remain comprehensive, accurate, and relevant. However, traditional search
methods often struggle to balance the need for localized precision with the
broader context required for holistic understanding, leaving critical facets of
intricate queries underexplored. In this paper, we introduce an LLM-based
search assistant that adopts a new information seeking paradigm with
holistically guided Monte Carlo tree search (HG-MCTS). We reformulate the task
as a progressive information collection process with a knowledge memory and
unite an adaptive checklist with multi-perspective reward modeling in MCTS. The
adaptive checklist provides explicit sub-goals to guide the MCTS process toward
comprehensive coverage of complex user queries. Simultaneously, our
multi-perspective reward modeling offers both exploration and retrieval
rewards, along with progress feedback that tracks completed and remaining
sub-goals, refining the checklist as the tree search progresses. By striking a
balance between localized tree expansion and global guidance, HG-MCTS reduces
redundancy in search paths and ensures that all crucial aspects of an intricate
query are properly addressed. Extensive experiments on real-world intricate
information seeking tasks demonstrate that HG-MCTS acquires thorough knowledge
collections and delivers more accurate final responses compared with existing
baselines.
|
2502.04756
|
Concept Navigation and Classification via Open Source Large Language
Model Processing
|
cs.CL cs.AI cs.LG
|
This paper presents a novel methodological framework for detecting and
classifying latent constructs, including frames, narratives, and topics, from
textual data using Open-Source Large Language Models (LLMs). The proposed
hybrid approach combines automated summarization with human-in-the-loop
validation to enhance the accuracy and interpretability of construct
identification. By employing iterative sampling coupled with expert refinement,
the framework guarantees methodological robustness and ensures conceptual
precision. Applied to diverse data sets, including AI policy debates, newspaper
articles on encryption, and the 20 Newsgroups data set, this approach
demonstrates its versatility in systematically analyzing complex political
discourses, media framing, and topic classification tasks.
|
2502.04757
|
ELITE: Enhanced Language-Image Toxicity Evaluation for Safety
|
cs.CV cs.CL
|
Current Vision Language Models (VLMs) remain vulnerable to malicious prompts
that induce harmful outputs. Existing safety benchmarks for VLMs primarily rely
on automated evaluation methods, but these methods struggle to detect implicit
harmful content or produce inaccurate evaluations. Moreover, we find that
existing benchmarks have low levels of harmfulness, ambiguous data, and limited
diversity in image-text pair combinations. To address these issues, we propose
the ELITE benchmark, a high-quality safety evaluation benchmark for VLMs,
underpinned by our enhanced evaluation method, the ELITE evaluator. The ELITE
evaluator explicitly incorporates a toxicity score to accurately assess
harmfulness in multimodal contexts, where VLMs often provide specific,
convincing, but harmless descriptions of images. We filter out ambiguous and
low-quality image-text pairs from existing benchmarks using the ELITE evaluator
and generate diverse combinations of safe and unsafe image-text pairs. Our
experiments demonstrate that the ELITE evaluator achieves superior alignment
with human evaluations compared to prior automated methods, and the ELITE
benchmark offers enhanced benchmark quality and diversity. By introducing
ELITE, we pave the way for safer, more robust VLMs, contributing essential
tools for evaluating and mitigating safety risks in real-world applications.
|
2502.04758
|
Differential Privacy of Quantum and Quantum-Inspired-Classical
Recommendation Algorithms
|
quant-ph cs.CR cs.ET cs.LG
|
We analyze the DP (differential privacy) properties of the quantum
recommendation algorithm and the quantum-inspired-classical recommendation
algorithm. We discover that the quantum recommendation algorithm is a privacy
curating mechanism on its own, requiring no external noise, which is different
from traditional differential privacy mechanisms. In our analysis, a novel
perturbation method tailored for SVD (singular value decomposition) and
low-rank matrix approximation problems is introduced. Using the perturbation
method and random matrix theory, we are able to derive that both the quantum
and quantum-inspired-classical algorithms are
$\big(\tilde{\mathcal{O}}\big(\frac 1n\big),\,\,
\tilde{\mathcal{O}}\big(\frac{1}{\min\{m,n\}}\big)\big)$-DP under some
reasonable restrictions, where $m$ and $n$ are numbers of users and products in
the input preference database respectively. Nevertheless, a comparison shows
that the quantum algorithm has better privacy preserving potential than the
classical one.
|
2502.04759
|
Enhancing Phishing Email Identification with Large Language Models
|
cs.CR cs.AI
|
Phishing has long been a common tactic used by cybercriminals and continues
to pose a significant threat in today's digital world. As phishing attacks
become more advanced and sophisticated, there is an increasing need for
effective methods to detect and prevent them. To address the challenging
problem of detecting phishing emails, researchers have developed numerous
solutions, in particular those based on machine learning (ML) algorithms. In
this work, we take steps to study the efficacy of large language models (LLMs)
in detecting phishing emails. The experiments show that the LLM achieves a high
accuracy rate at high precision; importantly, it also provides interpretable
evidence for the decisions.
|
2502.04760
|
Graph Federated Learning Based Proactive Content Caching in Edge
Computing
|
cs.LG cs.AI
|
With the rapid growth of mobile data traffic and the increasing prevalence of
video streaming, proactive content caching in edge computing has become crucial
for reducing latency and alleviating network congestion. However, traditional
caching strategies such as FIFO, LRU, and LFU fail to effectively predict
future content popularity, while existing proactive caching approaches often
require users to upload data to a central server, raising concerns regarding
privacy and scalability. To address these challenges, this paper proposes a
Graph Federated Learning-based Proactive Content Caching (GFPCC) scheme that
enhances caching efficiency while preserving user privacy. The proposed
approach integrates federated learning and graph neural networks, enabling
users to locally train Light Graph Convolutional Networks (LightGCN) to capture
user-item relationships and predict content popularity. Instead of sharing raw
data, only the trained model parameters are transmitted to the central server,
where a federated averaging algorithm aggregates updates, refines the global
model, and selects the most popular files for proactive caching. Experimental
evaluations on real-world datasets, such as MovieLens, demonstrate that GFPCC
outperforms baseline caching algorithms by achieving higher cache efficiency
through more accurate content popularity predictions. Moreover, the federated
learning framework strengthens privacy protection while maintaining efficient
model training; however, scalability remains a challenge in large-scale
networks with dynamic user preferences.
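The server-side aggregation step described above follows the standard federated averaging pattern: clients send model parameters rather than raw data, and the server forms a data-size-weighted average. A minimal sketch of that step, with hypothetical names and shapes (the abstract does not specify GFPCC's exact aggregation details):

```python
import numpy as np

def federated_average(client_params, client_sizes):
    """Sample-weighted FedAvg: client_params is a list of per-client parameter
    lists (one array per layer); client_sizes gives each client's local data size."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    # Average each layer's parameters across clients, weighted by local data size.
    n_layers = len(client_params[0])
    return [sum(w * layers[i] for w, layers in zip(weights, client_params))
            for i in range(n_layers)]
```

In the GFPCC setting, the averaged parameters would be those of the locally trained LightGCN models, and the refined global model then drives the cache-selection step.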
|
2502.04762
|
Autoregressive Generation of Static and Growing Trees
|
cs.CV
|
We propose a transformer architecture and training strategy for tree
generation. The architecture processes data at multiple resolutions and has an
hourglass shape, with middle layers processing fewer tokens than outer layers.
Similar to convolutional networks, we introduce longer-range skip connections
to complement this multi-resolution approach. The key advantage of this
architecture is the faster processing speed and lower memory consumption. We
are therefore able to process more complex trees than would be possible with a
vanilla transformer architecture. Furthermore, we extend this approach to
perform image-to-tree and point-cloud-to-tree conditional generation and to
simulate the tree growth processes, generating 4D trees. Empirical results
validate our approach in terms of speed, memory consumption, and generation
quality.
|
2502.04763
|
Shapley Value Approximation Based on k-Additive Games
|
cs.GT cs.LG
|
The Shapley value is the prevalent solution for fair division problems in
which a payout is to be divided among multiple agents. By adopting a
game-theoretic view, the idea of fair division and the Shapley value can also
be used in machine learning to quantify the individual contribution of features
or data points to the performance of a predictive model. Despite its popularity
and axiomatic justification, the Shapley value suffers from a computational
complexity that scales exponentially with the number of entities involved, and
hence requires approximation methods for its reliable estimation. We propose
SVA$k_{\text{ADD}}$, a novel approximation method that fits a $k$-additive
surrogate game. By taking advantage of $k$-additivity, we are able to elicit
the exact Shapley values of the surrogate game and then use these values as
estimates for the original fair division problem. The efficacy of our method is
evaluated empirically and compared to competing methods.
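The key property exploited above is that a $k$-additive game has a closed-form Shapley value: for $k=2$, $\phi_i = a_i + \sum_{j \neq i} a_{ij}/2$, where $a_i$ and $a_{ij}$ are the singleton and pair coefficients. A minimal sketch for the $k=2$ case, fitting the surrogate by least squares on randomly sampled coalitions (the paper's actual fitting procedure may differ; all names here are illustrative):

```python
import itertools
import numpy as np

def shapley_2additive(value_fn, n, n_samples=256, rng=None):
    """Fit a 2-additive surrogate game to sampled coalition values, then read off
    the surrogate's exact Shapley values: phi_i = a_i + sum_j a_ij / 2."""
    rng = np.random.default_rng(0) if rng is None else rng
    pairs = list(itertools.combinations(range(n), 2))
    rows, targets = [], []
    for _ in range(n_samples):
        s = rng.random(n) < 0.5                       # random coalition mask
        feat = np.concatenate([s.astype(float),
                               [float(s[i] and s[j]) for i, j in pairs]])
        rows.append(feat)
        targets.append(value_fn(np.flatnonzero(s)))   # coalition as index array
    # Least-squares fit of singleton and pairwise interaction coefficients.
    coef, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    phi, inter = coef[:n].copy(), coef[n:]
    for (i, j), a in zip(pairs, inter):
        phi[i] += a / 2.0
        phi[j] += a / 2.0
    return phi
```

When the underlying game really is 2-additive, the fit is exact and the estimates coincide with the true Shapley values; otherwise they approximate them.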
|
2502.04770
|
Efficient Evaluation of Quantization-Effects in Neural Codecs
|
eess.AS cs.LG
|
Neural codecs, comprising an encoder, quantizer, and decoder, enable signal
transmission at exceptionally low bitrates. Training these systems requires
techniques like the straight-through estimator, soft-to-hard annealing, or
statistical quantizer emulation to allow a non-zero gradient across the
quantizer. Evaluating the effect of quantization in neural codecs, like the
influence of gradient passing techniques on the whole system, is often costly
and time-consuming due to training demands and the lack of affordable and
reliable metrics. This paper proposes an efficient evaluation framework for
neural codecs using simulated data with a defined number of bits and
low-complexity neural encoders/decoders to emulate the non-linear behavior in
larger networks. Our system is highly efficient in terms of training time and
computational and hardware requirements, allowing us to uncover distinct
behaviors in neural codecs. We propose a modification to stabilize training
with the straight-through estimator based on our findings. We validate our
findings against an internal neural audio codec and against the
state-of-the-art descript-audio-codec.
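The straight-through estimator mentioned above addresses the fact that the rounding step has zero derivative almost everywhere, which would stall backpropagation; STE simply copies the incoming gradient through the quantizer. A minimal numpy sketch of a uniform quantizer with an STE-style backward pass (the function and its manual "gradient" closure are illustrative, not the paper's codec):

```python
import numpy as np

def ste_quantize(x, n_bits):
    """Uniform quantizer on [0, 1] with a straight-through 'gradient'.

    Forward: snap x to 2**n_bits levels. Backward: the true derivative of
    rounding is zero a.e., so the STE passes the upstream gradient through
    unchanged (here, masked to the clip range)."""
    levels = 2 ** n_bits - 1
    q = np.round(np.clip(x, 0.0, 1.0) * levels) / levels
    def grad(upstream):
        inside = (x >= 0.0) & (x <= 1.0)   # identity inside the clip range
        return upstream * inside
    return q, grad
```

In an autodiff framework the same effect is usually obtained with the reparameterization `x + stop_gradient(round(x) - x)`.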
|
2502.04771
|
DMPA: Model Poisoning Attacks on Decentralized Federated Learning for
Model Differences
|
cs.LG cs.AI
|
Federated learning (FL) has garnered significant attention as a prominent
privacy-preserving Machine Learning (ML) paradigm. Decentralized FL (DFL)
eschews traditional FL's centralized server architecture, enhancing the
system's robustness and scalability. However, these advantages of DFL also
create new vulnerabilities for malicious participants to execute adversarial
attacks, especially model poisoning attacks. In model poisoning attacks,
malicious participants aim to diminish the performance of benign models by
creating and disseminating the compromised model. Existing research on model
poisoning attacks has predominantly concentrated on undermining global models
within the Centralized FL (CFL) paradigm, while research in DFL remains
limited. To fill this gap, this paper proposes an innovative model
poisoning attack called DMPA. This attack calculates the differential
characteristics of multiple malicious client models and obtains the most
effective poisoning strategy, thereby orchestrating a collusive attack by
multiple participants. The effectiveness of this attack is validated across
multiple datasets, with results indicating that the DMPA approach consistently
surpasses existing state-of-the-art FL model poisoning attack strategies.
|
2502.04773
|
An Extended Benchmarking of Multi-Agent Reinforcement Learning
Algorithms in Complex Fully Cooperative Tasks
|
cs.LG
|
Multi-Agent Reinforcement Learning (MARL) has recently emerged as a
significant area of research. However, MARL evaluation often lacks systematic
diversity, hindering a comprehensive understanding of algorithms' capabilities.
In particular, cooperative MARL algorithms are predominantly evaluated on
benchmarks such as SMAC and GRF, which primarily feature team game scenarios
without adequately assessing the various aspects of agents' capabilities
required in fully cooperative real-world tasks such as multi-robot cooperation,
warehouse and resource management, search and rescue, and human-AI cooperation.
Moreover, MARL algorithms are mainly evaluated on low-dimensional state spaces,
and thus their performance on high-dimensional (e.g., image) observations is
not well-studied. To fill this gap, this paper highlights the crucial need for
expanding systematic evaluation across a wider array of existing benchmarks. To
this end, we conduct extensive evaluation and comparisons of well-known MARL
algorithms on complex fully cooperative benchmarks, including tasks with images
as agents' observations. Interestingly, our analysis shows that many
algorithms, hailed as state-of-the-art on SMAC and GRF, may underperform
standard MARL baselines on fully cooperative benchmarks. Finally, towards more
systematic and better evaluation of cooperative MARL algorithms, we have
open-sourced PyMARLzoo+, an extension of the widely used (E)PyMARL libraries,
which addresses an open challenge from [TBG++21], facilitating seamless
integration and support with all benchmarks of PettingZoo, as well as
Overcooked, PressurePlate, Capture Target and Box Pushing.
|
2502.04774
|
SeDi-Instruct: Enhancing Alignment of Language Models through
Self-Directed Instruction Generation
|
cs.CL
|
The rapid evolution of Large Language Models (LLMs) has enabled the industry
to develop various AI-based services. Instruction tuning is considered
essential in adapting foundation models for target domains to provide
high-quality services to customers. A key challenge in instruction tuning is
obtaining high-quality instruction data. Self-Instruct, which automatically
generates instruction data using ChatGPT APIs, alleviates the data scarcity
problem. To improve the quality of instruction data, Self-Instruct discards
many of the instructions generated by ChatGPT, which is cost-inefficient
owing to the many wasted API calls. To generate high-quality
instruction data at a low cost, we propose a novel data generation framework,
Self-Directed Instruction generation (SeDi-Instruct), which employs
diversity-based filtering and iterative feedback task generation.
Diversity-based filtering maintains model accuracy without excessively
discarding low-quality generated instructions by enhancing the diversity of
instructions in a batch. This reduces the cost of synthesizing instruction
data. The iterative feedback task generation integrates instruction generation
and training tasks and utilizes information obtained during the training to
create high-quality instruction sets. Our results show that SeDi-Instruct
enhances the accuracy of AI models by 5.2%, compared with traditional methods,
while reducing data generation costs by 36%.
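One plausible form of the diversity-based filtering described above is a greedy pass over a batch of generated instructions: keep an instruction unless its embedding is too similar to something already kept. This is a hypothetical sketch, assuming cosine similarity on precomputed embeddings and a fixed threshold; the abstract does not specify SeDi-Instruct's actual criterion.

```python
import numpy as np

def diversity_filter(embeddings, threshold=0.9):
    """Greedy batch diversity filter: keep item i unless its cosine similarity
    to an already-kept item is >= threshold. Returns kept indices."""
    kept_idx, kept_vecs = [], []
    for i, e in enumerate(np.asarray(embeddings, dtype=float)):
        e = e / np.linalg.norm(e)                     # normalize for cosine
        if all(float(e @ k) < threshold for k in kept_vecs):
            kept_idx.append(i)
            kept_vecs.append(e)
    return kept_idx
```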
|
2502.04777
|
Community detection for directed networks revisited using bimodularity
|
cs.SI
|
Community structure is a key feature omnipresent in real-world network data.
A plethora of methods has been proposed to reveal subsets of densely
interconnected nodes using criteria such as the modularity index. These
approaches have been successful for undirected graphs, but directed edge
information has not yet been dealt with in a satisfactory way. Here, we revisit
the concept of directed communities as a mapping between sending and receiving
communities. This translates into a new definition that we term bimodularity.
Using convex relaxation, bimodularity can be optimized with the singular value
decomposition of the directed modularity matrix. Subsequently, we propose an
edge-based clustering approach to reveal the directed communities including
their mappings. The feasibility of the new framework is illustrated on a
synthetic model and further applied to the neuronal wiring diagram of
\textit{C. elegans}, for which it yields meaningful feedforward loops of the
head and body motion systems. This framework sets the ground for the
understanding and detection of community structures in directed networks.
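The optimization step described above can be sketched concretely: form the directed modularity matrix $B_{ij} = A_{ij} - k^{\text{out}}_i k^{\text{in}}_j / m$ and take its leading singular pair, whose left and right singular vectors act as sending- and receiving-community indicators. A minimal sketch of that step only (not the paper's full edge-based clustering pipeline; names are illustrative):

```python
import numpy as np

def directed_modularity_svd(A):
    """Leading singular pair of the directed modularity matrix
    B = A - outer(k_out, k_in) / m for adjacency matrix A."""
    A = np.asarray(A, dtype=float)
    k_out, k_in, m = A.sum(axis=1), A.sum(axis=0), A.sum()
    B = A - np.outer(k_out, k_in) / m
    U, s, Vt = np.linalg.svd(B)
    # Largest singular value, plus sending / receiving patterns.
    return s[0], U[:, 0], Vt[0]
```

The sign structure of the two vectors then suggests which nodes send to which receiving group, which is the mapping view of directed communities the abstract describes.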
|
2502.04778
|
Behavior-Regularized Diffusion Policy Optimization for Offline
Reinforcement Learning
|
cs.LG cs.AI
|
The primary focus of offline reinforcement learning (RL) is to manage the
risk of hazardous exploitation of out-of-distribution actions. An effective
approach to achieve this goal is through behavior regularization, which
augments conventional RL objectives by incorporating constraints that enforce
the policy to remain close to the behavior policy. Nevertheless, existing
literature on behavior-regularized RL primarily focuses on explicit policy
parameterizations, such as Gaussian policies. Consequently, it remains unclear
how to extend this framework to more advanced policy parameterizations, such as
diffusion models. In this paper, we introduce BDPO, a principled
behavior-regularized RL framework tailored for diffusion-based policies,
thereby combining the expressive power of diffusion policies and the robustness
provided by regularization. The key ingredient of our method is to calculate
the Kullback-Leibler (KL) regularization analytically as the accumulated
discrepancies in reverse-time transition kernels along the diffusion
trajectory. By integrating the regularization, we develop an efficient
two-time-scale actor-critic RL algorithm that produces the optimal policy while
respecting the behavior constraint. Comprehensive evaluations conducted on
synthetic 2D tasks and continuous control tasks from the D4RL benchmark
validate its effectiveness and superior performance.
|
2502.04780
|
SiriuS: Self-improving Multi-agent Systems via Bootstrapped Reasoning
|
cs.AI
|
Multi-agent AI systems powered by large language models (LLMs) are
increasingly applied to solve complex tasks. However, these systems often rely
on fragile, manually designed prompts and heuristics, making optimization
difficult. A key challenge in optimizing multi-agent systems is acquiring
suitable training data for specialized agents. We introduce SiriuS, a
self-improving, reasoning-driven optimization framework for multi-agent
systems. Central to our approach is the construction of an experience library:
a repository of high-quality reasoning trajectories. The library is built by
retaining reasoning steps that lead to successful outcomes, providing a robust
training set for optimizing multi-agent systems. Additionally, we introduce a
library augmentation procedure that refines unsuccessful trajectories, further
enriching the library. SiriuS boosts performance by 2.86\% to 21.88\% on
reasoning and biomedical QA and enhances agent negotiation in competitive
settings. Our results show that SiriuS enhances multi-agent performance while
generating reusable data for self-correction and self-play enhancement in the
future.
|
2502.04786
|
Enhancing SQL Injection Detection and Prevention Using Generative Models
|
cs.CR cs.AI
|
SQL Injection (SQLi) continues to pose a significant threat to the security
of web applications, enabling attackers to manipulate databases and access
sensitive information without authorisation. Although advancements have been
made in detection techniques, traditional signature-based methods still
struggle to identify sophisticated SQL injection attacks that evade predefined
patterns. As SQLi attacks evolve, the need for more adaptive detection systems
becomes crucial. This paper introduces an innovative approach that leverages
generative models to enhance SQLi detection and prevention mechanisms. By
incorporating Variational Autoencoders (VAE), Conditional Wasserstein GAN with
Gradient Penalty (CWGAN-GP), and U-Net, synthetic SQL queries were generated to
augment training datasets for machine learning models. The proposed method
demonstrated improved accuracy in SQLi detection systems by reducing both false
positives and false negatives. Extensive empirical testing further illustrated
the ability of the system to adapt to evolving SQLi attack patterns, resulting
in enhanced precision and robustness.
|
2502.04789
|
Probing Internal Representations of Multi-Word Verbs in Large Language
Models
|
cs.CL
|
This study investigates the internal representations of verb-particle
combinations, called multi-word verbs, within transformer-based large language
models (LLMs), specifically examining how these models capture lexical and
syntactic properties at different neural network layers. Using the BERT
architecture, we analyze the representations of its layers for two different
verb-particle constructions: phrasal verbs like 'give up' and prepositional
verbs like 'look at'. Our methodology includes training probing classifiers on
the internal representations to classify these categories at both word and
sentence levels. The results indicate that the model's middle layers achieve
the highest classification accuracies. To further analyze the nature of these
distinctions, we conduct a data separability test using the Generalized
Discrimination Value (GDV). While GDV results show weak linear separability
between the two verb types, probing classifiers still achieve high accuracy,
suggesting that representations of these linguistic categories may be
non-linearly separable. This aligns with previous research indicating that
linguistic distinctions in neural networks are not always encoded in a linearly
separable manner. These findings computationally support usage-based claims on
the representation of verb-particle constructions and highlight the complex
interaction between neural network architectures and linguistic structures.
|
2502.04790
|
S$^2$-MAD: Breaking the Token Barrier to Enhance Multi-Agent Debate
Efficiency
|
cs.CL cs.AI
|
Large language models (LLMs) have demonstrated remarkable capabilities across
various natural language processing (NLP) scenarios, but they still face
challenges when handling complex arithmetic and logical reasoning tasks. While
Chain-Of-Thought (CoT) reasoning, self-consistency (SC) and self-correction
strategies have attempted to guide models in sequential, multi-step reasoning,
Multi-agent Debate (MAD) has emerged as a viable approach for enhancing the
reasoning capabilities of LLMs. By increasing both the number of agents and the
frequency of debates, the performance of LLMs improves significantly. However,
this strategy results in a significant increase in token costs, presenting a
barrier to scalability. To address this challenge, we introduce a novel
sparsification strategy designed to reduce token costs within MAD. This
approach minimizes ineffective exchanges of information and unproductive
discussions among agents, thereby enhancing the overall efficiency of the
debate process. We conduct comparative experiments on multiple datasets across
various models, demonstrating that our approach significantly reduces token
costs in MAD. Specifically, compared to MAD, our approach achieves an
impressive reduction of up to 94.5\% in token costs while keeping performance
degradation below 2.0\%.
|
2502.04793
|
$t$-Testing the Waters: Empirically Validating Assumptions for Reliable
A/B-Testing
|
stat.ME cs.LG
|
A/B-tests are a cornerstone of experimental design on the web, with
wide-ranging applications and use-cases. The statistical $t$-test comparing
differences in means is the most commonly used method for assessing treatment
effects, often justified through the Central Limit Theorem (CLT). The CLT
ascertains that, as the sample size grows, the sampling distribution of the
Average Treatment Effect converges to normality, making the $t$-test valid for
sufficiently large sample sizes. When outcome measures are skewed or
non-normal, quantifying what "sufficiently large" entails is not
straightforward.
To ensure that confidence intervals maintain proper coverage and that
$p$-values accurately reflect the false positive rate, it is critical to
validate this normality assumption. We propose a practical method to test this,
by analysing repeatedly resampled A/A-tests. When the normality assumption
holds, the resulting $p$-value distribution should be uniform, and this
property can be tested using the Kolmogorov-Smirnov test. This provides an
efficient and effective way to empirically assess whether the $t$-test's
assumptions are met, and the A/B-test is valid. We demonstrate our methodology
and highlight how it helps to identify scenarios prone to inflated Type-I
errors. Our approach provides a practical framework to ensure and improve the
reliability and robustness of A/B-testing practices.
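The validation procedure described above is directly implementable: repeatedly resample fake A/A arms from a single outcome pool, collect the $t$-test $p$-values, and run a Kolmogorov-Smirnov test of those $p$-values against Uniform(0,1). A minimal sketch, assuming scipy and with illustrative function and parameter names:

```python
import numpy as np
from scipy import stats

def aa_test_pvalues(outcomes, n_rep=2000, n_per_arm=50, rng=None):
    """Resample A/A arms from one outcome pool; return the KS-test p-value
    of the collected t-test p-values against Uniform(0, 1).

    A small KS p-value signals that t-test assumptions fail at this sample size."""
    rng = np.random.default_rng(0) if rng is None else rng
    pvals = np.empty(n_rep)
    for r in range(n_rep):
        sample = rng.choice(outcomes, size=2 * n_per_arm, replace=True)
        a, b = sample[:n_per_arm], sample[n_per_arm:]
        # Welch t-test between two arms drawn from the same distribution.
        pvals[r] = stats.ttest_ind(a, b, equal_var=False).pvalue
    return stats.kstest(pvals, "uniform").pvalue
```

Running this with a heavily skewed outcome and a small `n_per_arm` is exactly the scenario where the uniformity check tends to fail, flagging inflated Type-I error risk before the real A/B-test.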
|
2502.04794
|
MedMimic: Physician-Inspired Multimodal Fusion for Early Diagnosis of
Fever of Unknown Origin
|
eess.IV cs.AI cs.CV
|
Fever of unknown origin (FUO) remains a diagnostic challenge. MedMimic is
introduced as a multimodal framework inspired by real-world diagnostic
processes. It uses pretrained models such as DINOv2, Vision Transformer, and
ResNet-18 to convert high-dimensional 18F-FDG PET/CT imaging into
low-dimensional, semantically meaningful features. A learnable
self-attention-based fusion network then integrates these imaging features with
clinical data for classification. Using 416 FUO patient cases from Sichuan
University West China Hospital from 2017 to 2023, the multimodal fusion
classification network MFCN achieved macro-AUROC scores ranging from 0.8654 to
0.9291 across seven tasks, outperforming conventional machine learning and
single-modality deep learning methods. Ablation studies and five-fold
cross-validation further validated its effectiveness. By combining the
strengths of pretrained large models and deep learning, MedMimic offers a
promising solution for disease classification.
|
2502.04795
|
Developmentally-plausible Working Memory Shapes a Critical Period for
Language Acquisition
|
cs.CL
|
Large language models possess general linguistic abilities but acquire
language less efficiently than humans. This study proposes a method for
integrating the developmental characteristics of working memory during the
critical period, a stage when human language acquisition is particularly
efficient, into the training process of language models. The proposed method
introduces a mechanism that initially constrains working memory during the
early stages of training and gradually relaxes this constraint in an
exponential manner as learning progresses. Targeted syntactic evaluation shows
that the proposed method outperforms conventional methods without memory
constraints or with static memory constraints. These findings not only provide
new directions for designing data-efficient language models but also offer
indirect evidence supporting the role of the developmental characteristics of
working memory as the underlying mechanism of the critical period in language
acquisition.
|
2502.04797
|
Self-Rationalization in the Wild: A Large Scale Out-of-Distribution
Evaluation on NLI-related tasks
|
cs.CL
|
Free-text explanations are expressive and easy to understand, but many
datasets lack annotated explanation data, making it challenging to train models
for explainable predictions. To address this, we investigate how to use
existing explanation datasets for self-rationalization and evaluate models'
out-of-distribution (OOD) performance. We fine-tune T5-Large and OLMo-7B models
and assess the impact of fine-tuning data quality, the number of fine-tuning
samples, and few-shot selection methods. The models are evaluated on 19 diverse
OOD datasets across three tasks: natural language inference (NLI),
fact-checking, and hallucination detection in abstractive summarization. To
evaluate the generated explanations, we conduct a human study on 13 selected
models and examine its correlation with the Acceptability score (T5-11B) and
three other LLM-based reference-free metrics. Human evaluation shows that the
Acceptability score correlates most strongly with human judgments,
demonstrating its effectiveness in evaluating free-text explanations. Our
findings reveal: 1) few annotated examples effectively adapt models for OOD
explanation generation; 2) compared to sample selection strategies, fine-tuning
data source has a larger impact on OOD performance; and 3) models with higher
label prediction accuracy tend to produce better explanations, as reflected by
higher Acceptability scores.
|
2502.04799
|
A Regularized Newton Method for Nonconvex Optimization with Global and
Local Complexity Guarantees
|
math.OC cs.LG
|
We consider the problem of finding an $\epsilon$-stationary point of a
nonconvex function with a Lipschitz continuous Hessian and propose a quadratic
regularized Newton method incorporating a new class of regularizers constructed
from the current and previous gradients. The method leverages a recently
developed linear conjugate gradient approach with a negative curvature monitor
to solve the regularized Newton equation. Notably, our algorithm is adaptive,
requiring no prior knowledge of the Lipschitz constant of the Hessian, and
achieves a global complexity of $O(\epsilon^{-\frac{3}{2}}) + \tilde O(1)$ in
terms of second-order oracle calls and $\tilde O(\epsilon^{-\frac{7}{4}})$ in
terms of Hessian-vector products. Moreover, when the iterates converge
to a point where the Hessian is positive definite, the method exhibits
quadratic local convergence. Preliminary numerical results illustrate the
competitiveness of our algorithm.
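As a rough illustration of the gradient-based regularization idea: the paper's regularizer is constructed from the current and previous gradients, while the sketch below uses the simpler single-gradient variant (step x <- x - g / (f''(x) + sigma * sqrt(|g|))) on a one-dimensional toy problem. All names and constants here are our own, not the paper's algorithm.

```python
import math

# Toy nonconvex objective with analytic derivatives;
# stationary points at x = -1, 0, 1, local minima at x = +-1.
f = lambda x: x**4 / 4 - x**2 / 2
df = lambda x: x**3 - x
d2f = lambda x: 3 * x**2 - 1

def regularized_newton(x, iters=50, sigma=1.0, eps=1e-8):
    """Gradient-regularized Newton iteration (illustrative variant):
    the sqrt(|g|) regularizer keeps steps bounded far from a minimizer
    while vanishing near it, recovering fast local convergence."""
    for _ in range(iters):
        g = df(x)
        if abs(g) < eps:
            break
        x -= g / (d2f(x) + sigma * math.sqrt(abs(g)))
    return x

x_star = regularized_newton(2.0)
print(round(x_star, 6))  # converges to the local minimizer at x = 1
```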
|
2502.04804
|
DetVPCC: RoI-based Point Cloud Sequence Compression for 3D Object
Detection
|
cs.CV
|
While MPEG-standardized video-based point cloud compression (VPCC) achieves
high compression efficiency for human perception, it struggles with a poor
trade-off between bitrate savings and detection accuracy when supporting 3D
object detectors. This limitation stems from VPCC's inability to prioritize
regions of different importance within point clouds. To address this issue, we
propose DetVPCC, a novel method integrating region-of-interest (RoI) encoding
with VPCC for efficient point cloud sequence compression while preserving the
3D object detection accuracy. Specifically, we augment VPCC to support
RoI-based compression by assigning spatially non-uniform quality levels. Then,
we introduce a lightweight RoI detector to identify crucial regions that
potentially contain objects. Experiments on the nuScenes dataset demonstrate
that our approach significantly improves the detection accuracy. The code and
demo video are available in supplementary materials.
|
2502.04807
|
Robust Conformal Outlier Detection under Contaminated Reference Data
|
stat.ML cs.LG stat.ME
|
Conformal prediction is a flexible framework for calibrating machine learning
predictions, providing distribution-free statistical guarantees. In outlier
detection, this calibration relies on a reference set of labeled inlier data to
control the type-I error rate. However, obtaining a perfectly labeled inlier
reference set is often unrealistic, and a more practical scenario involves
access to a contaminated reference set containing a small fraction of outliers.
This paper analyzes the impact of such contamination on the validity of
conformal methods. We prove that under realistic, non-adversarial settings,
calibration on contaminated data yields conservative type-I error control,
shedding light on the inherent robustness of conformal methods. This
conservativeness, however, typically results in a loss of power. To alleviate
this limitation, we propose a novel, active data-cleaning framework that
leverages a limited labeling budget and an outlier detection model to
selectively annotate data points in the contaminated reference set that are
suspected as outliers. By removing only the annotated outliers in this
``suspicious'' subset, we can effectively enhance power while mitigating the
risk of inflating the type-I error rate, as supported by our theoretical
analysis. Experiments on real datasets validate the conservative behavior of
conformal methods under contamination and show that the proposed data-cleaning
strategy improves power without sacrificing validity.
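The conservativeness phenomenon is easy to reproduce with marginal conformal p-values. The sketch below (Gaussian scores and a 5% contamination rate are illustrative assumptions, not the paper's setup) shows the empirical type-I error dropping below the nominal level once the reference set is contaminated with high-scoring outliers.

```python
import random

def conformal_pvalue(ref_scores, x):
    """Conformal p-value of a test score x against a reference set of
    nonconformity scores (larger score = more outlying)."""
    return (1 + sum(s >= x for s in ref_scores)) / (1 + len(ref_scores))

rng = random.Random(1)
inlier = lambda: rng.gauss(0.0, 1.0)             # inlier nonconformity score
clean_ref = [inlier() for _ in range(500)]
dirty_ref = clean_ref[:475] + [rng.gauss(4.0, 1.0) for _ in range(25)]

alpha = 0.1
tests = [inlier() for _ in range(5000)]          # all true inliers
fpr = lambda ref: sum(conformal_pvalue(ref, x) <= alpha
                      for x in tests) / len(tests)
print(fpr(clean_ref), fpr(dirty_ref))  # contamination lowers the error rate
```

The outliers' large scores inflate every inlier's p-value, so fewer true inliers are rejected: validity is preserved, but power against real outliers is lost, which is exactly the gap the proposed data-cleaning step targets.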
|
2502.04809
|
Humans Co-exist, So Must Embodied Artificial Agents
|
cs.LG
|
Modern embodied artificial agents excel in static, predefined tasks but fall
short in dynamic and long-term interactions with humans. On the other hand,
humans can adapt and evolve continuously, exploiting the situated knowledge
embedded in their environment and other agents, thus contributing to meaningful
interactions. We introduce the concept of co-existence for embodied artificial
agents and argue that it is a prerequisite for meaningful, long-term
interaction with humans. We take inspiration from biology and design theory to
understand how human and non-human organisms foster entities that co-exist
within their specific niches. Finally, we propose key research directions for
the machine learning community to foster co-existing embodied agents, focusing
on the principles, hardware and learning methods responsible for shaping them.
|
2502.04813
|
Describing Nonstationary Data Streams in Frequency Domain
|
cs.LG
|
Concept drift is among the primary challenges faced by the data stream
processing methods. The drift detection strategies, designed to counteract the
negative consequences of such changes, often rely on analyzing the problem
metafeatures. This work presents the Frequency Filtering Metadescriptor -- a
tool for characterizing the data stream that searches for the informative
frequency components visible in the sample's feature vector. The frequencies
are filtered according to their variance across all available data batches. The
presented solution is capable of generating a metadescription of the data
stream, separating chunks into groups describing specific concepts on its
basis, and visualizing the frequencies in the original spatial domain. The
experimental analysis compared the proposed solution with two state-of-the-art
strategies and with the PCA baseline in the post-hoc concept identification
task. The research is followed by the identification of concepts in the
real-world data streams. The frequency-domain generalization adopted in the
proposed solution allows complex feature dependencies to be captured as a
reduced number of frequency components, while maintaining the semantic meaning
of the data.
|
2502.04818
|
Harnessing omnipresent oscillator networks as computational resource
|
cs.LG math.DS nlin.AO nlin.CD
|
Nature is pervaded with oscillatory behavior. In networks of coupled
oscillators, patterns can arise when the system synchronizes to an external
input; hence, these networks provide processing and memory of the input. We
present a universal framework for harnessing oscillator networks as a
computational resource. This reservoir computing framework is introduced via
the ubiquitous model of phase-locking, the Kuramoto model. We force the
Kuramoto model with a nonlinear target system; after the target system is
substituted with a trained feedback loop, the network emulates the target
system. Our results are twofold.
Firstly, the trained network inherits performance properties of the Kuramoto
model, where all-to-all coupling is performed in linear time with respect to
the number of nodes and parameters for synchronization are abundant. Secondly,
the learning capabilities of the oscillator network can be explained using
Kuramoto model's order parameter. This work provides the foundation for
utilizing nature's oscillator networks as a new class of information processing
systems.
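The linear-time claim is visible in the mean-field form of the Kuramoto model: all-to-all coupling reduces to a single complex order parameter per step. Below is a minimal Euler-integration sketch (the constants are illustrative, and the trained feedback loop of the reservoir framework is omitted):

```python
import cmath
import math
import random

def kuramoto_step(theta, omega, K, dt):
    """One Euler step of the all-to-all Kuramoto model. The mean-field
    order parameter z makes the update O(n) instead of O(n^2)."""
    n = len(theta)
    z = sum(cmath.exp(1j * t) for t in theta) / n
    r, psi = abs(z), cmath.phase(z)
    theta = [t + dt * (w + K * r * math.sin(psi - t))
             for t, w in zip(theta, omega)]
    return theta, r

rng = random.Random(0)
n = 200
theta = [rng.uniform(-math.pi, math.pi) for _ in range(n)]  # random phases
omega = [rng.gauss(0.0, 0.1) for _ in range(n)]             # natural frequencies
for _ in range(500):
    theta, r = kuramoto_step(theta, omega, K=2.0, dt=0.05)
print(round(r, 2))  # order parameter r near 1 indicates phase-locking
```

With coupling K well above the critical value for this frequency spread, the network locks onto a common phase, which is the synchronization behavior the reservoir framework exploits.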
|
2502.04827
|
Uplink Rate-Splitting Multiple Access for Mobile Edge Computing with
Short-Packet Communications
|
cs.IT eess.SP math.IT
|
In this paper, a Rate-Splitting Multiple Access (RSMA) scheme is proposed to
assist a Mobile Edge Computing (MEC) system where local computation tasks from
two users are offloaded to the MEC server, facilitated by uplink RSMA for
processing. The efficiency of the MEC service is hence primarily influenced by
the RSMA-aided task offloading phase and the subsequent task computation phase,
where reliable and low-latency communication is required. For this practical
consideration, short-packet communication in the Finite Blocklength (FBL)
regime is introduced. In this context, we propose a novel uplink RSMA-aided MEC
framework and derive the overall Successful Computation Probability (SCP) with
FBL consideration. To maximize the SCP of our proposed RSMA-aided MEC, we
strategically optimize: (1) the task offloading factor which determines the
number of tasks to be offloaded and processed by the MEC server; (2) the
transmit power allocation between different RSMA streams; and (3) the
task-splitting factor which decides how many tasks are allocated to splitting
streams, while adhering to FBL constraints. To address the strong coupling
between these variables in the SCP expression, we apply the Alternating
Optimization method, which formulates tractable subproblems to optimize each
variable iteratively. The resultant non-convex subproblems are then tackled by
Successive Convex Approximation. Numerical results demonstrate that applying
uplink RSMA in the MEC system with FBL constraints can not only improve the SCP
performance but also provide lower latency in comparison to conventional
transmission schemes such as Non-orthogonal Multiple Access (NOMA).
|
2502.04829
|
Optimistic Gradient Learning with Hessian Corrections for
High-Dimensional Black-Box Optimization
|
cs.LG cs.AI
|
Black-box algorithms are designed to optimize functions without relying on
their underlying analytical structure or gradient information, making them
essential when gradients are inaccessible or difficult to compute. Traditional
methods for solving black-box optimization (BBO) problems predominantly rely on
non-parametric models and struggle to scale to large input spaces. Conversely,
parametric methods that model the function with neural estimators and obtain
gradient signals via backpropagation may suffer from significant gradient
errors. A recent alternative, Explicit Gradient Learning (EGL), which directly
learns the gradient using a first-order Taylor approximation, has demonstrated
superior performance over both parametric and non-parametric methods. In this
work, we propose two novel gradient learning variants to address the robustness
challenges posed by high-dimensional, complex, and highly non-linear problems.
Optimistic Gradient Learning (OGL) introduces a bias toward lower regions in
the function landscape, while Higher-order Gradient Learning (HGL) incorporates
second-order Taylor corrections to improve gradient accuracy. We combine these
approaches into the unified OHGL algorithm, achieving state-of-the-art (SOTA)
performance on the synthetic COCO suite. Additionally, we demonstrate OHGL's
applicability to high-dimensional real-world machine learning (ML) tasks such
as adversarial training and code generation. Our results highlight OHGL's
ability to generate stronger candidates, offering a valuable tool for ML
researchers and practitioners tackling high-dimensional, non-linear
optimization challenges.
|
2502.04832
|
Memory Capacity of Nonlinear Recurrent Networks: Is it Informative?
|
cs.LG stat.ML
|
The total memory capacity (MC) of linear recurrent neural networks (RNNs) has
been proven to be equal to the rank of the corresponding Kalman controllability
matrix, and it is almost surely maximal for connectivity and input weight
matrices drawn from regular distributions. This fact questions the usefulness
of this metric in distinguishing the performance of linear RNNs in the
processing of stochastic signals. This note shows that the MC of random
nonlinear RNNs yields arbitrary values within established upper and lower
bounds, depending only on the scale of the input process. This confirms that
the existing definition of MC in the linear and nonlinear cases has no
practical value.
|
2502.04834
|
Lightweight Operations for Visual Speech Recognition
|
cs.CV cs.AI cs.CL cs.LG
|
Visual speech recognition (VSR), which decodes spoken words from video data,
offers significant benefits, particularly when audio is unavailable. However,
the high dimensionality of video data leads to prohibitive computational costs
that demand powerful hardware, limiting VSR deployment on resource-constrained
devices. This work addresses this limitation by developing lightweight VSR
architectures. Leveraging efficient operation design paradigms, we create
compact yet powerful models with reduced resource requirements and minimal
accuracy loss. We train and evaluate our models on a large-scale public dataset
for recognition of words from video sequences, demonstrating their
effectiveness for practical applications. We also conduct an extensive array of
ablative experiments to thoroughly analyze the size and complexity of each
model. Code and trained models will be made publicly available.
|
2502.04837
|
Online Robot Motion Planning Methodology Guided by Group Social
Proxemics Feature
|
cs.RO cs.SY eess.SY
|
Nowadays, robots are expected to demonstrate human-like perception, reasoning,
and behavior patterns in social or service applications. However, most
existing motion planning methods are incompatible with this requirement. A
potential reason is that existing navigation algorithms usually treat people
as just another kind of obstacle and hardly take social principles or
awareness into consideration. In this paper, we attempt to model group
proxemics and blend it into the robot's scenario perception and navigation.
For this purpose, a group clustering method considering both social relevance
and spatial confidence is introduced, which enables the robot to identify
individuals and divide them into groups. Next, we propose modeling individual
proxemics with a magnetic dipole model, and further establish the group
proxemics and scenario map through vector-field superposition. On the basis of
the group clustering and proxemics modeling, we present a method to obtain the
optimal observation positions (OOPs) of a group. Once the OOPs grid and
scenario map are established, a heuristic algorithm is employed to generate
paths that guide the robot to cruise among the groups for interactive
purposes. A series of experiments conducted to validate the proposed
methodology on a physical robot demonstrate that it achieves promising
performance in group recognition accuracy and path-generation efficiency. This
suggests that group awareness is an important module for making robots behave
socially in practical scenarios.
|
2502.04840
|
Coherent Local Explanations for Mathematical Optimization
|
math.OC cs.LG
|
A surge of explainable artificial intelligence methods seeks to enhance
transparency and explainability in machine learning models. At the same time,
there is a growing demand for explaining decisions taken through complex
algorithms used in mathematical optimization. However, current explanation
methods do not take into account the structure of the underlying optimization
problem, leading to unreliable outcomes. In response to this need, we introduce
Coherent Local Explanations for Mathematical Optimization (CLEMO). CLEMO
provides explanations for multiple components of optimization models, the
objective value and decision variables, which are coherent with the underlying
model structure. Our sampling-based procedure can provide explanations for the
behavior of exact and heuristic solution algorithms. The effectiveness of CLEMO
is illustrated by experiments for the shortest path problem, the knapsack
problem, and the vehicle routing problem.
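CLEMO's sampling-based procedure can be illustrated on the knapsack problem: perturb a model parameter locally, re-solve the optimization problem, and fit a linear surrogate to the resulting optimal values. The stdlib sketch below is our own illustration; the problem data and the single-feature surrogate are assumptions, not the paper's experimental setup.

```python
import random

def knapsack_value(weights, values, cap):
    """Optimal value of a tiny 0/1 knapsack, solved exactly by brute force."""
    n, best = len(weights), 0.0
    for mask in range(1 << n):
        w = sum(weights[i] for i in range(n) if mask >> i & 1)
        v = sum(values[i] for i in range(n) if mask >> i & 1)
        if w <= cap:
            best = max(best, v)
    return best

rng = random.Random(0)
w0, vals, cap = [3.0, 4.0, 5.0], [4.0, 5.0, 6.0], 8.0
xs, ys = [], []
for _ in range(200):                    # sample local perturbations of w0[0]
    d = rng.uniform(-1.0, 1.0)
    xs.append(d)
    ys.append(knapsack_value([w0[0] + d] + w0[1:], vals, cap))
# Ordinary least-squares slope: the surrogate's estimate of how sensitive
# the optimal value is to the first item's weight.
xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
print(round(slope, 2))  # negative: a heavier first item hurts the optimum
```

Because each sample is obtained by actually re-solving the model, the explanation stays coherent with the underlying optimization structure rather than with a detached black-box approximation.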
|
2502.04843
|
PoI: Pixel of Interest for Novel View Synthesis Assisted Scene
Coordinate Regression
|
cs.CV
|
The task of estimating camera poses can be enhanced through novel view
synthesis techniques such as NeRF and Gaussian Splatting to increase the
diversity and coverage of training data. However, these techniques often
produce rendered images with issues like blurring and ghosting, which
compromise their reliability. These issues become particularly pronounced for
Scene Coordinate Regression (SCR) methods, which estimate 3D coordinates at the
pixel level. To mitigate the problems associated with unreliable rendered
images, we introduce a novel filtering approach, which selectively extracts
well-rendered pixels while discarding the inferior ones. This filter
simultaneously measures the SCR model's real-time reprojection loss and
gradient during training. Building on this filtering technique, we also develop
a new strategy to improve scene coordinate regression using sparse inputs,
drawing on successful applications of sparse input techniques in novel view
synthesis. Our experimental results validate the effectiveness of our method,
demonstrating state-of-the-art performance on indoor and outdoor datasets.
|
2502.04846
|
UAV-Based Cell-Free Massive MIMO: Joint Placement and Power Optimization
under Fronthaul Capacity Limitations
|
eess.SP cs.IT math.IT
|
We consider a cell-free massive multiple-input multiple-output (mMIMO)
network, where unmanned aerial vehicles (UAVs) equipped with multiple antennas
serve as distributed UAV-access points (UAV-APs). These UAV-APs provide
seamless coverage by jointly serving user equipments (UEs) without predefined
cell boundaries. However, high-capacity wireless networks face significant
challenges due to fronthaul limitations in UAV-assisted architectures. This
letter proposes a novel UAV-based cell-free mMIMO framework that leverages
distributed UAV-APs to serve UEs while addressing the capacity constraints of
wireless fronthaul links. We evaluate functional split Options 7.2 and 8 for
the fronthaul links, aiming to maximize the minimum
signal-to-interference-plus-noise ratio (SINR) among the UEs and minimize the
power consumption by optimizing the transmit powers of UAV-APs and selectively
activating them. Our analysis compares sub-6 GHz and millimeter wave (mmWave)
bands for the fronthaul, showing that mmWave achieves superior SINR with lower
power consumption, particularly under Option 8. Additionally, we determine the
minimum fronthaul bandwidth required to activate a single UAV-AP under
different split options.
|
2502.04847
|
HumanDiT: Pose-Guided Diffusion Transformer for Long-form Human Motion
Video Generation
|
cs.CV
|
Human motion video generation has advanced significantly, yet existing
methods still struggle with accurately rendering detailed body parts like hands
and faces, especially in long sequences and intricate motions. Current
approaches also rely on fixed resolution and struggle to maintain visual
consistency. To address these limitations, we propose HumanDiT, a pose-guided
Diffusion Transformer (DiT)-based framework trained on a large in-the-wild dataset
containing 14,000 hours of high-quality video to produce high-fidelity videos
with fine-grained body rendering. Specifically, (i) HumanDiT, built on DiT,
supports numerous video resolutions and variable sequence lengths, facilitating
learning for long-sequence video generation; (ii) we introduce a prefix-latent
reference strategy to maintain personalized characteristics across extended
sequences. Furthermore, during inference, HumanDiT leverages Keypoint-DiT to
generate subsequent pose sequences, facilitating video continuation from static
images or existing videos. It also utilizes a Pose Adapter to enable pose
transfer with given sequences. Extensive experiments demonstrate its superior
performance in generating long-form, pose-accurate videos across diverse
scenarios.
|
2502.04849
|
Advancing Wasserstein Convergence Analysis of Score-Based Models:
Insights from Discretization and Second-Order Acceleration
|
stat.ML cs.LG math.PR
|
Score-based diffusion models have emerged as powerful tools in generative
modeling, yet their theoretical foundations remain underexplored. In this work,
we focus on the Wasserstein convergence analysis of score-based diffusion
models. Specifically, we investigate the impact of various discretization
schemes, including Euler discretization, exponential integrators, and midpoint
randomization methods. Our analysis provides a quantitative comparison of these
discrete approximations, emphasizing their influence on convergence behavior.
Furthermore, we explore scenarios where Hessian information is available and
propose an accelerated sampler based on the local linearization method. We
demonstrate that this Hessian-based approach achieves faster convergence rates
of order $\widetilde{\mathcal{O}}\left(\frac{1}{\varepsilon}\right)$,
significantly improving upon the standard rate
$\widetilde{\mathcal{O}}\left(\frac{1}{\varepsilon^2}\right)$ of vanilla
diffusion models, where $\varepsilon$ denotes the target accuracy.
|
2502.04850
|
Aequa: Fair Model Rewards in Collaborative Learning via Slimmable
Networks
|
cs.LG cs.DC
|
Collaborative learning enables multiple participants to learn a single global
model by exchanging focused updates instead of sharing data. One of the core
challenges in collaborative learning is ensuring that participants are rewarded
fairly for their contributions, which entails two key sub-problems:
contribution assessment and reward allocation. This work focuses on fair reward
allocation, where the participants are incentivized through model rewards -
differentiated final models whose performance is commensurate with the
contribution. In this work, we leverage the concept of slimmable neural
networks to collaboratively learn a shared global model whose performance
degrades gracefully with a reduction in model width. We also propose a
post-training fair allocation algorithm that determines the model width for
each participant based on their contributions. We theoretically study the
convergence of our proposed approach and empirically validate it using
extensive experiments on different datasets and architectures. We also extend
our approach to enable training-time model reward allocation.
|
2502.04852
|
Relative Age Estimation Using Face Images
|
cs.CV
|
This work introduces a novel deep-learning approach for estimating age from a
single facial image by refining an initial age estimate. The refinement
leverages a reference face database of individuals with similar ages and
appearances. We employ a network that estimates age differences between an
input image and reference images with known ages, thus refining the initial
estimate. Our method explicitly models age-dependent facial variations using
differential regression, yielding improved accuracy compared to conventional
absolute age estimation. Additionally, we introduce an age augmentation scheme
that iteratively refines initial age estimates by modeling their error
distribution during training. This iterative approach further enhances the
initial estimates. Our approach surpasses existing methods, achieving
state-of-the-art accuracy on the MORPH II and CACD datasets. Furthermore, we
examine the biases inherent in contemporary state-of-the-art age estimation
techniques.
|
2502.04863
|
Enhancing Disinformation Detection with Explainable AI and Named Entity
Replacement
|
cs.CL
|
The automatic detection of disinformation presents a significant challenge in
the field of natural language processing. This task addresses a multifaceted
societal and communication issue, which needs approaches that extend beyond the
identification of general linguistic patterns through data-driven algorithms.
In this research work, we hypothesise that text classification methods are not
able to capture the nuances of disinformation and often ground their decisions
in superfluous features. Hence, we apply a post-hoc explainability
method (SHAP, SHapley Additive exPlanations) to identify spurious elements with
high impact on the classification models. Our findings show that
non-informative elements (e.g., URLs and emoticons) should be removed and named
entities (e.g., Rwanda) should be pseudo-anonymized before training to avoid
models' bias and increase their generalization capabilities. We evaluate this
methodology with an internal dataset and an external dataset, before and after
applying extended data preprocessing and named entity replacement. The results
show that our proposal improves the performance of a disinformation
classification method on external test data by 65.78% on average, without a
significant decrease in internal test performance.
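The preprocessing the authors converge on (removing non-informative elements, pseudo-anonymizing named entities) can be sketched with regular expressions. The toy gazetteer below stands in for a real named-entity recognizer and is purely illustrative:

```python
import re

URL = re.compile(r"https?://\S+")
EMOTICON = re.compile(r"[:;]-?[)(DPp]")
# Illustrative stand-in for an NER tagger: a fixed entity-to-placeholder map.
ENTITIES = {"Rwanda": "[COUNTRY]", "Kigali": "[CITY]"}

def preprocess(text):
    """Strip non-informative elements and pseudo-anonymize named entities
    so a classifier cannot latch onto them as spurious features."""
    text = URL.sub("", text)
    text = EMOTICON.sub("", text)
    for name, tag in ENTITIES.items():
        text = re.sub(rf"\b{re.escape(name)}\b", tag, text)
    return " ".join(text.split())

print(preprocess("Breaking news from Rwanda :) http://example.com/x"))
# → "Breaking news from [COUNTRY]"
```

Training on text cleaned this way forces the model to rely on linguistic patterns of disinformation rather than on which specific country or URL happens to appear in the training set.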
|
2502.04864
|
$TAR^2$: Temporal-Agent Reward Redistribution for Optimal Policy
Preservation in Multi-Agent Reinforcement Learning
|
cs.MA cs.AI cs.LG cs.RO
|
In cooperative multi-agent reinforcement learning (MARL), learning effective
policies is challenging when global rewards are sparse and delayed. This
difficulty arises from the need to assign credit across both agents and time
steps, a problem that existing methods often fail to address in episodic,
long-horizon tasks. We propose Temporal-Agent Reward Redistribution ($TAR^2$), a
novel approach that decomposes sparse global rewards into agent-specific,
time-step-specific components, thereby providing more frequent and accurate
feedback for policy learning. Theoretically, we show that $TAR^2$ (i) aligns
with potential-based reward shaping, preserving the same optimal policies as
the original environment, and (ii) maintains policy gradient update directions
identical to those under the original sparse reward, ensuring unbiased credit
signals. Empirical results on two challenging benchmarks, SMACLite and Google
Research Football, demonstrate that $TAR^2$ significantly stabilizes and
accelerates convergence, outperforming strong baselines like AREL and STAS in
both learning speed and final performance. These findings establish $TAR^2$ as
a principled and practical solution for agent-temporal credit assignment in
sparse-reward multi-agent systems.
|
2502.04870
|
IPSeg: Image Posterior Mitigates Semantic Drift in Class-Incremental
Segmentation
|
cs.CV
|
Class incremental learning aims to enable models to learn from sequential,
non-stationary data streams across different tasks without catastrophic
forgetting. In class incremental semantic segmentation (CISS), the semantic
content of image pixels evolves over incremental phases, known as semantic
drift. In this work, we identify two critical challenges in CISS that
contribute to semantic drift and degrade performance. First, we highlight the
issue of separate optimization, where different parts of the model are
optimized in distinct incremental stages, leading to misaligned probability
scales. Second, we identify noisy semantics arising from inappropriate
pseudo-labeling, which results in sub-optimal results. To address these
challenges, we propose a novel and effective approach, Image Posterior and
Semantics Decoupling for Segmentation (IPSeg). IPSeg introduces two key
mechanisms: (1) leveraging image posterior probabilities to align optimization
across stages and mitigate the effects of separate optimization, and (2)
employing semantics decoupling to handle noisy semantics and tailor learning
strategies for different semantics. Extensive experiments on the Pascal VOC
2012 and ADE20K datasets demonstrate that IPSeg achieves superior performance
compared to state-of-the-art methods, particularly in challenging long-term
incremental scenarios.
|
2502.04873
|
Training-free Task-oriented Grasp Generation
|
cs.RO
|
This paper presents a training-free pipeline for task-oriented grasp
generation that combines pre-trained grasp generation models with
vision-language models (VLMs). Unlike traditional approaches that focus solely
on stable grasps, our method incorporates task-specific requirements by
leveraging the semantic reasoning capabilities of VLMs. We evaluate five
querying strategies, each utilizing different visual representations of
candidate grasps, and demonstrate significant improvements over a baseline
method in both grasp success and task compliance rates, with absolute gains of
up to 36.9% in overall success rate. Our results underline the potential of
VLMs to enhance task-oriented manipulation, providing insights for future
research in robotic grasping and human-robot interaction.
|
2502.04874
|
The Role of Integrity Monitoring in Connected and Automated Vehicles:
Current State-of-Practice and Future Directions
|
cs.RO cs.SY eess.SY
|
Connected and Automated Vehicle (CAV) research has gained traction in the
last decade due to significant advancements in perception, navigation,
communication, and control functions. Accurate and reliable position
information is needed to meet the requirements of CAV applications, especially
when safety is concerned. With the advent of various perception sensors (e.g.,
cameras and LiDAR), vehicular positioning systems have improved in both
accuracy and robustness. Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure
(V2I) based cooperative positioning can improve the accuracy of the position
estimates, but the integrity risks involved in multi-sensor fusion in a
cooperative environment have not yet been fully explored. This paper reviews
existing research in the field of positioning Integrity Monitoring (IM) and
identifies various research gaps. Particular attention has been placed on
identifying research that highlights cooperative IM methods. This analysis
helps pave the way for the development of new IM frameworks for cooperative
positioning solutions in the future.
|
2502.04878
|
Sparse Autoencoders Do Not Find Canonical Units of Analysis
|
cs.LG cs.AI
|
A common goal of mechanistic interpretability is to decompose the activations
of neural networks into features: interpretable properties of the input
computed by the model. Sparse autoencoders (SAEs) are a popular method for
finding these features in LLMs, and it has been postulated that they can be
used to find a \textit{canonical} set of units: a unique and complete list of
atomic features. We cast doubt on this belief using two novel techniques: SAE
stitching to show they are incomplete, and meta-SAEs to show they are not
atomic. SAE stitching involves inserting or swapping latents from a larger SAE
into a smaller one. Latents from the larger SAE can be divided into two
categories: \emph{novel latents}, which improve performance when added to the
smaller SAE, indicating they capture novel information, and
\emph{reconstruction latents}, which can replace corresponding latents in the
smaller SAE that have similar behavior. The existence of novel features
indicates incompleteness of smaller SAEs. Using meta-SAEs -- SAEs trained on
the decoder matrix of another SAE -- we find that latents in SAEs often
decompose into combinations of latents from a smaller SAE, showing that larger
SAE latents are not atomic. The resulting decompositions are often
interpretable; e.g. a latent representing ``Einstein'' decomposes into
``scientist'', ``Germany'', and ``famous person''. Even if SAEs do not find
canonical units of analysis, they may still be useful tools. We suggest that
future research should either pursue different approaches for identifying such
units, or pragmatically choose the SAE size suited to their task. We provide an
interactive dashboard to explore meta-SAEs: https://metasaes.streamlit.app/
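The stitching operation described above can be sketched with toy numpy SAEs. The shapes, the ReLU encoder, and the choice of novel-latent indices below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_small, n_large = 16, 8, 32

# Toy SAE parameters: encoder maps activations to latents, decoder maps back.
W_enc_small = rng.normal(size=(d_model, n_small))
W_dec_small = rng.normal(size=(n_small, d_model))
W_enc_large = rng.normal(size=(d_model, n_large))
W_dec_large = rng.normal(size=(n_large, d_model))

def sae_reconstruct(x, W_enc, W_dec):
    """ReLU SAE forward pass: encode, rectify, decode."""
    z = np.maximum(x @ W_enc, 0.0)
    return z @ W_dec

def stitch(W_enc_s, W_dec_s, W_enc_l, W_dec_l, novel_idx):
    """Append selected 'novel' latents from the larger SAE to the smaller one."""
    W_enc = np.concatenate([W_enc_s, W_enc_l[:, novel_idx]], axis=1)
    W_dec = np.concatenate([W_dec_s, W_dec_l[novel_idx, :]], axis=0)
    return W_enc, W_dec

x = rng.normal(size=(4, d_model))
W_enc_st, W_dec_st = stitch(W_enc_small, W_dec_small,
                            W_enc_large, W_dec_large, [0, 5, 7])
# The stitched SAE has n_small + 3 latents and reconstructs in the same space.
assert W_enc_st.shape == (d_model, n_small + 3)
```

In the paper's setting the latents being inserted are those that reduce reconstruction loss when added; here the indices are arbitrary.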
|
2502.04879
|
Statistical Collusion by Collectives on Learning Platforms
|
stat.ML cs.LG
|
As platforms increasingly rely on learning algorithms, collectives may form
and seek ways to influence these platforms to align with their own interests.
This can be achieved by coordinated submission of altered data. To evaluate the
potential impact of such behavior, it is essential to understand the
computations that collectives must perform to impact platforms in this way. In
particular, collectives need to make a priori assessments of the effect of the
collective before taking action, as they may face potential risks when
modifying their data. Moreover, they need to develop implementable coordination
algorithms based on quantities that can be inferred from observed data. We
develop a framework that provides a theoretical and algorithmic treatment of
these issues and present experimental results in a product evaluation domain.
|
2502.04882
|
pytopicgram: A library for data extraction and topic modeling from
Telegram channels
|
cs.CL
|
Telegram is a popular platform for public communication, generating large
amounts of messages through its channels. pytopicgram is a Python library that
helps researchers collect, organize, and analyze these Telegram messages. The
library offers key features such as easy message retrieval, detailed channel
information, engagement metrics, and topic identification using advanced
modeling techniques. By simplifying data extraction and analysis, pytopicgram
allows users to understand how content spreads and how audiences interact on
Telegram. This paper describes the design, main features, and practical uses of
pytopicgram, showcasing its effectiveness for studying public conversations on
Telegram.
|
2502.04883
|
Evaluating Standard and Dialectal Frisian ASR: Multilingual Fine-tuning
and Language Identification for Improved Low-resource Performance
|
cs.CL cs.LG cs.SD eess.AS
|
Automatic Speech Recognition (ASR) performance for low-resource languages is
still far behind that of higher-resource languages such as English, due to a
lack of sufficient labeled data. State-of-the-art methods deploy
self-supervised transfer learning where a model pre-trained on large amounts of
data is fine-tuned using little labeled data in a target low-resource language.
In this paper, we present and examine a method for fine-tuning an SSL-based
model in order to improve the performance for Frisian and its regional dialects
(Clay Frisian, Wood Frisian, and South Frisian). We show that Frisian ASR
performance can be improved by using multilingual (Frisian, Dutch, English and
German) fine-tuning data and an auxiliary language identification task. In
addition, our findings show that performance on dialectal speech suffers
substantially, and, importantly, that this effect is moderated by the
elicitation approach used to collect the dialectal data. Our findings further
suggest that relying solely on standard language data for ASR evaluation may
underestimate real-world performance, particularly in languages
with substantial dialectal variation.
|
2502.04889
|
Any-stepsize Gradient Descent for Separable Data under Fenchel--Young
Losses
|
stat.ML cs.LG
|
Gradient descent (GD) is one of the most common optimizers in machine
learning. In particular, the loss landscape of a neural network typically
sharpens during the initial phase of training, making the training dynamics
hover on the edge of stability. This is beyond our standard understanding of
GD convergence in the stable regime, where an arbitrarily chosen stepsize is
sufficiently smaller than the edge of stability. Recently, Wu et al.
(COLT2024) showed that GD converges with an arbitrary stepsize for linearly
separable logistic regression. Although their analysis hinges on the
self-bounding property of the logistic loss, which seems to be a cornerstone to
establish a modified descent lemma, our pilot study shows that other loss
functions without the self-bounding property can make GD converge with
arbitrary stepsize. To further understand what property of a loss function
matters in GD, we aim to show arbitrary-stepsize GD convergence for a general
loss function based on the framework of \emph{Fenchel--Young losses}. We
essentially leverage the classical perceptron argument to derive the
convergence rate for achieving $\epsilon$-optimal loss, which is possible for a
majority of Fenchel--Young losses. Among typical loss functions, the Tsallis
entropy achieves the GD convergence rate $T=\Omega(\epsilon^{-1/2})$, and the
R{\'e}nyi entropy achieves the far better rate $T=\Omega(\epsilon^{-1/3})$. We
argue that these better rates are possible because of the \emph{separation
margin} of loss functions, rather than the self-bounding property.
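The headline phenomenon, GD converging on separable logistic regression even with a stepsize far above the classical smoothness bound, can be checked numerically. The data, margin, and stepsize below are arbitrary illustrative choices, not the paper's Fenchel-Young construction:

```python
import numpy as np

rng = np.random.default_rng(1)
# Separable toy data: label is the sign of the first coordinate,
# then shifted so that w = (1, 0) separates with margin 0.5.
X = rng.normal(size=(100, 2))
y = np.where(X[:, 0] > 0, 1.0, -1.0)
X[:, 0] += y * 0.5

def logistic_loss(w):
    # logaddexp(0, -m) = log(1 + exp(-m)), numerically stable.
    return np.mean(np.logaddexp(0.0, -y * (X @ w)))

def grad(w):
    s = -y / (1.0 + np.exp(y * (X @ w)))  # overflow harmlessly saturates to 0
    return (s[:, None] * X).mean(axis=0)

w = np.zeros(2)
eta = 10.0  # deliberately far above any 1/smoothness prescription
for _ in range(2000):
    w -= eta * grad(w)

# Despite the huge stepsize, the loss is driven close to zero.
assert logistic_loss(w) < 0.1
```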
|
2502.04890
|
Exploit Gradient Skewness to Circumvent Byzantine Defenses for Federated
Learning
|
cs.LG
|
Federated Learning (FL) is notorious for its vulnerability to Byzantine
attacks. Most current Byzantine defenses share a common inductive bias: among
all the gradients, the densely distributed ones are more likely to be honest.
However, such a bias is a poison to Byzantine robustness due to a newly
discovered phenomenon in this paper - gradient skew. We discover that a group
of densely distributed honest gradients skew away from the optimal gradient
(the average of honest gradients) due to heterogeneous data. This gradient skew
phenomenon allows Byzantine gradients to hide within the densely distributed
skewed gradients. As a result, Byzantine defenses are confused into believing
that Byzantine gradients are honest. Motivated by this observation, we propose
a novel skew-aware attack called STRIKE: first, we search for the skewed
gradients; then, we construct Byzantine gradients within the skewed gradients.
Experiments on three benchmark datasets validate the effectiveness of our
attack.
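The skew phenomenon itself is easy to reproduce with synthetic gradients. The cluster locations, sizes, and noise scales below are made up for illustration; this is not the STRIKE attack:

```python
import numpy as np

rng = np.random.default_rng(0)
# Heterogeneous honest gradients: a dense majority cluster plus a minority group.
dense = rng.normal(loc=1.0, scale=0.1, size=(40, 5))
minority = rng.normal(loc=-3.0, scale=0.1, size=(10, 5))
honest = np.vstack([dense, minority])

optimal = honest.mean(axis=0)       # average of all honest gradients
dense_center = dense.mean(axis=0)   # where gradients are densely distributed
skew = np.linalg.norm(dense_center - optimal)

# A Byzantine gradient hidden near the dense cluster sits much closer to the
# dense region than the optimal gradient does, so density-based defenses
# accept it while the update is pulled away from the honest average.
byzantine = dense_center + rng.normal(scale=0.05, size=5)
assert skew > np.linalg.norm(byzantine - dense_center)
```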
|
2502.04891
|
GNNs Getting ComFy: Community and Feature Similarity Guided Rewiring
|
cs.LG cs.SI stat.ML
|
Maximizing the spectral gap through graph rewiring has been proposed to
enhance the performance of message-passing graph neural networks (GNNs) by
addressing over-squashing. However, as we show, minimizing the spectral gap can
also improve generalization. To explain this, we analyze how rewiring can
benefit GNNs within the context of stochastic block models. Since spectral gap
optimization primarily influences community strength, it improves performance
when the community structure aligns with node labels. Building on this insight,
we propose three distinct rewiring strategies that explicitly target community
structure, node labels, and their alignment: (a) community structure-based
rewiring (ComMa), a more computationally efficient alternative to spectral gap
optimization that achieves similar goals; (b) feature similarity-based rewiring
(FeaSt), which focuses on maximizing global homophily; and (c) a hybrid
approach (ComFy), which enhances local feature similarity while preserving
community structure to optimize label-community alignment. Extensive
experiments confirm the effectiveness of these strategies and support our
theoretical insights.
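Since all three strategies revolve around the spectral gap and community strength, a small numpy sketch of measuring the gap (the second-smallest eigenvalue of the normalized Laplacian) may help; the example graphs are illustrative:

```python
import numpy as np

def spectral_gap(adj):
    """Second-smallest eigenvalue of the symmetric normalized Laplacian.
    A small gap indicates a strong community bottleneck."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    return np.sort(np.linalg.eigvalsh(lap))[1]

# Two triangles joined by one bridge edge (strong communities) vs. a 6-cycle.
two_cliques = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    two_cliques[i, j] = two_cliques[j, i] = 1.0
cycle = np.zeros((6, 6))
for i in range(6):
    cycle[i, (i + 1) % 6] = cycle[(i + 1) % 6, i] = 1.0

# The bridged-cliques graph has the smaller gap: tighter community structure.
assert spectral_gap(two_cliques) < spectral_gap(cycle)
```

Rewiring toward or away from community structure then amounts to adding or removing edges and observing how this quantity moves.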
|
2502.04892
|
A Foundational Brain Dynamics Model via Stochastic Optimal Control
|
cs.LG q-bio.NC stat.ML
|
We introduce a foundational model for brain dynamics that utilizes stochastic
optimal control (SOC) and amortized inference. Our method features a
continuous-discrete state space model (SSM) that can robustly handle the
intricate and noisy nature of fMRI signals. To address computational
limitations, we implement an approximation strategy grounded in the SOC
framework. Additionally, we present a simulation-free latent dynamics approach
that employs locally linear approximations, facilitating efficient and scalable
inference. For effective representation learning, we derive an Evidence Lower
Bound (ELBO) from the SOC formulation, which integrates smoothly with recent
advancements in self-supervised learning (SSL), thereby promoting robust and
transferable representations. Pre-trained on extensive datasets such as the
UKB, our model attains state-of-the-art results across a variety of downstream
tasks, including demographic prediction, trait analysis, disease diagnosis, and
prognosis. Moreover, evaluating on external datasets such as HCP-A, ABIDE, and
ADHD200 further validates its superior abilities and resilience across
different demographic and clinical distributions. Our foundational model
provides a scalable and efficient approach for deciphering brain dynamics,
opening up numerous applications in neuroscience.
|
2502.04895
|
Deep Learning Models for Physical Layer Communications
|
cs.LG eess.SP
|
The increased availability of data and computing resources has enabled
researchers to successfully adopt machine learning (ML) techniques and make
significant contributions in several engineering areas. ML and in particular
deep learning (DL) algorithms have been shown to perform better in tasks where a
physical bottom-up description of the phenomenon is lacking and/or is
mathematically intractable. Indeed, they take advantage of the observations of
natural phenomena to automatically acquire knowledge and learn internal
relations. Despite the historical model-based mindset, communications
engineering recently started shifting the focus towards top-down data-driven
learning models, especially in domains such as channel modeling and physical
layer design, where in most of the cases no general optimal strategies are
known.
In this thesis, we aim at solving some fundamental open challenges in
physical layer communications exploiting new DL paradigms. In particular, we
mathematically formulate, under ML terms, classic problems such as channel
capacity and optimal coding-decoding schemes, for any arbitrary communication
medium. We design and develop the architecture, algorithm and code necessary to
train the equivalent DL model, and finally, we propose novel solutions to
long-standing problems in the field.
|
2502.04896
|
Goku: Flow Based Video Generative Foundation Models
|
cs.CV
|
This paper introduces Goku, a state-of-the-art family of joint
image-and-video generation models leveraging rectified flow Transformers to
achieve industry-leading performance. We detail the foundational elements
enabling high-quality visual generation, including the data curation pipeline,
model architecture design, flow formulation, and advanced infrastructure for
efficient and robust large-scale training. The Goku models demonstrate superior
performance in both qualitative and quantitative evaluations, setting new
benchmarks across major tasks. Specifically, Goku achieves 0.76 on GenEval and
83.65 on DPG-Bench for text-to-image generation, and 84.85 on VBench for
text-to-video tasks. We believe that this work provides valuable insights and
practical advancements for the research community in developing joint
image-and-video generation models.
|
2502.04898
|
ARTInp: CBCT-to-CT Image Inpainting and Image Translation in
Radiotherapy
|
eess.IV cs.AI cs.CV
|
A key step in Adaptive Radiation Therapy (ART) workflows is the evaluation of
the patient's anatomy at treatment time to ensure the accuracy of the delivery.
To this end, Cone Beam Computerized Tomography (CBCT) is widely used, being
cost-effective and easy to integrate into the treatment process. Nonetheless,
CBCT images have lower resolution and more artifacts than CT scans, making them
less reliable for precise treatment validation. Moreover, in complex treatments
such as Total Marrow and Lymph Node Irradiation (TMLI), where full-body
visualization of the patient is critical for accurate dose delivery, the CBCT
images are often discontinuous, leaving gaps that could contain relevant
anatomical information. To address these limitations, we propose ARTInp
(Adaptive Radiation Therapy Inpainting), a novel deep-learning framework
combining image inpainting and CBCT-to-CT translation. ARTInp employs a
dual-network approach: a completion network that fills anatomical gaps in CBCT
volumes and a custom Generative Adversarial Network (GAN) to generate
high-quality synthetic CT (sCT) images. We trained ARTInp on a dataset of
paired CBCT and CT images from the SynthRad 2023 challenge, and the performance
achieved on a test set of 18 patients demonstrates its potential for enhancing
CBCT-based workflows in radiotherapy.
|
2502.04899
|
Unified Approaches in Self-Supervised Event Stream Modeling: Progress
and Prospects
|
cs.LG cs.AI
|
The proliferation of digital interactions across diverse domains, such as
healthcare, e-commerce, gaming, and finance, has resulted in the generation of
vast volumes of event stream (ES) data. ES data comprises continuous sequences
of timestamped events that encapsulate detailed contextual information relevant
to each domain. While ES data holds significant potential for extracting
actionable insights and enhancing decision-making, its effective utilization is
hindered by challenges such as the scarcity of labeled data and the fragmented
nature of existing research efforts. Self-Supervised Learning (SSL) has emerged
as a promising paradigm to address these challenges by enabling the extraction
of meaningful representations from unlabeled ES data. In this survey, we
systematically review and synthesize SSL methodologies tailored for ES modeling
across multiple domains, bridging the gaps between domain-specific approaches
that have traditionally operated in isolation. We present a comprehensive
taxonomy of SSL techniques, encompassing both predictive and contrastive
paradigms, and analyze their applicability and effectiveness within different
application contexts. Furthermore, we identify critical gaps in current
research and propose a future research agenda aimed at developing scalable,
domain-agnostic SSL frameworks for ES modeling. By unifying disparate research
efforts and highlighting cross-domain synergies, this survey aims to accelerate
innovation, improve reproducibility, and expand the applicability of SSL to
diverse real-world ES challenges.
|
2502.04901
|
On the Difficulty of Constructing a Robust and Publicly-Detectable
Watermark
|
cs.CR cs.LG
|
This work investigates the theoretical boundaries of creating
publicly-detectable schemes to enable the provenance of watermarked imagery.
Metadata-based approaches like C2PA provide unforgeability and
public-detectability. ML techniques offer robust retrieval and watermarking.
However, no existing scheme combines robustness, unforgeability, and
public-detectability. In this work, we formally define such a scheme and
establish its existence. Although theoretically possible, we find that at
present, it is intractable to build certain components of our scheme without a
leap in deep learning capabilities. We analyze these limitations and propose
research directions that need to be addressed before we can practically realize
robust and publicly-verifiable provenance.
|
2502.04903
|
Wavelet-Assisted Multi-Frequency Attention Network for Pansharpening
|
eess.IV cs.AI cs.CV
|
Pansharpening aims to combine a high-resolution panchromatic (PAN) image with
a low-resolution multispectral (LRMS) image to produce a high-resolution
multispectral (HRMS) image. Although pansharpening in the frequency domain
offers clear advantages, most existing methods either continue to operate
solely in the spatial domain or fail to fully exploit the benefits of the
frequency domain. To address this issue, we propose
Multi-Frequency Fusion Attention (MFFA), which leverages wavelet transforms to
cleanly separate frequencies and enable lossless reconstruction across
different frequency domains. Then, we generate Frequency-Query, Spatial-Key,
and Fusion-Value based on the physical meanings represented by different
features, which enables a more effective capture of specific information in the
frequency domain. Additionally, we focus on the preservation of frequency
features across different operations. On a broader level, our network employs a
wavelet pyramid to progressively fuse information across multiple scales.
Compared to previous frequency domain approaches, our network better prevents
confusion and loss of different frequency features during the fusion process.
Quantitative and qualitative experiments on multiple datasets demonstrate that
our method outperforms existing approaches and shows significant generalization
capabilities for real-world scenarios.
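The clean frequency separation with lossless reconstruction that the abstract attributes to wavelet transforms can be illustrated with a one-level Haar transform in numpy. Haar is a stand-in here; the paper's actual wavelet choice is not specified in the abstract:

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar transform: LL, LH, HL, HH sub-bands."""
    a = (img[0::2] + img[1::2]) / 2  # row averages
    d = (img[0::2] - img[1::2]) / 2  # row differences
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse: the sub-bands reconstruct the image losslessly."""
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    img = np.empty((2 * h, 2 * w))
    img[0::2], img[1::2] = a + d, a - d
    return img

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8))
assert np.allclose(haar_idwt2(*haar_dwt2(x)), x)
```

Applying the decomposition recursively to the LL band yields the wavelet pyramid used for multi-scale fusion.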
|
2502.04907
|
Scalable and consistent embedding of probability measures into Hilbert
spaces via measure quantization
|
stat.ML cs.LG
|
This paper is focused on statistical learning from data that come as
probability measures. In this setting, popular approaches consist in embedding
such data into a Hilbert space with either Linearized Optimal Transport or
Kernel Mean Embedding. However, the cost of computing such embeddings prohibits
their direct use in large-scale settings. We study two methods based on measure
quantization for approximating input probability measures with discrete
measures of small-support size. The first one is based on optimal quantization
of each input measure, while the second one relies on mean-measure
quantization. We study the consistency of such approximations and their
implications for scalable embeddings of probability measures into a Hilbert
space at a low computational cost. We finally illustrate our findings with
various numerical experiments.
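A sketch of the first approach: quantize each input measure with Lloyd's k-means, then compare measures through their kernel mean embeddings. The RBF kernel, bandwidth, and support sizes are illustrative assumptions:

```python
import numpy as np

def quantize(points, k, iters=50, seed=0):
    """Lloyd's k-means: approximate an empirical measure by k weighted atoms."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        dist = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dist.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = points[labels == j].mean(0)
    weights = np.bincount(labels, minlength=k) / len(points)
    return centers, weights

def sq_mmd(x, wx, y, wy, gamma=1.0):
    """Squared MMD between weighted discrete measures under an RBF kernel,
    i.e. the squared distance between their kernel mean embeddings."""
    kern = lambda a, b: np.exp(-gamma * ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))
    return wx @ kern(x, x) @ wx + wy @ kern(y, y) @ wy - 2 * wx @ kern(x, y) @ wy

rng = np.random.default_rng(0)
sample = rng.normal(size=(500, 2))
full_w = np.full(len(sample), 1 / len(sample))
atoms8, w8 = quantize(sample, k=8)
atoms2, w2 = quantize(sample, k=2)
# More atoms track the original measure more closely in the embedding,
# while downstream kernel computations only ever touch k points.
assert sq_mmd(sample, full_w, atoms8, w8) < sq_mmd(sample, full_w, atoms2, w2)
```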
|
2502.04908
|
Effective Sampling for Robot Motion Planning Through the Lens of
Lattices
|
cs.RO cs.CG cs.DM
|
Sampling-based methods for motion planning, which capture the structure of
the robot's free space via (typically random) sampling, have gained popularity
due to their scalability, simplicity, and for offering global guarantees, such
as probabilistic completeness and asymptotic optimality. Unfortunately, the
practicality of those guarantees remains limited as they do not provide
insights into the behavior of motion planners for a finite number of samples
(i.e., a finite running time). In this work, we harness lattice theory and the
concept of $(\delta,\epsilon)$-completeness by Tsao et al. (2020) to construct
deterministic sample sets that endow their planners with strong finite-time
guarantees while minimizing running time. In particular, we introduce a
highly-efficient deterministic sampling approach based on the $A_d^*$ lattice,
which is the best-known geometric covering in dimensions $\leq 21$. Using our
new sampling approach, we obtain at least an order-of-magnitude speedup over
existing deterministic and uniform random sampling methods for complex
motion-planning problems. Overall, our work provides deep mathematical insights
while advancing the practical applicability of sampling-based motion planning.
|
2502.04910
|
On the Power of Heuristics in Temporal Graphs
|
cs.LG
|
Dynamic graph datasets often exhibit strong temporal patterns, such as
recency, which prioritizes recent interactions, and popularity, which favors
frequently occurring nodes. We demonstrate that simple heuristics leveraging
only these patterns can perform on par with or outperform state-of-the-art neural
network models under standard evaluation protocols. To further explore these
dynamics, we introduce metrics that quantify the impact of recency and
popularity across datasets. Our experiments on BenchTemp and the Temporal Graph
Benchmark show that our approaches achieve state-of-the-art performance across
all datasets in the latter and secure top ranks on multiple datasets in the
former. These results emphasize the importance of refined evaluation schemes to
enable fair comparisons and promote the development of more robust temporal
graph models. Additionally, they reveal that current deep learning methods
often struggle to capture the key patterns underlying predictions in real-world
temporal graphs. For reproducibility, we have made our code publicly available.
|
2502.04912
|
Joint Beamforming Design for Integrated Sensing and Communication
Systems with Hybrid-Colluding Eavesdroppers
|
eess.SY cs.SY
|
In this paper, we consider the physical layer security (PLS) problem for
integrated sensing and communication (ISAC) systems in the presence of
hybrid-colluding eavesdroppers, where an active eavesdropper (AE) and a passive
eavesdropper (PE) collude to intercept the confidential information. To ensure
the accuracy of sensing while preventing eavesdropping, a base station
transmits a signal consisting of information symbols and a sensing waveform, in
which the sensing waveform can also be used as artificial noise to interfere
with eavesdroppers. Under this setup, we propose an alternating
optimization-based two-stage scheme (AO-TSS) for improving the sensing and
communication performance. In the first stage, based on the assumptions that
the perfect channel state information (CSI) of the AE and statistical CSI of
the PE are known, the communication and sensing beamforming problem is
formulated with the objective of minimizing the weighted sum of the beampattern
matching mean squared error (MSE) and cross-correlation, subject to the secure
transmission constraint. To tackle the non-convexity, we propose a
semi-definite relaxation (SDR) algorithm and a reduced-complexity zero-forcing
(ZF) algorithm. Then, the scenarios are further extended to more general cases
with imperfect AE CSI and unknown PE CSI. To further improve the communication
performance, the second-stage problem is developed to optimize the secrecy rate
threshold under the radar performance constraint. Finally, numerical results
demonstrate the superiority of the proposed scheme in terms of sensing and
secure communication.
|
2502.04917
|
Complex Physics-Informed Neural Network
|
cs.LG cs.AI
|
We propose compleX-PINN, a novel physics-informed neural network (PINN)
architecture that incorporates a learnable activation function inspired by the
Cauchy integral theorem. By learning the parameters of the activation function,
compleX-PINN achieves high accuracy with just a single hidden layer. Empirical
results show that compleX-PINN effectively solves problems where traditional
PINNs struggle and consistently delivers significantly higher precision, often
by an order of magnitude.
|
2502.04918
|
Explainable and externally validated machine learning for
neuropsychiatric diagnosis via electrocardiograms
|
eess.SP cs.LG
|
Electrocardiogram (ECG) analysis has emerged as a promising tool for
identifying physiological changes associated with neuropsychiatric conditions.
The relationship between cardiovascular health and neuropsychiatric disorders
suggests that ECG abnormalities could serve as valuable biomarkers for more
efficient detection, therapy monitoring, and risk stratification. However, the
potential of the ECG to accurately distinguish neuropsychiatric conditions,
particularly among diverse patient populations, remains underexplored. This
study utilized ECG markers and basic demographic data to predict
neuropsychiatric conditions using machine learning models, with targets defined
through ICD-10 codes. Both internal and external validation were performed
using the MIMIC-IV and ECG-View datasets respectively. Performance was assessed
using AUROC scores. To enhance model interpretability, Shapley values were
applied to provide insights into the contributions of individual ECG features
to the predictions. Significant predictive performance was observed for
conditions within the neurological and psychiatric groups. For the neurological
group, Alzheimer's disease (G30) achieved an internal AUROC of 0.813
(0.812-0.814) and an external AUROC of 0.868 (0.867-0.868). In the psychiatric
group, unspecified dementia (F03) showed an internal AUROC of 0.849
(0.848-0.849) and an external AUROC of 0.862 (0.861-0.863). Discriminative
features align with known ECG markers but also hint at potentially new
markers. ECG offers significant promise for diagnosing and monitoring
neuropsychiatric conditions, with robust predictive performance across internal
and external cohorts. Future work should focus on addressing potential
confounders, such as therapy-related cardiotoxicity, and expanding the scope of
ECG applications, including personalized care and early intervention
strategies.
|
2502.04923
|
Cached Multi-Lora Composition for Multi-Concept Image Generation
|
cs.CV cs.AI
|
Low-Rank Adaptation (LoRA) has emerged as a widely adopted technique in
text-to-image models, enabling precise rendering of multiple distinct elements,
such as characters and styles, in multi-concept image generation. However,
current approaches face significant challenges when composing these LoRAs for
multi-concept image generation, resulting in diminished generated image
quality. In this paper, we initially investigate the role of LoRAs in the
denoising process through the lens of the Fourier frequency domain. Based on
the hypothesis that applying multiple LoRAs could lead to "semantic conflicts",
we find that certain LoRAs amplify high-frequency features such as edges and
textures, whereas others mainly focus on low-frequency elements, including the
overall structure and smooth color gradients. Building on these insights, we
devise a frequency domain based sequencing strategy to determine the optimal
order in which LoRAs should be integrated during inference. This strategy
offers a methodical and generalizable solution compared to the naive
integration commonly found in existing LoRA fusion techniques. To fully
leverage our proposed LoRA order sequence determination method in multi-LoRA
composition tasks, we introduce a novel, training-free framework, Cached
Multi-LoRA (CMLoRA), designed to efficiently integrate multiple LoRAs while
maintaining cohesive image generation. With its flexible backbone for
multi-LoRA fusion and a non-uniform caching strategy tailored to individual
LoRAs, CMLoRA has the potential to reduce semantic conflicts in LoRA
composition and improve computational efficiency. Our experimental evaluations
demonstrate that CMLoRA outperforms state-of-the-art training-free LoRA fusion
methods by a significant margin -- it achieves an average improvement of
$2.19\%$ in CLIPScore, and $11.25\%$ in MLLM win rate compared to LoraHub, LoRA
Composite, and LoRA Switch.
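A rough proxy for the Fourier-domain analysis described above is the share of spectral energy at high radial frequencies, which distinguishes edge/texture-heavy content from smooth, low-frequency structure. The cutoff and the test images are arbitrary:

```python
import numpy as np

def high_freq_ratio(img, cutoff=0.25):
    """Fraction of 2-D spectral energy above a radial frequency cutoff;
    a crude proxy for whether content emphasizes edges and textures."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    energy = np.abs(F) ** 2
    return energy[r > cutoff].sum() / energy.sum()

rng = np.random.default_rng(0)
# Smooth low-frequency bump vs. broadband noise (texture-like).
s = np.sin(np.linspace(0, np.pi, 64))
smooth = np.outer(s, s)
noisy = rng.normal(size=(64, 64))
assert high_freq_ratio(noisy) > high_freq_ratio(smooth)
```

In the paper's setting the inputs would be denoising deltas produced with and without a given LoRA; here plain images stand in for them.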
|
2502.04925
|
Convergent NMPC-based Reinforcement Learning Using Deep Expected Sarsa
and Nonlinear Temporal Difference Learning
|
eess.SY cs.RO cs.SY
|
In this paper, we present a learning-based nonlinear model predictive
controller (NMPC) using an original reinforcement learning (RL) method to learn
the optimal weights of the NMPC scheme. The controller is used as the current
action-value function of a deep Expected Sarsa where the subsequent
action-value function, usually obtained with a secondary NMPC, is approximated
with a neural network (NN). With respect to existing methods, we add to the
NN's input the current value of the NMPC's learned parameters so that the
network is able to approximate the action-value function and stabilize the
learning performance. Additionally, with the use of the NN, the real-time
computational burden is approximately halved without affecting the closed-loop
performance. Furthermore, we combine gradient temporal difference methods with
parametrized NMPC as function approximator of the Expected Sarsa RL method to
overcome the potential parameters divergence and instability issues when
nonlinearities are present in the function approximation. Simulation results
show that the proposed approach converges to a locally optimal solution
without instability problems.
|
2502.04928
|
Generative-enhanced optimization for knapsack problems: an
industry-relevant study
|
cs.LG quant-ph
|
Optimization is a crucial task in various industries such as logistics,
aviation, manufacturing, chemical, pharmaceutical, and insurance, where finding
the best solution to a problem can result in significant cost savings and
increased efficiency. Tensor networks (TNs) have gained prominence in recent
years in modeling classical systems with quantum-inspired approaches. More
recently, TN generative-enhanced optimization (TN-GEO) has been proposed as a
strategy which uses generative modeling to efficiently sample valid solutions
with respect to certain constraints of optimization problems. Moreover, it has
been shown that symmetric TNs (STNs) can encode certain constraints of
optimization problems, thus aiding in their solution process. In this work, we
investigate the applicability of TN- and STN-GEO to an industry-relevant
problem class, a multi-knapsack problem, in which each object must be assigned
to an available knapsack. We detail a prescription for practitioners to use the
TN- and STN-GEO methodology and study its scaling behavior and dependence on
its hyper-parameters. We benchmark 60 different problem instances and find that
TN-GEO and STN-GEO produce results of similar quality to simulated annealing.
|
2502.04935
|
Conformal Prediction for Electricity Price Forecasting in the Day-Ahead
and Real-Time Balancing Market
|
cs.LG cs.AI
|
The integration of renewable energy into electricity markets poses
significant challenges to price stability and increases the complexity of
market operations. Accurate and reliable electricity price forecasting is
crucial for effective market participation, where price dynamics can be
significantly more challenging to predict. Probabilistic forecasting, through
prediction intervals, efficiently quantifies the inherent uncertainties in
electricity prices, supporting better decision-making for market participants.
This study explores the enhancement of probabilistic price prediction using
Conformal Prediction (CP) techniques, specifically Ensemble Batch Prediction
Intervals and Sequential Predictive Conformal Inference. These methods provide
precise and reliable prediction intervals, outperforming traditional models in
validity metrics. We propose an ensemble approach that combines the efficiency
of quantile regression models with the robust coverage properties of time
series adapted CP techniques. This ensemble delivers both narrow prediction
intervals and high coverage, leading to more reliable and accurate forecasts.
We further evaluate the practical implications of CP techniques through a
simulated trading algorithm applied to a battery storage system. The ensemble
approach demonstrates improved financial returns in energy trading in both the
Day-Ahead and Balancing Markets, highlighting its practical benefits for market
participants.
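As a minimal illustration of how a conformal prediction interval is formed from calibration residuals, the following sketch implements plain split conformal prediction; the ensemble and sequential time-series-adapted variants evaluated in the paper are more involved, and the price values here are invented:

```python
import math

def split_conformal_interval(cal_residuals, y_pred, alpha=0.1):
    """Split conformal prediction: interval width from calibration residuals.

    cal_residuals: absolute errors |y - y_hat| on a held-out calibration set.
    Returns a (lower, upper) interval with ~(1 - alpha) marginal coverage.
    """
    n = len(cal_residuals)
    scores = sorted(cal_residuals)
    # conformal quantile: the ceil((n + 1) * (1 - alpha))-th smallest score
    k = min(n - 1, math.ceil((n + 1) * (1 - alpha)) - 1)
    q = scores[k]
    return y_pred - q, y_pred + q

# toy usage: residuals from a hypothetical day-ahead price model (EUR/MWh)
residuals = [1.2, 0.5, 2.1, 0.9, 1.7, 0.3, 2.8, 1.1, 0.6, 1.9]
lo, hi = split_conformal_interval(residuals, y_pred=85.0, alpha=0.2)
```

The coverage guarantee of this construction is marginal and assumes exchangeability, which is why the paper relies on time-series-adapted CP techniques instead.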
|
2502.04937
|
Data-driven Modality Fusion: An AI-enabled Framework for Large-Scale
Sensor Network Management
|
cs.NI cs.AI cs.LG
|
The development and operation of smart cities rely heavily on large-scale
Internet-of-Things (IoT) networks and sensor infrastructures that continuously
monitor various aspects of urban environments. These networks generate vast
amounts of data, posing challenges related to bandwidth usage, energy
consumption, and system scalability. This paper introduces a novel sensing
paradigm called Data-driven Modality Fusion (DMF), designed to enhance the
efficiency of smart city IoT network management. By leveraging correlations
between time-series data from different sensing modalities, the proposed DMF
approach reduces the number of physical sensors required for monitoring,
thereby minimizing energy expenditure, communication bandwidth, and overall
deployment costs. The framework relocates computational complexity from the
edge devices to the core, ensuring that resource-constrained IoT devices are
not burdened with intensive processing tasks. DMF is validated using data from
a real-world IoT deployment in Madrid, demonstrating the effectiveness of the
proposed system in accurately estimating traffic, environmental, and pollution
metrics from a reduced set of sensors. The proposed solution offers a scalable,
efficient mechanism for managing urban IoT networks, while addressing issues of
sensor failure and privacy concerns.
|
2502.04942
|
WikiReddit: Tracing Information and Attention Flows Between Online
Platforms
|
cs.CY cs.DB cs.HC cs.SI
|
The World Wide Web is a complex interconnected digital ecosystem, where
information and attention flow between platforms and communities throughout the
globe. These interactions co-construct how we understand the world, reflecting
and shaping public discourse. Unfortunately, researchers often struggle to
understand how information circulates and evolves across the web because
platform-specific data is often siloed and restricted by linguistic barriers.
To address this gap, we present a comprehensive, multilingual dataset capturing
all Wikipedia links shared in posts and comments on Reddit from 2020 to 2023,
excluding those from private and NSFW subreddits. Each linked Wikipedia article
is enriched with revision history, page view data, article ID, redirects, and
Wikidata identifiers. Through a research agreement with Reddit, our dataset
ensures user privacy while providing a query and ID mechanism that integrates
with the Reddit and Wikipedia APIs. This enables extended analyses for
researchers studying how information flows across platforms. For example,
Reddit discussions use Wikipedia for deliberation and fact-checking, which
subsequently influences Wikipedia content by driving traffic to articles or
inspiring edits. By analyzing the relationship between information shared and
discussed on these platforms, our dataset provides a foundation for examining
the interplay between social media discourse and collaborative knowledge
consumption and production.
|
2502.04946
|
SurGen: 1020 H&E-stained Whole Slide Images With Survival and Genetic
Markers
|
cs.CV
|
$\textbf{Background}$: Cancer remains one of the leading causes of morbidity
and mortality worldwide. Comprehensive datasets that combine histopathological
images with genetic and survival data across various tumour sites are essential
for advancing computational pathology and personalised medicine.
$\textbf{Results}$: We present SurGen, a dataset comprising 1,020 H&E-stained
whole slide images (WSIs) from 843 colorectal cancer cases. The dataset
includes detailed annotations for key genetic mutations (KRAS, NRAS, BRAF) and
mismatch repair status, as well as survival data for 426 cases. To demonstrate
SurGen's practical utility, we conducted a proof-of-concept machine learning
experiment predicting mismatch repair status from the WSIs, achieving a test
AUROC of 0.8316. These preliminary results underscore the dataset's potential
to facilitate research in biomarker discovery, prognostic modelling, and
advanced machine learning applications in colorectal cancer.
$\textbf{Conclusions}$: SurGen offers a valuable resource for the scientific
community, enabling studies that require high-quality WSIs linked with
comprehensive clinical and genetic information on colorectal cancer. Our
initial findings affirm the dataset's capacity to advance diagnostic precision
and foster the development of personalised treatment strategies in colorectal
oncology. Data available online at https://doi.org/10.6019/S-BIAD1285.
|
2502.04949
|
Does Unsupervised Domain Adaptation Improve the Robustness of Amortized
Bayesian Inference? A Systematic Evaluation
|
stat.ML cs.LG stat.ME
|
Neural networks are fragile when confronted with data that significantly
deviates from their training distribution. This is true in particular for
simulation-based inference methods, such as neural amortized Bayesian inference
(ABI), where models trained on simulated data are deployed on noisy real-world
observations. Recent robust approaches employ unsupervised domain adaptation
(UDA) to match the embedding spaces of simulated and observed data. However,
the lack of comprehensive evaluations across different domain mismatches raises
concerns about the reliability in high-stakes applications. We address this gap
by systematically testing UDA approaches across a wide range of
misspecification scenarios in both a controlled and a high-dimensional
benchmark. We demonstrate that aligning summary spaces between domains
effectively mitigates the impact of unmodeled phenomena or noise. However, the
same alignment mechanism can lead to failures under prior misspecifications - a
critical finding with practical consequences. Our results underscore the need
for careful consideration of misspecification types when using UDA techniques
to increase the robustness of ABI in practice.
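One common way UDA methods match simulated and observed summary spaces is by minimizing a distribution distance such as the maximum mean discrepancy (MMD). The 1-D estimator below is a generic sketch of that idea, not one of the specific approaches benchmarked in the paper:

```python
import math

def rbf_mmd2(xs, ys, gamma=1.0):
    """Squared MMD with an RBF kernel between two 1-D samples.

    A small alignment criterion of the kind UDA methods minimize to
    match simulated and observed embedding distributions.
    """
    k = lambda a, b: math.exp(-gamma * (a - b) ** 2)
    avg = lambda pa, pb: sum(k(a, b) for a in pa for b in pb) / (len(pa) * len(pb))
    return avg(xs, xs) + avg(ys, ys) - 2 * avg(xs, ys)

# identical samples give zero discrepancy; a shifted sample does not
same = rbf_mmd2([0.0, 0.1, 0.2], [0.0, 0.1, 0.2])
shifted = rbf_mmd2([0.0, 0.1, 0.2], [3.0, 3.1, 3.2])
```

Note that such alignment is indifferent to *why* the distributions differ, which is consistent with the paper's finding that the same mechanism helps under unmodeled noise but can fail under prior misspecification.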
|
2502.04951
|
The Rising Threat to Emerging AI-Powered Search Engines
|
cs.CR cs.AI cs.LG
|
Recent advancements in Large Language Models (LLMs) have significantly
enhanced the capabilities of AI-Powered Search Engines (AIPSEs), offering
precise and efficient responses by integrating external databases with
pre-existing knowledge. However, we observe that these AIPSEs raise risks such
as quoting malicious content or citing malicious websites, leading to harmful
or unverified information dissemination. In this study, we conduct the first
safety risk quantification on seven production AIPSEs by systematically
defining the threat model, risk level, and evaluating responses to various
query types. With data collected from PhishTank, ThreatBook, and LevelBlue, our
findings reveal that AIPSEs frequently generate harmful content that contains
malicious URLs even with benign queries (e.g., with benign keywords). We also
observe that querying URLs directly increases the risk level, while querying in
natural language mitigates such risk. We further perform two case studies
on online document spoofing and phishing to show the ease of deceiving AIPSEs
in the real-world setting. To mitigate these risks, we develop an agent-based
defense with a GPT-4o-based content refinement tool and an XGBoost-based URL
detector. Our evaluation shows that our defense can effectively reduce the risk
but with the cost of reducing available information. Our research highlights
the urgent need for robust safety measures in AIPSEs.
|
2502.04955
|
Claim Extraction for Fact-Checking: Data, Models, and Automated Metrics
|
cs.CL
|
In this paper, we explore the problem of Claim Extraction using one-to-many
text generation methods, comparing LLMs, small summarization models finetuned
for the task, and a previous NER-centric baseline QACG. As the current
publications on Claim Extraction, Fact Extraction, Claim Generation and
Check-worthy Claim Detection are quite scattered in their means and
terminology, we compile their common objectives, releasing the FEVERFact
dataset, with 17K atomic factual claims extracted from 4K contextualised
Wikipedia sentences, adapted from the original FEVER. We compile the known
objectives into an evaluation framework comprising Atomicity, Fluency,
Decontextualization, and Faithfulness, checked for each generated claim
separately, plus Focus and Coverage, measured against the full set of predicted
claims for a single input. For each metric, we implement a scale using a
reduction to an already-explored NLP task. We validate our metrics against
human grading of generic claims, finding that the model ranking on $F_{fact}$,
our hardest metric, did not change and that the evaluation framework
approximates human grading very closely in terms of $F_1$ and RMSE.
|
2502.04958
|
SSMLoRA: Enhancing Low-Rank Adaptation with State Space Model
|
cs.CL
|
Fine-tuning is a key approach for adapting language models to specific
downstream tasks, but updating all model parameters becomes impractical as
model sizes increase. Parameter-Efficient Fine-Tuning (PEFT) methods, such as
Low-Rank Adaptation (LoRA), address this challenge by introducing additional
adaptation parameters into pre-trained weight matrices. However, LoRA's
performance varies across different insertion points within the model,
highlighting potential parameter inefficiency due to unnecessary insertions. To
this end, we propose SSMLoRA (State Space Model Low-Rank Adaptation), an
extension of LoRA that incorporates a State Space Model (SSM) to interconnect
low-rank matrices. SSMLoRA ensures that performance is maintained even with
sparser insertions. SSMLoRA allows the model to not only map inputs to a
low-rank space for better feature extraction but also leverage the computations
from the previous low-rank space. Our method achieves comparable performance to
LoRA on the General Language Understanding Evaluation (GLUE) benchmark while
using only half the parameters. Additionally, due to its structure, SSMLoRA
shows promise in handling tasks with longer input sequences. Our code is
available at https://github.com/yuhkalhic/SSMLoRA.
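As background on the mechanism being extended, a plain LoRA layer keeps the pre-trained weight frozen and learns a low-rank update. This dependency-free sketch shows vanilla LoRA only; SSMLoRA's contribution, chaining the low-rank spaces of successive insertion points through an SSM, is not shown:

```python
import random

def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

class LoRALinear:
    """Frozen weight W plus a trainable low-rank update B @ A of rank r."""
    def __init__(self, W, r, scale=1.0):
        d_out, d_in = len(W), len(W[0])
        self.W = W                      # frozen pre-trained weight
        self.A = [[random.gauss(0, 0.01) for _ in range(d_in)]
                  for _ in range(r)]    # small random init
        self.B = [[0.0] * r for _ in range(d_out)]  # zero init: no-op at start
        self.scale = scale

    def forward(self, x):
        z = matvec(self.A, x)           # project input to the rank-r space
        return [w + self.scale * b
                for w, b in zip(matvec(self.W, x), matvec(self.B, z))]

# with B at zero, the layer exactly reproduces the frozen weight
layer = LoRALinear([[1.0, 0.0], [0.0, 1.0]], r=1)
y = layer.forward([2.0, 3.0])
```

In SSMLoRA, the intermediate representation `z` at one insertion point would additionally feed the SSM state consumed at the next insertion point, enabling the sparser insertions the abstract describes.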
|
2502.04959
|
No Task Left Behind: Isotropic Model Merging with Common and
Task-Specific Subspaces
|
cs.LG
|
Model merging integrates the weights of multiple task-specific models into a
single multi-task model. Despite recent interest in the problem, a significant
performance gap between the combined and single-task models remains. In this
paper, we investigate the key characteristics of task matrices -- weight update
matrices applied to a pre-trained model -- that enable effective merging. We
show that alignment between singular components of task-specific and merged
matrices strongly correlates with performance improvement over the pre-trained
model. Based on this, we propose an isotropic merging framework that flattens
the singular value spectrum of task matrices, enhances alignment, and reduces
the performance gap. Additionally, we incorporate both common and task-specific
subspaces to further improve alignment and performance. Our proposed approach
achieves state-of-the-art performance across multiple scenarios, including
various sets of tasks and model scales. This work advances the understanding of
model merging dynamics, offering an effective methodology to merge models
without requiring additional training. Code is available at
https://github.com/danielm1405/iso-merging.
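The central operation, flattening the singular value spectrum of a task matrix, can be sketched in a few lines. This toy version, which replaces all singular values by their mean, is an assumed flattening rule for illustration and omits the common/task-specific subspace handling:

```python
import numpy as np

def isotropic_flatten(task_matrix):
    """Replace a task matrix's singular values with their mean.

    Sketch of the 'flatten the singular value spectrum' idea only;
    the paper's framework also incorporates common and task-specific
    subspaces, which this toy rule ignores.
    """
    U, s, Vt = np.linalg.svd(task_matrix, full_matrices=False)
    s_iso = np.full_like(s, s.mean())   # isotropic spectrum, same scale
    return U @ np.diag(s_iso) @ Vt

M = np.diag([3.0, 1.0])                 # toy task matrix
M_iso = isotropic_flatten(M)            # singular values become [2, 2]
```

After flattening, no single singular direction dominates, which is the property the paper links to better alignment between task-specific and merged matrices.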
|
2502.04960
|
Commonality and Individuality! Integrating Humor Commonality with
Speaker Individuality for Humor Recognition
|
cs.CL
|
Humor recognition aims to identify whether a specific speaker's text is
humorous. Current methods for humor recognition mainly suffer from two
limitations: (1) they solely focus on one aspect of humor commonalities,
ignoring the multifaceted nature of humor; and (2) they typically overlook the
critical role of speaker individuality, which is essential for a comprehensive
understanding of humor expressions. To bridge these gaps, we introduce the
Commonality and Individuality Incorporated Network for Humor Recognition
(CIHR), a novel model designed to enhance humor recognition by integrating
multifaceted humor commonalities with the distinctive individuality of
speakers. The CIHR features a Humor Commonality Analysis module that explores
various perspectives of multifaceted humor commonality within user texts, and a
Speaker Individuality Extraction module that captures both static and dynamic
aspects of a speaker's profile to accurately model their distinctive
individuality. Additionally, Static and Dynamic Fusion modules are introduced
to effectively incorporate the humor commonality with the speaker's individuality
in the humor recognition process. Extensive experiments demonstrate the
effectiveness of CIHR, underscoring the importance of concurrently addressing
both multifaceted humor commonality and distinctive speaker individuality in
humor recognition.
|
2502.04963
|
Fast Adaptive Anti-Jamming Channel Access via Deep Q Learning and
Coarse-Grained Spectrum Prediction
|
cs.LG cs.AI
|
This paper investigates the anti-jamming channel access problem in complex
and unknown jamming environments, where the jammer could dynamically adjust its
strategies to target different channels. Traditional channel hopping
anti-jamming approaches using fixed patterns are ineffective against such
dynamic jamming attacks. Although the emerging deep reinforcement learning
(DRL)-based dynamic channel access approach could achieve the Nash equilibrium (NE)
under fast-changing jamming attacks, it requires extensive training episodes.
To address this issue, we propose a fast adaptive anti-jamming channel access
approach guided by the intuition of "learning faster than the jammer", where a
synchronously updated coarse-grained spectrum prediction serves as an auxiliary
task for the deep Q learning (DQN) based anti-jamming model. This helps the
model identify a superior Q-function compared to standard DRL while
significantly reducing the number of training episodes. Numerical results
indicate that the proposed approach significantly accelerates the rate of
convergence in model training, reducing the required training episodes by up to
70% compared to standard DRL. Additionally, it also achieves a 10% improvement
in throughput over NE strategies, owing to the effective use of coarse-grained
spectrum prediction.
|
2502.04964
|
CoCoA: A Generalized Approach to Uncertainty Quantification by
Integrating Confidence and Consistency of LLM Outputs
|
cs.CL
|
Uncertainty quantification (UQ) methods for Large Language Models (LLMs)
encompass a variety of approaches, with two major types being particularly
prominent: information-based, which focus on model confidence expressed as
token probabilities, and consistency-based, which assess the semantic
relationship between multiple outputs generated using repeated sampling.
Several recent methods have combined these two approaches and shown impressive
performance in various applications. However, they sometimes fail to outperform
much simpler baseline methods. Our investigation reveals distinctive
characteristics of LLMs as probabilistic models, which help to explain why
these UQ methods underperform in certain tasks. Based on these findings, we
propose a new way of synthesizing model confidence and output consistency that
leads to a family of efficient and robust UQ methods. We evaluate our approach
across a variety of tasks such as question answering, abstractive
summarization, and machine translation, demonstrating sizable improvements over
state-of-the-art UQ approaches.
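The two ingredients can be made concrete as follows. Multiplying a length-normalized sequence probability (confidence) by the agreement rate of repeated samples (consistency) is one naive synthesis, shown only for illustration; it is not the paper's actual combination rule:

```python
import math
from collections import Counter

def confidence_consistency_score(token_logprobs, sampled_answers):
    """Naively combine sequence confidence with sample consistency.

    Illustrative only: the paper proposes a different, more robust way
    of synthesizing these two signals.
    """
    # information-based: length-normalized sequence probability
    confidence = math.exp(sum(token_logprobs) / len(token_logprobs))
    # consistency-based: share of repeated samples agreeing with the mode
    mode_count = Counter(sampled_answers).most_common(1)[0][1]
    consistency = mode_count / len(sampled_answers)
    return confidence * consistency   # high only if both signals are high

# hypothetical generation: confident tokens, 3 of 4 samples agree
score = confidence_consistency_score(
    token_logprobs=[-0.1, -0.2, -0.05],
    sampled_answers=["Paris", "Paris", "Paris", "Lyon"],
)
```

A high score requires the model to be both confident in its tokens and stable under resampling, which is the intuition shared by the methods the abstract describes.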
|
2502.04967
|
Towards Smarter Sensing: 2D Clutter Mitigation in RL-Driven Cognitive
MIMO Radar
|
eess.SP cs.LG
|
Motivated by the growing interest in integrated sensing and communication for
6th generation (6G) networks, this paper presents a cognitive Multiple-Input
Multiple-Output (MIMO) radar system enhanced by reinforcement learning (RL) for
robust multitarget detection in dynamic environments. The system employs a
planar array configuration and adapts its transmitted waveforms and beamforming
patterns to optimize detection performance in the presence of unknown
two-dimensional (2D) disturbances. A robust Wald-type detector is integrated
with a SARSA-based RL algorithm, enabling the radar to learn and adapt to
complex clutter environments modeled by a 2D autoregressive process. Simulation
results demonstrate significant improvements in detection probability compared
to omnidirectional methods, particularly for low Signal-to-Noise Ratio (SNR)
targets masked by clutter.
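The on-policy update at the core of a SARSA-based learner is compact. This tabular sketch shows only the update rule; the actual system wraps it around a Wald-type detector and adaptive waveform/beamforming selection, and the states and reward here are hypothetical:

```python
def sarsa_update(Q, s, a, r, s2, a2, alpha=0.1, gamma=0.95):
    """Tabular SARSA: Q(s,a) += alpha * (r + gamma * Q(s',a') - Q(s,a))."""
    q = Q.get((s, a), 0.0)
    Q[(s, a)] = q + alpha * (r + gamma * Q.get((s2, a2), 0.0) - q)

# e.g. reward 1.0 for a successful detection while moving state 0 -> 1
Q = {}
sarsa_update(Q, s=0, a=1, r=1.0, s2=1, a2=0)
```

Being on-policy, SARSA evaluates the action actually taken next, which suits a radar that must keep sensing while it learns the clutter statistics.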
|
2502.04970
|
Gradient-based Explanations for Deep Learning Survival Models
|
stat.ML cs.LG
|
Deep learning survival models often outperform classical methods in
time-to-event predictions, particularly in personalized medicine, but their
"black box" nature hinders broader adoption. We propose a framework for
gradient-based explanation methods tailored to survival neural networks,
extending their use beyond regression and classification. We analyze the
implications of their theoretical assumptions for time-dependent explanations
in the survival setting and propose effective visualizations incorporating the
temporal dimension. Experiments on synthetic data show that gradient-based
methods capture the magnitude and direction of local and global feature
effects, including time dependencies. We introduce GradSHAP(t), a
gradient-based counterpart to SurvSHAP(t), which outperforms SurvSHAP(t) and
SurvLIME in a computational speed vs. accuracy trade-off. Finally, we apply
these methods to medical data with multi-modal inputs, revealing relevant
tabular features and visual patterns, as well as their temporal dynamics.
|
2502.04973
|
DE-PADA: Personalized Augmentation and Domain Adaptation for ECG
Biometrics Across Physiological States
|
cs.LG
|
Electrocardiogram (ECG)-based biometrics offer a promising method for user
identification, combining intrinsic liveness detection with morphological
uniqueness. However, elevated heart rates introduce significant physiological
variability, posing challenges to pattern recognition systems and leading to a
notable performance gap between resting and post-exercise conditions.
Addressing this gap is critical for advancing ECG-based biometric systems for
real-world applications. We propose DE-PADA, a Dual Expert model with
Personalized Augmentation and Domain Adaptation, designed to enhance robustness
across diverse physiological states. The model is trained primarily on
resting-state data from the evaluation dataset, without direct exposure to the
subjects' exercise data. To address variability, DE-PADA incorporates ECG-specific
innovations, including heartbeat segmentation into the PQRS interval, known for
its relative temporal consistency, and the heart rate-sensitive ST interval,
enabling targeted feature extraction tailored to each region's unique
characteristics. Personalized augmentation simulates subject-specific T-wave
variability across heart rates using individual T-wave peak predictions to
adapt augmentation ranges. Domain adaptation further improves generalization by
leveraging auxiliary data from supplementary subjects used exclusively for
training, including both resting and exercise conditions. Experiments on the
University of Toronto ECG Database demonstrate the model's effectiveness.
DE-PADA achieves relative improvements in post-exercise identification rates of
26.75% in the initial recovery phase and 11.72% in the late recovery phase,
while maintaining a 98.12% identification rate in the sitting position. These
results highlight DE-PADA's ability to address intra-subject variability and
enhance the robustness of ECG-based biometric systems across diverse
physiological states.
|
2502.04975
|
Training-free Neural Architecture Search through Variance of Knowledge
of Deep Network Weights
|
cs.CV
|
Deep learning has revolutionized computer vision, but it achieved its
tremendous success using deep network architectures which are mostly
hand-crafted and therefore likely suboptimal. Neural Architecture Search (NAS)
aims to bridge this gap by following a well-defined optimization paradigm which
systematically looks for the best architecture, given an objective criterion such
as maximal classification accuracy. The main limitation of NAS is however its
astronomical computational cost, as it typically requires training each
candidate network architecture from scratch.
In this paper, we aim to alleviate this limitation by proposing a novel
training-free proxy for image classification accuracy based on Fisher
Information. The proposed proxy has a strong theoretical background in
statistics and it allows estimating expected image classification accuracy of a
given deep network without training the network, thus significantly reducing
computational cost of standard NAS algorithms.
Our training-free proxy achieves state-of-the-art results on three public
datasets and in two search spaces, both when evaluated with previously
proposed metrics and with a new metric that we propose, which we demonstrate is
more informative for practical NAS applications. The source code
is publicly available at http://www.github.com/ondratybl/VKDNW
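To give a flavor of Fisher-based training-free scoring, the one-neuron toy below computes the trace of the empirical Fisher information of a logistic unit at initialization. It is a hypothetical mini-proxy in the same spirit; the paper's actual proxy is defined over full deep networks and its exact form is not reproduced here:

```python
import math, random

def fisher_trace_proxy(weights, inputs):
    """Trace of the empirical Fisher information for a logistic unit.

    Toy stand-in for Fisher-based zero-cost scoring: no training is
    needed, only forward passes through random inputs.
    """
    trace = 0.0
    for x in inputs:
        z = sum(w * xi for w, xi in zip(weights, x))
        p = 1.0 / (1.0 + math.exp(-z))
        # per-sample Fisher of a Bernoulli output: p(1-p) * sum(x_i^2)
        trace += p * (1 - p) * sum(xi * xi for xi in x)
    return trace / len(inputs)

random.seed(0)
xs = [[random.gauss(0, 1) for _ in range(4)] for _ in range(32)]
score = fisher_trace_proxy([0.1, -0.2, 0.3, 0.0], xs)
```

Because no gradient steps are taken, such a score costs a handful of forward passes per candidate architecture, which is what makes training-free NAS proxies attractive.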
|
2502.04979
|
Enhancing Pre-Trained Decision Transformers with Prompt-Tuning Bandits
|
cs.LG
|
Harnessing large offline datasets is vital for training foundation models
that can generalize across diverse tasks. Offline Reinforcement Learning (RL)
offers a powerful framework for these scenarios, enabling the derivation of
optimal policies even from suboptimal data. The Prompting Decision Transformer
(PDT) is an offline RL multi-task model that distinguishes tasks through
stochastic trajectory prompts, which are task-specific tokens maintained in
context during rollouts. However, PDT samples these tokens uniformly at random
from per-task demonstration datasets, failing to account for differences in
token informativeness and potentially leading to performance degradation. To
address this limitation, we introduce a scalable bandit-based prompt-tuning
method that dynamically learns to construct high-performance trajectory
prompts. Our approach significantly enhances downstream task performance
without modifying the pre-trained Transformer backbone. Empirical results on
benchmark tasks and a newly designed multi-task environment demonstrate the
effectiveness of our method, creating a seamless bridge between general
multi-task offline pre-training and task-specific online adaptation.
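Bandit-based prompt selection can be sketched with a standard UCB1 learner over candidate prompts. Here the per-prompt payoff is simulated with Bernoulli rewards; in the actual method the payoff would come from rolling out the frozen Prompting Decision Transformer with that trajectory prompt:

```python
import math, random

def ucb_prompt_selection(prompt_rewards, n_rounds, c=2.0, seed=0):
    """UCB1 bandit over candidate trajectory prompts.

    prompt_rewards: hypothetical per-prompt success probabilities,
    standing in for the downstream return of a PDT rollout.
    """
    rng = random.Random(seed)
    k = len(prompt_rewards)
    counts, sums = [0] * k, [0.0] * k
    for t in range(1, n_rounds + 1):
        if t <= k:
            arm = t - 1                      # play each prompt once
        else:
            arm = max(range(k), key=lambda a: sums[a] / counts[a]
                      + math.sqrt(c * math.log(t) / counts[a]))
        sums[arm] += 1.0 if rng.random() < prompt_rewards[arm] else 0.0
        counts[arm] += 1
    return max(range(k), key=lambda a: counts[a])   # most-played prompt

best = ucb_prompt_selection([0.2, 0.5, 0.9], n_rounds=500)
```

The exploration bonus shrinks as a prompt is sampled more often, so informative prompts are discovered without ever touching the pre-trained Transformer backbone.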
|
2502.04981
|
OccGS: Zero-shot 3D Occupancy Reconstruction with Semantic and
Geometric-Aware Gaussian Splatting
|
cs.CV
|
Obtaining semantic 3D occupancy from raw sensor data without manual
annotations remains an essential yet challenging task. While prior works have
approached this as a perception prediction problem, we formulate it as
scene-aware 3D occupancy reconstruction with geometry and semantics. In this
work, we propose OccGS, a novel 3D Occupancy reconstruction framework utilizing
Semantic and Geometric-Aware Gaussian Splatting in a zero-shot manner.
Leveraging semantics extracted from vision-language models and geometry guided
by LiDAR points, OccGS constructs Semantic and Geometric-Aware Gaussians from
raw multisensor data. We also develop a cumulative Gaussian-to-3D voxel
splatting method for reconstructing occupancy from the Gaussians. OccGS
performs favorably against self-supervised methods in occupancy prediction,
achieving comparable performance to fully supervised approaches and achieving
state-of-the-art performance on zero-shot semantic 3D occupancy estimation.
|
2502.04988
|
CMamba: Learned Image Compression with State Space Models
|
eess.IV cs.CV
|
Learned Image Compression (LIC) has explored various architectures, such as
Convolutional Neural Networks (CNNs) and transformers, in modeling image
content distributions in order to achieve compression effectiveness. However,
achieving high rate-distortion performance while maintaining low computational
complexity (i.e., parameters, FLOPs, and latency) remains challenging. In this
paper, we propose a hybrid Convolution and State Space Models (SSMs) based
image compression framework, termed \textit{CMamba}, to achieve superior
rate-distortion performance with low computational complexity. Specifically,
CMamba introduces two key components: a Content-Adaptive SSM (CA-SSM) module
and a Context-Aware Entropy (CAE) module. First, we observed that SSMs excel in
modeling overall content but tend to lose high-frequency details. In contrast,
CNNs are proficient at capturing local details. Motivated by this, we propose
the CA-SSM module that can dynamically fuse global content extracted by SSM
blocks and local details captured by CNN blocks in both encoding and decoding
stages. As a result, important image content is well preserved during
compression. Second, our proposed CAE module is designed to reduce spatial and
channel redundancies in latent representations after encoding. Specifically,
our CAE leverages SSMs to parameterize the spatial content in latent
representations. Benefiting from SSMs, CAE significantly improves spatial
compression efficiency while reducing spatial content redundancies. Moreover,
along the channel dimension, CAE reduces inter-channel redundancies of latent
representations in an autoregressive manner, which can fully exploit prior
knowledge from previous channels without sacrificing efficiency. Experimental
results demonstrate that CMamba achieves superior rate-distortion performance.
|
2502.04991
|
C2GM: Cascading Conditional Generation of Multi-scale Maps from Remote
Sensing Images Constrained by Geographic Features
|
eess.IV cs.CV
|
Multi-scale maps are essential representations of surveying and cartographic
results, serving as fundamental components of geographic services. Current
image generation networks can quickly produce map tiles from remote-sensing
images. However, generative models designed for natural images often focus on
texture features, neglecting the unique characteristics of remote-sensing
features and the scale attributes of tile maps. This limitation in generative
models impairs the accurate representation of geographic information, and the
quality of tile map generation still needs improvement. Diffusion models have
demonstrated remarkable success in various image generation tasks, highlighting
their potential to address this challenge. This paper presents C2GM, a novel
framework for generating multi-scale tile maps through conditional guided
diffusion and multi-scale cascade generation. Specifically, we implement a
conditional feature fusion encoder with a double-branch input of remote sensing
images and cascade references, extracting object priors to ensure an accurate
representation of complex features. Low-level generated tiles act as
constraints for high-level map generation, enhancing visual continuity.
Moreover, we incorporate map scale modality information using CLIP to simulate
the relationship between map scale and cartographic generalization in tile
maps. Extensive experimental evaluations demonstrate that C2GM consistently
achieves the state-of-the-art (SOTA) performance on all metrics, facilitating
the rapid and effective generation of multi-scale large-format maps for
emergency response and remote mapping applications.
|
2502.04995
|
A Variant of the Bravyi-Terhal Bound for Arbitrary Boundary Conditions
|
quant-ph cs.IT math.IT
|
We present a modified version of the Bravyi-Terhal bound that applies to
quantum codes defined by local parity-check constraints on a $D$-dimensional
lattice quotient. Specifically, we consider a quotient $\mathbb{Z}^D/\Lambda$
of $\mathbb{Z}^D$ of cardinality $n$, where $\Lambda$ is some $D$-dimensional
sublattice of $\mathbb{Z}^D$: we suppose that every vertex of this quotient
indexes $m$ qubits of a stabilizer code $C$, which therefore has length $nm$.
We prove that if all stabilizer generators act on qubits whose indices lie
within a ball of radius $\rho$, then the minimum distance $d$ of the code
satisfies $d \leq m\sqrt{\gamma_D}(\sqrt{D} + 4\rho)n^\frac{D-1}{D}$ whenever
$n^{1/D} \geq 8\rho\sqrt{\gamma_D}$, where $\gamma_D$ is the $D$-dimensional
Hermite constant. We apply this bound to derive an upper bound on the minimum
distance of Abelian Two-Block Group Algebra (2BGA) codes whose parity-check
matrices have the form $[\mathbf{A} \, \vert \, \mathbf{B}]$ with each
submatrix representing an element of a group algebra over a finite abelian
group.
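To make the bound concrete, it can be evaluated numerically for small $D$ using the known Hermite constants; the instance parameters below (a 2D layout with 2 qubits per site, radius-1 checks, 1024 sites) are arbitrary examples, not from the paper:

```python
import math

# Hermite constants gamma_D for small D (known exact values)
HERMITE = {1: 1.0, 2: 2 / math.sqrt(3), 3: 2 ** (1 / 3)}

def bt_distance_bound(D, m, rho, n):
    """Evaluate d <= m * sqrt(gamma_D) * (sqrt(D) + 4*rho) * n^((D-1)/D),
    valid under the hypothesis n^(1/D) >= 8 * rho * sqrt(gamma_D)."""
    g = HERMITE[D]
    assert n ** (1 / D) >= 8 * rho * math.sqrt(g), "bound hypothesis violated"
    return m * math.sqrt(g) * (math.sqrt(D) + 4 * rho) * n ** ((D - 1) / D)

# example: D=2, m=2 qubits per site, check radius rho=1, n=1024 sites
d_max = bt_distance_bound(D=2, m=2, rho=1, n=1024)
```

The $n^{(D-1)/D}$ scaling is the familiar Bravyi-Terhal message that local codes on a $D$-dimensional lattice cannot have linear distance for $D \geq 2$.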
|
2502.04997
|
Aligning Black-box Language Models with Human Judgments
|
cs.CL cs.AI cs.LG
|
Large language models (LLMs) are increasingly used as automated judges to
evaluate recommendation systems, search engines, and other subjective tasks,
where relying on human evaluators can be costly, time-consuming, and
unscalable. LLMs offer an efficient solution for continuous, automated
evaluation. However, since the systems that are built and improved with these
judgments are ultimately designed for human use, it is crucial that LLM
judgments align closely with human evaluators to ensure such systems remain
human-centered. On the other hand, aligning LLM judgments with human evaluators
is challenging due to individual variability and biases in human judgments. We
propose a simple yet effective framework to align LLM judgments with individual
human evaluators or their aggregated judgments, without retraining or
fine-tuning the LLM. Our approach learns a linear mapping between the LLM's
outputs and human judgments, achieving over 142% average improvement in
agreement across 29 tasks with only a small number of calibration examples used
for training. Notably, our method works in zero-shot and few-shot settings,
exceeds inter-human agreement on four out of six tasks, and enables smaller
LLMs to achieve performance comparable to that of larger models.
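The core idea, a linear mapping from raw LLM scores to the human scale fit on a few calibration examples, reduces to ordinary least squares in one dimension. This sketch assumes scalar judgments; the calibration values below are invented:

```python
def fit_linear_map(llm_scores, human_scores):
    """Least-squares line mapping raw LLM judgments onto a human scale.

    Minimal 1-D version of learning a linear mapping from calibration
    examples, with no retraining or fine-tuning of the LLM itself.
    """
    n = len(llm_scores)
    mx = sum(llm_scores) / n
    my = sum(human_scores) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(llm_scores, human_scores))
    var = sum((x - mx) ** 2 for x in llm_scores)
    a = cov / var
    b = my - a * mx
    return lambda x: a * x + b

# hypothetical calibration: LLM grades in [0, 1], human grades on 1-5
align = fit_linear_map([0.1, 0.5, 0.9], [1.0, 3.0, 5.0])
```

Because the mapping is fit per evaluator (or per aggregate), it can absorb individual scale and offset biases without touching the underlying black-box model.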
|
2502.04998
|
On Sequential Fault-Intolerant Process Planning
|
cs.AI
|
We propose and study a planning problem we call Sequential Fault-Intolerant
Process Planning (SFIPP). SFIPP captures a reward structure common in many
sequential multi-stage decision problems where the planning is deemed
successful only if all stages succeed. Such reward structures are different
from classic additive reward structures and arise in important applications
such as drug/material discovery, security, and quality-critical product design.
We design provably tight online algorithms for settings in which we need to
pick between different actions with unknown success chances at each stage. We
do so both for the foundational case in which the behavior of actions is
deterministic, and the case of probabilistic action outcomes, where we
effectively balance exploration for learning and exploitation for planning
through the usage of multi-armed bandit algorithms. In our empirical
evaluations, we demonstrate that the specialized algorithms we develop, which
leverage additional information about the structure of the SFIPP instance,
outperform our more general algorithm.
|
2502.05000
|
Robust Graph Learning Against Adversarial Evasion Attacks via Prior-Free
Diffusion-Based Structure Purification
|
cs.LG cs.AI
|
Adversarial evasion attacks pose significant threats to graph learning, and
lines of studies have improved the robustness of Graph Neural Networks (GNNs).
However, existing works rely on priors about clean graphs or attack
strategies, which are often heuristic and inconsistent. To achieve robust graph
learning over different types of evasion attacks and diverse datasets, we
investigate this problem from a prior-free structure purification perspective.
Specifically, we propose a novel Diffusion-based Structure Purification
framework named DiffSP, which creatively incorporates the graph diffusion model
to learn intrinsic distributions of clean graphs and purify the perturbed
structures by removing adversaries under the direction of the captured
predictive patterns without relying on priors. DiffSP is divided into the
forward diffusion process and the reverse denoising process, during which
structure purification is achieved. To avoid valuable information loss during
the forward process, we propose an LID-driven nonisotropic diffusion mechanism
to selectively inject noise anisotropically. To promote semantic alignment
between the clean graph and the purified graph generated during the reverse
process, we reduce the generation uncertainty by the proposed graph transfer
entropy guided denoising mechanism. Extensive experiments demonstrate the
superior robustness of DiffSP against evasion attacks.
|
2502.05001
|
A New Paradigm in Tuning Learned Indexes: A Reinforcement Learning
Enhanced Approach
|
cs.DB cs.AI cs.SY eess.SY
|
Learned Index Structures (LIS) have significantly advanced data management by
leveraging machine learning models to optimize data indexing. However,
designing these structures often involves critical trade-offs, making it
challenging for both designers and end-users to find an optimal balance
tailored to specific workloads and scenarios. While some indexes offer
adjustable parameters that demand intensive manual tuning, others rely on fixed
configurations based on heuristic auto-tuners or expert knowledge, which may
not consistently deliver optimal performance. This paper introduces LITune, a
novel framework for end-to-end automatic tuning of Learned Index Structures.
LITune employs an adaptive training pipeline equipped with a tailor-made Deep
Reinforcement Learning (DRL) approach to ensure stable and efficient tuning. To
accommodate long-term dynamics arising from online tuning, we further enhance
LITune with an on-the-fly updating mechanism termed the O2 system. These
innovations allow LITune to effectively capture state transitions in online
tuning scenarios and dynamically adjust to changing data distributions and
workloads, marking a significant improvement over other tuning methods. Our
experimental results demonstrate that LITune achieves up to a 98% reduction in
runtime and a 17-fold increase in throughput compared to default parameter
settings given a selected Learned Index instance. These findings highlight
LITune's effectiveness and its potential to facilitate broader adoption of LIS
in real-world applications.
|
2502.05003
|
QuEST: Stable Training of LLMs with 1-Bit Weights and Activations
|
cs.LG
|
One approach to reducing the massive costs of large language models (LLMs) is
the use of quantized or sparse representations for training or deployment.
While post-training compression methods are very popular, the question of
obtaining even more accurate compressed models by directly training over such
representations, i.e., Quantization-Aware Training (QAT), is still open: for
example, a recent study (arXiv:2411.04330v2) put the "optimal" bit-width at
which models can be trained using QAT, while staying accuracy-competitive with
standard FP16/BF16 precision, at 8-bit weights and activations.
We advance this state-of-the-art via a new method called QuEST, which is
Pareto-competitive with FP16, i.e., it provides better accuracy at lower model
size, while training models with weights and activations in 4-bits or less.
Moreover, QuEST allows stable training with 1-bit weights and activations.
QuEST achieves this by improving two key aspects of QAT methods: (1) accurate
and fast quantization of the (continuous) distributions of weights and
activations via Hadamard normalization and MSE-optimal fitting; (2) a new trust
gradient estimator based on the idea of explicitly minimizing the error between
the noisy gradient computed over quantized states and the "true" (but unknown)
full-precision gradient. Experiments on Llama-type architectures show that
QuEST induces stable scaling laws across the entire range of hardware-supported
precisions, and can be extended to sparse representations. We provide GPU
kernel support showing that models produced by QuEST can be executed
efficiently. Our code is available at https://github.com/IST-DASLab/QuEST.
|
2502.05007
|
Analyzing Advanced AI Systems Against Definitions of Life and
Consciousness
|
cs.AI
|
Could artificial intelligence ever become truly conscious in a functional
sense? This paper explores that open-ended question through the lens of Life, a
concept unifying classical biological criteria (Oxford, NASA, Koshland) with
empirical hallmarks such as adaptive self-maintenance, emergent complexity, and
rudimentary self-referential modeling. We propose a number of metrics for
examining whether an advanced AI system has gained consciousness, while
emphasizing that we do not claim all AI systems can become conscious. Rather,
we suggest that sufficiently advanced architectures exhibiting immune-like
sabotage defenses, mirror self-recognition analogs, or meta-cognitive updates
may cross key thresholds akin to life-like or consciousness-like traits. To
demonstrate these ideas, we start by assessing adaptive self-maintenance
capability, and introduce controlled data corruption sabotage into the training
process. The results demonstrate the AI's capability to detect these
inconsistencies and revert or self-correct, analogous to regenerative
biological processes. We also adapt an animal-inspired mirror self-recognition
test to neural embeddings, finding that partially trained CNNs can distinguish
self from foreign features with complete accuracy. We then extend our analysis
by performing a question-based mirror test on five state-of-the-art chatbots
(ChatGPT4, Gemini, Perplexity, Claude, and Copilot), demonstrating their
ability to recognize their own answers compared to those of the other chatbots.
|