| id | title | categories | abstract |
|---|---|---|---|
2502.11113
|
Valuable Hallucinations: Realizable Non-realistic Propositions
|
cs.CL
|
This paper introduces the first formal definition of valuable hallucinations
in large language models (LLMs), addressing a gap in the existing literature.
We provide a systematic definition and analysis of hallucination value,
proposing methods for enhancing the value of hallucinations. In contrast to
previous works, which often treat hallucinations as a broad flaw, we focus on
the potential value that certain types of hallucinations can offer in specific
contexts. Hallucinations in LLMs generally refer to the generation of
unfaithful, fabricated, inconsistent, or nonsensical content. Rather than
viewing all hallucinations negatively, this paper gives formal representations
and manual judgments of "valuable hallucinations" and explores how realizable
non-realistic propositions--ideas that are not currently true but could be
achievable under certain conditions--can have constructive value. We present
experiments using the Qwen2.5 model and HalluQA dataset, employing ReAct
prompting (which involves reasoning, confidence assessment, and answer
verification) to control and optimize hallucinations. Our findings show that
ReAct prompting results in a 5.12% reduction in overall hallucinations and an
increase in the proportion of valuable hallucinations from 6.45% to 7.92%.
These results demonstrate that systematically controlling hallucinations can
improve their usefulness without compromising factual reliability.
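The control loop the abstract names (reasoning, confidence assessment, answer verification) can be sketched as a prompt skeleton. The wording below is illustrative only and is not the authors' actual template:

```python
def react_style_prompt(question: str) -> str:
    """Illustrative skeleton of a ReAct-style prompt with the three stages
    the abstract describes; the paper's exact wording is not reproduced here."""
    return (
        f"Question: {question}\n"
        "Thought: reason step by step about what is known.\n"
        "Confidence: rate your confidence in the answer from 0 to 100.\n"
        "Verification: re-check the answer; if a claim is not currently true,\n"
        "state whether it is realizable under plausible conditions.\n"
        "Answer:"
    )
```

The verification stage is where a "realizable non-realistic proposition" would be flagged rather than discarded.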
|
2502.11114
|
Beyond Pairwise: Global Zero-shot Temporal Graph Generation
|
cs.CL
|
Temporal relation extraction (TRE) is a fundamental task in natural language
processing (NLP) that involves identifying the temporal relationships between
events in a document. Despite the advances in large language models (LLMs),
their application to TRE remains limited. Most existing approaches rely on
pairwise classification, in which event pairs are considered individually,
leading to computational inefficiency and a lack of global consistency in the
resulting temporal graph. In this work, we propose a novel zero-shot method for
TRE that generates a document's complete temporal graph at once, then applies
transitive constraints optimization to refine predictions and enforce temporal
consistency across relations. Additionally, we introduce OmniTemp, a new
dataset with complete annotations for all pairs of targeted events within a
document. Through experiments and analyses, we demonstrate that our method
significantly outperforms existing zero-shot approaches while achieving
competitive performance with supervised models.
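The transitive-consistency idea can be illustrated with a tiny check: compute the closure of the predicted before-relations and reject any graph whose closure contains a cycle. The paper's optimization step additionally repairs predictions, which is omitted in this sketch:

```python
import itertools

def transitive_closure(before):
    """Closure of a set of (a, b) 'a before b' edges: if a<b and b<c, add a<c."""
    closure = set(before)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in itertools.product(list(closure), repeat=2):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

def is_consistent(before):
    """A temporal graph is consistent iff its closure has no self-loop (x before x)."""
    return all(a != b for a, b in transitive_closure(before))
```

A globally generated graph can be validated (or repaired) against exactly this kind of constraint, which pairwise classifiers cannot enforce.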
|
2502.11115
|
Are Generative Models Underconfident? An Embarrassingly Simple Quality
Estimation Approach
|
cs.CL
|
Quality Estimation (QE) is the task of estimating the quality of model output
when no ground-truth reference is available. Reading model uncertainty off its
own output probabilities is the simplest, lowest-effort way to estimate output
quality. However, for generative models, output probabilities may not be the
best quality estimator: at a given output step there can be multiple correct
options, which spreads the probability distribution out. Thus, lower token
probability does not necessarily mean lower output quality; in other words, the
model can be considered underconfident. In this paper, we propose a QE approach
called Dominant Mass Probability (DMP) that boosts the model's confidence in
cases where there are multiple viable output options. We show that, with no
increase in complexity, DMP is notably better than sequence probability when
estimating the quality of different models (Whisper, Llama, etc.) on different
tasks (translation, summarization, etc.). Compared to sequence probability, DMP
achieves on average +0.208 improvement in Pearson correlation to ground-truth
quality.
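The underconfidence argument can be made concrete with a toy scoring rule. The exact DMP formula is defined in the paper; the sketch below assumes a "dominant set" of tokens within a fixed ratio of the top probability and credits each step with that set's total mass:

```python
import math

def sequence_score(step_probs, emitted):
    """Baseline: average log-probability of the emitted tokens."""
    return sum(math.log(p[i]) for p, i in zip(step_probs, emitted)) / len(emitted)

def dmp_score(step_probs, ratio=0.5):
    """Hypothetical DMP-style score (an assumption of this sketch): each step
    is credited with the total mass of tokens within `ratio` of the top
    probability, so a step with several equally viable options is not
    read as low quality."""
    def dominant_mass(p):
        top = max(p)
        return sum(q for q in p if q >= ratio * top)
    return sum(math.log(dominant_mass(p)) for p in step_probs) / len(step_probs)
```

When two continuations split the mass (e.g., 0.45 / 0.45), the sequence probability looks low even though the model is effectively confident; the dominant-mass score is not penalized.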
|
2502.11116
|
Gumbel Reranking: Differentiable End-to-End Reranker Optimization
|
cs.CL cs.IR
|
RAG systems rely on rerankers to identify relevant documents. However,
fine-tuning these models remains challenging due to the scarcity of annotated
query-document pairs. Existing distillation-based approaches suffer from
training-inference misalignment and fail to capture interdependencies among
candidate documents. To overcome these limitations, we reframe the reranking
process as an attention-mask problem and propose Gumbel Reranking, an
end-to-end training framework for rerankers aimed at minimizing the
training-inference gap. In our approach, reranker optimization is reformulated
as learning a stochastic, document-wise Top-$k$ attention mask using the Gumbel
Trick and Relaxed Top-$k$ Sampling. This formulation enables end-to-end
optimization by minimizing the overall language loss. Experiments across
various settings consistently demonstrate performance gains, including a 10.4%
improvement in recall on HotpotQA for distinguishing indirectly relevant
documents.
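The core trick, a stochastic soft top-k mask, can be sketched with Gumbel noise plus a successive-softmax k-hot relaxation (one common relaxed top-k construction; the paper's exact parameterization may differ):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gumbel_relaxed_topk(scores, k, tau=0.5, rng=None):
    """Soft top-k mask over document scores: perturb with Gumbel(0,1) noise,
    then build a k-hot relaxation by k successive softmaxes, masking out
    already-selected mass each round. Lower tau -> closer to a hard mask."""
    rng = rng or np.random.default_rng(0)
    u = rng.uniform(1e-9, 1.0, size=scores.shape)
    logits = scores + (-np.log(-np.log(u)))      # Gumbel perturbation
    khot = np.zeros_like(scores)
    onehot = np.zeros_like(scores)
    for _ in range(k):
        logits = logits + np.log(np.clip(1.0 - onehot, 1e-20, 1.0))
        onehot = softmax(logits / tau)
        khot = khot + onehot
    return khot
```

Because the mask is a smooth function of the scores, the language loss can backpropagate through it to the reranker, which is the end-to-end property the abstract emphasizes.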
|
2502.11122
|
Hierarchical Expert Prompt for Large-Language-Model: An Approach Defeat
Elite AI in TextStarCraft II for the First Time
|
cs.AI
|
Since the emergence of Large Language Models (LLMs), they have been widely
used in fields such as writing, translation, and search. However, there is
still great potential for LLM-based methods in handling complex tasks such as
decision-making in the StarCraft II environment. To address problems such as
the lack of relevant knowledge and poor control over subtasks of varying
importance, we propose a Hierarchical Expert Prompt (HEP) for LLMs. Our method
improves the understanding of game situations through expert-level tactical
knowledge and improves the processing quality of tasks of varying importance
through a hierarchical framework. Our approach defeated the highest level
(Elite) standard built-in agent in TextStarCraft II for the first time and
consistently outperformed the baseline method in other difficulties. Our
experiments suggest that the proposed method is a practical solution for
tackling complex decision-making challenges. The replay video can be viewed on
https://www.bilibili.com/video/BV1uz42187EF and https://youtu.be/dO3PshWLV5M,
and our codes have been open-sourced on
https://github.com/luchang1113/HEP-LLM-play-StarCraftII.
|
2502.11123
|
DuplexMamba: Enhancing Real-time Speech Conversations with Duplex and
Streaming Capabilities
|
cs.CL
|
Real-time speech conversation is essential for natural and efficient
human-machine interactions, requiring duplex and streaming capabilities.
Traditional Transformer-based conversational chatbots operate in a turn-based
manner and exhibit quadratic computational complexity that grows as the input
size increases. In this paper, we propose DuplexMamba, a Mamba-based end-to-end
multimodal duplex model for speech-to-text conversation. DuplexMamba enables
simultaneous input processing and output generation, dynamically adjusting to
support real-time streaming. Specifically, we develop a Mamba-based speech
encoder and adapt it with a Mamba-based language model. Furthermore, we
introduce a novel duplex decoding strategy that enables DuplexMamba to process
input and generate output simultaneously. Experimental results demonstrate that
DuplexMamba successfully implements duplex and streaming capabilities while
achieving performance comparable to several recently developed
Transformer-based models in automatic speech recognition (ASR) tasks and voice
assistant benchmark evaluations.
|
2502.11124
|
AdaManip: Adaptive Articulated Object Manipulation Environments and
Policy Learning
|
cs.RO cs.AI
|
Articulated object manipulation is a critical capability for robots to
perform various tasks in real-world scenarios. Composed of multiple parts
connected by joints, articulated objects are endowed with diverse functional
mechanisms through complex relative motions. For example, a safe consists of a
door, a handle, and a lock, where the door can only be opened when the latch is
unlocked. The internal structure, such as the state of a lock or joint angle
constraints, cannot be directly observed from visual observation. Consequently,
successful manipulation of these objects requires adaptive adjustment based on
trial and error rather than a one-time visual inference. However, previous
datasets and simulation environments for articulated objects have primarily
focused on simple manipulation mechanisms where the complete manipulation
process can be inferred from the object's appearance. To enhance the diversity
and complexity of adaptive manipulation mechanisms, we build a novel
articulated object manipulation environment and equip it with 9 categories of
objects. Based on the environment and objects, we further propose an adaptive
demonstration collection and 3D visual diffusion-based imitation learning
pipeline that learns the adaptive manipulation policy. The effectiveness of our
designs and proposed method is validated through both simulation and real-world
experiments. Our project page is available at: https://adamanip.github.io
|
2502.11127
|
G-Safeguard: A Topology-Guided Security Lens and Treatment on LLM-based
Multi-agent Systems
|
cs.CR cs.LG cs.MA
|
Large Language Model (LLM)-based Multi-agent Systems (MAS) have demonstrated
remarkable capabilities in various complex tasks, ranging from collaborative
problem-solving to autonomous decision-making. However, as these systems become
increasingly integrated into critical applications, their vulnerability to
adversarial attacks, misinformation propagation, and unintended behaviors have
raised significant concerns. To address this challenge, we introduce
G-Safeguard, a topology-guided security lens and treatment for robust LLM-MAS,
which leverages graph neural networks to detect anomalies on the multi-agent
utterance graph and employ topological intervention for attack remediation.
Extensive experiments demonstrate that G-Safeguard: (I) exhibits significant
effectiveness under various attack strategies, recovering over 40% of the
performance for prompt injection; (II) is highly adaptable to diverse LLM
backbones and large-scale MAS; (III) can seamlessly combine with mainstream MAS
with security guarantees. The code is available at
https://github.com/wslong20/G-safeguard.
|
2502.11128
|
FELLE: Autoregressive Speech Synthesis with Token-Wise Coarse-to-Fine
Flow Matching
|
cs.CL cs.SD eess.AS
|
To advance continuous-valued token modeling and temporal-coherence
enforcement, we propose FELLE, an autoregressive model that integrates language
modeling with token-wise flow matching. By leveraging the autoregressive nature
of language models and the generative efficacy of flow matching, FELLE
effectively predicts continuous-valued tokens (mel-spectrograms). For each
continuous-valued token, FELLE modifies the general prior distribution in flow
matching by incorporating information from the previous step, improving
coherence and stability. Furthermore, to enhance synthesis quality, FELLE
introduces a coarse-to-fine flow-matching mechanism, generating
continuous-valued tokens hierarchically, conditioned on the language model's
output. Experimental results demonstrate the potential of incorporating
flow-matching techniques in autoregressive mel-spectrogram modeling, leading to
significant improvements in TTS generation quality, as shown in
https://aka.ms/felle.
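Token-wise flow matching builds on the standard conditional flow-matching objective: sample a point on a straight path between a prior draw and the data token, and regress the constant velocity. A minimal sketch (FELLE's step-conditioned prior modification is simplified here to a generic x0):

```python
import numpy as np

def flow_matching_pair(x0, x1, t):
    """Linear path x_t = (1-t)*x0 + t*x1 with constant target velocity
    v = x1 - x0 (standard conditional flow matching; FELLE additionally
    conditions the prior on the previous step, omitted in this sketch)."""
    xt = (1.0 - t) * x0 + t * x1
    v_target = x1 - x0
    return xt, v_target

def fm_loss(v_pred, v_target):
    """MSE between predicted and target velocity."""
    return float(np.mean((v_pred - v_target) ** 2))
```

At inference, the learned velocity field is integrated from the prior to produce the next mel-spectrogram frame; the coarse-to-fine variant repeats this hierarchically.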
|
2502.11131
|
Improving Similar Case Retrieval Ranking Performance By Revisiting
RankSVM
|
cs.CL
|
Given the rapid development of Legal AI, much attention has been paid to one
of the most important legal AI tasks: similar case retrieval, especially with
language models. In this paper, however, we try to improve the ranking
performance of current models from the perspective of learning to rank rather
than of language models. Specifically, we conduct experiments using a pairwise
method, RankSVM, as the classifier in place of a fully connected layer,
combined with commonly used language models, on the similar case retrieval
datasets LeCaRDv1 and LeCaRDv2. We conclude that RankSVM generally helps
improve retrieval performance on the LeCaRDv1 and LeCaRDv2 datasets compared
with the original classifiers by optimizing the precise ranking. It can also
help mitigate overfitting caused by class imbalance. Our code is available at
https://github.com/liuyuqi123study/RankSVM_for_SLR
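The substitution described above, a pairwise RankSVM in place of a fully connected scoring layer, reduces to training a linear scorer on difference vectors of (relevant, irrelevant) encodings. A self-contained toy version with a subgradient solver (a stand-in for an off-the-shelf SVM package):

```python
import numpy as np

def pairwise_diffs(X, y):
    """RankSVM training pairs: every (relevant - irrelevant) difference
    vector should receive a positive score under the learned weights."""
    pos, neg = X[y == 1], X[y == 0]
    return np.array([p - n for p in pos for n in neg])

def train_ranksvm(X, y, lr=0.05, C=1.0, epochs=500):
    """Tiny linear RankSVM: subgradient descent on
    0.5*||w||^2 + C * sum(max(0, 1 - w.d)) over difference vectors d."""
    D = pairwise_diffs(X, y)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        viol = D[D @ w < 1.0]                     # margin violations
        grad = w - C * (viol.sum(axis=0) if len(viol) else 0.0)
        w = w - lr * grad
    return w
```

Here `X` would be the language-model encodings of candidate cases; the learned `w` then scores and ranks candidates for a query.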
|
2502.11132
|
UNITE-FND: Reframing Multimodal Fake News Detection through Unimodal
Scene Translation
|
cs.LG cs.AI
|
Multimodal fake news detection typically demands complex architectures and
substantial computational resources, posing deployment challenges in real-world
settings. We introduce UNITE-FND, a novel framework that reframes multimodal
fake news detection as a unimodal text classification task. We propose six
specialized prompting strategies with Gemini 1.5 Pro, converting visual content
into structured textual descriptions, and enabling efficient text-only models
to preserve critical visual information. To benchmark our approach, we
introduce Uni-Fakeddit-55k, a curated dataset family of 55,000 samples,
each processed through our multimodal-to-unimodal translation framework.
Experimental results demonstrate that UNITE-FND achieves 92.52% accuracy in
binary classification, surpassing prior multimodal models while reducing
computational costs by over 10x (TinyBERT variant: 14.5M parameters vs. 250M+
in SOTA models). Additionally, we propose a comprehensive suite of five novel
metrics to evaluate image-to-text conversion quality, ensuring optimal
information preservation. Our results demonstrate that structured text-based
representations can replace direct multimodal processing with minimal loss of
accuracy, making UNITE-FND a practical and scalable alternative for
resource-constrained environments.
|
2502.11133
|
MasRouter: Learning to Route LLMs for Multi-Agent Systems
|
cs.LG cs.MA
|
Multi-agent systems (MAS) powered by Large Language Models (LLMs) have been
demonstrated to push the boundaries of LLM capabilities, yet they often incur
significant costs and face challenges in dynamic LLM selection. Current LLM
routing methods effectively reduce overhead in single-agent scenarios by
customizing LLM selection for each query, but they overlook the critical
decisions regarding collaboration modes and agent roles in MAS. In response to
this challenge, we first introduce the problem of Multi-Agent System Routing
(MASR), which integrates all components of MAS into a unified routing
framework. Toward this goal, we propose MasRouter, the first high-performing,
cost-effective, and inductive MASR solution. MasRouter employs collaboration
mode determination, role allocation, and LLM routing through a cascaded
controller network, progressively constructing a MAS that balances
effectiveness and efficiency. Extensive experiments demonstrate that MasRouter
is (1) high-performing, achieving a 1.8% to 8.2% improvement over the
state-of-the-art method on MBPP; (2) economical, reducing overhead by up to
52.07% compared to SOTA methods on HumanEval; and (3) plug-and-play,
seamlessly integrating with mainstream MAS frameworks, reducing overhead by
17.21% to 28.17% via customized routing. The code is available at
https://github.com/yanweiyue/masrouter.
|
2502.11134
|
Solving Online Resource-Constrained Scheduling for Follow-Up Observation
in Astronomy: a Reinforcement Learning Approach
|
cs.AI astro-ph.IM
|
In the astronomical observation field, determining the allocation of
observation resources of the telescope array and planning follow-up
observations for targets of opportunity (ToOs) are indispensable components of
astronomical scientific discovery. This problem is computationally challenging,
given the online observation setting and the abundance of time-varying factors
that can affect whether an observation can be conducted. This paper presents
ROARS, a reinforcement learning approach for online astronomical
resource-constrained scheduling. To capture the structure of the astronomical
observation scheduling, we depict every schedule using a directed acyclic graph
(DAG), illustrating the dependency of timing between different observation
tasks within the schedule. Deep reinforcement learning is used to learn a
policy that can improve the feasible solution by iteratively local rewriting
until convergence. It can solve the challenge of obtaining a complete solution
directly from scratch in astronomical observation scenarios, due to the high
computational complexity resulting from numerous spatial and temporal
constraints. A simulation environment is developed based on real-world
scenarios for experiments, to evaluate the effectiveness of our proposed
scheduling approach. The experimental results show that ROARS surpasses 5
popular heuristics, adapts to various observation scenarios and learns
effective strategies with hindsight.
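Representing each schedule as a DAG makes feasibility a cycle check: the timing dependencies between observation tasks must admit a topological order. A minimal version of that check (the paper's rewriting policy operates on top of such a representation):

```python
from collections import deque

def topo_order(tasks, deps):
    """Kahn's algorithm over timing dependencies (a, b) = 'a must precede b'.
    Returns a feasible ordering, or None if the dependencies contain a cycle."""
    indeg = {t: 0 for t in tasks}
    for a, b in deps:
        indeg[b] += 1
    q = deque(t for t in tasks if indeg[t] == 0)
    order = []
    while q:
        t = q.popleft()
        order.append(t)
        for a, b in deps:
            if a == t:
                indeg[b] -= 1
                if indeg[b] == 0:
                    q.append(b)
    return order if len(order) == len(tasks) else None
```

Local rewriting, as described above, would mutate edges of this DAG and keep only mutations that preserve a valid topological order while improving the schedule's objective.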
|
2502.11137
|
Safety Evaluation of DeepSeek Models in Chinese Contexts
|
cs.CL cs.AI
|
Recently, the DeepSeek series of models, leveraging their exceptional
reasoning capabilities and open-source strategy, is reshaping the global AI
landscape. Despite these advantages, they exhibit significant safety
deficiencies. Research conducted by Robust Intelligence, a subsidiary of Cisco,
in collaboration with the University of Pennsylvania, revealed that DeepSeek-R1
has a 100% attack success rate when processing harmful prompts. Additionally,
multiple safety companies and research institutions have confirmed critical
safety vulnerabilities in this model. As models demonstrating robust
performance in Chinese and English, DeepSeek models require equally crucial
safety assessments in both language contexts. However, current research has
predominantly focused on safety evaluations in English environments, leaving a
gap in comprehensive assessments of their safety performance in Chinese
contexts. In response to this gap, this study introduces CHiSafetyBench, a
Chinese-specific safety evaluation benchmark. This benchmark systematically
evaluates the safety of DeepSeek-R1 and DeepSeek-V3 in Chinese contexts,
revealing their performance across safety categories. The experimental results
quantify the deficiencies of these two models in Chinese contexts, providing
key insights for subsequent improvements. It should be noted that, despite our
efforts to establish a comprehensive, objective, and authoritative evaluation
benchmark, the selection of test samples, characteristics of data distribution,
and the setting of evaluation criteria may inevitably introduce certain biases
into the evaluation results. We will continuously optimize the evaluation
benchmark and periodically update this report to provide more comprehensive and
accurate assessment outcomes. Please refer to the latest version of the paper
for the most recent evaluation results and conclusions.
|
2502.11138
|
Machine Learning-Based Intrusion Detection and Prevention System for
IIoT Smart Metering Networks: Challenges and Solutions
|
cs.LG
|
The Industrial Internet of Things (IIoT) has revolutionized industries by
enabling automation, real-time data exchange, and smart decision-making.
However, its increased connectivity introduces cybersecurity threats,
particularly in smart metering networks, which play a crucial role in
monitoring and optimizing energy consumption. This paper explores the
challenges associated with securing IIoT-based smart metering networks and
proposes a Machine Learning (ML)-based Intrusion Detection and Prevention
System (IDPS) for safeguarding edge devices. The study reviews various
intrusion detection approaches, highlighting the strengths and limitations of
both signature-based and anomaly-based detection techniques. The findings
suggest that integrating ML-driven IDPS in IIoT smart metering environments
enhances security, efficiency, and resilience against evolving cyber threats.
|
2502.11140
|
VisPath: Automated Visualization Code Synthesis via Multi-Path Reasoning
and Feedback-Driven Optimization
|
cs.SE cs.AI cs.CL cs.HC
|
Unprecedented breakthroughs in Large Language Models (LLMs) have amplified
their penetration into automated visualization code generation. Few-shot
prompting and query expansion techniques have notably enhanced data
visualization performance; however, they still fail to overcome the ambiguity
and complexity of natural language queries, imposing an inherent burden of
manual human intervention. To mitigate these limitations, we propose VisPath:
a holistic Multi-Path Reasoning and Feedback-Driven Optimization framework for
visualization code generation, which systematically enhances code quality
through structured reasoning and refinement. VisPath is a multi-stage
framework specially designed to handle underspecified queries. To generate a
robust final visualization code, it first uses the initial query to generate
diverse reformulated queries via Chain-of-Thought (CoT) prompting, each
representing a distinct reasoning path. The refined queries are used to
produce candidate visualization scripts, which are then executed to generate
multiple images. By comprehensively assessing the correctness and quality of
the outputs, VisPath generates feedback for each image, which is then fed to
an aggregation module to produce the optimal result. Extensive experiments on
benchmarks including MatPlotBench and the Qwen-Agent Code Interpreter
Benchmark show that VisPath significantly outperforms state-of-the-art (SOTA)
methods, with improvements of up to 17% on average, offering a more reliable
solution for AI-driven visualization code generation.
|
2502.11141
|
Cognitive Neural Architecture Search Reveals Hierarchical Entailment
|
cs.NE cs.AI q-bio.QM
|
Recent research has suggested that the brain is more shallow than previously
thought, challenging the traditionally assumed hierarchical structure of the
ventral visual pathway. Here, we demonstrate that optimizing convolutional
network architectures for brain-alignment via evolutionary neural architecture
search results in models with clear representational hierarchies. Despite
having random weights, the identified models achieve brain-alignment scores
surpassing even those of pretrained classification models - as measured by both
regression and representational similarity analysis. Furthermore, through
traditional supervised training, architectures optimized for alignment with
late ventral regions become competitive classification models. These findings
suggest that hierarchical structure is a fundamental mechanism of primate
visual processing. Finally, this work demonstrates the potential of neural
architecture search as a framework for computational cognitive neuroscience
research that could reduce the field's reliance on manually designed
convolutional networks.
|
2502.11142
|
NavRAG: Generating User Demand Instructions for Embodied Navigation
through Retrieval-Augmented LLM
|
cs.AI cs.CL cs.CV
|
Vision-and-Language Navigation (VLN) is an essential skill for embodied
agents, allowing them to navigate in 3D environments following natural language
instructions. High-performance navigation models require a large amount of
training data, but the high cost of manual annotation has seriously hindered
this field. Therefore, some previous methods translate trajectory videos into
step-by-step instructions to expand the data, but such instructions do not
match well with users' communication styles, which briefly describe destinations
or state specific needs. Moreover, local navigation trajectories overlook
global context and high-level task planning. To address these issues, we
propose NavRAG, a retrieval-augmented generation (RAG) framework that generates
user demand instructions for VLN. NavRAG leverages LLM to build a hierarchical
scene description tree for 3D scene understanding from global layout to local
details, then simulates various user roles with specific demands to retrieve
from the scene tree, generating diverse instructions with LLM. We annotate over
2 million navigation instructions across 861 scenes and evaluate the data
quality and navigation performance of trained models.
|
2502.11147
|
Efficient Long-Decoding Inference with Reasoning-Aware Attention
Sparsity
|
cs.LG cs.AI
|
Large Language Models (LLMs) have demonstrated strong capabilities across
various domains, with recent advancements in challenging reasoning tasks such
as mathematics and programming. However, solving reasoning tasks often requires
long decoding chains (of thoughts), which incur $O(N)$ time and memory
consumption, where $N$ is the chain length. To mitigate $O(N)$ time and memory
consumption, existing sparsity-based algorithms propose retaining only the most
critical token's intermediate data (i.e., key-value cache) and discarding the
rest. However, these existing algorithms struggle with the ``impossible
trinity'' of accuracy, time, and memory. For example, the state-of-the-art
algorithm, Quest, achieves high accuracy with $O(L)$ time but $O(N)$ memory
($L$ is the cache budget, $L \ll N$). To address this issue, in this paper, we
identify a new attention pattern during the decode stage of reasoning tasks,
where milestone tokens (analogous to lemmas in mathematical proofs) emerge, are
utilized, and then become unimportant afterward. Based on this pattern, we
propose a new algorithm named RaaS that identifies and retains milestone tokens
only until they are no longer needed, achieving high accuracy with $O(L)$ time
and $O(L)$ memory complexity.
|
2502.11149
|
Large Language-Geometry Model: When LLM meets Equivariance
|
cs.LG cs.AI
|
Accurately predicting 3D structures and dynamics of physical systems is
crucial in scientific applications. Existing approaches that rely on geometric
Graph Neural Networks (GNNs) effectively enforce $\mathrm{E}(3)$-equivariance,
but they often fall short in leveraging broader external information. While direct
application of Large Language Models (LLMs) can incorporate external knowledge,
they lack the capability for spatial reasoning with guaranteed equivariance. In
this paper, we propose EquiLLM, a novel framework for representing 3D physical
systems that seamlessly integrates E(3)-equivariance with LLM capabilities.
Specifically, EquiLLM comprises four key components: geometry-aware prompting,
an equivariant encoder, an LLM, and an equivariant adaptor. Essentially, the
LLM guided by the instructive prompt serves as a sophisticated invariant
feature processor, while 3D directional information is exclusively handled by
the equivariant encoder and adaptor modules. Experimental results demonstrate
that EquiLLM delivers significant improvements over previous methods across
molecular dynamics simulation, human motion simulation, and antibody design,
highlighting its promising generalizability.
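The division of labour described above, scalars to the LLM, directions to the equivariant modules, rests on the fact that some geometric features are invariant under E(3) transforms. Pairwise distances are the canonical example:

```python
import numpy as np

def invariant_features(coords):
    """Pairwise distance matrix: unchanged by any rotation and translation,
    the kind of scalar summary an equivariant encoder can safely hand to an
    invariant feature processor such as an LLM."""
    diff = coords[:, None, :] - coords[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def random_rotation(rng):
    """Random orthogonal matrix via QR (may include a reflection; distances
    are invariant either way)."""
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    return q
```

Anything directional (velocities, frame orientations) must instead flow through the equivariant encoder and adaptor, which is the architectural split EquiLLM makes.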
|
2502.11150
|
Surprisal Takes It All: Eye Tracking Based Cognitive Evaluation of Text
Readability Measures
|
cs.CL
|
Text readability measures are widely used in many real-world scenarios and in
NLP. These measures have primarily been developed by predicting reading
comprehension outcomes, while largely neglecting what is perhaps the core
aspect of a readable text: reading ease. In this work, we propose a new eye
tracking based methodology for evaluating readability measures, which focuses
on their ability to account for reading facilitation effects in text
simplification, as well as for text reading ease more broadly. Using this
approach, we find that existing readability formulas are moderate to poor
predictors of reading ease. We further find that average per-word length,
frequency, and especially surprisal tend to outperform existing readability
formulas as measures of reading ease. We thus propose surprisal as a simple
unsupervised alternative to existing measures.
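The proposed measure is just average per-word surprisal, -log2 p(w), under a language model. A unigram stand-in (with crude add-one smoothing, an assumption of this sketch rather than the paper's setup) already exhibits the frequency effect the study reports:

```python
import math
from collections import Counter

def avg_surprisal(text, unigram_counts, total):
    """Average per-word surprisal -log2 p(w) under a unigram model with
    add-one smoothing; the paper uses LM surprisal, which this approximates."""
    words = text.lower().split()
    return sum(
        -math.log2((unigram_counts.get(w, 0) + 1) / (total + 1))
        for w in words
    ) / len(words)
```

Frequent words yield low surprisal and rare or unseen words high surprisal, so simplified texts score lower on average than their originals.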
|
2502.11152
|
Error Bound Analysis for the Regularized Loss of Deep Linear Neural
Networks
|
math.OC cs.LG
|
The optimization foundations of deep linear networks have received
significant attention lately. However, due to the non-convexity and
hierarchical structure, analyzing the regularized loss of deep linear networks
remains a challenging task. In this work, we study the local geometric
landscape of the regularized squared loss of deep linear networks, providing a
deeper understanding of its optimization properties. Specifically, we
characterize the critical point set and establish an error-bound property for
all critical points under mild conditions. Notably, we identify the sufficient
and necessary conditions under which the error bound holds. To support our
theoretical findings, we conduct numerical experiments demonstrating that
gradient descent exhibits linear convergence when optimizing the regularized
loss of deep linear networks.
|
2502.11155
|
Uncertainty-Aware Search and Value Models: Mitigating Search Scaling
Flaws in LLMs
|
cs.AI cs.CL
|
Value model-guided search is effective in steering the generation but suffers
from scaling flaws: Its superiority diminishes with larger sample sizes,
underperforming non-search baselines. This limitation arises from reliability
degradation in value models in unseen reasoning paths. To address this, we
propose an uncertainty-aware search framework that includes two key components:
(1) uncertainty-aware value models that incorporate uncertainty into
predictions, and (2) an uncertainty-aware selection process using the proposed
efficient Group Thompson Sampling algorithm. Experiments on GSM8K show that our
method mitigates search scaling flaws, achieving 90.5% coverage at 16 samples
compared to 85.8% for conventional value-guided search. This work establishes
the first systematic integration of uncertainty quantification in LLM search
paradigms.
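The selection step can be sketched with a generic Thompson draw: each candidate partial path carries a Beta posterior over its value, and the candidate with the highest sampled value is expanded. The paper's Group Thompson Sampling batches this efficiently; the single-draw version below is only the underlying idea:

```python
import random

def thompson_select(candidates, rng=None):
    """Pick the candidate whose sampled value is highest, where each candidate
    maps to (alpha, beta) of a Beta posterior over its value. Uncertain
    candidates (small alpha+beta) get sampled optimistically sometimes,
    which is what mitigates over-trusting the value model."""
    rng = rng or random.Random(0)
    draws = {cid: rng.betavariate(a, b) for cid, (a, b) in candidates.items()}
    return max(draws, key=draws.get)
```

A value model's point estimate would always pick the same path; sampling from the posterior spreads exploration toward paths the value model is unsure about.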
|
2502.11157
|
Dyve: Thinking Fast and Slow for Dynamic Process Verification
|
cs.AI
|
We present Dyve, a dynamic process verifier that enhances reasoning error
detection in large language models by integrating fast and slow thinking,
inspired by Kahneman's Systems Theory. Dyve adaptively applies immediate
token-level confirmation (System 1) for straightforward steps and comprehensive
analysis (System 2) for complex ones. Leveraging a novel step-wise
consensus-filtered process supervision technique, combining Monte Carlo
estimation with LLM based evaluation, Dyve curates high-quality supervision
signals from noisy data. Experimental results on ProcessBench and the MATH
dataset confirm that Dyve significantly outperforms existing process-based
verifiers and boosts performance in Best-of-N settings.
|
2502.11158
|
AnyRefill: A Unified, Data-Efficient Framework for Left-Prompt-Guided
Vision Tasks
|
cs.CV
|
In this paper, we present a novel Left-Prompt-Guided (LPG) paradigm to
address a diverse range of reference-based vision tasks. Inspired by the human
creative process, we reformulate these tasks using a left-right stitching
formulation to construct contextual input. Building upon this foundation, we
propose AnyRefill, an extension of LeftRefill, that effectively adapts
Text-to-Image (T2I) models to various vision tasks. AnyRefill leverages the
inpainting priors of advanced T2I model based on the Diffusion Transformer
(DiT) architecture, and incorporates flexible components to enhance its
capabilities. By combining task-specific LoRAs with the stitching input,
AnyRefill unlocks its potential across diverse tasks, including conditional
generation, visual perception, and image editing, without requiring additional
visual encoders. Meanwhile, AnyRefill exhibits remarkable data efficiency,
requiring minimal task-specific fine-tuning while maintaining high generative
performance. Through extensive ablation studies, we demonstrate that AnyRefill
outperforms other image condition injection methods and achieves competitive
results compared to state-of-the-art open-source methods. Notably, AnyRefill
delivers results comparable to advanced commercial tools, such as IC-Light and
SeedEdit, even in challenging scenarios. Comprehensive experiments and ablation
studies across versatile tasks validate the strong generative capability of the
proposed simple yet effective LPG formulation, establishing AnyRefill as a unified,
highly data-efficient solution for reference-based vision tasks.
|
2502.11161
|
BFA: Best-Feature-Aware Fusion for Multi-View Fine-grained Manipulation
|
cs.RO cs.CV
|
In real-world scenarios, multi-view cameras are typically employed for
fine-grained manipulation tasks. Existing approaches (e.g., ACT) tend to treat
multi-view features equally and directly concatenate them for policy learning.
However, this introduces redundant visual information and higher
computational costs, leading to ineffective manipulation. A fine-grained
manipulation task typically involves multiple stages, and the most
informative view varies across stages over time. In this paper, we
propose a plug-and-play best-feature-aware (BFA) fusion strategy for multi-view
manipulation tasks, which is adaptable to various policies. Built upon the
visual backbone of the policy network, we design a lightweight network to
predict the importance score of each view. Based on the predicted importance
scores, the reweighted multi-view features are subsequently fused and input
into the end-to-end policy network, enabling seamless integration. Notably, our
method demonstrates outstanding performance in fine-grained manipulations.
Experimental results show that our approach outperforms multiple baselines by
22-46% success rate on different tasks. Our work provides new insights and
inspiration for tackling key challenges in fine-grained manipulations.
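The reweighting-and-fusion step described above can be sketched in a few lines; the function names and the softmax normalization of the importance scores are our assumptions for illustration, not details taken from the paper:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def fuse_views(view_feats, importance_logits):
    """Reweight each view's feature vector by its predicted importance
    score, then sum them into a single fused feature (a stand-in for
    the BFA fusion step; real features would be network tensors)."""
    weights = softmax(importance_logits)
    dim = len(view_feats[0])
    fused = [0.0] * dim
    for w, feat in zip(weights, view_feats):
        for i, x in enumerate(feat):
            fused[i] += w * x
    return fused
```

With equal scores the views contribute equally; a strongly dominant score makes the fused feature track the best view for the current stage.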
|
2502.11162
|
Logarithmic Width Suffices for Robust Memorization
|
cs.LG stat.ML
|
The memorization capacity of neural networks with a given architecture has
been thoroughly studied in many works. Specifically, it is well-known that
memorizing $N$ samples can be done using a network of constant width,
independent of $N$. However, the required constructions are often quite
delicate. In this paper, we consider the natural question of how well
feedforward ReLU neural networks can memorize robustly, namely while being able
to withstand adversarial perturbations of a given radius. We establish both
upper and lower bounds on the possible radius for general $l_p$ norms, implying
(among other things) that width logarithmic in the number of input samples is
necessary and sufficient to achieve robust memorization (with robustness radius
independent of $N$).
|
2502.11163
|
VLMs as GeoGuessr Masters: Exceptional Performance, Hidden Biases, and
Privacy Risks
|
cs.CV cs.CL
|
Visual-Language Models (VLMs) have shown remarkable performance across
various tasks, particularly in recognizing geographic information from images.
However, significant challenges remain, including biases and privacy concerns.
To systematically address these issues in the context of geographic information
recognition, we introduce a benchmark dataset consisting of 1,200 images paired
with detailed geographic metadata. Evaluating four VLMs, we find that while
these models demonstrate the ability to recognize geographic information from
images, achieving up to $53.8\%$ accuracy in city prediction, they exhibit
significant regional biases. Specifically, performance is substantially higher
for economically developed and densely populated regions compared to less
developed ($-12.5\%$) and sparsely populated ($-17.0\%$) areas. Moreover, the
models frequently overpredict certain locations; for instance, they
consistently predict Sydney for images taken in Australia.
The strong performance of VLMs also raises privacy concerns, particularly for
users who share images online without the intent of being identified. Our code
and dataset are publicly available at
https://github.com/uscnlp-lime/FairLocator.
|
2502.11164
|
Quantifying the Capability Boundary of DeepSeek Models: An
Application-Driven Performance Analysis
|
cs.AI cs.LG
|
DeepSeek-R1, known for its low training cost and exceptional reasoning
capabilities, has achieved state-of-the-art performance on various benchmarks.
However, detailed evaluations from the perspective of real-world applications
are lacking, making it challenging for users to select the most suitable
DeepSeek models for their specific needs. To address this gap, we evaluate the
DeepSeek-V3, DeepSeek-R1, DeepSeek-R1-Distill-Qwen series, and
DeepSeek-R1-Distill-Llama series on A-Eval, an application-driven benchmark. By
comparing original instruction-tuned models with their distilled counterparts,
we analyze how reasoning enhancements impact performance across diverse
practical tasks. Our results show that reasoning-enhanced models, while
generally powerful, do not universally outperform across all tasks, with
performance gains varying significantly across tasks and models. To further
assist users in model selection, we quantify the capability boundary of
DeepSeek models through performance tier classifications and intuitive line
charts. Specific examples provide actionable insights to help users select and
deploy the most cost-effective DeepSeek models, ensuring optimal performance
and resource efficiency in real-world applications.
|
2502.11167
|
SURGE: On the Potential of Large Language Models as General-Purpose
Surrogate Code Executors
|
cs.LG cs.CL
|
Large language models (LLMs) have demonstrated remarkable capabilities in
code-related tasks, such as code understanding and code generation. However, an
equally important yet underexplored question is whether LLMs can serve as
general-purpose surrogate code executors, to predict the output and behavior of
a program without actually running it. To systematically investigate this
capability, we introduce SURGE, a comprehensive benchmark covering eight key
aspects: multi-language programming tasks, competition-level programming
problems, repository-level code analysis, high-cost scientific computing,
time-complexity-intensive algorithms, buggy code analysis, programs dependent
on specific compilers or execution environments, and formal mathematical proof
verification. We evaluate multiple open-source and proprietary LLMs on SURGE
and conduct a scaling study to analyze the impact of model size and training
data scale on surrogate execution accuracy. Additionally, we categorize model
prediction errors and explore potential areas for improvement. Our findings
indicate that while LLMs can predict code execution results in certain cases,
they exhibit limitations in general-purpose surrogate execution. This study
provides empirical insights into the feasibility of using LLMs as surrogate
code executors. Code and dataset are released at
https://github.com/Imbernoulli/SURGE.
|
2502.11168
|
Knowing Your Target: Target-Aware Transformer Makes Better
Spatio-Temporal Video Grounding
|
cs.CV cs.AI
|
Transformer has attracted increasing interest in STVG, owing to its
end-to-end pipeline and promising result. Existing Transformer-based STVG
approaches often leverage a set of object queries, which are initialized simply
using zeros and then gradually learn target position information via iterative
interactions with multimodal features, for spatial and temporal localization.
Despite their simplicity, these zero-initialized object queries, lacking
target-specific cues, struggle to learn discriminative target information from
interactions with multimodal features in complicated scenarios (e.g., with
distractors or occlusion), resulting in degraded performance. To address this,
we introduce a novel
Target-Aware Transformer for STVG (TA-STVG), which seeks to adaptively generate
object queries via exploring target-specific cues from the given video-text
pair, for improving STVG. The key lies in two simple yet effective modules,
comprising text-guided temporal sampling (TTS) and attribute-aware spatial
activation (ASA), working in a cascade. The former focuses on selecting
target-relevant temporal cues from a video utilizing holistic text information,
while the latter aims at further exploiting the fine-grained visual attribute
information of the object from previous target-aware temporal cues, which is
applied for object query initialization. Unlike existing methods that rely on
zero-initialized queries, the object queries in our TA-STVG are generated
directly from the given video-text pair and thus naturally carry
target-specific cues, making them adaptive and better able to interact with
multimodal features to learn more discriminative information and improve STVG.
In our experiments on three
benchmarks, TA-STVG achieves state-of-the-art performance and significantly
outperforms the baseline, validating its efficacy.
|
2502.11169
|
Leveraging Constrained Monte Carlo Tree Search to Generate Reliable Long
Chain-of-Thought for Mathematical Reasoning
|
cs.CL
|
Recently, Long Chain-of-Thoughts (CoTs) have gained widespread attention for
improving the reasoning capabilities of Large Language Models (LLMs). This
necessitates that existing LLMs, which lack the ability to generate Long CoTs,
acquire such capability through post-training methods. Without additional
training, LLMs typically enhance their mathematical reasoning abilities through
inference scaling methods such as MCTS. However, they are hindered by the large
action space and inefficient search strategies, making it challenging to
generate Long CoTs effectively. To tackle this issue, we propose constraining
the action space and guiding the emergence of Long CoTs through a refined
search strategy. In our proposed Constrained Monte Carlo Tree Search (C-MCTS)
framework, we restrict action selection to a constrained action space, which
is divided into five disjoint subsets: \emph{understanding}, \emph{planning},
\emph{reflection}, \emph{coding}, and \emph{summary}. Each subset is further
constrained to a small number of predefined prompts, rather than allowing LLMs
to generate actions arbitrarily. Additionally, we refine the search strategy by
incorporating prior knowledge about the action sets, such as a human-like
partial order of the action subsets and the pretrained process reward models.
These strategies work together to significantly reduce the vast search space of
Long CoTs. Extensive evaluations on mathematical reasoning benchmarks show
that, under zero-shot settings, our method enables the 7B model to achieve
reasoning capabilities that surpass those of the 72B model.
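A minimal sketch of such a constrained action space with a human-like partial order over the five subsets; the specific prompts and the allowed-successor relation below are illustrative assumptions, not the paper's actual configuration:

```python
# Hypothetical predefined prompts for each of the five action subsets.
ACTIONS = {
    "understanding": ["Restate the problem in your own words."],
    "planning": ["Outline the solution steps."],
    "reflection": ["Check the previous step for errors."],
    "coding": ["Write code to compute the intermediate result."],
    "summary": ["State the final answer."],
}

# Assumed human-like partial order: which subsets may follow which.
ALLOWED_NEXT = {
    None: {"understanding"},            # every trajectory starts here
    "understanding": {"planning"},
    "planning": {"coding", "reflection"},
    "coding": {"reflection", "summary"},
    "reflection": {"planning", "coding", "summary"},
    "summary": set(),                   # terminal subset
}

def legal_actions(last_subset):
    """Return the (subset, prompt) pairs reachable from the last action,
    shrinking the space MCTS has to search at each node."""
    return [(s, p) for s in ALLOWED_NEXT[last_subset] for p in ACTIONS[s]]
```

Restricting each node's branching factor to a handful of predefined prompts is what makes deep (Long-CoT-length) search tractable.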
|
2502.11173
|
Evaluating the Potential of Quantum Machine Learning in Cybersecurity: A
Case-Study on PCA-based Intrusion Detection Systems
|
quant-ph cs.CR cs.LG cs.NI
|
Quantum computing promises to revolutionize our understanding of the limits
of computation, and its implications in cryptography have long been evident.
Today, cryptographers are actively devising post-quantum solutions to counter
the threats posed by quantum-enabled adversaries. Meanwhile, quantum scientists
are innovating quantum protocols to empower defenders. However, the broader
impact of quantum computing and quantum machine learning (QML) on other
cybersecurity domains still needs to be explored. In this work, we investigate
the potential impact of QML on cybersecurity applications of traditional ML.
First, we explore the potential advantages of quantum computing in machine
learning problems specifically related to cybersecurity. Then, we describe a
methodology to quantify the future impact of fault-tolerant QML algorithms on
real-world problems. As a case study, we apply our approach to standard methods
and datasets in network intrusion detection, one of the most studied
applications of machine learning in cybersecurity. Our results provide insight
into the conditions for obtaining a quantum advantage and the need for future
quantum hardware and software advancements.
|
2502.11175
|
Investigating Language Preference of Multilingual RAG Systems
|
cs.CL
|
Multilingual Retrieval-Augmented Generation (mRAG) systems enhance language
models by integrating external multilingual information to produce
context-aware responses. However, mRAG systems struggle with retrieving
relevant information due to linguistic variations between queries and
documents, generating inconsistent responses when multilingual sources
conflict. In this work, we systematically investigate language preferences in
both retrieval and generation of mRAG through a series of experiments. Our
analysis indicates that retrievers tend to prefer high-resource and query
languages, yet this preference does not consistently improve generation
performance. Moreover, we observe that generators prefer the query language or
Latin scripts, leading to inconsistent outputs. To overcome these issues, we
propose Dual Knowledge Multilingual RAG (DKM-RAG), a simple yet effective
framework that fuses translated multilingual passages with complementary model
knowledge. Empirical results demonstrate that DKM-RAG mitigates language
preference in generation and enhances performance across diverse linguistic
settings.
|
2502.11176
|
LogiDynamics: Unraveling the Dynamics of Logical Inference in Large
Language Model Reasoning
|
cs.CL
|
Modern large language models (LLMs) employ various forms of logical
inference, both implicitly and explicitly, when addressing reasoning tasks.
Understanding how to optimally leverage these inference paradigms is critical
for advancing LLMs' reasoning capabilities. This paper adopts an exploratory
approach by introducing a controlled evaluation environment for analogical
reasoning -- a fundamental cognitive task -- that is systematically
parameterized across three dimensions: modality (textual, visual, symbolic),
difficulty (easy, medium, hard), and task format (multiple-choice or free-text
generation). We analyze the comparative dynamics of inductive, abductive, and
deductive inference pipelines across these dimensions, and demonstrate that our
findings generalize to broader in-context learning tasks. Additionally, we
investigate advanced paradigms such as hypothesis selection, verification, and
refinement, revealing their potential to scale up logical inference in LLM
reasoning. This exploratory study provides a foundation for future research in
enhancing LLM reasoning through systematic logical inference strategies.
|
2502.11177
|
The Mirage of Model Editing: Revisiting Evaluation in the Wild
|
cs.CL
|
Despite near-perfect results in artificial evaluations, the effectiveness of
model editing in real-world applications remains unexplored. To bridge this
gap, we propose to study model editing in question answering (QA) by
establishing a rigorous evaluation practice to assess the effectiveness of
editing methods in correcting LLMs' errors. It consists of QAEdit, a new
benchmark derived from popular QA datasets, and a standardized evaluation
framework. Our single editing experiments indicate that current editing methods
perform substantially worse than previously reported (38.5% vs. ~96%). Through
module analysis and controlled experiments, we demonstrate that this
performance decline stems from issues in evaluation practices of prior editing
research. One key issue is that the inappropriate use of teacher forcing during
testing prevents error propagation by feeding ground-truth tokens (inaccessible
in real-world scenarios) as input. Furthermore, we simulate real-world deployment
by sequential editing, revealing that current approaches fail drastically with
only 1000 edits. Our analysis provides a fundamental reexamination of both the
real-world applicability of existing model editing methods and their evaluation
practices, and establishes a rigorous evaluation framework with key insights to
advance reliable and practical model editing research.
|
2502.11178
|
DAViMNet: SSMs-Based Domain Adaptive Object Detection
|
cs.CV
|
Unsupervised domain adaptation (UDA) for object detection adapts models
trained on labeled source domains to unlabeled target domains, ensuring robust
performance across domain shifts. Transformer-based architectures excel at
capturing long-range dependencies but face efficiency challenges due to their
quadratic attention complexity, which limits scalability in UDA tasks. To
address these issues, we propose a hybrid domain-adaptive Mamba Transformer
architecture that combines Mamba's efficient state-space modeling with
attention mechanisms to tackle domain-specific spatial and channel-wise
variations. Each hybrid block integrates domain-adaptive Mamba blocks and
attention mechanisms: Domain-Adaptive Mamba employs spatial and channel
state-space models to adaptively model domain variations, while attention
mechanisms leverage self-attention for intra-domain feature enhancement and
cross-attention for effective source-target alignment. Our approach processes
both shallow and deeper features, employing an entropy-based knowledge
distillation framework with margin ReLU to emphasize discriminative features
and suppress noise. Gradient Reversal Layers enable adversarial alignment
across network layers, while entropy-driven gating attention with random
perturbations refines target features and mitigates overfitting. By unifying
these components, our architecture achieves state-of-the-art performance in UDA
object detection, balancing efficiency with robust generalization.
|
2502.11179
|
RT-DEMT: A hybrid real-time acupoint detection model combining mamba and
transformer
|
cs.CV cs.AI
|
Traditional Chinese acupuncture methods often face controversy in clinical
practice due to their high subjectivity. Additionally, current
intelligent-assisted acupuncture systems have two major limitations: slow
acupoint localization speed and low accuracy. To address these limitations, we
propose a method that leverages the excellent inference efficiency of the state-space
model Mamba, while retaining the advantages of the attention mechanism in the
traditional DETR architecture, to achieve efficient global information
integration and provide high-quality feature information for acupoint
localization tasks. Furthermore, by employing the concept of residual
likelihood estimation, it eliminates the need for complex upsampling processes,
thereby accelerating the acupoint localization task. Our method achieved
state-of-the-art (SOTA) accuracy on a private dataset of acupoints on the human
back, with an average Euclidean distance pixel error (EPE) of 7.792 and an
average time consumption of 10.05 milliseconds per localization task. Compared
to the second-best algorithm, our method improved both accuracy and speed by
approximately 14\%. This significant advancement not only enhances the efficacy
of acupuncture treatment but also demonstrates the commercial potential of
automated acupuncture robot systems. Access to our method is available at
https://github.com/Sohyu1/RT-DEMT
|
2502.11181
|
Improving Scientific Document Retrieval with Concept Coverage-based
Query Set Generation
|
cs.IR cs.AI
|
In specialized fields like the scientific domain, constructing large-scale
human-annotated datasets poses a significant challenge due to the need for
domain expertise. Recent methods have employed large language models to
generate synthetic queries, which serve as proxies for actual user queries.
However, they lack control over the content generated, often resulting in
incomplete coverage of academic concepts in documents. We introduce the Concept
Coverage-based Query set Generation (CCQGen) framework, designed to generate a
set of queries with comprehensive coverage of the document's concepts. A key
distinction of CCQGen is that it adaptively adjusts the generation process
based on the previously generated queries. We identify concepts not
sufficiently covered by previous queries, and leverage them as conditions for
subsequent query generation. This approach guides each new query to complement
the previous ones, aiding in a thorough understanding of the document.
Extensive experiments demonstrate that CCQGen significantly enhances query
quality and retrieval performance.
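The adaptive conditioning loop can be sketched as follows; `covers` is a hypothetical concept-matching predicate and the function name is our own, not CCQGen's actual interface:

```python
def next_query_condition(doc_concepts, generated_queries, covers):
    """Identify the document concepts not yet covered by any previously
    generated query, to be used as conditions for the next query
    (the core adaptive step of a CCQGen-style loop)."""
    covered = {c for q in generated_queries
                 for c in doc_concepts if covers(q, c)}
    return [c for c in doc_concepts if c not in covered]
```

Each generation round would call this, condition the LLM on the returned concepts, and append the new query, so successive queries complement rather than repeat one another.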
|
2502.11182
|
Stacked Intelligent Metasurface-Based Transceiver Design for Near-Field
Wideband Systems
|
cs.IT math.IT
|
Intelligent metasurfaces may be harnessed for realizing efficient holographic
multiple-input and multiple-output (MIMO) systems, at a low hardware-cost and
high energy-efficiency. As part of this family, we propose a hybrid beamforming
design for stacked intelligent metasurfaces (SIM) aided wideband wireless
systems relying on the near-field channel model. Specifically, the holographic
beamformer is designed based on configuring the phase shifts in each layer of
the SIM for maximizing the sum of the baseband eigen-channel gains of all
users. To optimize the SIM phase shifts, we propose a layer-by-layer iterative
algorithm for optimizing the phase shifts in each layer alternately. Then, the
minimum mean square error (MMSE) transmit precoding method is employed for the
digital beamformer to support multi-user access. Furthermore, the mitigation of
the SIM phase tuning error is also taken into account in the digital beamformer
by exploiting its statistics. The power sharing ratio of each user is designed
based on the iterative waterfilling power allocation algorithm. Additionally,
our analytical results indicate that the spectral efficiency attained saturates
in the high signal-to-noise ratio (SNR) region due to the phase tuning error
resulting from the imperfect SIM hardware quality. The simulation results show
that the SIM-aided holographic MIMO outperforms the state-of-the-art (SoA)
single-layer holographic MIMO in terms of its achievable rate. We further
demonstrate that the near-field channel model allows the SIM-based transceiver
design to support multiple users, since the spatial resources represented both
by the angle domain and the distance domain can be exploited.
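The iterative water-filling power allocation mentioned above is a standard algorithm; a minimal sketch (the `gains` list abstracts the per-user eigen-channel gains, channels are assumed non-empty with positive gains, and edge cases are simplified):

```python
def waterfilling(gains, total_power):
    """Water-filling: allocate p_k = max(mu - 1/g_k, 0) so that the
    allocated powers sum to total_power. Channels too weak to reach
    the water level mu are dropped iteratively."""
    active = sorted(gains, reverse=True)
    while active:
        # Water level that spends the full budget on the active set.
        mu = (total_power + sum(1.0 / g for g in active)) / len(active)
        if mu - 1.0 / active[-1] >= 0:
            break
        active.pop()  # weakest active channel gets no power; retry
    return [max(mu - 1.0 / g, 0.0) for g in gains]
```

Strong channels receive more power, and channels whose inverse gain exceeds the water level are shut off entirely.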
|
2502.11183
|
Don't Get Lost in the Trees: Streamlining LLM Reasoning by Overcoming
Tree Search Exploration Pitfalls
|
cs.CL
|
Recent advancements in tree search algorithms guided by verifiers have
significantly enhanced the reasoning capabilities of large language models
(LLMs), but at the cost of increased computational resources. In this work, we
identify two key challenges contributing to this inefficiency:
$\textit{over-exploration}$ due to redundant states with semantically
equivalent content, and $\textit{under-exploration}$ caused by high variance in
verifier scoring leading to frequent trajectory switching. To address these
issues, we propose FETCH, an e$\textbf{f}$fici$\textbf{e}$nt $\textbf{t}$ree
sear$\textbf{ch}$ framework, which is a flexible, plug-and-play system
compatible with various tree search algorithms. Our framework mitigates
over-exploration by merging semantically similar states using agglomerative
clustering of text embeddings obtained from a fine-tuned SimCSE model. To
tackle under-exploration, we enhance verifiers by incorporating temporal
difference learning with adjusted $\lambda$-returns during training to reduce
variance, and employing a verifier ensemble to aggregate scores during
inference. Experiments on GSM8K, GSM-Plus, and MATH datasets demonstrate that
our methods significantly improve reasoning accuracy and computational
efficiency across four different tree search algorithms, paving the way for
more practical applications of LLM-based reasoning. The code will be released
upon acceptance.
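A stdlib-only stand-in for the state-merging idea: the paper uses agglomerative clustering over fine-tuned SimCSE embeddings, whereas this sketch greedily assigns each state to the first cluster whose representative exceeds a cosine-similarity threshold:

```python
import math

def cosine(u, v):
    """Cosine similarity between two non-zero vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def merge_states(embeddings, threshold=0.9):
    """Greedy clustering of state embeddings: a state joins the first
    cluster whose representative is similar enough, else it starts a
    new cluster. Returns one cluster id per state."""
    reps, labels = [], []
    for emb in embeddings:
        for cid, rep in enumerate(reps):
            if cosine(emb, rep) >= threshold:
                labels.append(cid)
                break
        else:
            reps.append(emb)
            labels.append(len(reps) - 1)
    return labels
```

States sharing a cluster id would be treated as one node in the search tree, which is what curbs over-exploration of semantically redundant branches.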
|
2502.11184
|
Can't See the Forest for the Trees: Benchmarking Multimodal Safety
Awareness for Multimodal LLMs
|
cs.CL cs.AI cs.CV cs.MM
|
Multimodal Large Language Models (MLLMs) have expanded the capabilities of
traditional language models by enabling interaction through both text and
images. However, ensuring the safety of these models remains a significant
challenge, particularly in accurately identifying whether multimodal content is
safe or unsafe-a capability we term safety awareness. In this paper, we
introduce MMSafeAware, the first comprehensive multimodal safety awareness
benchmark designed to evaluate MLLMs across 29 safety scenarios with 1500
carefully curated image-prompt pairs. MMSafeAware includes both unsafe and
over-safety subsets to assess models' abilities to correctly identify unsafe
content and avoid over-sensitivity that can hinder helpfulness. Evaluating nine
widely used MLLMs using MMSafeAware reveals that current models are not
sufficiently safe and often overly sensitive; for example, GPT-4V misclassifies
36.1% of unsafe inputs as safe and 59.9% of benign inputs as unsafe. We further
explore three methods to improve safety awareness-prompting-based approaches,
visual contrastive decoding, and vision-centric reasoning fine-tuning-but find
that none achieve satisfactory performance. Our findings highlight the profound
challenges in developing MLLMs with robust safety awareness, underscoring the
need for further research in this area. All the code and data will be publicly
available to facilitate future research.
|
2502.11187
|
TituLLMs: A Family of Bangla LLMs with Comprehensive Benchmarking
|
cs.CL cs.AI
|
In this paper, we present TituLLMs, the first large pretrained Bangla LLMs,
available in 1B and 3B parameter sizes. Due to computational constraints during
both training and inference, we focused on smaller models. To train TituLLMs,
we collected a pretraining dataset of approximately 37 billion tokens. We
extended the Llama-3.2 tokenizer to incorporate language- and culture-specific
knowledge, which also enables faster training and inference. There was a lack
of benchmarking datasets to evaluate LLMs for Bangla. To address this gap, we
developed five benchmarking datasets. We benchmarked various LLMs, including
TituLLMs, and demonstrated that TituLLMs outperforms its initial multilingual
versions, though not in every case, highlighting the complexities
of language adaptation. Our work lays the groundwork for adapting existing
multilingual open models to other low-resource languages. To facilitate broader
adoption and further research, we have made the TituLLMs models and
benchmarking datasets publicly available
(https://huggingface.co/collections/hishab/titulm-llama-family-6718d31fc1b83529276f490a).
|
2502.11188
|
Exploring information geometry: Recent Advances and Connections to
Topological Field Theory
|
math.DG cs.IT math.AG math.IT
|
This introductory text arises from a lecture given in G\"oteborg, Sweden,
by the first author and is intended for undergraduate students, as well
as for any mathematically inclined reader wishing to explore a synthesis of
ideas connecting geometry and statistics. At its core, this work seeks to
illustrate the profound and yet natural interplay between differential
geometry, probability theory, and the rich algebraic structures encoded in
(pre-)Frobenius manifolds.
The exposition is structured into three principal parts. The first part
provides a concise introduction to differential topology and geometry,
emphasizing the role of smooth manifolds, connections, and curvature in the
formulation of geometric structures. The second part is devoted to probability,
measures, and statistics, where the notion of a probability space is refined
into a geometric object, thus paving the way for a deeper mathematical
understanding of statistical models. Finally, in the third part, we introduce
(pre-)Frobenius manifolds, revealing their surprising connection to exponential
families of probability distributions and, more broadly, discussing their role in
the geometry of information. At the end of these three parts the reader will
find stimulating exercises.
By bringing together these seemingly distant disciplines, we aim to highlight
the natural emergence of geometric structures in statistical theory. This work
does not seek to be exhaustive but rather to provide the reader with a pathway
into a domain of mathematics that is still in its formative stages, where many
fundamental questions remain open. The text is accessible without requiring
advanced prerequisites and should serve as an invitation to further
exploration.
|
2502.11190
|
ReLearn: Unlearning via Learning for Large Language Models
|
cs.CL cs.AI cs.CV cs.HC cs.LG
|
Current unlearning methods for large language models usually rely on reverse
optimization to reduce target token probabilities. However, this paradigm
disrupts subsequent token prediction, degrading model performance and
linguistic coherence. Moreover, existing evaluation metrics overemphasize
contextual forgetting while inadequately assessing response fluency and
relevance. To address these challenges, we propose ReLearn, a data augmentation
and fine-tuning pipeline for effective unlearning, along with a comprehensive
evaluation framework. This framework introduces Knowledge Forgetting Rate (KFR)
and Knowledge Retention Rate (KRR) to measure knowledge-level preservation, and
Linguistic Score (LS) to evaluate generation quality. Our experiments show that
ReLearn successfully achieves targeted forgetting while preserving high-quality
output. Through mechanistic analysis, we further demonstrate how reverse
optimization disrupts coherent text generation, while ReLearn preserves this
essential capability. Code is available at https://github.com/zjunlp/unlearn.
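One plausible reading of the forgetting and retention metrics, for illustration only; the paper's exact definitions of KFR and KRR may differ:

```python
def knowledge_rates(forget_hits, retain_hits):
    """Illustrative metrics (not necessarily the paper's formulas):
    KFR = fraction of forget-set facts the model no longer answers,
    KRR = fraction of retain-set facts it still answers correctly.
    Each list holds 1 for a correct answer, 0 otherwise."""
    kfr = 1.0 - sum(forget_hits) / len(forget_hits)
    krr = sum(retain_hits) / len(retain_hits)
    return kfr, krr
```

An effective unlearning run should push KFR toward 1 while keeping KRR (and a fluency score like LS) high.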
|
2502.11191
|
Primus: A Pioneering Collection of Open-Source Datasets for
Cybersecurity LLM Training
|
cs.CR cs.AI cs.CL
|
Large Language Models (LLMs) have shown remarkable advancements in
specialized fields such as finance, law, and medicine. However, in
cybersecurity, we have noticed a lack of open-source datasets, with a
particular shortage of high-quality cybersecurity pretraining corpora, even though
much research indicates that LLMs acquire their knowledge during pretraining.
To address this, we present a comprehensive suite of datasets covering all
major training stages, including pretraining, instruction fine-tuning, and
reasoning distillation with cybersecurity-specific self-reflection data.
Extensive ablation studies demonstrate their effectiveness on public
cybersecurity benchmarks. In particular, continual pre-training on our dataset
yields a 15.88% improvement in the aggregate score, while reasoning
distillation leads to a 10% gain in security certification (CISSP). We will
release all datasets and trained cybersecurity LLMs under the ODC-BY and MIT
licenses to encourage further research in the community. For access to all
datasets and model weights, please refer to
https://huggingface.co/collections/trendmicro-ailab/primus-67b1fd27052b802b4af9d243.
|
2502.11193
|
Large Language Models Penetration in Scholarly Writing and Peer Review
|
cs.CL
|
While the widespread use of Large Language Models (LLMs) brings convenience,
it also raises concerns about the credibility of academic research and
scholarly processes. To better understand these dynamics, we evaluate the
penetration of LLMs across academic workflows from multiple perspectives and
dimensions, providing compelling evidence of their growing influence. We
propose a framework with two components: \texttt{ScholarLens}, a curated
dataset of human- and LLM-generated content across scholarly writing and peer
review for multi-perspective evaluation, and \texttt{LLMetrica}, a tool for
assessing LLM penetration using rule-based metrics and model-based detectors
for multi-dimensional evaluation. Our experiments demonstrate the effectiveness
of \texttt{LLMetrica}, revealing the increasing role of LLMs in scholarly
processes. These findings emphasize the need for transparency, accountability,
and ethical practices in LLM usage to maintain academic credibility.
|
2502.11195
|
From Deception to Perception: The Surprising Benefits of Deepfakes for
Detecting, Measuring, and Mitigating Bias
|
cs.CV cs.AI
|
While deepfake technologies have predominantly been criticized for potential
misuse, our study demonstrates their significant potential as tools for
detecting, measuring, and mitigating biases in key societal domains. By
employing deepfake technology to generate controlled facial images, we extend
the scope of traditional correspondence studies beyond mere textual
manipulations. This enhancement is crucial in scenarios such as pain
assessments, where subjective biases triggered by sensitive features in facial
images can profoundly affect outcomes. Our results reveal that deepfakes not
only maintain the effectiveness of correspondence studies but also introduce
groundbreaking advancements in bias measurement and correction techniques. This
study emphasizes the constructive role of deepfake technologies as essential
tools for advancing societal equity and fairness.
|
2502.11196
|
How Do LLMs Acquire New Knowledge? A Knowledge Circuits Perspective on
Continual Pre-Training
|
cs.LG cs.AI cs.CL cs.CV cs.HC
|
Despite exceptional capabilities in knowledge-intensive tasks, Large Language
Models (LLMs) face a critical gap in understanding how they internalize new
knowledge, particularly how to structurally embed acquired knowledge in their
neural computations. We address this issue through the lens of knowledge
circuit evolution, identifying computational subgraphs that facilitate
knowledge storage and processing. Our systematic analysis of circuit evolution
throughout continual pre-training reveals several key findings: (1) the
acquisition of new knowledge is influenced by its relevance to pre-existing
knowledge; (2) the evolution of knowledge circuits exhibits a distinct phase
shift from formation to optimization; (3) the evolution of knowledge circuits
follows a deep-to-shallow pattern. These insights not only advance our
theoretical understanding of the mechanisms of new knowledge acquisition in
LLMs, but also provide potential implications for improving continual
pre-training strategies to enhance model performance. Code and data will be
available at https://github.com/zjunlp/DynamicKnowledgeCircuits.
|
2502.11197
|
CSP: A Simulator For Multi-Agent Ranking Competitions
|
cs.IR cs.GT
|
In ranking competitions, document authors compete for the highest rankings by
modifying their content in response to past rankings. Previous studies focused
on human participants, primarily students, in controlled settings. The rise of
generative AI, particularly Large Language Models (LLMs), introduces a new
paradigm: using LLMs as document authors. This approach addresses scalability
constraints in human-based competitions and reflects the growing role of
LLM-generated content on the web, itself a prime example of a ranking competition. We
introduce a highly configurable ranking competition simulator that leverages
LLMs as document authors. It includes analytical tools to examine the resulting
datasets. We demonstrate its capabilities by generating multiple datasets and
conducting an extensive analysis. Our code and datasets are publicly available
for research.
|
2502.11198
|
ANCHOLIK-NER: A Benchmark Dataset for Bangla Regional Named Entity
Recognition
|
cs.CL cs.LG
|
ANCHOLIK-NER is a linguistically diverse dataset for Named Entity Recognition
(NER) in Bangla regional dialects, capturing variations across Sylhet,
Chittagong, and Barishal. The dataset comprises around 10,443 sentences, roughly 3,481
per region. The data was collected from two publicly available
datasets and through web scraping of various online newspapers and articles. To
ensure high-quality annotations, the BIO tagging scheme was employed, and
professional annotators with expertise in regional dialects carried out the
labeling process. The dataset is structured into separate subsets for each
region and is available in CSV format. Each entry contains textual data
along with identified named entities and their corresponding annotations. Named
entities are categorized into ten distinct classes: Person, Location,
Organization, Food, Animal, Colour, Role, Relation, Object, and Miscellaneous.
This dataset serves as a valuable resource for developing and evaluating NER
models for Bangla dialectal variations, contributing to regional language
processing and low-resource NLP applications. It can be utilized to enhance NER
systems in Bangla dialects, improve regional language understanding, and
support applications in machine translation, information retrieval, and
conversational AI.
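The BIO tagging scheme used for the annotations can be decoded into entity spans with a short routine. The sketch below is illustrative only (the function name and the parallel token/tag list format are assumptions, not the dataset's actual loader); the entity classes follow the ten listed above.

```python
def bio_to_entities(tokens, tags):
    """Decode BIO tags into (entity_type, token_span) tuples.

    B-X starts an entity of type X, I-X continues it, O is outside.
    A stray I-X without a matching preceding tag is treated as a new
    entity start (a common lenient convention)."""
    entities, start, etype = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if start is not None:
                entities.append((etype, tokens[start:i]))
            start, etype = i, tag[2:]
        elif tag.startswith("I-") and start is not None and tag[2:] == etype:
            continue  # entity continues
        elif tag.startswith("I-"):  # stray or type-mismatched I- tag
            if start is not None:
                entities.append((etype, tokens[start:i]))
            start, etype = i, tag[2:]
        else:  # "O"
            if start is not None:
                entities.append((etype, tokens[start:i]))
            start, etype = None, None
    if start is not None:
        entities.append((etype, tokens[start:]))
    return entities

# Hypothetical example (tokens are illustrative, not from the dataset):
# bio_to_entities(["Rahim", "went", "to", "Sylhet", "city"],
#                 ["B-Person", "O", "O", "B-Location", "I-Location"])
# → [("Person", ["Rahim"]), ("Location", ["Sylhet", "city"])]
```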
|
2502.11201
|
Bridging the Gap: Enabling Natural Language Queries for NoSQL Databases
through Text-to-NoSQL Translation
|
cs.DB cs.AI
|
NoSQL databases have become increasingly popular due to their outstanding
performance in handling large-scale, unstructured, and semi-structured data,
highlighting the need for user-friendly interfaces to bridge the gap between
non-technical users and complex database queries. In this paper, we introduce
the Text-to-NoSQL task, which aims to convert natural language queries into
NoSQL queries, thereby lowering the technical barrier for non-expert users. To
promote research in this area, we developed a novel automated dataset
construction process and released a large-scale and open-source dataset for
this task, named TEND (short for Text-to-NoSQL Dataset). Additionally, we
designed a SLM (Small Language Model)-assisted and RAG (Retrieval-augmented
Generation)-assisted multi-step framework called SMART, which is specifically
designed for Text-to-NoSQL conversion. To ensure comprehensive evaluation of
the models, we also introduced a detailed set of metrics that assess the
model's performance from both the query itself and its execution results. Our
experimental results demonstrate the effectiveness of our approach and
establish a benchmark for future research in this emerging field. We believe
that our contributions will pave the way for more accessible and intuitive
interactions with NoSQL databases.
|
2502.11203
|
Multiscale autonomous forecasting of plasma systems' dynamics using
neural networks
|
physics.plasm-ph cs.LG
|
Plasma systems exhibit complex multiscale dynamics, resolving which poses
significant challenges for conventional numerical simulations. Machine learning
(ML) offers an alternative by learning data-driven representations of these
dynamics. Yet existing ML time-stepping models suffer from error accumulation,
instability, and limited long-term forecasting horizons. This paper
demonstrates the application of a hierarchical multiscale neural network
architecture for autonomous plasma forecasting. The framework integrates
multiple neural networks trained across different temporal scales to capture
both fine-scale and large-scale behaviors while mitigating compounding error in
recursive evaluation. Fine-scale networks accurately resolve fast-evolving
features, while coarse-scale networks provide broader temporal context,
reducing the frequency of recursive updates and limiting the accumulation of
small prediction errors over time. We first evaluate the method using canonical
nonlinear dynamical systems and compare its performance against classical
single-scale neural networks. The results demonstrate that single-scale neural
networks experience rapid divergence due to recursive error accumulation,
whereas the multiscale approach improves stability and extends prediction
horizons. Next, our ML model is applied to two plasma configurations of high
scientific and applied significance, demonstrating its ability to preserve
spatial structures and capture multiscale plasma dynamics. By leveraging
multiple time-stepping resolutions, the applied framework is shown to
outperform conventional single-scale networks for the studied plasma test
cases. The results of this work position the hierarchical multiscale neural
network as a promising tool for efficient plasma forecasting and digital twin
applications.
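The error-compounding argument above can be illustrated with a toy sketch. The "models" below are analytic steppers for exponential decay with a small multiplicative error injected per call, standing in for trained networks (all names and constants are illustrative): a coarse stepper called once per macro-step accumulates far less recursive error over the same horizon than a fine stepper called every step.

```python
import math

DECAY, DT, MACRO = 1.0, 0.01, 10        # decay rate, fine step, fine steps per macro-step
FINE_ERR = COARSE_ERR = 1.002           # per-call multiplicative model error

def fine_step(x):
    # One fine-scale prediction, slightly wrong by a factor FINE_ERR.
    return x * math.exp(-DECAY * DT) * FINE_ERR

def coarse_step(x):
    # One coarse-scale prediction spanning MACRO fine steps.
    return x * math.exp(-DECAY * DT * MACRO) * COARSE_ERR

def rollout_fine(x, n_steps):
    # Single-scale recursion: error compounds once per fine step.
    for _ in range(n_steps):
        x = fine_step(x)
    return x

def rollout_multiscale(x, n_steps):
    # Only the coarse chain is recursive; fine networks would fill in
    # the intermediate states without feeding back into the recursion.
    for _ in range(n_steps // MACRO):
        x = coarse_step(x)
    return x

x0, n = 1.0, 100
exact = x0 * math.exp(-DECAY * DT * n)
err_fine = abs(rollout_fine(x0, n) - exact)       # ~1.002**100 compounding
err_multi = abs(rollout_multiscale(x0, n) - exact)  # only ~1.002**10
```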
|
2502.11205
|
Deep Contrastive Learning for Feature Alignment: Insights from
Housing-Household Relationship Inference
|
cs.LG cs.CY
|
Housing and household characteristics are key determinants of social and
economic well-being, yet our understanding of their interrelationships remains
limited. This study addresses this knowledge gap by developing a deep
contrastive learning (DCL) model to infer housing-household relationships using
the American Community Survey (ACS) Public Use Microdata Sample (PUMS). More
broadly, the proposed model is suitable for a class of problems where the goal
is to learn joint relationships between two distinct entities without
explicitly labeled ground truth data. Our proposed dual-encoder DCL approach
leverages co-occurrence patterns in PUMS and introduces a bisect K-means
clustering method to overcome the absence of ground truth labels. The
dual-encoder DCL architecture is designed to handle the semantic differences
between housing (building) and household (people) features while mitigating
noise introduced by clustering. To validate the model, we generate a synthetic
ground truth dataset and conduct comprehensive evaluations. The model further
demonstrates its superior performance in capturing housing-household
relationships in Delaware compared to state-of-the-art methods. A
transferability test in North Carolina confirms its generalizability across
diverse sociodemographic and geographic contexts. Finally, the post-hoc
explainable AI analysis using SHAP values reveals that tenure status and
mortgage information play a more significant role in housing-household matching
than traditionally emphasized factors such as the number of persons and rooms.
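A minimal sketch of the dual-encoder contrastive objective, assuming a symmetric InfoNCE loss over in-batch negatives (the paper's encoder architectures, bisect K-means pseudo-labels, and noise handling are omitted; function names are illustrative):

```python
import numpy as np

def log_softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

def info_nce_loss(housing_emb, household_emb, temperature=0.1):
    """Symmetric InfoNCE loss for a batch of matched embedding pairs:
    row i of each matrix is a positive (housing, household) pair; all
    other rows in the batch act as in-batch negatives."""
    h = housing_emb / np.linalg.norm(housing_emb, axis=1, keepdims=True)
    g = household_emb / np.linalg.norm(household_emb, axis=1, keepdims=True)
    logits = h @ g.T / temperature              # scaled cosine similarities
    idx = np.arange(logits.shape[0])
    loss_h2g = -log_softmax(logits)[idx, idx].mean()    # housing -> household
    loss_g2h = -log_softmax(logits.T)[idx, idx].mean()  # household -> housing
    return 0.5 * (loss_h2g + loss_g2h)
```

With perfectly matched embeddings the loss is near zero; permuting one side of the batch (breaking the pairing) drives it up, which is the signal the dual encoders are trained on.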
|
2502.11211
|
A Survey of LLM-based Agents in Medicine: How far are we from Baymax?
|
cs.CL cs.AI cs.CV
|
Large Language Models (LLMs) are transforming healthcare through the
development of LLM-based agents that can understand, reason about, and assist
with medical tasks. This survey provides a comprehensive review of LLM-based
agents in medicine, examining their architectures, applications, and
challenges. We analyze the key components of medical agent systems, including
system profiles, clinical planning mechanisms, medical reasoning frameworks,
and external capacity enhancement. The survey covers major application
scenarios such as clinical decision support, medical documentation, training
simulations, and healthcare service optimization. We discuss evaluation
frameworks and metrics used to assess these agents' performance in healthcare
settings. While LLM-based agents show promise in enhancing healthcare delivery,
several challenges remain, including hallucination management, multimodal
integration, implementation barriers, and ethical considerations. The survey
concludes by highlighting future research directions, including advances in
medical reasoning inspired by recent developments in LLM architectures,
integration with physical systems, and improvements in training simulations.
This work provides researchers and practitioners with a structured overview of
the current state and future prospects of LLM-based agents in medicine.
|
2502.11213
|
Stochastic Optimization of Inventory at Large-scale Supply Chains
|
math.OC cs.AI cs.LG
|
Today's global supply chains face growing challenges due to rapidly changing
market conditions, increased network complexity and inter-dependency, and
dynamic uncertainties in supply, demand, and other factors. To combat these
challenges, organizations employ Material Requirements Planning (MRP) software
solutions to set inventory stock buffers - for raw materials, work-in-process
goods, and finished products - to help them meet customer service levels.
However, holding excess inventory further complicates operations and can lock
up millions of dollars of capital that could be otherwise deployed.
Furthermore, most commercially available MRP solutions fall short in
considering uncertainties and do not result in optimal solutions for modern
enterprises.
At C3 AI, we fundamentally reformulate the inventory management problem as a
constrained stochastic optimization. We then propose a simulation-optimization
framework that minimizes inventory and related costs while maintaining desired
service levels. The framework's goal is to find the optimal reorder parameters
that minimize costs subject to a pre-defined service-level constraint and all
other real-world operational constraints. These optimal reorder parameters can
be fed back into an MRP system to drive optimal order placement, or used to
place optimal orders directly. This approach has proven successful in reducing
inventory levels by 10-35 percent, resulting in hundreds of millions of dollars
of economic benefit for major enterprises at a global scale.
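The simulation-optimization loop can be sketched for a single item under an (s, Q) reorder policy. Everything below (demand model, grid ranges, starting stock) is an illustrative assumption, not C3 AI's framework: simulate each candidate parameter pair, then keep the lowest average inventory that still meets the service-level constraint.

```python
import random

def simulate(reorder_point, order_qty, horizon=365, lead_time=5, seed=0):
    """Simulate a single-item (s, Q) policy under uniform random demand.
    Returns (service_level, average_on_hand_inventory)."""
    rng = random.Random(seed)
    on_hand, pipeline = 50, []              # starting stock; in-transit orders
    served = total = inv_sum = 0
    for day in range(horizon):
        on_hand += sum(q for d, q in pipeline if d == day)   # receive arrivals
        pipeline = [(d, q) for d, q in pipeline if d != day]
        demand = rng.randint(5, 15)
        served += min(demand, on_hand)      # unmet demand is lost
        total += demand
        on_hand = max(0, on_hand - demand)
        inv_sum += on_hand
        position = on_hand + sum(q for _, q in pipeline)
        if position <= reorder_point:       # reorder on inventory position
            pipeline.append((day + lead_time, order_qty))
    return served / total, inv_sum / horizon

def optimize(target_service=0.95):
    """Grid search: minimize average inventory subject to service constraint."""
    best = None
    for s in range(40, 121, 10):
        for q in range(50, 201, 25):
            service, avg_inv = simulate(s, q)
            if service >= target_service and (best is None or avg_inv < best[2]):
                best = (s, q, avg_inv, service)
    return best
```

In the real setting the optimal (s, Q) found this way would be fed back into the MRP system as the reorder parameters.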
|
2502.11221
|
PlanGenLLMs: A Modern Survey of LLM Planning Capabilities
|
cs.AI cs.CL
|
LLMs have immense potential for generating plans, transforming an initial
world state into a desired goal state. A large body of research has explored
the use of LLMs for various planning tasks, from web navigation to travel
planning and database querying. However, many of these systems are tailored to
specific problems, making it challenging to compare them or determine the best
approach for new tasks. There is also a lack of clear and consistent evaluation
criteria. Our survey aims to offer a comprehensive overview of current LLM
planners to fill this gap. It builds on foundational work by Kartam and Wilkins
(1990) and examines six key performance criteria: completeness, executability,
optimality, representation, generalization, and efficiency. For each, we
provide a thorough analysis of representative works and highlight their
strengths and weaknesses. Our paper also identifies crucial future directions,
making it a valuable resource for both practitioners and newcomers interested
in leveraging LLM planning to support agentic workflows.
|
2502.11223
|
Asymmetric Conflict and Synergy in Post-training for LLM-based
Multilingual Machine Translation
|
cs.CL
|
The emergence of Large Language Models (LLMs) has advanced multilingual
machine translation (MMT), yet the Curse of Multilinguality (CoM) remains a
major challenge. Existing work in LLM-based MMT typically mitigates this issue
via scaling up training and computation budget, which raises a critical
question: Is scaling up the training and computation budget truly necessary for
high-quality MMT, or can a deeper understanding of CoM provide a more efficient
solution? To explore this problem, we analyze the linguistic conflicts and
synergy, the underlying mechanism of CoM during the post-training phase. We
identify an asymmetric phenomenon in linguistic conflicts and synergy: the
dominance of conflicts and synergy varies in different translation directions,
leading to sub-optimal adaptation in existing post-training methods. We further
find that a significant bottleneck in MMT appears to lie in post-training
rather than multilingual pre-training, suggesting the need for more effective
adaptation strategies. Building on these new insights, we propose a
direction-aware training approach, combined with group-wise model merging, to
address asymmetry in linguistic conflicts and synergy explicitly. Leveraging
this strategy, our method fine-tunes X-ALMA-13B-Pretrain (trained only with
multilingual pre-training), achieving performance comparable to X-ALMA-13B (SFT
only) while using only 20B pretraining tokens and 17B parameters (5.5x fewer
pretraining tokens and a 1.7x smaller model), with just a 0.85 COMET drop on the
Flores-200 test sets of 50 languages.
|
2502.11225
|
METAFOR: A Hybrid Metaheuristics Software Framework for Single-Objective
Continuous Optimization Problems
|
cs.NE cs.AI
|
Hybrid metaheuristics are powerful techniques for solving difficult
optimization problems that exploit the strengths of different approaches in a
single implementation. For algorithm designers, however, creating hybrid
metaheuristic implementations has become increasingly challenging due to the
vast number of design options available in the literature and the fact that
they often rely on their knowledge and intuition to come up with new algorithm
designs. In this paper, we propose a modular metaheuristic software framework,
called METAFOR, that can be coupled with an automatic algorithm configuration
tool to automatically design hybrid metaheuristics. METAFOR is specifically
designed to hybridize Particle Swarm Optimization, Differential Evolution and
Covariance Matrix Adaptation-Evolution Strategy, and includes a local search
module that allows their execution to be interleaved with a subordinate local
search. We use the configuration tool irace to automatically generate 17
different metaheuristic implementations and evaluate their performance on a
diverse set of continuous optimization problems. Our results show that, across
all the considered problem classes, automatically generated hybrid
implementations are able to outperform configured single-approach
implementations, while these latter offer advantages on specific classes of
functions. We provide useful insights on the type of hybridization that works
best for specific problem classes, the algorithm components that contribute to
the performance of the algorithms, and the advantages and disadvantages of two
well-known instance separation strategies: creating a stratified training set
using a fixed percentage, and leave-one-class-out cross-validation.
|
2502.11227
|
Integrating Retrospective Framework in Multi-Robot Collaboration
|
cs.RO
|
Recent advancements in Large Language Models (LLMs) have demonstrated
substantial capabilities in enhancing communication and coordination in
multi-robot systems. However, existing methods often struggle to achieve
efficient collaboration and decision-making in dynamic and uncertain
environments, which are common in real-world multi-robot scenarios. To address
these challenges, we propose a novel retrospective actor-critic framework for
multi-robot collaboration. This framework integrates two key components: (1) an
actor that performs real-time decision-making based on observations and task
directives, and (2) a critic that retrospectively evaluates the outcomes to
provide feedback for continuous refinement, such that the proposed framework
can adapt effectively to dynamic conditions. Extensive experiments conducted in
simulated environments validate the effectiveness of our approach,
demonstrating significant improvements in task performance and adaptability.
This work offers a robust solution to persistent challenges in robotic
collaboration.
|
2502.11228
|
Vendi-RAG: Adaptively Trading-Off Diversity And Quality Significantly
Improves Retrieval Augmented Generation With LLMs
|
cs.CL cs.AI
|
Retrieval-augmented generation (RAG) enhances large language models (LLMs)
for domain-specific question-answering (QA) tasks by leveraging external
knowledge sources. However, traditional RAG systems primarily focus on
relevance-based retrieval and often struggle with redundancy, especially when
reasoning requires connecting information from multiple sources. This paper
introduces Vendi-RAG, a framework based on an iterative process that jointly
optimizes retrieval diversity and answer quality. This joint optimization leads
to significantly higher accuracy for multi-hop QA tasks. Vendi-RAG leverages
the Vendi Score (VS), a flexible similarity-based diversity metric, to promote
semantic diversity in document retrieval. It then uses an LLM judge that
evaluates candidate answers, generated after a reasoning step, and outputs a
score that the retriever uses to balance relevance and diversity among the
retrieved documents during each iteration. Experiments on three challenging
datasets -- HotpotQA, MuSiQue, and 2WikiMultiHopQA -- demonstrate Vendi-RAG's
effectiveness in multi-hop reasoning tasks. The framework achieves significant
accuracy improvements over traditional single-step and multi-step RAG
approaches, with accuracy increases reaching up to +4.2% on HotpotQA, +4.1% on
2WikiMultiHopQA, and +1.3% on MuSiQue compared to Adaptive-RAG, the current
best baseline. The benefits of Vendi-RAG are even more pronounced as the number
of retrieved documents increases. Finally, we evaluated Vendi-RAG across
different LLM backbones, including GPT-3.5, GPT-4, and GPT-4o-mini, and
observed consistent improvements, demonstrating that the framework's advantages
are model-agnostic.
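The Vendi Score itself has a compact closed form: for a positive semi-definite similarity matrix $K$ with unit diagonal over $n$ items, it is the exponential of the Shannon entropy of the eigenvalues of $K/n$. A minimal implementation of the metric (independent of the Vendi-RAG retrieval pipeline):

```python
import numpy as np

def vendi_score(K):
    """Vendi Score of a PSD similarity matrix K with unit diagonal:
    exp of the Shannon entropy of the eigenvalues of K/n. Ranges from
    1 (all items identical) to n (all items pairwise dissimilar)."""
    n = K.shape[0]
    eigvals = np.linalg.eigvalsh(K / n)
    eigvals = eigvals[eigvals > 1e-12]      # drop numerical zeros
    return float(np.exp(-np.sum(eigvals * np.log(eigvals))))
```

An identity similarity matrix (fully diverse retrieved set) scores n, while an all-ones matrix (n copies of the same document) scores 1, which is why the score can steer the retriever toward semantic diversity.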
|
2502.11229
|
Provable and Practical Online Learning Rate Adaptation with
Hypergradient Descent
|
math.OC cs.LG
|
This paper investigates the convergence properties of the hypergradient
descent method (HDM), a 25-year-old heuristic originally proposed for adaptive
stepsize selection in stochastic first-order methods. We provide the first
rigorous convergence analysis of HDM using the online learning framework of
[Gao24] and apply this analysis to develop new state-of-the-art adaptive
gradient methods with empirical and theoretical support. Notably, HDM
automatically identifies the optimal stepsize for the local optimization
landscape and achieves local superlinear convergence. Our analysis explains the
instability of HDM reported in the literature and proposes efficient strategies
to address it. We also develop two HDM variants with heavy-ball and Nesterov
momentum. Experiments on deterministic convex problems show HDM with heavy-ball
momentum (HDM-HB) exhibits robust performance and significantly outperforms
other adaptive first-order methods. Moreover, HDM-HB often matches the
performance of L-BFGS, an efficient and practical quasi-Newton method, using
less memory and cheaper iterations.
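In its classic form, hypergradient descent treats the stepsize as a parameter and updates it with the approximate gradient of the loss with respect to the stepsize, which reduces to the inner product of consecutive gradients. A minimal sketch of this base heuristic on a quadratic (not the paper's HDM-HB variant; names and constants are illustrative):

```python
import numpy as np

def hdm(grad, x0, lr0=0.01, hyper_lr=1e-4, n_iters=100):
    """Gradient descent whose stepsize is itself adapted by gradient
    descent: the hypergradient of the loss w.r.t. the stepsize is
    approximately -g_t . g_{t-1}, so lr += hyper_lr * g_t . g_{t-1}."""
    x, lr = np.asarray(x0, dtype=float), lr0
    g_prev = np.zeros_like(x)
    for _ in range(n_iters):
        g = grad(x)
        lr += hyper_lr * float(g @ g_prev)  # hypergradient step on the stepsize
        x = x - lr * g
        g_prev = g
    return x, lr

# f(x) = 0.5 * ||x||^2, so grad(x) = x; aligned consecutive gradients
# make the stepsize grow until the iterates contract quickly.
x_final, lr_final = hdm(lambda x: x, x0=[5.0, -3.0])
```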
|
2502.11234
|
MaskFlow: Discrete Flows For Flexible and Efficient Long Video
Generation
|
cs.CV
|
Generating long, high-quality videos remains a challenge due to the complex
interplay of spatial and temporal dynamics and hardware limitations. In this
work, we introduce \textbf{MaskFlow}, a unified video generation framework that
combines discrete representations with flow-matching to enable efficient
generation of high-quality long videos. By leveraging a frame-level masking
strategy during training, MaskFlow conditions on previously generated unmasked
frames to generate videos with lengths ten times beyond that of the training
sequences. MaskFlow does so very efficiently by enabling the use of fast Masked
Generative Model (MGM)-style sampling and can be deployed in both fully
autoregressive as well as full-sequence generation modes. We validate the
quality of our method on the FaceForensics (FFS) and Deepmind Lab (DMLab)
datasets and report Fr\'echet Video Distance (FVD) competitive with
state-of-the-art approaches. We also provide a detailed analysis on the
sampling efficiency of our method and demonstrate that MaskFlow can be applied
to both timestep-dependent and timestep-independent models in a training-free
manner.
|
2502.11238
|
Span-Agnostic Optimal Sample Complexity and Oracle Inequalities for
Average-Reward RL
|
cs.LG cs.IT math.IT math.OC stat.ML
|
We study the sample complexity of finding an $\varepsilon$-optimal policy in
average-reward Markov Decision Processes (MDPs) with a generative model. The
minimax optimal span-based complexity of $\widetilde{O}(SAH/\varepsilon^2)$,
where $H$ is the span of the optimal bias function, has only been achievable
with prior knowledge of the value of $H$. Prior-knowledge-free algorithms have
been the objective of intensive research, but several natural approaches
provably fail to achieve this goal. We resolve this problem, developing the
first algorithms matching the optimal span-based complexity without $H$
knowledge, both when the dataset size is fixed and when the suboptimality level
$\varepsilon$ is fixed. Our main technique combines the discounted reduction
approach with a method for automatically tuning the effective horizon based on
empirical confidence intervals or lower bounds on performance, which we term
horizon calibration. We also develop an empirical span penalization approach,
inspired by sample variance penalization, which satisfies an oracle inequality
performance guarantee. In particular this algorithm can outperform the minimax
complexity in benign settings such as when there exist near-optimal policies
with span much smaller than $H$.
|
2502.11239
|
Towards identifying possible fault-tolerant advantage of quantum linear
system algorithms in terms of space, time and energy
|
quant-ph cs.AI cs.LG math.OC
|
Quantum computing, a prominent non-Von Neumann paradigm beyond Moore's law,
can offer superpolynomial speedups for certain problems. Yet its advantages in
efficiency for tasks like machine learning remain under investigation, and
quantum noise complicates resource estimations and classical comparisons. We
provide a detailed estimation of space, time, and energy resources for
fault-tolerant superconducting devices running the Harrow-Hassidim-Lloyd (HHL)
algorithm, a quantum linear system solver relevant to linear algebra and
machine learning. Excluding memory and data transfer, possible quantum
advantages over the classical conjugate gradient method could emerge at $N
\approx 2^{33} \sim 2^{48}$ or even lower, requiring ${O}(10^5)$ physical
qubits, ${O}(10^{12}\sim10^{13})$ Joules, and ${O}(10^6)$ seconds under surface
code fault-tolerance with three types of magic state distillation (15-1,
116-12, 225-1). Key parameters include the condition number and sparsity
$\kappa, s \approx {O}(10\sim100)$, precision $\epsilon \sim 0.01$, and physical
error rate $10^{-5}$. Our resource estimator adjusts $N, \kappa, s, \epsilon$,
providing a map of quantum-classical boundaries and revealing where a practical
quantum advantage may arise. Our work quantitatively determines how advanced a
fault-tolerant quantum computer must be to achieve possible, significant
benefits on real-world problems.
|
2502.11244
|
Soteria: Language-Specific Functional Parameter Steering for
Multilingual Safety Alignment
|
cs.CL cs.AI
|
Ensuring consistent safety across multiple languages remains a significant
challenge for large language models (LLMs). We introduce Soteria, a lightweight
yet powerful strategy that locates and minimally adjusts the "functional heads"
most responsible for harmful content generation in each language. By altering
only a fraction of parameters, Soteria drastically reduces policy violations
without sacrificing overall model performance, even in low-resource settings.
To rigorously evaluate our approach, we also present XThreatBench, a
specialized multilingual dataset capturing fine-grained harmful behaviors drawn
from real policy guidelines. Experiments with leading open-source LLMs (e.g.,
Llama, Qwen, Mistral) show that Soteria consistently improves safety metrics
across high-, mid-, and low-resource languages. These findings highlight a
promising path toward scalable, linguistically attuned, and ethically aligned
LLMs worldwide.
|
2502.11245
|
Shortcuts and Identifiability in Concept-based Models from a
Neuro-Symbolic Lens
|
cs.LG cs.AI
|
Concept-based Models are neural networks that learn a concept extractor to
map inputs to high-level concepts and an inference layer to translate these
into predictions. Ensuring these modules produce interpretable concepts and
behave reliably in out-of-distribution settings is crucial, yet the conditions for
achieving this remain unclear. We study this problem by establishing a novel
connection between Concept-based Models and reasoning shortcuts (RSs), a common
issue where models achieve high accuracy by learning low-quality concepts, even
when the inference layer is fixed and provided upfront. Specifically, we first
extend RSs to the more complex setting of Concept-based Models and then derive
theoretical conditions for identifying both the concepts and the inference
layer. Our empirical results highlight the impact of reasoning shortcuts and
show that existing methods, even when combined with multiple natural mitigation
strategies, often fail to meet these conditions in practice.
|
2502.11246
|
MemeSense: An Adaptive In-Context Framework for Social Commonsense
Driven Meme Moderation
|
cs.IR cs.CL cs.CY
|
Memes present unique moderation challenges due to their subtle, multimodal
interplay of images, text, and social context. Standard systems relying
predominantly on explicit textual cues often overlook harmful content
camouflaged by irony, symbolism, or cultural references. To address this gap,
we introduce MemeSense, an adaptive in-context learning framework that fuses
social commonsense reasoning with visually and semantically related reference
examples. By encoding crucial task information into a learnable cognitive shift
vector, MemeSense effectively balances lexical, visual, and ethical
considerations, enabling precise yet context-aware meme intervention. Extensive
evaluations on a curated set of implicitly harmful memes demonstrate that
MemeSense substantially outperforms strong baselines, paving the way for safer
online communities. Code and data available at:
https://github.com/sayantan11995/MemeSense
|
2502.11248
|
Prevalence, Sharing Patterns, and Spreaders of Multimodal AI-Generated
Content on X during the 2024 U.S. Presidential Election
|
cs.SI cs.CY
|
While concerns about the risks of AI-generated content (AIGC) to the
integrity of social media discussions have been raised, little is known about
its scale and the actors responsible for its dissemination online. In this
work, we identify and characterize the prevalence, sharing patterns, and
spreaders of AIGC in different modalities, including images and texts.
Analyzing a large-scale dataset from X related to the 2024 U.S. Presidential
Election, we find that approximately 12% of images and 1.4% of texts are deemed
AI-generated. Notably, roughly 3% of text spreaders and 10% of image spreaders
account for 80% of the AI-generated content within their respective modalities.
Superspreaders of AIGC are more likely to be X Premium subscribers with a
right-leaning orientation and exhibit automated behavior. Additionally, AI
image spreaders have a higher proportion of AI-generated content in their
profiles compared to AI text spreaders. This study serves as a first step
toward understanding the role generative AI plays in shaping online
socio-political environments and offers implications for platform governance.
|
2502.11250
|
Uncertainty-Aware Step-wise Verification with Generative Reward Models
|
cs.CL
|
Complex multi-step reasoning tasks, such as solving mathematical problems,
remain challenging for large language models (LLMs). While outcome supervision
is commonly used, process supervision via process reward models (PRMs) provides
intermediate rewards to verify step-wise correctness in solution traces.
However, as proxies for human judgement, PRMs suffer from reliability issues,
including susceptibility to reward hacking. In this work, we propose leveraging
uncertainty quantification (UQ) to enhance the reliability of step-wise
verification with generative reward models for mathematical reasoning tasks. We
introduce CoT Entropy, a novel UQ method that outperforms existing approaches
in quantifying a PRM's uncertainty in step-wise verification. Our results
demonstrate that incorporating uncertainty estimates improves the robustness of
judge-LM PRMs, leading to more reliable verification.
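CoT Entropy is the paper's own method, but the simplest entropy-based UQ baseline over a generative verifier is the Shannon entropy of its sampled step verdicts; a hedged sketch (sampling the verdicts from the judge-LM is assumed to happen elsewhere):

```python
import math
from collections import Counter

def verdict_entropy(verdicts):
    """Shannon entropy (in nats) of the empirical distribution over
    sampled verifier verdicts for one reasoning step. Zero entropy
    means the verifier is unanimous; high entropy flags steps whose
    verification is unreliable and should be trusted less."""
    counts = Counter(verdicts)
    total = len(verdicts)
    return -sum((c / total) * math.log(c / total) for c in counts.values())
```

A unanimous set of verdicts scores 0, while an even correct/incorrect split scores ln 2, the maximum for a binary verdict.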
|
2502.11251
|
Explaining Necessary Truths
|
cs.AI cs.CC math.HO q-bio.NC
|
Knowing the truth is rarely enough -- we also seek out reasons why the fact
is true. While much is known about how we explain contingent truths, we
understand less about how we explain facts, such as those in mathematics, that
are true as a matter of logical necessity. We present a framework, based in
computational complexity, where explanations for deductive truths co-emerge
with discoveries of simplifying steps during the search process. When such
structures are missing, we revert, in turn, to error-based reasons, where a
(corrected) mistake can serve as fictitious, but explanatory,
contingency-cause: not making the mistake serves as a reason why the truth
takes the form it does. We simulate human subjects using GPT-4o, presenting
them with SAT puzzles of varying complexity and reasonableness, validating our
theory and showing how its predictions can be tested in future human studies.
|
2502.11256
|
Unveiling Environmental Impacts of Large Language Model Serving: A
Functional Unit View
|
cs.LG cs.AR cs.CL
|
Large language models (LLMs) offer powerful capabilities but come with
significant environmental costs, particularly in carbon emissions. Existing
studies benchmark these emissions but lack a standardized basis for comparison
across models. To address this, we introduce the concept of a functional unit
(FU) and develop FUEL, the first FU-based framework for evaluating LLM
serving's environmental impact. Through case studies on model size,
quantization, and hardware, we uncover key trade-offs in sustainability. Our
findings highlight the potential for reducing carbon emissions by optimizing
model selection, deployment strategies, and hardware choices, paving the way
for more sustainable AI infrastructure.
|
2502.11258
|
Leveraging Conditional Mutual Information to Improve Large Language
Model Fine-Tuning For Classification
|
cs.CL
|
Although large language models (LLMs) have demonstrated remarkable
capabilities in recent years, the potential of information theory (IT) to
enhance LLM development remains underexplored. This paper introduces the
information theoretic principle of Conditional Mutual Information (CMI) to LLM
fine-tuning for classification tasks, exploring its promise in two main ways:
minimizing CMI to improve a model's standalone performance and maximizing CMI
to enhance knowledge distillation (KD) for more capable student models. To
apply CMI in LLM fine-tuning, we adapt the recently proposed CMI-constrained
deep learning framework, which was initially developed for image
classification, with some modification. By minimizing CMI during LLM
fine-tuning, we achieve superior performance gains on 6 of 8 GLUE
classification tasks compared to BERT. Additionally, maximizing CMI during the
KD process results in significant performance improvements in 6 of 8 GLUE
classification tasks compared to DistilBERT. These findings demonstrate CMI's
adaptability for optimizing both standalone LLMs and student models, showcasing
its potential as a robust framework for advancing LLM fine-tuning. Our work
bridges the gap between information theory and LLM development, offering new
insights for building high-performing language models.
|
2502.11259
|
Exploiting network optimization stability for enhanced PET image
denoising using deep image prior
|
physics.med-ph cs.CV
|
PET is affected by statistical noise due to constraints on tracer dose and
scan duration, impacting both diagnostic performance and quantitative accuracy.
While deep learning (DL)-based PET denoising methods have been used to improve
image quality, they may introduce over-smoothing, compromising quantitative
accuracy. We propose a method for making a DL solution more reliable and apply
it to the conditional deep image prior (DIP). We introduce the idea of
stability information in the optimization process of conditional DIP, enabling
the identification of unstable regions within the network's optimization
trajectory. Our method incorporates a stability map derived from multiple
intermediate network outputs at different optimization steps. The final
denoised image is then obtained by computing a linear combination of the DIP
output and the original reconstructed image, weighted by the stability map.
Our method effectively reduces noise while preserving small
structure details in brain FDG images. Results demonstrated that our approach
outperformed existing methods in peak-to-valley ratio and noise suppression
across various low-dose levels. Region-of-interest analysis confirmed that the
proposed method maintains quantitative accuracy without introducing under- or
over-estimation. We applied our method to full-dose PET data to assess its
impact on image quality. The results revealed that the proposed method
significantly reduced background noise while preserving the peak-to-valley
ratio at a level comparable to that of unfiltered full-dose PET images. The
proposed method introduces a robust approach to DL-based PET denoising,
enhancing its reliability and preserving quantitative accuracy. This strategy
has the potential to advance performance in high-sensitivity PET scanners,
demonstrating that DL can extend PET imaging capabilities beyond low-dose
applications.
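The blending step described above can be sketched as follows. The stability map here is a hypothetical construction from the voxel-wise variability of intermediate DIP outputs, since the abstract does not give the exact formula:

```python
import numpy as np

def stability_weighted_blend(dip_outputs, recon):
    """Blend a conditional-DIP output with the original reconstruction.

    `dip_outputs`: array of shape (n_steps, H, W) holding intermediate DIP
    outputs taken at several optimization steps; their voxel-wise variability
    serves as a (hypothetical) instability measure. `recon` is the original
    reconstructed image of shape (H, W).
    """
    # High variance across steps -> unstable voxel -> trust the original more.
    variability = np.std(dip_outputs, axis=0)
    stability = 1.0 / (1.0 + variability / (variability.mean() + 1e-12))
    dip_final = dip_outputs[-1]
    return stability * dip_final + (1.0 - stability) * recon

rng = np.random.default_rng(0)
outs = rng.normal(1.0, 0.05, size=(5, 8, 8))
recon = np.ones((8, 8))
blended = stability_weighted_blend(outs, recon)
print(blended.shape)  # (8, 8)
```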
|
2502.11260
|
Scalable Multi-Agent Offline Reinforcement Learning and the Role of
Information
|
cs.LG
|
Offline Reinforcement Learning (RL) focuses on learning policies solely from
a batch of previously collected data, offering the potential to leverage such
datasets effectively without the need for costly or risky active exploration.
While recent advances in Offline Multi-Agent RL (MARL) have shown promise, most
existing methods either rely on large datasets jointly collected by all agents
or agent-specific datasets collected independently. The former approach ensures
strong performance but raises scalability concerns, while the latter emphasizes
scalability at the expense of performance guarantees. In this work, we propose
a novel scalable routine for both dataset collection and offline learning.
Agents first collect diverse datasets coherently with a pre-specified
information-sharing network and subsequently learn coherent localized policies
without requiring either full observability or falling back to complete
decentralization. We theoretically demonstrate that this structured approach
allows a multi-agent extension of the seminal Fitted Q-Iteration (FQI)
algorithm to globally converge, in high probability, to near-optimal policies.
The convergence is subject to error terms that depend on the informativeness of
the shared information. Furthermore, we show how this approach allows to bound
the inherent error of the supervised-learning phase of FQI with the mutual
information between shared and unshared information. Our algorithm, SCAlable
Multi-agent FQI (SCAM-FQI), is then evaluated on a distributed decision-making
problem. The empirical results align with our theoretical findings, supporting
the effectiveness of SCAM-FQI in achieving a balance between scalability and
policy performance.
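SCAM-FQI itself is a multi-agent algorithm with information sharing, but its backbone, Fitted Q-Iteration, can be illustrated in a minimal single-agent tabular form. All names and the toy batch below are illustrative, not the paper's setup:

```python
import numpy as np

def fitted_q_iteration(batch, n_actions, n_iters=50, gamma=0.9):
    """Tabular Fitted Q-Iteration on a fixed batch of transitions.

    `batch` is a list of (s, a, r, s') tuples with integer states/actions;
    each iteration regresses Q onto the one-step Bellman targets computed
    from the batch (here the "regression" is an exact tabular average).
    """
    states = {s for s, _, _, s2 in batch} | {s2 for _, _, _, s2 in batch}
    Q = {s: np.zeros(n_actions) for s in states}
    for _ in range(n_iters):
        targets = {}
        for s, a, r, s2 in batch:
            targets.setdefault((s, a), []).append(r + gamma * Q[s2].max())
        newQ = {s: Q[s].copy() for s in states}
        for (s, a), ys in targets.items():
            newQ[s][a] = np.mean(ys)  # least-squares fit, tabular case
        Q = newQ
    return Q

# Two-state chain: action 1 in state 0 moves to the rewarding state 1.
batch = [(0, 0, 0.0, 0), (0, 1, 0.0, 1), (1, 0, 1.0, 1)]
Q = fitted_q_iteration(batch, n_actions=2)
print(Q[0][1] > Q[0][0])  # True: moving toward state 1 is preferred
```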
|
2502.11262
|
Generating Skyline Datasets for Data Science Models
|
cs.DB cs.AI
|
Preparing high-quality datasets required by various data-driven AI and
machine learning models has become a cornerstone task in data-driven analysis.
Conventional data discovery methods typically integrate datasets towards a
single pre-defined quality measure that may lead to bias for downstream tasks.
This paper introduces MODis, a framework that discovers datasets by optimizing
multiple user-defined, model-performance measures. Given a set of data sources
and a model, MODis selects and integrates data sources into a skyline dataset,
over which the model is expected to have the desired performance in all the
performance measures. We formulate MODis as a multi-goal finite state
transducer, and derive three feasible algorithms to generate skyline datasets.
Our first algorithm adopts a "reduce-from-universal" strategy that starts with
a universal schema and iteratively prunes unpromising data. Our second
algorithm further reduces the cost with a bi-directional strategy that
interleaves data augmentation and reduction. We also introduce a
diversification algorithm to mitigate the bias in skyline datasets. We
experimentally verify the efficiency and effectiveness of our skyline data
discovery algorithms, and showcase their applications in optimizing data
science pipelines.
|
2502.11265
|
Towards Automatic Identification of Missing Tissues using a
Geometric-Learning Correspondence Model
|
cs.CV physics.med-ph
|
Missing tissue presents a significant challenge for dose mapping, e.g., in the
reirradiation setting. We propose a pipeline to identify missing tissue on
intra-patient structure meshes using a previously trained geometric-learning
correspondence model. For our application, we relied on the prediction
discrepancies between forward and backward correspondences of the input meshes,
quantified using a correspondence-based Inverse Consistency Error (cICE). We
optimised the threshold applied to cICE to identify missing points in a dataset
of 35 simulated mandible resections. Our identified threshold, 5.5 mm, produced
a balanced accuracy score of 0.883 in the training data, using an ensemble
approach. This pipeline produced plausible results for a real case where ~25%
of the mandible was removed after a surgical intervention. The pipeline,
however, failed on a more extreme case where ~50% of the mandible was removed.
This is the first time geometric-learning modelling is proposed to identify
missing points in corresponding anatomy.
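A minimal sketch of the correspondence-based Inverse Consistency Error described above, assuming `fwd` and `bwd` stand in for the trained forward and backward correspondence maps (the toy maps and drift below are hypothetical):

```python
import numpy as np

def inverse_consistency_error(fwd, bwd, source_points):
    """Correspondence-based inverse consistency error (cICE), sketched.

    `fwd` maps source points to the target mesh, `bwd` maps them back;
    points whose round trip drifts beyond a threshold (5.5 mm in the paper)
    are flagged as candidates for missing tissue.
    """
    round_trip = bwd(fwd(source_points))
    return np.linalg.norm(round_trip - source_points, axis=1)

# Toy example: an identity round trip except for one displaced point.
pts = np.zeros((3, 3))
def fwd(p): return p
def bwd(p):
    q = p.copy()
    q[2] += np.array([6.0, 0.0, 0.0])  # hypothetical 6 mm drift
    return q

cice = inverse_consistency_error(fwd, bwd, pts)
print(cice > 5.5)  # only the third point exceeds the 5.5 mm threshold
```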
|
2502.11266
|
The Shrinking Landscape of Linguistic Diversity in the Age of Large
Language Models
|
cs.CL
|
Language is far more than a communication tool. A wealth of information -
including but not limited to the identities, psychological states, and social
contexts of its users - can be gleaned through linguistic markers, and such
insights are routinely leveraged across diverse fields ranging from product
development and marketing to healthcare. In four studies utilizing experimental
and observational methods, we demonstrate that the widespread adoption of large
language models (LLMs) as writing assistants is linked to notable declines in
linguistic diversity and may interfere with the societal and psychological
insights language provides. We show that while the core content of texts is
retained when LLMs polish and rewrite texts, not only do they homogenize
writing styles, but they also alter stylistic elements in a way that
selectively amplifies certain dominant characteristics or biases while
suppressing others - emphasizing conformity over individuality. By varying
LLMs, prompts, classifiers, and contexts, we show that these trends are robust
and consistent. Our findings highlight a wide array of risks associated with
linguistic homogenization, including compromised diagnostic processes and
personalization efforts, the exacerbation of existing divides and barriers to
equity in settings like personnel selection where language plays a critical
role in assessing candidates' qualifications, communication skills, and
cultural fit, and the undermining of efforts for cultural preservation.
|
2502.11267
|
Prompting in the Dark: Assessing Human Performance in Prompt Engineering
for Data Labeling When Gold Labels Are Absent
|
cs.HC cs.AI cs.CL cs.LG
|
Millions of users prompt large language models (LLMs) for various tasks, but
how good are people at prompt engineering? Do users actually get closer to
their desired outcome over multiple iterations of their prompts? These
questions are crucial when no gold-standard labels are available to measure
progress. This paper investigates a scenario in LLM-powered data labeling,
"prompting in the dark," where users iteratively prompt LLMs to label data
without using manually-labeled benchmarks. We developed PromptingSheet, a
Google Sheets add-on that enables users to compose, revise, and iteratively
label data through spreadsheets. Through a study with 20 participants, we found
that prompting in the dark was highly unreliable: only 9 participants improved
labeling accuracy after four or more iterations. Automated prompt optimization
tools like DSPy also struggled when few gold labels were available. Our
findings highlight the importance of gold labels and the needs, as well as the
risks, of automated support in human prompt engineering, providing insights for
future tool design.
|
2502.11268
|
Improved Unbiased Watermark for Large Language Models
|
cs.CL
|
As artificial intelligence surpasses human capabilities in text generation,
the necessity to authenticate the origins of AI-generated content has become
paramount. Unbiased watermarks offer a powerful solution by embedding
statistical signals into language model-generated text without distorting the
quality. In this paper, we introduce MCmark, a family of unbiased,
Multi-Channel-based watermarks. MCmark works by partitioning the model's
vocabulary into segments and promoting token probabilities within a selected
segment based on a watermark key. We demonstrate that MCmark not only preserves
the original distribution of the language model but also offers significant
improvements in detectability and robustness over existing unbiased watermarks.
Our experiments with widely-used language models demonstrate an improvement in
detectability of over 10% using MCmark, compared to existing state-of-the-art
unbiased watermarks. This advancement underscores MCmark's potential in
enhancing the practical application of watermarking in AI-generated texts.
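The segment-selection-and-promotion idea can be sketched as below. Note this is an illustrative multi-channel-style scheme, not the exact MCmark construction; the paper's unbiasedness guarantee requires a more careful design than a plain logit boost:

```python
import hashlib
import numpy as np

def mcmark_like_promote(logits, context_ids, key, n_segments=4, delta=2.0):
    """Promote one vocabulary segment selected by a keyed hash of the context.

    The vocabulary is split into `n_segments` contiguous segments, and the
    logits of the segment picked by hash(key, context) are boosted by `delta`
    before sampling. A detector with the key can re-derive the segment and
    test whether generated tokens fall into it more often than chance.
    """
    vocab = len(logits)
    h = hashlib.sha256((key + ",".join(map(str, context_ids))).encode())
    seg = int.from_bytes(h.digest()[:4], "big") % n_segments
    bounds = np.linspace(0, vocab, n_segments + 1).astype(int)
    boosted = logits.copy()
    boosted[bounds[seg]:bounds[seg + 1]] += delta
    return boosted, seg

logits = np.zeros(8)
boosted, seg = mcmark_like_promote(logits, [3, 1, 4], key="secret")
print(boosted.sum())  # 4.0: two of the eight tokens received the +2 boost
```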
|
2502.11269
|
Unlocking the Potential of Generative AI through Neuro-Symbolic
Architectures: Benefits and Limitations
|
cs.AI cs.LG cs.SC
|
Neuro-symbolic artificial intelligence (NSAI) represents a transformative
approach in artificial intelligence (AI) by combining deep learning's ability
to handle large-scale and unstructured data with the structured reasoning of
symbolic methods. By leveraging their complementary strengths, NSAI enhances
generalization, reasoning, and scalability while addressing key challenges such
as transparency and data efficiency. This paper systematically studies diverse
NSAI architectures, highlighting their unique approaches to integrating neural
and symbolic components. It examines the alignment of contemporary AI
techniques such as retrieval-augmented generation, graph neural networks,
reinforcement learning, and multi-agent systems with NSAI paradigms. This study
then evaluates these architectures against a comprehensive set of criteria,
including generalization, reasoning capabilities, transferability, and
interpretability, therefore providing a comparative analysis of their
respective strengths and limitations. Notably, the Neuro > Symbolic < Neuro
model consistently outperforms its counterparts across all evaluation metrics.
This result aligns with state-of-the-art research that highlights the efficacy
of such architectures in harnessing advanced technologies like multi-agent
systems.
|
2502.11271
|
OctoTools: An Agentic Framework with Extensible Tools for Complex
Reasoning
|
cs.LG cs.CL cs.CV cs.MA
|
Solving complex reasoning tasks may involve visual understanding, domain
knowledge retrieval, numerical calculation, and multi-step reasoning. Existing
methods augment large language models (LLMs) with external tools but are
restricted to specialized domains, limited tool types, or require additional
training data. In this paper, we introduce OctoTools, a training-free,
user-friendly, and easily extensible open-source agentic framework designed to
tackle complex reasoning across diverse domains. OctoTools introduces
standardized tool cards to encapsulate tool functionality, a planner for both
high-level and low-level planning, and an executor to carry out tool usage. We
validate OctoTools' generality across 16 diverse tasks (including MathVista,
MMLU-Pro, MedQA, and GAIA-Text), achieving substantial average accuracy gains
of 9.3% over GPT-4o. Furthermore, OctoTools outperforms AutoGen, GPT-Functions
and LangChain by up to 10.6% when given the same set of tools. Through
comprehensive analysis and ablations, OctoTools demonstrates advantages in task
planning, effective tool usage, and multi-step problem solving.
|
2502.11273
|
FairFare: A Tool for Crowdsourcing Rideshare Data to Empower Labor
Organizers
|
cs.HC cs.AI cs.CY
|
Rideshare workers experience unpredictable working conditions due to gig work
platforms' reliance on opaque AI and algorithmic systems. In response to these
challenges, we found that labor organizers want data to help them advocate for
legislation to increase the transparency and accountability of these platforms.
To address this need, we collaborated with a Colorado-based rideshare union to
develop FairFare, a tool that crowdsources and analyzes workers' data to
estimate the take rate -- the percentage of the rider price retained by the
rideshare platform. We deployed FairFare with our partner organization that
collaborated with us in collecting data on 76,000+ trips from 45 drivers over
18 months. During evaluation interviews, organizers reported that FairFare
helped influence the bill language and passage of Colorado Senate Bill 24-75,
calling for greater transparency and data disclosure of platform operations,
and helped create a national narrative. Finally, we reflect on the
complexities of translating quantitative data into policy outcomes, the
nature of community-based audits, and design implications for future
transparency tools.
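The take rate defined above reduces to a one-line computation (the trip figures below are hypothetical):

```python
def take_rate(rider_price, driver_pay):
    """Platform take rate: share of the rider price retained by the platform."""
    return (rider_price - driver_pay) / rider_price

# Hypothetical trip: the rider pays $20.00 and the driver receives $13.00.
print(f"{take_rate(20.00, 13.00):.0%}")  # 35%
```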
|
2502.11275
|
Cuckoo: An IE Free Rider Hatched by Massive Nutrition in LLM's Nest
|
cs.CL
|
Massive high-quality data, both pre-training raw texts and post-training
annotations, have been carefully prepared to incubate advanced large language
models (LLMs). In contrast, for information extraction (IE), pre-training data,
such as BIO-tagged sequences, are hard to scale up. We show that IE models can
act as free riders on LLM resources by reframing next-token \emph{prediction}
into \emph{extraction} for tokens already present in the context. Specifically,
our proposed next tokens extraction (NTE) paradigm learns a versatile IE model,
\emph{Cuckoo}, with 102.6M extractive data converted from LLM's pre-training
and post-training data. Under the few-shot setting, Cuckoo adapts effectively
to traditional and complex instruction-following IE with better performance
than existing pre-trained IE models. As a free rider, Cuckoo can naturally
evolve with the ongoing advancements in LLM data preparation, benefiting from
improvements in LLM training pipelines without additional manual effort.
|
2502.11276
|
The Rotary Position Embedding May Cause Dimension Inefficiency in
Attention Heads for Long-Distance Retrieval
|
cs.CL cs.LG
|
The Rotary Position Embedding (RoPE) is widely used in the attention heads of
many large language models (LLMs). It rotates dimensions in the query and the
key vectors by different angles according to their positions in the input
sequence. For long context modeling, the range of positions may vary a lot, and
thus RoPE rotates some dimensions by a great range of angles. We hypothesize
that the wide range of rotation angles may prevent LLMs from utilizing those
dimensions. To validate this hypothesis, we present a controlled experiment
showing that applying RoPE causes low utility of certain dimensions. Our
analyses on three LLMs also indicate that these dimensions do not help LLMs do
long-context question answering.
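The position-dependent rotation described above is the standard RoPE construction, sketched here for a single head vector. The base 10000 is the conventional default, assumed rather than stated in the abstract:

```python
import numpy as np

def rope_rotate(x, position, base=10000.0):
    """Apply Rotary Position Embedding to a single head vector.

    Dimension pairs (2i, 2i+1) are rotated by angle position * base**(-2i/d);
    low-index pairs rotate fast and high-index pairs slowly, so long contexts
    sweep the fast pairs through a very wide range of angles.
    """
    d = x.shape[-1]
    half = d // 2
    theta = position * base ** (-np.arange(half) * 2.0 / d)
    cos, sin = np.cos(theta), np.sin(theta)
    x1, x2 = x[0::2], x[1::2]
    out = np.empty_like(x)
    out[0::2] = x1 * cos - x2 * sin
    out[1::2] = x1 * sin + x2 * cos
    return out

q = np.ones(8)
# The rotation preserves norms; only relative angles between q and k change.
print(np.allclose(np.linalg.norm(rope_rotate(q, 512)), np.linalg.norm(q)))  # True
```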
|
2502.11278
|
Reducing Computational Complexity of Rigidity-Based UAV Trajectory
Optimization for Real-Time Cooperative Target Localization
|
eess.SY cs.SY
|
Accurate and swift localization of the target is crucial in emergencies.
However, accurate position data of a target mobile device, typically obtained
from global navigation satellite systems (GNSS), cellular networks, or WiFi,
may not always be accessible to first responders. For instance, 1) accuracy and
availability can be limited in challenging signal reception environments, and
2) in regions where emergency location services are not mandatory, certain
mobile devices may not transmit their location during emergencies. As an
alternative localization method, a network of unmanned aerial vehicles (UAVs)
can be employed to passively locate targets by collecting radio frequency (RF)
signal measurements, such as received signal strength (RSS). In these
situations, UAV trajectories play a critical role in localization performance,
influencing both accuracy and search time. Previous studies optimized UAV
trajectories using the determinant of the Fisher information matrix (FIM), but
its performance declines under unfavorable geometric conditions, such as when
UAVs start from a single base, leading to position ambiguity. To address this,
our prior work introduced a rigidity-based approach, which improved the search
time compared to FIM-based methods in our simulation case. However, the high
computational cost of rigidity-based optimization, primarily due to singular
value decomposition (SVD), limits its practicality. In this paper, we apply
techniques to reduce computational complexity, including randomized SVD,
smooth SVD, and vertex pruning.
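Randomized SVD, one of the complexity-reduction techniques named above, can be sketched in the standard Halko-style form. This is the generic method, not the paper's specific variant:

```python
import numpy as np

def randomized_svd(A, rank, n_oversample=5, seed=0):
    """Halko-style randomized SVD: project onto a random low-dimensional
    range, then take an exact SVD of the small projected matrix."""
    rng = np.random.default_rng(seed)
    omega = rng.standard_normal((A.shape[1], rank + n_oversample))
    Q, _ = np.linalg.qr(A @ omega)          # approximate range of A
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank]

# A rank-2 matrix is recovered (up to sign) by a rank-2 randomized SVD.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))
U, s, Vt = randomized_svd(A, rank=2)
print(np.allclose(U * s @ Vt, A))  # True
```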
|
2502.11279
|
Neural Operators for Stochastic Modeling of Nonlinear Structural System
Response to Natural Hazards
|
cs.LG
|
Traditionally, neural networks have been employed to learn the mapping
between finite-dimensional Euclidean spaces. However, recent research has
opened up new horizons, focusing on the utilization of deep neural networks to
learn operators capable of mapping infinite-dimensional function spaces. In
this work, we employ two state-of-the-art neural operators, the deep operator
network (DeepONet) and the Fourier neural operator (FNO) for the prediction of
the nonlinear time history response of structural systems exposed to natural
hazards, such as earthquakes and wind. Specifically, we propose two
architectures, a self-adaptive FNO and a Fast Fourier Transform-based DeepONet
(DeepFNOnet), where we employ an FNO beyond the DeepONet to learn the
discrepancy between the ground truth and the solution predicted by the
DeepONet. To demonstrate the efficiency and applicability of the architectures,
two problems are considered. In the first, we use the proposed model to predict
the seismic nonlinear dynamic response of a six-story shear building subject to
stochastic ground motions. In the second problem, we employ the operators to
predict the wind-induced nonlinear dynamic response of a high-rise building
while explicitly accounting for the stochastic nature of the wind excitation.
In both cases, the trained metamodels achieve high accuracy while being orders
of magnitude faster than their corresponding high-fidelity models.
|
2502.11284
|
Balancing the Budget: Understanding Trade-offs Between Supervised and
Preference-Based Finetuning
|
cs.LG
|
Post-training of Large Language Models often involves a pipeline of
Supervised Finetuning (SFT) followed by Preference Finetuning (PFT) using
methods like Direct Preference Optimization. Both stages require annotated data
that are very different in structure and costs. We study how to optimally
allocate a fixed training data budget between the two stages, through extensive
experiments spanning four diverse tasks, multiple model sizes and various data
annotation costs. Our findings reveal that just SFT on the base model dominates
performance in low-data regimes ($<1,000$ annotated examples). With larger
data budgets, we observe that a combination of SFT and PFT, often with an
increasing portion allocated to preference data, yields optimal performance.
However, completely eliminating SFT and running PFT directly on the base model
yields suboptimal performance, a failure we describe as the cold-start problem
on tasks like mathematics. We observe that this is due to the
distribution shift arising from using DPO directly on the base model to elicit
step-by-step reasoning. This limitation can be effectively addressed by
allocating even a small portion ($<10$%) of the budget to SFT first, resulting
in performance improvements of $15-20$% on analytical benchmarks like GSM8k.
These results provide actionable insights for researchers and practitioners
optimizing model development under budget constraints, where high-quality data
curation often represents a significant portion of the total costs of model
development.
|
2502.11287
|
MC-BEVRO: Multi-Camera Bird Eye View Road Occupancy Detection for
Traffic Monitoring
|
cs.CV
|
Single camera 3D perception for traffic monitoring faces significant
challenges due to occlusion and limited field of view. Moreover, fusing
information from multiple cameras at the image feature level is difficult
because of different view angles. Further, the necessity for practical
implementation and compatibility with existing traffic infrastructure compounds
these challenges. To address these issues, this paper introduces a novel
Bird's-Eye-View road occupancy detection framework that leverages multiple
roadside cameras to overcome the aforementioned limitations. To facilitate the
framework's development and evaluation, a synthetic dataset featuring diverse
scenes and varying camera configurations is generated using the CARLA
simulator. A late fusion and three early fusion methods were implemented within
the proposed framework, with performance further enhanced by integrating
backgrounds. Extensive evaluations were conducted to analyze the impact of
multi-camera inputs and varying BEV occupancy map sizes on model performance.
Additionally, a real-world data collection pipeline was developed to assess the
model's ability to generalize to real-world environments. The sim-to-real
capabilities of the model were evaluated using zero-shot and few-shot
fine-tuning, demonstrating its potential for practical application. This
research aims to advance perception systems in traffic monitoring, contributing
to improved traffic management, operational efficiency, and road safety.
|
2502.11291
|
Dialogue-based Explanations for Logical Reasoning using Structured
Argumentation
|
cs.AI cs.DB cs.HC cs.LO
|
The problem of explaining inconsistency-tolerant reasoning in knowledge bases
(KBs) is a prominent topic in Artificial Intelligence (AI). While there is some
work on this problem, the explanations provided by existing approaches often
lack critical information or fail to be expressive enough for non-binary
conflicts. In this paper, we identify structural weaknesses of the
state-of-the-art and propose a generic argumentation-based approach to address
these problems. This approach is defined for logics involving reasoning with
maximal consistent subsets and shows how any such logic can be translated to
argumentation. Our work provides dialogue models as dialectic-proof procedures
to compute and explain a query answer wrt inconsistency-tolerant semantics.
This allows us to construct dialectical proof trees as explanations, which are
more expressive and arguably more intuitive than existing explanation
formalisms.
|
2502.11295
|
Game-Of-Goals: Using adversarial games to achieve strategic resilience
|
cs.AI cs.GT
|
Our objective in this paper is to develop a machinery that makes a given
organizational strategic plan resilient to the actions of competitor agents
(adverse environmental actions). We assume that we are given a goal tree
representing strategic goals (which can also be seen as business requirements
for a software system), with the assumption that competitor agents behave in a
maximally adversarial fashion (opposing actions against our goals or
sub-goals). We use game tree search methods (such as minimax) to select an
optimal execution strategy at a given point in time, such that it maximizes
our chances of achieving our (high-level) strategic goals. Our machinery helps
us determine which path to follow (strategy selection) to achieve the best end
outcome. This is done by comparing the alternative execution strategies
available to us via an evaluation function. Our evaluation function is based
on the idea that we want to make our execution plans defensible (future-proof)
by selecting execution strategies that make us least vulnerable to adversarial
actions by the competitor agents; i.e., we want to select an execution
strategy that leaves minimal room (or options) for the adversary to cause
impediment or damage to our business goals and plans.
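The minimax selection over a goal tree can be sketched as follows, with leaf values standing in for the evaluation function's scores of end outcomes (the tree and scores are hypothetical):

```python
def minimax(node, maximizing):
    """Plain minimax over a goal tree whose leaves score end outcomes.

    Internal nodes alternate between our strategy choices (maximizing) and
    the adversary's maximally opposing moves (minimizing); the returned value
    is the best outcome we can guarantee against a worst-case competitor.
    """
    if isinstance(node, (int, float)):       # leaf: evaluation of an outcome
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Two candidate strategies; the adversary then picks our worst sub-outcome.
tree = [[3, 5], [8, 1]]   # hypothetical leaf scores
print(minimax(tree, maximizing=True))  # 3: strategy 1 guarantees at least 3
```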
|
2502.11298
|
Integrating Language Models for Enhanced Network State Monitoring in
DRL-Based SFC Provisioning
|
cs.NI cs.AI cs.CL
|
Efficient Service Function Chain (SFC) provisioning and Virtual Network
Function (VNF) placement are critical for enhancing network performance in
modern architectures such as Software-Defined Networking (SDN) and Network
Function Virtualization (NFV). While Deep Reinforcement Learning (DRL) aids
decision-making in dynamic network environments, its reliance on structured
inputs and predefined rules limits adaptability in unforeseen scenarios.
Additionally, incorrect actions by a DRL agent may require numerous training
iterations to correct, potentially reinforcing suboptimal policies and
degrading performance. This paper integrates DRL with Language Models (LMs),
specifically Bidirectional Encoder Representations from Transformers (BERT) and
DistilBERT, to enhance network management. By feeding final VNF allocations
from DRL into the LM, the system can process and respond to queries related to
SFCs, DCs, and VNFs, enabling real-time insights into resource utilization,
bottleneck detection, and future demand planning. The LMs are fine-tuned to our
domain-specific dataset using Low-Rank Adaptation (LoRA). Results show that
BERT outperforms DistilBERT with a lower test loss (0.28 compared to 0.36) and
higher confidence (0.83 compared to 0.74), though BERT requires approximately
46% more processing time.
|
2502.11299
|
Grassroots Platforms with Atomic Transactions: Social Networks,
Cryptocurrencies, and Democratic Federations
|
cs.DC cs.NI cs.SI
|
Grassroots platforms aim to offer an egalitarian alternative to global
platforms -- centralized/autocratic (Facebook etc.) and
decentralized/plutocratic (Bitcoin etc.) alike. Key grassroots platforms
include grassroots social networks, grassroots cryptocurrencies, and grassroots
democratic federations. Previously, grassroots platforms were defined formally
and proven grassroots using unary distributed transition systems, in which each
transition is carried out by a single agent. However, grassroots platforms
cater for a more abstract specification using transactions carried out
atomically by multiple agents, something that cannot be expressed by unary
transition systems. As a result, their original specifications and proofs were
unnecessarily cumbersome and opaque.
Here, we aim to provide a more suitable formal foundation for grassroots
platforms. To do so, we enhance the notion of a distributed transition system
to include atomic transactions and revisit the notion of grassroots platforms
within this new foundation. We present crisp specifications of key grassroots
platforms using atomic transactions: befriending and defriending for grassroots
social networks, coin swaps for grassroots cryptocurrencies, and communities
forming, joining, and leaving a federation for grassroots democratic
federations. We prove a general theorem that a platform specified by atomic
transactions that are so-called interactive is grassroots; show that the atomic
transactions used to specify all three platforms are interactive; and conclude
that the platforms thus specified are indeed grassroots. We thus provide a
better mathematical foundation for grassroots platforms and a solid and clear
starting point from which their implementation can commence.
|
2502.11300
|
CORDIAL: Can Multimodal Large Language Models Effectively Understand
Coherence Relationships?
|
cs.CL cs.AI cs.CV
|
Multimodal Large Language Models (MLLMs) are renowned for their superior
instruction-following and reasoning capabilities across diverse problem
domains. However, existing benchmarks primarily focus on assessing factual and
logical correctness in downstream tasks, with limited emphasis on evaluating
MLLMs' ability to interpret pragmatic cues and intermodal relationships. To
address this gap, we assess the competency of MLLMs in performing Multimodal
Discourse Analysis (MDA) using Coherence Relations. Our benchmark, CORDIAL,
encompasses a broad spectrum of Coherence Relations across 3 different
discourse domains at varying levels of granularity. Through our experiments on
10+ MLLMs employing different prompting strategies, we show that even top
models like Gemini 1.5 Pro and GPT-4o fail to match the performance of simple
classifier-based baselines. This study emphasizes the need to move beyond
similarity-based metrics and adopt a discourse-driven framework for evaluating
MLLMs, providing a more nuanced assessment of their capabilities. The benchmark
and code are available at: https://github.com/aashish2000/CORDIAL.
|
2502.11304
|
Leveraging Multimodal-LLMs Assisted by Instance Segmentation for
Intelligent Traffic Monitoring
|
cs.AI cs.CL cs.CV
|
A robust and efficient traffic monitoring system is essential for smart
cities and Intelligent Transportation Systems (ITS), using sensors and cameras
to track vehicle movements, optimize traffic flow, reduce congestion, enhance
road safety, and enable real-time adaptive traffic control. Traffic monitoring
models must comprehensively understand dynamic urban conditions and provide an
intuitive user interface for effective management. This research leverages the
LLaVA visual grounding multimodal large language model (LLM) for traffic
monitoring tasks on the real-time Quanser Interactive Lab simulation platform,
covering scenarios like intersections, congestion, and collisions. Cameras
placed at multiple urban locations collect real-time images from the
simulation, which are fed into the LLaVA model with queries for analysis. An
instance segmentation model integrated into the cameras highlights key elements
such as vehicles and pedestrians, enhancing training and throughput. The system
achieves 84.3% accuracy in recognizing vehicle locations and 76.4% in
determining steering direction, outperforming traditional models.
|
2502.11305
|
Non-Uniform Memory Sampling in Experience Replay
|
cs.LG
|
Continual learning is the process of training machine learning models on a
sequence of tasks where data distributions change over time. A well-known
obstacle in this setting is catastrophic forgetting, a phenomenon in which a
model drastically loses performance on previously learned tasks when learning
new ones. A popular strategy to alleviate this problem is experience replay, in
which a subset of old samples is stored in a memory buffer and replayed with
new data. Despite continual learning advances focusing on which examples to
store and how to incorporate them into the training loss, most approaches
assume that sampling from this buffer is uniform by default.
We challenge the assumption that uniform sampling is necessarily optimal. We
conduct an experiment in which the memory buffer updates the same way in every
trial, but the replay probability of each stored sample changes between trials
based on different random weight distributions. Specifically, we generate 50
different non-uniform sampling probability weights for each trial and compare
their final accuracy to the uniform sampling baseline. We find that there is
always at least one distribution that significantly outperforms the baseline
across multiple buffer sizes, models, and datasets. These results suggest that
more principled adaptive replay policies could yield further gains. We discuss
how exploiting this insight could inspire new research on non-uniform memory
sampling in continual learning to better mitigate catastrophic forgetting.
The code supporting this study is available at
https://github.com/DentonJC/memory-sampling.
|
2502.11306
|
Smoothing Out Hallucinations: Mitigating LLM Hallucination with Smoothed
Knowledge Distillation
|
cs.CL cs.LG
|
Large language models (LLMs) often suffer from hallucination, generating
factually incorrect or ungrounded content, which limits their reliability in
high-stakes applications. A key factor contributing to hallucination is the use
of hard labels during training, which enforce deterministic supervision,
encourage overconfidence, and disregard the uncertainty inherent in natural
language. To address this, we propose mitigating hallucination through
knowledge distillation (KD), where a teacher model provides smoothed soft
labels to a student model, reducing overconfidence and improving factual
grounding. We apply KD during supervised finetuning on instructional data,
evaluating its effectiveness across LLMs from different families. Experimental
results on summarization benchmarks demonstrate that KD reduces hallucination
compared to standard finetuning while preserving performance on general NLP
tasks. These findings highlight KD as a promising approach for mitigating
hallucination in LLMs and improving model reliability.
|
2502.11307
|
Exploiting Point-Language Models with Dual-Prompts for 3D Anomaly
Detection
|
cs.CV cs.AI
|
Anomaly detection (AD) in 3D point clouds is crucial in a wide range of
industrial applications, especially in various forms of precision
manufacturing. Considering the industrial demand for reliable 3D AD, several
methods have been developed. However, most of these approaches typically
require training separate models for each category, which is memory-intensive
and lacks flexibility. In this paper, we propose a novel Point-Language model
with dual-prompts for 3D ANomaly dEtection (PLANE). The approach leverages
multi-modal prompts to extend the strong generalization capabilities of
pre-trained Point-Language Models (PLMs) to the domain of 3D point cloud AD,
achieving impressive detection performance across multiple categories using a
single model. Specifically, we propose a dual-prompt learning method,
incorporating both text and point cloud prompts. The method utilizes a dynamic
prompt creator module (DPCM) to produce sample-specific dynamic prompts, which
are then integrated with class-specific static prompts for each modality,
effectively driving the PLMs. Additionally, based on the characteristics of
point cloud data, we propose a pseudo 3D anomaly generation method (Ano3D) to
improve the model's detection capabilities in an unsupervised setting.
Experimental results demonstrate that the proposed method, which is under the
multi-class-one-model paradigm, achieves a +8.7%/+17% gain on anomaly detection
and localization performance as compared to the state-of-the-art
one-class-one-model methods for the Anomaly-ShapeNet dataset, and obtains
+4.3%/+4.1% gain for the Real3D-AD dataset. Code will be available upon
publication.
|
2502.11308
|
ALGEN: Few-shot Inversion Attacks on Textual Embeddings using Alignment
and Generation
|
cs.CR cs.AI cs.CL
|
With the growing popularity of Large Language Models (LLMs) and vector
databases, private textual data is increasingly processed and stored as
numerical embeddings. However, recent studies have proven that such embeddings
are vulnerable to inversion attacks, where original text is reconstructed to
reveal sensitive information. Previous research has largely assumed access to
millions of sentences to train attack models, e.g., through data leakage or
nearly unrestricted API access. With our method, a single data point is
sufficient for a partially successful inversion attack. With as little as 1k
data samples, performance reaches an optimum across a range of black-box
encoders, without training on leaked data. We present a Few-shot Textual
Embedding Inversion Attack using ALignment and GENeration (ALGEN), by aligning
victim embeddings to the attack space and using a generative model to
reconstruct text. We find that ALGEN attacks can be effectively transferred
across domains and languages, revealing key information. We further examine a
variety of defense mechanisms against ALGEN, and find that none are effective,
highlighting the vulnerabilities posed by inversion attacks. By significantly
lowering the cost of inversion and proving that embedding spaces can be aligned
through one-step optimization, we establish a new textual embedding inversion
paradigm with broader applications for embedding alignment in NLP.
|
2502.11310
|
Generalized Factor Neural Network Model for High-dimensional Regression
|
stat.ML cs.LG q-fin.ST
|
We tackle the challenges of modeling high-dimensional data sets, particularly
those with latent low-dimensional structures hidden within complex, non-linear,
and noisy relationships. Our approach enables a seamless integration of
concepts from non-parametric regression, factor models, and neural networks for
high-dimensional regression. Our approach introduces PCA and Soft PCA layers,
which can be embedded at any stage of a neural network architecture, allowing
the model to alternate between factor modeling and non-linear transformations.
This flexibility makes our method especially effective for processing
hierarchical compositional data. We explore our techniques, alongside existing
ones, for imposing low-rank structures on neural networks and examine how architectural
design impacts model performance. The effectiveness of our method is
demonstrated through simulation studies, as well as applications to forecasting
future price movements of equity ETF indices and nowcasting with macroeconomic
data.
|