| id | title | categories | abstract |
|---|---|---|---|
2502.10914
|
LLM-driven Knowledge Distillation for Dynamic Text-Attributed Graphs
|
cs.LG
|
Dynamic Text-Attributed Graphs (DyTAGs) have numerous real-world
applications, e.g., social, collaboration, citation, communication, and review
networks. In these networks, nodes and edges often contain text descriptions,
and the graph structure can evolve over time. Future link prediction, edge
classification, relation generation, and other downstream tasks on DyTAGs
require powerful representations that encode structural, temporal, and textual
information. Although graph neural networks (GNNs) excel at handling structured
data, encoding temporal information within dynamic graphs remains a significant
challenge. In this work, we propose LLM-driven Knowledge Distillation for
Dynamic Text-Attributed Graph (LKD4DyTAG) with temporal encoding to address
these challenges. We use a simple, yet effective approach to encode temporal
information in edges so that graph convolution can simultaneously capture both
temporal and structural information in the hidden representations. To leverage
LLM's text processing capabilities for learning richer representations on
DyTAGs, we distill knowledge from LLM-driven edge representations (based on a
neighborhood's text attributes) into spatio-temporal representations using a
lightweight GNN model that encodes temporal and structural information. The
knowledge distillation objective enables the GNN to learn representations
that more effectively encode the available structural, temporal, and textual
information in DyTAGs. We conducted extensive experiments on six real-world
DyTAG datasets to verify the effectiveness of our approach, LKD4DyTAG, on future
link prediction and edge classification tasks. The results show that our
approach significantly improves the performance of downstream tasks compared to
the baseline models.
|
2502.10916
|
An Open-Source Web-Based Tool for Evaluating Open-Source Large Language
Models Leveraging Information Retrieval from Custom Documents
|
cs.CL cs.IR
|
In our work, we present a first-of-its-kind open-source web-based tool
that demonstrates the impact of a user's speech acts during
discourse with conversational agents that leverage open-source large
language models. With this software resource, researchers
and experts can evaluate the performance of various dialogues, visualize the
user's communicative intents, and upload specific documents for the
chat agent to use for information retrieval when responding to user queries.
The context gathered by these models is obtained from a set of extracted
linguistic features, which form the context embeddings of the models.
Although these models show good context understanding based on these
features, there remains a gap in including deeper pragmatic features to
improve the model's comprehension of the query, hence the effort to develop
this web resource, which is able to extract and then inject this overlooked
feature into the encoder-decoder pipeline of the conversational agent. To
demonstrate the effect and impact of the resource, we carried out an experiment
which evaluated the system using two knowledge files for information retrieval,
with two user queries each, across five open-source large language models using
ten standard metrics. Our results showed that larger open-source models
demonstrated improved alignment when the user speech act was included with
their query. The smaller models, in contrast, showed increased perplexity and
mixed performance, indicating struggles in processing queries that
explicitly included speech acts. The results from the analysis using the
developed web resource highlight the potential of speech acts towards enhancing
conversational depths while underscoring the need for model-specific
optimizations to address increased computational costs and response times.
|
2502.10920
|
Do Deepfake Detectors Work in Reality?
|
cs.CV cs.AI
|
Deepfakes, particularly those involving faceswap-based manipulations, have
sparked significant societal concern due to their increasing realism and
potential for misuse. Despite rapid advancements in generative models,
detection methods have not kept pace, creating a critical gap in defense
strategies. This disparity is further amplified by the disconnect between
academic research and real-world applications, which often prioritize different
objectives and evaluation criteria. In this study, we take a pivotal step
toward bridging this gap by presenting a novel observation: the post-processing
step of super-resolution, commonly employed in real-world scenarios,
substantially undermines the effectiveness of existing deepfake detection
methods. To substantiate this claim, we introduce and publish the first
real-world faceswap dataset, collected from popular online faceswap platforms.
We then qualitatively evaluate the performance of state-of-the-art deepfake
detectors on real-world deepfakes, revealing that their accuracy approaches the
level of random guessing. Furthermore, we quantitatively demonstrate the
significant performance degradation caused by common post-processing
techniques. By addressing this overlooked challenge, our study underscores a
critical avenue for enhancing the robustness and practical applicability of
deepfake detection methods in real-world settings.
|
2502.10921
|
Evolving Hate Speech Online: An Adaptive Framework for Detection and
Mitigation
|
cs.CL cs.SI
|
The proliferation of social media platforms has led to an increase in the
spread of hate speech, particularly targeting vulnerable communities.
Unfortunately, existing methods for automatically identifying and blocking
toxic language rely on pre-constructed lexicons, making them reactive rather
than adaptive. As such, these approaches become less effective over time,
especially when new communities are targeted with slurs not included in the
original datasets. To address this issue, we present an adaptive approach that
uses word embeddings to update lexicons and develop a hybrid model that adjusts
to emerging slurs and new linguistic patterns. This approach can effectively
detect toxic language, including intentional spelling mistakes employed by
aggressors to avoid detection. Our hybrid model, which combines BERT with
lexicon-based techniques, achieves an accuracy of 95% for most state-of-the-art
datasets. Our work has significant implications for creating safer online
environments by improving the detection of toxic content and proactively
updating the lexicon. Content Warning: This paper contains examples of hate
speech that may be triggering.
|
2502.10927
|
The underlying structures of self-attention: symmetry, directionality,
and emergent dynamics in Transformer training
|
cs.LG
|
Self-attention is essential to Transformer architectures, yet how information
is embedded in the self-attention matrices and how different objective
functions impact this process remains unclear. We present a mathematical
framework to analyze self-attention matrices by deriving the structures
governing their weight updates. Using this framework, we demonstrate that
bidirectional training induces symmetry in the weight matrices, while
autoregressive training results in directionality and column dominance. Our
theoretical findings are validated across multiple Transformer models -
including ModernBERT, GPT, LLaMA3, and Mistral - and input modalities like
text, vision, and audio. Finally, we apply these insights by showing that
symmetric initialization improves the performance of encoder-only models on
language tasks. This mathematical analysis offers a novel theoretical
perspective on how information is embedded through self-attention, thereby
improving the interpretability of Transformer models.
|
2502.10928
|
Semantic Specialization in MoE Appears with Scale: A Study of DeepSeek
R1 Expert Specialization
|
cs.LG cs.AI cs.CL
|
DeepSeek-R1, the largest open-source Mixture-of-Experts (MoE) model, has
demonstrated reasoning capabilities comparable to proprietary frontier models.
Prior research has explored expert routing in MoE models, but findings suggest
that expert selection is often token-dependent rather than semantically driven.
Given DeepSeek-R1's enhanced reasoning abilities, we investigate whether its
routing mechanism exhibits greater semantic specialization than previous MoE
models. To explore this, we conduct two key experiments: (1) a word sense
disambiguation task, where we examine expert activation patterns for words with
differing senses, and (2) a cognitive reasoning analysis, where we assess
DeepSeek-R1's structured thought process in an interactive task setting of
DiscoveryWorld. We conclude that DeepSeek-R1's routing mechanism is more
semantically aware and that it engages in structured cognitive processes.
|
2502.10930
|
Reduced Order Modeling with Shallow Recurrent Decoder Networks
|
cs.LG math.DS
|
Reduced Order Modeling is of paramount importance for efficiently inferring
high-dimensional spatio-temporal fields in parametric contexts, enabling
computationally tractable parametric analyses, uncertainty quantification and
control. However, conventional dimensionality reduction techniques are
typically limited to known and constant parameters, inefficient for nonlinear
and chaotic dynamics, and uninformed by the actual system behavior. In this
work, we propose sensor-driven SHallow REcurrent Decoder networks for Reduced
Order Modeling (SHRED-ROM). Specifically, we consider the composition of a long
short-term memory network, which encodes the temporal dynamics of limited
sensor data in multiple scenarios, and a shallow decoder, which reconstructs
the corresponding high-dimensional states. SHRED-ROM is a robust decoding-only
strategy that circumvents the numerically unstable approximation of an inverse
which is required by encoding-decoding schemes. To enhance computational
efficiency and memory usage, the full-order state snapshots are reduced by,
e.g., proper orthogonal decomposition, allowing for compressive training of the
networks with minimal hyperparameter tuning. Through applications on chaotic
and nonlinear fluid dynamics, we show that SHRED-ROM (i) accurately
reconstructs the state dynamics for new parameter values starting from limited
fixed or mobile sensors, independently of sensor placement, (ii) can cope with
physical, geometrical, and time-dependent parametric dependencies, while
being agnostic to their actual values, (iii) can accurately estimate unknown
parameters, and (iv) can deal with different data sources, such as
high-fidelity simulations, coupled fields and videos.
|
2502.10931
|
D-CIPHER: Dynamic Collaborative Intelligent Agents with Planning and
Heterogeneous Execution for Enhanced Reasoning in Offensive Security
|
cs.AI cs.CR
|
Large Language Models (LLMs) have been used in cybersecurity in many ways,
including their recent use as intelligent agent systems for autonomous security
analysis. Capture the Flag (CTF) challenges serve as benchmarks for assessing
the automated task-planning abilities of LLM agents across various
cybersecurity skill sets. Early attempts to apply LLMs for solving CTF
challenges relied on single-agent systems, where feedback was restricted to a
single reasoning-action loop. This approach proved inadequate for handling
complex CTF tasks. Drawing inspiration from real-world CTF competitions, where
teams of experts collaborate, we introduce the D-CIPHER multi-agent LLM
framework for collaborative CTF challenge solving. D-CIPHER integrates agents
with distinct roles, enabling dynamic feedback loops to enhance reasoning on
CTF challenges. It introduces the Planner-Executor agent system, consisting of
a Planner agent for overall problem-solving along with multiple heterogeneous
Executor agents for individual tasks, facilitating efficient allocation of
responsibilities among the LLMs. Additionally, D-CIPHER incorporates an
Auto-prompter agent, which improves problem-solving by exploring the challenge
environment and generating a highly relevant initial prompt. We evaluate
D-CIPHER on CTF benchmarks using multiple LLMs and conduct comprehensive
studies to highlight the impact of our enhancements. Our results demonstrate
that the multi-agent D-CIPHER system achieves a significant improvement in
challenges solved, setting a state-of-the-art performance on three benchmarks:
22.0% on NYU CTF Bench, 22.5% on Cybench, and 44.0% on HackTheBox. D-CIPHER is
available at https://github.com/NYU-LLM-CTF/nyuctf_agents as the
nyuctf_multiagent package.
|
2502.10932
|
PPAC Driven Multi-die and Multi-technology Floorplanning
|
eess.SY cs.SY
|
In heterogeneous integration, where different dies may utilize distinct
technologies, floorplanning across multiple dies inherently requires
simultaneous technology selection. This work presents the first systematic
study of multi-die and multi-technology floorplanning. Unlike many conventional
approaches, which are primarily driven by area and wirelength, this study
additionally considers performance, power, and cost, highlighting the impact of
technology selection. A simulated annealing method and a reinforcement learning
technique are developed. Experimental results show that the proposed
techniques significantly outperform a naive baseline approach.
|
2502.10934
|
Fundamental Principles of Linguistic Structure are Not Represented by o3
|
cs.CL
|
A core component of a successful artificial general intelligence would be the
rapid creation and manipulation of grounded compositional abstractions and the
demonstration of expertise in the family of recursive hierarchical syntactic
objects necessary for the creative use of human language. We evaluated the
recently released o3 model (OpenAI; o3-mini-high) and discovered that while it
succeeds on some basic linguistic tests relying on linear, surface statistics
(e.g., the Strawberry Test), it fails to generalize basic phrase structure
rules; it fails with comparative sentences involving semantically illegal
cardinality comparisons ('Escher sentences'); it fails to correctly rate and
explain acceptability dynamics; and it fails to distinguish between
instructions to generate unacceptable semantic vs. unacceptable syntactic
outputs. When tasked with generating simple violations of grammatical rules, it
is seemingly incapable of representing multiple parses to evaluate against
various possible semantic interpretations. In stark contrast to many recent
claims that artificial language models are on the verge of replacing the field
of linguistics, our results suggest not only that deep learning is hitting a
wall with respect to compositionality (Marcus 2022), but that it is hitting [a
[stubbornly [resilient wall]]] that cannot readily be surmounted to reach
human-like compositional reasoning simply through more compute.
|
2502.10937
|
SCALE: Towards Collaborative Content Analysis in Social Science with
Large Language Model Agents and Human Intervention
|
cs.AI cs.CL cs.MA
|
Content analysis breaks down complex and unstructured texts into
theory-informed numerical categories. Particularly, in social science, this
process usually relies on multiple rounds of manual annotation, domain expert
discussion, and rule-based refinement. In this paper, we introduce SCALE, a
novel multi-agent framework that effectively $\underline{\textbf{S}}$imulates
$\underline{\textbf{C}}$ontent $\underline{\textbf{A}}$nalysis via
$\underline{\textbf{L}}$arge language model (LLM)
ag$\underline{\textbf{E}}$nts. SCALE imitates key phases of content analysis,
including text coding, collaborative discussion, and dynamic codebook
evolution, capturing the reflective depth and adaptive discussions of human
researchers. Furthermore, by integrating diverse modes of human intervention,
SCALE is augmented with expert input to further enhance its performance.
Extensive evaluations on real-world datasets demonstrate that SCALE achieves
human-approximated performance across various complex content analysis tasks,
offering innovative potential for future social science research.
|
2502.10938
|
PEA: Enhancing LLM Performance on Computational-Reasoning Tasks
|
cs.AI
|
Large Language Models (LLMs) have exhibited remarkable capabilities across
diverse domains, prompting investigations into their potential as generic
reasoning engines. While recent studies have explored inference-time
computation to enhance model performance on complex problems, current research
lacks a formal framework to characterize the complexity of reasoning tasks.
This study introduces the Predicate-Enumeration-Aggregation (PEA) framework, a
formal approach to describe and solve a class of important reasoning tasks
termed computational reasoning problems. The PEA framework decomposes these
problems into predicate and enumeration components, using LLMs to synthesize
programs based on specified predicates, enumeration, and aggregation rules.
These synthesized programs are then executed to obtain solutions to the
computational tasks. We demonstrate the framework's efficacy on benchmark tasks
including Boolean satisfiability problems, game of $24$, and planning problems.
Empirical evaluation reveals that PEA substantially enhances the performance of
underlying models on benchmark computational problems, yielding an average
accuracy improvement of approximately $50\%$, coupled with increased
efficiency.
|
2502.10940
|
CoLA: Compute-Efficient Pre-Training of LLMs via Low-Rank Activation
|
cs.LG cs.AI
|
Large language models (LLMs) are revolutionizing many science and engineering
fields. However, their huge model sizes impose extremely demanding needs of
computational resources in the pre-training stage. Although low-rank
factorizations can reduce model parameters, their direct application in LLM
pre-training often leads to non-negligible performance loss. To address this
fundamental challenge, we introduce CoLA and its memory-efficient
implementation, CoLA-M. We leverage the low-rank structure observed widely in
model activations, enforcing non-linear transformations between factorized
weight matrices to reduce model size, boost model capacity and training
efficiency. Experiments on LLaMA models with 60 million to 7 billion parameters
show that CoLA reduces the computing cost by $\bf 2\pmb{\times}$ and improves
training throughput by $\bf 1.86\pmb{\times}$ while maintaining full-rank level
performance. CoLA-M further squeezes memory cost without sacrificing
throughput, offering a pre-training approach with collectively superior
parameter, computing, and memory efficiency. The LLMs produced are also $\bf
2\pmb{\times}$ smaller, enabling faster inference with lower memory cost on
resource-constrained platforms.
|
2502.10942
|
Exploring Contextual Flux in Large Language Models: A Novel Approach to
Self-Modulating Semantic Networks
|
cs.CL
|
Self-modulating mechanisms introduce dynamic adaptation capabilities within
language models through contextual realignment strategies that influence token
embedding trajectories across extended sequences. Contextual Flux is explored
as an approach to embedding modulation, integrating an auxiliary gating
mechanism within the self-attention framework to dynamically adjust token
representations based on evolving contextual dependencies. The empirical
analysis evaluates entropy variations, latent space realignments, and coherence
stability to assess the extent to which self-regulation enhances text
generation consistency while preserving generative flexibility. Quantitative
assessments suggest that embedding shifts contribute to more structured
adaptation in long-form sequences, with measured reductions in redundant phrase
repetitions and improvements in thematic retention. Variability in contextual
weight computation affects modulation stability, leading to differing levels of
adaptation across diverse linguistic structures. The computational demands
introduced through real-time embedding reconfiguration are examined in relation
to model scalability, emphasizing the need for optimization strategies in
high-volume generative applications. The findings suggest that while adaptive
embedding updates improve certain aspects of coherence, their impact remains
contingent on model capacity and input complexity.
|
2502.10947
|
The Relationship between No-Regret Learning and Online Conformal
Prediction
|
cs.LG cs.GT stat.ML
|
Existing algorithms for online conformal prediction -- guaranteeing marginal
coverage in adversarial settings -- are variants of online gradient descent
(OGD), but their analyses of worst-case coverage do not follow from the regret
guarantee of OGD. What is the relationship between no-regret learning and
online conformal prediction? We observe that although standard regret
guarantees imply marginal coverage in i.i.d. settings, this connection fails as
soon as we either move to adversarial environments or ask for group conditional
coverage. On the other hand, we show a tight connection between threshold
calibrated coverage and swap-regret in adversarial settings, which extends to
group-conditional (multi-valid) coverage. We also show that algorithms in the
follow-the-perturbed-leader family of no-regret learning algorithms (which
includes online gradient descent) can be used to give group-conditional
coverage guarantees in adversarial settings for arbitrary grouping functions.
Via this connection we analyze and conduct experiments using a multi-group
generalization of the ACI algorithm of Gibbs & Candes [2021]
(arXiv:2106.00170).
|
2502.10949
|
Learning the Exact Time Integration Algorithm for Initial Value Problems
by Randomized Neural Networks
|
math.NA cs.LG cs.NA physics.comp-ph
|
We present a method leveraging extreme learning machine (ELM) type randomized
neural networks (NNs) for learning the exact time integration algorithm for
initial value problems (IVPs). The exact time integration algorithm for
non-autonomous systems can be represented by an algorithmic function in higher
dimensions, which satisfies an associated system of partial differential
equations with corresponding boundary conditions. Our method learns the
algorithmic function by solving this associated system using ELM with a physics
informed approach. The trained ELM network serves as the learned algorithm and
can be used to solve the IVP with arbitrary initial data or step sizes from
some domain. When the right-hand side of the non-autonomous system exhibits a
periodicity with respect to any of its arguments, while the solution itself to
the problem is not periodic, we show that the algorithmic function is either
periodic, or when it is not, satisfies a well-defined relation for different
periods. This property can greatly simplify the algorithm learning in many
problems. We consider explicit and implicit NN formulations, leading to
explicit or implicit time integration algorithms, and discuss how to train the
ELM network by the nonlinear least squares method. Extensive numerical
experiments with benchmark problems, including non-stiff, stiff and chaotic
systems, show that the learned NN algorithm produces highly accurate solutions
in long-time simulations, with its time-marching errors decreasing nearly
exponentially with increasing degrees of freedom in the neural network. We
compare extensively the computational performance (accuracy vs. cost) between
the current NN algorithm and the leading traditional time integration
algorithms. The learned NN algorithm is computationally competitive, markedly
outperforming the traditional algorithms in many problems.
|
2502.10953
|
Empirical evaluation of LLMs in predicting fixes of Configuration bugs
in Smart Home System
|
cs.SE cs.AI
|
This empirical study evaluates the effectiveness of Large Language Models
(LLMs) in predicting fixes for configuration bugs in smart home systems. The
research analyzes three prominent LLMs - GPT-4, GPT-4o (GPT-4 Turbo), and
Claude 3.5 Sonnet - using four distinct prompt designs to assess their ability
to identify appropriate fix strategies and generate correct solutions. The
study utilized a dataset of 129 debugging issues from the Home Assistant
Community, focusing on 21 randomly selected cases for in-depth analysis.
Results demonstrate that GPT-4 and Claude 3.5 Sonnet achieved 80\% accuracy in
strategy prediction when provided with both bug descriptions and original
scripts. GPT-4 exhibited consistent performance across different prompt types,
while GPT-4o showed advantages in speed and cost-effectiveness despite slightly
lower accuracy. The findings reveal that prompt design significantly impacts
model performance, with comprehensive prompts containing both description and
original script yielding the best results. This research provides valuable
insights for improving automated bug fixing in smart home system configurations
and demonstrates the potential of LLMs in addressing configuration-related
challenges.
|
2502.10954
|
Learning to Stop Overthinking at Test Time
|
cs.CV cs.AI cs.LG
|
Test time scaling is currently one of the most active research areas that
shows promise after training time scaling has reached its limits. Deep-thinking
(DT) models are a class of recurrent models that can perform easy-to-hard
generalization by assigning more compute to harder test samples. However, due
to their inability to determine the complexity of a test sample, DT models have
to use a large amount of computation for both easy and hard test samples.
Excessive test time computation is wasteful and can cause the ``overthinking''
problem where more test time computation leads to worse results. In this paper,
we introduce a test time training method for determining the optimal amount of
computation needed for each sample during test time. We also propose
Conv-LiGRU, a novel recurrent architecture for efficient and robust visual
reasoning. Extensive experiments demonstrate that Conv-LiGRU is more stable
than DT, effectively mitigates the ``overthinking'' phenomenon, and achieves
superior accuracy.
|
2502.10955
|
A recurrent vision transformer shows signatures of primate visual
attention
|
cs.CV cs.AI q-bio.NC
|
Attention is fundamental to both biological and artificial intelligence, yet
research on animal attention and AI self-attention remains largely
disconnected. We propose a Recurrent Vision Transformer (Recurrent ViT) that
integrates self-attention with recurrent memory, allowing both current inputs
and stored information to guide attention allocation. Trained solely via sparse
reward feedback on a spatially cued orientation change detection task, a
paradigm used in primate studies, our model exhibits primate-like signatures of
attention, including improved accuracy and faster responses for cued stimuli
that scale with cue validity. Analysis of self-attention maps reveals dynamic
spatial prioritization with reactivation prior to expected changes, and
targeted perturbations produce performance shifts similar to those observed in
primate frontal eye fields and superior colliculus. These findings demonstrate
that incorporating recurrent feedback into self-attention can capture key
aspects of primate visual attention.
|
2502.10956
|
Fine-Tuning Hard-to-Simulate Objectives for Quadruped Locomotion: A Case
Study on Total Power Saving
|
cs.RO
|
Legged locomotion is not just about mobility; it also encompasses crucial
objectives such as energy efficiency, safety, and user experience, which are
vital for real-world applications. However, key factors such as battery power
consumption and stepping noise are often inaccurately modeled or missing in
common simulators, leaving these aspects poorly optimized or unaddressed by
current sim-to-real methods. Hand-designed proxies, such as mechanical power
and foot contact forces, have been used to address these challenges but are
often problem-specific and inaccurate.
In this paper, we propose a data-driven framework for fine-tuning locomotion
policies, targeting these hard-to-simulate objectives. Our framework leverages
real-world data to model these objectives and incorporates the learned model
into simulation for policy improvement. We demonstrate the effectiveness of our
framework on power saving for quadruped locomotion, achieving a significant
24-28\% net reduction in total power consumption from the battery pack at
various speeds. In essence, our approach offers a versatile solution for
optimizing hard-to-simulate objectives in quadruped locomotion, providing an
easy-to-adapt paradigm for continual improvement with real-world knowledge.
Project page: https://hard-to-sim.github.io/.
|
2502.10957
|
Skillful Nowcasting of Convective Clouds With a Cascade Diffusion Model
|
cs.CV physics.ao-ph
|
Accurate nowcasting of convective clouds from satellite imagery is essential
for mitigating the impacts of meteorological disasters, especially in
developing countries and remote regions with limited ground-based observations.
Recent advances in deep learning have shown promise in video prediction;
however, existing models frequently produce blurry results and exhibit reduced
accuracy when forecasting physical fields. Here, we introduce SATcast, a
diffusion model that leverages a cascade architecture and multimodal inputs for
nowcasting cloud fields in satellite imagery. SATcast incorporates physical
fields predicted by FuXi, a deep-learning weather model, alongside past
satellite observations as conditional inputs to generate high-quality future
cloud fields. Through comprehensive evaluation, SATcast outperforms
conventional methods on multiple metrics, demonstrating its superior accuracy
and robustness. Ablation studies underscore the importance of its multimodal
design and the cascade architecture in achieving reliable predictions. Notably,
SATcast maintains predictive skill for up to 24 hours, underscoring its
potential for operational nowcasting applications.
|
2502.10959
|
Revisiting the Design of In-Memory Dynamic Graph Storage
|
cs.DB
|
The effectiveness of in-memory dynamic graph storage (DGS) for supporting
concurrent graph read and write queries is crucial for real-time graph
analytics and updates. Various methods have been proposed, for example, LLAMA,
Aspen, LiveGraph, Teseo, and Sortledton. These approaches differ significantly
in their support for read and write operations, space overhead, and concurrency
control. However, there has been no systematic study to explore the trade-offs
among these dimensions. In this paper, we evaluate the effectiveness of
individual techniques and identify the performance factors affecting these
storage methods by proposing a common abstraction for DGS design and
implementing a generic test framework based on this abstraction. Our findings
highlight several key insights: 1) Existing DGS methods exhibit substantial
space overhead. For example, Aspen consumes 3.3-10.8x more memory than CSR,
while the optimal fine-grained methods consume 4.1-8.9x more memory than CSR,
indicating a significant memory overhead. 2) Existing methods often overlook
the memory-access impact of modern architectures, leading to performance
degradation compared to continuous storage methods. 3) Fine-grained concurrency
control methods, in particular, suffer from severe efficiency and space issues
due to maintaining versions and performing checks for each neighbor. These
methods also experience significant contention on high-degree vertices. Our
systematic study reveals these performance bottlenecks and outlines future
directions to improve DGS for real-time graph analytics.
|
2502.10961
|
Graders should cheat: privileged information enables expert-level
automated evaluations
|
cs.LG cs.AI
|
Auto-evaluating language models (LMs), i.e., using a grader LM to evaluate
the candidate LM, is an appealing way to accelerate evaluation and reduce its
associated cost. But this presents a paradox: how can we trust the
grader LM, which is presumably weaker than the candidate LM, to assess problems
that are beyond the frontier of the capabilities of either model or both? For
instance, today's LMs struggle on graduate-level physics and Olympiad-level
math, making them unreliable graders in these domains.
We show that providing privileged information -- such as ground-truth
solutions or problem-specific guidelines -- improves automated evaluations on
such frontier problems. This approach offers two key advantages. First, it
expands the range of problems to which LM graders apply. Specifically, weaker
models can now rate the predictions of stronger models. Second, privileged
information can be used to devise easier variations of challenging problems,
which improves the separability of different LMs on tasks where their
performance is generally low. With this approach, general-purpose LM graders
match state-of-the-art performance on RewardBench, surpassing almost all
the specially-tuned models. LM graders also outperform individual human raters
on Vibe-Eval, and approach human expert graders on Olympiad-level math
problems.
|
2502.10966
|
Neural Networks Remember More: The Power of Parameter Isolation and
Combination
|
cs.CL cs.AI
|
Catastrophic forgetting is a pervasive issue for pre-trained language models
(PLMs) during continual learning, where models lose previously acquired
knowledge when sequentially trained on a series of tasks. The model's ability
to retain old tasks is referred to as stability, while its adaptability to new
tasks is called plasticity. Therefore, the key to solving this problem is to
find a trade-off between the plasticity and stability of the model. To address
this issue, in this paper, we propose a novel method to achieve a balance
between model stability and plasticity, thereby mitigating catastrophic
forgetting. More specifically, our proposed approach leverages parameter
isolation and a subsequent combination strategy. Initially, in the training
stage, the model adapts to each downstream task via a parameter isolation
method to prevent potential interference among different tasks. We then combine
all trained parameters, which contain acquired knowledge, using the task
arithmetic method and finally apply them to the backbone model. Empirical
evaluations on continual language learning benchmarks substantiate the
effectiveness of our approach, revealing a marked enhancement over existing
state-of-the-art approaches.
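The combination step can be illustrated with task arithmetic in its simplest form: each isolated task's parameter delta (its "task vector") is added back onto the shared backbone. A toy sketch over flat parameter lists (the scaling coefficient `lam` and the list representation are illustrative simplifications, not the paper's exact formulation):

```python
def task_vector(base, finetuned):
    """Task vector = finetuned parameters minus the shared backbone."""
    return [f - b for f, b in zip(finetuned, base)]

def combine(base, finetuned_models, lam=1.0):
    """Task arithmetic: add the (scaled) sum of task vectors to the backbone."""
    merged = list(base)
    for model in finetuned_models:
        for i, delta in enumerate(task_vector(base, model)):
            merged[i] += lam * delta
    return merged

backbone = [0.0, 1.0]
task_a = [1.0, 1.0]   # parameters after adapting to task A in isolation
task_b = [0.0, 3.0]   # parameters after adapting to task B in isolation
merged = combine(backbone, [task_a, task_b], lam=1.0)  # [1.0, 3.0]
```

Because each task is trained in isolation, its delta carries only that task's knowledge, and summing the deltas merges the tasks without sequential interference.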
|
2502.10967
|
Open-Set Cross-Network Node Classification via Unknown-Excluded
Adversarial Graph Domain Alignment
|
cs.SI
|
Existing cross-network node classification methods are mainly proposed for
the closed-set setting, where the source network and the target network share
exactly the same label space. Such a setting is restricted in real-world
applications, since the target network might contain additional classes that
are not present in the source. In this work, we study a more realistic open-set
cross-network node classification (O-CNNC) problem, where the target network
contains all the known classes in the source and further contains several
target-private classes unseen in the source. Borrowing the concept from
open-set domain adaptation, all target-private classes are defined as an
additional unknown class. To address the challenging O-CNNC problem, we propose
an unknown-excluded adversarial graph domain alignment (UAGA) model with a
separate-adapt training strategy. Firstly, UAGA roughly separates known classes
from unknown class, by training a graph neural network encoder and a
neighborhood-aggregation node classifier in an adversarial framework. Then,
unknown-excluded adversarial domain alignment is customized to align only
target nodes from known classes with the source, while pushing target nodes
from the unknown class far away from the source, by assigning positive and
negative domain adaptation coefficients to known-class and unknown-class
nodes, respectively.
Extensive experiments on real-world datasets demonstrate that the proposed
UAGA significantly outperforms state-of-the-art methods on O-CNNC.
|
2502.10973
|
Akan Cinematic Emotions (ACE): A Multimodal Multi-party Dataset for
Emotion Recognition in Movie Dialogues
|
cs.CL
|
In this paper, we introduce the Akan Cinematic Emotions (ACE) dataset, the
first multimodal emotion dialogue dataset for an African language, addressing
the significant lack of resources for low-resource languages in emotion
recognition research. ACE, developed for the Akan language, contains 385
emotion-labeled dialogues and 6,162 utterances across audio, visual, and
textual modalities, along with word-level prosodic prominence annotations. The
presence of prosodic labels in this dataset also makes it the first
prosodically annotated African language dataset. We demonstrate the quality and
utility of ACE through experiments using state-of-the-art emotion recognition
methods, establishing solid baselines for future research. We hope ACE inspires
further work on inclusive, linguistically and culturally diverse NLP resources.
|
2502.10975
|
GS-GVINS: A Tightly-integrated GNSS-Visual-Inertial Navigation System
Augmented by 3D Gaussian Splatting
|
cs.RO cs.CV eess.IV
|
Recently, the emergence of 3D Gaussian Splatting (3DGS) has drawn significant
attention in the area of 3D map reconstruction and visual SLAM. While extensive
research has explored 3DGS for indoor trajectory tracking using visual sensor
alone or in combination with Light Detection and Ranging (LiDAR) and Inertial
Measurement Unit (IMU), its integration with GNSS for large-scale outdoor
navigation remains underexplored. To address this gap, we propose
GS-GVINS: a tightly-integrated GNSS-Visual-Inertial Navigation System augmented
by 3DGS. This system leverages 3D Gaussians as a continuous, differentiable
scene representation in large-scale outdoor environments, enhancing navigation
performance through the constructed 3D Gaussian map. Notably, GS-GVINS is the
first GNSS-Visual-Inertial navigation application that directly utilizes the
analytical Jacobians of the SE(3) camera pose with respect to 3D Gaussians. To
maintain the quality of 3DGS rendering in extreme dynamic states, we introduce
a motion-aware 3D Gaussian pruning mechanism, updating the map based on relative
pose translation and the accumulated opacity along the camera ray. For
validation, we test our system under different driving environments: open-sky,
suburban, and urban. Both self-collected and public datasets are used for
evaluation. The results demonstrate the effectiveness of GS-GVINS in enhancing
navigation accuracy across diverse driving environments.
|
2502.10976
|
QuOTE: Question-Oriented Text Embeddings
|
cs.IR cs.AI cs.CL cs.LG
|
We present QuOTE (Question-Oriented Text Embeddings), a novel enhancement to
retrieval-augmented generation (RAG) systems, aimed at improving document
representation for accurate and nuanced retrieval. Unlike traditional RAG
pipelines, which rely on embedding raw text chunks, QuOTE augments chunks with
hypothetical questions that the chunk can potentially answer, enriching the
representation space. This better aligns document embeddings with user query
semantics, and helps address issues such as ambiguity and context-dependent
relevance. Through extensive experiments across diverse benchmarks, we
demonstrate that QuOTE significantly enhances retrieval accuracy, including in
multi-hop question-answering tasks. Our findings highlight the versatility of
question generation as a fundamental indexing strategy, opening new avenues for
integrating question generation into retrieval-based AI pipelines.
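The core idea can be illustrated with a toy retriever. Here a cheap token-overlap score stands in for a dense embedding, and the hypothetical questions are hand-written rather than LLM-generated; both are stand-ins for illustration only:

```python
from math import sqrt

def tokens(text):
    """Crude whitespace tokenizer with punctuation stripping."""
    return {t.strip("?.,").lower() for t in text.split()}

def score(query, doc):
    """Set-overlap cosine: a stand-in for dense embedding similarity."""
    q, d = tokens(query), tokens(doc)
    return len(q & d) / sqrt(len(q) * len(d)) if q and d else 0.0

chunk = "The Treaty of Paris was signed in 1783, ending the war."
# Hypothetical questions; in QuOTE these would be generated by an LLM.
questions = ["When was the Treaty of Paris signed?", "What ended the war?"]
augmented = chunk + " " + " ".join(questions)

query = "When was the treaty signed?"
plain, enriched = score(query, chunk), score(query, augmented)
```

Because the appended questions are phrased the way users phrase queries, the augmented chunk scores higher against the query than the raw chunk does, which is the alignment effect QuOTE exploits at embedding level.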
|
2502.10978
|
Agentic LLM Framework for Adaptive Decision Discourse
|
cs.AI cs.CY
|
Effective decision-making in complex systems requires synthesizing diverse
perspectives to address multifaceted challenges under uncertainty. This study
introduces a real-world-inspired agentic Large Language Model (LLM) framework
to simulate and enhance decision discourse: the deliberative process
through which actionable strategies are collaboratively developed. Unlike
traditional decision-support tools, the framework emphasizes dialogue,
trade-off exploration, and the emergent synergies generated by interactions
among agents embodying distinct personas. These personas simulate diverse
stakeholder roles, each bringing unique priorities, expertise, and value-driven
reasoning to the table. The framework incorporates adaptive and self-governing
mechanisms, enabling agents to dynamically summon additional expertise and
refine their assembly to address evolving challenges. An illustrative
hypothetical example focused on extreme flooding in a Midwestern township
demonstrates the framework's ability to navigate uncertainty, balance competing
priorities, and propose mitigation and adaptation strategies by considering
social, economic, and environmental dimensions. Results reveal how the
breadth-first exploration of alternatives fosters robust and equitable
recommendation pathways. This framework transforms how decisions are approached
in high-stakes scenarios and can be incorporated in digital environments. It
not only augments decision-makers' capacity to tackle complexity but also sets
a foundation for scalable and context-aware AI-driven recommendations. This
research explores novel and alternate routes leveraging agentic LLMs for
adaptive, collaborative, and equitable recommendation processes, with
implications across domains where uncertainty and complexity converge.
|
2502.10980
|
DFM: Deep Fourier Mimic for Expressive Dance Motion Learning
|
cs.RO
|
As entertainment robots gain popularity, the demand for natural and
expressive motion, particularly in dancing, continues to rise. Traditionally,
dancing motions have been manually designed by artists, a process that is both
labor-intensive and restricted to simple motion playback, lacking the
flexibility to incorporate additional tasks such as locomotion or gaze control
during dancing. To overcome these challenges, we introduce Deep Fourier Mimic
(DFM), a novel method that combines advanced motion representation with
Reinforcement Learning (RL) to enable smooth transitions between motions while
concurrently managing auxiliary tasks during dance sequences. While previous
frequency domain based motion representations have successfully encoded dance
motions into latent parameters, they often impose overly rigid periodic
assumptions at the local level, resulting in reduced tracking accuracy and
motion expressiveness, which is a critical aspect for entertainment robots. By
relaxing these locally periodic constraints, our approach not only enhances
tracking precision but also facilitates smooth transitions between different
motions. Furthermore, the learned RL policy that supports simultaneous base
activities, such as locomotion and gaze control, allows entertainment robots to
engage more dynamically and interactively with users rather than merely
replaying static, pre-designed dance routines.
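A frequency-domain motion representation of the kind DFM builds on encodes a periodic joint trajectory as a truncated Fourier series. A generic single-joint sketch (the coefficient layout is illustrative; DFM's actual latent parameterization and its relaxation of local periodicity are not reproduced here):

```python
import math

def fourier_trajectory(t, a0, coeffs, freq):
    """Evaluate q(t) = a0 + sum_k [a_k*cos(2*pi*k*f*t) + b_k*sin(2*pi*k*f*t)]."""
    q = a0
    for k, (a_k, b_k) in enumerate(coeffs, start=1):
        phase = 2.0 * math.pi * k * freq * t
        q += a_k * math.cos(phase) + b_k * math.sin(phase)
    return q

# A 1 Hz swing with a small second harmonic.
coeffs = [(0.5, 0.0), (0.1, 0.0)]
q0 = fourier_trajectory(0.0, 0.2, coeffs, freq=1.0)    # 0.2 + 0.5 + 0.1 = 0.8
q_half = fourier_trajectory(0.5, 0.2, coeffs, freq=1.0)
```

A strictly periodic series like this repeats exactly every 1/freq seconds; relaxing that constraint locally, as the abstract describes, is what allows tighter tracking and smooth transitions between motions.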
|
2502.10982
|
TEASER: Token Enhanced Spatial Modeling for Expressions Reconstruction
|
cs.CV
|
3D facial reconstruction from a single in-the-wild image is a crucial task in
human-centered computer vision tasks. While existing methods can recover
accurate facial shapes, there remains significant space for improvement in
fine-grained expression capture. Current approaches struggle with irregular
mouth shapes, exaggerated expressions, and asymmetrical facial movements. We
present TEASER (Token EnhAnced Spatial modeling for Expressions
Reconstruction), which addresses these challenges and enhances 3D facial
geometry performance. TEASER tackles two main limitations of existing methods:
insufficient photometric loss for self-reconstruction and inaccurate
localization of subtle expressions. We introduce a multi-scale tokenizer to
extract facial appearance information. Combined with a neural renderer, these
tokens provide precise geometric guidance for expression reconstruction.
Furthermore, TEASER incorporates a pose-dependent landmark loss to further
improve geometric performances. Our approach not only significantly enhances
expression reconstruction quality but also offers interpretable tokens suitable
for various downstream applications, such as photorealistic facial video
driving, expression transfer, and identity swapping. Quantitative and
qualitative experimental results across multiple datasets demonstrate that
TEASER achieves state-of-the-art performance in precise expression
reconstruction.
|
2502.10983
|
Learning Quiet Walking for a Small Home Robot
|
cs.RO
|
As home robotics gains traction, robots are increasingly integrated into
households, offering companionship and assistance. Quadruped robots,
particularly those resembling dogs, have emerged as popular alternatives to
traditional pets. However, user feedback highlights concerns about the noise
these robots generate during walking at home, particularly the loud footstep
sound. To address this issue, we propose a sim-to-real based reinforcement
learning (RL) approach to minimize the foot contact velocity, which is closely
related to the footstep sound. Our framework incorporates three key elements: learning
varying PD gains to actively dampen and stiffen each joint, utilizing foot
contact sensors, and employing curriculum learning to gradually enforce
penalties on foot contact velocity. Experiments demonstrate that our learned
policy achieves superior quietness compared to an RL baseline and the carefully
handcrafted Sony commercial controllers. We also demonstrate the trade-off
between robustness and quietness. This research contributes to developing
quieter and more user-friendly robotic companions in home environments.
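The contact-gated velocity penalty and its curriculum can be sketched as a reward term. All names and the linear ramp schedule are assumptions for illustration; the paper's exact reward shaping is not reproduced:

```python
def contact_velocity_penalty(foot_speeds, in_contact, weight):
    """Penalize foot speed only for feet the contact sensors report as touching."""
    return -weight * sum(s for s, c in zip(foot_speeds, in_contact) if c)

def curriculum_weight(step, ramp_steps, w_max):
    """Linearly ramp the penalty weight so early training can focus on walking."""
    return w_max * min(1.0, step / ramp_steps)

w = curriculum_weight(step=500, ramp_steps=1000, w_max=2.0)       # 1.0
r = contact_velocity_penalty([0.3, 0.0, 0.8, 0.1],
                             [True, False, True, False], w)       # -1.1
```

Gating on the contact sensor matters: penalizing swing-phase foot velocity would discourage walking itself, while penalizing only at touchdown targets exactly the impact that produces the footstep sound.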
|
2502.10985
|
Is Elo Rating Reliable? A Study Under Model Misspecification
|
cs.LG cs.AI stat.ME stat.ML
|
Elo rating, widely used for skill assessment across diverse domains ranging
from competitive games to large language models, is often understood as an
incremental update algorithm for estimating a stationary Bradley-Terry (BT)
model. However, our empirical analysis of practical matching datasets reveals
two surprising findings: (1) Most games deviate significantly from the
assumptions of the BT model and stationarity, raising questions on the
reliability of Elo. (2) Despite these deviations, Elo frequently outperforms
more complex rating systems, such as mElo and pairwise models, which are
specifically designed to account for non-BT components in the data,
particularly in terms of win rate prediction. This paper explains this
unexpected phenomenon through three key perspectives: (a) We reinterpret Elo as
an instance of online gradient descent, which provides no-regret guarantees
even in misspecified and non-stationary settings. (b) Through extensive
synthetic experiments on data generated from transitive but non-BT models, such
as strongly or weakly stochastic transitive models, we show that the
"sparsity" of practical matching data is a critical factor behind Elo's
superior performance in prediction compared to more complex rating systems. (c)
We observe a strong correlation between Elo's predictive accuracy and its
ranking performance, further supporting its effectiveness in ranking.
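Perspective (a) rests on the standard incremental Elo update, which is a scaled gradient step on the Bradley-Terry log-loss. A minimal sketch using the conventional 400-point logistic scale and a fixed K-factor:

```python
def elo_expected(r_a, r_b):
    """Bradley-Terry / logistic win probability of A on the 400-point scale."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, score_a, k=32.0):
    """One incremental update; (score_a - expected) is the loss gradient."""
    delta = k * (score_a - elo_expected(r_a, r_b))
    return r_a + delta, r_b - delta

# Evenly rated players, A wins: A gains k/2 = 16 points.
r_a, r_b = elo_update(1500.0, 1500.0, score_a=1.0)  # (1516.0, 1484.0)
```

Reading `score_a - expected` as the gradient of the log-loss is what lets online-gradient-descent regret bounds apply even when the data is non-stationary or not Bradley-Terry at all.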
|
2502.10988
|
OMG: Opacity Matters in Material Modeling with Gaussian Splatting
|
cs.CV
|
Decomposing geometry, materials and lighting from a set of images, namely
inverse rendering, has been a long-standing problem in computer vision and
graphics. Recent advances in neural rendering enable photo-realistic and
plausible inverse rendering results. The emergence of 3D Gaussian Splatting has
boosted it to the next level by showing real-time rendering potentials. An
intuitive finding is that the models used for inverse rendering do not take
into account the dependency of opacity w.r.t. material properties, namely cross
section, as suggested by optics. Therefore, we develop a novel approach that
adds this dependency to the modeling itself. Inspired by radiative transfer, we
augment the opacity term by introducing a neural network that takes material
properties as input to model the cross section, together with a physically
correct activation function. The gradients for material properties therefore
come not only from color but also from opacity, providing an additional
constraint for their optimization. As a result, the proposed method incorporates more
accurate physical properties compared to previous works. We implement our
method into 3 different baselines that use Gaussian Splatting for inverse
rendering and achieve significant improvements universally in terms of novel
view synthesis and material modeling.
|
2502.10990
|
FinMTEB: Finance Massive Text Embedding Benchmark
|
cs.CL cs.IR
|
Embedding models play a crucial role in representing and retrieving
information across various NLP applications. Recent advances in large language
models (LLMs) have further enhanced the performance of embedding models. While
these models are often benchmarked on general-purpose datasets, real-world
applications demand domain-specific evaluation. In this work, we introduce the
Finance Massive Text Embedding Benchmark (FinMTEB), a specialized counterpart
to MTEB designed for the financial domain. FinMTEB comprises 64 financial
domain-specific embedding datasets across 7 tasks that cover diverse textual
types in both Chinese and English, such as financial news articles, corporate
annual reports, ESG reports, regulatory filings, and earnings call transcripts.
We also develop a finance-adapted model, FinPersona-E5, using a persona-based
synthetic data method to cover diverse financial embedding tasks for training.
Through extensive evaluation of 15 embedding models, including FinPersona-E5,
we show three key findings: (1) performance on general-purpose benchmarks shows
limited correlation with financial domain tasks; (2) domain-adapted models
consistently outperform their general-purpose counterparts; and (3)
surprisingly, a simple Bag-of-Words (BoW) approach outperforms sophisticated
dense embeddings in financial Semantic Textual Similarity (STS) tasks,
underscoring current limitations in dense embedding techniques. Our work
establishes a robust evaluation framework for financial NLP applications and
provides crucial insights for developing domain-specific embedding models.
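The BoW baseline in finding (3) is simply cosine similarity over term-count vectors. A minimal sketch (whitespace tokenization is a simplification of any real BoW pipeline):

```python
from collections import Counter
from math import sqrt

def bow_cosine(a, b):
    """Cosine similarity between bag-of-words term-count vectors."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in ca)
    norm = sqrt(sum(v * v for v in ca.values())) * sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

s = bow_cosine("net income rose 5% in Q3", "net income increased 5% in Q3")
```

That a representation this literal can beat dense embeddings on financial STS suggests the benchmark's similarity judgments reward exact terminology overlap, which dense models tend to smooth over.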
|
2502.10993
|
RoseRAG: Robust Retrieval-augmented Generation with Small-scale LLMs via
Margin-aware Preference Optimization
|
cs.CL cs.LG
|
Large language models (LLMs) have achieved impressive performance but face
high computational costs and latency, limiting their deployment in
resource-constrained settings. In contrast, small-scale LLMs (SLMs) are more
efficient yet struggle to capture evolving real-world knowledge.
Retrieval-augmented generation (RAG) helps by integrating external knowledge,
but imperfect retrieval can introduce distracting noise that misleads SLMs. We
propose RoseRAG, a robust RAG framework for SLMs via Margin-aware Preference
Optimization. RoseRAG employs multi-turn prompting for detailed reasoning,
rejection sampling for high-quality explanations, and contrastive preference
selection to refine responses by maximizing the likelihood gap between
preferred and non-preferred outputs. By integrating these components into a
margin-aware optimization process, RoseRAG robustly enhances the accuracy and
reliability of SLMs for RAG applications. Extensive experiments on three
open-domain question answering benchmarks indicate that RoseRAG significantly
surpasses state-of-the-art baselines.
|
2502.10994
|
SSVEP-BiMA: Bifocal Masking Attention Leveraging Native and
Symmetric-Antisymmetric Components for Robust SSVEP Decoding
|
cs.LG
|
Brain-computer interfaces (BCIs) based on steady-state visual evoked
potentials (SSVEP) are a popular paradigm owing to their simplicity and high
information transfer rate (ITR). Accurate and fast SSVEP decoding is crucial for reliable BCI
performance. However, conventional decoding methods demand longer time windows,
and deep learning models typically require subject-specific fine-tuning,
leaving challenges in achieving optimal performance in cross-subject settings.
This paper proposes a bifocal masking attention-based method (SSVEP-BiMA) that
synergistically leverages the native and symmetric-antisymmetric components for
decoding SSVEP. By utilizing multiple signal representations, the network is
able to integrate features from a wider range of sample perspectives, leading
to more generalized and comprehensive feature learning, which enhances both
prediction accuracy and robustness. We performed experiments on two public
datasets, and the results demonstrate that our proposed method surpasses
baseline approaches in both accuracy and ITR. We believe that this work will
contribute to the development of more efficient SSVEP-based BCI systems.
|
2502.10995
|
Evaluating Large language models on Understanding Korean indirect Speech
acts
|
cs.CL
|
Accurately understanding the intention of an utterance is crucial in
conversational communication. As conversational artificial intelligence models
are rapidly being developed and applied in various fields, it is important to
evaluate LLMs' ability to understand the intention behind a user's
utterance. This study evaluates whether current LLMs can understand the
intention of an utterance by considering the given conversational context,
particularly in cases where the actual intention differs from the
surface-level, literal meaning of the sentence, i.e., indirect speech acts.
Our findings reveal that Claude3-Opus outperformed the other competing models,
with 71.94% in MCQ and 65% in OEQ, showing a clear advantage. In general,
proprietary models exhibited relatively higher performance compared to
open-source models. Nevertheless, no LLMs reached the level of human
performance. Most LLMs, except for Claude3-Opus, demonstrated significantly
lower performance in understanding indirect speech acts compared to direct
speech acts, where the intention is explicitly revealed through the utterance.
This study not only performs an overall pragmatic evaluation of each LLM's
language use through the analysis of OEQ response patterns, but also emphasizes
the necessity for further research to improve LLMs' understanding of indirect
speech acts for more natural communication with humans.
|
2502.10996
|
RAS: Retrieval-And-Structuring for Knowledge-Intensive LLM Generation
|
cs.CL
|
Retrieval-augmented language models often struggle with knowledge-intensive
tasks due to inefficient retrieval, unstructured knowledge integration, and
single-pass architectures. We present Retrieval-And-Structuring (RAS), a novel
framework that dynamically constructs and reasons over query-specific knowledge
graphs through iterative retrieval and structuring. RAS introduces four key
technical innovations: (1) a theme-scoped retrieval mechanism that efficiently
narrows the search space while maintaining retrieval quality, (2) an action
planning module that determines knowledge needs and generates focused
sub-queries, (3) a dynamic knowledge structuring approach that converts
retrieved text into an evolving knowledge graph, and (4) a graph-augmented
answering component that leverages the accumulated structured information. Our
framework achieves state-of-the-art performance, surpassing leading baselines
by 6.4% with open-source language models and 7.0% with proprietary models on
seven knowledge-intensive generation datasets across all evaluation metrics.
Detailed ablation studies verify the contribution of each technical component
to the overall system performance.
|
2502.10997
|
New Rates in Stochastic Decision-Theoretic Online Learning under
Differential Privacy
|
cs.LG cs.CR cs.DS
|
Hu and Mehta (2024) posed an open problem: what is the optimal
instance-dependent rate for stochastic decision-theoretic online learning
(with $K$ actions and $T$ rounds) under $\varepsilon$-differential privacy?
Before, the best known upper bound and lower bound are $O\left(\frac{\log
K}{\Delta_{\min}} + \frac{\log K\log T}{\varepsilon}\right)$ and
$\Omega\left(\frac{\log K}{\Delta_{\min}} + \frac{\log K}{\varepsilon}\right)$
(where $\Delta_{\min}$ is the gap between the optimal and the second-best actions).
In this paper, we partially address this open problem with two new
results. First, we provide an improved upper bound for this problem of
$O\left(\frac{\log K}{\Delta_{\min}} + \frac{\log^2K}{\varepsilon}\right)$,
where the $T$-dependency has been removed. Second, we introduce the
deterministic setting, a weaker setting of this open problem, where the
received loss vector is deterministic and we can focus on the analysis for
$\varepsilon$ regardless of the sampling error. In the deterministic setting,
we prove upper and lower bounds that match at $\Theta\left(\frac{\log
K}{\varepsilon}\right)$, while a direct application of the analysis and
algorithms from the original setting still leads to an extra log factor.
Technically, we introduce the Bernoulli resampling trick, which enforces a
monotonic property for the output from report-noisy-max mechanism that enables
a tighter analysis. Moreover, by replacing the Laplace noise with Gumbel noise,
we derive an explicit integral form that gives a tight characterization of the
regret in the deterministic case.
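The Gumbel-noise variant mentioned at the end rests on the classic equivalence between report-noisy-max with Gumbel noise and the exponential mechanism. A sketch for selecting the lowest-loss action (the eps/(2*sensitivity) score scaling follows the standard exponential-mechanism calibration; the paper's Bernoulli resampling trick is not reproduced here):

```python
import math
import random

def gumbel(rng):
    """Standard Gumbel(0, 1) sample via inverse transform."""
    return -math.log(-math.log(rng.random()))

def report_noisy_max(losses, eps, sensitivity=1.0, seed=None):
    """Return the index with the highest Gumbel-perturbed (negated, scaled) loss.

    With Gumbel noise this samples exactly from the exponential mechanism.
    """
    rng = random.Random(seed)
    noisy = [-(l * eps) / (2.0 * sensitivity) + gumbel(rng) for l in losses]
    return max(range(len(losses)), key=noisy.__getitem__)

best = report_noisy_max([0.2, 5.0, 3.1], eps=50.0, seed=0)
```

Because the Gumbel-max trick gives a closed-form softmax selection distribution, the selection probabilities admit the explicit integral form the abstract alludes to, unlike the Laplace-noise variant.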
|
2502.10999
|
ControlText: Unlocking Controllable Fonts in Multilingual Text Rendering
without Font Annotations
|
cs.CV cs.AI cs.CL cs.MM
|
This work demonstrates that diffusion models can achieve font-controllable
multilingual text rendering using just raw images without font label
annotations. Visual text rendering remains a significant challenge. While
recent methods condition diffusion on glyphs, it is impossible to retrieve
exact font annotations from large-scale, real-world datasets, which prevents
user-specified font control. To address this, we propose a data-driven solution
that integrates the conditional diffusion model with a text segmentation model,
utilizing segmentation masks to capture and represent fonts in pixel space in a
self-supervised manner, thereby eliminating the need for any ground-truth
labels and enabling users to customize text rendering with any multilingual
font of their choice. The experiment provides a proof of concept of our
algorithm in zero-shot text and font editing across diverse fonts and
languages, providing valuable insights for the community and industry toward
achieving generalized visual text rendering.
|
2502.11001
|
CL-MFAP: A Contrastive Learning-Based Multimodal Foundation Model for
Molecular Property Prediction and Antibiotic Screening
|
q-bio.BM cs.AI cs.LG q-bio.QM
|
Due to the rise in antimicrobial resistance, identifying novel compounds with
antibiotic potential is crucial for combatting this global health issue.
However, traditional drug development methods are costly and inefficient.
Recognizing the pressing need for more effective solutions, researchers have
turned to machine learning techniques to streamline the prediction and
development of novel antibiotic compounds. While foundation models have shown
promise in antibiotic discovery, current mainstream efforts still fall short of
fully leveraging the potential of multimodal molecular data. Recent studies
suggest that contrastive learning frameworks utilizing multimodal data exhibit
excellent performance in representation learning across various domains.
Building upon this, we introduce CL-MFAP, an unsupervised contrastive learning
(CL)-based multimodal foundation (MF) model specifically tailored for
discovering small molecules with potential antibiotic properties (AP) using
three types of molecular data. This model employs 1.6 million bioactive
molecules with drug-like properties from the ChEMBL dataset to jointly pretrain
three encoders: (1) a transformer-based encoder with rotary position embedding
for processing SMILES strings; (2) another transformer-based encoder,
incorporating a novel bi-level routing attention mechanism to handle molecular
graph representations; and (3) a Morgan fingerprint encoder using a multilayer
perceptron, all jointly trained under the contrastive learning objective. The CL-MFAP
outperforms baseline models in antibiotic property prediction by effectively
utilizing different molecular modalities and demonstrates superior
domain-specific performance when fine-tuned for antibiotic-related property
prediction tasks.
|
2502.11002
|
Adjust Your Focus: Defocus Deblurring From Dual-Pixel Images Using
Explicit Multi-Scale Cross-Correlation
|
cs.CV
|
Defocus blur is a common problem in photography. It arises when an image is
captured with a wide aperture, resulting in a shallow depth of field. Sometimes
it is desired, e.g., in portrait effect. Otherwise, it is a problem from both
an aesthetic point of view and downstream computer vision tasks, such as
segmentation and depth estimation. Recovering an all-in-focus image from an
out-of-focus one is a highly challenging and often ill-posed problem. A
recent work exploited dual-pixel (DP) image information, widely available in
consumer DSLRs and high-end smartphones, to solve the problem of defocus
deblurring. DP sensors result in two sub-aperture views containing defocus
disparity cues. A given pixel's disparity is directly proportional to the
distance from the focal plane. However, the existing methods adopt a naïve
approach of a channel-wise concatenation of the two DP views without explicitly
utilizing the disparity cues within the network. In this work, we propose to
perform an explicit cross-correlation between the two DP views to guide the
network for appropriate deblurring in different image regions. We adopt
multi-scale cross-correlation to handle blur and disparities at different
scales. Quantitative and qualitative evaluation of our multi-scale
cross-correlation network (MCCNet) reveals that it achieves better defocus
deblurring than existing state-of-the-art methods despite having lower
computational complexity.
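Cross-correlation between two sub-aperture views can be illustrated in 1-D: slide one view over the other and keep the shift with the highest normalized correlation. The signals and the brute-force search below are toy stand-ins for the network's feature-level, multi-scale version:

```python
def ncc(a, b):
    """Normalized cross-correlation of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den if den else 0.0

def best_disparity(left, right, max_shift):
    """Return the shift s maximizing correlation of left[i] with right[i + s]."""
    best_s, best_c = 0, -2.0
    for s in range(-max_shift, max_shift + 1):
        pairs = [(left[i], right[i + s]) for i in range(len(left))
                 if 0 <= i + s < len(right)]
        a, b = [p[0] for p in pairs], [p[1] for p in pairs]
        c = ncc(a, b)
        if c > best_c:
            best_s, best_c = s, c
    return best_s

left = [0, 1, 3, 1, 0, 0, 0, 0]
right = [0, 0, 0, 1, 3, 1, 0, 0]                  # same profile, shifted by two
shift = best_disparity(left, right, max_shift=3)  # 2
```

The recovered shift plays the role of the defocus-disparity cue: since disparity grows with distance from the focal plane, correlating the views at multiple scales tells the network how much to deblur each region.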
|
2502.11003
|
FeaKM: Robust Collaborative Perception under Noisy Pose Conditions
|
cs.CV
|
Collaborative perception is essential for networks of agents with limited
sensing capabilities, enabling them to work together by exchanging information
to achieve a robust and comprehensive understanding of their environment.
However, localization inaccuracies often lead to significant spatial message
displacement, which undermines the effectiveness of these collaborative
efforts. To tackle this challenge, we introduce FeaKM, a novel method that
employs Feature-level Keypoints Matching to effectively correct pose
discrepancies among collaborating agents. Our approach begins by utilizing a
confidence map to identify and extract salient points from intermediate feature
representations, allowing for the computation of their descriptors. This step
ensures that the system can focus on the most relevant information, enhancing
the matching process. We then implement a target-matching strategy that
generates an assignment matrix, correlating the keypoints identified by
different agents. This is critical for establishing accurate correspondences,
which are essential for effective collaboration. Finally, we employ a
fine-grained transformation matrix to synchronize the features of all agents
and ascertain their relative statuses, ensuring coherent communication among
them. Our experimental results demonstrate that FeaKM significantly outperforms
existing methods on the DAIR-V2X dataset, confirming its robustness even under
severe noise conditions. The code and implementation details are available at
https://github.com/uestchjw/FeaKM.
|
2502.11006
|
Prompt Inject Detection with Generative Explanation as an Investigative
Tool
|
cs.CR cs.AI
|
Large Language Models (LLMs) are vulnerable to adversarial prompt-based
injections. These injections can jailbreak models or exploit their
vulnerabilities through explicit prompt requests that lead to undesired
responses. In the
context of investigating prompt injects, the challenge is the sheer volume of
input prompts involved that are likely to be largely benign. This investigative
challenge is further complicated by the semantics and subjectivity of the input
prompts involved in the LLM conversation with its user and the context of the
environment in which the conversation is carried out. Hence, the
challenge for AI security investigators would be two-fold. The first is to
identify adversarial prompt injects and then to assess whether the input prompt
is contextually benign or adversarial. For the first step, this could be done
using existing AI security solutions like guardrails to detect and protect the
LLMs. Guardrails have been developed using a variety of approaches: a popular
one is signature-based detection, and another is to train AI classifiers,
such as NLP-based language models, to flag such prompts. However, in the
context of an AI security investigation of prompt injections, these guardrails
lack the ability to aid investigators in triaging or assessing the identified
input prompts. In this
applied research exploration, we explore the use of the text-generation
capabilities of LLMs to detect prompt injections and to generate explanations
for the detections, aiding AI security investigators in assessing and triaging
them. The practical benefit of such a tool is to ease the task of
investigating prompt injections.
|
2502.11007
|
Local-Cloud Inference Offloading for LLMs in Multi-Modal, Multi-Task,
Multi-Dialogue Settings
|
cs.LG cs.DC
|
Compared to traditional machine learning models, recent large language models
(LLMs) can exhibit multi-task-solving capabilities through multiple dialogues
and multi-modal data sources. These unique characteristics of LLMs, beyond
their large size, make their deployment more challenging during the inference
stage. Specifically, (i) deploying LLMs on local devices faces computational,
memory, and energy resource issues, while (ii) deploying them in the cloud
cannot guarantee real-time service and incurs communication/usage costs. In
this paper, we design a local-cloud LLM inference offloading (LCIO) system,
featuring (i) a large-scale cloud LLM that can handle multi-modal data sources
and (ii) a lightweight local LLM that can process simple tasks at high speed.
LCIO employs resource-constrained reinforcement learning (RCRL) to determine
where to make the inference (i.e., local vs. cloud) and which multi-modal data
sources to use for each dialogue/task, aiming to maximize the long-term reward
(which incorporates response quality, latency, and usage cost) while adhering
to resource constraints. We also propose M4A1, a new dataset that accounts for
multi-modal, multi-task, multi-dialogue, and multi-LLM characteristics, to
investigate the capabilities of LLMs in various practical scenarios. We
demonstrate the effectiveness of LCIO compared to baselines, showing
significant savings in latency and cost while achieving satisfactory response
quality.
|
2502.11008
|
CounterBench: A Benchmark for Counterfactuals Reasoning in Large
Language Models
|
cs.CL
|
Counterfactual reasoning is widely recognized as one of the most challenging
and intricate aspects of causality in artificial intelligence. In this paper,
we evaluate the performance of large language models (LLMs) in counterfactual
reasoning. In contrast to previous studies that primarily focus on commonsense
causal reasoning, where LLMs often rely on prior knowledge for inference, we
specifically assess their ability to perform counterfactual inference using a
set of formal rules. To support this evaluation, we introduce a new benchmark
dataset, CounterBench, comprising 1K counterfactual reasoning questions. The
dataset is designed with varying levels of difficulty, diverse causal graph
structures, distinct types of counterfactual questions, and multiple
nonsensical name variants. Our experiments demonstrate that counterfactual
reasoning poses a significant challenge for LLMs, with most models performing
at levels comparable to random guessing. To enhance LLMs' counterfactual
reasoning ability, we propose a novel reasoning paradigm, CoIn, which guides
LLMs through iterative reasoning and backtracking to systematically explore
counterfactual solutions. Experimental results show that our method
significantly improves LLM performance on counterfactual reasoning tasks and
consistently enhances performance across different LLMs. Our dataset is
available at https://huggingface.co/datasets/CounterBench/CounterBench.
|
2502.11009
|
Computing Inconsistency Measures Under Differential Privacy
|
cs.DB
|
Assessing data quality is crucial to knowing whether and how to use the data
for different purposes. Specifically, given a collection of integrity
constraints, various ways have been proposed to quantify the inconsistency of a
database. Inconsistency measures are particularly important when we wish to
assess the quality of private data without revealing sensitive information. We
study the estimation of inconsistency measures for a database protected under
Differential Privacy (DP). Such estimation is nontrivial since some measures
intrinsically query sensitive information, and the computation of others
involves functions on underlying sensitive data. Among five inconsistency
measures that have been proposed in recent work, we identify that two are
intractable in the DP setting. The major challenge for the other three is high
sensitivity: adding or removing one tuple from the dataset may significantly
affect the outcome. To mitigate that, we model the dataset using a conflict
graph and investigate private graph statistics to estimate these measures. The
proposed machinery includes adapting graph-projection techniques with parameter
selection optimizations on the conflict graph and a DP variant of approximate
vertex cover size. We experimentally show that we can effectively compute DP
estimates of the three measures on five real-world datasets with denial
constraints, where the density of the conflict graphs highly varies.
|
2502.11013
|
Collaborative Deterministic-Diffusion Model for Probabilistic Urban
Spatiotemporal Prediction
|
cs.LG cs.AI
|
Accurate prediction of urban spatiotemporal dynamics is essential for
enhancing urban management and decision-making. Existing spatiotemporal
prediction models are predominantly deterministic, focusing on primary
spatiotemporal patterns. However, those dynamics are highly complex, exhibiting
multi-modal distributions that are challenging for deterministic models to
capture. In this paper, we highlight the critical role of probabilistic
prediction in capturing the uncertainties and complexities inherent in
spatiotemporal data. While mainstream probabilistic models can capture
uncertainty, they struggle with accurately learning primary patterns and often
suffer from computational inefficiency. To address these challenges, we propose
CoST, which combines deterministic and probabilistic models to improve both
predictive accuracy and the ability to handle uncertainty. To achieve this, we
design a mean-residual decomposition framework, where the mean value is modeled
by a deterministic model, and the residual variations are learned by a
probabilistic model, specifically diffusion models. Moreover, we introduce a
scale-aware diffusion process, which better accounts for spatially
heterogeneous dynamics across different regions. Extensive experiments on eight
real-world datasets demonstrate that CoST significantly outperforms existing
methods in both deterministic and probabilistic metrics, achieving a 20%
improvement with low computational cost. CoST bridges the gap between
deterministic precision and probabilistic uncertainty, making a significant
advancement in the field of urban spatiotemporal prediction.
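The mean-residual decomposition described above can be sketched in a few lines; `mean_model` and `residual_sampler` are hypothetical stand-ins for the deterministic network and the diffusion model, which are not specified in this summary:

```python
import numpy as np

def cost_predict(x_hist, mean_model, residual_sampler, n_samples=50):
    """Mean-residual decomposition (illustrative sketch): a deterministic
    model predicts the mean, and a generative model (a diffusion model in
    CoST) samples residual variations around it."""
    mu = mean_model(x_hist)                      # primary spatiotemporal pattern
    samples = np.stack([mu + residual_sampler(x_hist)
                        for _ in range(n_samples)])
    return mu, samples                           # point forecast + ensemble
```

The returned ensemble supports probabilistic metrics (e.g., quantiles) while `mu` remains a conventional deterministic forecast.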
|
2502.11018
|
GRIFFIN: Effective Token Alignment for Faster Speculative Decoding
|
cs.CL cs.AI
|
Speculative decoding accelerates inference in large language models (LLMs) by
generating multiple draft tokens simultaneously. However, existing methods
often struggle with token misalignment between the training and decoding
phases, limiting their performance. To address this, we propose GRIFFIN, a
novel framework that incorporates a token-alignable training strategy and a
token-alignable draft model to mitigate misalignment. The training strategy
employs a loss masking mechanism to exclude highly misaligned tokens during
training, preventing them from negatively impacting the draft model's
optimization. The token-alignable draft model introduces input tokens to
correct inconsistencies in generated features. Experiments on LLaMA-series and
Vicuna models demonstrate that GRIFFIN achieves an average acceptance length
improvement of over 7% and a speedup ratio exceeding 8%, outperforming current
SoTAs as shown in Fig. 1 (a) and (b).
|
2502.11019
|
Unlocking the Power of Function Vectors for Characterizing and
Mitigating Catastrophic Forgetting in Continual Instruction Tuning
|
cs.LG cs.AI
|
Catastrophic forgetting (CF) poses a significant challenge in machine
learning, where a model forgets previously learned information upon learning
new tasks. Despite the advanced capabilities of Large Language Models (LLMs),
they continue to face challenges with CF during continual learning. The
majority of existing research focuses on analyzing forgetting patterns through
a singular training sequence, thereby overlooking the intricate effects that
diverse tasks have on model behavior. Our study explores CF across various
settings, discovering that model forgetting is influenced by both the specific
training tasks and the models themselves. To this end, we interpret forgetting
by examining the function vector (FV), a compact representation of functions in
LLMs, offering a model-dependent indicator for the occurrence of CF. Through
theoretical and empirical analyses, we demonstrate that CF in LLMs primarily
stems from biases in function activation rather than the overwriting of task
processing functions. Leveraging these insights, we propose a novel function
vector guided training methodology, incorporating a regularization technique to
stabilize the FV and mitigate forgetting. Empirical tests on four benchmarks
confirm the effectiveness of our proposed training method, substantiating our
theoretical framework concerning CF and model function dynamics. We plan to
make our code publicly accessible in the near future.
|
2502.11020
|
TUMLU: A Unified and Native Language Understanding Benchmark for Turkic
Languages
|
cs.CL cs.AI
|
Being able to thoroughly assess massive multi-task language understanding
(MMLU) capabilities is essential for advancing the applicability of
multilingual language models. However, preparing such benchmarks in a
high-quality native language is often costly, which limits the
representativeness of evaluation datasets. While recent efforts have focused on
building more inclusive MMLU benchmarks, these are conventionally built using
machine translation from high-resource languages, which may introduce errors
and fail to account for the linguistic and cultural intricacies of the target
languages. In this paper, we address the lack of native-language MMLU
benchmarks, especially for the under-represented Turkic language family with
its distinct morphosyntactic and cultural characteristics. We propose two
benchmarks for Turkic-language MMLU. TUMLU is a comprehensive, multilingual,
and natively
developed language understanding benchmark specifically designed for Turkic
languages. It consists of middle- and high-school level questions spanning 11
academic subjects in Azerbaijani, Crimean Tatar, Karakalpak, Kazakh, Tatar,
Turkish, Uyghur, and Uzbek. We also present TUMLU-mini, a more concise,
balanced, and manually verified subset of the dataset. Using this dataset, we
systematically evaluate a diverse range of open and proprietary multilingual
large language models (LLMs), including Claude, Gemini, GPT, and LLaMA,
offering an in-depth analysis of their performance across different languages,
subjects, and alphabets. To promote further research and development in
multilingual language understanding, we release TUMLU-mini and all
corresponding evaluation scripts.
|
2502.11021
|
Leveraging Uncertainty Estimation for Efficient LLM Routing
|
cs.NI cs.CL
|
Deploying large language models (LLMs) in edge-cloud environments requires an
efficient routing strategy to balance cost and response quality. Traditional
approaches prioritize either human-preference data or accuracy metrics from
benchmark datasets as routing criteria, but these methods suffer from rigidity
and subjectivity. Moreover, existing routing frameworks primarily focus on
accuracy and cost, neglecting response quality from a human preference
perspective. In this work, we propose the Confidence-Driven LLM Router, a novel
framework that leverages uncertainty estimation to optimize routing decisions.
To comprehensively assess routing performance, we evaluate both system cost
efficiency and response quality. In particular, we introduce the novel use of
LLM-as-a-Judge to simulate human rating preferences, providing the first
systematic assessment of response quality across different routing strategies.
Extensive experiments on MT-Bench, GSM8K, and MMLU demonstrate that our
approach outperforms state-of-the-art routing methods, achieving superior
response quality while maintaining cost efficiency.
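The routing idea can be expressed as a minimal sketch; the threshold `tau` and the `uncertainty` estimator are illustrative assumptions, not the paper's exact formulation:

```python
def route(prompt, small_llm, large_llm, uncertainty, tau=0.5):
    """Confidence-driven routing sketch: answer with the cheap edge model
    unless its estimated uncertainty exceeds tau, in which case escalate
    to the larger, costlier cloud model."""
    answer = small_llm(prompt)
    if uncertainty(prompt, answer) > tau:
        return large_llm(prompt), "large"        # low confidence: escalate
    return answer, "small"                       # confident: stay cheap
```

In practice `uncertainty` could be any calibrated score (e.g., token-level entropy); the sketch only fixes the decision structure.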
|
2502.11022
|
MultiTEND: A Multilingual Benchmark for Natural Language to NoSQL Query
Translation
|
cs.CL cs.AI
|
Natural language interfaces for NoSQL databases are increasingly vital in the
big data era, enabling users to interact with complex, unstructured data
without deep technical expertise. However, most recent advancements focus on
English, leaving a gap for multilingual support. This paper introduces
MultiTEND, the first and largest multilingual benchmark for natural language to
NoSQL query generation, covering six languages: English, German, French,
Russian, Japanese and Mandarin Chinese. Using MultiTEND, we analyze challenges
in translating natural language to NoSQL queries across diverse linguistic
structures, including lexical and syntactic differences. Experiments show that
performance accuracy in both English and non-English settings remains
relatively low, with a 4%-6% gap across scenarios like fine-tuned SLM,
zero-shot LLM, and RAG for LLM. To address the aforementioned challenges, we
introduce MultiLink, a novel framework that bridges the multilingual input to
NoSQL query generation gap through a Parallel Linking Process. It breaks down
the task into multiple steps, integrating parallel multilingual processing,
Chain-of-Thought (CoT) reasoning, and Retrieval-Augmented Generation (RAG) to
tackle lexical and structural challenges inherent in multilingual NoSQL
generation. MultiLink shows enhancements in all metrics for every language
against the top baseline, boosting execution accuracy by about 15% for English
and averaging a 10% improvement for non-English languages.
|
2502.11023
|
DT4ECG: A Dual-Task Learning Framework for ECG-Based Human Identity
Recognition and Human Activity Detection
|
eess.SP cs.LG
|
This article introduces DT4ECG, an innovative dual-task learning framework
for Electrocardiogram (ECG)-based human identity recognition and activity
detection. The framework employs a robust one-dimensional convolutional neural
network (1D-CNN) backbone integrated with residual blocks to extract
discriminative ECG features. To enhance feature representation, we propose a
novel Sequence Channel Attention (SCA) mechanism, which combines channel-wise
and sequential context attention to prioritize informative features across both
temporal and channel dimensions. Furthermore, to address gradient imbalance in
multi-task learning, we integrate GradNorm, a technique that dynamically
adjusts loss weights based on gradient magnitudes, ensuring balanced training
across tasks. Experimental results demonstrate the superior performance of our
model, achieving accuracy rates of 99.12% in ID classification and 90.11% in
activity classification. These findings underscore the potential of the DT4ECG
framework in enhancing security and user experience across various applications
such as fitness monitoring and personalized healthcare, thereby presenting a
transformative approach to integrating ECG-based biometrics in everyday
technologies.
|
2502.11024
|
TPCap: Unlocking Zero-Shot Image Captioning with Trigger-Augmented and
Multi-Modal Purification Modules
|
cs.CV
|
Recent advancements in large language models (LLMs) have significantly
enhanced the fluency and logical coherence of image captioning.
Retrieval-Augmented Generation (RAG) is widely adopted to incorporate external
knowledge into LLMs; however, existing RAG-based methods rely on separate
retrieval banks, introducing computational overhead and limiting the
utilization of LLMs' inherent zero-shot capabilities. To address these
limitations, we propose TPCap, a novel trigger-augmented and multi-modal
purification framework for zero-shot image captioning without external
retrieval libraries. TPCap consists of two key components: trigger-augmented
(TA) generation and multi-modal purification (MP). The TA module employs a
trigger projector with frozen and learnable projections to activate LLMs'
contextual reasoning, enhance visual-textual alignment, and mitigate data bias.
The MP module further refines the generated entity-related information by
filtering noise and enhancing feature quality, ensuring more precise and
factually consistent captions. We evaluate TPCap on COCO, NoCaps, Flickr30k,
and WHOOPS datasets. With only 0.82M trainable parameters and training on a
single NVIDIA RTX 4090 GPU, TPCap achieves competitive performance comparable
to state-of-the-art models.
|
2502.11026
|
Simplify RLHF as Reward-Weighted SFT: A Variational Method
|
cs.LG cs.AI cs.CL
|
Reinforcement Learning from Human Feedback (RLHF) is crucial for aligning
Large Language Models (LLMs) with human values. However, RLHF has been
continuously challenged by its high complexity in implementation and
computation consumption. Even with recent simplifications, such as Direct
Preference Optimization (DPO) and Advantage Leftover Lunch (A-LoL), the
problems of over-fitting and training instability continue to hinder the
alignment process from reaching the expected optimal performance. To address the
existing challenges, we propose a novel simplification of RLHF from the
perspective of variational inference, called $\textbf{V}$ariational
$\textbf{A}$lignment with $\textbf{R}$e-weighting ($\textbf{VAR}$). More
specifically, by directly minimizing the distribution gap between the learning
LLM policy and the optimal solution of RLHF, we transform the alignment
objective into a reward-driven re-weighted supervised fine-tuning (SFT) form,
which requires only a minor adjustment to the SFT loss and yields noticeable
improvements in training stability and effectiveness. On comprehensive alignment
and generation benchmarks, our VAR method has numerically achieved competitive
performance in LLM alignment helpfulness and harmlessness.
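As a rough sketch of what a reward-driven re-weighted SFT loss looks like (the exact weighting in VAR is derived variationally and is not reproduced in this summary; the softmax-over-rewards weighting below is an assumption for illustration only):

```python
import numpy as np

def reward_weighted_sft_loss(seq_logprobs, rewards, beta=1.0):
    """Re-weighted SFT (illustrative): weight each response's negative
    log-likelihood by a softmax over its reward, so higher-reward
    responses contribute more to the update than in plain SFT."""
    w = np.exp(np.array(rewards) / beta)
    w = w / w.sum()                              # normalize over the batch
    return float(-(w * np.array(seq_logprobs)).sum())
```

With uniform rewards this reduces to an averaged SFT loss; skewed rewards tilt the gradient toward preferred responses.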
|
2502.11027
|
Diversified Sampling Improves Scaling LLM inference
|
cs.LG
|
While increasing training compute has significantly improved the performance
of large language models (LLMs), similar gains have not been observed when
scaling inference compute. We hypothesize that the primary issue lies in the
uniformity of LLM outputs, which leads to inefficient sampling as models
repeatedly generate similar but inaccurate responses. Motivated by an
intriguing relationship between solution accuracy (Pass@10) and response
diversity, we propose DivSampling -- a novel and versatile sampling technique
designed to enhance the diversity of candidate solutions by introducing prompt
perturbations. DivSampling incorporates two categories of perturbations:
task-agnostic approaches, which are general and not tailored to any specific
task, and task-specific approaches, which are customized based on task content.
Our theoretical analysis demonstrates that, under mild assumptions, the error
rates of responses generated from diverse prompts are significantly lower
compared to those produced by stationary prompts. Comprehensive evaluations
across various tasks -- including reasoning, mathematics, and code generation
-- highlight the effectiveness of DivSampling in improving solution accuracy.
This scalable and efficient approach offers a new perspective on optimizing
test-time inference, addressing limitations in current sampling strategies.
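A task-agnostic variant of this idea can be sketched as follows; the perturbation list and the `generate` callable are placeholders, not the paper's actual prompts:

```python
import random

def div_sample(prompt, generate, perturbations, k=8, seed=0):
    """Diversified sampling sketch: instead of drawing k responses from
    one fixed prompt, prepend a randomly chosen perturbation to each of
    the k calls to push candidates toward distinct solutions."""
    rng = random.Random(seed)
    return [generate(rng.choice(perturbations) + "\n" + prompt)
            for _ in range(k)]
```

Candidates can then be scored or majority-voted exactly as with standard repeated sampling.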
|
2502.11028
|
Mind the Confidence Gap: Overconfidence, Calibration, and Distractor
Effects in Large Language Models
|
cs.CL cs.AI
|
Large Language Models (LLMs) demonstrate impressive performance across
diverse tasks, yet confidence calibration remains a challenge. Miscalibration -
where models are overconfident or underconfident - poses risks, particularly in
high-stakes applications. This paper presents an empirical study on LLM
calibration, examining how model size, distractors, and question types affect
confidence alignment. We introduce an evaluation framework to measure
overconfidence and investigate whether multiple-choice formats mitigate or
worsen miscalibration. Our findings show that while larger models (e.g.,
GPT-4o) are better calibrated overall, they are more prone to distraction,
whereas smaller models benefit more from answer choices but struggle with
uncertainty estimation. Unlike prior work, which primarily reports
miscalibration trends, we provide actionable insights into failure modes and
conditions that worsen overconfidence. These findings highlight the need for
calibration-aware interventions and improved uncertainty estimation methods.
|
2502.11031
|
A Critical Review of Predominant Bias in Neural Networks
|
cs.LG
|
Bias issues in neural networks garner significant attention alongside their
promising advancement. Among various bias issues, mitigating two predominant
biases is crucial to advancing fair and trustworthy AI: (1) ensuring neural
networks yield even performance across demographic groups, and (2) ensuring
algorithmic decision-making does not rely on protected attributes. However,
upon investigating \pc papers in the relevant literature, we find that
there exists a persistent, extensive but under-explored confusion regarding
these two types of biases. Furthermore, the confusion has already significantly
hampered the clarity of the community and subsequent development of debiasing
methodologies. Thus, in this work, we aim to restore clarity by providing two
mathematical definitions for these two predominant biases and leveraging these
definitions to unify a comprehensive list of papers. Next, we highlight the
common phenomena and the possible reasons for the existing confusion. To
alleviate the confusion, we provide extensive experiments on synthetic, census,
and image datasets, to validate the distinct nature of these biases,
distinguish their different real-world manifestations, and evaluate the
effectiveness of a comprehensive list of bias assessment metrics in assessing
the mitigation of these biases. Further, we compare these two types of biases
from multiple dimensions including the underlying causes, debiasing methods,
evaluation protocol, prevalent datasets, and future directions. Last, we
provide several suggestions aiming to guide researchers engaged in bias-related
work to avoid confusion and further enhance clarity in the community.
|
2502.11033
|
Convergence of Policy Mirror Descent Beyond Compatible Function
Approximation
|
cs.LG math.OC stat.ML
|
Modern policy optimization methods roughly follow the policy mirror descent
(PMD) algorithmic template, for which there are by now numerous theoretical
convergence results. However, most of these either target tabular environments,
or can be applied effectively only when the class of policies being optimized
over satisfies strong closure conditions, which is typically not the case when
working with parametric policy classes in large-scale environments. In this
work, we develop a theoretical framework for PMD for general policy classes
where we replace the closure conditions with a strictly weaker variational
gradient dominance assumption, and obtain upper bounds on the rate of
convergence to the best-in-class policy. Our main result leverages a novel
notion of smoothness with respect to a local norm induced by the occupancy
measure of the current policy, and casts PMD as a particular instance of smooth
non-convex optimization in non-Euclidean space.
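For context, the PMD template referred to above takes the standard proximal form (with step size $\eta$, action-value function $Q^{\pi_k}$, and Bregman divergence $D_\Phi$; this is the generic template, not the paper's specific instantiation):

$$\pi_{k+1} \in \arg\max_{\pi \in \Pi} \; \eta\,\langle Q^{\pi_k}, \pi \rangle - D_\Phi(\pi, \pi_k)$$

Choosing $\Phi$ as the negative entropy recovers natural-policy-gradient-style updates; the paper's contribution concerns convergence of this template over general parametric classes $\Pi$.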
|
2502.11034
|
AdaGC: Improving Training Stability for Large Language Model Pretraining
|
cs.LG
|
Large Language Models (LLMs) face increasing loss spikes during scaling,
undermining training stability and final performance. While gradient clipping
mitigates this issue, traditional global approaches poorly handle
parameter-specific gradient variations and decaying gradient norms. We propose
**AdaGC**, an adaptive gradient clipping framework that automatically adjusts
local thresholds per parameter through exponential moving average of gradient
norms. Theoretical analysis proves AdaGC's convergence under non-convex
conditions. Extensive experiments demonstrate significant improvements: On
Llama-2 7B/13B, AdaGC completely eliminates loss spikes while reducing WikiText
perplexity by 3.5% (+0.14pp LAMBADA accuracy) for 7B and achieving 0.65% lower
training loss with 1.47% reduced validation perplexity for 13B compared to
global clipping. For CLIP ViT-Base, AdaGC converges 25% faster than StableAdamW
with full spike elimination. The method shows universal effectiveness across
architectures (Llama-2 7B/13B) and modalities (CLIP), with successful
integration into diverse optimizers like AdamW and Lion. Source code will be
released on GitHub.
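The local-threshold mechanism can be sketched as follows; the clipping factor `lam` and the exact EMA update are illustrative assumptions, since the summary only states that per-parameter thresholds track an exponential moving average of gradient norms:

```python
import numpy as np

def adagc_clip(grads, ema_norms, beta=0.98, lam=1.05, eps=1e-8):
    """Per-parameter adaptive clipping sketch: clip each gradient to a
    multiple of the EMA of its own past norms, then update the EMA with
    the clipped norm so one spike cannot inflate future thresholds."""
    out = []
    for i, g in enumerate(grads):
        n = np.linalg.norm(g)
        if ema_norms[i] is None:                 # initialize on first step
            ema_norms[i] = n
        thresh = lam * ema_norms[i]
        if n > thresh:                           # rescale spiky gradients
            g = g * (thresh / (n + eps))
            n = thresh
        ema_norms[i] = beta * ema_norms[i] + (1 - beta) * n
        out.append(g)
    return out
```

Unlike a single global norm, each parameter tensor here carries its own threshold, which is what lets the method track decaying and heterogeneous gradient scales.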
|
2502.11037
|
Deep Incomplete Multi-view Learning via Cyclic Permutation of VAEs
|
cs.LG cs.AI cs.CV
|
Multi-View Representation Learning (MVRL) aims to derive a unified
representation from multi-view data by leveraging shared and complementary
information across views. However, when views are irregularly missing, the
incomplete data can lead to representations that lack sufficiency and
consistency. To address this, we propose Multi-View Permutation of Variational
Auto-Encoders (MVP), which excavates invariant relationships between views in
incomplete data. MVP establishes inter-view correspondences in the latent space
of Variational Auto-Encoders, enabling the inference of missing views and the
aggregation of more sufficient information. To derive a valid Evidence Lower
Bound (ELBO) for learning, we apply permutations to randomly reorder variables
for cross-view generation and then partition them by views to maintain
invariant meanings under permutations. Additionally, we enhance consistency by
introducing an informational prior with cyclic permutations of posteriors,
which turns the regularization term into a similarity measure across
distributions. We demonstrate the effectiveness of our approach on seven
diverse datasets with varying missing ratios, achieving superior performance in
multi-view clustering and generation tasks.
|
2502.11044
|
Detecting Cadastral Boundary from Satellite Images Using U-Net model
|
cs.CV cs.LG
|
Finding the cadastral boundaries of farmlands is a crucial concern for land
administration. Therefore, using deep learning methods to expedite and simplify
the extraction of cadastral boundaries from satellite and unmanned aerial
vehicle (UAV) images is critical. In this paper, we employ transfer learning to
train a U-Net model with a ResNet34 backbone to detect cadastral boundaries
through three-class semantic segmentation: "boundary", "field", and
"background". We evaluate the performance on two satellite images from
farmlands in Iran using "precision", "recall", and "F-score", achieving high
values of 88%, 75%, and 81%, respectively, which indicate promising results.
|
2502.11049
|
Faces of Fairness: Examining Bias in Facial Expression Recognition
Datasets and Models
|
cs.CV
|
Building AI systems, including Facial Expression Recognition (FER), involves
two critical aspects: data and model design. Both components significantly
influence bias and fairness in FER tasks. Issues related to bias and fairness
in FER datasets and models remain underexplored. This study investigates bias
sources in FER datasets and models. Four common FER datasets--AffectNet, ExpW,
Fer2013, and RAF-DB--are analyzed. The findings demonstrate that AffectNet and
ExpW exhibit high generalizability despite data imbalances. Additionally, this
research evaluates the bias and fairness of six deep models, including three
state-of-the-art convolutional neural network (CNN) models: MobileNet, ResNet,
XceptionNet, as well as three transformer-based models: ViT, CLIP, and
GPT-4o-mini. Experimental results reveal that while GPT-4o-mini and ViT achieve
the highest accuracy scores, they also display the highest levels of bias.
These findings underscore the urgent need for developing new methodologies to
mitigate bias and ensure fairness in datasets and models, particularly in
affective computing applications. See our implementation details at
https://github.com/MMHosseini/bias_in_FER.
|
2502.11051
|
MMUNLEARNER: Reformulating Multimodal Machine Unlearning in the Era of
Multimodal Large Language Models
|
cs.CL cs.AI
|
Recent progress in Machine Unlearning (MU) has introduced solutions for the
selective removal of private or sensitive information encoded within deep
neural networks. Nonetheless, MU for Multimodal Large Language Models (MLLMs)
remains in its nascent phase. Therefore, we propose to reformulate the task of
multimodal MU in the era of MLLMs, which aims to erase only the visual patterns
associated with a given entity while preserving the corresponding textual
knowledge encoded within the original parameters of the language model
backbone. Furthermore, we develop a novel geometry-constrained gradient descent
method MMUnlearner. It updates the weights of MLLMs with a weight saliency map
jointly restricted by the remaining concepts and textual knowledge during
unlearning, thereby preserving parameters essential for non-target knowledge.
Extensive experiments demonstrate that MMUnlearner surpasses baselines that
directly finetune MLLMs with VQA data through Gradient Ascent (GA) or
Negative Preference Optimization (NPO), across all evaluation dimensions. Our
code will be released upon acceptance.
|
2502.11053
|
Demystifying 5G Polar and LDPC Codes: A Comprehensive Review and
Foundations
|
cs.IT math.IT
|
This paper serves as a comprehensive guide for practitioners and scholars
aiming to understand the channel coding and decoding schemes integral to the 5G
NR standard, with a particular focus on LDPC and polar codes. We start by
explaining the design procedures that underlie these channel codes, offering
fundamental information from perspectives of both encoding and decoding. In
order to determine the present status of research in this area, we also provide
a thorough literature review. Notably, we add comprehensive, standard-specific
information to these foundational evaluations that is frequently difficult to
extract from technical specification documents.
|
2502.11054
|
Reasoning-Augmented Conversation for Multi-Turn Jailbreak Attacks on
Large Language Models
|
cs.CL cs.AI cs.CR
|
Multi-turn jailbreak attacks simulate real-world human interactions by
engaging large language models (LLMs) in iterative dialogues, exposing critical
safety vulnerabilities. However, existing methods often struggle to balance
semantic coherence with attack effectiveness, resulting in either benign
semantic drift or ineffective detection evasion. To address this challenge, we
propose Reasoning-Augmented Conversation (RACE), a novel multi-turn jailbreak
framework that reformulates harmful queries into benign reasoning tasks and
leverages LLMs' strong reasoning capabilities to compromise safety alignment.
Specifically, we introduce an attack state machine framework to systematically
model problem translation and iterative reasoning, ensuring coherent query
generation across multiple turns. Building on this framework, we design
gain-guided exploration, self-play, and rejection feedback modules to preserve
attack semantics, enhance effectiveness, and sustain reasoning-driven attack
progression. Extensive experiments on multiple LLMs demonstrate that RACE
achieves state-of-the-art attack effectiveness in complex conversational
scenarios, with attack success rates (ASRs) increasing by up to 96%. Notably,
our approach achieves ASRs of 82% and 92% against leading commercial models,
OpenAI o1 and DeepSeek R1, underscoring its potency. We release our code at
https://github.com/NY1024/RACE to facilitate further research in this critical
domain.
|
2502.11057
|
A Physics-Informed Machine Learning Framework for Safe and Optimal
Control of Autonomous Systems
|
cs.RO cs.AI cs.SY eess.SY
|
As autonomous systems become more ubiquitous in daily life, ensuring high
performance with guaranteed safety is crucial. However, safety and performance
could be competing objectives, which makes their co-optimization difficult.
Learning-based methods, such as Constrained Reinforcement Learning (CRL),
achieve strong performance but lack formal safety guarantees due to safety
being enforced as soft constraints, limiting their use in safety-critical
settings. Conversely, formal methods such as Hamilton-Jacobi (HJ) Reachability
Analysis and Control Barrier Functions (CBFs) provide rigorous safety
assurances but often neglect performance, resulting in overly conservative
controllers. To bridge this gap, we formulate the co-optimization of safety and
performance as a state-constrained optimal control problem, where performance
objectives are encoded via a cost function and safety requirements are imposed
as state constraints. We demonstrate that the resultant value function
satisfies a Hamilton-Jacobi-Bellman (HJB) equation, which we approximate
efficiently using a novel physics-informed machine learning framework. In
addition, we introduce a conformal prediction-based verification strategy to
quantify the learning errors, recovering a high-confidence safety value
function, along with a probabilistic error bound on performance degradation.
Through several case studies, we demonstrate the efficacy of the proposed
framework in enabling scalable learning of safe and performant controllers for
complex, high-dimensional autonomous systems.
|
2502.11059
|
ClimateLLM: Efficient Weather Forecasting via Frequency-Aware Large
Language Models
|
cs.LG cs.AI
|
Weather forecasting is crucial for public safety, disaster prevention and
mitigation, agricultural production, and energy management, with global
relevance. Although deep learning has significantly advanced weather
prediction, current methods face critical limitations: (i) they often struggle
to capture both dynamic temporal dependencies and short-term abrupt changes,
making extreme weather modeling difficult; (ii) they incur high computational
costs due to extensive training and resource requirements; (iii) they have
limited adaptability to multi-scale frequencies, leading to challenges when
separating global trends from local fluctuations. To address these issues, we
propose ClimateLLM, a foundation model for weather forecasting. It captures
spatiotemporal dependencies via a cross-temporal and cross-spatial
collaborative modeling framework that integrates Fourier-based frequency
decomposition with Large Language Models (LLMs) to strengthen spatial and
temporal modeling. Our framework uses a Mixture-of-Experts (MoE) mechanism that
adaptively processes different frequency components, enabling efficient
handling of both global signals and localized extreme events. In addition, we
introduce a cross-temporal and cross-spatial dynamic prompting mechanism,
allowing LLMs to incorporate meteorological patterns across multiple scales
effectively. Extensive experiments on real-world datasets show that ClimateLLM
outperforms state-of-the-art approaches in accuracy and efficiency, as a
scalable solution for global weather forecasting.
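The Fourier-based frequency decomposition at the core of this framework can be illustrated at a toy level. The sketch below is our own illustration, not the paper's code (the function name and the `cutoff` parameter are assumptions): a real-FFT mask splits a 1-D series into low- and high-frequency parts, the kind of separation that lets global trends and local fluctuations be routed to different experts.

```python
import numpy as np

def frequency_split(x, cutoff):
    """Split a 1-D signal into low- and high-frequency components
    using a real-FFT mask; the two parts sum back to the original."""
    spec = np.fft.rfft(x)
    low_spec = spec.copy()
    low_spec[cutoff:] = 0          # keep only the first `cutoff` frequency bins
    high_spec = spec - low_spec    # the remaining high-frequency bins
    n = len(x)
    return np.fft.irfft(low_spec, n=n), np.fft.irfft(high_spec, n=n)
```

Because the mask partitions the spectrum, the decomposition is exact: adding the two components reconstructs the input, so no information is lost before expert routing.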
|
2502.11061
|
D\'ej\`a Vu? Decoding Repeated Reading from Eye Movements
|
cs.CL
|
Be it your favorite novel, a newswire article, a cooking recipe or an
academic paper -- in many daily situations we read the same text more than
once. In this work, we ask whether it is possible to automatically determine
whether the reader has previously encountered a text based on their eye
movement patterns. We introduce two variants of this task and address them with
considerable success using both feature-based and neural models. We further
introduce a general strategy for enhancing these models with machine generated
simulations of eye movements from a cognitive model. Finally, we present an
analysis of model performance which on the one hand yields insights on the
information used by the models, and on the other hand leverages predictive
modeling as an analytic tool for better characterization of the role of memory
in repeated reading. Our work advances the understanding of the extent and
manner in which eye movements in reading capture memory effects from prior text
exposure, and paves the way for future applications that involve predictive
modeling of repeated reading.
|
2502.11062
|
Beyond Similarity: A Gradient-based Graph Method for Instruction Tuning
Data Selection
|
cs.CL
|
Large language models (LLMs) have shown great potential across various
industries due to their remarkable ability to generalize through instruction
tuning. However, the limited availability of domain-specific data significantly
hampers their performance on specialized tasks. While existing methods
primarily focus on selecting training data from general datasets that are
similar to the target domain, they often fail to consider the joint
distribution of instructions, resulting in inefficient learning and suboptimal
knowledge transfer. To address these challenges, we introduce G2IS
(Gradient-based Graph Instruction Selection), a novel method that constructs a
mixed gradient-based instruction graph to capture the joint distribution and
interdependencies between instructions. By accounting for the relationships
between instructions, G2IS improves domain adaptation efficiency. Additionally,
we propose a gradient walk algorithm to refine the data selection process,
enhancing both training effectiveness and efficiency. Our experiments
demonstrate that G2IS outperforms traditional methods across various domain
adaptation tasks, yielding significant performance gains, particularly in
complex, data-scarce scenarios. These results underscore the potential of G2IS
in advancing the development of large, domain-specific models.
|
2502.11066
|
CARMA: Enhanced Compositionality in LLMs via Advanced Regularisation and
Mutual Information Alignment
|
cs.CL
|
Large language models (LLMs) struggle with compositional generalisation,
limiting their ability to systematically combine learned components to
interpret novel inputs. While architectural modifications, fine-tuning, and
data augmentation improve compositionality, they often have limited
adaptability, face scalability constraints, or yield diminishing returns on
real data. To address this, we propose CARMA, an intervention that enhances the
stability and robustness of compositional reasoning in LLMs while preserving
fine-tuned performance. CARMA employs mutual information regularisation and
layer-wise stability constraints to mitigate feature fragmentation, ensuring
structured representations persist across and within layers. We evaluate CARMA
on inverse dictionary modelling and sentiment classification, measuring its
impact on semantic consistency, performance stability, and robustness to
lexical perturbations. Results show that CARMA reduces the variability
introduced by fine-tuning, stabilises token representations, and improves
compositional reasoning. While its effectiveness varies across architectures,
CARMA's key strength lies in reinforcing learned structures rather than
introducing new capabilities, making it a scalable auxiliary method. These
findings suggest that integrating CARMA with fine-tuning can improve
compositional generalisation while maintaining task-specific performance in
LLMs.
|
2502.11067
|
A Survey on Active Feature Acquisition Strategies
|
cs.LG
|
Active feature acquisition studies the challenge of making accurate
predictions while limiting the cost of collecting complete data. By selectively
acquiring only the most informative features for each instance, these
strategies enable efficient decision-making in scenarios where data collection
is expensive or time-consuming. This survey reviews recent progress in active
feature acquisition, discussing common problem formulations, practical
challenges, and key insights. We also highlight open issues and promising
directions for future research.
|
2502.11068
|
Accelerating Anchors via Specialization and Feature Transformation
|
cs.LG cs.AI
|
Anchors is a popular local model-agnostic explanation technique whose
applicability is limited by its computational inefficiency. To address this
limitation, we propose a pre-training-based approach to accelerate Anchors
without compromising the explanation quality. Our approach leverages the
iterative nature of Anchors' algorithm, which gradually refines an explanation
until it is precise enough for a given input, by supplying a general
explanation obtained through pre-training as Anchors' initial explanation.
Specifically, we develop a two-step rule transformation process: the horizontal
transformation adapts a pre-trained explanation to the current input by
replacing features, and the vertical transformation refines the general
explanation until it is precise enough for the input. We evaluate our method
across tabular, text, and image datasets, demonstrating that it significantly
reduces explanation generation time while maintaining fidelity and
interpretability, thereby enabling the practical adoption of Anchors in
time-sensitive applications.
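To make the two-step rule transformation concrete, here is a minimal toy sketch (our own illustration, not the authors' code; the dict-based rule representation, function names, and the `precision` callback are assumptions). A rule maps features to values; the horizontal step swaps in the current instance's values, and the vertical step adds features until a supplied precision estimate clears a threshold.

```python
def horizontal_transform(rule, instance):
    """Adapt a pre-trained rule to the current input by replacing each
    feature's value with the instance's own value for that feature."""
    return {feat: instance[feat] for feat in rule}

def vertical_transform(rule, instance, precision, threshold=0.95):
    """Refine a general rule by adding the instance's remaining features
    until the rule is precise enough (toy stand-in for Anchors' search)."""
    refined = dict(rule)
    for feat, value in instance.items():
        if precision(refined) >= threshold:
            break
        refined.setdefault(feat, value)
    return refined
```

In the real method, `precision` would be estimated by sampling perturbations, as in the Anchors algorithm; here it is any callable scoring a candidate rule.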
|
2502.11070
|
A Survey on Vulnerability Prioritization: Taxonomy, Metrics, and
Research Challenges
|
cs.CR cs.AI
|
In the highly interconnected digital landscape of today, safeguarding complex
infrastructures against cyber threats has become increasingly challenging due
to the exponential growth in the number and complexity of vulnerabilities.
Resource constraints necessitate effective vulnerability prioritization
strategies, focusing efforts on the most critical risks. This paper presents a
systematic literature review of 82 studies, introducing a novel taxonomy that
categorizes metrics into severity, exploitability, contextual factors,
predictive indicators, and aggregation methods. Our analysis reveals
significant gaps in existing approaches and challenges with multi-domain
applicability. By emphasizing the need for dynamic, context-aware metrics and
scalable solutions, we provide actionable insights to bridge the gap between
research and real-world applications. This work contributes to the field by
offering a comprehensive framework for evaluating vulnerability prioritization
methodologies and setting a research agenda to advance the state of practice.
|
2502.11071
|
Generalization of the Gibbs algorithm with high probability at low
temperatures
|
cs.LG stat.ML
|
The paper gives a bound on the generalization error of the Gibbs algorithm,
which recovers known data-independent bounds for the high temperature range and
extends to the low-temperature range, where generalization depends critically
on the data-dependent loss landscape. It is shown that, with high probability,
the generalization error of a single hypothesis drawn from the Gibbs posterior
decreases with the total prior volume of all hypotheses with similar or smaller
empirical error. This gives theoretical support to the belief in the benefit of
flat minima. The zero temperature limit is discussed and the bound is extended
to a class of similar stochastic algorithms.
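For readers unfamiliar with the setup, the Gibbs posterior referred to here has a standard form (a sketch of the standard definition, not the paper's exact notation): given a prior $\pi$, empirical loss $\hat{L}$, and inverse temperature $\beta$,

```latex
% Gibbs posterior over hypotheses h (standard form; notation assumed):
\[
  \frac{dG}{d\pi}(h)
  \;=\;
  \frac{e^{-\beta \hat{L}(h)}}
       {\mathbb{E}_{h' \sim \pi}\!\left[e^{-\beta \hat{L}(h')}\right]} .
\]
% The flat-minima reading of the bound: the relevant data-dependent
% quantity is the prior volume of the sublevel set
\[
  V(h) \;=\; \pi\bigl(\{\, h' : \hat{L}(h') \le \hat{L}(h) \,\}\bigr),
\]
% so hypotheses lying in wide, low-loss regions (large V) generalize
% better with high probability.
```

This matches the abstract's statement that generalization improves with the total prior volume of hypotheses of similar or smaller empirical error.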
|
2502.11073
|
Demystifying Hateful Content: Leveraging Large Multimodal Models for
Hateful Meme Detection with Explainable Decisions
|
cs.CL
|
Hateful meme detection presents a significant challenge as a multimodal task
due to the complexity of interpreting implicit hate messages and contextual
cues within memes. Previous approaches have fine-tuned pre-trained
vision-language models (PT-VLMs), leveraging the knowledge they gained during
pre-training and their attention mechanisms to understand meme content.
However, the reliance of these models on implicit knowledge and complex
attention mechanisms renders their decisions difficult to explain, which is
crucial for building trust in meme classification. In this paper, we introduce
IntMeme, a novel framework that leverages Large Multimodal Models (LMMs) for
hateful meme classification with explainable decisions. IntMeme addresses the
dual challenges of improving both accuracy and explainability in meme
moderation. The framework uses LMMs to generate human-like, interpretive
analyses of memes, providing deeper insights into multimodal content and
context. Additionally, it uses independent encoding modules for both memes and
their interpretations, which are then combined to enhance classification
performance. Our approach addresses the opacity and misclassification issues
associated with PT-VLMs, optimizing the use of LMMs for hateful meme detection.
We demonstrate the effectiveness of IntMeme through comprehensive experiments
across three datasets, showcasing its superiority over state-of-the-art models.
|
2502.11075
|
Exposing Numeracy Gaps: A Benchmark to Evaluate Fundamental Numerical
Abilities in Large Language Models
|
cs.CL cs.AI
|
Large Language Models (LLMs) have demonstrated impressive capabilities in
natural language processing tasks, such as text generation and semantic
understanding. However, their performance on numerical reasoning tasks, such as
basic arithmetic, numerical retrieval, and magnitude comparison, remains
surprisingly poor. This gap arises from their reliance on surface-level
statistical patterns rather than understanding numbers as continuous
magnitudes. Existing benchmarks primarily focus on either linguistic competence
or structured mathematical problem-solving, neglecting fundamental numerical
reasoning required in real-world scenarios. To bridge this gap, we propose
NumericBench, a comprehensive benchmark to evaluate six fundamental numerical
capabilities: number recognition, arithmetic operations, contextual retrieval,
comparison, summary, and logical reasoning. NumericBench includes datasets
ranging from synthetic number lists to crawled real-world data, addressing
challenges like long contexts, noise, and multi-step reasoning. Extensive
experiments on state-of-the-art LLMs, including GPT-4 and DeepSeek, reveal
persistent weaknesses in numerical reasoning, highlighting the urgent need to
improve numerically-aware language modeling. The benchmark is released in:
https://github.com/TreeAI-Lab/NumericBench.
|
2502.11078
|
DEEPER Insight into Your User: Directed Persona Refinement for Dynamic
Persona Modeling
|
cs.CL
|
To advance personalized applications such as recommendation systems and user
behavior prediction, recent research increasingly adopts large language models
(LLMs) for human-readable persona modeling. In dynamic real-world scenarios,
effective persona modeling necessitates leveraging streaming behavior data to
continually optimize user personas. However, existing methods, whether
regenerating personas or incrementally extending them with new behaviors, often
fail to achieve sustained improvements in persona quality or future behavior
prediction accuracy. To address this, we propose DEEPER, a novel approach for
dynamic persona modeling that enables continual persona optimization.
Specifically, we enhance the model's direction-search capability through an
iterative reinforcement learning framework, allowing it to automatically
identify effective update directions and optimize personas using discrepancies
between user behaviors and model predictions. Extensive experiments on dynamic
persona modeling involving 4800 users across 10 domains highlight the superior
persona optimization capabilities of DEEPER, delivering an impressive 32.2%
average reduction in user behavior prediction error over four update rounds,
outperforming the best baseline by a remarkable 22.92%.
|
2502.11079
|
Phantom: Subject-consistent video generation via cross-modal alignment
|
cs.CV cs.AI
|
The continuous development of foundational models for video generation is
evolving into various applications, with subject-consistent video generation
still in the exploratory stage. We refer to this as Subject-to-Video, which
extracts subject elements from reference images and generates
subject-consistent video through textual instructions. We believe that the
essence of subject-to-video lies in balancing the dual-modal prompts of text
and image, thereby deeply and simultaneously aligning both text and visual
content. To this end, we propose Phantom, a unified video generation framework
for both single and multi-subject references. Building on existing
text-to-video and image-to-video architectures, we redesign the joint
text-image injection model and drive it to learn cross-modal alignment via
text-image-video triplet data. In particular, we emphasize subject consistency
in human generation, covering existing ID-preserving video generation while
offering enhanced advantages. The project homepage is here
https://phantom-video.github.io/Phantom/.
|
2502.11083
|
Streamlining the Collaborative Chain of Models into A Single Forward
Pass in Generation-Based Tasks
|
cs.CL
|
In Retrieval-Augmented Generation (RAG) and agent-based frameworks, the
"Chain of Models" approach is widely used, where multiple specialized models
work sequentially on distinct sub-tasks. This approach is effective but
increases resource demands as each model must be deployed separately. Recent
advancements attempt to address this by applying prompt tuning, which allows a
shared base model to adapt to multiple tasks with minimal parameter changes.
However, a key challenge remains: intermediate outputs, passed between models
as plain text, require recomputation of hidden states (i.e., Key and Value (KV)
states in Transformers) during inference. In this paper, we introduce FTHSS, a
novel prompt-tuning method that enables models to share KV hidden states,
eliminating redundant forward passes and reducing KV cache storage. By
modifying input and attention masks during training, FTHSS allows models to
effectively utilize KV hidden states from prior models in both single- and
multi-round scenarios. Empirical results on four tasks show that FTHSS matches
the performance of traditional model chains while improving inference
efficiency.
|
2502.11084
|
Rewrite to Jailbreak: Discover Learnable and Transferable Implicit
Harmfulness Instruction
|
cs.CL
|
As Large Language Models (LLMs) are widely applied in various domains, the
safety of LLMs is increasingly attracting attention to avoid their powerful
capabilities being misused. Existing jailbreak methods create a forced
instruction-following scenario, or search, manually or automatically, for
adversarial prompts with prefix or suffix tokens that achieve a specific
representation. However, they suffer from low efficiency and exhibit explicit
jailbreak patterns, making them far from practical for mass attacks on
deployed LLMs. In this paper, we point out
that simply rewriting the original instruction can achieve a jailbreak, and we
find that this rewriting approach is learnable and transferable. We propose the
Rewrite to Jailbreak (R2J) approach, a transferable black-box jailbreak method
to attack LLMs by iteratively exploring the weakness of the LLMs and
automatically improving the attack strategy. The jailbreak is more efficient
and harder to identify, since no additional features are introduced. Extensive
experiments and analysis demonstrate the effectiveness of R2J, and we find that
the jailbreak is also transferable to multiple datasets and various types of
models with only a few queries. We hope our work motivates further
investigation of LLM safety.
|
2502.11085
|
Towards Data-Efficient Pretraining for Atomic Property Prediction
|
cs.LG cs.AI
|
This paper challenges the recent paradigm in atomic property prediction that
links progress to growing dataset sizes and computational resources. We show
that pretraining on a carefully selected, task-relevant dataset can match or
even surpass large-scale pretraining, while using as little as 1/24th of the
computational cost. We introduce the Chemical Similarity Index (CSI), a novel
metric for molecular graphs inspired by computer vision's Fr\'echet Inception
Distance, which quantifies the alignment between upstream pretraining
datasets and downstream tasks. By selecting the most relevant dataset with
minimal CSI distance, we show that models pretrained on a smaller, focused
dataset consistently outperform those pretrained on massive, mixed datasets
such as JMP, even when those larger datasets include the relevant dataset.
Counterintuitively, we also find that indiscriminately adding more data can
degrade model performance when the additional data poorly aligns with the task
at hand. Our findings highlight that quality often outperforms quantity in
pretraining for atomic property prediction.
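The Fréchet-style construction that CSI takes inspiration from can be sketched as follows (a simplified illustration under a diagonal-covariance assumption; the function name is ours, and the paper's actual CSI differs): fit a Gaussian to embeddings of each dataset and compare the two fits.

```python
import numpy as np

def frechet_distance_diag(feats_a, feats_b):
    """Frechet distance between Gaussian fits of two feature sets,
    assuming diagonal covariances:
        ||mu_a - mu_b||^2 + sum(va + vb - 2*sqrt(va*vb))."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    va, vb = feats_a.var(axis=0), feats_b.var(axis=0)
    mean_term = ((mu_a - mu_b) ** 2).sum()
    cov_term = (va + vb - 2.0 * np.sqrt(va * vb)).sum()
    return float(mean_term + cov_term)
```

A smaller distance means better alignment between the two feature distributions, which is the sense in which an upstream dataset with minimal CSI distance is "most relevant" to the downstream task.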
|
2502.11089
|
Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse
Attention
|
cs.CL cs.AI cs.LG
|
Long-context modeling is crucial for next-generation language models, yet the
high computational cost of standard attention mechanisms poses significant
challenges. Sparse attention offers a promising direction for
improving efficiency while maintaining model capabilities. We present NSA, a
Natively trainable Sparse Attention mechanism that integrates algorithmic
innovations with hardware-aligned optimizations to achieve efficient
long-context modeling. NSA employs a dynamic hierarchical sparse strategy,
combining coarse-grained token compression with fine-grained token selection to
preserve both global context awareness and local precision. Our approach
advances sparse attention design with two key innovations: (1) We achieve
substantial speedups through arithmetic intensity-balanced algorithm design,
with implementation optimizations for modern hardware. (2) We enable end-to-end
training, reducing pretraining computation without sacrificing model
performance. As shown in Figure 1, the model pretrained with
NSA maintains or exceeds Full Attention models across general benchmarks,
long-context tasks, and instruction-based reasoning. Meanwhile, NSA achieves
substantial speedups over Full Attention on 64k-length sequences across
decoding, forward propagation, and backward propagation, validating its
efficiency throughout the model lifecycle.
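The coarse-to-fine idea can be illustrated with a toy, single-query NumPy sketch (our own illustration, not the NSA kernel; mean-pooled block summaries and top-k block selection are simplifying assumptions): block summaries are scored first, and exact attention is computed only over tokens in the selected blocks.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def hierarchical_sparse_attention(q, k, v, block=4, top_blocks=2):
    """Toy single-query sparse attention: score coarse block summaries,
    keep the top-scoring blocks, then attend only over their tokens."""
    n, d = k.shape
    n_blocks = n // block
    # coarse stage: mean-pool keys into one summary per block
    k_blocks = k[: n_blocks * block].reshape(n_blocks, block, d).mean(axis=1)
    keep = np.argsort(k_blocks @ q)[-top_blocks:]
    # fine stage: gather tokens from the selected blocks only
    idx = np.concatenate([np.arange(b * block, (b + 1) * block) for b in keep])
    weights = softmax((k[idx] @ q) / np.sqrt(d))
    return weights @ v[idx], idx
```

With `top_blocks` equal to the number of blocks, this reduces to full attention; shrinking it trades exactness for a proportional cut in attention cost, which is the lever NSA exploits at scale.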
|
2502.11090
|
SafeDialBench: A Fine-Grained Safety Benchmark for Large Language Models
in Multi-Turn Dialogues with Diverse Jailbreak Attacks
|
cs.CL cs.AI
|
With the rapid advancement of Large Language Models (LLMs), the safety of
LLMs has been a critical concern requiring precise assessment. Current
benchmarks primarily concentrate on single-turn dialogues or a single jailbreak
attack method to assess safety. Additionally, these benchmarks have not
taken into account LLMs' capability to identify and handle unsafe
information in detail. To address these issues, we propose a fine-grained
benchmark SafeDialBench for evaluating the safety of LLMs across various
jailbreak attacks in multi-turn dialogues. Specifically, we design a two-tier
hierarchical safety taxonomy that considers 6 safety dimensions and generates
more than 4000 multi-turn dialogues in both Chinese and English under 22
dialogue scenarios. We employ 7 jailbreak attack strategies, such as reference
attack and purpose reverse, to enhance the dataset quality for dialogue
generation. Notably, we construct an innovative framework for assessing LLMs,
measuring their capabilities in detecting and handling unsafe information and
maintaining consistency when facing jailbreak attacks. Experimental results
across 17 LLMs reveal that Yi-34B-Chat and GLM4-9B-Chat demonstrate superior
safety performance, while Llama3.1-8B-Instruct and o3-mini exhibit safety
vulnerabilities.
|
2502.11093
|
Text-promptable Propagation for Referring Medical Image Sequence
Segmentation
|
cs.CV
|
Medical image sequences, generated by both 2D video-based examinations and 3D
imaging techniques, consist of sequential frames or slices that capture the
same anatomical entities (e.g., organs or lesions) from multiple perspectives.
Existing segmentation studies typically process medical images using either 2D
or 3D methods in isolation, often overlooking the inherent consistencies among
these images. Additionally, interactive segmentation, while highly beneficial
in clinical scenarios, faces the challenge of integrating text prompts
effectively across multiple modalities. To address these issues, we introduce,
for the first time, the task of Referring Medical Image Sequence Segmentation,
which aims to segment the referred anatomical entities corresponding to
medical text prompts. We develop a strong baseline model, Text-Promptable
Propagation (TPP), designed to exploit the intrinsic relationships among
sequential images and their associated textual descriptions. TPP supports the
segmentation of arbitrary objects of interest based on cross-modal prompt
fusion. Carefully designed medical prompts are fused and employed as queries to
guide image sequence segmentation through triple-propagation. We curate a large
and comprehensive benchmark covering 4 modalities and 20 different organs and
lesions. Experimental results consistently demonstrate the superior performance
of our approach compared to previous methods across these datasets.
|
2502.11094
|
SyncSpeech: Low-Latency and Efficient Dual-Stream Text-to-Speech based
on Temporal Masked Transformer
|
cs.SD cs.AI
|
This paper presents a dual-stream text-to-speech (TTS) model, SyncSpeech,
capable of receiving streaming text input from upstream models while
simultaneously generating streaming speech, facilitating seamless interaction
with large language models. SyncSpeech has the following advantages: Low
latency, as it begins generating streaming speech upon receiving the second
text token; high efficiency, as it decodes all speech tokens corresponding to
each arriving text token in one step. To achieve this, we propose a temporal
masked transformer as the backbone of SyncSpeech, combined with token-level
duration prediction to predict speech tokens and the duration for the next
step. Additionally, we design a two-stage training strategy to improve training
efficiency and the quality of generated speech. We evaluated the SyncSpeech on
both English and Mandarin datasets. Compared to the recent dual-stream TTS
models, SyncSpeech significantly reduces the first packet delay of speech
tokens and accelerates the real-time factor. Moreover, with the same data
scale, SyncSpeech achieves performance comparable to that of traditional
autoregressive-based TTS models in terms of both speech quality and robustness.
Speech samples are available at
https://SyncSpeech.github.io/.
|
2502.11095
|
A Survey of Large Language Models in Psychotherapy: Current Landscape
and Future Directions
|
cs.CL
|
Mental health remains a critical global challenge, with increasing demand for
accessible, effective interventions. Large language models (LLMs) offer
promising solutions in psychotherapy by enhancing the assessment, diagnosis,
and treatment of mental health conditions through dynamic, context-aware
interactions. This survey provides a comprehensive overview of the current
landscape of LLM applications in psychotherapy, highlighting the roles of LLMs
in symptom detection, severity estimation, cognitive assessment, and
therapeutic interventions. We present a novel conceptual taxonomy to organize
the psychotherapy process into three core components: assessment, diagnosis,
and treatment, and examine the challenges and advancements in each area. The
survey also addresses key research gaps, including linguistic biases, limited
disorder coverage, and underrepresented therapeutic models. Finally, we discuss
future directions to integrate LLMs into a holistic, end-to-end psychotherapy
framework, addressing the evolving nature of mental health conditions and
fostering more inclusive, personalized care.
|
2502.11096
|
Mixture of Tunable Experts -- Behavior Modification of DeepSeek-R1 at
Inference Time
|
cs.AI cs.CL
|
We present the Mixture-of-Tunable-Experts (MoTE), a method that extends the
Mixture-of-Experts architecture of Large Language Models (LLMs). Without
additional training, MoTE enables meaningful and focused behavior changes in
LLMs on-the-fly during inference time. By analyzing the digital LLM brain of
DeepSeek-R1 using a technique we dub 'functional Token Resonance Imaging'
(fTRI) -- inspired by fMRI and using prompts designed to elicit specific
behavior (e.g., 'What happened {time}{place}?') -- we empirically identify
distinctive experts associated with behaviors like refusal responses. Using
MoTE, we are able to intervene in and control such specific behavior. We switched
off the top 10 most refusal-relevant experts (0.07% of R1's 14,848 routed
experts), achieving a 52% refusal reduction on sensitive reference prompts
without performance degradation on MT-Bench. Random expert deactivation
resulted in smaller behavioral shifts with increased noise, whereas forced
expert activation led to significantly higher refusal rates. Our approach
shares similarities with sparse autoencoders (SAEs) in terms of explainability
and steerability. Unlike SAEs, MoTE does not require large training efforts, as
within MoEs with a vast number of experts, specialization already emerged
naturally during pretraining. Our findings suggest that significant functional
mechanisms in Mixture-of-Experts architectures can at least partially be
localized in a small number of specific experts, rather than being distributed
throughout the model's weights. Expert subgroups can be tuned to trigger
significant behavior variations, providing insights into the inner workings of
LLMs.
|
2502.11098
|
Talk Structurally, Act Hierarchically: A Collaborative Framework for LLM
Multi-Agent Systems
|
cs.AI cs.LG cs.MA
|
Recent advancements in LLM-based multi-agent (LLM-MA) systems have shown
promise, yet significant challenges remain in managing communication and
refinement when agents collaborate on complex tasks. In this paper, we propose
\textit{Talk Structurally, Act Hierarchically (TalkHier)}, a novel framework
that introduces a structured communication protocol for context-rich exchanges
and a hierarchical refinement system to address issues such as incorrect
outputs, falsehoods, and biases. \textit{TalkHier} surpasses various types of
SoTA, including an inference-scaling model (OpenAI-o1), open-source multi-agent
models (e.g., AgentVerse), and majority-voting strategies over current LLM and
single-agent baselines (e.g., ReAct, GPT4o), across diverse tasks, including
open-domain question answering, domain-specific selective questioning, and
practical advertisement text generation. These results highlight its potential
to set a new standard for LLM-MA systems, paving the way for more effective,
adaptable, and collaborative multi-agent frameworks. The code is available at
https://github.com/sony/talkhier.
|
2502.11100
|
Towards Achieving Concept Completeness for Unsupervised Textual Concept
Bottleneck Models
|
cs.CL
|
Textual Concept Bottleneck Models (TCBMs) are interpretable-by-design models
for text classification that predict a set of salient concepts before making
the final prediction. This paper proposes the Complete Textual Concept
Bottleneck Model (CT-CBM), a novel TCBM generator building concept labels in a
fully unsupervised manner using a small language model, eliminating both the need for
predefined human labeled concepts and LLM annotations. CT-CBM iteratively
targets and adds important concepts in the bottleneck layer to create a
complete concept basis and addresses downstream classification leakage through
a parallel residual connection. CT-CBM achieves good results against
competitors, offering a promising solution to enhance interpretability of NLP
classifiers without sacrificing performance.
|
2502.11101
|
CacheFocus: Dynamic Cache Re-Positioning for Efficient
Retrieval-Augmented Generation
|
cs.CL cs.AI
|
Large Language Models (LLMs) excel across a variety of language tasks yet are
constrained by limited input lengths and high computational costs. Existing
approaches\textemdash such as relative positional encodings (e.g., RoPE, ALiBi)
and sliding window mechanisms\textemdash partially alleviate these issues but
often require additional training or suffer from performance degradation with
longer inputs. In this paper, we introduce \textbf{\textit{CacheFocus}}, a
method that enhances length normalization and reduces inference latency without
any further training. Our approach leverages query-independent, offline caching
to efficiently reuse a Context KV Cache Store. We address the amplification of
abnormal token distributions problem by re-positioning cached keys and
introducing Layer-Adaptive Cache Pruning to discard low-relevance caches during
pre-filling. Additionally, our Adaptive Positional Allocation Strategy
dynamically reassigns cache positions to maximize the use of the available
positional encoding range. Experiments on the Natural Questions and TriviaQA
datasets demonstrate that CacheFocus outperforms alternative methods even when
inputs exceed the $4$K limit of the \texttt{LLaMA-2} model, emphasizing its
practical effectiveness for long-context LLMs. Moreover, even with the larger
maximum input length of \texttt{Qwen2}, CacheFocus maintains consistent
performance as the number of documents increases, effectively managing
long-text generation without degradation.
|
2502.11102
|
OptMATH: A Scalable Bidirectional Data Synthesis Framework for
Optimization Modeling
|
cs.AI cs.LG
|
Despite the rapid development of large language models (LLMs), a fundamental
challenge persists: the lack of high-quality optimization modeling datasets
hampers LLMs' robust modeling of practical optimization problems from natural
language descriptions (NL). This data scarcity also contributes to the
generalization difficulties experienced by learning-based methods. To address
these challenges, we propose a scalable framework for synthesizing a
high-quality dataset, named OptMATH. Starting from curated seed data with
mathematical formulations (MF), this framework automatically generates problem
data (PD) with controllable complexity. Then, a back-translation step is
employed to obtain NL. To verify the correspondence between the NL and the PD,
a forward modeling step followed by rejection sampling is used. The accepted
pairs constitute the training part of OptMATH. Then a collection of rejected
pairs is identified and further filtered. This collection serves as a new
benchmark for optimization modeling, containing difficult instances whose
lengths are much longer than those of NL4OPT and MAMO. Through extensive
experiments, we demonstrate that models of various sizes (0.5B-32B parameters)
trained on OptMATH achieve superior results on multiple modeling benchmarks,
thereby validating the effectiveness and scalability of our approach.
|
2502.11104
|
Enhancing Cross-Tokenizer Knowledge Distillation with Contextual
Dynamical Mapping
|
cs.CL
|
Knowledge Distillation (KD) has emerged as a prominent technique for model
compression. However, conventional KD approaches primarily focus on homogeneous
architectures with identical tokenizers, constraining their applicability in
cross-architecture scenarios. As for the cross-tokenizer KD, the differences in
the tokenizers give rise to two fundamental challenges: (1) sequence
misalignment caused by divergent tokenization strategies, and (2) mismatched
vocabulary size and composition. While existing probability-matching methods
attempt to address these issues, their efficacy remains limited due to
suboptimal alignment in both the sequence and vocabulary aspects. To overcome
these limitations, we propose Contextual Dynamic Mapping (CDM), a novel
cross-tokenizer distillation framework that employs contextual information to
enhance sequence alignment precision and dynamically improves vocabulary
mapping. We evaluated the effectiveness of our approach across five advanced
and widely-used model families (i.e., LLaMA3, Phi3, Gemma2, OPT, and Qwen2),
which were configured into three distinct teacher-student pairs. Our method
shows significant advantages over existing cross-tokenizer distillation
baselines across diverse benchmarks, including instruction-following, code
generation and math. Notably, our analysis reveals that combining conventional
same-tokenizer distillation and cross-tokenizer distillation through CDM yields
further performance improvements. The code is available at
https://github.com/pppa2019/ContexualDynamicMapping
|
2502.11105
|
Graceful forgetting: Memory as a process
|
q-bio.NC cs.IR cs.LG
|
A rational theory of memory is proposed to explain how we can accommodate
unbounded sensory input within bounded storage space. Memory is stored as
statistics, organized into complex structures that are constantly summarized
and compressed to make room for new input. This process, driven by space
constraints, is guided by heuristics that optimize the memory for future needs.
Sensory input is rapidly encoded as simple statistics that are more slowly
elaborated into more abstract constructs. This theory differs from previous
accounts of memory by (a) its reliance on statistics, (b) its use of heuristics
to guide the choice of statistics, and (c) the emphasis on memory as a process
that is intensive, complex, and expensive. The theory is intended as an aid to
make sense of our extensive knowledge of memory, and bring us closer to an
understanding of memory in functional and mechanistic terms.
|
2502.11107
|
Revisiting Weak-to-Strong Generalization in Theory and Practice: Reverse
KL vs. Forward KL
|
cs.LG cs.AI
|
As large language models advance toward superhuman performance, ensuring
their alignment with human values and abilities grows increasingly complex.
Weak-to-strong generalization offers a promising approach by leveraging
predictions from weaker models to guide stronger systems, but its effectiveness
could be constrained by the inherent noise and inaccuracies in these weak
predictions. To address this, we propose a theoretically grounded approach that
replaces forward KL divergence (whose mass-covering behavior risks overfitting
to imperfect weak signals) with reverse KL divergence. Reverse KL divergence's
zero-forcing effect prioritizes high-confidence predictions, effectively
mitigating the influence of unreliable weak supervision. Theoretically, we
extend existing bounds and derive tighter lower bounds for both forward and
reverse KL divergence, establishing that reverse KL achieves at least
comparable guarantees to forward KL. Notably, when a sufficiently pre-trained
strong model is fine-tuned on the last layer, reverse KL uniquely guarantees
that it outperforms its weak supervisor by the magnitude of their
disagreement, a guarantee that forward KL cannot provide. Empirically, we
demonstrate that reverse KL and reverse cross-entropy enable strong models to
consistently outperform those trained with forward KL and standard
cross-entropy across most settings, highlighting the practical advantages of
these reverse losses.
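An illustrative numerical sketch of the asymmetry this abstract relies on (the distributions are invented for illustration, not taken from the paper): forward KL grows when the student misses teacher mass, while reverse KL grows when the student places mass where the teacher has little.

```python
import math

def kl(p, q):
    """KL(p || q) for discrete distributions; terms with p_i == 0 contribute 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A peaked weak-teacher distribution and a student that concentrates
# even more mass on the teacher's mode (hypothetical numbers).
teacher = [0.90, 0.05, 0.05]
student = [0.98, 0.01, 0.01]

forward = kl(teacher, student)  # mass-covering direction: KL(teacher || student)
reverse = kl(student, teacher)  # zero-forcing direction: KL(student || teacher)
```

Minimizing the reverse direction pulls the student toward the teacher's high-confidence modes instead of spreading mass over its noisy tails, which is the mechanism the abstract describes.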
|
2502.11108
|
Knowledge Graph-Driven Retrieval-Augmented Generation: Integrating
Deepseek-R1 with Weaviate for Advanced Chatbot Applications
|
cs.CL cs.AI
|
Large language models (LLMs) have significantly advanced the field of natural
language generation. However, they frequently generate unverified outputs,
which compromises their reliability in critical applications. In this study, we
propose an innovative framework that combines structured biomedical knowledge
with LLMs through a retrieval-augmented generation technique. Our system
develops a thorough knowledge graph by identifying and refining causal
relationships and named entities from medical abstracts related to age-related
macular degeneration (AMD). Using a vector-based retrieval process and a
locally deployed language model, our framework produces responses that are both
contextually relevant and verifiable, with direct references to clinical
evidence. Experimental results show that this method notably decreases
hallucinations, enhances factual precision, and improves the clarity of
generated responses, providing a robust solution for advanced biomedical
chatbot applications.
|
2502.11109
|
Explosive Growth in Large-Scale Collaboration Networks
|
cs.SI physics.soc-ph
|
We analyse the evolution of two large collaboration networks: the Microsoft
Academic Graph (1800-2020) and Internet Movie Database (1900-2020), comprising
$2.72 \times 10^8$ and $1.88 \times 10^6$ nodes respectively. The networks show
super-linear growth, with node counts following power laws $N(t) \propto
t^{\alpha}$ where $\alpha = 2.3$ increasing to $3.1$ after 1950 (MAG) and
$\alpha = 1.8$ (IMDb). Node and edge processes maintain stable but noisy
timescale ratios ($\tau_N/\tau_E \approx 2.8 \pm 0.3$ MAG, $2.3 \pm 0.2$ IMDb).
The probability of waiting a time $t$ between successive collaborations was
found to be scale-free, $P(t) \propto t^{-\gamma}$, with indices evolving from
$\gamma \approx 2.3$ to $1.6$ (MAG) and $2.6$ to $2.1$ (IMDb). Academic
collaboration sizes increased from $1.2$ to $5.8$ authors per paper, while
entertainment collaborations remained more stable ($3.2$ to $4.5$ actors).
These observations indicate that current network models might be enhanced by
considering accelerating growth, coupled timescales, and environmental
influence, while explaining stable local properties.
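A hedged illustration of the super-linear growth law $N(t) \propto t^{\alpha}$ quoted above, using synthetic data rather than the MAG or IMDb graphs: the exponent can be recovered by least-squares regression in log-log space.

```python
import math
import random

# Synthetic node counts following N(t) = c * t**alpha with multiplicative
# noise, standing in for the growth curves summarized in the abstract.
random.seed(0)
alpha_true, c = 2.3, 5.0
ts = list(range(10, 200))
ns = [c * t ** alpha_true * random.uniform(0.95, 1.05) for t in ts]

# Least-squares fit of log N = log c + alpha * log t recovers the exponent.
xs = [math.log(t) for t in ts]
ys = [math.log(n) for n in ns]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
alpha_hat = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
    / sum((x - mx) ** 2 for x in xs)
```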
|
2502.11112
|
Parametric Analysis of Network Evolution Processes
|
cs.SI physics.soc-ph
|
We present a comprehensive parametric analysis of node and edge lifetimes
processes in two large-scale collaboration networks: the Microsoft Academic
Graph (1800-2020) and Internet Movie Database (1900-2020). Node and edge
lifetimes (career and collaboration durations) follow Weibull distributions
with consistent shape parameters ($k \approx 0.2$ for academic, $k \approx 0.5$
for entertainment careers) across centuries of evolution. These distributions
persist despite dramatic changes in network size and structure. Edge processes
show domain-specific evolution: academic collaboration durations increase over
time (power-law index $1.6$ to $2.3$) while entertainment collaborations
maintain more stable patterns (index $2.6$ to $2.1$). These findings indicate
that while career longevity exhibits consistent patterns, collaboration
dynamics appear to be influenced by domain-specific factors. The results
provide new constraints for models of social network evolution, requiring
incorporation of both universal lifetime distributions and domain-specific
growth dynamics.
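A minimal sketch of the Weibull lifetime model described above, using synthetic samples rather than the actual career data; the shape parameters $k \approx 0.2$ and $k \approx 0.5$ are taken from the abstract, while the scale and sample sizes are illustrative.

```python
import math
import random

def weibull_sample(k, lam, rng):
    """Inverse-CDF draw from Weibull(shape=k, scale=lam): F(t) = 1 - exp(-(t/lam)**k)."""
    u = rng.random()
    return lam * (-math.log(1.0 - u)) ** (1.0 / k)

rng = random.Random(42)
# Shape k < 1 means a decreasing hazard rate: many short lifetimes plus a
# heavy tail. The shapes below come from the abstract; the scale is arbitrary.
academic = [weibull_sample(0.2, 1.0, rng) for _ in range(20000)]
entertainment = [weibull_sample(0.5, 1.0, rng) for _ in range(20000)]

# The median of Weibull(k, lam) is lam * (ln 2)**(1/k); smaller k pushes the
# median far below the scale while stretching the tail.
median_academic = sorted(academic)[len(academic) // 2]
median_entertainment = sorted(entertainment)[len(entertainment) // 2]
```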
|