| id | title | categories | abstract |
|---|---|---|---|
2502.13514
|
Shall Your Data Strategy Work? Perform a Swift Study
|
cs.CL
|
This work presents a swift method to assess the efficacy of particular types
of instruction-tuning data, utilizing just a handful of probe examples and
eliminating the need for model retraining. This method employs the idea of
gradient-based data influence estimation, analyzing the gradient projections of
probe examples from the chosen strategy onto evaluation examples to assess its
advantages. Building upon this method, we conducted three swift studies to
investigate the potential of Chain-of-thought (CoT) data, query clarification
data, and response evaluation data in enhancing model generalization.
Subsequently, we embarked on a validation study to corroborate the findings of
these swift studies. In this validation study, we developed training datasets
tailored to each studied strategy and compared model performance with and
without the use of these datasets. The results of the validation study aligned
with the findings of the swift studies, validating the efficacy of our proposed
method.
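The gradient-projection idea can be illustrated with a toy sketch (made-up gradient vectors, not the paper's implementation): a strategy is scored by how well its probe-example gradients project onto evaluation-example gradients.

```python
import numpy as np

def influence_score(probe_grads, eval_grads):
    """Score a data strategy by projecting probe-example gradients
    onto evaluation-example gradients (average cosine similarity)."""
    scores = []
    for g_p in probe_grads:
        for g_e in eval_grads:
            cos = g_p @ g_e / (np.linalg.norm(g_p) * np.linalg.norm(g_e))
            scores.append(cos)
    return float(np.mean(scores))

# Toy gradients: probe examples aligned with eval examples -> positive influence.
probe = [np.array([1.0, 0.5]), np.array([0.8, 0.4])]
evals = [np.array([2.0, 1.0])]
print(influence_score(probe, evals))  # close to 1.0 (aligned directions)
```

A negative score would indicate that training on the probed strategy likely hurts the evaluation examples.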
|
2502.13516
|
SPPD: Self-training with Process Preference Learning Using Dynamic Value
Margin
|
cs.AI
|
Recently, enhancing the numerical and logical reasoning capability of Large
Language Models (LLMs) has emerged as a research hotspot. Existing methods face
several limitations: inference-phase techniques (e.g., Chain of Thought) rely
on prompt selection and pretrained knowledge; sentence-level Supervised
Fine-Tuning (SFT) and Direct Preference Optimization (DPO) struggle with
step-wise mathematical correctness and depend on distillation from stronger
models or human annotations; and Reinforcement Learning (RL) approaches incur
high GPU memory costs and unstable training. To address these limitations, we
propose a \textbf{S}elf-training framework integrating \textbf{P}rocess
\textbf{P}reference learning using \textbf{D}ynamic value margin (SPPD). SPPD
leverages a process-based Markov Decision Process (MDP) and the Bellman
optimality equation to derive a \textbf{dynamic value margin} for step-level
preference optimization, and employs tree-based self-sampling of model
responses \textbf{without any distillation} from other models. Furthermore, we
theoretically prove that SPPD is \textbf{equivalent to on-policy policy
gradient methods} under reward constraints. Experiments on 7B-scale models
demonstrate superior performance across in-domain and out-of-domain
mathematical benchmarks. We open-source our code at
\href{https://anonymous.4open.science/r/SSDPO-D-DCDD}{https://anonymous.4open.science/r/SPPD-DCDD}.
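As a rough illustration of step-level preference optimization with a dynamic margin, here is a simplified stand-in (the function, the value-gap margin, and all numbers below are illustrative assumptions, not SPPD's Bellman-derived formulation):

```python
import math

def step_dpo_loss(logp_win, logp_lose, v_win, v_lose, beta=0.1):
    """DPO-style step-level preference loss with a dynamic value margin.
    Here the margin is simply the value gap between the preferred and
    dispreferred steps: a larger gap enforces a stricter preference."""
    margin = v_win - v_lose
    logits = beta * (logp_win - logp_lose) - margin
    return -math.log(1 / (1 + math.exp(-logits)))  # -log sigmoid(logits)
```

With a fixed log-probability gap, a larger value gap yields a larger loss, pushing the policy harder on steps where the value estimates disagree more.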
|
2502.13519
|
MILE: Model-based Intervention Learning
|
cs.RO cs.AI cs.LG
|
Imitation learning techniques have been shown to be highly effective in
real-world control scenarios, such as robotics. However, these approaches not
only suffer from compounding error issues but also require human experts to
provide complete trajectories. Although there exist interactive methods where
an expert oversees the robot and intervenes if needed, these extensions usually
only utilize the data collected during intervention periods and ignore the
feedback signal hidden in non-intervention timesteps. In this work, we create a
model to formulate how the interventions occur in such cases, and show that it
is possible to learn a policy with just a handful of expert interventions. Our
key insight is that it is possible to get crucial information about the quality
of the current state and the optimality of the chosen action from expert
feedback, regardless of the presence or the absence of intervention. We
evaluate our method on various discrete and continuous simulation environments,
a real-world robotic manipulation task, as well as a human subject study.
Videos and the code can be found at https://liralab.usc.edu/mile .
|
2502.13520
|
A Large and Balanced Corpus for Fine-grained Arabic Readability
Assessment
|
cs.CL
|
This paper introduces the Balanced Arabic Readability Evaluation Corpus
(BAREC), a large-scale, fine-grained dataset for Arabic readability assessment.
BAREC consists of 68,182 sentences spanning 1+ million words, carefully curated
to cover 19 readability levels, from kindergarten to postgraduate
comprehension. The corpus balances genre diversity, topical coverage, and
target audiences, offering a comprehensive resource for evaluating Arabic text
complexity. The corpus was fully manually annotated by a large team of
annotators. The average pairwise inter-annotator agreement, measured by
Quadratic Weighted Kappa, is 81.3%, indicating substantial agreement. Beyond
presenting the corpus, we benchmark automatic readability
assessment across different granularity levels, comparing a range of
techniques. Our results highlight the challenges and opportunities in Arabic
readability modeling, demonstrating competitive performance across various
methods. To support research and education, we will make BAREC openly
available, along with detailed annotation guidelines and benchmark results.
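The reported agreement metric can be reproduced with a small generic implementation of Quadratic Weighted Kappa (not the BAREC annotation tooling):

```python
import numpy as np

def quadratic_weighted_kappa(a, b, n_levels=19):
    """Quadratic Weighted Kappa between two annotators' level assignments
    (integer levels 0..n_levels-1)."""
    a, b = np.asarray(a), np.asarray(b)
    O = np.zeros((n_levels, n_levels))  # observed confusion matrix
    for i, j in zip(a, b):
        O[i, j] += 1
    hist_a = O.sum(axis=1)
    hist_b = O.sum(axis=0)
    E = np.outer(hist_a, hist_b) / len(a)  # expected under independence
    idx = np.arange(n_levels)
    W = (idx[:, None] - idx[None, :]) ** 2 / (n_levels - 1) ** 2  # quadratic weights
    return 1 - (W * O).sum() / (W * E).sum()

print(quadratic_weighted_kappa([0, 1, 2, 3], [0, 1, 2, 3], n_levels=4))  # 1.0 (perfect)
```

Disagreements are penalized by the squared distance between assigned levels, so confusing level 3 with level 15 costs far more than confusing it with level 4.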
|
2502.13522
|
Enhancing Machine Learning Potentials through Transfer Learning across
Chemical Elements
|
cs.LG cond-mat.mtrl-sci
|
Machine Learning Potentials (MLPs) can enable simulations of ab initio
accuracy at orders of magnitude lower computational cost. However, their
effectiveness hinges on the availability of considerable datasets to ensure
robust generalization across chemical space and thermodynamic conditions. The
generation of such datasets can be labor-intensive, highlighting the need for
innovative methods to train MLPs in data-scarce scenarios. Here, we introduce
transfer learning of potential energy surfaces between chemically similar
elements. Specifically, we leverage the trained MLP for silicon to initialize
and expedite the training of an MLP for germanium. Utilizing classical force
field and ab initio datasets, we demonstrate that transfer learning surpasses
traditional training from scratch in force prediction, leading to more stable
simulations and improved temperature transferability. These advantages become
even more pronounced as the training dataset size decreases. An analysis of
out-of-target properties shows that transfer learning yields mostly beneficial
but occasionally adverse effects. Our findings demonstrate that transfer learning
across chemical elements is a promising technique for developing accurate and
numerically stable MLPs, particularly in a data-scarce regime.
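The transfer idea can be sketched on a toy problem, with a linear "force model" standing in for an MLP (all names and data here are illustrative, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_linear(X, y, w_init, steps=200, lr=0.1):
    """Plain gradient descent on a linear model (stand-in for MLP training)."""
    w = w_init.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# "Silicon" task: plenty of data; "germanium" task: similar physics, scarce data.
X_si = rng.normal(size=(200, 3)); w_true_si = np.array([1.0, -2.0, 0.5])
X_ge = rng.normal(size=(5, 3));   w_true_ge = np.array([1.1, -1.9, 0.6])
y_si = X_si @ w_true_si
y_ge = X_ge @ w_true_ge

w_si = fit_linear(X_si, y_si, np.zeros(3))                  # pre-train on Si
w_scratch  = fit_linear(X_ge, y_ge, np.zeros(3), steps=5)   # Ge from scratch
w_transfer = fit_linear(X_ge, y_ge, w_si,        steps=5)   # Ge init from Si model
```

Because the two "elements" share similar parameters, initializing from the silicon model lands much closer to the germanium solution than training from scratch on the same five samples.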
|
2502.13524
|
MobileViM: A Light-weight and Dimension-independent Vision Mamba for 3D
Medical Image Analysis
|
cs.CV cs.AI cs.LG cs.NI
|
Efficient evaluation of three-dimensional (3D) medical images is crucial for
diagnostic and therapeutic practices in healthcare. Recent years have seen a
substantial uptake in applying deep learning and computer vision to analyse and
interpret medical images. Traditional approaches, such as convolutional neural
networks (CNNs) and vision transformers (ViTs), face significant computational
challenges, prompting the need for architectural advancements. Recent efforts
have led to the introduction of novel architectures like the ``Mamba'' model as
alternative solutions to traditional CNNs or ViTs. The Mamba model excels in
the linear processing of one-dimensional data with low computational demands.
However, Mamba's potential for 3D medical image analysis remains underexplored
and could face significant computational challenges as the dimension increases.
This manuscript presents MobileViM, a streamlined architecture for efficient
segmentation of 3D medical images. In the MobileViM network, we introduce a new
dimension-independent mechanism and a dual-direction traversal approach,
integrated into a vision-Mamba-based framework. MobileViM also features a
cross-scale bridging technique to improve efficiency and accuracy across
various medical imaging modalities. With these enhancements, MobileViM achieves
segmentation speeds exceeding 90 frames per second (FPS) on a single graphics
processing unit (i.e., NVIDIA RTX 4090). This performance is over 24 FPS faster
than the state-of-the-art deep learning models for processing 3D images with
the same computational resources. In addition, experimental evaluations
demonstrate that MobileViM delivers superior performance, with Dice similarity
scores reaching 92.72%, 86.69%, 80.46%, and 77.43% for PENGWIN, BraTS2024,
ATLAS, and Toothfairy2 datasets, respectively, which significantly surpasses
existing models.
|
2502.13525
|
AS-GCL: Asymmetric Spectral Augmentation on Graph Contrastive Learning
|
cs.LG
|
Graph Contrastive Learning (GCL) has emerged as the foremost approach for
self-supervised learning on graph-structured data. GCL reduces reliance on
labeled data by learning robust representations from various augmented views.
However, existing GCL methods typically depend on consistent stochastic
augmentations, which overlook their impact on the intrinsic structure of the
spectral domain, thereby limiting the model's ability to generalize
effectively. To address these limitations, we propose a novel paradigm called
AS-GCL that incorporates asymmetric spectral augmentation for graph contrastive
learning. A typical GCL framework consists of three key components: graph data
augmentation, view encoding, and contrastive loss. Our method introduces
significant enhancements to each of these components. Specifically, for data
augmentation, we apply spectral-based augmentation to minimize spectral
variations, strengthen structural invariance, and reduce noise. With respect to
encoding, we employ parameter-sharing encoders with distinct diffusion
operators to generate diverse, noise-resistant graph views. For contrastive
loss, we introduce an upper-bound loss function that promotes generalization by
maintaining a balanced distribution of intra- and inter-class distances. To our
knowledge, we are the first to encode augmentation views of the spectral domain
using asymmetric encoders. Extensive experiments on eight benchmark datasets
across various node-level tasks demonstrate the advantages of the proposed
method.
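A minimal sketch of spectral-based augmentation, assuming the simple scheme of shrinking high-frequency eigenvalues of the normalized Laplacian (AS-GCL's actual augmentation may differ):

```python
import numpy as np

def spectral_view(A, shrink=0.1):
    """Build an augmented view by perturbing the graph spectrum: eigendecompose
    the normalized Laplacian, shrink the high-frequency eigenvalues (noise),
    and map back to a dense 'soft' normalized adjacency."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt  # normalized Laplacian
    vals, vecs = np.linalg.eigh(L)
    vals_aug = np.where(vals > 1.0, vals * (1 - shrink), vals)  # dampen high freqs
    L_aug = vecs @ np.diag(vals_aug) @ vecs.T
    return np.eye(len(A)) - L_aug

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
view = spectral_view(A)
```

With `shrink=0` the view reduces to the plain normalized adjacency; increasing it suppresses only the noisy high-frequency structure while keeping low-frequency (community) structure intact.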
|
2502.13527
|
Exploiting Prefix-Tree in Structured Output Interfaces for Enhancing
Jailbreak Attacking
|
cs.CR cs.AI
|
The rise of Large Language Models (LLMs) has led to significant applications
but also introduced serious security threats, particularly from jailbreak
attacks that manipulate output generation. These attacks utilize prompt
engineering and logit manipulation to steer models toward harmful content,
prompting LLM providers to implement filtering and safety alignment strategies.
We investigate LLMs' safety mechanisms and their recent applications, revealing
a new threat model targeting structured output interfaces, which enable
attackers to manipulate the inner logit during LLM generation, requiring only
API access permissions. To demonstrate this threat model, we introduce a
black-box attack framework called AttackPrefixTree (APT). APT exploits
structured output interfaces to dynamically construct attack patterns. By
leveraging prefixes of models' safety refusal responses and latent harmful
outputs, APT effectively bypasses safety measures. Experiments on benchmark
datasets indicate that this approach achieves a higher attack success rate than
existing methods. This work highlights the urgent need for LLM providers to
enhance security protocols to address vulnerabilities arising from the
interaction between safety patterns and structured outputs.
|
2502.13530
|
Breaking the Clusters: Uniformity-Optimization for Text-Based Sequential
Recommendation
|
cs.IR
|
Traditional sequential recommendation (SR) methods heavily rely on explicit
item IDs to capture user preferences over time. This reliance introduces
critical limitations in cold-start scenarios and domain transfer tasks, where
unseen items and new contexts often lack established ID mappings. To overcome
these limitations, recent studies have shifted towards leveraging text-only
information for recommendation, thereby improving model generalization and
adaptability across domains. Although promising, text-based SR faces unique
difficulties: items' text descriptions often share semantic similarities that
lead to clustered item representations, compromising their uniformity, a
property essential for promoting diversity and enhancing generalization in
recommendation systems. In this paper, we explore a novel framework to improve
the uniformity of item representations in text-based SR. Our analysis reveals
that items within a sequence exhibit marked semantic similarity, meaning they
are closer in representation than items overall, and that this effect is more
pronounced for less popular items, which form tighter clusters compared to
their more popular counterparts. Based on these findings, we propose UniT, a
framework that employs three pairwise item sampling strategies: Unified General
Sampling Strategy, Sequence-Driven Sampling Strategy, and Popularity-Driven
Sampling Strategy. Each strategy applies varying degrees of repulsion to
selectively adjust the distances between item pairs, thereby refining
representation uniformity while considering both sequence context and item
popularity. Extensive experiments on multiple real-world datasets demonstrate
that our proposed approach outperforms state-of-the-art models, validating the
effectiveness of UniT in enhancing both representation uniformity and
recommendation accuracy. The source code is available at
https://github.com/ccwwhhh/Model-Rec.
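The uniformity property being optimized can be quantified with the standard Gaussian-potential uniformity measure (a common metric from the representation-learning literature; UniT's exact losses are not reproduced here):

```python
import numpy as np

def uniformity(embs, t=2.0):
    """Uniformity of L2-normalized embeddings: log of the mean Gaussian
    potential over all distinct pairs. Lower (more negative) = more uniform."""
    embs = embs / np.linalg.norm(embs, axis=1, keepdims=True)
    sq_dists = ((embs[:, None, :] - embs[None, :, :]) ** 2).sum(-1)
    mask = ~np.eye(len(embs), dtype=bool)  # exclude self-pairs
    return float(np.log(np.exp(-t * sq_dists[mask]).mean()))

clustered = np.array([[1.0, 0.01], [1.0, 0.02], [1.0, 0.03]])  # near-duplicate texts
spread    = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])    # well-separated items
```

Semantically similar item descriptions collapse into clusters and score near 0, while a repulsion objective drives the score down toward the spread case.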
|
2502.13531
|
Quotients of skew polynomial rings: new constructions of division
algebras and MRD codes
|
math.CO cs.IT math.IT math.RA
|
We achieve new results on skew polynomial rings and their quotients,
including the first explicit example of a skew polynomial ring where the ratio
of the degree of a skew polynomial to the degree of its bound is not extremal.
These methods lead to the construction of new (not necessarily associative)
division algebras and maximum rank distance (MRD) codes over both finite and
infinite division rings. In particular, we construct new non-associative
division algebras whose right nucleus is a central simple algebra having degree
greater than 1. Over finite fields, we obtain new semifields and MRD codes for
infinitely many choices of parameters. These families extend and contain many
of the best previously known constructions.
|
2502.13533
|
Train Small, Infer Large: Memory-Efficient LoRA Training for Large
Language Models
|
cs.LG cs.AI cs.CL
|
Large Language Models (LLMs) have significantly advanced natural language
processing with exceptional task generalization capabilities. Low-Rank
Adaptation (LoRA) offers a cost-effective fine-tuning solution, freezing the original
model parameters and training only lightweight, low-rank adapter matrices.
However, the memory footprint of LoRA is largely dominated by the original
model parameters. To mitigate this, we propose LoRAM, a memory-efficient LoRA
training scheme founded on the intuition that many neurons in
over-parameterized LLMs have low training utility but are essential for
inference. LoRAM presents a unique twist: it trains on a pruned (small) model
to obtain pruned low-rank matrices, which are then recovered and utilized with
the original (large) model for inference. Additionally, minimal-cost continual
pre-training, performed by the model publishers in advance, aligns the
knowledge discrepancy between pruned and original models. Our extensive
experiments demonstrate the efficacy of LoRAM across various pruning strategies
and downstream tasks. For a model with 70 billion parameters, LoRAM enables
training on a GPU with only 20 GB of HBM, replacing an A100-80GB GPU for LoRA
training and 15 GPUs for full fine-tuning. Specifically, QLoRAM implemented by
structured pruning combined with 4-bit quantization, for LLaMA-3.1-70B
(LLaMA-2-70B), reduces the parameter storage cost that dominates the memory
usage in low-rank matrix training by 15.81$\times$ (16.95$\times$), while
achieving notable performance gains over both the original LLaMA-3.1-70B
(LLaMA-2-70B) and LoRA-trained LLaMA-3.1-8B (LLaMA-2-13B).
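The prune-train-recover twist can be sketched as follows (toy dimensions; `recover` is a hypothetical helper illustrating zero-filled recovery of pruned low-rank matrices, not LoRAM's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)
d_full, d_pruned, rank = 8, 5, 2
keep = np.array([0, 1, 2, 4, 6])  # indices of neurons kept after pruning

# Low-rank factors are trained on the pruned (small) model ...
B_small = rng.normal(size=(d_pruned, rank)) * 0.01
A_small = rng.normal(size=(rank, d_pruned)) * 0.01

def recover(B_small, A_small, keep, d_full):
    """Scatter the pruned low-rank factors back to the original (large)
    shape, zero-filling the positions of pruned neurons."""
    B = np.zeros((d_full, B_small.shape[1])); B[keep] = B_small
    A = np.zeros((A_small.shape[0], d_full)); A[:, keep] = A_small
    return B, A

# ... then recovered and applied on top of the frozen large model at inference.
B, A = recover(B_small, A_small, keep, d_full)
delta_W = B @ A
```

Training memory scales with the pruned width, while inference still uses the full-width frozen weights plus the recovered low-rank update.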
|
2502.13534
|
Solving the Encoding Bottleneck: Of the HHL Algorithm, By the HHL
Algorithm
|
quant-ph cs.AI cs.LG
|
The Harrow-Hassidim-Lloyd (HHL) algorithm offers an exponential speedup for
solving the quantum linear-system problem, but some of the conditions for this
speedup can be hard to meet. One such difficulty is the encoding bottleneck, i.e.,
the efficient preparation of the initial quantum state. To prepare an arbitrary
$N$-dimensional state exactly, existing state-preparation approaches generally
require a runtime of $O(N)$, which will ruin the speedup of the HHL algorithm.
Here we show that the states can be prepared approximately with a runtime of
$O(\mathrm{poly}(\log N))$ by employing a slightly modified version of the HHL algorithm
itself. Thus, applying this approach to prepare the initial state of the
original HHL algorithm can preserve the exponential speedup advantage. It can
also serve as a standalone solution for other applications demanding rapid
state preparation.
|
2502.13539
|
Bursting Filter Bubble: Enhancing Serendipity Recommendations with
Aligned Large Language Models
|
cs.IR
|
Recommender systems (RSs) often suffer from the feedback loop phenomenon,
e.g., RSs are trained on data biased by their recommendations. This leads to
the filter bubble effect that reinforces homogeneous content and reduces user
satisfaction. To this end, serendipity recommendations, which offer unexpected
yet relevant items, are proposed. Recently, large language models (LLMs) have
shown potential in serendipity prediction due to their extensive world
knowledge and reasoning capabilities. However, they still face challenges in
aligning serendipity judgments with human assessments, handling long user
behavior sequences, and meeting the latency requirements of industrial RSs. To
address these issues, we propose SERAL (Serendipity Recommendations with
Aligned Large Language Models), a framework comprising three stages: (1)
Cognition Profile Generation to compress user behavior into multi-level
profiles; (2) SerenGPT Alignment to align serendipity judgments with human
preferences using enriched training data; and (3) Nearline Adaptation to
integrate SerenGPT into industrial RSs pipelines efficiently. Online
experiments demonstrate that SERAL improves exposure ratio (PVR), clicks, and
transactions of serendipitous items by 5.7%, 29.56%, and 27.6%, enhancing user
experience without much impact on overall revenue. SERAL is now fully deployed
in the "Guess What You Like" section of the Taobao App homepage.
|
2502.13542
|
Activation-aware Probe-Query: Effective Key-Value Retrieval for
Long-Context LLMs Inference
|
cs.CL cs.AI
|
Recent advances in large language models (LLMs) have showcased exceptional
performance in long-context tasks, while facing significant inference
efficiency challenges with limited GPU memory. Existing solutions first
proposed the sliding-window approach to accumulate a set of historical
\textbf{key-value} (KV) pairs for reuse; later improvements selectively
retain subsets of them at each step. However, due to the sparse attention
distribution across a long context, it is hard to identify and recall relevant
KV pairs, as the attention is distracted by massive candidate pairs.
Additionally, we found it promising to select representative tokens as
probe-Query in each sliding window to effectively represent the entire context,
which is an approach overlooked by existing methods. Thus, we propose
\textbf{ActQKV}, a training-free, \textbf{Act}ivation-aware approach that
dynamically determines probe-\textbf{Q}uery and leverages it to retrieve the
relevant \textbf{KV} pairs for inference. Specifically, ActQKV monitors a
token-level indicator, Activation Bias, within each context window, enabling
the proper construction of the probe-Query for retrieval at the pre-filling stage. To
accurately recall the relevant KV pairs and minimize the irrelevant ones, we
design a dynamic KV cut-off mechanism guided by information density across
layers at the decoding stage. Experiments on the Long-Bench and $\infty$
Benchmarks demonstrate its state-of-the-art performance with competitive
inference quality and resource efficiency.
|
2502.13544
|
From Sub-Ability Diagnosis to Human-Aligned Generation: Bridging the Gap
for Text Length Control via MARKERGEN
|
cs.CL cs.AI
|
Despite the rapid progress of large language models (LLMs), their
length-controllable text generation (LCTG) ability remains below expectations,
posing a major limitation for practical applications. Existing methods mainly
focus on end-to-end training to reinforce adherence to length constraints.
However, the lack of decomposition and targeted enhancement of LCTG
sub-abilities restricts further progress. To bridge this gap, we conduct a
bottom-up decomposition of LCTG sub-abilities with human patterns as reference
and perform a detailed error analysis. On this basis, we propose MarkerGen, a
simple-yet-effective plug-and-play approach that: (1) mitigates fundamental LLM
deficiencies via external tool integration; (2) conducts explicit length
modeling with dynamically inserted markers; (3) employs a three-stage generation
scheme to better align length constraints while maintaining content
quality. Comprehensive experiments demonstrate that MarkerGen significantly
improves LCTG across various settings, exhibiting outstanding effectiveness and
generalizability.
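Explicit length modeling with inserted markers might look like this minimal sketch (the `[i]` marker format and the helper names are assumptions for illustration, not MarkerGen's actual scheme):

```python
def insert_markers(words, every=5):
    """Insert explicit word-count markers every `every` words so the model
    (or a post-check) can track the running length during generation."""
    out = []
    for i, w in enumerate(words, 1):
        out.append(w)
        if i % every == 0:
            out.append(f"[{i}]")
    return out

def strip_markers(tokens):
    """Remove markers to recover the user-visible text."""
    return [t for t in tokens if not (t.startswith("[") and t.endswith("]"))]

text = "one two three four five six seven".split()
marked = insert_markers(text)
# marked == ['one', 'two', 'three', 'four', 'five', '[5]', 'six', 'seven']
```

The markers make the current length an explicit token-level signal rather than something the model must count implicitly, and they are stripped before the text is returned.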
|
2502.13548
|
Detecting Linguistic Bias in Government Documents Using Large Language
Models
|
cs.CL
|
This paper addresses the critical need for detecting bias in government
documents, an underexplored area with significant implications for governance.
Existing methodologies often overlook the unique context and far-reaching
impacts of governmental documents, potentially obscuring embedded biases that
shape public policy and citizen-government interactions. To bridge this gap, we
introduce the Dutch Government Data for Bias Detection (DGDB), a dataset
sourced from the Dutch House of Representatives and annotated for bias by
experts. We fine-tune several BERT-based models on this dataset and compare
their performance with that of generative language models. Additionally, we
conduct a comprehensive error analysis that includes explanations of the
models' predictions. Our findings demonstrate that fine-tuned models achieve
strong performance and significantly outperform generative language models,
indicating the effectiveness of DGDB for bias detection. This work underscores
the importance of labeled datasets for bias detection in various languages and
contributes to more equitable governance practices.
|
2502.13550
|
STaR-SQL: Self-Taught Reasoner for Text-to-SQL
|
cs.CL
|
Generating step-by-step "chain-of-thought" rationales has proven effective
for improving the performance of large language models on complex reasoning
tasks. However, applying such techniques to structured tasks, such as
text-to-SQL, remains largely unexplored. In this paper, we introduce
Self-Taught Reasoner for text-to-SQL (STaR-SQL), a novel approach that reframes
SQL query generation as a reasoning-driven process. Our method prompts the LLM
to produce detailed reasoning steps for SQL queries and fine-tunes it on
rationales that lead to correct outcomes. Unlike traditional methods, STaR-SQL
dedicates additional test-time computation to reasoning, thereby positioning
LLMs as spontaneous reasoners rather than mere prompt-based agents. To further
scale the inference process, we incorporate an outcome-supervised reward model
(ORM) as a verifier, which enhances SQL query accuracy. Experimental results on
the challenging Spider benchmark demonstrate that STaR-SQL significantly
improves text-to-SQL performance, achieving an execution accuracy of 86.6%.
This surpasses a few-shot baseline by 31.6% and a baseline fine-tuned to
predict answers directly by 18.0%. Additionally, STaR-SQL outperforms
agent-like prompting methods that leverage more powerful yet closed-source
models such as GPT-4. These findings underscore the potential of
reasoning-augmented training for structured tasks and open the door to
extending self-improving reasoning models to text-to-SQL generation and beyond.
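The STaR-style filtering step, keeping only rationales whose SQL executes to the gold answer, can be sketched with sqlite3 (toy database and candidates; the helper name is illustrative):

```python
import sqlite3

def correct_rationales(db, candidates):
    """Keep only (rationale, sql) pairs whose execution result matches the
    gold answer -- the filter used to build self-training data."""
    kept = []
    for rationale, sql, gold in candidates:
        try:
            result = db.execute(sql).fetchall()
        except sqlite3.Error:  # malformed SQL never enters the training set
            continue
        if result == gold:
            kept.append((rationale, sql))
    return kept

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, age INT)")
db.executemany("INSERT INTO users VALUES (?, ?)", [("a", 20), ("b", 30)])
candidates = [
    ("count rows of users", "SELECT COUNT(*) FROM users", [(2,)]),  # correct
    ("buggy query",         "SELECT COUNT(*) FROM user",  [(2,)]),  # bad table name
    ("wrong logic",         "SELECT MAX(age) FROM users", [(2,)]),  # wrong answer
]
print(correct_rationales(db, candidates))  # keeps only the first pair
```

Fine-tuning on the surviving rationales, then repeating the generate-filter loop, is what makes the reasoner "self-taught".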
|
2502.13555
|
Democratizing Large Language Model-Based Graph Data Augmentation via
Latent Knowledge Graphs
|
cs.LG cs.AI
|
Data augmentation is necessary for graph representation learning due to the
scarcity and noise present in graph data. Most of the existing augmentation
methods overlook the context information inherited from the dataset as they
rely solely on the graph structure for augmentation. Despite the success of
some large language model-based (LLM) graph learning methods, they are mostly
white-box approaches that require access to the weights or latent features of
open-access LLMs. This makes them difficult to democratize, as many existing
LLMs are closed-source for commercial reasons. To
overcome these limitations, we propose a black-box context-driven graph data
augmentation approach, with the guidance of LLMs -- DemoGraph. Leveraging the
text prompt as context-related information, we task the LLM with generating
knowledge graphs (KGs), which allow us to capture the structural interactions
from the text outputs. We then design a dynamic merging schema to
stochastically integrate the LLM-generated KGs into the original graph during
training. To control the sparsity of the augmented graph, we further devise a
granularity-aware prompting strategy and an instruction fine-tuning module,
which seamlessly generates text prompts according to different granularity
levels of the dataset. Extensive experiments on various graph learning tasks
validate the effectiveness of our method over existing graph data augmentation
methods. Notably, our approach excels in scenarios involving electronic health
records (EHRs), which validates its maximal utilization of contextual
knowledge, leading to enhanced predictive performance and interpretability.
|
2502.13559
|
Implementation of an IEEE 802.11ax-Based Maritime Mesh Network in the
Red Sea
|
eess.SY cs.SY
|
In this article, we explore the limitations of satellite phones in meeting
the communication needs of fishermen operating in the Red Sea. We propose
AX-MMN, a maritime mesh network based on the IEEE 802.11ax standard, to address
these shortcomings of satellite phones and outline AX-MMN's system
architecture. To validate the performance of AX-MMN, we conduct extensive
real-world experiments, demonstrating its potential to enhance maritime
connectivity significantly. We also discuss the broader benefits of AX-MMN,
particularly for fishermen in underdeveloped East African countries bordering
the Red Sea, emphasizing its capacity to improve their communication
capabilities and overall quality of life.
|
2502.13562
|
Are Large Language Models In-Context Graph Learners?
|
cs.LG cs.AI
|
Large language models (LLMs) have demonstrated remarkable in-context
reasoning capabilities across a wide range of tasks, particularly with
unstructured inputs such as language or images. However, LLMs struggle to
handle structured data, such as graphs, due to their lack of understanding of
non-Euclidean structures. As a result, without additional fine-tuning, their
performance significantly lags behind that of graph neural networks (GNNs) in
graph learning tasks. In this paper, we show that learning on graph data can be
conceptualized as a retrieval-augmented generation (RAG) process, where
specific instances (e.g., nodes or edges) act as queries, and the graph itself
serves as the retrieved context. Building on this insight, we propose a series
of RAG frameworks to enhance the in-context learning capabilities of LLMs for
graph learning tasks. Comprehensive evaluations demonstrate that our proposed
RAG frameworks significantly improve LLM performance on graph-based tasks,
particularly in scenarios where a pretrained LLM must be used without
modification or accessed via an API.
|
2502.13564
|
PRIV-QA: Privacy-Preserving Question Answering for Cloud Large Language
Models
|
cs.CL
|
The rapid development of large language models (LLMs) is redefining the
landscape of human-computer interaction, and their integration into various
user-service applications is becoming increasingly prevalent. However,
transmitting user data to cloud-based LLMs presents significant risks of data
breaches and unauthorized access to personal identification information. In
this paper, we propose a privacy preservation pipeline for protecting privacy
and sensitive information during interactions between users and LLMs in
practical LLM usage scenarios. We construct SensitiveQA, the first
privacy-focused open-ended question-answering dataset. It comprises 57k interactions in Chinese
and English, encompassing a diverse range of user-sensitive information within
the conversations. Our proposed solution employs a multi-stage strategy aimed
at preemptively securing user information while simultaneously preserving the
response quality of cloud-based LLMs. Experimental validation underscores our
method's efficacy in balancing privacy protection with maintaining robust
interaction quality. The code and dataset are available at
https://github.com/ligw1998/PRIV-QA.
|
2502.13566
|
Extracting Social Connections from Finnish Karelian Refugee Interviews
Using LLMs
|
cs.CL
|
We performed a zero-shot information extraction study on a historical
collection of 89,339 brief Finnish-language interviews of refugee families
relocated post-WWII from Finnish Eastern Karelia. Our research objective is
two-fold. First, we aim to extract social organizations and hobbies from the
free text of the interviews, separately for each family member. These can act
as a proxy variable indicating the degree of social integration of refugees in
their new environment. Second, we aim to evaluate several alternative ways to
approach this task, comparing a number of generative models and a supervised
learning approach, to gain a broader insight into the relative merits of these
different approaches and their applicability in similar studies.
We find that the best generative model (GPT-4) is roughly on par with human
performance, at an F-score of 88.8%. Interestingly, the best open generative
model (Llama-3-70B-Instruct) reaches almost the same performance, at 87.7%
F-score, demonstrating that open models are becoming a viable alternative for
some practical tasks even on non-English data. Additionally, we test a
supervised learning alternative, where we fine-tune a Finnish BERT model
(FinBERT) using GPT-4-generated training data. With this method, we achieve an
F-score of 84.1% with just 6K interviews, rising to 86.3% with 30K
interviews. Such an approach would be particularly appealing in cases where the
computational resources are limited, or there is a substantial mass of data to
process.
|
2502.13568
|
LSR-Adapt: Ultra-Efficient Parameter Tuning with Matrix Low Separation
Rank Kernel Adaptation
|
cs.LG cs.CL
|
Imposing an effective structural assumption on neural network weight matrices
has been the major paradigm for designing Parameter-Efficient Fine-Tuning
(PEFT) systems for adapting modern large pre-trained models to various
downstream tasks. However, low-rank-based adaptation has become increasingly
challenging due to the sheer scale of modern large language models. In this
paper, we propose an effective kernelization to further reduce the number of
parameters required for adaptation tasks. Specifically, from the classical idea
in numerical analysis regarding matrix Low-Separation-Rank (LSR)
representations, we develop a kernel using this representation for the low rank
adapter matrices of the linear layers from large networks, named the Low
Separation Rank Adaptation (LSR-Adapt) kernel. With this ultra-efficient kernel
representation of the low-rank adapter matrices, we achieve state-of-the-art
accuracy with almost half the number of parameters of conventional low-rank
methods. This
structural assumption also opens the door to further GPU-side optimizations due
to the highly parallelizable nature of Kronecker computations.
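To illustrate why such structural assumptions are parameter-efficient, here is a generic Kronecker-factored low-separation-rank sketch. The dimensions and separation rank are arbitrary choices for illustration; this is not the paper's exact LSR-Adapt kernel.

```python
import numpy as np

# A 64x64 adapter update expressed as a sum of s Kronecker products of
# 8x8 factors, instead of a dense matrix: sum_i kron(A_i, B_i).
d, ds, s = 64, 8, 2   # full dim, factor dim (d = ds * ds), separation rank

rng = np.random.default_rng(0)
factors = [(rng.standard_normal((ds, ds)), rng.standard_normal((ds, ds)))
           for _ in range(s)]

# Materialize the implied d x d update (in practice one would multiply
# against it implicitly, exploiting the Kronecker structure).
delta_w = sum(np.kron(a_i, b_i) for a_i, b_i in factors)

dense_params = d * d          # 4096 parameters for the dense update
lsr_params = s * 2 * ds * ds  # 256 parameters for the factored form
```

The Kronecker structure is also what enables the GPU-side optimizations mentioned above: a matrix-vector product against kron(A, B) can be computed with two small matrix multiplications rather than one large one.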
|
2502.13569
|
Model Evolution Framework with Genetic Algorithm for Multi-Task
Reinforcement Learning
|
cs.AI
|
Multi-task reinforcement learning employs a single policy to complete various
tasks, aiming to develop an agent with generalizability across different
scenarios. Given the shared characteristics of tasks, the agent's learning
efficiency can be enhanced through parameter sharing. Existing approaches
typically use a routing network to generate specific routes for each task and
reconstruct a set of modules into diverse models to complete multiple tasks
simultaneously. However, due to the inherent difference between tasks, it is
crucial to allocate resources based on task difficulty, which is constrained by
the model's structure. To this end, we propose a Model Evolution framework with
Genetic Algorithm (MEGA), which enables the model to evolve during training
according to the difficulty of the tasks. When the current model is
insufficient for certain tasks, the framework will automatically incorporate
additional modules, enhancing the model's capabilities. Moreover, to adapt to
our model evolution framework, we introduce a genotype module-level model,
using binary sequences as genotype policies for model reconstruction, while
leveraging a non-gradient genetic algorithm to optimize these genotype
policies. Unlike routing networks with fixed output dimensions, our approach
allows for the dynamic adjustment of the genotype policy length, enabling it to
accommodate models with a varying number of modules. We conducted experiments
on various robotic manipulation tasks in the Meta-World benchmark, where MEGA's
state-of-the-art performance demonstrates the effectiveness of the framework. We
will release our source code to the public.
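The genotype idea can be sketched generically: binary sequences act as module on/off masks, optimized by a non-gradient genetic algorithm whose genome length may grow over time. The fitness function below is a toy stand-in, not a reinforcement-learning return.

```python
import random

def evolve(fitness, genome_len=6, pop_size=8, generations=30, seed=0):
    # Non-gradient GA over binary genotypes (module on/off masks).
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]            # elitist selection
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(len(child))] ^= 1   # bit-flip mutation
            if rng.random() < 0.2:
                child.append(rng.randint(0, 1))     # grow: add a module slot
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy stand-in for task returns: reward matching a target activation
# pattern, with a small penalty for model size (genome length).
target = [1, 1, 1, 0, 0, 0]
def fitness(genome):
    matches = sum(1 for i, bit in enumerate(genome)
                  if i < len(target) and bit == target[i])
    return matches - 0.1 * len(genome)

best = evolve(fitness)
```

The variable-length genome is the point of contrast with routing networks, whose output dimension is fixed at architecture time.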
|
2502.13570
|
An Efficient Permutation-Based Kernel Two-Sample Test
|
stat.ML cs.LG math.ST stat.ME stat.TH
|
Two-sample hypothesis testing, i.e., determining whether two sets of data are
drawn from the same distribution, is a fundamental problem in statistics and
machine learning with broad scientific applications. In the context of nonparametric
testing, maximum mean discrepancy (MMD) has gained popularity as a test
statistic due to its flexibility and strong theoretical foundations. However,
its use in large-scale scenarios is plagued by high computational costs. In
this work, we use a Nystr\"om approximation of the MMD to design a
computationally efficient and practical testing algorithm while preserving
statistical guarantees. Our main result is a finite-sample bound on the power
of the proposed test for distributions that are sufficiently separated with
respect to the MMD. The derived separation rate matches the known minimax
optimal rate in this setting. We support our findings with a series of
numerical experiments, emphasizing realistic scientific data.
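The quadratic-cost baseline that the paper accelerates can be sketched as a plain (non-Nyström) permutation test using the biased MMD estimate with a Gaussian kernel. This is an illustrative sketch, not the paper's algorithm; the bandwidth and permutation count are arbitrary choices.

```python
import numpy as np

def gaussian_kernel(a, b, bandwidth=1.0):
    # RBF kernel matrix from pairwise squared distances.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth**2))

def mmd2(x, y, bandwidth=1.0):
    # Biased (V-statistic) estimate of the squared MMD.
    kxx = gaussian_kernel(x, x, bandwidth).mean()
    kyy = gaussian_kernel(y, y, bandwidth).mean()
    kxy = gaussian_kernel(x, y, bandwidth).mean()
    return kxx + kyy - 2 * kxy

def permutation_test(x, y, n_perm=200, bandwidth=1.0, seed=0):
    # Pool both samples, reshuffle the labels, and compare the observed
    # statistic against the permutation null distribution.
    rng = np.random.default_rng(seed)
    observed = mmd2(x, y, bandwidth)
    pooled = np.vstack([x, y])
    n = len(x)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        if mmd2(pooled[idx[:n]], pooled[idx[n:]], bandwidth) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)  # permutation p-value
```

Each MMD evaluation touches O(n^2) kernel entries, and the permutation loop multiplies that cost; this is exactly the bottleneck a Nyström approximation of the MMD targets.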
|
2502.13571
|
Diffusion Model Agnostic Social Influence Maximization in Hyperbolic
Space
|
cs.SI cs.LG
|
The Influence Maximization (IM) problem aims to find a small set of
influential users to maximize their influence spread in a social network.
Traditional methods rely on fixed diffusion models with known parameters,
limiting their generalization to real-world scenarios. In contrast, graph
representation learning-based methods have gained wide attention for overcoming
this limitation by learning user representations to capture influence
characteristics. However, existing studies are built on Euclidean space, which
fails to effectively capture the latent hierarchical features of social
influence distribution. As a result, users' influence spread cannot be
effectively measured through the learned representations. To alleviate these
limitations, we propose HIM, a novel diffusion model agnostic method that
leverages hyperbolic representation learning to estimate users' potential
influence spread from social propagation data. HIM consists of two key
components. First, a hyperbolic influence representation module encodes
influence spread patterns from network structure and historical influence
activations into expressive hyperbolic user representations. Hence, the
influence magnitude of users can be reflected through the geometric properties
of hyperbolic space, where highly influential users tend to cluster near the
space origin. Second, a novel adaptive seed selection module is developed to
flexibly and effectively select seed users using the positional information of
learned user representations. Extensive experiments on five network datasets
demonstrate the superior effectiveness and efficiency of our method for the IM
problem with unknown diffusion model parameters, highlighting its potential for
large-scale real-world social networks.
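The geometric intuition, that centrality in the Poincaré ball can encode influence magnitude, can be sketched as follows. The embeddings are toy values; HIM's actual representation module is learned from propagation data.

```python
import numpy as np

def poincare_norm(x, eps=1e-9):
    # Hyperbolic distance from the origin in the Poincare ball:
    # d(0, x) = 2 * arctanh(||x||); it blows up near the boundary.
    r = np.clip(np.linalg.norm(x, axis=-1), 0.0, 1.0 - eps)
    return 2.0 * np.arctanh(r)

# Toy embeddings: in HIM, highly influential users cluster near the
# origin, so centrality (small hyperbolic norm) acts as an influence proxy.
emb = np.array([[0.05, 0.0],   # near origin -> most "influential"
                [0.6, 0.3],
                [0.9, 0.1]])   # near boundary -> least "influential"
ranking = np.argsort(poincare_norm(emb))  # most central users first
```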
|
2502.13573
|
Noise May Contain Transferable Knowledge: Understanding Semi-supervised
Heterogeneous Domain Adaptation from an Empirical Perspective
|
cs.LG
|
Semi-supervised heterogeneous domain adaptation (SHDA) addresses learning
across domains with distinct feature representations and distributions, where
source samples are labeled while most target samples are unlabeled, with only a
small fraction labeled. Moreover, there is no one-to-one correspondence between
source and target samples. Although various SHDA methods have been developed to
tackle this problem, the nature of the knowledge transferred across
heterogeneous domains remains unclear. This paper delves into this question
from an empirical perspective. We conduct extensive experiments on about 330
SHDA tasks, employing two supervised learning methods and seven representative
SHDA methods. Surprisingly, our observations indicate that both the category
and feature information of source samples do not significantly impact the
performance of the target domain. Additionally, noise drawn from simple
distributions, when used as source samples, may contain transferable knowledge.
Based on this insight, we perform a series of experiments to uncover the
underlying principles of transferable knowledge in SHDA. Specifically, we
design a unified Knowledge Transfer Framework (KTF) for SHDA. Based on the KTF,
we find that the transferable knowledge in SHDA primarily stems from the
transferability and discriminability of the source domain. Consequently,
ensuring those properties in source samples, regardless of their origin (e.g.,
image, text, noise), can enhance the effectiveness of knowledge transfer in
SHDA tasks. The codes and datasets are available at
https://github.com/yyyaoyuan/SHDA.
|
2502.13574
|
RestoreGrad: Signal Restoration Using Conditional Denoising Diffusion
Models with Jointly Learned Prior
|
eess.IV cs.LG eess.AS
|
Denoising diffusion probabilistic models (DDPMs) can be utilized for
recovering a clean signal from its degraded observation(s) by conditioning the
model on the degraded signal. The degraded signals are themselves contaminated
versions of the clean signals; due to this correlation, they may encompass
certain useful information about the target clean data distribution. However,
existing adoption of the standard Gaussian as the prior distribution in turn
discards such information, resulting in sub-optimal performance. In this paper,
we propose to improve conditional DDPMs for signal restoration by leveraging a
more informative prior that is jointly learned with the diffusion model. The
proposed framework, called RestoreGrad, seamlessly integrates DDPMs into the
variational autoencoder framework and exploits the correlation between the
degraded and clean signals to encode a better diffusion prior. On speech and
image restoration tasks, we show that RestoreGrad demonstrates faster
convergence (5-10 times fewer training steps) to achieve better quality of
restored signals over existing DDPM baselines, and improved robustness to using
fewer sampling steps in inference time (2-2.5 times fewer), advocating the
advantages of leveraging jointly learned prior for efficiency improvements in
the diffusion process.
|
2502.13575
|
ETS: Efficient Tree Search for Inference-Time Scaling
|
cs.LG
|
Test-time compute scaling has emerged as a new axis along which to improve
model accuracy, where additional computation is used at inference time to allow
the model to think longer for more challenging problems. One promising approach
for test-time compute scaling is search against a process reward model, where a
model generates multiple potential candidates at each step of the search, and
these partial trajectories are then scored by a separate reward model in order
to guide the search process. The diversity of trajectories in the tree search
process affects the accuracy of the search, since increasing diversity promotes
more exploration. However, this diversity comes at a cost, as divergent
trajectories have less KV sharing, which means they consume more memory and
slow down the search process. Previous search methods either do not perform
sufficient exploration, or else explore diverse trajectories but have high
latency. We address this challenge by proposing Efficient Tree Search (ETS),
which promotes KV sharing by pruning redundant trajectories while maintaining
necessary diverse trajectories. ETS incorporates a linear programming cost
model to promote KV cache sharing by penalizing the number of nodes retained,
while incorporating a semantic coverage term into the cost model to ensure that
we retain trajectories which are semantically different. We demonstrate how ETS
can achieve 1.8$\times$ reduction in average KV cache size during the search
process, leading to 1.4$\times$ increased throughput relative to prior
state-of-the-art methods, with minimal accuracy degradation and without
requiring any custom kernel implementation. Code is available at:
https://github.com/SqueezeAILab/ETS.
|
2502.13576
|
Beyond One-Size-Fits-All: Tailored Benchmarks for Efficient Evaluation
|
cs.LG cs.AI
|
Evaluating models on large benchmarks is very resource-intensive, especially
during the period of rapid model evolution. Existing efficient evaluation
methods estimate the performance of target models by testing them only on a
small and static coreset of the benchmark, which is derived from the publicly
available evaluation results of source models. These methods rely on the
assumption that target models have high prediction consistency with source
models. However, we demonstrate that this assumption does not generalize well
in practice. To
alleviate the inconsistency issue, we present TailoredBench, a method that
conducts customized evaluation tailored to each target model. Specifically, a
Global-coreset is first constructed as a probe to identify the most consistent
source models for each target model with an adaptive source model selection
strategy. Afterwards, a scalable K-Medoids clustering algorithm is proposed to
extend the Global-coreset to a tailored Native-coreset for each target model.
According to the predictions on Native-coresets, we obtain the performance of
target models on the whole benchmark with a calibrated estimation strategy.
Comprehensive experiments on 5 benchmarks across over 300 models demonstrate
that compared to best performing baselines, TailoredBench achieves an average
reduction of 31.4% in MAE of accuracy estimates under the same inference
budgets, showcasing strong effectiveness and generalizability.
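A plain alternating k-medoids routine (not the paper's scalable variant) conveys the coreset-selection building block used to extend the Global-coreset into a Native-coreset:

```python
import numpy as np

def k_medoids(dist, k, iters=20, seed=0):
    # Plain alternating k-medoids on a precomputed distance matrix:
    # assign each point to its nearest medoid, then re-pick the medoid
    # of each cluster as the member minimizing total within-cluster
    # distance. Returns the indices of the chosen medoids.
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(iters):
        assign = dist[:, medoids].argmin(axis=1)
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(assign == c)[0]
            if len(members) == 0:
                continue
            within = dist[np.ix_(members, members)].sum(axis=1)
            new_medoids[c] = members[within.argmin()]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids
```

In the coreset setting, the medoids are the benchmark examples actually evaluated; each remaining example's prediction is approximated by its cluster's medoid.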
|
2502.13577
|
Unraveling the Localized Latents: Learning Stratified Manifold
Structures in LLM Embedding Space with Sparse Mixture-of-Experts
|
cs.LG
|
Real-world data often exhibit complex local structures that can be
challenging for single-model approaches with a smooth global manifold in the
embedding space to unravel. In this work, we conjecture that in the latent
space of these large language models, the embeddings live in a local manifold
structure with different dimensions depending on the perplexities and domains
of the input data, commonly referred to as a Stratified Manifold structure,
which in combination form a structured space known as a Stratified Space. To
investigate the validity of this structural claim, we propose an analysis
framework based on a Mixture-of-Experts (MoE) model where each expert is
implemented with a simple dictionary learning algorithm at varying sparsity
levels. By incorporating an attention-based soft-gating network, we verify that
our model learns specialized sub-manifolds for an ensemble of input data
sources, reflecting the semantic stratification in LLM embedding space. We
further analyze the intrinsic dimensions of these stratified sub-manifolds and
present extensive statistics on expert assignments, gating entropy, and
inter-expert distances. Our experimental results demonstrate that our method
not only validates the claim of a stratified manifold structure in the LLM
embedding space, but also provides interpretable clusters that align with the
intrinsic semantic variations of the input data.
|
2502.13581
|
ActionPiece: Contextually Tokenizing Action Sequences for Generative
Recommendation
|
cs.IR cs.LG
|
Generative recommendation (GR) is an emerging paradigm where user actions are
tokenized into discrete token patterns and autoregressively generated as
predictions. However, existing GR models tokenize each action independently,
assigning the same fixed tokens to identical actions across all sequences
without considering contextual relationships. This lack of context-awareness
can lead to suboptimal performance, as the same action may hold different
meanings depending on its surrounding context. To address this issue, we
propose ActionPiece to explicitly incorporate context when tokenizing action
sequences. In ActionPiece, each action is represented as a set of item
features, which serve as the initial tokens. Given the action sequence corpora,
we construct the vocabulary by merging feature patterns as new tokens, based on
their co-occurrence frequency both within individual sets and across adjacent
sets. Considering the unordered nature of feature sets, we further introduce
set permutation regularization, which produces multiple segmentations of action
sequences with the same semantics. Experiments on public datasets demonstrate
that ActionPiece consistently outperforms existing action tokenization methods,
improving NDCG@$10$ by $6.00\%$ to $12.82\%$.
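The merging step can be illustrated with a heavily simplified, BPE-style sketch. The feature names are hypothetical, and the paper's actual co-occurrence weighting and vocabulary construction differ.

```python
from collections import Counter
from itertools import combinations

def most_frequent_pair(corpus):
    # corpus: list of action sequences; each action is a frozenset of
    # feature tokens. Count feature pairs co-occurring within one
    # feature set and across adjacent sets.
    counts = Counter()
    for seq in corpus:
        for action in seq:
            for pair in combinations(sorted(action), 2):
                counts[pair] += 1
        for prev, nxt in zip(seq, seq[1:]):
            for a in prev:
                for b in nxt:
                    counts[tuple(sorted((a, b)))] += 1
    return counts.most_common(1)[0] if counts else None

def merge_pair(corpus, pair):
    # Replace both members of `pair` with one merged token wherever the
    # full pair appears inside a single feature set.
    a, b = pair
    merged = f"{a}+{b}"
    new_corpus = []
    for seq in corpus:
        new_seq = []
        for action in seq:
            if a in action and b in action:
                action = (action - {a, b}) | {merged}
            new_seq.append(action)
        new_corpus.append(new_seq)
    return new_corpus
```

Iterating this merge step grows a vocabulary in which frequently co-occurring feature patterns become single, context-aware tokens.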
|
2502.13584
|
Multi-Target Radar Search and Track Using Sequence-Capable Deep
Reinforcement Learning
|
cs.LG cs.SY eess.SY
|
The research addresses sensor task management for radar systems, focusing on
efficiently searching and tracking multiple targets using reinforcement
learning. The approach develops a 3D simulation environment with an active
electronically scanned array radar, using a multi-target tracking algorithm to
improve observation data quality. Three neural network architectures were
compared, including an approach using gated recurrent units with multi-headed
self-attention. Two pre-training techniques were applied: behavior cloning to
approximate a random search strategy and an auto-encoder to pre-train the
feature extractor. Experimental results revealed that search performance was
relatively consistent across most methods. The real challenge emerged in
simultaneously searching and tracking targets. The multi-headed self-attention
architecture demonstrated the most promising results, highlighting the
potential of sequence-capable architectures in handling dynamic tracking
scenarios. The key contribution lies in demonstrating how reinforcement
learning can optimize sensor management, potentially improving radar systems'
ability to identify and track multiple targets in complex environments.
|
2502.13592
|
Don't Stop the Multi-Party! On Generating Synthetic Multi-Party
Conversations with Constraints
|
cs.CL
|
Multi-Party Conversations (MPCs) are widely studied across disciplines, with
social media as a primary data source due to their accessibility. However,
these datasets raise privacy concerns and often reflect platform-specific
properties. For example, interactions between speakers may be limited due to
rigid platform structures (e.g., threads, tree-like discussions), which yield
overly simplistic interaction patterns (e.g., as a consequence of ``reply-to''
links). This work explores the feasibility of generating diverse MPCs with
instruction-tuned Large Language Models (LLMs) by providing deterministic
constraints such as dialogue structure and participants' stance. We investigate
two complementary strategies of leveraging LLMs in this context: (i.) LLMs as
MPC generators, where we task the LLM to generate a whole MPC at once and (ii.)
LLMs as MPC parties, where the LLM generates one turn of the conversation at a
time, provided the conversation history. We next introduce an analytical
framework to evaluate compliance with the constraints, content quality, and
interaction complexity for both strategies. Finally, we assess the quality of
obtained MPCs via human annotation and LLM-as-a-judge evaluations. We find
stark differences among LLMs, with only some being able to generate
high-quality MPCs. We also find that turn-by-turn generation yields better
conformance to constraints and higher linguistic variability than generating
MPCs in one pass. Nonetheless, our structural and qualitative evaluation
indicates that both generation strategies can yield high-quality MPCs.
|
2502.13593
|
Toward Robust Non-Transferable Learning: A Survey and Benchmark
|
cs.LG cs.CR cs.CV
|
Over the past decades, researchers have primarily focused on improving the
generalization abilities of models, with limited attention given to regulating
such generalization. However, the ability of models to generalize to unintended
data (e.g., harmful or unauthorized data) can be exploited by malicious
adversaries in unforeseen ways, potentially resulting in violations of model
ethics. Non-transferable learning (NTL), a task aimed at reshaping the
generalization abilities of deep learning models, was proposed to address these
challenges. While numerous methods have been proposed in this field, a
comprehensive review of existing progress and a thorough analysis of current
limitations remain lacking. In this paper, we bridge this gap by presenting the
first comprehensive survey on NTL and introducing NTLBench, the first benchmark
to evaluate NTL performance and robustness within a unified framework.
Specifically, we first introduce the task settings, general framework, and
criteria of NTL, followed by a summary of NTL approaches. Furthermore, we
emphasize the often-overlooked issue of robustness against various attacks that
can destroy the non-transferable mechanism established by NTL. Experiments
conducted via NTLBench verify the limitations of existing NTL methods in
robustness. Finally, we discuss the practical applications of NTL, along with
its future directions and associated challenges.
|
2502.13595
|
MMTEB: Massive Multilingual Text Embedding Benchmark
|
cs.CL cs.AI cs.IR
|
Text embeddings are typically evaluated on a limited set of tasks, which are
constrained by language, domain, and task diversity. To address these
limitations and provide a more comprehensive evaluation, we introduce the
Massive Multilingual Text Embedding Benchmark (MMTEB) - a large-scale,
community-driven expansion of MTEB, covering over 500 quality-controlled
evaluation tasks across 250+ languages. MMTEB includes a diverse set of
challenging, novel tasks such as instruction following, long-document
retrieval, and code retrieval, representing the largest multilingual collection
of evaluation tasks for embedding models to date. Using this collection, we
develop several highly multilingual benchmarks, which we use to evaluate a
representative set of models. We find that while large language models (LLMs)
with billions of parameters can achieve state-of-the-art performance on certain
language subsets and task categories, the best-performing publicly available
model is multilingual-e5-large-instruct with only 560 million parameters. To
facilitate accessibility and reduce computational cost, we introduce a novel
downsampling method based on inter-task correlation, ensuring a diverse
selection while preserving relative model rankings. Furthermore, we optimize
tasks such as retrieval by sampling hard negatives, creating smaller but
effective splits. These optimizations allow us to introduce benchmarks that
drastically reduce computational demands. For instance, our newly introduced
zero-shot English benchmark maintains a ranking order similar to the full-scale
version but at a fraction of the computational cost.
|
2502.13603
|
Efficient Safety Retrofitting Against Jailbreaking for LLMs
|
cs.CL cs.AI cs.LG
|
Direct Preference Optimization (DPO) is an efficient alignment technique that
steers LLMs towards preferable outputs by training on preference data,
bypassing the need for explicit reward models. Its simplicity enables easy
adaptation to various domains and safety requirements. This paper examines
DPO's effectiveness in model safety against jailbreaking attacks while
minimizing data requirements and training costs. We introduce Egida, a dataset
expanded from multiple sources, which includes 27 different safety topics and
18 different attack styles, complemented with synthetic and human labels. This
data is used to boost the safety of state-of-the-art LLMs
(Llama-3.1-8B/70B-Instruct, Qwen-2.5-7B/72B-Instruct) across topics and attack
styles. In addition to safety evaluations, we assess their post-alignment
performance degradation in general-purpose tasks and their tendency toward
over-refusal. Following the proposed methodology, trained models reduce their Attack
Success Rate by 10%-30%, using small training efforts (2,000 samples) with low
computational cost (\$3 for 8B models, \$20 for 72B models). Safety-aligned
models generalize to unseen topics and attack styles, with the most successful
attack style reaching a success rate around 5%. Size and family are found to
strongly influence model malleability toward safety, pointing to the
importance of pre-training choices. To validate our findings, a large
independent assessment of human preference agreement with Llama-Guard-3-8B is
conducted by the authors and the associated dataset Egida-HSafe is released.
Overall, this study illustrates how affordable and accessible it is to enhance
LLM safety using DPO while outlining its current limitations. All datasets and
models are released to enable reproducibility and further research.
|
2502.13604
|
BeamLoRA: Beam-Constraint Low-Rank Adaptation
|
cs.CL
|
Due to the demand for efficient fine-tuning of large language models,
Low-Rank Adaptation (LoRA) has been widely adopted as one of the most effective
parameter-efficient fine-tuning methods. Nevertheless, while LoRA improves
efficiency, there remains room for improvement in accuracy. Herein, we adopt a
novel perspective to assess the characteristics of LoRA ranks. The results
reveal that different ranks within the LoRA modules not only exhibit varying
levels of importance but also evolve dynamically throughout the fine-tuning
process, which may limit the performance of LoRA. Based on these findings, we
propose BeamLoRA, which conceptualizes each LoRA module as a beam where each
rank naturally corresponds to a potential sub-solution, and the fine-tuning
process becomes a search for the optimal sub-solution combination. BeamLoRA
dynamically eliminates underperforming sub-solutions while expanding the
parameter space for promising ones, enhancing performance with a fixed rank.
Extensive experiments across three base models and 12 datasets spanning math
reasoning, code generation, and commonsense reasoning demonstrate that BeamLoRA
consistently enhances the performance of LoRA, surpassing the other baseline
methods.
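A minimal sketch of the rank-as-sub-solution view: score each LoRA rank by the norm of its rank-1 contribution and prune the weak half. The scoring rule here is an assumption for illustration, not BeamLoRA's actual criterion.

```python
import numpy as np

rng = np.random.default_rng(0)
r, d_in, d_out = 8, 32, 32
A = rng.standard_normal((r, d_in)) * 0.1   # LoRA down-projection (r x d_in)
B = rng.standard_normal((d_out, r)) * 0.1  # LoRA up-projection (d_out x r)

# Score each rank (candidate sub-solution) by the norm of its rank-1
# contribution b_r a_r^T to the update B @ A.
scores = np.linalg.norm(A, axis=1) * np.linalg.norm(B, axis=0)

# Keep the top half of the ranks; in BeamLoRA the freed capacity would
# be re-allocated to promising sub-solutions rather than discarded.
keep = np.argsort(scores)[r // 2 :]
A_pruned, B_pruned = A[keep], B[:, keep]

delta_full = B @ A                  # full rank-r update
delta_pruned = B_pruned @ A_pruned  # update from the surviving ranks
```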
|
2502.13606
|
LaVCa: LLM-assisted Visual Cortex Captioning
|
q-bio.NC cs.AI cs.CL cs.CV cs.LG
|
Understanding the property of neural populations (or voxels) in the human
brain can advance our comprehension of human perceptual and cognitive
processing capabilities and contribute to developing brain-inspired computer
models. Recent encoding models using deep neural networks (DNNs) have
successfully predicted voxel-wise activity. However, interpreting the
properties that explain voxel responses remains challenging because of the
black-box nature of DNNs. As a solution, we propose LLM-assisted Visual Cortex
Captioning (LaVCa), a data-driven approach that uses large language models
(LLMs) to generate natural-language captions for images to which voxels are
selective. By applying LaVCa for image-evoked brain activity, we demonstrate
that LaVCa generates captions that describe voxel selectivity more accurately
than the previously proposed method. Furthermore, the captions generated by
LaVCa quantitatively capture more detailed properties than the existing method
at both the inter-voxel and intra-voxel levels. Moreover, a more detailed
analysis of the voxel-specific properties generated by LaVCa reveals
fine-grained functional differentiation within regions of interest (ROIs) in
the visual cortex and voxels that simultaneously represent multiple distinct
concepts. These findings offer profound insights into human visual
representations by assigning detailed captions throughout the visual cortex
while highlighting the potential of LLM-based methods in understanding brain
representations. Please check out our webpage at
https://sites.google.com/view/lavca-llm/
|
2502.13607
|
Environmental Influences on Collaboration Network Evolution: A
Historical Analysis
|
cs.SI physics.soc-ph
|
We analysed two large collaboration networks -- the Microsoft Academic Graph
(1800-2020) and Internet Movie Database (1900-2020) -- to quantify network
responses to major historical events. Our analysis revealed four properties of
network-environment interaction. First, historical events can influence network
evolution, with effects persisting far longer than previously recognised; the
academic network showed 45\% declines during the World Wars and 90\% growth during
La Belle Epoque. Second, node and edge processes exhibited different
environmental sensitivities; while node addition/removal tracked historical
events, edge formation maintained stable statistical properties even during
major disruptions. Third, different collaboration networks showed distinct
response patterns; academic networks displayed sharp disruptions and rapid
recoveries, while entertainment networks showed gradual changes and greater
resilience. Fourth, both networks developed increasing resilience. Our results
provide new insights for modelling network evolution and managing collaborative
systems during periods of external disruption.
|
2502.13619
|
Complex Ontology Matching with Large Language Model Embeddings
|
cs.CL cs.AI
|
Ontology, and more broadly, Knowledge Graph Matching is a challenging task in
which expressiveness has not been fully addressed. Despite the increasing use
of embeddings and language models for this task, approaches for generating
expressive correspondences still do not take full advantage of these models, in
particular, large language models (LLMs). This paper proposes to integrate LLMs
into an approach for generating expressive correspondences based on alignment
need and ABox-based relation discovery. The generation of correspondences is
performed by matching similar surroundings of instance sub-graphs. The
integration of LLMs results in different architectural modifications, including
label similarity, sub-graph matching, and entity matching. The performance of
word embeddings, sentence embeddings, and LLM-based embeddings was compared. The
results demonstrate that integrating LLMs surpasses all other models, enhancing
the baseline version of the approach with a 45\% increase in F-measure.
|
2502.13621
|
Decentralized Planning Using Probabilistic Hyperproperties
|
cs.LO cs.AI
|
Multi-agent planning under stochastic dynamics is usually formalised using
decentralized (partially observable) Markov decision processes (Dec-(PO)MDPs) and
reachability or expected reward specifications. In this paper, we propose a
different approach: we use an MDP describing how a single agent operates in an
environment and probabilistic hyperproperties to capture desired temporal
objectives for a set of decentralized agents operating in the environment. We
extend existing approaches for model checking probabilistic hyperproperties to
handle temporal formulae relating paths of different agents, thus requiring the
self-composition between multiple MDPs. Using several case studies, we
demonstrate that our approach provides a flexible and expressive framework to
broaden the specification capabilities with respect to existing planning
techniques. Additionally, we establish a close connection between a subclass of
probabilistic hyperproperties and planning for a particular type of Dec-MDPs,
for both of which we show undecidability. This lays the ground for the use of
existing decentralized planning tools in the field of probabilistic
hyperproperty verification.
|
2502.13622
|
REFIND: Retrieval-Augmented Factuality Hallucination Detection in Large
Language Models
|
cs.CL cs.AI
|
Hallucinations in large language model (LLM) outputs severely limit their
reliability in knowledge-intensive tasks such as question answering. To address
this challenge, we introduce REFIND (Retrieval-augmented Factuality
hallucINation Detection), a novel framework that detects hallucinated spans
within LLM outputs by directly leveraging retrieved documents. As part of
REFIND, we propose the Context Sensitivity Ratio (CSR), a novel metric that
quantifies the sensitivity of LLM outputs to retrieved evidence. This
innovative approach enables REFIND to efficiently and accurately detect
hallucinations, setting it apart from existing methods. In the evaluation,
REFIND demonstrated robustness across nine languages, including low-resource
settings, and significantly outperformed baseline models, achieving superior
IoU scores in identifying hallucinated spans. This work highlights the
effectiveness of quantifying context sensitivity for hallucination detection,
thereby paving the way for more reliable and trustworthy LLM applications
across diverse languages.
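One plausible, entirely hypothetical instantiation of a context-sensitivity score compares per-token probabilities with and without the retrieved evidence; the paper's actual CSR definition may differ.

```python
def context_sensitivity(p_with_ctx, p_without_ctx, eps=1e-12):
    # Ratio of each token's probability with vs. without the retrieved
    # context. A ratio near 1 means the retrieved evidence barely moves
    # the model's confidence in that token; large deviations indicate
    # strong sensitivity to the evidence. (Hypothetical scoring rule.)
    return [(pw + eps) / (po + eps)
            for pw, po in zip(p_with_ctx, p_without_ctx)]

# Token 0 is strongly supported by the context; token 1 is unaffected.
scores = context_sensitivity([0.9, 0.2], [0.1, 0.2])
```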
|
2502.13624
|
CardiacMamba: A Multimodal RGB-RF Fusion Framework with State Space
Models for Remote Physiological Measurement
|
cs.CV
|
Heart rate (HR) estimation via remote photoplethysmography (rPPG) offers a
non-invasive solution for health monitoring. However, traditional
single-modality approaches (RGB or Radio Frequency (RF)) face challenges in
balancing robustness and accuracy due to lighting variations, motion artifacts,
and skin tone bias. In this paper, we propose CardiacMamba, a multimodal RGB-RF
fusion framework that leverages the complementary strengths of both modalities.
It introduces the Temporal Difference Mamba Module (TDMM) to capture dynamic
changes in RF signals using timing differences between frames, enhancing the
extraction of local and global features. Additionally, CardiacMamba employs a
Bidirectional SSM for cross-modal alignment and a Channel-wise Fast Fourier
Transform (CFFT) to effectively capture and refine the frequency domain
characteristics of RGB and RF signals, ultimately improving heart rate
estimation accuracy and periodicity detection. Extensive experiments on the
EquiPleth dataset demonstrate state-of-the-art performance, achieving marked
improvements in accuracy and robustness. CardiacMamba significantly mitigates
skin tone bias, reducing performance disparities across demographic groups, and
maintains resilience under missing-modality scenarios. By addressing critical
challenges in fairness, adaptability, and precision, the framework advances
rPPG technology toward reliable real-world deployment in healthcare. The codes
are available at: https://github.com/WuZheng42/CardiacMamba.
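For illustration, the two signal-processing ideas named here, frame-wise temporal differencing and frequency-domain peak picking, can be sketched outside any learned architecture. This is not the TDMM/CFFT implementation, just the underlying operations applied to a synthetic pulse signal; the 0.7-3.0 Hz heart-rate band is an assumption of this sketch.

```python
import numpy as np

def temporal_difference(frames):
    """First-order differences between consecutive frames, emphasizing
    dynamic (pulse-related) changes over static clutter."""
    return np.diff(frames, axis=0)

def dominant_heart_rate(signal, fs, lo=0.7, hi=3.0):
    """Pick the strongest spectral peak inside a plausible HR band
    (0.7-3.0 Hz, i.e. 42-180 bpm) from the signal's FFT magnitude."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0  # beats per minute

# Synthetic 1.2 Hz (72 bpm) pulse sampled at 30 fps for 10 s,
# riding on a static offset that the differencing removes.
fs = 30.0
t = np.arange(0, 10, 1 / fs)
frames = np.sin(2 * np.pi * 1.2 * t)[:, None] + 0.5
diff = temporal_difference(frames)
bpm = dominant_heart_rate(diff[:, 0], fs)
```

The differencing suppresses the static component while leaving the periodic pulse frequency intact, which is why the spectral peak still lands near 72 bpm.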
|
2502.13626
|
AI-Empowered Catalyst Discovery: A Survey from Classical Machine
Learning Approaches to Large Language Models
|
cs.CE
|
Catalysts are essential for accelerating chemical reactions and enhancing
selectivity, which is crucial for the sustainable production of energy,
materials, and bioactive compounds. Catalyst discovery is fundamental yet
challenging in computational chemistry and has garnered significant attention
due to the promising performance of advanced Artificial Intelligence (AI)
techniques. The development of Large Language Models (LLMs) notably accelerates
progress in the discovery of both homogeneous and heterogeneous catalysts,
where their chemical reactions differ significantly in material phases,
temperature, dynamics, etc. However, there is currently no comprehensive survey
that discusses the progress and latest developments in both areas, particularly
with the application of LLM techniques. To address this gap, this paper
presents a thorough and systematic survey of AI-empowered catalyst discovery,
employing a unified and general categorization for homogeneous and
heterogeneous catalysts. We examine the progress of AI-empowered catalyst
discovery, highlighting their individual advantages and disadvantages, and
discuss the challenges faced in this field. Furthermore, we suggest potential
directions for future research from the perspective of computer science. Our
goal is to assist researchers in computational chemistry, computer science, and
related fields in easily tracking the latest advancements, providing a clear
overview and roadmap of this area. We also organize and make accessible
relevant resources, including article lists and datasets, in an open repository
at
https://github.com/LuckyGirl-XU/Awesome-Artificial-Intelligence-Empowered-Catalyst-Discovery.
|
2502.13628
|
Non-Euclidean Hierarchical Representational Learning using Hyperbolic
Graph Neural Networks for Environmental Claim Detection
|
cs.CL
|
Transformer-based models dominate NLP tasks like sentiment analysis, machine
translation, and claim verification. However, their massive computational
demands and lack of interpretability pose challenges for real-world
applications requiring efficiency and transparency. In this work, we explore
Graph Neural Networks (GNNs) and Hyperbolic Graph Neural Networks (HGNNs) as
lightweight yet effective alternatives for Environmental Claim Detection,
reframing it as a graph classification problem. We construct dependency parsing
graphs to explicitly model syntactic structures, using simple word embeddings
(word2vec) for node features with dependency relations encoded as edge
features. Our results demonstrate that these graph-based models achieve
comparable or superior performance to state-of-the-art transformers while using
30x fewer parameters. This efficiency highlights the potential of structured,
interpretable, and computationally efficient graph-based approaches.
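As a minimal sketch of the graph reframing described here, the snippet below builds a dependency graph from a toy hand-written parse, with random vectors standing in for word2vec node features and one-hot relation encodings as edge features (a real pipeline would use an actual parser and pretrained embeddings):

```python
import numpy as np

# Toy dependency parse as (token, head_index, relation); head -1 marks ROOT.
parse = [
    ("company", 1, "nsubj"),
    ("reduced", -1, "ROOT"),
    ("its", 3, "poss"),
    ("emissions", 1, "obj"),
]
rel_vocab = {"nsubj": 0, "ROOT": 1, "poss": 2, "obj": 3}

rng = np.random.default_rng(0)
dim = 8
node_features = rng.normal(size=(len(parse), dim))  # word2vec stand-in

edges, edge_features = [], []
for i, (tok, head, rel) in enumerate(parse):
    if head >= 0:  # skip the ROOT token's self-edge
        one_hot = np.zeros(len(rel_vocab))
        one_hot[rel_vocab[rel]] = 1.0
        edges.append((head, i))
        edge_features.append(one_hot)

# One mean-aggregation step, the simplest message-passing flavor,
# followed by mean pooling into a graph-level representation.
agg = node_features.copy()
for (h, t) in edges:
    agg[h] += node_features[t]
graph_repr = agg.mean(axis=0)
```

The graph-level vector would then feed a small classifier head, which is where the parameter savings over a transformer come from.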
|
2502.13632
|
Concept Layers: Enhancing Interpretability and Intervenability via LLM
Conceptualization
|
cs.LG cs.AI cs.CL
|
The opaque nature of Large Language Models (LLMs) has led to significant
research efforts aimed at enhancing their interpretability, primarily through
post-hoc methods. More recent in-hoc approaches, such as Concept Bottleneck
Models (CBMs), offer both interpretability and intervenability by incorporating
explicit concept representations. However, these methods suffer from key
limitations, including reliance on labeled concept datasets and significant
architectural modifications that challenge re-integration into existing system
pipelines. In this work, we introduce a new methodology for incorporating
interpretability and intervenability into an existing model by integrating
Concept Layers (CLs) into its architecture. Our approach projects the model's
internal vector representations into a conceptual, explainable vector space
before reconstructing and feeding them back into the model. Furthermore, we
eliminate the need for a human-selected concept set by algorithmically
searching an ontology for a set of concepts that can be either task-specific or
task-agnostic. We evaluate CLs across multiple tasks, demonstrating that they
maintain the original model's performance and agreement while enabling
meaningful interventions. Additionally, we present a proof of concept
showcasing an intervenability interface, allowing users to adjust model
behavior dynamically, such as mitigating biases during inference.
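A minimal numerical sketch of the project-then-reconstruct idea follows, with a random matrix standing in for a learned concept set (this is not the paper's Concept Layer; it only illustrates projecting a hidden vector onto a concept subspace, reading interpretable coefficients, and intervening on one of them):

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 16, 4                      # hidden size, number of concepts
C = rng.normal(size=(k, d))       # hypothetical concept embeddings
h = rng.normal(size=d)            # a model's internal representation

# Project h onto the concept subspace: solve min_a ||a @ C - h||.
a, *_ = np.linalg.lstsq(C.T, h, rcond=None)  # interpretable concept scores
h_rec = a @ C                                # reconstruction fed back in

# Intervention: suppress one concept before reconstructing.
a_edit = a.copy()
a_edit[2] = 0.0
h_intervened = a_edit @ C
```

The least-squares residual is orthogonal to the concept subspace, so the reconstruction keeps exactly the conceptually explainable part of `h`; editing `a` is what makes the representation intervenable.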
|
2502.13634
|
First Glimpse on Physical Layer Security in Internet of Vehicles:
Transformed from Communication Interference to Sensing Interference
|
cs.IT math.IT
|
Integrated sensing and communication (ISAC) plays a crucial role in the
Internet of Vehicles (IoV), serving as a key factor in enhancing driving safety
and traffic efficiency. To address the security challenges of confidential
information transmission caused by the inherently open nature of the wireless
medium, and in contrast to current physical layer security (PLS) methods, which
depend on \emph{additional communication interference} at the cost of extra
power resources, in this paper we investigate a novel PLS solution in which the
\emph{inherent radar sensing interference} of the vehicles is utilized to
secure wireless communications. To measure the performance of PLS methods in
ISAC-based IoV systems, we first define an improved security performance
metric, the transmission reliability and sensing accuracy based secrecy rate
(TRSA\_SR), and derive closed-form expressions of the connection outage probability
(COP), secrecy outage probability (SOP), success ranging probability (SRP) for
evaluating transmission reliability, security and sensing accuracy,
respectively. Furthermore, we formulate an optimization problem to maximize the
TRSA\_SR by utilizing radar sensing interference and joint design of the
communication duration, transmission power and straight trajectory of the
legitimate transmitter. Finally, the non-convexity of the formulated problem is
handled through problem decomposition and alternating optimization.
Simulations indicate that, compared with traditional PLS methods, which obtain
a non-positive STC, the proposed method achieves a secrecy rate of 3.92 bps/Hz
for different settings of noise power.
|
2502.13637
|
Exploring Mutual Cross-Modal Attention for Context-Aware Human
Affordance Generation
|
cs.CV cs.MM
|
Human affordance learning investigates contextually relevant novel pose
prediction such that the estimated pose represents a valid human action within
the scene. While the task is fundamental to machine perception and automated
interactive navigation agents, the exponentially large number of probable pose
and action variations makes the problem challenging and non-trivial. However,
the existing datasets and methods for human affordance prediction in 2D scenes
are significantly limited in the literature. In this paper, we propose a novel
cross-attention mechanism to encode the scene context for affordance prediction
by mutually attending spatial feature maps from two different modalities. The
proposed method is disentangled among individual subtasks to efficiently reduce
the problem complexity. First, we sample a probable location for a person
within the scene using a variational autoencoder (VAE) conditioned on the
global scene context encoding. Next, we predict a potential pose template from
a set of existing human pose candidates using a classifier on the local context
encoding around the predicted location. In the subsequent steps, we use two
VAEs to sample the scale and deformation parameters for the predicted pose
template by conditioning on the local context and template class. Our
experiments show significant improvements over the previous baseline of human
affordance injection into complex 2D scenes.
|
2502.13638
|
Integrating Inverse and Forward Modeling for Sparse Temporal Data from
Sensor Networks
|
cs.LG cs.AI
|
We present CavePerception, a framework for the analysis of sparse data from
sensor networks that incorporates elements of inverse modeling and forward
modeling. By integrating machine learning with physical modeling in a
hypothesis space, we aim to improve the interpretability of sparse, noisy, and
potentially incomplete sensor data. The framework assumes data from a
two-dimensional sensor network laid out in a graph structure that detects
certain objects with certain motion patterns. Examples of such sensors are
magnetometers. Given knowledge about the objects and the way they act on the
sensors, one can develop a data generator that produces data from simulated
motions of the objects across the sensor field. The framework uses the
simulated data to infer object behaviors across the sensor network. The
approach is experimentally tested on real-world data, where magnetometers are
used at an airport to detect and identify aircraft motions. Experiments
demonstrate the value of integrating inverse and forward modeling, enabling
intelligent systems to better understand and predict complex, sensor-driven
events.
|
2502.13640
|
Qorgau: Evaluating LLM Safety in Kazakh-Russian Bilingual Contexts
|
cs.CL
|
Large language models (LLMs) are known to have the potential to generate
harmful content, posing risks to users. While significant progress has been
made in developing taxonomies for LLM risks and safety evaluation prompts, most
studies have focused on monolingual contexts, primarily in English. However,
language- and region-specific risks in bilingual contexts are often overlooked,
and core findings can diverge from those in monolingual settings. In this
paper, we introduce Qorgau, a novel dataset specifically designed for safety
evaluation in Kazakh and Russian, reflecting the unique bilingual context in
Kazakhstan, where both Kazakh (a low-resource language) and Russian (a
high-resource language) are spoken. Experiments with both multilingual and
language-specific LLMs reveal notable differences in safety performance,
emphasizing the need for tailored, region-specific datasets to ensure the
responsible and safe deployment of LLMs in countries like Kazakhstan. Warning:
this paper contains example data that may be offensive, harmful, or biased.
|
2502.13641
|
SLAMSpoof: Practical LiDAR Spoofing Attacks on Localization Systems
Guided by Scan Matching Vulnerability Analysis
|
cs.RO
|
Accurate localization is essential for enabling modern full self-driving
services. These services heavily rely on map-based traffic information to
reduce uncertainties in recognizing lane shapes, traffic light locations, and
traffic signs. Achieving this level of reliance on map information requires
centimeter-level localization accuracy, which is currently only achievable with
LiDAR sensors. However, LiDAR is known to be vulnerable to spoofing attacks
that emit malicious lasers against LiDAR to overwrite its measurements. Once
localization is compromised, the attack could lead the victim off roads or make
them ignore traffic lights. Motivated by these serious safety implications, we
design SLAMSpoof, the first practical LiDAR spoofing attack on localization
systems for self-driving to assess the actual attack significance on autonomous
vehicles. SLAMSpoof efficiently finds effective attack locations based on
our scan matching vulnerability score (SMVS), a point-wise metric representing
the potential vulnerability to spoofing attacks. To evaluate the effectiveness
of the attack, we conduct real-world experiments on ground vehicles and confirm
its high capability in real-world scenarios, inducing position errors of
$\geq$4.2 meters (more than typical lane width) for all 3 popular LiDAR-based
localization algorithms. We finally discuss the potential countermeasures of
this attack. Code is available at https://github.com/Keio-CSG/slamspoof
|
2502.13645
|
Measuring the Effect of Transcription Noise on Downstream Language
Understanding Tasks
|
cs.CL
|
With the increasing prevalence of recorded human speech, spoken language
understanding (SLU) is essential for its efficient processing. In order to
process the speech, it is commonly transcribed using automatic speech
recognition technology. This speech-to-text transition introduces errors into
the transcripts, which subsequently propagate to downstream NLP tasks, such as
dialogue summarization. While it is known that transcript noise affects
downstream tasks, a systematic approach to analyzing its effects across
different noise severities and types has not been addressed. We propose a
configurable framework for assessing task models in diverse noisy settings, and
for examining the impact of transcript-cleaning techniques. The framework
facilitates the investigation of task model behavior, which can in turn support
the development of effective SLU solutions. We exemplify the utility of our
framework on three SLU tasks and four task models, offering insights regarding
the effect of transcript noise on tasks in general and models in particular.
For instance, we find that task models can tolerate a certain level of noise,
and are affected differently by the types of errors in the transcript.
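A minimal sketch of the kind of configurable noise injector such a framework might expose is shown below; the error model (random deletions plus reversal-as-substitution) is purely illustrative, whereas the actual framework presumably models realistic ASR error types:

```python
import random

def corrupt_transcript(words, wer=0.2, seed=0):
    """Apply word-level deletions and substitutions at an approximate
    target word error rate; a crude stand-in for ASR noise."""
    rng = random.Random(seed)
    noisy = []
    for w in words:
        r = rng.random()
        if r < wer / 2:
            continue                  # deletion
        elif r < wer:
            noisy.append(w[::-1])     # substitution stand-in
        else:
            noisy.append(w)           # kept intact
    return noisy

clean = "the meeting is moved to friday afternoon".split()
noisy = corrupt_transcript(clean, wer=0.3)
```

Sweeping `wer` and feeding each corrupted transcript to a task model is exactly the severity-versus-performance analysis the abstract describes.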
|
2502.13646
|
D.Va: Validate Your Demonstration First Before You Use It
|
cs.CL
|
In-context learning (ICL) has demonstrated significant potential in enhancing
the capabilities of large language models (LLMs) during inference. It is
well established that ICL heavily relies on selecting effective demonstrations
to generate outputs that better align with the expected results. As for
demonstration selection, previous approaches have typically relied on intuitive
metrics to evaluate the effectiveness of demonstrations, which often results in
limited robustness and poor cross-model generalization capabilities. To tackle
these challenges, we propose a novel method, \textbf{D}emonstration
\textbf{VA}lidation (\textbf{D.Va}), which integrates a demonstration
validation perspective into this field. By introducing the demonstration
validation mechanism, our method effectively identifies demonstrations that are
both effective and highly generalizable. \textbf{D.Va} surpasses all existing
demonstration selection techniques across both natural language understanding
(NLU) and natural language generation (NLG) tasks. Additionally, we demonstrate
the robustness and generalizability of our approach across various language
models with different retrieval models.
|
2502.13647
|
Instruction Tuning on Public Government and Cultural Data for
Low-Resource Language: a Case Study in Kazakh
|
cs.CL
|
Instruction tuning in low-resource languages remains underexplored due to
limited text data, particularly in government and cultural domains. To address
this, we introduce and open-source a large-scale (10,600 samples)
instruction-following (IFT) dataset, covering key institutional and cultural
knowledge relevant to Kazakhstan. Our dataset enhances LLMs' understanding of
procedural, legal, and structural governance topics. We employ LLM-assisted
data generation, comparing open-weight and closed-weight models for dataset
construction, and select GPT-4o as the backbone. Each entry in our dataset
undergoes full manual verification to ensure high quality. We also show that
fine-tuning Qwen, Falcon, and Gemma on our dataset leads to consistent
performance improvements in both multiple-choice and generative tasks,
demonstrating the potential of LLM-assisted instruction tuning for low-resource
languages.
|
2502.13648
|
Reliability Across Parametric and External Knowledge: Understanding
Knowledge Handling in LLMs
|
cs.CL
|
Large Language Models (LLMs) enhance their problem-solving capability by
leveraging both parametric and external knowledge. Beyond leveraging external
knowledge to improve response accuracy, they require key capabilities for
reliable knowledge-handling: resolving conflicts between knowledge sources,
avoiding distraction from uninformative external knowledge, and abstaining when
sufficient knowledge is unavailable. Prior studies have examined these
scenarios in isolation or with limited scope. To systematically evaluate these
capabilities, we introduce a comprehensive framework for analyzing
knowledge-handling based on two key dimensions: the presence of parametric
knowledge and the informativeness of external knowledge. Through analysis, we
identify biases in knowledge utilization and examine how the ability to handle
one scenario impacts performance in others. Furthermore, we demonstrate that
training on data constructed based on the knowledge-handling scenarios improves
LLMs' reliability in integrating and utilizing knowledge.
|
2502.13652
|
C2T: A Classifier-Based Tree Construction Method in Speculative Decoding
|
cs.CL cs.AI
|
The growing scale of Large Language Models (LLMs) has exacerbated inference
latency and computational costs. Speculative decoding methods, which aim to
mitigate these issues, often face inefficiencies in the construction of token
trees and the verification of candidate tokens. Existing strategies, including
chain mode, static tree, and dynamic tree approaches, have limitations in
accurately preparing candidate token trees for verification. We propose a novel
method named C2T that adopts a lightweight classifier to generate and prune
token trees dynamically. Our classifier considers additional feature variables
beyond the commonly used joint probability to predict a confidence score for
each draft token, which determines whether the token becomes a candidate for
verification. This method outperforms state-of-the-art (SOTA) methods such as
EAGLE-2 on multiple benchmarks, by reducing the total number of candidate
tokens by 25% while maintaining or even improving the acceptance length.
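A toy sketch of classifier-guided tree pruning in this spirit: a hand-written confidence function (combining joint probability with depth, the kind of extra feature beyond joint probability that the abstract mentions) stands in for the learned classifier, and the draft tree is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class DraftNode:
    token: str
    joint_prob: float
    depth: int
    children: list = field(default_factory=list)

def confidence(node):
    # Stand-in for the learned classifier's confidence score.
    return node.joint_prob * (0.9 ** node.depth)

def prune(node, threshold):
    """Keep a child only if the classifier is confident; pruning a node
    discards its whole subtree, shrinking the verification budget."""
    node.children = [c for c in node.children
                     if confidence(c) >= threshold and prune(c, threshold)]
    return node

root = DraftNode("the", 1.0, 0, [
    DraftNode("cat", 0.6, 1, [DraftNode("sat", 0.5, 2)]),
    DraftNode("dog", 0.05, 1, [DraftNode("ran", 0.04, 2)]),
])
prune(root, threshold=0.1)
```

Only the confident branch survives, so the verifier checks fewer candidate tokens while the likely continuation is preserved.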
|
2502.13653
|
A Query-Driven Approach to Space-Efficient Range Searching
|
cs.DS cs.CG cs.LG
|
We initiate a study of a query-driven approach to designing partition trees
for range-searching problems. Our model assumes that a data structure is to be
built for an unknown query distribution that we can access through a sampling
oracle, and must be selected such that it optimizes a meaningful performance
parameter on expectation. Our first contribution is to show that a near-linear
sample of queries allows the construction of a partition tree with a
near-optimal expected number of nodes visited during querying. We enhance this
approach by treating node processing as a classification problem, leveraging
fast classifiers like shallow neural networks to obtain experimentally
efficient query times. Our second contribution is to develop partition trees
using sparse geometric separators. Our preprocessing algorithm, based on a
sample of queries, builds a balanced tree with nodes associated with separators
that minimize query stabs on expectation; this yields both fast processing of
each node and a small number of visited nodes, significantly reducing query
time.
|
2502.13656
|
Refining Sentence Embedding Model through Ranking Sentences Generation
with Large Language Models
|
cs.CL
|
Sentence embedding is essential for many NLP tasks, with contrastive learning
methods achieving strong performance using annotated datasets like NLI. Yet,
the reliance on manual labels limits scalability. Recent studies leverage large
language models (LLMs) to generate sentence pairs, reducing annotation
dependency. However, they overlook ranking information crucial for fine-grained
semantic distinctions. To tackle this challenge, we propose a method for
controlling the generation direction of LLMs in the latent space. Unlike
unconstrained generation, the controlled approach ensures meaningful semantic
divergence. Then, we refine an existing sentence embedding model by integrating
both ranking and semantic information. Experiments on multiple
benchmarks demonstrate that our method achieves new SOTA performance with a
modest cost in ranking sentence synthesis.
|
2502.13660
|
Towards Invariance to Node Identifiers in Graph Neural Networks
|
cs.LG
|
Message-Passing Graph Neural Networks (GNNs) are known to have limited
expressive power, due to their message passing structure. One mechanism for
circumventing this limitation is to add unique node identifiers (IDs), which
break the symmetries that underlie the expressivity limitation. In this work,
we highlight a key limitation of the ID framework, and propose an approach for
addressing it. We begin by observing that the final output of the GNN should
clearly not depend on the specific IDs used. We then show that in practice this
does not hold, and thus the learned network does not possess this desired
structural property. Such invariance to node IDs may be enforced in several
ways, and we discuss their theoretical properties.
We then propose a novel regularization method that effectively enforces ID
invariance to the network. Extensive evaluations on both real-world and
synthetic tasks demonstrate that our approach significantly improves ID
invariance and, in turn, often boosts generalization performance.
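The flavor of regularizer described can be sketched as a penalty on output variance across resampled ID assignments; the readout function below is a stand-in for a GNN, chosen only to show that outputs can leak the IDs and that the variance term measures it.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 4
x = rng.normal(size=(n, d))          # node features
W = rng.normal(size=(d + 1, 1))      # toy readout weights

def readout(x, ids):
    """Stand-in for a network whose output can depend on the node IDs
    appended to the feature matrix."""
    z = np.concatenate([x, ids[:, None]], axis=1)
    return float(np.tanh(z @ W).sum())

# Invariance penalty: variance of the output across resampled ID
# assignments; driving it to zero during training enforces invariance.
outs = [readout(x, rng.permutation(n).astype(float)) for _ in range(8)]
id_invariance_penalty = float(np.var(outs))
```

In training, a term like this penalty would be added to the task loss so the network learns to ignore which IDs were drawn.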
|
2502.13662
|
Generalization error bound for denoising score matching under relaxed
manifold assumption
|
cs.LG math.ST stat.ML stat.TH
|
We examine theoretical properties of the denoising score matching estimate.
We model the density of observations with a nonparametric Gaussian mixture. We
significantly relax the standard manifold assumption, allowing the samples to
step away from the manifold. At the same time, we are still able to leverage a nice
distribution structure. We derive non-asymptotic bounds on the approximation
and generalization errors of the denoising score matching estimate. The rates
of convergence are determined by the intrinsic dimension. Furthermore, our
bounds remain valid even if we allow the ambient dimension to grow polynomially
with the sample size.
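For reference, the standard denoising score matching objective underlying analyses of this kind (the paper's exact nonparametric Gaussian-mixture setup may differ) is

$$ \mathcal{L}(\theta) = \mathbb{E}_{x \sim p,\ \varepsilon \sim \mathcal{N}(0, I)} \left\| s_\theta(x + \sigma \varepsilon) + \frac{\varepsilon}{\sigma} \right\|^2, $$

whose minimizer approximates the score $\nabla \log p_\sigma$ of the Gaussian-smoothed density; generalization bounds of this kind control how well the empirical minimizer approximates that score.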
|
2502.13663
|
User Association and Coordinated Beamforming in Cognitive
Aerial-Terrestrial Networks: A Safe Reinforcement Learning Approach
|
cs.IT eess.SP math.IT
|
Cognitive aerial-terrestrial networks (CATNs) offer a solution to spectrum
scarcity by sharing spectrum between aerial and terrestrial networks. However,
aerial users (AUs) experience significant interference from numerous
terrestrial base stations (BSs). To alleviate such interference, we investigate
a user association and coordinated beamforming (CBF) problem in CATN, where the
aerial network serves as the primary network sharing its spectrum with the
terrestrial network. Specifically, we maximize the sum rate of the secondary
terrestrial users (TUs) under the interference temperature constraints of the
AUs. Traditional iterative optimization schemes are impractical due to their
high computational complexity and information exchange overhead. Although deep
reinforcement learning (DRL) based schemes can address these challenges, their
performance is sensitive to the weights of the weighted penalty terms for
violating constraints in the reward function. Motivated by these issues, we
propose a safe DRL-based user association and CBF scheme for CATN, eliminating
the need for training multiple times to find the optimal penalty weight before
actual deployment. Specifically, the CATN is modeled as a networked constrained
partially observable Markov game. Each TU acts as an agent to choose its
associated BS, and each BS acts as an agent to decide its beamforming vectors,
aiming to maximize the reward while satisfying the safety constraints
introduced by the interference constraints of the AUs. By exploiting a safe DRL
algorithm, the proposed scheme incurs lower deployment expenses than the
penalty-based DRL schemes since only one training is required before actual
deployment. Simulation results show that the proposed scheme can achieve a
higher sum rate of TUs than a two-stage optimization scheme while the average
received interference power of the AUs is generally below the threshold.
|
2502.13668
|
PeerQA: A Scientific Question Answering Dataset from Peer Reviews
|
cs.CL cs.AI cs.IR
|
We present PeerQA, a real-world, scientific, document-level Question
Answering (QA) dataset. PeerQA questions have been sourced from peer reviews,
which contain questions that reviewers raised while thoroughly examining the
scientific article. Answers have been annotated by the original authors of each
paper. The dataset contains 579 QA pairs from 208 academic articles, with a
majority from ML and NLP, as well as a subset of other scientific communities
like Geoscience and Public Health. PeerQA supports three critical tasks for
developing practical QA systems: Evidence retrieval, unanswerable question
classification, and answer generation. We provide a detailed analysis of the
collected dataset and conduct experiments establishing baseline systems for all
three tasks. Our experiments and analyses reveal the need for
decontextualization in document-level retrieval, where we find that even simple
decontextualization approaches consistently improve retrieval performance
across architectures. On answer generation, PeerQA serves as a challenging
benchmark for long-context modeling, as the papers have an average size of 12k
tokens. Our code and data are available at https://github.com/UKPLab/peerqa.
|
2502.13674
|
SCOPE: A Self-supervised Framework for Improving Faithfulness in
Conditional Text Generation
|
cs.CL
|
Large Language Models (LLMs), when used for conditional text generation,
often produce hallucinations, i.e., information that is unfaithful or not
grounded in the input context. This issue arises in typical conditional text
generation tasks, such as text summarization and data-to-text generation, where
the goal is to produce fluent text based on contextual input. When fine-tuned
on specific domains, LLMs struggle to provide faithful answers to a given
context, often adding information or generating errors. One underlying cause of
this issue is that LLMs rely on statistical patterns learned from their
training data. This reliance can interfere with the model's ability to stay
faithful to a provided context, leading to the generation of ungrounded
information. We build upon this observation and introduce a novel
self-supervised method for generating a training set of unfaithful samples. We
then refine the model using a training process that encourages the generation
of grounded outputs over unfaithful ones, drawing on preference-based training.
Our approach leads to significantly more grounded text generation,
outperforming existing self-supervised techniques in faithfulness, as evaluated
through automatic metrics, LLM-based assessments, and human evaluations.
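The preference-based refinement step can be sketched with the standard DPO loss applied to a (grounded, unfaithful) pair, assuming sequence log-probabilities under the trained and reference models are available; the numbers below are made up for illustration.

```python
import math

def dpo_loss(lp_faithful, lp_unfaithful, ref_faithful, ref_unfaithful,
             beta=0.1):
    """Standard DPO objective on a preference pair: increase the model's
    log-prob margin for the grounded output over the reference margin."""
    margin = beta * ((lp_faithful - ref_faithful)
                     - (lp_unfaithful - ref_unfaithful))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The model already slightly prefers the grounded summary,
# so the loss falls below log(2) (the zero-margin value).
loss = dpo_loss(lp_faithful=-12.0, lp_unfaithful=-13.0,
                ref_faithful=-12.5, ref_unfaithful=-12.5)
```

Pairing each grounded output with a self-supervised unfaithful sample, as the abstract describes, is what supplies the preference pairs this loss consumes.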
|
2502.13675
|
A CFL condition for the finite cell method
|
cs.CE cs.NA math.NA
|
Immersed boundary finite element methods allow the user to bypass the
potentially troublesome task of boundary-conforming mesh generation. However,
they suffer from the influence of cut elements, i.e., elements that are
intersected by the physical domain boundaries. When combined with explicit time
integration, poorly cut elements with little support in the physical domain
have a detrimental effect on the critical time step size, thereby hampering the
application of immersed boundary methods to wave propagation simulations. In
this paper, we investigate the stabilizing effect of the finite cell method
concerning explicit time integration. Starting with an analytical solution of
an example with one degree of freedom, we systematically study the influence of
$\alpha$-stabilization on the maximum eigenvalue and thus on the critical time
step size. The analysis is then complemented by a numerical study of an example
with one element and an increasing polynomial degree. We demonstrate that the
critical time step size does not decrease below a certain limit, even when
further reducing the cut fraction of the element. This minimum critical time
step size is controlled by the chosen $\alpha$ value and becomes less severe
for higher dimensions. Increasing the polynomial degree has little effect on
the degradation of the minimum critical time step size. Finally, we provide an
estimate of the minimum critical time step size depending on the chosen
stabilization parameter $\alpha$ and the dimension of the problem. Based on
this estimate, we propose a modified CFL condition for the finite cell method,
the validity of which we demonstrate on a numerical example of a perforated
plate.
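For context, the critical time step of an undamped explicit central-difference scheme follows from the largest discrete eigenfrequency,

$$ \Delta t \le \Delta t_{\mathrm{crit}} = \frac{2}{\omega_{\max}}, $$

so the observation above amounts, in effect, to an $\alpha$-dependent cap on $\omega_{\max}$ for poorly cut cells; the exact form of the modified CFL condition is given in the paper.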
|
2502.13676
|
An Adaptive Data-Enabled Policy Optimization Approach for Autonomous
Bicycle Control
|
eess.SY cs.RO cs.SY math.OC
|
This paper presents a unified control framework that integrates a Feedback
Linearization (FL) controller in the inner loop with an adaptive Data-Enabled
Policy Optimization (DeePO) controller in the outer loop to balance an
autonomous bicycle. While the FL controller stabilizes and partially linearizes
the inherently unstable and nonlinear system, its performance is compromised by
unmodeled dynamics and time-varying characteristics. To overcome these
limitations, the DeePO controller is introduced to enhance adaptability and
robustness. The initial control policy of DeePO is obtained from a finite set
of offline, persistently exciting input and state data. To improve stability
and compensate for system nonlinearities and disturbances, a
robustness-promoting regularizer refines the initial policy, while the adaptive
component of the DeePO framework is augmented with a forgetting factor to
improve adaptation to time-varying dynamics. The proposed DeePO+FL approach is
evaluated through simulations and real-world experiments on an instrumented
autonomous bicycle. Results demonstrate its superiority over the FL-only
approach, achieving more precise tracking of the reference lean angle and lean
rate.
|
2502.13677
|
A Framework for Semantics-based Situational Awareness during Mobile
Robot Deployments
|
cs.RO
|
Deployment of robots into hazardous environments typically involves a
``Human-Robot Teaming'' (HRT) paradigm, in which a human supervisor interacts
with a remotely operating robot inside the hazardous zone. Situational
Awareness (SA) is vital for enabling HRT, to support navigation, planning, and
decision-making. This paper explores issues of higher-level ``semantic''
information and understanding in SA. In semi-autonomous, or variable-autonomy
paradigms, different types of semantic information may be important, in
different ways, for both the human operator and an autonomous agent controlling
the robot. We propose a generalizable framework for acquiring and combining
multiple modalities of semantic-level SA during remote deployments of mobile
robots. We demonstrate the framework with an example application of search and
rescue (SAR) in disaster response robotics. We propose a set of ``environment
semantic indicators'' that can reflect a variety of different types of semantic
information, e.g. indicators of risk, or signs of human activity, as the robot
encounters different scenes. Based on these indicators, we propose a metric to
describe the overall situation of the environment called ``Situational Semantic
Richness (SSR)''. This metric combines multiple semantic indicators to summarise
the overall situation. The SSR indicates if an information-rich and complex
situation has been encountered, which may require advanced reasoning for robots
and humans and hence the attention of the expert human operator. The framework
is tested on a Jackal robot in a mock-up disaster response environment.
Experimental results demonstrate that the proposed semantic indicators are
sensitive to changes in different modalities of semantic information in
different scenes, and the SSR metric reflects overall semantic changes in the
situations encountered.
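One plausible form of such a combination, a normalized weighted sum over per-modality indicator scores, can be sketched as follows; the indicator names and the weighting scheme are hypothetical, not the paper's exact SSR definition.

```python
def situational_semantic_richness(indicators, weights=None):
    """Combine per-modality semantic indicators (each in [0, 1]) into a
    single scalar summarising the situation; hypothetical SSR form."""
    if weights is None:
        weights = {k: 1.0 for k in indicators}
    total = sum(weights[k] * v for k, v in indicators.items())
    return total / sum(weights.values())

# Hypothetical indicator readings for one scene:
scene = {"risk_signs": 0.8, "human_activity": 0.4, "structural_damage": 0.9}
ssr = situational_semantic_richness(scene)
```

A high value would flag an information-rich scene that warrants the human operator's attention, as described above.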
|
2502.13681
|
An LLM-based Agent for Reliable Docker Environment Configuration
|
cs.SE cs.AI cs.CL cs.LG
|
Environment configuration is a critical yet time-consuming step in software
development, especially when dealing with unfamiliar code repositories. While
Large Language Models (LLMs) demonstrate the potential to accomplish software
engineering tasks, existing methods for environment configuration often rely on
manual efforts or fragile scripts, leading to inefficiencies and unreliable
outcomes. We introduce Repo2Run, the first LLM-based agent designed to fully
automate environment configuration and generate executable Dockerfiles for
arbitrary Python repositories. We address two major challenges: (1) enabling
the LLM agent to configure environments within isolated Docker containers, and
(2) ensuring the successful configuration process is recorded and accurately
transferred to a Dockerfile without error. To achieve this, we propose atomic
configuration synthesis, featuring a dual-environment architecture (internal
and external environment) with a rollback mechanism to prevent environment
"pollution" from failed commands, guaranteeing atomic execution (execute fully
or not at all), and a Dockerfile generator to transfer successful configuration
steps into runnable Dockerfiles. We evaluate Repo2Run on our proposed benchmark
of 420 recent Python repositories with unit tests, where it achieves an 86.0%
success rate, outperforming the best baseline by 63.9%.
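The atomic-execution idea can be illustrated in miniature. The sketch below models the environment as a plain dict and commands as callables (the real system operates on Docker containers); `run_atomic` and the log of successful commands for later Dockerfile generation are hypothetical stand-ins, not Repo2Run's API.

```python
# Minimal sketch of "atomic execution": each configuration command either
# applies fully or leaves the environment untouched, preventing "pollution"
# from failed commands.
import copy

def run_atomic(env, command, successful_log):
    snapshot = copy.deepcopy(env)                # save state before the command
    try:
        command(env)                             # mutate env in place
        successful_log.append(command.__name__)  # record for Dockerfile generation
        return True
    except Exception:
        env.clear()
        env.update(snapshot)                     # roll back: no partial state survives
        return False

env, log = {"packages": []}, []

def install_numpy(e): e["packages"].append("numpy")
def broken_install(e):
    e["packages"].append("half-installed")
    raise RuntimeError("pip failed")

run_atomic(env, install_numpy, log)   # succeeds, gets logged
run_atomic(env, broken_install, log)  # fails, environment rolled back
```

Only the logged, successful commands would then be emitted into the generated Dockerfile.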
|
2502.13685
|
MoM: Linear Sequence Modeling with Mixture-of-Memories
|
cs.CL cs.AI cs.LG
|
Linear sequence modeling methods, such as linear attention, state space
modeling, and linear RNNs, offer significant efficiency improvements by
reducing the complexity of training and inference. However, these methods
typically compress the entire input sequence into a single fixed-size memory
state, which leads to suboptimal performance on recall-intensive downstream
tasks. Drawing inspiration from neuroscience, particularly the brain's ability
to maintain robust long-term memory while mitigating "memory interference", we
introduce a novel architecture called Mixture-of-Memories (MoM). MoM utilizes
multiple independent memory states, with a router network directing input
tokens to specific memory states. This approach greatly enhances the overall
memory capacity while minimizing memory interference. As a result, MoM performs
exceptionally well on recall-intensive tasks, surpassing existing linear
sequence modeling techniques. Despite incorporating multiple memory states, the
computation of each memory state remains linear in complexity, allowing MoM to
retain the linear-complexity advantage during training, while
constant-complexity during inference. Our experimental results show that MoM
significantly outperforms current linear sequence models on downstream language
tasks, particularly recall-intensive tasks, and even achieves performance
comparable to Transformer models. The code is released at
https://github.com/OpenSparseLLMs/MoM and is also released as a part of
https://github.com/OpenSparseLLMs/Linear-MoE.
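A toy numpy sketch of the core mechanism: a router sends each token to one of several independent linear-attention memory states, each updated by an outer product. The top-1 router, the key/value choice, and all shapes are illustrative assumptions rather than the released architecture.

```python
# Mixture-of-memories in miniature: instead of compressing the whole sequence
# into one fixed-size state, a router directs each token to one of several
# independent memory states (M <- M + k v^T), reducing interference.
import numpy as np

rng = np.random.default_rng(0)
d, n_mem, n_tokens = 4, 3, 10
W_router = rng.normal(size=(d, n_mem))   # router projection (toy, untrained)
memories = np.zeros((n_mem, d, d))       # independent memory states

for _ in range(n_tokens):
    x = rng.normal(size=d)
    k, v = x, x                          # keys/values from the token (toy choice)
    slot = np.argmax(x @ W_router)       # top-1 routing decision
    memories[slot] += np.outer(k, v)     # linear-complexity state update

q = rng.normal(size=d)
slot = np.argmax(q @ W_router)
out = memories[slot] @ q                 # read only the routed memory state
```

Each update and read touches a single fixed-size state, which is how the linear training cost and constant inference cost are preserved.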
|
2502.13686
|
Graph Signal Inference by Learning Narrowband Spectral Kernels
|
stat.ML cs.LG
|
While a common assumption in graph signal analysis is the smoothness of the
signals or the band-limitedness of their spectrum, in many instances the
spectrum of real graph data may be concentrated at multiple regions of the
spectrum, possibly including mid-to-high-frequency components. In this work, we
propose a novel graph signal model where the signal spectrum is represented
through the combination of narrowband kernels in the graph frequency domain. We
then present an algorithm that jointly learns the model by optimizing the
kernel parameters and the signal representation coefficients from a collection
of graph signals. Our problem formulation has the flexibility of permitting the
incorporation of signals possibly acquired on different graphs into the
learning algorithm. We then theoretically study the signal reconstruction
performance of the proposed method, by also elaborating on when joint learning
on multiple graphs is preferable to learning an individual model on each graph.
Experimental results on several graph data sets show that the proposed method
offers quite satisfactory signal interpolation accuracy in comparison with a
variety of reference approaches in the literature.
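The signal model can be sketched concretely. The snippet below assumes Gaussian-shaped narrowband kernels on the Laplacian spectrum of a tiny path graph; the kernel form and the graph are illustrative, not the paper's learned parameterization.

```python
# Illustrative sketch: a graph signal whose spectrum is a combination of
# narrowband (here Gaussian) kernels in the graph frequency domain, allowing
# energy at mid-to-high frequencies rather than only a smooth low-pass band.
import numpy as np

# Laplacian of a 4-node path graph (toy example)
L = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
evals, U = np.linalg.eigh(L)                 # graph Fourier basis

def narrowband_kernel(evals, mu, sigma):
    # Gaussian bump centered at graph frequency mu
    return np.exp(-((evals - mu) ** 2) / (2 * sigma ** 2))

g1 = narrowband_kernel(evals, mu=0.5, sigma=0.3)   # low-frequency kernel
g2 = narrowband_kernel(evals, mu=3.0, sigma=0.3)   # high-frequency kernel

# Model: spectrum = c1*g1 + c2*g2; signal = inverse graph Fourier transform
c = np.array([1.0, 0.5])
x = U @ (c[0] * g1 + c[1] * g2)
```

Learning then amounts to fitting the kernel centers/widths and the coefficients `c` from a collection of observed graph signals.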
|
2502.13688
|
Non-Linear Function Computation Broadcast
|
cs.IT math.IT
|
This work addresses the $K$-user computation broadcast problem, which consists of
a master node that holds all datasets, and users with a general class of
function demands, including linear and non-linear functions, over finite
fields. The master node sends a broadcast message to enable each of $K$
distributed users to compute its demanded function in an asymptotically
lossless manner with user's side information. We derive bounds on the optimal
$K$-user computation broadcast rate that allows the users to compute their
demanded functions by capturing the structures of the computations and
available side information. Our achievability scheme involves the design of a
novel graph-based coding model to build a broadcast message to meet each user's
demand, by leveraging the structural dependencies among the datasets, the user
demands, and the side information of each user, drawing on Körner's
characteristic graph framework. The converse uses the structures of the demands
and the side information available at $K$ users to yield a tight lower bound on
the broadcast rate. With the help of examples, we demonstrate our scheme
achieves a better communication rate than the existing state of the art.
|
2502.13691
|
Is This Collection Worth My LLM's Time? Automatically Measuring
Information Potential in Text Corpora
|
cs.CL
|
As large language models (LLMs) converge towards similar capabilities, the
key to advancing their performance lies in identifying and incorporating
valuable new information sources. However, evaluating which text collections
are worth the substantial investment required for digitization, preprocessing,
and integration into LLM systems remains a significant challenge. We present a
novel approach to this challenge: an automated pipeline that evaluates the
potential information gain from text collections without requiring model
training or fine-tuning. Our method generates multiple choice questions (MCQs)
from texts and measures an LLM's performance both with and without access to
the source material. The performance gap between these conditions serves as a
proxy for the collection's information potential. We validate our approach
using three strategically selected datasets: EPFL PhD manuscripts (likely
containing novel specialized knowledge), Wikipedia articles (presumably part of
training data), and a synthetic baseline dataset. Our results demonstrate that
this method effectively identifies collections containing valuable novel
information, providing a practical tool for prioritizing data acquisition and
integration efforts.
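The performance-gap proxy reduces to a few lines. `answer_mcq` below is a hypothetical stand-in for a real LLM call; the toy model only answers correctly when the passage is supplied, so the gap comes out maximal.

```python
# Sketch of the evaluation idea: ask a model the same MCQs with and without
# the source passage; the accuracy gap serves as an information-potential
# proxy for the collection. `answer_mcq` is a stand-in for a real model call.

def information_potential(mcqs, answer_mcq):
    """mcqs: list of (question, options, passage, correct_index)."""
    with_src = sum(answer_mcq(q, o, passage=p) == a for q, o, p, a in mcqs)
    without  = sum(answer_mcq(q, o, passage=None) == a for q, o, p, a in mcqs)
    return (with_src - without) / len(mcqs)      # accuracy gap in [-1, 1]

# Toy model: "knows" the answer only when the passage is shown
def toy_model(question, options, passage=None):
    return 0 if passage is not None else 1

mcqs = [("Q1", ["a", "b"], "ctx", 0), ("Q2", ["a", "b"], "ctx", 0)]
gap = information_potential(mcqs, toy_model)
```

A collection whose content the model already knows (or cannot exploit) would produce a gap near zero, flagging it as a poor digitization investment.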
|
2502.13692
|
Tight Generalization Bounds for Large-Margin Halfspaces
|
cs.LG math.ST stat.TH
|
We prove the first generalization bound for large-margin halfspaces that is
asymptotically tight in the tradeoff between the margin, the fraction of
training points with the given margin, the failure probability and the number
of training points.
|
2502.13693
|
Medical Image Classification with KAN-Integrated Transformers and
Dilated Neighborhood Attention
|
cs.CV
|
Convolutional networks, transformers, hybrid models, and Mamba-based
architectures have demonstrated strong performance across various medical image
classification tasks. However, these methods were primarily designed to
classify clean images using labeled data. In contrast, real-world clinical data
often involve image corruptions that are unique to multi-center studies and
stem from variations in imaging equipment across manufacturers. In this paper,
we introduce the Medical Vision Transformer (MedViTV2), a novel architecture
incorporating Kolmogorov-Arnold Network (KAN) layers into the transformer
architecture for the first time, aiming for generalized medical image
classification. We have developed an efficient KAN block to reduce
computational load while enhancing the accuracy of the original MedViT.
Additionally, to counteract the fragility of our MedViT when scaled up, we
propose an enhanced Dilated Neighborhood Attention (DiNA), an adaptation of the
efficient fused dot-product attention kernel capable of capturing global
context and expanding receptive fields, scaling the model effectively while
addressing feature collapse issues. Moreover, a hierarchical hybrid strategy is
introduced to stack our Local Feature Perception and Global Feature Perception
blocks in an efficient manner, which balances local and global feature
perceptions to boost performance. Extensive experiments on 17 medical image
classification datasets and 12 corrupted medical image datasets demonstrate
that MedViTV2 achieved state-of-the-art results in 27 out of 29 experiments
with reduced computational complexity. MedViTV2 is 44\% more computationally
efficient than the previous version and significantly enhances accuracy,
achieving improvements of 4.6\% on MedMNIST, 5.8\% on NonMNIST, and 13.4\% on
the MedMNIST-C benchmark.
|
2502.13699
|
Secure and Green Rate-Splitting Multiple Access Integrated Sensing and
Communications
|
cs.IT math.IT
|
In this paper, we investigate the sensing, communication, security, and
energy efficiency of integrated sensing and communication (ISAC)-enabled
cognitive radio networks (CRNs) in a challenging scenario where communication
quality, security, and sensing accuracy are affected by interference and
eavesdropping. Specifically, we analyze the communication and sensing signals
of ISAC as well as the communication signal consisting of common and private
streams, based on rate-splitting multiple access (RSMA) in a multicast network.
Then, the sensing signal-to-clutter-plus-noise ratio, the security rate, the
communication rate, and the security energy efficiency (SEE) are derived.
To simultaneously enhance the aforementioned performance metrics,
we formulate a targeted optimization framework that aims to maximize SEE by
jointly optimizing the transmit signal beamforming (BF) vectors and the echo
signal BF vector to construct green interference using the echo signal, as well
as common and private streams split by RSMA to refine security rate and
suppress power consumption, i.e., achieving a higher SEE. Given the non-convex
nature of the optimization problem, we present an alternative approach that
leverages Taylor series expansion, majorization-minimization, semi-definite
programming, and successive convex approximation techniques. Specifically, we
decompose the original non-convex and intractable optimization problem into
three simplified sub-optimization problems, which are iteratively solved using
an alternating optimization strategy. Simulations provide comparisons with
state-of-the-art schemes, highlighting the superiority of the proposed joint
multi-BF optimization scheme based on RSMA and constructed green interference
in improving system performances.
|
2502.13701
|
Causes and Strategies in Multiagent Systems
|
cs.AI cs.MA
|
Causality plays an important role in daily processes, human reasoning, and
artificial intelligence. There has however not been much research on causality
in multi-agent strategic settings. In this work, we introduce a systematic way
to build a multi-agent system model, represented as a concurrent game
structure, for a given structural causal model. In the obtained so-called
causal concurrent game structure, transitions correspond to interventions on
agent variables of the given causal model. The Halpern and Pearl framework of
causality is used to determine the effects of a certain value for an agent
variable on other variables. The causal concurrent game structure allows us to
analyse and reason about causal effects of agents' strategic decisions. We
formally investigate the relation between causal concurrent game structures and
the original structural causal models.
|
2502.13703
|
Parameterized Complexity of Hedonic Games with Enemy-Oriented
Preferences
|
cs.GT cs.MA
|
Hedonic games model settings in which a set of agents have to be partitioned
into groups which we call coalitions. In the enemy aversion model, each agent
has friends and enemies, and an agent prefers to be in a coalition with as few
enemies as possible and, subject to that, as many friends as possible. A
partition should be stable, i.e., no subset of agents prefer to be together
rather than being in their assigned coalition under the partition. We look at
two stability concepts: core stability and strict core stability. This yields
several algorithmic problems: determining whether a (strictly) core stable
partition exists, finding such a partition, and checking whether a given
partition is (strictly) core stable. Several of these problems have been shown
to be NP-complete, or even beyond NP. This motivates the study of parameterized
complexity. We conduct a thorough computational study using several parameters:
treewidth, number of friends, number of enemies, partition size, and coalition
size. We give polynomial algorithms for restricted graph classes as well as FPT
algorithms with respect to the number of friends an agent may have and the
treewidth of the graph representing the friendship or enemy relations. We show
W[1]-hardness or para-NP-hardness with respect to the other parameters.
We conclude this paper with results in the setting in which agents can have
neutral relations with each other, including hardness-results for very
restricted cases.
|
2502.13707
|
Human-Like Robot Impedance Regulation Skill Learning from Human-Human
Demonstrations
|
cs.RO
|
Humans are experts in collaborating with others physically by regulating
compliance behaviors based on the perception of their partner states and the
task requirements. Enabling robots to develop proficiency in human
collaboration skills can facilitate more efficient human-robot collaboration
(HRC). This paper introduces an innovative impedance regulation skill learning
framework for achieving HRC in multiple physical collaborative tasks. The
framework is designed to adjust the robot compliance to the human partner
states while adhering to reference trajectories provided by human-human
demonstrations. Specifically, electromyography (EMG) signals from human muscles
are collected and analyzed to extract limb impedance, representing compliance
behaviors during demonstrations. Human endpoint motions are captured and
represented using a probabilistic learning method to create reference
trajectories and corresponding impedance profiles. Meanwhile, an LSTM-based
module is implemented to develop task-oriented impedance regulation policies by
mapping the muscle synergistic contributions between two demonstrators.
Finally, we propose a whole-body impedance controller for a human-like robot,
coordinating joint outputs to achieve the desired impedance and reference
trajectory during task execution. Experimental validation was conducted through
a collaborative transportation task and two interactive Tai Chi pushing hands
tasks, demonstrating superior performance from the perspective of interactive
forces compared to a constant impedance control method.
|
2502.13708
|
Active Illumination for Visual Ego-Motion Estimation in the Dark
|
cs.RO
|
Visual Odometry (VO) and Visual SLAM (V-SLAM) systems often struggle in
low-light and dark environments due to the lack of robust visual features. In
this paper, we propose a novel active illumination framework to enhance the
performance of VO and V-SLAM algorithms in these challenging conditions. The
developed approach dynamically controls a moving light source to illuminate
highly textured areas, thereby improving feature extraction and tracking.
Specifically, a detector block, which incorporates a deep learning-based
enhancing network, identifies regions with relevant features. Then, a pan-tilt
controller is responsible for guiding the light beam toward these areas, so as
to provide information-rich images to the ego-motion estimation algorithm.
Experimental results on a real robotic platform demonstrate the effectiveness
of the proposed method, showing a reduction in the pose estimation error up to
75% with respect to a traditional fixed lighting technique.
|
2502.13713
|
TALKPLAY: Multimodal Music Recommendation with Large Language Models
|
cs.IR cs.SD eess.AS
|
We present TalkPlay, a multimodal music recommendation system that
reformulates the recommendation task as large language model token generation.
TalkPlay represents music through an expanded token vocabulary that encodes
multiple modalities - audio, lyrics, metadata, semantic tags, and playlist
co-occurrence. Using these rich representations, the model learns to generate
recommendations through next-token prediction on music recommendation
conversations, which requires learning the associations between natural language
queries and responses, as well as music items. In other words, the formulation
transforms music recommendation into a natural language understanding task,
where the model's ability to predict conversation tokens directly optimizes
query-item relevance. Our approach eliminates traditional
recommendation-dialogue pipeline complexity, enabling end-to-end learning of
query-aware music recommendations. In the experiment, TalkPlay is successfully
trained and outperforms baseline methods in various aspects, demonstrating
strong context understanding as a conversational music recommender.
|
2502.13714
|
Hierarchical RL-MPC for Demand Response Scheduling
|
eess.SY cs.SY
|
This paper presents a hierarchical framework for demand response optimization
in air separation units (ASUs) that combines reinforcement learning (RL) with
linear model predictive control (LMPC). We investigate two control
architectures: a direct RL approach and a control-informed methodology where an
RL agent provides setpoints to a lower-level LMPC. The proposed RL-LMPC
framework demonstrates improved sample efficiency during training and better
constraint satisfaction compared to direct RL control. Using an industrial ASU
case study, we show that our approach successfully manages operational
constraints while optimizing electricity costs under time-varying pricing.
Results indicate that the RL-LMPC architecture achieves comparable economic
performance to direct RL while providing better robustness and requiring fewer
training samples to converge. The framework offers a practical solution for
implementing flexible operation strategies in process industries, bridging the
gap between data-driven methods and traditional control approaches.
|
2502.13716
|
Event-Based Video Frame Interpolation With Cross-Modal Asymmetric
Bidirectional Motion Fields
|
cs.CV
|
Video Frame Interpolation (VFI) aims to generate intermediate video frames
between consecutive input frames. Since the event cameras are bio-inspired
sensors that only encode brightness changes with a micro-second temporal
resolution, several works utilized the event camera to enhance the performance
of VFI. However, existing methods estimate bidirectional inter-frame motion
fields with only events or approximations, which cannot capture the complex
motion in real-world scenarios. In this paper, we propose a novel event-based
VFI framework with cross-modal asymmetric bidirectional motion field
estimation. In detail, our EIF-BiOFNet utilizes each valuable characteristic of
the events and images for direct estimation of inter-frame motion fields
without any approximation methods. Moreover, we develop an interactive
attention-based frame synthesis network to efficiently leverage the
complementary warping-based and synthesis-based features. Finally, we build a
large-scale event-based VFI dataset, ERF-X170FPS, with a high frame rate,
extreme motion, and dynamic textures to overcome the limitations of previous
event-based VFI datasets. Extensive experimental results validate that our
method shows significant performance improvement over the state-of-the-art VFI
methods on various datasets. Our project page is available at:
https://github.com/intelpro/CBMNet
|
2502.13718
|
Multi-Scale and Multi-Objective Optimization for Cross-Lingual
Aspect-Based Sentiment Analysis
|
cs.CL
|
Aspect-based sentiment analysis (ABSA) is a sequence labeling task that has
garnered growing research interest in multilingual contexts. However, recent
studies lack robust feature alignment and fine-grained aspect-level alignment. In
this paper, we propose a novel framework, Multi-Scale and Multi-Objective
optimization (MSMO) for cross-lingual ABSA. During multi-scale alignment, we
achieve cross-lingual sentence-level and aspect-level alignment, aligning
features of aspect terms in different contextual environments. Specifically, we
introduce code-switched bilingual sentences into the language discriminator and
consistency training modules to enhance the model's robustness. During
multi-objective optimization, we design two optimization objectives: supervised
training and consistency training, aiming to enhance cross-lingual semantic
alignment. To further improve model performance, we incorporate distilled
knowledge of the target language into the model. Results show that MSMO
significantly enhances cross-lingual ABSA by achieving state-of-the-art
performance across multiple languages and models.
|
2502.13719
|
TrustRAG: An Information Assistant with Retrieval Augmented Generation
|
cs.IR cs.AI
|
Retrieval-Augmented Generation (RAG) has emerged as a crucial technique for enhancing large models with
real-time and domain-specific knowledge. While numerous improvements and
open-source tools have been proposed to refine the RAG framework for
accuracy, relatively little attention has been given to improving the
trustworthiness of generated results. To address this gap, we introduce
TrustRAG, a novel framework that enhances RAG from three perspectives:
indexing, retrieval, and generation. Specifically, in the indexing stage, we
propose a semantic-enhanced chunking strategy that incorporates hierarchical
indexing to supplement each chunk with contextual information, ensuring
semantic completeness. In the retrieval stage, we introduce a utility-based
filtering mechanism to identify high-quality information, supporting answer
generation while reducing input length. In the generation stage, we propose
fine-grained citation enhancement, which detects opinion-bearing sentences in
responses and infers citation relationships at the sentence level, thereby
improving citation accuracy. We open-source the TrustRAG framework and provide
a demonstration studio designed for excerpt-based question answering tasks
(https://huggingface.co/spaces/golaxy/TrustRAG). Based on these, we
aim to help researchers (1) systematically enhance the trustworthiness of
RAG systems and (2) develop their own RAG systems with more
reliable outputs.
|
2502.13721
|
Learning Novel Transformer Architecture for Time-series Forecasting
|
cs.LG cs.CL
|
Despite the success of Transformer-based models in time-series prediction
(TSP) tasks, existing Transformer architectures still face limitations, and
the literature lacks comprehensive exploration of alternative architectures.
To address these challenges, we propose AutoFormer-TS, a novel framework that
leverages a comprehensive search space for Transformer architectures tailored
to TSP tasks. Our framework introduces a differentiable neural architecture
search (DNAS) method, AB-DARTS, which improves upon existing DNAS approaches by
enhancing the identification of optimal operations within the architecture.
AutoFormer-TS systematically explores alternative attention mechanisms,
activation functions, and encoding operations, moving beyond the traditional
Transformer design. Extensive experiments demonstrate that AutoFormer-TS
consistently outperforms state-of-the-art baselines across various TSP
benchmarks, achieving superior forecasting accuracy while maintaining
reasonable training efficiency.
|
2502.13722
|
Deep Learning for VWAP Execution in Crypto Markets: Beyond the Volume
Curve
|
q-fin.ST cs.LG
|
Volume-Weighted Average Price (VWAP) is arguably the most prevalent benchmark
for trade execution as it provides an unbiased standard for comparing
performance across market participants. However, achieving VWAP is inherently
challenging due to its dependence on two dynamic factors, volumes and prices.
Traditional approaches typically focus on forecasting the market's volume
curve, an assumption that may hold true under steady conditions but becomes
suboptimal in more volatile environments or markets such as cryptocurrency
where prediction error margins are higher. In this study, I propose a deep
learning framework that directly optimizes the VWAP execution objective by
bypassing the intermediate step of volume curve prediction. Leveraging
automatic differentiation and custom loss functions, my method calibrates order
allocation to minimize VWAP slippage, thereby fully addressing the complexities
of the execution problem. My results demonstrate that this direct optimization
approach consistently achieves lower VWAP slippage compared to conventional
methods, even when utilizing a naive linear model presented in
arXiv:2410.21448. They validate the observation that strategies optimized for
VWAP performance tend to diverge from accurate volume curve predictions and
thus underscore the advantage of directly modeling the execution objective.
This research contributes a more efficient and robust framework for VWAP
execution in volatile markets, illustrating the potential of deep learning in
complex financial systems where direct objective optimization is crucial.
Although my empirical analysis focuses on cryptocurrency markets, the
underlying principles of the framework are readily applicable to other asset
classes such as equities.
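A minimal numpy sketch of the execution objective being optimized: squared slippage of the achieved price against the market VWAP for a given allocation across time buckets. The data and the squared-loss form are illustrative; in the paper's setting the allocation would be the differentiable output of a network.

```python
# Direct VWAP objective: rather than predicting the volume curve, score an
# order allocation by how far its achieved average price lands from the
# market VWAP. The prices/volumes here are made up.
import numpy as np

prices  = np.array([100.0, 101.0, 99.0, 100.5])   # per-bucket prices
volumes = np.array([ 10.0,  30.0, 40.0,  20.0])   # per-bucket market volumes

def vwap_slippage(alloc, prices, volumes):
    alloc = alloc / alloc.sum()                    # normalize to a schedule
    market_vwap = (prices * volumes).sum() / volumes.sum()
    achieved    = (prices * alloc).sum()
    return (achieved - market_vwap) ** 2           # squared slippage loss

# Trading proportionally to market volume matches the VWAP exactly;
# a flat schedule does not.
loss_pro  = vwap_slippage(volumes, prices, volumes)
loss_flat = vwap_slippage(np.ones(4), prices, volumes)
```

Because the loss is a smooth function of the allocation, gradients can flow through it to whatever model produces the schedule, which is the point of bypassing volume-curve prediction.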
|
2502.13723
|
Direct Value Optimization: Improving Chain-of-Thought Reasoning in LLMs
with Refined Values
|
cs.CL cs.AI
|
We introduce Direct Value Optimization (DVO), an innovative reinforcement
learning framework for enhancing large language models in complex reasoning
tasks. Unlike traditional methods relying on preference labels, DVO utilizes
value signals at individual reasoning steps, optimizing models via a mean
squared error loss. The key benefit of DVO lies in its fine-grained
supervision, circumventing the need for labor-intensive human annotations.
Target values within the DVO are estimated using either Monte Carlo Tree Search
or an outcome value model. Our empirical analysis on both mathematical and
commonsense reasoning tasks shows that DVO consistently outperforms existing
offline preference optimization techniques, even with fewer training steps.
These findings underscore the importance of value signals in advancing
reasoning capabilities and highlight DVO as a superior methodology under
scenarios lacking explicit human preference information.
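The step-wise MSE objective can be written down directly. The arrays below are made-up value estimates and targets; in DVO the targets would come from MCTS or an outcome value model.

```python
# Schematic shape of the DVO loss: mean squared error between a model's
# per-step value estimates and target values for one reasoning trajectory.
import numpy as np

def dvo_step_loss(predicted_values, target_values):
    predicted = np.asarray(predicted_values, dtype=float)
    target = np.asarray(target_values, dtype=float)
    return float(np.mean((predicted - target) ** 2))   # step-wise MSE

# Toy value estimates over four reasoning steps of one chain of thought
loss = dvo_step_loss([0.2, 0.5, 0.7, 0.9], [0.0, 0.5, 1.0, 1.0])
```

Supervising every step this way is what gives the fine-grained signal that preference labels over whole responses cannot provide.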
|
2502.13725
|
Adapting Large Language Models for Time Series Modeling via a Novel
Parameter-efficient Adaptation Method
|
cs.CL
|
Time series modeling holds significant importance in many real-world
applications and has been extensively studied. While pre-trained foundation
models have made impressive strides in the fields of natural language
processing (NLP) and computer vision (CV), their development in time series
domains has been constrained by data sparsity. A series of recent studies have
demonstrated that large language models (LLMs) possess robust pattern
recognition and reasoning abilities over complex sequences of tokens. However,
the current literature has yet to strike a balance between (a)
effectively aligning the time series and natural language modalities and (b)
maintaining inference efficiency. To address these issues, we propose
the Time-LlaMA framework. Time-LlaMA first converts the time series input into
token embeddings through a linear tokenization mechanism. Second, the time
series token embeddings are aligned with the text prompts. Third, to further
adapt the LLM backbone for time series modeling, we have developed a dynamic
low-rank adaptation technique (D-LoRA). D-LoRA dynamically chooses the most
suitable LoRA modules at each layer of the Transformer backbone for each time
series input, enhancing the model's predictive capabilities. Our experimental
results on an extensive collection of challenging real-world time series tasks
confirm that our proposed method achieves the state-of-the-art (SOTA)
performance.
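The linear tokenization step can be sketched as patching plus a single projection; the patch length, embedding size, and non-overlapping patching below are assumptions for illustration, not Time-LlaMA's exact configuration.

```python
# Linear tokenization in miniature: slice the time series into patches and
# map each patch to a token embedding with one linear projection, producing
# a token sequence the LLM backbone can consume.
import numpy as np

rng = np.random.default_rng(0)
series = rng.normal(size=96)                 # toy univariate time series
patch_len, d_model = 16, 8                   # illustrative sizes
W = rng.normal(size=(patch_len, d_model))    # learnable projection (untrained here)

patches = series.reshape(-1, patch_len)      # (6, 16) non-overlapping patches
token_embeddings = patches @ W               # (6, 8) time-series "tokens"
```

These embeddings would then be aligned with the text-prompt embeddings before entering the (D-LoRA-adapted) Transformer layers.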
|
2502.13728
|
Secure Federated Data Distillation
|
cs.CR cs.AI
|
Dataset Distillation (DD) is a powerful technique for reducing large datasets
into compact, representative synthetic datasets, accelerating Machine Learning
training. However, traditional DD methods operate in a centralized manner,
which poses significant privacy threats and reduces its applicability. To
mitigate these risks, we propose a Secure Federated Data Distillation framework
(SFDD) to decentralize the distillation process while preserving privacy. Unlike
existing Federated Distillation techniques that focus on training global models
with distilled knowledge, our approach aims to produce a distilled dataset
without exposing local contributions. We leverage the gradient-matching-based
distillation method, adapting it for a distributed setting where clients
contribute to the distillation process without sharing raw data. The central
aggregator iteratively refines a synthetic dataset by integrating client-side
updates while ensuring data confidentiality. To make our approach resilient to
inference attacks perpetrated by the server that could exploit gradient updates
to reconstruct private data, we create an optimized Local Differential Privacy
approach, called LDPO-RLD (Label Differential Privacy Obfuscation via
Randomized Linear Dispersion). Furthermore, we assess the framework's
resilience against malicious clients executing backdoor attacks and demonstrate
robustness under the assumption of a sufficient number of participating
clients. Our experimental results demonstrate the effectiveness of SFDD and
that the proposed defense concretely mitigates the identified vulnerabilities,
with minimal impact on the performance of the distilled dataset. By addressing
the interplay between privacy and federation in dataset distillation, this work
advances the field of privacy-preserving Machine Learning making our SFDD
framework a viable solution for sensitive data-sharing applications.
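The gradient-matching objective at the heart of the distillation can be sketched as a sum of per-layer cosine distances; the exact matching loss used by SFDD may differ, and the gradients below are toy arrays rather than real model gradients.

```python
# Gradient matching in miniature: drive the synthetic data so that gradients
# computed on it point in the same direction as gradients from real data,
# one term per model layer.
import numpy as np

def gradient_matching_loss(real_grads, syn_grads, eps=1e-12):
    """Sum of cosine distances between corresponding per-layer gradients."""
    loss = 0.0
    for g_r, g_s in zip(real_grads, syn_grads):
        g_r, g_s = g_r.ravel(), g_s.ravel()
        cos = g_r @ g_s / (np.linalg.norm(g_r) * np.linalg.norm(g_s) + eps)
        loss += 1.0 - cos                       # 0 when perfectly aligned
    return loss

g_real  = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
aligned = gradient_matching_loss(g_real, [np.array([2.0, 0.0]), np.array([0.0, 1.0])])
opposed = gradient_matching_loss(g_real, [np.array([-1.0, 0.0]), np.array([0.0, -2.0])])
```

In the federated setting, clients would contribute such gradient terms without sharing raw data, and the aggregator would update the synthetic set against them.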
|
2502.13729
|
Emergence of the Primacy Effect in Structured State-Space Models
|
cs.LG cs.NE q-bio.NC
|
Human and animal memory for sequentially presented items is well-documented
to be more accurate for those at the beginning and end of the sequence,
phenomena known as the primacy and recency effects, respectively. By contrast,
artificial neural network (ANN) models are typically designed with a memory
that decays monotonically over time. Accordingly, ANNs are expected to show the
recency effect but not the primacy effect. Contrary to this theoretical
expectation, however, the present study reveals a counterintuitive finding: a
recently developed ANN architecture, called structured state-space models,
exhibits the primacy effect when trained and evaluated on a synthetic task that
mirrors psychological memory experiments. Given that this model was originally
designed for recovering neuronal activity patterns observed in biological
brains, this result provides a novel perspective on the psychological primacy
effect while also posing a non-trivial puzzle for the current theories in
machine learning.
|
2502.13730
|
Cascading CMA-ES Instances for Generating Input-diverse Solution Batches
|
cs.NE
|
Rather than obtaining a single good solution for a given optimization
problem, users often seek alternative design choices, because the best-found
solution may perform poorly with respect to additional objectives or
constraints that are difficult to capture into the modeling process.
Aiming for batches of diverse solutions of high quality is often desirable,
as it provides flexibility to accommodate post-hoc user preferences. At the
same time, it is crucial that the quality of the best solution found is not
compromised.
One particular problem setting balancing high quality and diversity is fixing
the required minimum distance between solutions while simultaneously obtaining
the best possible fitness. Recent work by Santoni et al. [arXiv 2024] revealed
that this setting is not well addressed by state-of-the-art algorithms, which
perform on par with or worse than pure random sampling.
Driven by this important limitation, we propose a new approach, where
parallel runs of the covariance matrix adaptation evolution strategy (CMA-ES)
inherit tabu regions in a cascading fashion. We empirically demonstrate that
our CMA-ES-Diversity Search (CMA-ES-DS) algorithm generates trajectories that
allow the extraction of high-quality solution batches that respect a given minimum
distance requirement, clearly outperforming those obtained from off-the-shelf
random sampling, multi-modal optimization algorithms, and standard CMA-ES.
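The cascading tabu idea above can be sketched in a few lines. This is only an illustrative simplification: a (1+1)-ES stands in for full CMA-ES, the sphere objective, search box, step size, and minimum distance of 0.5 are all invented, and it assumes the inherited tabu balls never cover the whole search box.

```python
import random, math

def sphere(x):  # toy objective to minimise (not from the paper)
    return sum(xi * xi for xi in x)

def es_run(f, dim, tabu, min_dist, iters=2000, sigma=0.3, seed=0):
    """Simplified (1+1)-ES stand-in for CMA-ES: candidates inside any
    inherited tabu ball (radius min_dist around earlier solutions) are
    rejected. Assumes the tabu balls do not cover the search box."""
    rng = random.Random(seed)

    def feasible(x):
        return all(math.dist(x, t) >= min_dist for t in tabu)

    # Draw a feasible random starting point.
    while True:
        x = [rng.uniform(-2, 2) for _ in range(dim)]
        if feasible(x):
            break
    fx = f(x)
    for _ in range(iters):
        y = [xi + rng.gauss(0, sigma) for xi in x]
        if feasible(y):
            fy = f(y)
            if fy < fx:
                x, fx = y, fy
    return x

def cascade(f, dim, batch_size, min_dist):
    """Each run inherits the tabu regions of all earlier runs (the cascade)."""
    batch = []
    for i in range(batch_size):
        batch.append(es_run(f, dim, tabu=batch, min_dist=min_dist, seed=i))
    return batch

batch = cascade(sphere, dim=2, batch_size=3, min_dist=0.5)
# Every pair of returned solutions respects the minimum distance by construction.
print(all(math.dist(a, b) >= 0.5 for a in batch for b in batch if a is not b))
```

Because feasibility is enforced during the search rather than checked afterwards, the minimum-distance constraint holds for every pair in the batch by construction.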
|
2502.13731
|
Robust Counterfactual Inference in Markov Decision Processes
|
cs.AI
|
This paper addresses a key limitation in existing counterfactual inference
methods for Markov Decision Processes (MDPs). Current approaches assume a
specific causal model to make counterfactuals identifiable. However, there are
usually many causal models that align with the observational and interventional
distributions of an MDP, each yielding different counterfactual distributions,
so fixing a particular causal model limits the validity (and usefulness) of
counterfactual inference. We propose a novel non-parametric approach that
computes tight bounds on counterfactual transition probabilities across all
compatible causal models. Unlike previous methods that require solving
prohibitively large optimisation problems (with variables that grow
exponentially in the size of the MDP), our approach provides closed-form
expressions for these bounds, making computation highly efficient and scalable
for non-trivial MDPs. Once such an interval counterfactual MDP is constructed,
our method identifies robust counterfactual policies that optimise the
worst-case reward w.r.t. the uncertain interval MDP probabilities. We evaluate
our method on various case studies, demonstrating improved robustness over
existing methods.
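The robust-planning step described above can be illustrated with value iteration on an interval MDP, where each transition probability is only known to lie in an interval and nature picks the worst case. The tiny two-state MDP, its rewards, and its intervals are invented for illustration; the paper's closed-form counterfactual bounds themselves are not reproduced here.

```python
def worst_case_expectation(values, intervals):
    """Distribution inside the intervals minimising the expected value:
    start at the lower bounds, then assign the remaining mass greedily to
    the lowest-value states. Assumes sum of lowers <= 1 <= sum of uppers."""
    p = [lo for lo, _ in intervals]
    slack = 1.0 - sum(p)
    for i in sorted(range(len(values)), key=lambda i: values[i]):
        add = min(intervals[i][1] - p[i], slack)
        p[i] += add
        slack -= add
    return sum(pi * v for pi, v in zip(p, values))

def robust_value_iteration(states, actions, reward, intervals, gamma=0.9, iters=200):
    """Worst-case Bellman backups over the interval transition probabilities."""
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        V = {s: max(reward[s][a] + gamma * worst_case_expectation(
                        [V[t] for t in states], intervals[s][a])
                    for a in actions)
             for s in states}
    return V

states, actions = [0, 1], ["a", "b"]
reward = {0: {"a": 1.0, "b": 0.0}, 1: {"a": 0.0, "b": 2.0}}
# intervals[s][a][t] = (lower, upper) bound on P(t | s, a) -- made-up numbers.
intervals = {
    0: {"a": [(0.4, 0.7), (0.3, 0.6)], "b": [(0.1, 0.5), (0.5, 0.9)]},
    1: {"a": [(0.2, 0.8), (0.2, 0.8)], "b": [(0.0, 0.3), (0.7, 1.0)]},
}
V = robust_value_iteration(states, actions, reward, intervals)
print(all(v > 0 for v in V.values()))
```

The greedy inner step is exact for this polytope because the only coupling between the probabilities is the simplex constraint.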
|
2502.13732
|
Homophily Heterogeneity Matters in Graph Federated Learning: A Spectrum
Sharing and Complementing Perspective
|
cs.LG
|
Since heterogeneity presents a fundamental challenge in graph federated
learning, many existing methods are proposed to deal with node feature
heterogeneity and structure heterogeneity. However, they overlook the critical
homophily heterogeneity, which refers to the substantial variation in homophily
levels across graph data from different clients. The homophily level represents
the proportion of edges connecting nodes that belong to the same class.
Because local models adapt to their clients' local homophily levels, they
capture inconsistent spectral properties across clients, significantly
reducing the effectiveness
of collaboration. Specifically, local models trained on graphs with high
homophily tend to capture low-frequency information, whereas local models
trained on graphs with low homophily tend to capture high-frequency
information. To effectively deal with homophily heterogeneity, we introduce the
spectral Graph Neural Network (GNN) and propose a novel Federated learning
method by mining Graph Spectral Properties (FedGSP). On one hand, our proposed
FedGSP enables clients to share generic spectral properties (i.e.,
low-frequency information), allowing all clients to benefit through
collaboration. On the other hand, inspired by our theoretical findings, our
proposed FedGSP allows clients to complement non-generic spectral properties by
acquiring the spectral properties they lack (i.e., high-frequency information),
thereby obtaining additional information gain. Extensive experiments conducted
on six homophilic and five heterophilic graph datasets, across both
non-overlapping and overlapping settings, validate the superiority of our
method over eleven state-of-the-art methods. Notably, our FedGSP outperforms
the second-best method by an average margin of 3.28% on all heterophilic
datasets.
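The homophily level defined above (the fraction of edges whose endpoints share a class) is straightforward to compute. The five-node graph and labels below are invented for illustration.

```python
def homophily_level(edges, labels):
    """edges: list of (u, v) pairs; labels: dict node -> class.
    Returns the fraction of edges connecting same-class nodes."""
    if not edges:
        return 0.0
    same = sum(1 for u, v in edges if labels[u] == labels[v])
    return same / len(edges)

# A small 5-node graph: nodes 0-2 in class "A", nodes 3-4 in class "B".
labels = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B"}
high_homophily = [(0, 1), (1, 2), (3, 4), (0, 2)]  # only within-class edges
low_homophily = [(0, 3), (1, 4), (2, 3), (0, 4)]   # only cross-class edges

print(homophily_level(high_homophily, labels))  # 1.0
print(homophily_level(low_homophily, labels))   # 0.0
```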
|
2502.13734
|
CARE: Confidence-Aware Regression Estimation of building density
fine-tuning EO Foundation Models
|
cs.CV cs.LG
|
Performing accurate confidence quantification and assessment is important for
deep neural networks to predict their failures, improve their performance, and
enhance their capabilities for practical real-world deployment. For pixel-wise
regression tasks, confidence
quantification and assessment has not been well addressed in the literature, in
contrast to classification tasks like semantic segmentation. The softmax output
layer is not used in deep neural networks that solve pixel-wise regression
problems. In this paper, to address these problems, we develop, train and
evaluate the proposed model Confidence-Aware Regression Estimation (CARE). Our
model CARE computes and assigns confidence to regression output results. We
focus on solving regression problems as downstream tasks of an AI Foundation
Model for Earth Observation (EO). We evaluate the proposed model CARE on data
from the Copernicus Sentinel-2 satellite constellation for estimating building
density; the experimental results show that the proposed method can be
successfully applied to regression problems. We also show that
our approach outperforms other methods.
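The abstract does not spell out CARE's confidence mechanism, so the sketch below only illustrates one generic route to per-pixel confidence for regression: run an ensemble of predictors and turn per-pixel disagreement (variance) into a confidence score. All numbers and the confidence formula are invented.

```python
import statistics

def pixelwise_confidence(ensemble_preds):
    """ensemble_preds: list of per-pixel prediction lists (one per model).
    Returns (mean prediction, confidence in (0, 1]) per pixel, with
    confidence = 1 / (1 + variance): low spread -> high confidence."""
    n_pixels = len(ensemble_preds[0])
    means, confs = [], []
    for p in range(n_pixels):
        vals = [pred[p] for pred in ensemble_preds]
        means.append(statistics.fmean(vals))
        confs.append(1.0 / (1.0 + statistics.pvariance(vals)))
    return means, confs

# Three toy "models" predicting building density on 4 pixels: they agree on
# pixel 0 and disagree strongly on pixel 3.
preds = [
    [0.10, 0.50, 0.30, 0.10],
    [0.10, 0.55, 0.35, 0.90],
    [0.10, 0.45, 0.40, 0.50],
]
means, confs = pixelwise_confidence(preds)
print(confs[0] > confs[3])  # agreement on pixel 0 -> higher confidence
```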
|
2502.13737
|
PEDRO-V: From a concurrent engineering case study to a promising phase
zero mission definition
|
eess.SY cs.SY
|
Each year, the European Space Agency (ESA) organizes challenges for
university students, from BSc to PhD levels. The ESA Concurrent Engineering
Challenge 2024 was hosted by four Concurrent Design Facilities (CDF) across
Europe: ESEC Galaxia, ISAE SUPAERO, the University of Athens, and the
University of Portsmouth. A total of 102 students participated in the event.
Over five days, students worked on a feasibility study for a space mission,
simulating ESA's design session at ESTEC, the ESA headquarters. Students were
divided into specialized groups based on their backgrounds, reflecting ESA's
concurrent engineering teams. This paper discusses the design of subsystems by
students, their trade-off results, and the outcomes of the CDF study. It
highlights the effectiveness of concurrent engineering, which enabled rapid and
efficient results even from non-expert teams. The future development roadmap
and lessons learned are also presented. The students used CDP4-Comet software
within the replicated ESA CDF, resulting in the PEDRO-V mission proposal:
Planetary Exploration Deployment and Research Operation - Venus. The teams
collaboratively defined the Concept of Operations, identified actors,
worst-case scenarios, use cases, and activities. Their output included a list
of requirements, a draft product breakdown structure, and key subsystems
information. The concurrent engineering process led to continuous improvement
and convergence of key parameters. This approach proved to be effective by
aligning different teams' solutions and comparing them to similar missions. The
PEDRO-V mission feasibility was confirmed, demonstrating the potential of
concurrent engineering in academic settings for space missions. (summarized
with AI)
|
2502.13738
|
Enhancing Input-Label Mapping in In-Context Learning with Contrastive
Decoding
|
cs.CL
|
Large language models (LLMs) excel at a range of tasks through in-context
learning (ICL), where only a few task examples guide their predictions.
However, prior research highlights that LLMs often overlook input-label mapping
information in ICL, relying more on their pre-trained knowledge. To address
this issue, we introduce In-Context Contrastive Decoding (ICCD), a novel method
that emphasizes input-label mapping by contrasting the output distributions
between positive and negative in-context examples. Experiments on 7 natural
language understanding (NLU) tasks show that our ICCD method brings consistent
and significant improvement (up to +2.1 improvement on average) upon 6
different scales of LLMs without requiring additional training. Our approach is
versatile, enhancing performance with various demonstration selection methods,
demonstrating its broad applicability and effectiveness. The code and scripts
will be publicly released.
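The contrastive idea described above can be sketched on a toy vocabulary (this is not the paper's implementation): given next-token log-probabilities from a prompt with correctly labelled in-context examples ("positive") and one with corrupted labels ("negative"), discount whatever the model would predict without the input-label mapping. The weight `alpha` and all probabilities are made up.

```python
import math

def contrastive_scores(logp_pos, logp_neg, alpha=0.5):
    """Return renormalised scores log p_pos - alpha * log p_neg."""
    scores = {t: logp_pos[t] - alpha * logp_neg[t] for t in logp_pos}
    z = math.log(sum(math.exp(s) for s in scores.values()))
    return {t: s - z for t, s in scores.items()}

# Toy two-label vocabulary. The positive context slightly prefers "yes"; the
# negative (label-corrupted) context prefers it even more, i.e. that
# preference does NOT come from the input-label mapping.
logp_pos = {"yes": math.log(0.6), "no": math.log(0.4)}
logp_neg = {"yes": math.log(0.8), "no": math.log(0.2)}

out = contrastive_scores(logp_pos, logp_neg)
# After contrasting, "no" wins: the part of "yes"'s probability that the
# mapping-free context already explains has been discounted.
print(max(out, key=out.get))
```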
|
2502.13740
|
Benchmarking of Different YOLO Models for CAPTCHAs Detection and
Classification
|
cs.CV
|
This paper provides an analysis and comparison of the YOLOv5, YOLOv8 and
YOLOv10 models for webpage CAPTCHAs detection using the datasets collected from
the web and darknet, as well as synthesized data of webpages. The study
examines the nano (n), small (s), and medium (m) variants of the YOLO
architectures and uses metrics such as Precision, Recall, F1 score, mAP@50 and
inference speed to determine their real-life utility. Additionally, the possibility of tuning the
trained model to detect new CAPTCHA patterns efficiently was examined as it is
a crucial part of real-life applications. The image slicing method was proposed
as a way to improve the metrics of detection on oversized input images which
can be a common scenario in webpage analysis. The nano models achieved the
best results in terms of speed, while more complex architectures scored better
on the other metrics.
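The image-slicing idea mentioned above amounts to tiling an oversized image into fixed-size overlapping windows, detecting per window, and shifting boxes back by each window's offset. Tile size 640 and overlap 64 are arbitrary choices here, and the detector is a stub rather than a YOLO model.

```python
def make_slices(width, height, tile=640, overlap=64):
    """Yield (x0, y0, x1, y1) slice windows covering the full image."""
    step = tile - overlap
    xs = list(range(0, max(width - tile, 0) + 1, step))
    ys = list(range(0, max(height - tile, 0) + 1, step))
    if xs[-1] + tile < width:   # make sure the right edge is covered
        xs.append(width - tile)
    if ys[-1] + tile < height:  # make sure the bottom edge is covered
        ys.append(height - tile)
    for y in ys:
        for x in xs:
            yield (x, y, min(x + tile, width), min(y + tile, height))

def detect_full_image(width, height, detector, tile=640, overlap=64):
    """Run `detector(window)` on every slice and shift slice-local boxes
    back to global image coordinates."""
    boxes = []
    for x0, y0, x1, y1 in make_slices(width, height, tile, overlap):
        for bx0, by0, bx1, by1 in detector((x0, y0, x1, y1)):
            boxes.append((bx0 + x0, by0 + y0, bx1 + x0, by1 + y0))
    return boxes

# Stub detector returning one slice-local box per window.
stub = lambda win: [(10, 10, 50, 30)]
boxes = detect_full_image(1500, 900, stub)
print(len(list(make_slices(1500, 900))))
```

In practice overlapping slices produce duplicate detections near slice borders, so a global non-maximum-suppression pass would normally follow the coordinate remapping.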
|
2502.13743
|
Inference of Abstraction for Grounded Predicate Logic
|
cs.AI
|
An important open question in AI is what simple and natural principle enables
a machine to reason logically for meaningful abstraction with grounded symbols.
This paper explores a conceptually new approach to combining probabilistic
reasoning and predicative symbolic reasoning over data. We return to the era of
reasoning with a full joint distribution before the advent of Bayesian
networks. We then discuss that a full joint distribution over models of
exponential size in propositional logic and of infinite size in predicate logic
should be simply derived from a full joint distribution over data of linear
size. We show that the same process is not only enough to generalise the
logical consequence relation of predicate logic but also to provide a new
perspective to rethink well-known limitations such as the undecidability of
predicate logic, the symbol grounding problem and the principle of explosion.
The reproducibility of this theoretical work is fully demonstrated by the
included proofs.
|
2502.13747
|
Reverse Markov Learning: Multi-Step Generative Models for Complex
Distributions
|
cs.LG stat.ME stat.ML
|
Learning complex distributions is a fundamental challenge in contemporary
applications. Generative models, such as diffusion models, have demonstrated
remarkable success in overcoming many limitations of traditional statistical
methods. Shen and Meinshausen (2024) introduced engression, a generative
approach based on scoring rules that maps noise (and covariates, if available)
directly to data. While effective, engression struggles with highly complex
distributions, such as those encountered in image data. In this work, we extend
engression to improve its capability in learning complex distributions. We
propose a framework that defines a general forward process transitioning from
the target distribution to a known distribution (e.g., Gaussian) and then
learns a reverse Markov process using multiple engression models. This reverse
process reconstructs the target distribution step by step. Our approach
supports general forward processes, allows for dimension reduction, and
naturally discretizes the generative process. As a special case, when using a
diffusion-based forward process, our framework offers a method to discretize
the training and inference of diffusion models efficiently. Empirical
evaluations on simulated and climate data validate our theoretical insights,
demonstrating the effectiveness of our approach in capturing complex
distributions.
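The forward/reverse discretisation described above can be illustrated on a 1-D toy problem. Heavy caveat: each "engression model" below is replaced by a deterministic closed-form least-squares regressor (real engression learns a full conditional distribution via scoring rules), and the step count, noise scale, and target distribution are all invented.

```python
import random, statistics

random.seed(0)

K, N, SIGMA = 4, 5000, 0.5
data = [random.gauss(2.0, 0.3) for _ in range(N)]  # toy target distribution

# Forward process: K steps of additive Gaussian noise; record (x_t, x_{t-1})
# pairs for each step so a reverse model can be fit per step.
pairs = [[] for _ in range(K)]
for x0 in data:
    x = x0
    for t in range(K):
        x_next = x + random.gauss(0.0, SIGMA)
        pairs[t].append((x_next, x))
        x = x_next

def fit_linear(ps):
    """Closed-form 1-D least squares: predict x_{t-1} from x_t."""
    xs = [a for a, _ in ps]
    ys = [b for _, b in ps]
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((a - mx) * (b - my) for a, b in ps)
    var = sum((a - mx) ** 2 for a in xs)
    slope = cov / var
    return lambda x, s=slope, i=my - slope * mx: s * x + i

reverse_models = [fit_linear(pairs[t]) for t in range(K)]

# Generation: sample from the (known) terminal distribution, then walk the
# reverse Markov chain step by step back to the data distribution.
samples = []
for _ in range(2000):
    x = random.gauss(2.0, (0.3 ** 2 + K * SIGMA ** 2) ** 0.5)
    for t in reversed(range(K)):
        x = reverse_models[t](x)
    samples.append(x)

print(abs(statistics.fmean(samples) - 2.0) < 0.1)
```

The deterministic stand-in recovers the mean but collapses the spread, which is exactly why the framework fits a distributional (engression) model per step rather than a conditional-mean regressor.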
|
2502.13751
|
RobustX: Robust Counterfactual Explanations Made Easy
|
cs.LG cs.AI
|
The increasing use of Machine Learning (ML) models to aid decision-making in
high-stakes industries demands explainability to facilitate trust.
Counterfactual Explanations (CEs) are ideally suited for this, as they can
offer insights into the predictions of an ML model by illustrating how changes
in its input data may lead to different outcomes. However, for CEs to realise
their explanatory potential, significant challenges remain in ensuring their
robustness under slight changes in the scenario being explained. Despite the
widespread recognition of CEs' robustness as a fundamental requirement, a lack
of standardised tools and benchmarks hinders a comprehensive and effective
comparison of robust CE generation methods. In this paper, we introduce
RobustX, an open-source Python library implementing a collection of CE
generation and evaluation methods, with a focus on the robustness property.
RobustX provides interfaces to several existing methods from the literature,
enabling streamlined access to state-of-the-art techniques. The library is also
easily extensible, allowing fast prototyping of novel robust CE generation and
evaluation methods.
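The robustness property the library targets can be sketched without its API (see RobustX itself for the real interfaces): a counterfactual explanation should remain valid when the model changes slightly. Below, the "model" is a made-up 2-feature linear classifier, the CE is the minimal move along the weight vector (optionally pushed a safety margin past the boundary), and robustness is measured as validity under small random weight perturbations.

```python
import random

random.seed(1)
W, B = [1.0, -2.0], 0.5  # toy linear model: class = sign(w.x + b)

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def counterfactual(w, b, x, margin=0.0):
    """Smallest move along w that flips the prediction to class 1, pushed
    `margin` past the decision boundary."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    if score > margin:
        return list(x)
    step = (margin - score + 1e-9) / sum(wi * wi for wi in w)
    return [xi + step * wi for xi, wi in zip(x, w)]

def robustness(w, b, ce, trials=200, eps=0.1):
    """Fraction of slightly perturbed models on which the CE is still class 1."""
    ok = 0
    for _ in range(trials):
        wp = [wi + random.uniform(-eps, eps) for wi in w]
        bp = b + random.uniform(-eps, eps)
        ok += predict(wp, bp, ce)
    return ok / trials

x = [-2.0, 1.0]                                  # currently predicted class 0
fragile = counterfactual(W, B, x, margin=0.0)    # just across the boundary
robust = counterfactual(W, B, x, margin=0.5)     # with a safety margin
print(robustness(W, B, robust) >= robustness(W, B, fragile))
```

The margin-0 CE sits exactly on the boundary, so roughly half of the perturbed models reject it, while the margin-0.5 CE survives every perturbation of this size: the basic trade-off robust CE generation methods navigate.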
|