| id | title | categories | abstract |
|---|---|---|---|
2502.12902
|
Probabilistic neural operators for functional uncertainty quantification
|
cs.LG
|
Neural operators aim to approximate the solution operator of a system of
differential equations purely from data. They have shown immense success in
modeling complex dynamical systems across various domains. However, the
occurrence of uncertainties inherent in both model and data has so far rarely
been taken into account -- a critical limitation in complex, chaotic
systems such as weather forecasting. In this paper, we introduce the
probabilistic neural operator (PNO), a framework for learning probability
distributions over the output function space of neural operators. PNO extends
neural operators with generative modeling based on strictly proper scoring
rules, integrating uncertainty information directly into the training process.
We provide a theoretical justification for the approach and demonstrate
improved performance in quantifying uncertainty across different domains and
with respect to different baselines. Furthermore, PNO requires minimal
adjustment to existing architectures, shows improved performance for most
probabilistic prediction tasks, and leads to well-calibrated predictive
distributions and adequate uncertainty representations even for long dynamical
trajectories. Implementing our approach into large-scale models for physical
applications can lead to improvements in corresponding uncertainty
quantification and extreme event identification, ultimately leading to a deeper
understanding of the prediction of such surrogate models.
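As an illustration of training with a strictly proper scoring rule, the sketch below estimates the CRPS of an ensemble forecast against an observation. The PNO loss itself is not reproduced in the abstract, so treat this as a generic example of the scoring-rule idea rather than the paper's method.

```python
import math

def crps_ensemble(samples, y):
    """Monte-Carlo estimate of the CRPS (a strictly proper scoring rule)
    for an ensemble of scalar samples against an observation y."""
    m = len(samples)
    spread_to_obs = sum(abs(x - y) for x in samples) / m
    internal_spread = sum(abs(a - b) for a in samples for b in samples) / (2 * m * m)
    return spread_to_obs - internal_spread

# A sharp, well-centered ensemble scores lower (better) than a biased one:
good = crps_ensemble([0.9, 1.0, 1.1], y=1.0)
bad = crps_ensemble([2.9, 3.0, 3.1], y=1.0)
print(good < bad)  # True
```

Minimizing such a score during training rewards both calibration and sharpness of the predictive distribution.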
|
2502.12904
|
Fraud-R1: A Multi-Round Benchmark for Assessing the Robustness of LLM
Against Augmented Fraud and Phishing Inducements
|
cs.CL
|
We introduce Fraud-R1, a benchmark designed to evaluate LLMs' ability to
defend against internet fraud and phishing in dynamic, real-world scenarios.
Fraud-R1 comprises 8,564 fraud cases sourced from phishing scams, fake job
postings, social media, and news, categorized into 5 major fraud types. Unlike
previous benchmarks, Fraud-R1 introduces a multi-round evaluation pipeline to
assess LLMs' resistance to fraud at different stages, including credibility
building, urgency creation, and emotional manipulation. Furthermore, we
evaluate 15 LLMs under two settings: 1. Helpful-Assistant, where the LLM
provides general decision-making assistance, and 2. Role-play, where the model
assumes a specific persona, widely used in real-world agent-based interactions.
Our evaluation reveals the significant challenges in defending against fraud
and phishing inducement, especially in role-play settings and fake job
postings. Additionally, we observe a substantial performance gap between
Chinese and English, underscoring the need for improved multilingual fraud
detection capabilities.
|
2502.12908
|
Graph Neural Networks for Databases: A Survey
|
cs.DB cs.AI
|
Graph neural networks (GNNs) are powerful deep learning models for
graph-structured data, demonstrating remarkable success across diverse domains.
Recently, the database (DB) community has increasingly recognized the
potential of GNNs, prompting a surge of research on improving
database systems through GNN-based approaches. However, despite notable
advances, there is still a lack of a comprehensive review and understanding of
how GNNs could improve DB systems. Therefore, this survey aims to bridge this gap
by providing a structured and in-depth overview of GNNs for DB systems.
Specifically, we propose a new taxonomy that classifies existing methods into
two key categories: (1) Relational Databases, which includes tasks like
performance prediction, query optimization, and text-to-SQL, and (2) Graph
Databases, addressing challenges like efficient graph query processing and
graph similarity computation. We systematically review key methods in each
category, highlighting their contributions and practical implications. Finally,
we suggest promising avenues for integrating GNNs into Database systems.
|
2502.12911
|
Knapsack Optimization-based Schema Linking for LLM-based Text-to-SQL
Generation
|
cs.CL cs.DB
|
Generating SQLs from user queries is a long-standing challenge, where the
accuracy of initial schema linking significantly impacts subsequent SQL
generation performance. However, current schema linking models still struggle
with missing relevant schema elements or an excess of redundant ones. A crucial
reason for this is that the commonly used metrics, recall and precision, fail to
capture the omission of relevant elements and thus cannot reflect actual schema linking
performance. Motivated by this, we propose an enhanced schema linking metric by
introducing a restricted missing indicator. Accordingly, we introduce Knapsack
optimization-based Schema Linking Agent (KaSLA), a plug-in schema linking agent
designed to prevent the missing of relevant schema elements while minimizing
the inclusion of redundant ones. KaSLA employs a hierarchical linking strategy
that first identifies the optimal table linking and subsequently links columns
within the selected table to reduce the linking candidate space. In each
linking step, it utilizes a knapsack optimization approach to link potentially
relevant elements while tolerating a limited number of potentially redundant
ones. With this optimization, KaSLA-1.6B achieves superior schema linking
results compared to large-scale LLMs, including DeepSeek-V3 equipped with a
state-of-the-art (SOTA) schema linking method. Extensive experiments on Spider
and BIRD benchmarks verify that KaSLA can significantly improve the SQL
generation performance of SOTA text-to-SQL models by substituting their schema
linking processes.
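To make the knapsack framing concrete, here is a minimal 0/1 knapsack sketch that selects schema elements by a relevance score under a cost budget. The element names, scores, and costs are hypothetical, and KaSLA's actual formulation and scoring model are not reproduced here.

```python
def knapsack_link(elements, budget):
    """Select schema elements (name, relevance, cost) maximizing total
    relevance under a cost budget -- a 0/1 knapsack sketch of the idea."""
    # dp[c] = (best score, chosen names) achievable with total cost <= c
    dp = [(0.0, [])] * (budget + 1)
    for name, score, cost in elements:
        for c in range(budget, cost - 1, -1):  # iterate downward: 0/1 semantics
            cand = (dp[c - cost][0] + score, dp[c - cost][1] + [name])
            if cand[0] > dp[c][0]:
                dp[c] = cand
    return dp[budget]

# Hypothetical columns with relevance scores; budget caps how many we link:
cols = [("user.id", 0.9, 1), ("user.name", 0.7, 1), ("log.ts", 0.2, 1)]
score, chosen = knapsack_link(cols, budget=2)
print(chosen)  # ['user.id', 'user.name']
```

The budget plays the role of the "limited tolerance" for redundant elements: low-relevance columns are dropped once the budget is spent.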
|
2502.12912
|
A Simplified and Numerically Stable Approach to the BG/NBD Churn
Prediction model
|
stat.OT cs.LG math.ST stat.TH
|
This study extends the BG/NBD churn probability model, addressing its
limitations in industries where customer behaviour is often influenced by
seasonal events and possibly high purchase counts. We propose a modified
definition of churn, considering a customer to have churned if they make no
purchases within M days. Our contribution is twofold: First, we simplify the
general equation for the specific case of zero purchases within M days. Second,
we derive an alternative expression using numerical techniques to mitigate
numerical overflow or underflow issues. This approach provides a more practical
and robust method for predicting customer churn in industries with irregular
purchase patterns.
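The numerical-stability concern here is the classic overflow of exponentials in likelihood ratios. A standard remedy, which the paper's alternative expression is in the spirit of (the exact BG/NBD expressions are not reproduced), is the log-sum-exp trick:

```python
import math

def logsumexp(log_terms):
    """Numerically stable log(sum(exp(t))) -- avoids overflow for large t."""
    m = max(log_terms)
    return m + math.log(sum(math.exp(t - m) for t in log_terms))

# Direct evaluation of exp(1000) overflows a float, but the probability
# exp(1000) / (exp(1000) + exp(999)) is perfectly representable:
log_num = 1000.0
log_den = logsumexp([1000.0, 999.0])
p = math.exp(log_num - log_den)
print(round(p, 4))  # 0.7311
```

Working entirely in log space until the final ratio keeps every intermediate quantity within floating-point range.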
|
2502.12913
|
GSQ-Tuning: Group-Shared Exponents Integer in Fully Quantized Training
for LLMs On-Device Fine-tuning
|
cs.LG cs.AI cs.CL
|
Large Language Models (LLMs) fine-tuning technologies have achieved
remarkable results. However, traditional LLM fine-tuning approaches face
significant challenges: they require large Floating Point (FP) computation,
raising privacy concerns when handling sensitive data, and are impractical for
resource-constrained edge devices. While Parameter-Efficient Fine-Tuning (PEFT)
techniques reduce trainable parameters, their reliance on floating-point
arithmetic creates fundamental incompatibilities with edge hardware. In this
work, we introduce a novel framework for on-device LLM fine-tuning that
eliminates the need for floating-point operations in both inference and
training, named GSQ-Tuning. At its core is the Group-Shared Exponents Integer
format, which efficiently represents model parameters in integer format using
shared exponents among parameter groups. When combined with LoRA-like adapters,
this enables fully integer-based fine-tuning that is both memory and compute
efficient. We demonstrate that our approach achieves accuracy comparable to
FP16-based fine-tuning while significantly reducing memory usage (50%).
Moreover, compared to FP8, our method reduces power consumption by 5x and chip
area by 11x at the same performance, making large-scale model adaptation feasible
on edge devices.
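A minimal sketch of a group-shared-exponent integer format: each group of parameters stores small signed integers plus one shared power-of-two exponent. The exact GSQ-Tuning layout is not specified in the abstract, so the rounding rule and bit width below are assumptions.

```python
import math

def quantize_group(values, mantissa_bits=8):
    """Quantize a group of floats to signed integers sharing one
    power-of-two exponent (a sketch of group-shared-exponent formats)."""
    qmax = 2 ** (mantissa_bits - 1) - 1          # e.g. 127 for 8 bits
    amax = max(abs(v) for v in values) or 1.0
    exp = math.ceil(math.log2(amax / qmax))      # shared exponent for the group
    scale = 2.0 ** exp
    ints = [round(v / scale) for v in values]
    return ints, exp

def dequantize_group(ints, exp):
    return [i * (2.0 ** exp) for i in ints]

ints, exp = quantize_group([0.5, -0.25, 0.125, 0.3])
recon = dequantize_group(ints, exp)
print(max(abs(a - b) for a, b in zip([0.5, -0.25, 0.125, 0.3], recon)) < 0.01)
```

Because the exponent is shared per group, arithmetic inside a group reduces to integer operations, which is what makes FP-free training hardware-friendly.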
|
2502.12917
|
Contrast-Unity for Partially-Supervised Temporal Sentence Grounding
|
cs.CV
|
Temporal sentence grounding aims to detect event timestamps described by the
natural language query from given untrimmed videos. The existing
fully-supervised setting achieves great results but requires expensive
annotation costs; while the weakly-supervised setting adopts cheap labels but
performs poorly. To pursue high performance at lower annotation cost, this
paper introduces an intermediate partially-supervised setting, i.e., only
short-clip annotations are available during training. To make full use of these
partial labels, we design a contrast-unity framework, with the two-stage goal of
implicit-explicit progressive grounding. In the implicit stage, we align
event-query representations at fine granularity using comprehensive quadruple
contrastive learning: event-query gather, event-background separation,
intra-cluster compactness and inter-cluster separability. Then, high-quality
representations bring acceptable grounding pseudo-labels. In the explicit
stage, to explicitly optimize the grounding objectives, we train a
fully-supervised model using obtained pseudo-labels for grounding refinement
and denoising. Extensive experiments and thorough ablations on Charades-STA
and ActivityNet Captions demonstrate the significance of partial supervision,
as well as our superior performance.
|
2502.12918
|
Query Rewriting via LLMs
|
cs.DB
|
Query rewriting is a classical technique for transforming complex declarative
SQL queries into "lean" equivalents that are conducive to (a) faster
execution from a performance perspective, and (b) better understanding from a
developer perspective. The rewriting is typically achieved via transformation
rules, but these rules are limited in scope and difficult to update in a
production system. In recent times, LLM-based techniques have also been mooted,
but they are prone to both semantic and syntactic errors.
We investigate here how the remarkable cognitive capabilities of LLMs can be
leveraged for performant query rewriting while incorporating safeguards and
optimizations to ensure correctness and efficiency. Our study shows that these
goals can be progressively achieved through incorporation of (a) an ensemble
suite of basic prompts, (b) database-sensitive prompts via redundancy removal
and selectivity-based rewriting rules, and (c) LLM token probability-guided
rewrite paths. Further, a suite of statistical and logic-based tools can be
used to guard against errors produced by the model.
We have implemented the above LLM-infused techniques in the LITHE system, and
evaluated complex analytic queries from multiple benchmarks on contemporary
database platforms. The results show significant improvements over SOTA
rewriting techniques -- for instance, on TPC-DS, LITHE constructed productive
(>1.5x speedup) rewrites for two-thirds of the query suite, delivering
four times more coverage than SOTA. Further, the geometric mean of its
estimated execution speedups was an order-of-magnitude jump over SOTA
performance. In essence, LITHE offers a potent and robust LLM-based
intermediary between enterprise applications and database engines.
|
2502.12919
|
A Smooth Transition Between Induction and Deduction: Fast Abductive
Learning Based on Probabilistic Symbol Perception
|
cs.LG
|
Abductive learning (ABL), which integrates the strengths of machine learning
and logical reasoning to improve generalization, has recently been shown to be
effective. However, its efficiency is hampered by the transition between
numerical induction and symbolic deduction, leading to high computational
costs in the worst-case scenario. Efforts on this issue remain limited.
In this paper, we identify three reasons why previous optimization algorithms
for ABL were not effective: insufficient utilization of predictions, symbol
relationships, and accumulated experience from successful abductive processes,
resulting in redundant calls to the knowledge base. To address these
challenges, we introduce an optimization algorithm named Probabilistic
Symbol Perception (PSP), which makes a smooth transition between induction and
deduction and keeps the correctness of ABL unchanged. We leverage probability
as a bridge and present an efficient data structure, achieving the transfer
from a continuous probability sequence to discrete Boolean sequences with low
computational complexity. Experiments demonstrate promising results.
|
2502.12920
|
Lightweight Online Adaption for Time Series Foundation Model Forecasts
|
cs.LG stat.ML
|
Foundation models (FMs) have emerged as a promising approach for time series
forecasting. While effective, FMs typically remain fixed during deployment due
to the high computational costs of learning them online. Consequently, deployed
FMs fail to adapt their forecasts to current data characteristics, despite the
availability of online feedback from newly arriving data. This raises the
question of whether FM performance can be enhanced by the efficient usage of
this feedback. We propose AdapTS to answer this question.
AdapTS is a lightweight mechanism for the online adaptation of FM forecasts in
response to online feedback. AdapTS consists of two parts: a) the
AdapTS-Forecaster which is used to learn the current data distribution; and b)
the AdapTS-Weighter which is used to combine the forecasts of the FM and the
AdapTS-Forecaster. We evaluate the performance of AdapTS in conjunction with
several recent FMs across a suite of standard time series datasets. In all of
our experiments we find that using AdapTS improves performance. This work
demonstrates how efficient usage of online feedback can be used to improve FM
forecasts.
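A hypothetical sketch of the weighting idea: combine the FM and adapter forecasts with weights updated from online feedback, exponentiated-gradient style. The real AdapTS-Weighter may differ; the learning rate and loss below are illustrative assumptions.

```python
import math

class OnlineWeighter:
    """Combine two forecasters with weights updated from online feedback
    (an exponentiated-gradient sketch, not the exact AdapTS-Weighter)."""
    def __init__(self, eta=0.5):
        self.log_w = [0.0, 0.0]   # unnormalized log-weights
        self.eta = eta

    def combine(self, f_fm, f_adapter):
        m = max(self.log_w)
        ws = [math.exp(lw - m) for lw in self.log_w]
        total = sum(ws)
        w = [x / total for x in ws]
        return w[0] * f_fm + w[1] * f_adapter, w

    def update(self, forecasts, actual):
        # Penalize each forecaster by its squared error on the new observation.
        for i, f in enumerate(forecasts):
            self.log_w[i] -= self.eta * (f - actual) ** 2

weighter = OnlineWeighter()
for t in range(20):
    actual = float(t)
    combined, w = weighter.combine(f_fm=actual + 2.0, f_adapter=actual + 0.1)
    weighter.update([actual + 2.0, actual + 0.1], actual)
print(w)  # the adapter, with smaller errors, ends up dominating the mixture
```

Because only the weights are learned online, the frozen FM is never retrained, which is what keeps the adaptation lightweight.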
|
2502.12921
|
Q-STRUM Debate: Query-Driven Contrastive Summarization for
Recommendation Comparison
|
cs.CL
|
Query-driven recommendation with unknown items poses a challenge for users to
understand why certain items are appropriate for their needs. Query-driven
Contrastive Summarization (QCS) is a methodology designed to address this issue
by leveraging language-based item descriptions to clarify contrasts between
them. However, existing state-of-the-art contrastive summarization methods such
as STRUM-LLM fall short of this goal. To overcome these limitations, we
introduce Q-STRUM Debate, a novel extension of STRUM-LLM that employs
debate-style prompting to generate focused and contrastive summarizations of
item aspects relevant to a query. Leveraging modern large language models
(LLMs) as powerful tools for generating debates, Q-STRUM Debate provides
enhanced contrastive summaries. Experiments across three datasets demonstrate
that Q-STRUM Debate yields significant performance improvements over existing
methods on key contrastive summarization criteria, thus introducing a novel and
performant debate prompting methodology for QCS.
|
2502.12923
|
On-Device LLMs for Home Assistant: Dual Role in Intent Detection and
Response Generation
|
cs.CL
|
This paper investigates whether Large Language Models (LLMs), fine-tuned on
synthetic but domain-representative data, can perform the twofold task of (i)
slot and intent detection and (ii) natural language response generation for a
smart home assistant, while running solely on resource-limited, CPU-only edge
hardware. We fine-tune LLMs to produce both JSON action calls and text
responses. Our experiments show that 16-bit and 8-bit quantized variants
preserve high accuracy on slot and intent detection and maintain strong
semantic coherence in generated text, whereas the 4-bit model, though retaining
generative fluency, suffers a noticeable drop in device-service classification
accuracy. Further evaluations on noisy human (non-synthetic) prompts and
out-of-domain intents confirm the models' generalization ability, obtaining
around 80--86% accuracy. While the average inference time is 5--6 seconds per
query -- acceptable for one-shot commands but suboptimal for multi-turn
dialogue -- our results affirm that an on-device LLM can effectively unify
command interpretation and flexible response generation for home automation
without relying on specialized hardware.
|
2502.12924
|
Conditioning LLMs to Generate Code-Switched Text: A Methodology Grounded
in Naturally Occurring Data
|
cs.CL cs.AI
|
Code-switching (CS) is still a critical challenge in Natural Language
Processing (NLP). Current Large Language Models (LLMs) struggle to interpret
and generate code-switched text, primarily due to the scarcity of large-scale
CS datasets for training. This paper presents a novel methodology to generate
CS data using LLMs, and test it on the English-Spanish language pair. We
propose back-translating natural CS sentences into monolingual English, and
using the resulting parallel corpus to fine-tune LLMs to turn monolingual
sentences into CS. Unlike previous approaches to CS generation, our methodology
uses natural CS data as a starting point, allowing models to learn its natural
distribution beyond grammatical patterns. We thoroughly analyse the models'
performance through a study on human preferences, a qualitative error analysis
and an evaluation with popular automatic metrics. Results show that our
methodology generates fluent code-switched text, expanding research
opportunities in CS communication, and that traditional metrics do not
correlate with human judgement when assessing the quality of the generated CS
data. We release our code and generated dataset under a CC-BY-NC-SA license.
|
2502.12925
|
Keep what you need: extracting efficient subnetworks from large audio
representation models
|
cs.SD cs.AI
|
Recently, research on audio foundation models has witnessed notable advances,
as illustrated by the ever improving results on complex downstream tasks.
Subsequently, those pretrained networks have quickly been used for various
audio applications. These improvements have, however, resulted in a considerable
increase in both the size and complexity of these models. Beyond the
environmental concerns this raises, it prevents the deployment of such networks
on consumer-level devices and precludes their use in real-time applications.
Moreover, this stands at odds with the specificity of the tasks for which these
models are used, which are often much simpler than extracting a rich,
multi-purpose representation from any type of audio data. In this paper,
we address this issue with a simple, yet effective method to extract
lightweight specialist subnetworks from large foundation models. Specifically,
we introduce learnable binary masks in-between the layers of a pretrained
representation model. When training the end-to-end model on a downstream task,
we add a sparsity-inducing loss to the overall objective, hence learning a
compact subnetwork specialized on a single task. Importantly, the weights of
the foundation model are kept frozen, resulting in low additional training
costs. Once trained, the masked computational units can then be removed from
the network, yielding significant efficiency gains. We assess our method on
three widespread audio foundation models, each based on a different backbone
architecture, and illustrate its effectiveness on common audio representation
evaluation tasks, as well as its versatility across speech, music, and general
audio. Code for reproducing the results and supporting webpage are available at
https://github.com/gnvIRCAM/Audio-representation-trimming
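A toy sketch of the masking mechanism: relaxed per-unit gates between layers plus an L1 sparsity penalty; after training, units whose gate falls below a threshold can be pruned. The threshold and penalty weight are illustrative assumptions, not values from the paper.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def masked_forward(h, mask_logits, tau=0.5):
    """Apply per-unit relaxed binary masks between layers; units whose gate
    falls below tau can be removed from the network after training."""
    gates = [sigmoid(l) for l in mask_logits]
    kept = [g >= tau for g in gates]
    out = [x * g for x, g, k in zip(h, gates, kept) if k]
    return out, kept

def sparsity_loss(mask_logits, lam=0.01):
    """L1 penalty on the gates, added to the task loss to drive masks to 0."""
    return lam * sum(sigmoid(l) for l in mask_logits)

h = [1.0, 2.0, 3.0, 4.0]
out, kept = masked_forward(h, mask_logits=[4.0, -4.0, 3.0, -5.0])
print(sum(kept))  # 2 -> half the units could be pruned from this layer
```

The foundation model's weights never change; only the mask logits are trained, which keeps the per-task training cost low.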
|
2502.12926
|
Towards more Contextual Agents: An Extractor-Generator Optimization
Framework
|
cs.AI
|
Large Language Model (LLM)-based agents have demonstrated remarkable success
in solving complex tasks across a wide range of general-purpose applications.
However, their performance often degrades in context-specific scenarios, such
as specialized industries or research domains, where the absence of
domain-relevant knowledge leads to imprecise or suboptimal outcomes. To address
this challenge, our work introduces a systematic approach to enhance the
contextual adaptability of LLM-based agents by optimizing their underlying
prompts, critical components that govern agent behavior, roles, and
interactions. Manually crafting optimized prompts for context-specific tasks is
labor-intensive, error-prone, and lacks scalability. In this work, we introduce
an Extractor-Generator framework designed to automate the optimization of
contextual LLM-based agents. Our method operates through two key stages: (i)
feature extraction from a dataset of gold-standard input-output examples, and
(ii) prompt generation via a high-level optimization strategy that iteratively
identifies underperforming cases and applies self-improvement techniques. This
framework substantially improves prompt adaptability by enabling more precise
generalization across diverse inputs, particularly in context-specific tasks
where maintaining semantic consistency and minimizing error propagation are
critical for reliable performance. Although developed with single-stage
workflows in mind, the approach naturally extends to multi-stage workflows,
offering broad applicability across various agent-based systems. Empirical
evaluations demonstrate that our framework significantly enhances the
performance of prompt-optimized agents, providing a structured and efficient
approach to contextual LLM-based agents.
|
2502.12927
|
SEFL: Harnessing Large Language Model Agents to Improve Educational
Feedback Systems
|
cs.CL
|
Providing high-quality feedback is crucial for student success but is
constrained by time, cost, and limited data availability. We introduce
Synthetic Educational Feedback Loops (SEFL), a novel framework designed to
deliver immediate, on-demand feedback at scale without relying on extensive,
real-world student data. In SEFL, two large language models (LLMs) operate in
teacher--student roles to simulate assignment completion and formative
feedback, generating abundant synthetic pairs of student work and corresponding
critiques. We then fine-tune smaller, more computationally efficient LLMs on
these synthetic pairs, enabling them to replicate key features of high-quality,
goal-oriented feedback. Unlike personalized tutoring approaches that offer
multi-turn, individualized instruction, SEFL specifically focuses on
replicating the teacher-to-student feedback loop for diverse assignments.
Through both LLM-as-a-judge and human evaluations, we demonstrate that
SEFL-tuned models outperform their non-tuned counterparts in feedback quality,
clarity, and timeliness. These findings reveal SEFL's potential to transform
feedback processes for higher education and beyond, offering an ethical and
scalable alternative to conventional manual feedback cycles.
|
2502.12928
|
Finedeep: Mitigating Sparse Activation in Dense LLMs via Multi-Layer
Fine-Grained Experts
|
cs.CL
|
Large language models have demonstrated exceptional performance across a wide
range of tasks. However, dense models usually suffer from sparse activation,
where many activation values tend towards zero (i.e., being inactivated). We
argue that this could restrict the efficient exploration of model
representation space. To mitigate this issue, we propose Finedeep, a
deep-layered fine-grained expert architecture for dense models. Our framework
partitions the feed-forward neural network layers of traditional dense models
into small experts and arranges them across multiple sub-layers. A novel routing
mechanism is proposed to determine each expert's contribution. We conduct
extensive experiments across various model sizes, demonstrating that our
approach significantly outperforms traditional dense architectures in terms of
perplexity and benchmark performance while maintaining a comparable number of
parameters and floating-point operations. Moreover, we find that Finedeep
achieves optimal results when balancing depth and width, specifically by
adjusting the number of expert sub-layers and the number of experts per
sub-layer. Empirical results confirm that Finedeep effectively alleviates
sparse activation and efficiently utilizes representation capacity in dense
models.
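An illustrative scalar sketch of routing through sub-layers of fine-grained experts. Real Finedeep experts are small feed-forward networks and the router is learned, so the scalar router and expert weights here are stand-ins for the actual architecture.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    total = sum(es)
    return [e / total for e in es]

def fine_grained_ffn(x, sublayers):
    """Pass a scalar activation through several sub-layers, each holding a
    few small experts; a softmax router weights every expert's contribution,
    so no expert's output is simply zeroed out (mitigating sparse activation)."""
    h = x
    for experts in sublayers:
        logits = [w_router * h for (w_router, _) in experts]
        gates = softmax(logits)
        h = sum(g * (w_expert * h) for g, (_, w_expert) in zip(gates, experts))
    return h

# Two sub-layers, each with two (router weight, expert weight) pairs:
y = fine_grained_ffn(1.0, [[(1.0, 2.0), (-1.0, 0.5)], [(0.5, 1.0), (0.2, 3.0)]])
print(y)
```

Depth (number of sub-layers) and width (experts per sub-layer) are exactly the two knobs the paper reports balancing.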
|
2502.12929
|
Flow-of-Options: Diversified and Improved LLM Reasoning by Thinking
Through Options
|
cs.LG cs.AI cs.CL
|
We present a novel reasoning approach called Flow-of-Options (FoO), designed
to address intrinsic biases in Large Language Models (LLMs). FoO enables LLMs
to systematically explore a diverse range of possibilities in their reasoning,
as demonstrated by an FoO-based agentic system for autonomously solving Machine
Learning tasks (AutoML). Our framework outperforms state-of-the-art baselines,
achieving improvements of 38.2% - 69.2% on standard data science tasks, and
37.4% - 47.9% on therapeutic chemistry tasks. With an overall operation cost
under $1 per task, our framework is well-suited for cost-sensitive
applications. Beyond classification and regression, we illustrate the broader
applicability of our FoO-based agentic system to tasks such as reinforcement
learning and image generation. Our framework presents significant advancements
compared to current state-of-the-art agentic systems for AutoML, due to the
benefits of FoO in enforcing diversity in LLM solutions through compressed,
explainable representations that also support long-term memory when combined
with case-based reasoning.
|
2502.12930
|
Universal Embedding Function for Traffic Classification via QUIC Domain
Recognition Pretraining: A Transfer Learning Success
|
cs.LG cs.NI
|
Encrypted traffic classification (TC) methods must adapt to new protocols and
extensions as well as to advancements in other machine learning fields. In this
paper, we follow a transfer learning setup best known from computer vision. We
first pretrain an embedding model on a complex task with a large number of
classes and then transfer it to five well-known TC datasets. The pretraining
task is recognition of SNI domains in encrypted QUIC traffic, which in itself
is a problem for network monitoring due to the growing adoption of TLS
Encrypted Client Hello. Our training pipeline -- featuring a disjoint class
setup, ArcFace loss function, and a modern deep learning architecture -- aims
to produce universal embeddings applicable across tasks. The proposed solution,
based on nearest neighbors search in the embedding space, surpasses SOTA
performance on four of the five TC datasets. A comparison with a baseline
method utilizing raw packet sequences revealed unexpected findings with
potential implications for the broader TC field. We published the model
architecture, trained weights, and transfer learning experiments.
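The transfer step, nearest-neighbor search in the embedding space, can be sketched as a cosine-similarity k-NN vote. The two-dimensional embeddings and traffic labels below are hypothetical; real embeddings would come from the pretrained QUIC-domain model.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def knn_classify(query_emb, bank, k=3):
    """Classify a flow by majority vote over its k nearest labelled
    embeddings (a sketch of the transfer step)."""
    nearest = sorted(bank, key=lambda item: cosine(query_emb, item[0]),
                     reverse=True)[:k]
    votes = {}
    for _, label in nearest:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Hypothetical labelled embedding bank from a downstream TC dataset:
bank = [([1.0, 0.0], "video"), ([0.9, 0.1], "video"), ([0.0, 1.0], "voip")]
print(knn_classify([0.95, 0.05], bank, k=3))  # video
```

Because classification is just a lookup in the embedding space, transferring to a new TC dataset requires no retraining of the embedding model itself.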
|
2502.12932
|
Synthetic Data Generation for Culturally Nuanced Commonsense Reasoning
in Low-Resource Languages
|
cs.CL
|
Quantifying reasoning capability in low-resource languages remains a
challenge in NLP due to data scarcity and limited access to annotators. While
LLM-assisted dataset construction has proven useful for medium- and
high-resource languages, its effectiveness in low-resource languages,
particularly for commonsense reasoning, is still unclear. In this paper, we
compare three dataset creation strategies: (1) LLM-assisted dataset generation,
(2) machine translation, and (3) human-written data by native speakers, to
build a culturally nuanced story comprehension dataset. We focus on Javanese
and Sundanese, two major local languages in Indonesia, and evaluate the
effectiveness of open-weight and closed-weight LLMs in assisting dataset
creation through extensive manual validation. To assess the utility of
synthetic data, we fine-tune language models on classification and generation
tasks using this data and evaluate performance on a human-written test set. Our
findings indicate that LLM-assisted data creation outperforms machine
translation.
|
2502.12937
|
Tuning Algorithmic and Architectural Hyperparameters in Graph-Based
Semi-Supervised Learning with Provable Guarantees
|
cs.LG
|
Graph-based semi-supervised learning is a powerful paradigm in machine
learning for modeling and exploiting the underlying graph structure that
captures the relationship between labeled and unlabeled data. A large number of
classical as well as modern deep learning based algorithms have been proposed
for this problem, often having tunable hyperparameters. We initiate a formal
study of tuning algorithm hyperparameters from parameterized algorithm families
for this problem. We obtain novel $O(\log n)$ pseudo-dimension upper bounds for
hyperparameter selection in three classical label propagation-based algorithm
families, where $n$ is the number of nodes, implying bounds on the amount of
data needed for learning provably good parameters. We further provide matching
$\Omega(\log n)$ pseudo-dimension lower bounds, thus asymptotically
characterizing the learning-theoretic complexity of the parameter tuning
problem. We extend our study to selecting architectural hyperparameters in
modern graph neural networks. We bound the Rademacher complexity for tuning the
self-loop weighting in recently proposed Simplified Graph Convolution (SGC)
networks. We further propose a tunable architecture that interpolates graph
convolutional neural networks (GCN) and graph attention networks (GAT) in every
layer, and provide Rademacher complexity bounds for tuning the interpolation
coefficient.
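For context, the classical label propagation family being tuned looks like the following iteration, where the mixing coefficient alpha is exactly the kind of tunable hyperparameter whose sample complexity the paper bounds:

```python
def label_propagation(adj, labels, alpha=0.9, iters=50):
    """Classic label propagation: f <- alpha * D^-1 A f + (1 - alpha) * y.
    alpha trades off neighbor smoothing against fidelity to the seed labels."""
    n = len(adj)
    y = [l if l is not None else 0.0 for l in labels]   # unlabeled -> 0
    f = y[:]
    deg = [sum(row) or 1.0 for row in adj]
    for _ in range(iters):
        f = [alpha * sum(adj[i][j] * f[j] for j in range(n)) / deg[i]
             + (1 - alpha) * y[i] for i in range(n)]
    return f

# Path graph 0-1-2 with node 0 labeled +1 and node 2 labeled -1:
adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
f = label_propagation(adj, [1.0, None, -1.0])
print(f[1])  # the unlabeled middle node sits between the two classes
```

The pseudo-dimension bounds quantify how many such labeled/unlabeled problem instances are needed to learn a provably good alpha for a family of related graphs.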
|
2502.12944
|
Performance of Zero-Shot Time Series Foundation Models on Cloud Data
|
cs.LG
|
Time series foundation models (FMs) have emerged as a popular paradigm for
zero-shot multi-domain forecasting. FMs are trained on numerous diverse
datasets and claim to be effective forecasters across multiple different time
series domains, including cloud data. In this work we investigate this claim,
exploring the effectiveness of FMs on cloud data. We demonstrate that many
well-known FMs fail to generate meaningful or accurate zero-shot forecasts in
this setting. We support this claim empirically, showing that FMs are
outperformed consistently by simple linear baselines. We also illustrate a
number of interesting pathologies, including instances where FMs suddenly
output seemingly erratic, random-looking forecasts. Our results suggest a
widespread failure of FMs to model cloud data.
|
2502.12945
|
LLMPopcorn: An Empirical Study of LLMs as Assistants for Popular
Micro-video Generation
|
cs.CL cs.CV
|
Popular Micro-videos, dominant on platforms like TikTok and YouTube, hold
significant commercial value. The rise of high-quality AI-generated content has
spurred interest in AI-driven micro-video creation. However, despite the
advanced capabilities of large language models (LLMs) like ChatGPT and DeepSeek
in text generation and reasoning, their potential to assist the creation of
popular micro-videos remains largely unexplored.
In this paper, we conduct an empirical study on LLM-assisted popular
micro-video generation (LLMPopcorn). Specifically, we investigate the following
research questions: (i) How can LLMs be effectively utilized to assist popular
micro-video generation? (ii) To what extent can prompt-based enhancements
optimize the LLM-generated content for higher popularity? (iii) How well do
various LLMs and video generators perform in the popular micro-video generation
task? By exploring these questions, we show that advanced LLMs like DeepSeek-V3
enable micro-video generation to achieve popularity comparable to human-created
content. Prompt enhancements further boost popularity, and benchmarking
highlights DeepSeek-V3 and DeepSeek-R1 among LLMs, while LTX-Video and
HunyuanVideo lead in video generation. This pioneering work advances
AI-assisted micro-video creation, uncovering new research opportunities. We
will release the code and datasets to support future studies.
|
2502.12947
|
Every Expert Matters: Towards Effective Knowledge Distillation for
Mixture-of-Experts Language Models
|
cs.CL cs.AI cs.LG
|
With the emergence of Mixture-of-Experts (MoE), the efficient scaling of
model size has accelerated the development of large language models in recent
years. However, their high memory requirements prevent their use in
resource-constrained environments. While knowledge distillation (KD) has been a
proven method for model compression, its application to MoE teacher models
remains underexplored. Through our investigation, we discover that
non-activated experts in MoE models possess valuable knowledge that benefits
student models. We further demonstrate that existing KD methods are not optimal
for compressing MoE models, as they fail to leverage this knowledge
effectively. To address this, we are the first to propose two intuitive
MoE-specific KD methods: Knowledge Augmentation (KA) and Student-Aware Router (SAR),
both designed to effectively extract knowledge from all experts. Specifically,
KA augments knowledge by sampling experts multiple times, while SAR uses all
experts and adjusts the expert weights through router training to provide
optimal knowledge. Extensive experiments show that our methods outperform
conventional KD methods, demonstrating their effectiveness for MoE teacher
models.
|
2502.12948
|
Fake It Till You Make It: Using Synthetic Data and Domain Knowledge for
Improved Text-Based Learning for LGE Detection
|
cs.CV cs.AI
|
Detection of hyperenhancement from cardiac LGE MRI images is a complex task
requiring significant clinical expertise. Although deep learning-based models
have shown promising results for the task, they require large amounts of data
with fine-grained annotations. Clinical reports generated for cardiac MR
studies contain rich, clinically relevant information, including the location,
extent and etiology of any scars present. Although recently developed
CLIP-based training enables pretraining models with image-text pairs, it
requires large amounts of data and further finetuning strategies on downstream
tasks. In this study, we use various strategies rooted in domain knowledge to
train a model for LGE detection solely using text from clinical reports, on a
relatively small clinical cohort of 965 patients. We improve performance
through the use of synthetic data augmentation, by systematically creating scar
images and associated text. In addition, we standardize the orientation of the
images in an anatomy-informed way to enable better alignment of spatial and
text features. We also use a captioning loss to enable fine-grained supervision
and explore the effect of pretraining of the vision encoder on performance.
Finally, ablation studies are carried out to elucidate the contributions of
each design component to the overall performance of the model.
|
2502.12949
|
Efficient Learning Under Density Shift in Incremental Settings Using
Cram\'er-Rao-Based Regularization
|
cs.LG stat.ML
|
The continuous surge in data volume and velocity is often dealt with using
data orchestration and distributed processing approaches, abstracting away the
machine learning challenges that exist at the algorithmic level. With growing
interest in automating the learning loop, training on data that arrive in a
sequence, rather than in the classical in-memory form, poses a machine
learning challenge: feature distributions evolve across batches of training
data and bias the cross-validation step (\cite{sugiyama2012machine}). This
work takes a distributed density estimation
angle to the problem where data are temporally distributed. It processes data
in batches and allows a neural network to treat a batch as training data. The
method accumulates knowledge about the data density via posterior probability
absorption using the Fisher Information Matrix, which contains information
about the local optimization gradients for the batch. This is then used as a
regularizer for the loss in the following batch, and therefore the density
estimate for the entire dataset constructively gets more robust to the non-iid
distribution shift. This requires only a pair of batches in memory at a time,
so the space cost does not depend on the size of the complete, distributed
dataset. We propose a novel regularization-based approach, Covariate Shift
Correction ($C^{2}A$), which leverages Fisher information and
Kullback-Leibler divergence to adapt to both natural and sequential covariate
shift caused by dataset fragmentation. $C^{2}A$ achieves up to $19\%$ higher
accuracy than state-of-the-art methods.
|
2502.12950
|
Towards Hybrid Traffic Laws for Mixed Flow of Human-Driven Vehicles and
Connected Autonomous Vehicles
|
cs.MA
|
Hybrid traffic laws represent an innovative approach to managing mixed
environments of connected autonomous vehicles (CAVs) and human-driven vehicles
(HDVs) by introducing separate sets of regulations for each vehicle type. These
laws are designed to leverage the unique capabilities of CAVs while ensuring
that both vehicle types coexist effectively, ultimately aiming to enhance overall
social welfare. This study uses the SUMO simulation platform to explore hybrid
traffic laws in a restricted lane scenario. It evaluates static and dynamic
lane access policies under varying traffic demands and CAV proportions. The
policies aim to minimize average passenger delay and encourage the
incorporation of autonomous vehicles with higher occupancy rates. Results
demonstrate that dynamic policies significantly improve traffic flow,
especially at low CAV proportions, compared to traditional dedicated bus lane
strategies. These findings highlight the potential of hybrid traffic laws to
enhance traffic efficiency and accelerate the transition to autonomous
technology.
|
2502.12951
|
Guaranteed Conditional Diffusion: 3D Block-based Models for Scientific
Data Compression
|
cs.LG
|
This paper proposes a new compression paradigm -- Guaranteed Conditional
Diffusion with Tensor Correction (GCDTC) -- for lossy scientific data
compression. The framework is based on recent conditional diffusion (CD)
generative models, and it consists of a conditional diffusion model, tensor
correction, and error guarantee. Our diffusion model combines 3D
conditioning with a 2D denoising U-Net. The approach leverages a 3D block-based
compressing module to address spatiotemporal correlations in structured
scientific data. Then, the reverse diffusion process for 2D spatial data is
conditioned on the ``slices'' of content latent variables produced by the
compressing module. After training, the denoising decoder reconstructs the data
with zero noise and content latent variables, and thus it is entirely
deterministic. The reconstructed outputs of the CD model are further
post-processed by our tensor correction and error guarantee steps to control
and ensure a maximum error distortion, an essential requirement in
lossy scientific data compression. Our experiments involving two datasets
generated by climate and chemical combustion simulations show that our
framework outperforms standard convolutional autoencoders and yields
competitive compression quality with an existing scientific data compression
algorithm.
|
2502.12953
|
Task-Informed Anti-Curriculum by Masking Improves Downstream Performance
on Text
|
cs.CL cs.AI cs.LG
|
Masked language modeling has become a widely adopted unsupervised technique
to pre-train language models. However, the process of selecting tokens for
masking is random, and the percentage of masked tokens is typically fixed for
the entire training process. In this paper, we propose to adjust the masking
ratio and to decide which tokens to mask based on a novel task-informed
anti-curriculum learning scheme. First, we harness task-specific knowledge
about useful and harmful tokens in order to determine which tokens to mask.
Second, we propose a cyclic decaying masking ratio, which corresponds to an
anti-curriculum schedule (from hard to easy). We exemplify our novel
task-informed anti-curriculum by masking (TIACBM) approach across three diverse
downstream tasks: sentiment analysis, text classification by topic, and
authorship attribution. Our findings suggest that TIACBM enhances the ability
of the model to focus on key task-relevant features, contributing to
statistically significant performance gains across tasks. We release our code
at https://github.com/JarcaAndrei/TIACBM.
|
2502.12958
|
Preventing the Popular Item Embedding Based Attack in Federated
Recommendations
|
cs.CR cs.DB cs.LG
|
Privacy concerns have led to the rise of federated recommender systems (FRS),
which can create personalized models across distributed clients. However, FRS
is vulnerable to poisoning attacks, where malicious users manipulate gradients
to promote their target items intentionally. Existing attacks against FRS have
limitations, as they depend on specific models and prior knowledge, restricting
their real-world applicability. In our exploration of practical FRS
vulnerabilities, we devise a model-agnostic and prior-knowledge-free attack,
named PIECK (Popular Item Embedding based Attack). The core module of PIECK is
popular item mining, which leverages embedding changes during FRS training to
effectively identify the popular items. Built upon the core module, PIECK
branches into two diverse solutions: The PIECKIPE solution employs an item
popularity enhancement module, which aligns the embeddings of targeted items
with the mined popular items to increase item exposure. The PIECKUEA further
enhances the robustness of the attack by using a user embedding approximation
module, which approximates private user embeddings using mined popular items.
Upon identifying PIECK, we evaluate existing federated defense methods and find
them ineffective against PIECK, as poisonous gradients inevitably overwhelm the
cold target items. We then propose a novel defense method by introducing two
regularization terms during user training, which constrain item popularity
enhancement and user embedding approximation while preserving FRS performance.
We evaluate PIECK and its defense across two base models, three real datasets,
four top-tier attacks, and six general defense methods, affirming the efficacy
of both PIECK and its defense.
|
2502.12959
|
AlignFreeze: Navigating the Impact of Realignment on the Layers of
Multilingual Models Across Diverse Languages
|
cs.CL cs.AI
|
Realignment techniques are often employed to enhance cross-lingual transfer
in multilingual language models; still, they can sometimes degrade performance
in languages that differ significantly from the fine-tuned source language.
This paper introduces AlignFreeze, a method that freezes either the lower or
upper half of the model's layers during realignment. Through controlled experiments on
4 tasks, 3 models, and in 35 languages, we find that realignment affects all
the layers but can be the most detrimental to the lower ones. Freezing the
lower layers can prevent performance degradation. Particularly, AlignFreeze
improves Part-of-Speech (PoS) tagging performances in languages where full
realignment fails: with XLM-R, it provides improvements of more than one
standard deviation in accuracy in seven more languages than full realignment.
|
2502.12961
|
Adaptive Tool Use in Large Language Models with Meta-Cognition Trigger
|
cs.AI cs.CL
|
Large language models (LLMs) have shown remarkable emergent capabilities,
transforming the execution of functional tasks by leveraging external tools for
complex problems that require specialized processing or real-time data. While
existing research expands LLMs' access to diverse tools (e.g., program
interpreters, search engines, weather/map apps), the necessity of using these
tools is often overlooked, leading to indiscriminate tool invocation. This
naive approach raises two key issues: (1) increased delays due to unnecessary
tool calls, and (2) potential errors resulting from faulty interactions with
external tools. In this paper, we introduce meta-cognition as a proxy for LLMs'
self-assessment of their capabilities, representing the model's awareness of
its own limitations. Based on this, we propose MeCo, an adaptive
decision-making strategy for external tool use. MeCo quantifies metacognitive
scores by capturing high-level cognitive signals in the representation space,
guiding when to invoke tools. Notably, MeCo is fine-tuning-free and incurs
minimal cost. Our experiments show that MeCo accurately detects LLMs' internal
cognitive signals and significantly improves tool-use decision-making across
multiple base models and benchmarks.
|
2502.12962
|
Infinite Retrieval: Attention Enhanced LLMs in Long-Context Processing
|
cs.CL
|
Limited by the context window size of Large Language Models (LLMs), handling
various tasks with input tokens exceeding the upper limit has been challenging,
whether it is a simple direct retrieval task or a complex multi-hop reasoning
task. Although various methods have been proposed to enhance the long-context
processing capabilities of LLMs, they either incur substantial post-training
costs, require additional tool modules (e.g., RAG), or have not shown
significant improvement on realistic tasks. Our work observes the correlation
between the attention distribution and generated answers across each layer,
and establishes through experiments that attention allocation aligns with
retrieval-augmented capabilities. Drawing on these insights, we propose a
novel method, InfiniRetri, that leverages the LLM's own attention information
to enable accurate retrieval across inputs of infinite length. Our evaluations
indicate that InfiniRetri achieves 100% accuracy in the
Needle-In-a-Haystack (NIH) test over 1M tokens using a 0.5B parameter model,
surpassing other methods and larger models and setting a new
state-of-the-art (SOTA). Moreover, our method achieves significant performance
improvements on real-world benchmarks, with a maximum improvement of 288%. In
addition, InfiniRetri can be applied to any Transformer-based LLM without
additional training and substantially reduces inference latency and compute
overhead on long texts. In summary, our comprehensive studies show
InfiniRetri's potential for practical applications and create a paradigm for
retrieving information using LLMs' own capabilities over infinite-length
inputs. Code will be released in link.
|
2502.12963
|
D3-ARM: High-Dynamic, Dexterous and Fully Decoupled Cable-driven Robotic
Arm
|
cs.RO
|
Cable transmission enables the motors of a robotic arm to operate lightweight,
low-inertia joints remotely in various environments, but it also creates issues
with motion coupling and cable routing that can reduce the arm's control
precision and performance. In this paper, we present a novel low-friction
motion decoupling mechanism that aligns the cables and efficiently transmits
the motor's power. By arranging these mechanisms at the joints, we fabricate a
fully decoupled and lightweight cable-driven robotic arm, called D3-Arm, with
all electrical components placed at the base. Its 776 mm-long moving part
boasts six degrees of freedom (DOF) and weighs only 1.6 kg. To address the
issue of cable slack, a cable-pretension mechanism is integrated to enhance the
stability of long-distance cable transmission. Through a series of
comprehensive tests, D3-Arm demonstrated a 1.29 mm average positioning error
and a 2.0 kg payload capacity, proving the practicality of the proposed
decoupling mechanisms in cable-driven robotic arms.
|
2502.12964
|
Trust Me, I'm Wrong: High-Certainty Hallucinations in LLMs
|
cs.CL
|
Large Language Models (LLMs) often generate outputs that lack grounding in
real-world facts, a phenomenon known as hallucinations. Prior research has
associated hallucinations with model uncertainty, leveraging this relationship
for hallucination detection and mitigation. In this paper, we challenge the
underlying assumption that all hallucinations are associated with uncertainty.
Using knowledge detection and uncertainty measurement methods, we demonstrate
that models can hallucinate with high certainty even when they have the correct
knowledge. We further show that high-certainty hallucinations are consistent
across models and datasets, distinctive enough to be singled out, and challenge
existing mitigation methods. Our findings reveal an overlooked aspect of
hallucinations, emphasizing the need to understand their origins and improve
mitigation strategies to enhance LLM safety. The code is available at
https://github.com/technion-cs-nlp/Trust_me_Im_wrong .
|
2502.12965
|
A Survey of Text Classification Under Class Distribution Shift
|
cs.CL cs.AI cs.LG
|
The basic underlying assumption of machine learning (ML) models is that the
training and test data are sampled from the same distribution. However, in
daily practice, this assumption is often broken, i.e.~the distribution of the
test data changes over time, which hinders the application of conventional ML
models. One domain where the distribution shift naturally occurs is text
classification, since people always find new topics to discuss. To this end, we
survey research articles studying open-set text classification and related
tasks. We divide the methods in this area based on the constraints that define
the kind of distribution shift and the corresponding problem formulation,
i.e.~learning with the Universum, zero-shot learning, and open-set learning. We
next discuss the predominant mitigation approaches for each problem setup.
Finally, we identify several future work directions, aiming to push the
boundaries beyond the state of the art. Interestingly, we find that continual
learning can solve many of the issues caused by the shifting class
distribution. We maintain a list of relevant papers at
https://github.com/Eduard6421/Open-Set-Survey.
|
2502.12966
|
The Early Days of the Ethereum Blob Fee Market and Lessons Learnt
|
cs.CE cs.CR cs.DC cs.ET econ.GN q-fin.EC
|
Ethereum has adopted a rollup-centric roadmap to scale by making rollups
(layer 2 scaling solutions) the primary method for handling transactions. The
first significant step towards this goal was EIP-4844, which introduced blob
transactions that are designed to meet the data availability needs of layer 2
protocols. This work constitutes the first rigorous and comprehensive empirical
analysis of transaction- and mempool-level data since the institution of blobs
on Ethereum on March 13, 2024. We perform a longitudinal study of the early
days of the blob fee market analyzing the landscape and the behaviors of its
participants. We identify and measure the inefficiencies arising out of
suboptimal block packing, showing that at times it has resulted in up to 70%
relative fee loss. We home in on and give further insight into two (congested)
peak demand periods for blobs. Finally, we document a market design issue
relating to subset bidding due to the inflexibility of the transaction
structure on packing data as blobs and suggest possible ways to fix it. The
latter market structure issue also applies more generally for any discrete
objects included within transactions.
|
2502.12970
|
Reasoning-to-Defend: Safety-Aware Reasoning Can Defend Large Language
Models from Jailbreaking
|
cs.CL
|
The reasoning abilities of Large Language Models (LLMs) have demonstrated
remarkable advancement and exceptional performance across diverse domains.
However, leveraging these reasoning capabilities to enhance LLM safety against
adversarial attacks and jailbreak queries remains largely unexplored. To bridge
this gap, we propose Reasoning-to-Defend (R2D), a novel training paradigm that
integrates safety reflections of queries and responses into LLMs' generation
process, unlocking a safety-aware reasoning mechanism. This approach enables
self-evaluation at each reasoning step to create safety pivot tokens as
indicators of the response's safety status. Furthermore, in order to improve
the learning efficiency of pivot token prediction, we propose Contrastive Pivot
Optimization (CPO), which enhances the model's ability to perceive the safety
status of dialogues. Through this mechanism, LLMs dynamically adjust their
response strategies during reasoning, significantly enhancing their defense
capabilities against jailbreak attacks. Extensive experimental results
demonstrate that R2D effectively mitigates various attacks and improves overall
safety, highlighting the substantial potential of safety-aware reasoning in
strengthening LLMs' robustness against jailbreaks.
|
2502.12973
|
Optimizing Social Network Interventions via Hypergradient-Based
Recommender System Design
|
cs.SI cs.SY eess.SY math.OC
|
Although social networks have expanded the range of ideas and information
accessible to users, they are also criticized for amplifying the polarization
of user opinions. Given the inherent complexity of these phenomena, existing
approaches to counteract these effects typically rely on handcrafted algorithms
and heuristics. We propose an elegant solution: we act on the network weights
that model user interactions on social networks (e.g., frequency of
communication), to optimize a performance metric (e.g., polarization
reduction), while users' opinions follow the classical Friedkin-Johnsen model.
Our formulation gives rise to a challenging large-scale optimization problem
with non-convex constraints, for which we develop a gradient-based algorithm.
Our scheme is simple, scalable, and versatile, as it can readily integrate
different, potentially non-convex, objectives. We demonstrate its merit by: (i)
rapidly solving complex social network intervention problems with 3 million
variables based on the Reddit and DBLP datasets; (ii) significantly
outperforming competing approaches in terms of both computation time and
disagreement reduction.
|
2502.12974
|
Learning More Effective Representations for Dense Retrieval through
Deliberate Thinking Before Search
|
cs.IR
|
Recent dense retrievers usually thrive on the emergent capabilities of Large
Language Models (LLMs), using them to encode queries and documents into an
embedding space for retrieval. These LLM-based dense retrievers have shown
promising performance across various retrieval scenarios. However, relying on a
single embedding to represent documents proves less effective in capturing
different perspectives of documents for matching. In this paper, we propose
Deliberate Thinking based Dense Retriever (DEBATER), which enhances these
LLM-based retrievers by enabling them to learn more effective document
representations through a step-by-step thinking process. DEBATER introduces the
Chain-of-Deliberation mechanism to iteratively optimize document
representations using a continuous chain of thought. To consolidate information
from various thinking steps, DEBATER also incorporates the Self Distillation
mechanism, which identifies the most informative thinking steps and integrates
them into a unified text embedding. Experimental results show that DEBATER
significantly outperforms existing methods across several retrieval benchmarks,
demonstrating superior accuracy and robustness. All code is available at
https://github.com/OpenBMB/DEBATER.
|
2502.12975
|
Instance-Level Moving Object Segmentation from a Single Image with
Events
|
cs.CV
|
Moving object segmentation plays a crucial role in understanding dynamic
scenes involving multiple moving objects, while the difficulties lie in taking
into account both spatial texture structures and temporal motion cues. Existing
methods based on video frames encounter difficulties in distinguishing whether
pixel displacements of an object are caused by camera motion or object motion
due to the complexities of accurate image-based motion modeling. Recent
advances exploit the motion sensitivity of novel event cameras to counter
conventional images' inadequate motion modeling capabilities, but instead lead
to challenges in segmenting pixel-level object masks due to the lack of dense
texture structures in events. To address these two limitations imposed by
unimodal settings, we propose the first instance-level moving object
segmentation framework that integrates complementary texture and motion cues.
Our model incorporates implicit cross-modal masked attention augmentation,
explicit contrastive feature learning, and flow-guided motion enhancement to
exploit dense texture information from a single image and rich motion
information from events, respectively. By leveraging the augmented texture and
motion features, we separate mask segmentation from motion classification to
handle varying numbers of independently moving objects. Through extensive
evaluations on multiple datasets, as well as ablation experiments with
different input settings and real-time efficiency analysis of the proposed
framework, we believe that our first attempt to incorporate image and event
data for practical deployment can provide new insights for future work on
event-based motion segmentation. The source code with model training and
pre-trained weights is released at https://npucvr.github.io/EvInsMOS
|
2502.12976
|
Does Training with Synthetic Data Truly Protect Privacy?
|
cs.CR cs.LG
|
As synthetic data becomes increasingly popular in machine learning tasks,
numerous methods--without formal differential privacy guarantees--use synthetic
data for training. These methods often claim, either explicitly or implicitly,
to protect the privacy of the original training data. In this work, we explore
four different training paradigms: coreset selection, dataset distillation,
data-free knowledge distillation, and synthetic data generated from diffusion
models. While all these methods utilize synthetic data for training, they lead
to vastly different conclusions regarding privacy preservation. We caution that
empirical approaches to preserving data privacy require careful and rigorous
evaluation; otherwise, they risk providing a false sense of privacy.
|
2502.12977
|
Time-series attribution maps with regularized contrastive learning
|
stat.ML cs.AI cs.LG q-bio.NC
|
Gradient-based attribution methods aim to explain decisions of deep learning
models but so far lack identifiability guarantees. Here, we propose a method to
generate attribution maps with identifiability guarantees by developing a
regularized contrastive learning algorithm trained on time-series data plus a
new attribution method called Inverted Neuron Gradient (collectively named
xCEBRA). We show theoretically that xCEBRA has favorable properties for
identifying the Jacobian matrix of the data generating process. Empirically, we
demonstrate robust approximation of zero vs. non-zero entries in the
ground-truth attribution map on synthetic datasets, and significant
improvements across previous attribution methods based on feature ablation,
Shapley values, and other gradient-based methods. Our work constitutes a first
example of identifiable inference of time-series attribution maps and opens
avenues to a better understanding of time-series data, such as for neural
dynamics and decision-processes within neural networks.
|
2502.12978
|
Statistically Significant $k$NNAD by Selective Inference
|
stat.ML cs.LG
|
In this paper, we investigate the problem of unsupervised anomaly detection
using the k-Nearest Neighbor method. The k-Nearest Neighbor Anomaly Detection
(kNNAD) is a simple yet effective approach for identifying anomalies across
various domains and fields. A critical challenge in anomaly detection,
including kNNAD, is appropriately quantifying the reliability of detected
anomalies. To address this, we formulate kNNAD as a statistical hypothesis test
and quantify the probability of false detection using $p$-values. The main
technical challenge lies in performing both anomaly detection and statistical
testing on the same data, which hinders correct $p$-value calculation within
the conventional statistical testing framework. To resolve this issue, we
introduce a statistical hypothesis testing framework called Selective Inference
(SI) and propose a method named Statistically Significant kNNAD (Stat-kNNAD). By
leveraging SI, the Stat-kNNAD method ensures that detected anomalies are
statistically significant with theoretical guarantees. The proposed Stat-kNNAD
method is applicable to anomaly detection in both the original feature space
and latent feature spaces derived from deep learning models. Through numerical
experiments on synthetic data and applications to industrial product anomaly
detection, we demonstrate the validity and effectiveness of the Stat-kNNAD
method.
|
2502.12979
|
Electron flow matching for generative reaction mechanism prediction
obeying conservation laws
|
cs.LG
|
Central to our understanding of chemical reactivity is the principle of mass
conservation, which is fundamental for ensuring physical consistency, balancing
equations, and guiding reaction design. However, data-driven computational
models for tasks such as reaction product prediction rarely abide by this most
basic constraint. In this work, we recast the problem of reaction prediction as
a problem of electron redistribution using the modern deep generative framework
of flow matching. Our model, FlowER, overcomes limitations inherent in previous
approaches by enforcing exact mass conservation, thereby resolving
hallucinatory failure modes, recovering mechanistic reaction sequences for
unseen substrate scaffolds, and generalizing effectively to out-of-domain
reaction classes with extremely data-efficient fine-tuning. FlowER additionally
enables estimation of thermodynamic or kinetic feasibility and manifests a
degree of chemical intuition in reaction prediction tasks. This inherently
interpretable framework represents a significant step in bridging the gap
between predictive accuracy and mechanistic understanding in data-driven
reaction outcome prediction.
|
2502.12981
|
Towards Variational Flow Matching on General Geometries
|
cs.LG math.DG
|
We introduce Riemannian Gaussian Variational Flow Matching (RG-VFM), an
extension of Variational Flow Matching (VFM) that leverages Riemannian Gaussian
distributions for generative modeling on structured manifolds. We derive a
variational objective for probability flows on manifolds with closed-form
geodesics, making RG-VFM comparable to, though fundamentally different from,
Riemannian Flow Matching (RFM) in this geometric setting. Experiments on a
checkerboard dataset wrapped on the sphere demonstrate that RG-VFM captures
geometric structure more effectively than Euclidean VFM and baseline methods,
establishing it as a robust framework for manifold-aware generative modeling.
|
2502.12982
|
Sailor2: Sailing in South-East Asia with Inclusive Multilingual LLMs
|
cs.CL cs.AI cs.LG
|
Sailor2 is a family of cutting-edge multilingual language models for
South-East Asian (SEA) languages, available in 1B, 8B, and 20B sizes to suit
diverse applications. Building on Qwen2.5, Sailor2 undergoes continuous
pre-training on 500B tokens (400B SEA-specific and 100B replay tokens) to
support 13 SEA languages while retaining proficiency in Chinese and English.
The Sailor2-20B model achieves a 50-50 win rate against GPT-4o across SEA
languages. We also deliver a comprehensive cookbook on how to develop
multilingual models in an efficient manner, covering five key aspects: data
curation, pre-training, post-training, model customization, and evaluation. We
hope that the Sailor2 model (Apache 2.0 license) will drive language
development in the SEA region and that the Sailor2 cookbook will inspire
researchers to build more inclusive LLMs for other under-served languages.
|
2502.12984
|
On Erlang mixture approximations for differential equations with
distributed time delays
|
math.DS cs.NA cs.SY eess.SY math.NA
|
In this paper, we propose a general approach for approximate simulation and
analysis of delay differential equations (DDEs) with distributed time delays
based on methods for ordinary differential equations (ODEs). The key innovation
is that we 1) approximate the kernel by the probability density function of an
Erlang mixture and 2) use the linear chain trick to transform the approximate
DDEs to ODEs. Furthermore, we prove that an approximation with infinitely many
terms converges for continuous and bounded kernels and for specific choices of
the coefficients. We compare the steady states of the original DDEs and their
stability criteria to those of the approximate system of ODEs, and we propose
an approach based on bisection and least-squares estimation for determining
optimal parameter values in the approximation. Finally, we present numerical
examples that demonstrate the accuracy and convergence rate obtained with the
optimal parameters and the efficacy of the proposed approach for bifurcation
analysis and Monte Carlo simulation. The numerical examples involve a modified
logistic equation and a point reactor kinetics model of a molten salt nuclear
fission reactor.
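As a minimal sketch of the key idea (parameter values are illustrative, not taken from the paper): for an Erlang(m, rate b) delay kernel, the linear chain trick replaces the distributed delay by m linear ODE stages, shown here on a delayed logistic equation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Linear chain trick: a distributed delay with an Erlang(m, rate b) kernel
# is equivalent to appending a chain of m linear ODE stages. Parameters
# below are illustrative, not taken from the paper.
r, K = 1.0, 2.0   # logistic growth rate and carrying capacity (assumed)
m, b = 4, 4.0     # Erlang shape and rate; mean delay = m / b = 1 (assumed)

def rhs(t, y):
    x, z = y[0], y[1:]
    dx = r * x * (1.0 - z[-1] / K)   # delayed feedback enters via last stage
    dz = np.empty(m)
    dz[0] = b * (x - z[0])           # first stage driven by the state
    dz[1:] = b * (z[:-1] - z[1:])    # remaining stages pass it down the chain
    return np.concatenate(([dx], dz))

y0 = np.full(m + 1, 0.1)             # constant history assumed
sol = solve_ivp(rhs, (0.0, 200.0), y0, rtol=1e-8, atol=1e-10)
print(round(sol.y[0, -1], 2))        # settles near the carrying capacity K
```

For this stable parameter choice the chained ODE system relaxes to the same steady state as the logistic equation, illustrating why the steady-state comparison in the paper is meaningful.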
|
2502.12985
|
PartSDF: Part-Based Implicit Neural Representation for Composite 3D
Shape Parametrization and Optimization
|
cs.CV cs.AI
|
Accurate 3D shape representation is essential in engineering applications
such as design, optimization, and simulation. In practice, engineering
workflows require structured, part-aware representations, as objects are
inherently designed as assemblies of distinct components. However, most
existing methods either model shapes holistically or decompose them without
predefined part structures, limiting their applicability in real-world design
tasks. We propose PartSDF, a supervised implicit representation framework that
explicitly models composite shapes with independent, controllable parts while
maintaining shape consistency. Despite its simple single-decoder architecture,
PartSDF outperforms both supervised and unsupervised baselines in
reconstruction and generation tasks. We further demonstrate its effectiveness
as a structured shape prior for engineering applications, enabling precise
control over individual components while preserving overall coherence. Code
available at https://github.com/cvlab-epfl/PartSDF.
|
2502.12987
|
Ensemble Kalman filter in latent space using a variational autoencoder
pair
|
cs.LG physics.ao-ph
|
Popular (ensemble) Kalman filter data assimilation (DA) approaches assume
that the errors in both the a priori estimate of the state and those in the
observations are Gaussian. For constrained variables, e.g. sea ice
concentration or stress, such an assumption does not hold. The variational
autoencoder (VAE) is a machine learning (ML) technique that allows mapping an
arbitrary distribution to/from a latent space in which the distribution is
supposedly closer to a Gaussian. We propose a novel hybrid DA-ML approach in
which VAEs are incorporated in the DA procedure. Specifically, we introduce a
variant of the popular ensemble transform Kalman filter (ETKF) in which the
analysis is applied in the latent space of a single VAE or a pair of VAEs. In
twin experiments with a simple circular model, whereby the circle represents an
underlying submanifold to be respected, we find that the use of a VAE ensures
that a posteriori ensemble members lie close to the manifold containing the truth.
Furthermore, online updating of the VAE is necessary and achievable when this
manifold varies in time, i.e. when it is non-stationary. We demonstrate that
introducing an additional second latent space for the observational innovations
improves robustness against detrimental effects of non-Gaussianity and bias in
the observational errors, but slightly lessens performance when observational
errors are strictly Gaussian.
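As a rough sketch of the analysis step involved, below is a standard symmetric-square-root ETKF update written for a generic ensemble; in the proposed method this update would be applied to latent vectors produced by a trained VAE encoder, with the decoder mapping the analysis back to state space. The encoder/decoder pair is omitted here and all inputs are synthetic.

```python
import numpy as np

# Standard symmetric-square-root ETKF analysis step, the update the paper
# applies in a VAE latent space; the VAE pair is omitted, so Z can be read
# as a latent-space ensemble. All inputs below are synthetic.
def etkf_analysis(Z, y, H, R):
    """Z: (n, N) ensemble; y: (p,) observation; H: (p, n); R: (p, p)."""
    n, N = Z.shape
    zbar = Z.mean(axis=1, keepdims=True)
    A = Z - zbar                                  # ensemble anomalies
    Y = H @ A                                     # observation-space anomalies
    Rinv = np.linalg.inv(R)
    C = Y.T @ Rinv @ Y / (N - 1)
    evals, evecs = np.linalg.eigh(np.eye(N) + C)
    Pa = evecs @ np.diag(1.0 / evals) @ evecs.T   # (I + C)^{-1}
    T = evecs @ np.diag(evals ** -0.5) @ evecs.T  # symmetric square root
    wbar = Pa @ Y.T @ Rinv @ (y - (H @ zbar).ravel()) / (N - 1)
    return zbar + A @ (wbar[:, None] + T)         # analysis ensemble (n, N)

Z = np.array([[0.0, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 1.0]])
Za = etkf_analysis(Z, np.array([2.0, 2.0]), np.eye(2), 0.01 * np.eye(2))
print(Za.mean(axis=1))   # analysis mean pulled toward the observation
```

With an accurate observation (small R), the analysis mean moves close to y while the spread contracts, which is the behavior the latent-space variant aims to preserve on the submanifold.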
|
2502.12988
|
Beyond Profile: From Surface-Level Facts to Deep Persona Simulation in
LLMs
|
cs.CL
|
Previous approaches to persona simulation with large language models (LLMs) have
typically relied on learning basic biographical information, or using limited
role-play dialogue datasets to capture a character's responses. However, a
holistic representation of an individual goes beyond surface-level facts and
conversations to deeper thoughts and reasoning. In this work, we introduce
CharacterBot, a model designed to replicate both the linguistic patterns and
distinctive thought processes of a character. Using Lu Xun, a renowned Chinese
writer, as a case study, we propose four training tasks derived from his 17
essay collections. These include a pre-training task focused on mastering
external linguistic structures and knowledge, as well as three fine-tuning
tasks: multiple-choice question answering, generative question answering, and
style transfer, each aligning the LLM with Lu Xun's internal ideation and
writing style. To optimize learning across these tasks, we introduce a CharLoRA
parameter updating mechanism, where a general linguistic style expert
collaborates with other task-specific experts to better capture both the
language style and the deeper thoughts of the character. We evaluate CharacterBot on
three tasks for linguistic accuracy and opinion comprehension, demonstrating
that it significantly outperforms the baselines on our adapted metrics. We hope
that this work inspires future research on deep character persona simulation
with LLMs.
|
2502.12992
|
B-cos LM: Efficiently Transforming Pre-trained Language Models for
Improved Explainability
|
cs.CL cs.AI
|
Post-hoc explanation methods for black-box models often struggle with
faithfulness and human interpretability due to the lack of explainability in
current neural models. Meanwhile, B-cos networks have been introduced to
improve model explainability through architectural and computational
adaptations, but their application has so far been limited to computer vision
models and their associated training pipelines. In this work, we introduce
B-cos LMs, i.e., B-cos networks empowered for NLP tasks. Our approach directly
transforms pre-trained language models into B-cos LMs by combining B-cos
conversion and task fine-tuning, improving efficiency compared to previous
B-cos methods. Our automatic and human evaluation results demonstrate that
B-cos LMs produce more faithful and human-interpretable explanations than
post-hoc methods, while maintaining task performance comparable to conventional
fine-tuning. Our in-depth analysis explores how B-cos LMs differ from
conventionally fine-tuned models in their learning processes and explanation
patterns. Finally, we provide practical guidelines for effectively building
B-cos LMs based on our findings. Our code is available at
https://anonymous.4open.science/r/bcos_lm.
|
2502.12993
|
Approximate Tree Completion and Learning-Augmented Algorithms for Metric
Minimum Spanning Trees
|
cs.DS cs.DM cs.LG
|
Finding a minimum spanning tree (MST) for $n$ points in an arbitrary metric
space is a fundamental primitive for hierarchical clustering and many other ML
tasks, but this takes $\Omega(n^2)$ time to even approximate. We introduce a
framework for metric MSTs that first (1) finds a forest of disconnected
components using practical heuristics, and then (2) finds a small-weight set
of edges to connect disjoint components of the forest into a spanning tree. We
prove that optimally solving the second step still takes $\Omega(n^2)$ time,
but we provide a subquadratic 2.62-approximation algorithm. In the spirit of
learning-augmented algorithms, we then show that if the forest found in step
(1) overlaps with an optimal MST, we can approximate the original MST problem
in subquadratic time, where the approximation factor depends on a measure of
overlap. In practice, we find nearly optimal spanning trees for a wide range of
metrics, while being orders of magnitude faster than exact algorithms.
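Step (2) can be illustrated with a naive quadratic baseline that greedily joins the forest's components using the cheapest inter-component edges; the paper's subquadratic 2.62-approximation is of course more involved, and the points and components below are hypothetical.

```python
import itertools
import math

# Naive sketch of step (2): connect a given forest's components into one
# spanning tree by greedily adding the cheapest inter-component edge
# (Kruskal-style with union-find over components). Quadratic in n; the
# paper's approximation algorithm avoids this cost. Example data is made up.
def connect_components(points, components):
    # components: list of lists of point indices forming the forest
    comp_of = {i: c for c, members in enumerate(components) for i in members}
    # candidate edges between different components, cheapest first
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in itertools.combinations(range(len(points)), 2)
        if comp_of[i] != comp_of[j])
    parent = list(range(len(components)))
    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c
    added = []
    for w, i, j in edges:
        a, b = find(comp_of[i]), find(comp_of[j])
        if a != b:            # edge merges two distinct components
            parent[a] = b
            added.append((i, j))
    return added              # exactly len(components) - 1 edges

pts = [(0, 0), (0, 1), (5, 0), (5, 1)]
print(connect_components(pts, [[0, 1], [2, 3]]))
```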
|
2502.12994
|
SHADeS: Self-supervised Monocular Depth Estimation Through
Non-Lambertian Image Decomposition
|
cs.CV
|
Purpose: Visual 3D scene reconstruction can support colonoscopy navigation.
It can help in recognising which portions of the colon have been visualised and
characterising the size and shape of polyps. This is still a very challenging
problem due to complex illumination variations, including abundant specular
reflections. We investigate how to effectively decouple light and depth in this
problem.
Methods: We introduce a self-supervised model that simultaneously
characterises the shape and lighting of the visualised colonoscopy scene. Our
model estimates shading, albedo, depth, and specularities (SHADeS) from single
images. Unlike previous approaches (IID), we use a non-Lambertian model that
treats specular reflections as a separate light component. The implementation
of our method is available at https://github.com/RemaDaher/SHADeS.
Results: We demonstrate on real colonoscopy images (Hyper Kvasir) that
previous models for light decomposition (IID) and depth estimation (MonoVIT,
Monodepth2) are negatively affected by specularities. In contrast, SHADeS can
simultaneously produce light decomposition and depth maps that are robust to
specular regions. We also perform a quantitative comparison on phantom data
(C3VD) where we further demonstrate the robustness of our model.
Conclusion: Modelling specular reflections improves depth estimation in
colonoscopy. We propose an effective self-supervised approach that uses this
insight to jointly estimate light decomposition and depth. Light decomposition
has the potential to help with other problems, such as place recognition within
the colon.
|
2502.12995
|
Free Argumentative Exchanges for Explaining Image Classifiers
|
cs.AI
|
Deep learning models are powerful image classifiers but their opacity hinders
their trustworthiness. Explanation methods that capture the reasoning process
within these classifiers faithfully and clearly are scarce, due to the
classifiers' sheer complexity and size. We address this problem by
defining a novel method for explaining the outputs of image classifiers with
debates between two agents, each arguing for a particular class. We obtain
these debates as concrete instances of Free Argumentative eXchanges (FAXs), a
novel argumentation-based multi-agent framework allowing agents to internalise
opinions by other agents differently than originally stated. We define two
metrics (consensus and persuasion rate) to assess the usefulness of FAXs as
argumentative explanations for image classifiers. We then conduct a number of
empirical experiments showing that FAXs perform well along these metrics as
well as being more faithful to the image classifiers than conventional,
non-argumentative explanation methods. All our implementations can be found at
https://github.com/koriavinash1/FAX.
|
2502.12996
|
Eager Updates For Overlapped Communication and Computation in DiLoCo
|
cs.CL
|
Distributed optimization methods such as DiLoCo have been shown to be
effective in training very large models across multiple distributed workers,
such as datacenters. These methods split updates into two parts: an inner
optimization phase, where the workers independently execute multiple
optimization steps on their own local data, and an outer optimization step,
where the inner updates are synchronized. While such approaches require orders
of magnitude less communication than standard data-parallel training, in
settings where the workers are datacenters, even the limited communication
requirements of these approaches can still cause significant slowdowns due to
the blocking necessary at each outer optimization step. In this paper, we
investigate techniques to mitigate this issue by overlapping communication with
computation in a manner that allows the outer optimization step to fully
overlap with the inner optimization phase. We show that a particular variant,
dubbed eager updates, provides competitive performance with standard DiLoCo in
settings with low bandwidth between workers.
|
2502.12998
|
Personalized Top-k Set Queries Over Predicted Scores
|
cs.DB cs.AI cs.LG
|
This work studies the applicability of expensive external oracles such as
large language models in answering top-k queries over predicted scores. Such
scores are produced by user-defined functions to answer personalized queries
over multi-modal data. We propose a generic computational framework that
handles arbitrary set-based scoring functions, as long as the functions can be
decomposed into constructs, each of which is sent to an oracle (in our case an
LLM) to predict partial scores. At a given point in time, the framework assumes
a set of responses and their partial predicted scores, and it maintains a
collection of possible sets that are likely to be the true top-k. Since calling
oracles is costly, our framework judiciously identifies the next construct,
i.e., the next best question to ask the oracle so as to maximize the likelihood
of identifying the true top-k. We present a principled probabilistic model that
quantifies that likelihood. We study efficiency opportunities in designing
algorithms. We run an evaluation with three large scale datasets, scoring
functions, and baselines. Experiments indicate the efficacy of our framework,
as it requires an order of magnitude fewer LLM calls than the baselines while
ensuring result accuracy. Scalability experiments further
indicate that our framework could be used in large-scale applications.
|
2502.12999
|
Asymptotic Optimism of Random-Design Linear and Kernel Regression Models
|
stat.ML cs.LG math.ST stat.TH
|
We derive the closed-form asymptotic optimism of linear regression models
under random designs and generalize it to kernel ridge regression. Using
scaled asymptotic optimism as a generic predictive model complexity measure,
we study the fundamentally different behaviors of linear regression models,
neural tangent kernel (NTK) regression models, and three-layer fully connected
neural networks (NNs). Our contribution is two-fold: we provide theoretical
grounds for using scaled optimism as a model predictive complexity measure,
and we show empirically that NNs with ReLU activations behave differently from
kernel models under this measure. With resampling techniques, we can also
compute the optimism for regression models on real data.
|
2502.13000
|
Edge-Colored Clustering in Hypergraphs: Beyond Minimizing Unsatisfied
Edges
|
cs.DS cs.DM cs.LG
|
We consider a framework for clustering edge-colored hypergraphs, where the
goal is to cluster (equivalently, to color) objects based on the primary type
of multiway interactions they participate in. One well-studied objective is to
color nodes to minimize the number of unsatisfied hyperedges -- those
containing one or more nodes whose color does not match the hyperedge color. We
motivate and present advances for several directions that extend beyond this
minimization problem. We first provide new algorithms for maximizing satisfied
edges, which is the same at optimality but is much more challenging to
approximate, with all prior work restricted to graphs. We develop the first
approximation algorithm for hypergraphs, and then refine it to improve the
best-known approximation factor for graphs. We then introduce new objective
functions that incorporate notions of balance and fairness, and provide new
hardness results, approximations, and fixed-parameter tractability results.
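For intuition only, the node-coloring objective can be sketched with a naive majority-vote baseline (this is not one of the paper's algorithms; the example hypergraph is hypothetical):

```python
from collections import Counter, defaultdict

# Naive baseline only (not the paper's algorithm): color each node with the
# most frequent color among the hyperedges containing it, then count the
# satisfied hyperedges, i.e., those all of whose nodes match the edge color.
def majority_color(hyperedges):
    # hyperedges: list of (color, [nodes])
    tallies = defaultdict(Counter)
    for color, nodes in hyperedges:
        for v in nodes:
            tallies[v][color] += 1
    coloring = {v: c.most_common(1)[0][0] for v, c in tallies.items()}
    satisfied = sum(all(coloring[v] == color for v in nodes)
                    for color, nodes in hyperedges)
    return coloring, satisfied

edges = [("red", [1, 2]), ("red", [1, 3]), ("blue", [2, 3, 4])]
coloring, sat = majority_color(edges)
print(sat)  # number of hyperedges whose color matches all their nodes
```

Maximizing satisfied edges and minimizing unsatisfied edges share the same optimum, but as the abstract notes, the maximization version is much harder to approximate than this baseline suggests.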
|
2502.13001
|
You need to MIMIC to get FAME: Solving Meeting Transcript Scarcity with
Multi-Agent Conversations
|
cs.AI cs.CL
|
Meeting summarization suffers from limited high-quality data, mainly due to
privacy restrictions and expensive collection processes. We address this gap
with FAME, a dataset of 500 meetings in English and 300 in German produced by
MIMIC, our new multi-agent meeting synthesis framework that generates meeting
transcripts from a given knowledge source by defining psychologically grounded
participant profiles, outlining the conversation, and orchestrating a large
language model (LLM) debate. A modular post-processing step refines these
outputs, mitigating potential repetitiveness and overly formal tones, ensuring
coherent, credible dialogues at scale. We also propose a psychologically
grounded evaluation framework assessing naturalness, social behavior
authenticity, and transcript difficulty. Human assessments show that FAME
approximates real-meeting spontaneity (4.5/5 in naturalness), preserves
speaker-centric challenges (3/5 in spoken language), and introduces richer
information-oriented difficulty (4/5 in difficulty). These findings highlight
that FAME is a good and scalable proxy for real-world meeting conditions. It
enables new test scenarios for meeting summarization research and other
conversation-centric applications in tasks requiring conversation data or
simulating social scenarios under behavioral constraints.
|
2502.13004
|
Language Barriers: Evaluating Cross-Lingual Performance of CNN and
Transformer Architectures for Speech Quality Estimation
|
cs.CL
|
Objective speech quality models aim to predict human-perceived speech quality
using automated methods. However, cross-lingual generalization remains a major
challenge, as Mean Opinion Scores (MOS) vary across languages due to
linguistic, perceptual, and dataset-specific differences. A model trained
primarily on English data may struggle to generalize to languages with
different phonetic, tonal, and prosodic characteristics, leading to
inconsistencies in objective assessments. This study investigates the
cross-lingual performance of two speech quality models: NISQA, a CNN-based
model, and a Transformer-based Audio Spectrogram Transformer (AST) model. Both
models were trained exclusively on English datasets containing over 49,000
speech samples and subsequently evaluated on speech in German, French,
Mandarin, Swedish, and Dutch. We analyze model performance using Pearson
Correlation Coefficient (PCC) and Root Mean Square Error (RMSE) across five
speech quality dimensions: coloration, discontinuity, loudness, noise, and MOS.
Our findings show that while AST achieves a more stable cross-lingual
performance, both models exhibit noticeable biases. Notably, Mandarin speech
quality predictions correlate highly with human MOS scores, whereas Swedish and
Dutch present greater prediction challenges. Discontinuities remain difficult
to model across all languages. These results highlight the need for more
balanced multilingual datasets and architecture-specific adaptations to improve
cross-lingual generalization.
|
2502.13006
|
Integrating Reinforcement Learning, Action Model Learning, and Numeric
Planning for Tackling Complex Tasks
|
cs.AI
|
Automated Planning algorithms require a model of the domain that specifies
the preconditions and effects of each action. Obtaining such a domain model is
notoriously hard. Algorithms for learning domain models exist, yet it remains
unclear whether learning a domain model and planning is an effective approach
for numeric planning environments, i.e., where states include discrete and
numeric state variables. In this work, we explore the benefits of learning a
numeric domain model and compare it with alternative model-free solutions. As a
case study, we use two tasks in Minecraft, a popular sandbox game that has been
used as an AI challenge. First, we consider an offline learning setting, where
a set of expert trajectories is available to learn from. This is the standard
setting for learning domain models. We used the Numeric Safe Action Model
Learning (NSAM) algorithm to learn a numeric domain model and solve new
problems with the learned domain model and a numeric planner. We call this
model-based solution NSAM_(+p), and compare it to several model-free Imitation
Learning (IL) and Offline Reinforcement Learning (RL) algorithms. Empirical
results show that some IL algorithms can learn faster to solve simple tasks,
while NSAM_(+p) allows solving tasks that require long-term planning and
enables generalizing to solve problems in larger environments. Then, we
consider an online learning setting, where learning is done by moving an agent
in the environment. For this setting, we introduce RAMP. In RAMP, observations
collected during the agent's execution are used to simultaneously train an RL
policy and learn a planning domain action model. This forms a positive feedback
loop between the RL policy and the learned domain model. We demonstrate
experimentally the benefits of using RAMP, showing that it finds more efficient
plans and solves more problems than several RL baselines.
|
2502.13010
|
Adaptive Knowledge Graphs Enhance Medical Question Answering: Bridging
the Gap Between LLMs and Evolving Medical Knowledge
|
cs.CL cs.MA
|
Large Language Models (LLMs) have significantly advanced medical
question-answering by leveraging extensive clinical data and medical
literature. However, the rapid evolution of medical knowledge and the
labor-intensive process of manually updating domain-specific resources pose
challenges to the reliability of these systems. To address this, we introduce
Adaptive Medical Graph-RAG (AMG-RAG), a comprehensive framework that automates
the construction and continuous updating of medical knowledge graphs,
integrates reasoning, and retrieves current external evidence, such as PubMed
and WikiSearch. By dynamically linking new findings and complex medical
concepts, AMG-RAG not only improves accuracy but also enhances interpretability
in medical queries.
Evaluations on the MEDQA and MEDMCQA benchmarks demonstrate the effectiveness
of AMG-RAG, achieving an F1 score of 74.1 percent on MEDQA and an accuracy of
66.34 percent on MEDMCQA, outperforming both comparable models and those 10 to
100 times larger. Notably, these improvements are achieved without increasing
computational overhead, highlighting the critical role of automated knowledge
graph generation and external evidence retrieval in delivering up-to-date,
trustworthy medical insights.
|
2502.13012
|
Towards a Design Guideline for RPA Evaluation: A Survey of Large
Language Model-Based Role-Playing Agents
|
cs.HC cs.CL
|
Role-Playing Agent (RPA) is an increasingly popular type of LLM Agent that
simulates human-like behaviors in a variety of tasks. However, evaluating RPAs
is challenging due to diverse task requirements and agent designs. This paper
proposes an evidence-based, actionable, and generalizable evaluation design
guideline for LLM-based RPA by systematically reviewing 1,676 papers published
between Jan. 2021 and Dec. 2024. Our analysis identifies six agent attributes,
seven task attributes, and seven evaluation metrics from existing literature.
Based on these findings, we present an RPA evaluation design guideline to help
researchers develop more systematic and consistent evaluation methods.
|
2502.13013
|
HOMIE: Humanoid Loco-Manipulation with Isomorphic Exoskeleton Cockpit
|
cs.RO cs.AI cs.HC
|
Current humanoid teleoperation systems either lack reliable low-level control
policies, or struggle to acquire accurate whole-body control commands, making
it difficult to teleoperate humanoids for loco-manipulation tasks. To solve
these issues, we propose HOMIE, a novel humanoid teleoperation cockpit that
integrates a humanoid loco-manipulation policy and a low-cost exoskeleton-based
hardware system. The policy enables humanoid robots to walk and squat to
specific heights while accommodating arbitrary upper-body poses. This is
achieved through our novel reinforcement learning-based training framework that
incorporates upper-body pose curriculum, height-tracking reward, and symmetry
utilization, without relying on any motion priors. Complementing the policy,
the hardware system integrates isomorphic exoskeleton arms, a pair of
motion-sensing gloves, and a pedal, allowing a single operator to achieve full
control of the humanoid robot. Our experiments show our cockpit facilitates
more stable, rapid, and precise humanoid loco-manipulation teleoperation,
accelerating task completion and eliminating retargeting errors compared to
inverse kinematics-based methods. We also validate the effectiveness of the
data collected by our cockpit for imitation learning. Our project is fully
open-sourced, demos and code can be found in https://homietele.github.io/.
|
2502.13016
|
LLM-Powered Proactive Data Systems
|
cs.DB cs.AI
|
With the power of LLMs, we now have the ability to query data that was
previously impossible to query, including text, images, and video. However,
despite this enormous potential, most present-day data systems that leverage
LLMs are reactive, reflecting our community's desire to map LLMs to known
abstractions. Most data systems treat LLMs as an opaque black box that operates
on user inputs and data as is, optimizing them much like any other approximate,
expensive UDF, in conjunction with other relational operators. Such data
systems do as they are told, but fail to understand and leverage what the LLM
is being asked to do (i.e. the underlying operations, which may be
error-prone), the data the LLM is operating on (e.g., long, complex documents),
or what the user really needs. They don't take advantage of the characteristics
of the operations and/or the data at hand, or ensure correctness of results
when there are imprecisions and ambiguities. We argue that data systems instead
need to be proactive: they need to be given more agency -- armed with the power
of LLMs -- to understand and rework the user inputs and the data and to make
decisions on how the operations and the data should be represented and
processed. By allowing the data system to parse, rewrite, and decompose user
inputs and data, or to interact with the user in ways that go beyond the
standard single-shot query-result paradigm, the data system is able to address
user needs more efficiently and effectively. These new capabilities lead to a
rich design space where the data system takes more initiative: they are
empowered to perform optimization based on the transformation operations, data
characteristics, and user intent. We discuss various successful examples of how
this framework has been and can be applied in real-world tasks, and present
future directions for this ambitious research agenda.
|
2502.13017
|
Mean of Means: Human Localization with Calibration-free and
Unconstrained Camera Settings (extended version)
|
cs.CV cs.GR
|
Accurate human localization is crucial for various applications, especially
in the Metaverse era. Existing high-precision solutions rely on expensive,
tag-dependent hardware, while vision-based methods offer a cheaper, tag-free
alternative. However, current vision solutions based on stereo vision face
limitations due to rigid perspective transformation principles and error
propagation in multi-stage SVD solvers. These solutions also require multiple
high-resolution cameras with strict setup constraints. To address these
limitations, we propose a probabilistic approach that considers all points on
the human body as observations generated by a distribution centered around the
body's geometric center. This enables us to improve sampling significantly,
increasing the number of samples for each point of interest from hundreds to
billions. By modeling the relation between the means of the distributions of
world coordinates and pixel coordinates, leveraging the Central Limit Theorem,
we ensure normality and facilitate the learning process. Experimental results
demonstrate human localization accuracy of 96\% within a 0.3$m$ range and
nearly 100\% accuracy within a 0.5$m$ range, achieved at a low cost of only 10
USD using two web cameras with a resolution of 640$\times$480 pixels.
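The averaging idea can be sketched in a few lines (synthetic data only, not the paper's camera pipeline): body points are treated as draws from a distribution around the geometric center, so the mean of per-frame means concentrates at the true center by the Central Limit Theorem.

```python
import numpy as np

# Toy illustration of the central idea: points on a body are draws from a
# distribution centered at the body's geometric center, so the mean of
# per-frame means is approximately normal around the true center (CLT).
# All numbers below are synthetic stand-ins, not the paper's setup.
rng = np.random.default_rng(0)
true_center = np.array([3.0, 1.5])                        # assumed coordinates
frames = rng.normal(true_center, 0.5, size=(200, 50, 2))  # 200 frames, 50 pts
per_frame_means = frames.mean(axis=1)                     # one mean per frame
estimate = per_frame_means.mean(axis=0)                   # mean of means
print(np.linalg.norm(estimate - true_center))             # small residual
```

The standard error shrinks like 1/sqrt(total samples), which is why scaling the effective sample count from hundreds to billions, as the abstract describes, makes the normality assumption and the localization estimate so much tighter.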
|
2502.13019
|
Oreo: A Plug-in Context Reconstructor to Enhance Retrieval-Augmented
Generation
|
cs.CL
|
Despite the remarkable capabilities of Large Language Models (LLMs) in
various NLP tasks, they remain vulnerable to hallucinations due to their
limited parametric knowledge and lack of domain-specific expertise.
Retrieval-Augmented Generation (RAG) addresses this challenge by incorporating
external document retrieval to augment the knowledge base of LLMs. In this
approach, RAG retrieves document chunks from an external corpus in response to
a query, which are then used as context for the downstream language model to
generate an answer. However, these retrieved knowledge sources often include
irrelevant or erroneous information, undermining the effectiveness of RAG in
downstream tasks. To overcome this limitation, we introduce a compact,
efficient, and pluggable module designed to refine external knowledge sources
before feeding them to the generator. The module reconstructs retrieved content
by extracting the most relevant and supportive information and reorganising it
into a concise, query-specific format. Through a three-stage training paradigm
- comprising supervised fine-tuning, contrastive multi-task learning, and
reinforcement learning-based alignment - it prioritises critical knowledge and
aligns it with the generator's preferences. This method enables LLMs to produce
outputs that are more accurate, reliable, and contextually appropriate.
|
2502.13022
|
Efficient and Sharp Off-Policy Learning under Unobserved Confounding
|
cs.LG
|
We develop a novel method for personalized off-policy learning in scenarios
with unobserved confounding. Thereby, we address a key limitation of standard
policy learning: standard policy learning assumes unconfoundedness, meaning
that no unobserved factors influence both treatment assignment and outcomes.
However, this assumption is often violated, in which case standard policy
learning produces biased estimates and thus leads to policies that can be
harmful. To address this limitation, we employ causal sensitivity analysis and
derive a statistically efficient estimator for a sharp bound on the value
function under unobserved confounding. Our estimator has three advantages: (1)
Unlike existing works, our estimator avoids unstable minimax optimization based
on inverse propensity weighted outcomes. (2) Our estimator is statistically
efficient. (3) We prove that our estimator leads to the optimal
confounding-robust policy. Finally, we extend our theory to the related task of
policy improvement under unobserved confounding, i.e., when a baseline policy
such as the standard of care is available. We show in experiments with
synthetic and real-world data that our method outperforms simple plug-in
approaches and existing baselines. Our method is highly relevant for
decision-making where unobserved confounding can be problematic, such as in
healthcare and public policy.
|
2502.13023
|
Detection and Geographic Localization of Natural Objects in the Wild: A
Case Study on Palms
|
cs.CV cs.LG
|
Palms are ecological and economic indicators of tropical forest health,
biodiversity, and human impact that support local economies and global forest
product supply chains. While palm detection in plantations is well-studied,
efforts to map naturally occurring palms in dense forests remain limited by
overlapping crowns, uneven shading, and heterogeneous landscapes. We develop
PRISM (Processing, Inference, Segmentation, and Mapping), a flexible pipeline
for detecting and localizing palms in dense tropical forests using large
orthomosaic images. Orthomosaics are created from thousands of aerial images
and span several to hundreds of gigabytes. Our contributions are threefold.
First, we construct a large UAV-derived orthomosaic dataset collected across 21
ecologically diverse sites in western Ecuador, annotated with 8,830 bounding
boxes and 5,026 palm center points. Second, we evaluate multiple
state-of-the-art object detectors based on efficiency and performance,
integrating zero-shot SAM 2 as the segmentation backbone, and refining the
results for precise geographic mapping. Third, we apply calibration methods to
align confidence scores with IoU and explore saliency maps for feature
explainability. Though optimized for palms, PRISM is adaptable for identifying
other natural objects, such as eastern white pines. Future work will explore
transfer learning for lower-resolution datasets (0.5 to 1m).
|
2502.13024
|
Fragility-aware Classification for Understanding Risk and Improving
Generalization
|
cs.LG math.OC
|
Classification models play a critical role in data-driven decision-making
applications such as medical diagnosis, user profiling, recommendation systems,
and default detection. Traditional performance metrics, such as accuracy, focus
on overall error rates but fail to account for the confidence of incorrect
predictions, thereby overlooking the risk of confident misjudgments. This risk
is particularly significant in cost-sensitive and safety-critical domains like
medical diagnosis and autonomous driving, where overconfident false predictions
may cause severe consequences. To address this issue, we introduce the
Fragility Index (FI), a novel metric that evaluates classification performance
from a risk-averse perspective by explicitly capturing the tail risk of
confident misjudgments. To enhance generalizability, we define FI within the
robust satisficing (RS) framework, incorporating data uncertainty. We further
develop a model training approach that optimizes FI while maintaining
tractability for common loss functions. Specifically, we derive exact
reformulations for cross-entropy loss, hinge-type loss, and Lipschitz loss, and
extend the approach to deep learning models. Through synthetic experiments and
real-world medical diagnosis tasks, we demonstrate that FI effectively
identifies misjudgment risk and FI-based training improves model robustness and
generalizability. Finally, we extend our framework to deep neural network
training, further validating its effectiveness in enhancing deep learning
models.
|
2502.13025
|
Agentic Deep Graph Reasoning Yields Self-Organizing Knowledge Networks
|
cs.AI cond-mat.mtrl-sci cs.CL cs.LG
|
We present an agentic, autonomous graph expansion framework that iteratively
structures and refines knowledge in situ. Unlike conventional knowledge graph
construction methods relying on static extraction or single-pass learning, our
approach couples a reasoning-native large language model with a continually
updated graph representation. At each step, the system actively generates new
concepts and relationships, merges them into a global graph, and formulates
subsequent prompts based on its evolving structure. Through this
feedback-driven loop, the model organizes information into a scale-free network
characterized by hub formation, stable modularity, and bridging nodes that link
disparate knowledge clusters. Over hundreds of iterations, new nodes and edges
continue to appear without saturating, while centrality measures and shortest
path distributions evolve to yield increasingly distributed connectivity. Our
analysis reveals emergent patterns, such as the rise of highly connected 'hub'
concepts and the shifting influence of 'bridge' nodes, indicating that agentic,
self-reinforcing graph construction can yield open-ended, coherent knowledge
structures. Applied to materials design problems, we present compositional
reasoning experiments by extracting node-specific and synergy-level principles
to foster genuinely novel knowledge synthesis, yielding cross-domain ideas that
transcend rote summarization and strengthen the framework's potential for
open-ended scientific discovery. We discuss other applications in scientific
discovery and outline future directions for enhancing scalability and
interpretability.
|
2502.13027
|
A deep learning framework for efficient pathology image analysis
|
cs.CV
|
Artificial intelligence (AI) has transformed digital pathology by enabling
biomarker prediction from high-resolution whole slide images (WSIs). However,
current methods are computationally inefficient, processing thousands of
redundant tiles per WSI and requiring complex aggregator models. We introduce
EAGLE (Efficient Approach for Guided Local Examination), a deep learning
framework that emulates pathologists by selectively analyzing informative
regions. EAGLE incorporates two foundation models: CHIEF for efficient tile
selection and Virchow2 for extracting high-quality features. Benchmarking was
conducted against leading slide- and tile-level foundation models across 31
tasks from four cancer types, spanning morphology, biomarker prediction and
prognosis. EAGLE outperformed state-of-the-art foundation models by up to 23%
and achieved the highest AUROC overall. It processed a slide in 2.27 seconds,
reducing computational time by more than 99% compared to existing models. This
efficiency enables real-time workflows, allows pathologists to validate all
tiles which are used by the model during analysis, and eliminates dependence on
high-performance computing, making AI-powered pathology more accessible. By
reliably identifying meaningful regions and minimizing artifacts, EAGLE
provides robust and interpretable outputs, supporting rapid slide searches,
integration into multi-omics pipelines and emerging clinical foundation models.
|
2502.13028
|
Whose story is it? Personalizing story generation by inferring author
styles
|
cs.CL
|
Personalization has become essential for improving user experience in
interactive writing and educational applications, yet its potential in story
generation remains largely unexplored. In this work, we propose a novel
two-stage pipeline for personalized story generation. Our approach first infers
an author's implicit story-writing characteristics from their past work and
organizes them into an Author Writing Sheet, inspired by narrative theory. The
second stage uses this sheet to simulate the author's persona through tailored
persona descriptions and personalized story writing rules. To enable and
validate our approach, we construct Mythos, a dataset of 590 stories from 64
authors across five distinct sources that reflect diverse story-writing
settings. A head-to-head comparison with a non-personalized baseline
demonstrates our pipeline's effectiveness in generating high-quality
personalized stories. Our personalized stories achieve a 75 percent win rate
(versus 14 percent for the baseline and 11 percent ties) in capturing authors'
writing style based on their past works. Human evaluation highlights the high
quality of our Author Writing Sheet and provides valuable insights into the
personalized story generation task. Notable takeaways are that writings from
certain sources, such as Reddit, are easier to personalize than others, like
AO3, while narrative aspects, like Creativity and Language Use, are easier to
personalize than others, like Plot.
|
2502.13030
|
Likelihood-Ratio Regularized Quantile Regression: Adapting Conformal
Prediction to High-Dimensional Covariate Shifts
|
stat.ML cs.AI cs.LG
|
We consider the problem of conformal prediction under covariate shift. Given
labeled data from a source domain and unlabeled data from a covariate shifted
target domain, we seek to construct prediction sets with valid marginal
coverage in the target domain. Most existing methods require estimating the
unknown likelihood ratio function, which can be prohibitive for
high-dimensional data such as images. To address this challenge, we introduce
the likelihood ratio regularized quantile regression (LR-QR) algorithm, which
combines the pinball loss with a novel choice of regularization in order to
construct a threshold function without directly estimating the unknown
likelihood ratio. We show that the LR-QR method has coverage at the desired
level in the target domain, up to a small error term that we can control. Our
proofs draw on a novel analysis of coverage via stability bounds from learning
theory. Our experiments demonstrate that the LR-QR algorithm outperforms
existing methods on high-dimensional prediction tasks, including a regression
task for the Communities and Crime dataset, and an image classification task
from the WILDS repository.
|
2502.13031
|
HPSS: Heuristic Prompting Strategy Search for LLM Evaluators
|
cs.CL
|
As the adoption of large language models (LLMs) for text evaluation has become
increasingly prevalent in the field of natural language processing (NLP), a
series of existing works has attempted to optimize the prompts for LLM
evaluators to improve their alignment with human judgment. However, their
efforts are limited to optimizing individual factors of evaluation prompts,
such as evaluation criteria or output formats, neglecting the combinatorial
impact of multiple factors, which leads to insufficient optimization of the
evaluation pipeline. Nevertheless, identifying well-behaved prompting
strategies for adjusting multiple factors requires extensive enumeration. To
this end, we comprehensively integrate 8 key factors for evaluation prompts and
propose a novel automatic prompting strategy optimization method called
Heuristic Prompting Strategy Search (HPSS). Inspired by the genetic algorithm,
HPSS conducts an iterative search to find well-behaved prompting strategies for
LLM evaluators. A heuristic function is employed to guide the search process,
enhancing the performance of our algorithm. Extensive experiments across four
evaluation tasks demonstrate the effectiveness of HPSS, consistently
outperforming both human-designed evaluation prompts and existing automatic
prompt optimization methods.
|
2502.13034
|
Natural Language Generation from Visual Sequences: Challenges and Future
Directions
|
cs.CL cs.AI cs.CV cs.LG
|
The ability to use natural language to talk about visual content is at the
core of human intelligence and a crucial feature of any artificial intelligence
system. Various studies have focused on generating text for single images. In
contrast, comparatively little attention has been paid to exhaustively
analyzing and advancing work on multiple-image vision-to-text settings. In this
position paper, we claim that any task dealing with temporally ordered
sequences of multiple images or frames is an instance of a broader, more
general problem involving the understanding of intricate relationships between
the visual content and the corresponding text. We comprehensively analyze five
tasks that are instances of this problem and argue that they pose a common set
of challenges and share similarities in terms of modeling and evaluation
approaches. Based on the insights from these various aspects and stages of
multi-image-to-text generation, we highlight several open questions and suggest
future research directions. We believe that these directions can advance the
understanding of complex phenomena in this domain and the development of better
models.
|
2502.13037
|
Enhancing Power Grid Inspections with Machine Learning
|
cs.CV
|
Ensuring the safety and reliability of power grids is critical as global
energy demands continue to rise. Traditional inspection methods, such as manual
observations or helicopter surveys, are resource-intensive and lack
scalability. This paper explores the use of 3D computer vision to automate
power grid inspections, utilizing the TS40K dataset -- a high-density,
annotated collection of 3D LiDAR point clouds. By concentrating on 3D semantic
segmentation, our approach addresses challenges like class imbalance and noisy
data to enhance the detection of critical grid components such as power lines
and towers. The benchmark results indicate significant performance
improvements, with IoU scores reaching 95.53% for the detection of power lines
using transformer-based models. Our findings illustrate the potential for
integrating ML into grid maintenance workflows, increasing efficiency and
enabling proactive risk management strategies.
|
2502.13042
|
Network-Realized Model Predictive Control Part I: NRF-Enabled
Closed-loop Decomposition
|
eess.SY cs.SY
|
A two-layer control architecture is proposed, which promotes scalable
implementations for model predictive controllers. The top layer acts as both
reference governor for the bottom layer, and as a feedback controller for the
regulated network. By employing set-based methods, global theoretical
guarantees are obtained by enforcing local constraints upon the variables of
the network and of the first layer's implementation. The proposed technique
offers recursive feasibility guarantees as one of its central features, and the
expressions of the resulting predictive strategies bear a striking resemblance
to classical formulations from model predictive control literature, allowing
for flexible and easily customizable implementations.
|
2502.13044
|
Do we still need Human Annotators? Prompting Large Language Models for
Aspect Sentiment Quad Prediction
|
cs.CL
|
Aspect sentiment quadruple prediction (ASQP) facilitates a detailed
understanding of opinions expressed in a text by identifying the opinion term,
aspect term, aspect category and sentiment polarity for each opinion. However,
annotating a full set of training examples to fine-tune models for ASQP is a
resource-intensive process. In this study, we explore the capabilities of large
language models (LLMs) for zero- and few-shot learning on the ASQP task across
five diverse datasets. We report F1 scores slightly below those obtained with
state-of-the-art fine-tuned models but exceeding previously reported zero- and
few-shot performance. In the 40-shot setting on the Rest16 restaurant domain
dataset, LLMs achieved an F1 score of 52.46, compared to 60.39 by the
best-performing fine-tuned method MVP. Additionally, we report the performance
of LLMs in target aspect sentiment detection (TASD), where the F1 scores were
also close to fine-tuned models, achieving 66.03 on Rest16 in the 40-shot
setting, compared to 72.76 with MVP. While human annotators remain essential
for achieving optimal performance, LLMs can reduce the need for extensive
manual annotation in ASQP tasks.
|
2502.13049
|
$k$-Graph: A Graph Embedding for Interpretable Time Series Clustering
|
cs.LG
|
Time series clustering poses a significant challenge with diverse
applications across domains. A prominent drawback of existing solutions lies in
their limited interpretability, often confined to presenting users with
centroids. In addressing this gap, our work presents $k$-Graph, an unsupervised
method explicitly crafted to augment interpretability in time series
clustering. Leveraging a graph representation of time series subsequences,
$k$-Graph constructs multiple graph representations based on different
subsequence lengths. This feature accommodates variable-length time series
without requiring users to predetermine subsequence lengths. Our experimental
results reveal that $k$-Graph outperforms current state-of-the-art time series
clustering algorithms in accuracy, while providing users with meaningful
explanations and interpretations of the clustering outcomes.
|
2502.13053
|
AEIA-MN: Evaluating the Robustness of Multimodal LLM-Powered Mobile
Agents Against Active Environmental Injection Attacks
|
cs.CL
|
As researchers continuously optimize AI agents to perform tasks more
effectively within operating systems, they often neglect to address the
critical need for enabling these agents to identify "impostors" within the
system. Through an analysis of the agents' operating environment, we identified
a potential threat: attackers can disguise their attack methods as
environmental elements, injecting active disturbances into the agents'
execution process, thereby disrupting their decision-making. We define this
type of attack as Active Environment Injection Attack (AEIA). Based on this, we
propose AEIA-MN, an active environment injection attack scheme that exploits
interaction vulnerabilities in the mobile operating system to evaluate the
robustness of MLLM-based agents against such threats. Experimental results show
that even advanced MLLMs are highly vulnerable to this attack, achieving a
maximum attack success rate of 93% in the AndroidWorld benchmark.
|
2502.13055
|
LAMD: Context-driven Android Malware Detection and Classification with
LLMs
|
cs.CR cs.AI cs.LG
|
The rapid growth of mobile applications has escalated Android malware
threats. Although there are numerous detection methods, they often struggle
with evolving attacks, dataset biases, and limited explainability. Large
Language Models (LLMs) offer a promising alternative with their zero-shot
inference and reasoning capabilities. However, applying LLMs to Android malware
detection presents two key challenges: (1) the extensive support code in Android
applications, often spanning thousands of classes, exceeds LLMs' context limits
and obscures malicious behavior within benign functionality; (2) the structural
complexity and interdependencies of Android applications surpass LLMs'
sequence-based reasoning, fragmenting code analysis and hindering malicious
intent inference. To address these challenges, we propose LAMD, a practical
context-driven framework to enable LLM-based Android malware detection. LAMD
integrates key context extraction to isolate security-critical code regions and
construct program structures, then applies tier-wise code reasoning to analyze
application behavior progressively, from low-level instructions to high-level
semantics, providing final prediction and explanation. A well-designed factual
consistency verification mechanism is equipped to mitigate LLM hallucinations
from the first tier. Evaluation in real-world settings demonstrates LAMD's
effectiveness over conventional detectors, establishing a feasible basis for
LLM-driven malware analysis in dynamic threat landscapes.
|
2502.13056
|
Benchmarking MedMNIST dataset on real quantum hardware
|
quant-ph cs.LG
|
Quantum machine learning (QML) has emerged as a promising domain to leverage
the computational capabilities of quantum systems to solve complex
classification tasks. In this work, we present the first comprehensive QML
study, benchmarking MedMNIST (a diverse collection of medical imaging datasets)
on 127-qubit real IBM quantum hardware to evaluate the feasibility and
performance of quantum models (without any classical neural networks) in
practical applications. This study explores recent advancements in quantum
computing such as device-aware quantum circuits, error suppression and
mitigation for medical image classification. Our methodology comprises four
stages: preprocessing, generation of noise-resilient and hardware-efficient
quantum circuits, optimizing/training of quantum circuits on classical
hardware, and inference on real IBM quantum hardware. Firstly, we process all
input images in the preprocessing stage to reduce the spatial dimension due to
the quantum hardware limitations. We generate hardware-efficient quantum
circuits using backend properties, expressive enough to learn complex patterns
for medical image classification. After classical optimization of QML models, we
perform the inference on real quantum hardware. We also incorporate advanced
error suppression and mitigation techniques in our QML workflow including
dynamical decoupling (DD), gate twirling, and matrix-free measurement
mitigation (M3) to mitigate the effects of noise and improve classification
performance. The experimental results showcase the potential of quantum
computing for medical imaging and establish a benchmark for future
advancements in QML applied to healthcare.
|
2502.13059
|
SimpleVQA: Multimodal Factuality Evaluation for Multimodal Large
Language Models
|
cs.CL
|
The increasing application of multi-modal large language models (MLLMs)
across various sectors has spotlighted the essence of their output reliability
and accuracy, particularly their ability to produce content grounded in factual
information (e.g. common and domain-specific knowledge). In this work, we
introduce SimpleVQA, the first comprehensive multi-modal benchmark to evaluate
the ability of MLLMs to answer short natural language questions factually.
SimpleVQA is characterized by six key features: it covers multiple tasks and
multiple scenarios, ensures high quality and challenging queries, maintains
static and timeless reference answers, and is straightforward to evaluate. Our
approach involves categorizing visual question-answering items into 9 different
tasks around objective events or common knowledge and situating these within 9
topics. Rigorous quality control processes are implemented to guarantee
high-quality, concise, and clear answers, facilitating evaluation with minimal
variance via an LLM-as-a-judge scoring system. Using SimpleVQA, we perform a
comprehensive assessment of 18 leading MLLMs and 8 text-only LLMs, delving into
their image comprehension and text generation abilities by identifying and
analyzing error cases.
|
2502.13061
|
Improved Fine-Tuning of Large Multimodal Models for Hateful Meme
Detection
|
cs.CL cs.AI cs.CV cs.LG
|
Hateful memes have become a significant concern on the Internet,
necessitating robust automated detection systems. While large multimodal models
have shown strong generalization across various tasks, they exhibit poor
generalization to hateful meme detection due to the dynamic nature of memes
tied to emerging social trends and breaking news. Recent work further
highlights the limitations of conventional supervised fine-tuning for large
multimodal models in this context. To address these challenges, we propose
Large Multimodal Model Retrieval-Guided Contrastive Learning (LMM-RGCL), a
novel two-stage fine-tuning framework designed to improve both in-domain
accuracy and cross-domain generalization. Experimental results on six widely
used meme classification datasets demonstrate that LMM-RGCL achieves
state-of-the-art performance, outperforming agent-based systems such as
VPD-PALI-X-55B. Furthermore, our method effectively generalizes to
out-of-domain memes under low-resource settings, surpassing models like GPT-4o.
|
2502.13062
|
AI-Assisted Decision Making with Human Learning
|
cs.AI cs.GT cs.HC
|
AI systems increasingly support human decision-making. In many cases, despite
the algorithm's superior performance, the final decision remains in human
hands. For example, an AI may assist doctors in determining which diagnostic
tests to run, but the doctor ultimately makes the diagnosis. This paper studies
such AI-assisted decision-making settings, where the human learns through
repeated interactions with the algorithm. In our framework, the algorithm --
designed to maximize decision accuracy according to its own model -- determines
which features the human can consider. The human then makes a prediction based
on their own less accurate model. We observe that the discrepancy between the
algorithm's model and the human's model creates a fundamental tradeoff. Should
the algorithm prioritize recommending more informative features, encouraging
the human to recognize their importance, even if it results in less accurate
predictions in the short term until learning occurs? Or is it preferable to
forgo educating the human and instead select features that align more closely
with their existing understanding, minimizing the immediate cost of learning?
This tradeoff is shaped by the algorithm's time-discounted objective and the
human's learning ability. Our results show that optimal feature selection has a
surprisingly clean combinatorial characterization, reducible to a stationary
sequence of feature subsets that is tractable to compute. As the algorithm
becomes more "patient" or the human's learning improves, the algorithm
increasingly selects more informative features, enhancing both prediction
accuracy and the human's understanding. Notably, early investment in learning
leads to the selection of more informative features than a later investment. We
complement our analysis by showing that the impact of errors in the algorithm's
knowledge is limited as it does not make the prediction directly.
|
2502.13063
|
Cramming 1568 Tokens into a Single Vector and Back Again: Exploring the
Limits of Embedding Space Capacity
|
cs.CL cs.LG
|
A range of recent works addresses the problem of compressing a sequence of
tokens into a shorter sequence of real-valued vectors to be used as inputs
instead of token embeddings or a key-value cache. These approaches make it
possible to reduce the amount of compute in existing language models. Despite relying on
powerful models as encoders, the maximum attainable lossless compression ratio
is typically not higher than x10. This fact is highly intriguing because, in
theory, the maximum information capacity of large real-valued vectors is far
beyond the presented rates even for 16-bit precision and a modest vector size.
In this work, we explore the limits of compression by replacing the encoder
with a per-sample optimization procedure. We show that vectors with compression
ratios up to x1500 exist, which highlights two orders of magnitude gap between
existing and practically attainable solutions. Furthermore, we empirically show
that the compression limits are determined not by the length of the input but
by the amount of uncertainty to be reduced, namely, the cross-entropy loss on
this sequence without any conditioning. The obtained limits highlight the
substantial gap between the theoretical capacity of input embeddings and their
practical utilization, suggesting significant room for optimization in model
design.
|
2502.13069
|
Interactive Agents to Overcome Ambiguity in Software Engineering
|
cs.AI
|
AI agents are increasingly being deployed to automate tasks, often based on
ambiguous and underspecified user instructions. Making unwarranted assumptions
and failing to ask clarifying questions can lead to suboptimal outcomes, safety
risks due to tool misuse, and wasted computational resources. In this work, we
study the ability of LLM agents to handle ambiguous instructions in interactive
code generation settings by evaluating proprietary and open-weight models on
their performance across three key steps: (a) leveraging interactivity to
improve performance in ambiguous scenarios, (b) detecting ambiguity, and (c)
asking targeted questions. Our findings reveal that models struggle to
distinguish between well-specified and underspecified instructions. However,
when models interact for underspecified inputs, they effectively obtain vital
information from the user, leading to significant improvements in performance
and underscoring the value of effective interaction. Our study highlights
critical gaps in how current state-of-the-art models handle ambiguity in
complex software engineering tasks and structures the evaluation into distinct
steps to enable targeted improvements.
|
2502.13071
|
RobuRCDet: Enhancing Robustness of Radar-Camera Fusion in Bird's Eye
View for 3D Object Detection
|
cs.CV
|
While recent low-cost radar-camera approaches have shown promising results in
multi-modal 3D object detection, both sensors face challenges from
environmental and intrinsic disturbances. Poor lighting or adverse weather
conditions degrade camera performance, while radar suffers from noise and
positional ambiguity. Achieving robust radar-camera 3D object detection
requires consistent performance across varying conditions, a topic that has not
yet been fully explored. In this work, we first conduct a systematic analysis
of robustness in radar-camera detection on five kinds of noises and propose
RobuRCDet, a robust object detection model in BEV. Specifically, we design a 3D
Gaussian Expansion (3DGE) module to mitigate inaccuracies in radar points,
including position, Radar Cross-Section (RCS), and velocity. The 3DGE uses RCS
and velocity priors to generate a deformable kernel map and variance for kernel
size adjustment and value distribution. Additionally, we introduce a
weather-adaptive fusion module, which adaptively fuses radar and camera
features based on camera signal confidence. Extensive experiments on the
popular benchmark, nuScenes, show that our model achieves competitive results
in regular and noisy conditions.
|
2502.13073
|
Network-Realized Model Predictive Control Part II: Distributed
Constraint Management
|
eess.SY cs.SY
|
A two-layer control architecture is proposed, which promotes scalable
implementations for model predictive controllers. The top layer acts as both
reference governor for the bottom layer, and as a feedback controller for the
regulated network. By employing set-based methods, global theoretical
guarantees are obtained by enforcing local constraints upon the variables of
the network and of the first layer's implementation. The proposed technique
offers recursive feasibility guarantees as one of its central features, and the
expressions of the resulting predictive strategies bear a striking resemblance
to classical formulations from model predictive control literature, allowing
for flexible and easily customizable implementations.
|
2502.13076
|
KAPPA: A Generic Patent Analysis Framework with Keyphrase-Based
Portraits
|
cs.CL
|
Patent analysis highly relies on concise and interpretable document
representations, referred to as patent portraits. Keyphrases, both present and
absent, are ideal candidates for patent portraits due to their brevity,
representativeness, and clarity. In this paper, we introduce KAPPA, an
integrated framework designed to construct keyphrase-based patent portraits and
enhance patent analysis. KAPPA operates in two phases: patent portrait
construction and portrait-based analysis. To ensure effective portrait
construction, we propose a semantic-calibrated keyphrase generation paradigm
that integrates pre-trained language models with a prompt-based hierarchical
decoding strategy to leverage the multi-level structural characteristics of
patents. For portrait-based analysis, we develop a comprehensive framework that
employs keyphrase-based patent portraits to enable efficient and accurate
patent analysis. Extensive experiments on benchmark datasets of keyphrase
generation, the proposed model achieves significant improvements compared to
state-of-the-art baselines. Further experiments conducted on real-world patent
applications demonstrate that our keyphrase-based portraits effectively capture
domain-specific knowledge and enrich semantic representation for patent
analysis tasks.
|
2502.13077
|
Pricing is All You Need to Improve Traffic Routing
|
eess.SY cs.SY
|
We investigate the design of pricing policies that enhance driver adherence
to route guidance, ensuring effective routing control. The major novelty lies
in that we adopt a Markov chain to model drivers' compliance rates conditioned
on both traffic states and tolls. By formulating the managed traffic network as
a nonlinear stochastic dynamical system, we can quantify in a more realistic
way the impacts of driver route choices and thus determine appropriate tolls.
Specifically, we focus on a network comprising one corridor and one local
street. We assume that a reasonable routing policy is specified in advance.
However, drivers could be reluctant to be detoured. Thus a fixed toll is set on
the corridor to give drivers incentives to choose the local street. We evaluate
the effectiveness of the given routing and pricing policies via stability
analysis. We suggest using the stability and instability conditions to
establish lower and upper bounds on throughput. This allows us to select
suitable tolls that maximize these bounds.
|
2502.13078
|
L4P: Low-Level 4D Vision Perception Unified
|
cs.CV
|
The spatio-temporal relationship between the pixels of a video carries
critical information for low-level 4D perception. A single model that reasons
about it should be able to solve several such tasks well. Yet, most
state-of-the-art methods rely on architectures specialized for the task at
hand. We present L4P (pronounced "LAP"), a feedforward, general-purpose
architecture that solves low-level 4D perception tasks in a unified framework.
L4P combines a ViT-based backbone with per-task heads that are lightweight and
therefore do not require extensive training. Despite its general and
feedforward formulation, our method matches or surpasses the performance of
existing specialized methods on both dense tasks, such as depth or optical flow
estimation, and sparse tasks, such as 2D/3D tracking. Moreover, it solves all
those tasks at once in a time comparable to that of individual single-task
methods.
|
2502.13080
|
BOLIMES: Boruta and LIME optiMized fEature Selection for Gene Expression
Classification
|
cs.LG cs.AI
|
Gene expression classification is a pivotal yet challenging task in
bioinformatics, primarily due to the high dimensionality of genomic data and
the risk of overfitting. To address these challenges, we propose BOLIMES, a novel
feature selection algorithm designed to enhance gene expression classification
by systematically refining the feature subset. Unlike conventional methods that
rely solely on statistical ranking or classifier-specific selection, we
integrate the robustness of Boruta with the interpretability of LIME, ensuring
that only the most relevant and influential genes are retained. BOLIMES first
employs Boruta to filter out non-informative genes by comparing each feature
against its randomized counterpart, thus preserving valuable information. It
then uses LIME to rank the remaining genes based on their local importance to
the classifier. Finally, an iterative classification evaluation determines the
optimal feature subset by selecting the number of genes that maximizes
predictive accuracy. By combining exhaustive feature selection with
interpretability-driven refinement, our solution effectively balances
dimensionality reduction with high classification performance, offering a
powerful solution for high-dimensional gene expression analysis.
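The final, iterative stage can be sketched as follows; `evaluate` stands in for
retraining and scoring a classifier on the top-k genes (the Boruta filtering
and LIME ranking that precede it are assumed to have produced `ranked_genes`),
and the accuracy curve below is fabricated for illustration.

```python
def select_optimal_subset(ranked_genes, evaluate):
    # Grow the subset one gene at a time (in LIME-ranked order) and keep
    # the size that maximizes the evaluated accuracy.
    best_k, best_acc = 0, float("-inf")
    for k in range(1, len(ranked_genes) + 1):
        acc = evaluate(ranked_genes[:k])
        if acc > best_acc:
            best_k, best_acc = k, acc
    return ranked_genes[:best_k], best_acc

def toy_evaluate(subset):
    # Hypothetical accuracy curve: improves up to 3 genes, then overfits.
    curve = {1: 0.70, 2: 0.82, 3: 0.91, 4: 0.88, 5: 0.85}
    return curve[len(subset)]
```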
|
2502.13081
|
Personalized Image Generation with Deep Generative Models: A Decade
Survey
|
cs.CV
|
Recent advancements in generative models have significantly facilitated the
development of personalized content creation. Given a small set of images
depicting a user-specific concept, personalized image generation creates images
that incorporate the specified concept and adhere to the provided text
descriptions. Due to its wide applications in content creation, significant
effort has been devoted to this field in recent years. Nonetheless, the
technologies used for personalization have evolved alongside the development of
generative models, each with distinct yet interrelated components. In this
survey, we present a comprehensive review of generalized personalized image
generation across various generative models, including traditional GANs,
contemporary text-to-image diffusion models, and emerging multimodal
autoregressive models. We first define a unified framework that standardizes
the personalization process across different generative models, encompassing
three key components, i.e., inversion spaces, inversion methods, and
personalization schemes. This unified framework offers a structured approach to
dissecting and comparing personalization techniques across different generative
architectures. Building upon this unified framework, we further provide an
in-depth analysis of personalization techniques within each generative model,
highlighting their unique contributions and innovations. Through comparative
analysis, this survey elucidates the current landscape of personalized image
generation, identifying commonalities and distinguishing features among
existing methods. Finally, we discuss the open challenges in the field and
propose potential directions for future research. We continuously track related works
at https://github.com/csyxwei/Awesome-Personalized-Image-Generation.
|
2502.13082
|
Automated Linear Parameter-Varying Modeling of Nonlinear Systems: A
Global Embedding Approach
|
eess.SY cs.SY
|
In this paper, an automated Linear Parameter-Varying (LPV) model conversion
approach is proposed for nonlinear dynamical systems. The proposed method
achieves global embedding of the original nonlinear behavior of the system by
leveraging the second fundamental theorem of calculus to factorize matrix
function expressions without any approximation. The implementation of the
proposed method in the LPVcore toolbox for Matlab is discussed, and its
performance is showcased on a comprehensive example of automated LPV model
conversion of an unbalanced disk system, which is then used to design an LPV
controller that is deployed on the original nonlinear system. In addition, the
conversion capabilities are further demonstrated by obtaining an LPV embedding
of a three-degree-of-freedom control moment gyroscope. All software
implementations are available at www.lpvcore.net.
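The core factorization idea can be illustrated on a scalar example: by the
fundamental theorem of calculus, f(x) - f(0) = A(x) x with
A(x) = ∫₀¹ f'(λx) dλ, so the nonlinear map is embedded exactly (not
approximated) in a parameter-varying linear form. The quadrature below is a
minimal sketch, not the LPVcore implementation.

```python
import math

def lpv_factor(fprime, x, n=1000):
    # A(x) = integral over lambda in [0, 1] of f'(lambda * x),
    # computed with the trapezoidal rule; then f(x) - f(0) = A(x) * x.
    h = 1.0 / n
    s = 0.5 * (fprime(0.0) + fprime(x))
    for i in range(1, n):
        s += fprime(i * h * x)
    return s * h

# Example: f(x) = sin(x), so f'(x) = cos(x) and A(x) * x should equal sin(x).
A = lpv_factor(math.cos, 1.3)
```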
|
2502.13085
|
A Neural Difference-of-Entropies Estimator for Mutual Information
|
stat.ML cs.IT cs.LG math.IT
|
Estimating Mutual Information (MI), a key measure of dependence between random
quantities that requires no specific modelling assumptions, is a challenging problem in
high dimensions. We propose a novel mutual information estimator based on
parametrizing conditional densities using normalizing flows, a deep generative
model that has gained popularity in recent years. This estimator leverages a
block autoregressive structure to achieve improved bias-variance trade-offs on
standard benchmark tasks.
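The difference-of-entropies decomposition I(X;Y) = h(X) - h(X|Y) can be checked
in closed form for jointly Gaussian variables, where the estimator's target is
known exactly (this is a sanity check of the decomposition, not the proposed
flow-based estimator):

```python
import math

def diff_entropy_gaussian(var):
    # Differential entropy of a 1-D Gaussian: 0.5 * ln(2 * pi * e * var).
    return 0.5 * math.log(2 * math.pi * math.e * var)

def mutual_info_doe(var_x, rho):
    # MI as a difference of entropies, I = h(X) - h(X|Y); for jointly
    # Gaussian (X, Y) the conditional variance is var_x * (1 - rho**2).
    return diff_entropy_gaussian(var_x) - diff_entropy_gaussian(var_x * (1 - rho ** 2))

# Matches the known closed form for Gaussians: I = -0.5 * ln(1 - rho**2).
mi = mutual_info_doe(1.0, 0.8)
```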
|
2502.13090
|
tn4ml: Tensor Network Training and Customization for Machine Learning
|
cs.LG cs.MS quant-ph
|
Tensor Networks have emerged as a prominent alternative to neural networks
for addressing Machine Learning challenges in foundational sciences, paving the
way for their applications to real-life problems. This paper introduces tn4ml,
a novel library designed to seamlessly integrate Tensor Networks into
optimization pipelines for Machine Learning tasks. Inspired by existing Machine
Learning frameworks, the library offers a user-friendly structure with modules
for data embedding, objective function definition, and model training using
diverse optimization strategies. We demonstrate its versatility through two
examples: supervised learning on tabular data and unsupervised learning on an
image dataset. Additionally, we analyze how customizing the parts of the
Machine Learning pipeline for Tensor Networks influences performance metrics.
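As a minimal illustration of the kind of pipeline such a library automates
(hypothetical code, not the tn4ml API): a common data-embedding choice maps
each scalar feature to a two-component vector, and a matrix product state (MPS)
model is evaluated by contracting the embedded features against its cores.

```python
import math

def embed(x):
    # Standard trigonometric feature map for x in [0, 1] (illustrative).
    return (math.cos(math.pi * x / 2), math.sin(math.pi * x / 2))

def mps_evaluate(cores, features):
    # Each core has shape (left_bond, 2, right_bond) as nested lists;
    # contract the physical leg with the embedding, then chain-multiply
    # the resulting bond matrices from left to right.
    vec = [1.0]  # left boundary (bond dimension 1)
    for core, x in zip(cores, features):
        phi = embed(x)
        mat = [[sum(phi[p] * core[l][p][r] for p in range(2))
                for r in range(len(core[0][0]))]
               for l in range(len(core))]
        vec = [sum(vec[l] * mat[l][r] for l in range(len(vec)))
               for r in range(len(mat[0]))]
    return vec[0]
```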
|
2502.13092
|
Text2World: Benchmarking Large Language Models for Symbolic World Model
Generation
|
cs.CL cs.AI
|
Recently, there has been growing interest in leveraging large language models
(LLMs) to generate symbolic world models from textual descriptions. Although
LLMs have been extensively explored in the context of world modeling, prior
studies encountered several challenges, including evaluation randomness,
dependence on indirect metrics, and a limited domain scope. To address these
limitations, we introduce a novel benchmark, Text2World, based on planning
domain definition language (PDDL), featuring hundreds of diverse domains and
employing multi-criteria, execution-based metrics for a more robust evaluation.
We benchmark current LLMs using Text2World and find that reasoning models
trained with large-scale reinforcement learning outperform others. However,
even the best-performing model still demonstrates limited capabilities in world
modeling. Building on these insights, we examine several promising strategies
to enhance the world modeling capabilities of LLMs, including test-time
scaling, agent training, and more. We hope that Text2World can serve as a
crucial resource, laying the groundwork for future research in leveraging LLMs
as world models. The project page is available at
https://text-to-world.github.io/.
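The flavor of a multi-criteria, execution-based evaluation can be sketched as
below (a toy illustration, not the Text2World metric): a generated PDDL string
is scored on several independent criteria, here syntactic balance and the
presence of required sections.

```python
def check_pddl(text):
    # Toy multi-criteria checker (illustrative): one syntactic criterion
    # (balanced parentheses) and several structural ones (required sections).
    results = {}
    depth = 0
    for ch in text:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                break
    results["balanced"] = depth == 0
    for kw in ("define", ":predicates", ":action"):
        results[kw] = kw in text
    return results

# A small (hypothetical) generated domain to score.
domain = """(define (domain blocks)
  (:predicates (clear ?x) (on ?x ?y))
  (:action pickup
    :parameters (?x)
    :precondition (clear ?x)
    :effect (not (clear ?x))))"""
```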
|