id | title | categories | abstract
|---|---|---|---|
2502.11688
|
From Isolates to Families: Using Neural Networks for Automated Language
Affiliation
|
cs.CL
|
In historical linguistics, the affiliation of languages to a common language
family is traditionally carried out using a complex workflow that relies on
manually comparing individual languages. Large-scale standardized collections
of multilingual wordlists and grammatical language structures might help to
improve this and open new avenues for developing automated language affiliation
workflows. Here, we present neural network models that use lexical and
grammatical data from a worldwide sample of more than 1,000 languages with
known affiliations to classify individual languages into families. In line with
the traditional assumption of most linguists, our results show that models
trained on lexical data alone outperform models solely based on grammatical
data, whereas combining both types of data yields even better performance. In
additional experiments, we show how our models can identify long-range
relations between entire subgroups, how they can be employed to investigate
potential relatives of linguistic isolates, and how they can help us obtain
first hints about the affiliation of as-yet unaffiliated languages. We conclude
that models for automated language affiliation trained on lexical and
grammatical data provide comparative linguists with a valuable tool for
evaluating hypotheses about deep and unknown language relations.
|
2502.11689
|
Improve LLM-as-a-Judge Ability as a General Ability
|
cs.CL
|
LLM-as-a-Judge leverages the generative and reasoning capabilities of large
language models (LLMs) to evaluate LLM responses across diverse scenarios,
providing accurate preference signals. This approach plays a vital role in
aligning LLMs with human values, ensuring ethical and reliable AI outputs that
align with societal norms. Recent studies have proposed many methods for
training LLMs as generative judges, but most of them are data-intensive or lack
accuracy, and focus only on the LLM's judging ability. In this work, we treat
judging as a general ability of LLMs and implement a two-stage training approach, comprising
supervised fine-tuning (SFT) warm-up and direct preference optimization (DPO)
enhancement, to achieve judge style adaptation and improve judgment accuracy.
Additionally, we introduce an efficient data synthesis method to generate
judgmental content. Experimental results demonstrate that our approach,
utilizing only about 2% to 40% of the data required by other methods, achieves
SOTA performance on RewardBench. Furthermore, our training method enhances the
general capabilities of the model by constructing complex judge tasks, and in
our tests the judge signals provided by our model significantly improved the
downstream DPO training of our internal policy models. We also open-source our model weights
and training data to facilitate further research.
|
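The two-stage SFT-then-DPO recipe described above hinges on the DPO objective. As a minimal illustrative sketch (not the paper's implementation; the function and variable names are hypothetical), the per-pair DPO loss over sequence log-probabilities can be written as:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    Inputs are total log-probabilities of the chosen/rejected responses
    under the policy and the frozen reference model.
    """
    # Implicit reward margin: how much more the policy prefers the
    # chosen response than the reference model does.
    margin = (policy_chosen_logp - ref_chosen_logp) - \
             (policy_rejected_logp - ref_rejected_logp)
    # -log sigmoid(beta * margin); minimized when the margin is large.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# The loss shrinks as the policy widens the preference margin.
easy = dpo_loss(-10.0, -30.0, -20.0, -20.0)   # large positive margin
hard = dpo_loss(-30.0, -10.0, -20.0, -20.0)   # negative margin
assert easy < hard
```

With a zero margin the loss is exactly log 2, which gives a quick sanity check when wiring this into a trainer.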
2502.11697
|
MVTokenFlow: High-quality 4D Content Generation using Multiview Token
Flow
|
cs.CV
|
In this paper, we present MVTokenFlow for high-quality 4D content creation
from monocular videos. Recent advancements in generative models such as video
diffusion models and multiview diffusion models enable us to create videos or
3D models. However, extending these generative models for dynamic 4D content
creation is still a challenging task that requires the generated content to be
consistent spatially and temporally. To address this challenge, MVTokenFlow
utilizes the multiview diffusion model to generate multiview images on
different timesteps, which attains spatial consistency across different
viewpoints and allows us to reconstruct a reasonable coarse 4D field. Then,
MVTokenFlow further regenerates all the multiview images using the rendered 2D
flows as guidance. The 2D flows effectively associate pixels from different
timesteps and improve the temporal consistency by reusing tokens in the
regeneration process. Finally, the regenerated images are spatiotemporally
consistent and utilized to refine the coarse 4D field to get a high-quality 4D
field. Experiments demonstrate the effectiveness of our design and show
significantly better quality than baseline methods.
|
2502.11703
|
CMQCIC-Bench: A Chinese Benchmark for Evaluating Large Language Models
in Medical Quality Control Indicator Calculation
|
cs.CL
|
Medical quality control indicators are essential to assess the qualifications
of healthcare institutions for medical services. With the impressive
performance of large language models (LLMs) like GPT-4 in the medical field,
leveraging these technologies for the Medical Quality Control Indicator
Calculation (MQCIC) presents a promising approach. In this work, (1) we
introduce a real-world task MQCIC and propose an open-source Chinese electronic
medical records (EMRs)-based dataset (CMQCIC-Bench) comprising 785 instances
and 76 indicators. (2) We propose a semi-automatic method to enhance the rule
representation. Then we propose the Clinical Facts-based Inferential Rule
(CF-IR) method that disentangles the clinical fact verification and inferential
rule reasoning actions. (3) We conduct comprehensive experiments on 20
representative LLMs, covering general and medical models. Our findings reveal
that CF-IR outperforms Chain-of-Thought methods in MQCIC tasks. (4) We conduct
an error analysis and investigate the capabilities of clinical fact
verification and inferential rule reasoning, providing insights for further
improving performance on the MQCIC task. The dataset and code are available at
https://anonymous.4open.science/r/C-MQCIC-1151.
|
2502.11705
|
LLM Agents Making Agent Tools
|
cs.CL cs.AI cs.LG cs.MA
|
Tool use has turned large language models (LLMs) into powerful agents that
can perform complex multi-step tasks by dynamically utilising external software
components. However, these tools must be implemented in advance by human
developers, hindering the applicability of LLM agents in domains which demand
large numbers of highly specialised tools, like in life sciences and medicine.
Motivated by the growing trend of scientific studies accompanied by public code
repositories, we propose ToolMaker, a novel agentic framework that autonomously
transforms papers with code into LLM-compatible tools. Given a short task
description and a repository URL, ToolMaker autonomously installs required
dependencies and generates code to perform the task, using a closed-loop
self-correction mechanism to iteratively diagnose and rectify errors. To
evaluate our approach, we introduce a benchmark comprising 15 diverse and
complex computational tasks spanning both medical and non-medical domains with
over 100 unit tests to objectively assess tool correctness and robustness.
ToolMaker correctly implements 80% of the tasks, substantially outperforming
current state-of-the-art software engineering agents. ToolMaker is therefore a
step towards fully autonomous agent-based scientific workflows.
|
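The closed-loop self-correction mechanism described above can be sketched generically: run the candidate tool, feed any error back into a repair step, and retry. This is an illustrative toy, not ToolMaker's actual code; `fix` stands in for an LLM repair call:

```python
def self_correct(candidate, run, fix, max_iters=5):
    """Closed-loop self-correction: run, diagnose, revise, repeat."""
    for _ in range(max_iters):
        try:
            return run(candidate)                 # success: return output
        except Exception as err:
            candidate = fix(candidate, str(err))  # revise from the error
    raise RuntimeError("could not repair the tool")

# Toy stand-ins: the "tool" is an expression and the "fix" patches a
# typo; in ToolMaker both would involve generated code and an LLM.
run = lambda code: eval(code)
fix = lambda code, err: code.replace("x", "2")
assert self_correct("x + 2", run, fix) == 4
```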
2502.11707
|
Ad-hoc Concept Forming in the Game Codenames as a Means for Evaluating
Large Language Models
|
cs.CL
|
This study utilizes the game Codenames as a benchmarking tool to evaluate
large language models (LLMs) with respect to specific linguistic and cognitive
skills. LLMs play each side of the game, where one side generates a clue word
covering several target words and the other guesses those target words. We
designed various experiments by controlling the choice of words (abstract vs.
concrete words, ambiguous vs. monosemic) or the opponent (programmed to be
faster or slower in revealing words). Recent commercial and open-weight models
were compared side-by-side to identify the factors affecting their performance. The
evaluation reveals details about their strategies, challenging cases, and
limitations of LLMs.
|
2502.11710
|
The Worse The Better: Content-Aware Viewpoint Generation Network for
Projection-related Point Cloud Quality Assessment
|
cs.CV
|
Through experimental studies, we observed that final predicted quality scores
are unstable, changing significantly across different viewpoint settings.
Inspired by the "wooden barrel theory" and given the default
content-independent viewpoints of existing projection-related PCQA approaches,
this paper presents a novel content-aware viewpoint generation network (CAVGN)
to learn better viewpoints by taking the distribution of geometric and
attribute features of degraded point clouds into consideration. Firstly, the
proposed CAVGN extracts multi-scale geometric and texture features of the
entire input point cloud, respectively. Then, for each default
content-independent viewpoint, the extracted geometric and texture features are
refined to focus on its corresponding visible part of the input point cloud.
Finally, the refined geometric and texture features are concatenated to
generate an optimized viewpoint. To train the proposed CAVGN, we present a
self-supervised viewpoint ranking network (SSVRN) to select the viewpoint with
the worst quality projected image to construct a default-optimized viewpoint
dataset, which consists of thousands of paired default viewpoints and
corresponding optimized viewpoints. Experimental results show that the
projection-related PCQA methods can achieve higher performance using the
viewpoints generated by the proposed CAVGN.
|
2502.11711
|
Knowledge-aware contrastive heterogeneous molecular graph learning
|
cs.LG cs.AI
|
Molecular representation learning is pivotal in predicting molecular
properties and advancing drug design. Traditional methodologies, which
predominantly rely on homogeneous graph encoding, are limited by their
inability to integrate external knowledge and represent molecular structures
across different levels of granularity. To address these limitations, we
propose a paradigm shift by encoding molecular graphs into heterogeneous
structures, introducing a novel framework: Knowledge-aware Contrastive
Heterogeneous Molecular Graph Learning (KCHML). This approach leverages
contrastive learning to enrich molecular representations with embedded external
knowledge. KCHML conceptualizes molecules through three distinct graph
views-molecular, elemental, and pharmacological-enhanced by heterogeneous
molecular graphs and a dual message-passing mechanism. This design offers a
comprehensive representation for property prediction, as well as for downstream
tasks such as drug-drug interaction (DDI) prediction. Extensive benchmarking
demonstrates KCHML's superiority over state-of-the-art molecular property
prediction models, underscoring its ability to capture intricate molecular
features.
|
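KCHML's contrastive learning over multiple graph views typically relies on an InfoNCE-style objective; the abstract does not give the exact loss, so the following is a hedged sketch of a standard per-anchor InfoNCE term over precomputed similarities:

```python
import math

def info_nce(sim_pos, sim_negs, temperature=0.5):
    """Contrastive (InfoNCE) loss for one anchor.

    sim_pos: similarity to the positive view (e.g., another graph view
    of the same molecule); sim_negs: similarities to other molecules.
    """
    logits = [sim_pos / temperature] + [s / temperature for s in sim_negs]
    m = max(logits)  # max-shift for numerical stability
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[0]  # -log softmax probability of the positive

# Loss shrinks as the positive pair becomes more similar than negatives.
assert info_nce(0.9, [0.1, 0.2]) < info_nce(0.2, [0.8, 0.9])
```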
2502.11712
|
Component-aware Unsupervised Logical Anomaly Generation for Industrial
Anomaly Detection
|
cs.CV
|
Anomaly detection is critical in industrial manufacturing for ensuring
product quality and improving efficiency in automated processes. The scarcity
of anomalous samples limits traditional detection methods, making anomaly
generation essential for expanding the data repository. However, recent
generative models often produce unrealistic anomalies that increase false
positives, or require real-world anomaly samples for training. In this work, we
treat anomaly generation as a compositional problem and propose ComGEN, a
component-aware and unsupervised framework that addresses the gap in logical
anomaly generation. Our method comprises a multi-component learning strategy to
disentangle visual components, followed by subsequent generation editing
procedures. Disentangled text-to-component pairs, which reveal intrinsic logical
constraints, guide attention-guided residual mapping and model training with
iteratively matched references across multiple scales. Experiments on the
MVTecLOCO dataset confirm the efficacy of ComGEN, achieving the best AUROC
score of 91.2%. Additional experiments on the real-world scenario of Diesel
Engine and widely-used MVTecAD dataset demonstrate significant performance
improvements when integrating simulated anomalies generated by ComGEN into
automated production workflows.
|
2502.11713
|
Nonlinearity Cancellation Based on Optimized First Order Perturbative
Kernels
|
cs.IT math.IT
|
The potential offered by interference cancellation based on optimized regular
perturbation kernels of the Manakov equation is studied. Theoretical gains of
up to 2.5 dB in effective SNR are demonstrated.
|
2502.11715
|
Proactive Depot Discovery: A Generative Framework for Flexible
Location-Routing
|
cs.LG cs.AI
|
The Location-Routing Problem (LRP), which combines the challenges of facility
(depot) locating and vehicle route planning, is critically constrained by the
reliance on predefined depot candidates, limiting the solution space and
potentially leading to suboptimal outcomes. Previous research on LRP without
predefined depots is scant and predominantly relies on heuristic algorithms
that iteratively attempt depot placements across a planar area. Such approaches
lack the ability to proactively generate depot locations that meet specific
geographic requirements, revealing a notable gap in the current research landscape.
To bridge this gap, we propose a data-driven generative DRL framework, designed
to proactively generate depots for LRP without predefined depot candidates,
solely based on customer requests data which include geographic and demand
information. It can operate in two distinct modes: direct generation of exact
depot locations, and the creation of a multivariate Gaussian distribution for
flexible depot sampling. By extracting depots' geographic patterns from
customer requests data, our approach can dynamically respond to logistical
needs, identifying high-quality depot locations that further reduce total
routing costs compared to traditional methods. Extensive experiments
demonstrate that, for the same group of customer requests, our framework
proactively generates depots that lead to superior solution routes with lower
routing costs than depots identified through random attempts.
The implications of our framework potentially extend into real-world
applications, particularly in emergency medical rescue and disaster relief
logistics, where rapid establishment and adjustment of depot locations are
paramount, showcasing its potential in addressing LRP for dynamic and
unpredictable environments.
|
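The framework's second mode samples depots from a multivariate Gaussian. As a toy illustration (in the paper the distribution is produced by a learned DRL policy, not a direct fit), one can fit a Gaussian to customer coordinates and draw candidate depots:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy customer request locations (x, y); in the paper these also carry
# demand information that conditions the generated distribution.
customers = rng.uniform(0.0, 100.0, size=(50, 2))

# Fit a multivariate Gaussian to the customer geography.
mean = customers.mean(axis=0)
cov = np.cov(customers, rowvar=False)

# Sample flexible depot candidates from the fitted distribution.
depots = rng.multivariate_normal(mean, cov, size=5)
print(depots.shape)  # (5, 2)
```

Each sampled row is a candidate depot location that a downstream routing solver could then evaluate.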
2502.11718
|
ChineseSimpleVQA -- "See the World, Discover Knowledge": A Chinese
Factuality Evaluation for Large Vision Language Models
|
cs.CL cs.CV
|
The evaluation of factual accuracy in large vision language models (LVLMs)
has lagged behind their rapid development, making it challenging to fully
reflect these models' knowledge capacity and reliability. In this paper, we
introduce the first factuality-based visual question-answering benchmark in
Chinese, named ChineseSimpleVQA, aimed at assessing the visual factuality of
LVLMs across 8 major topics and 56 subtopics. The key features of this
benchmark include a focus on the Chinese language, diverse knowledge types, a
multi-hop question construction, high-quality data, static consistency, and
easy evaluation through short answers. Moreover, we contribute a rigorous data
construction pipeline and decouple the visual factuality into two parts: seeing
the world (i.e., object recognition) and discovering knowledge. This decoupling
allows us to analyze the capability boundaries and execution mechanisms of
LVLMs. Subsequently, we evaluate 34 advanced open-source and closed-source
models, revealing critical performance gaps within this field.
|
2502.11720
|
Can you pass that tool?: Implications of Indirect Speech in Physical
Human-Robot Collaboration
|
cs.HC cs.RO
|
Indirect speech acts (ISAs) are a natural pragmatic feature of human
communication, allowing requests to be conveyed implicitly while maintaining
subtlety and flexibility. Although advancements in speech recognition have
enabled natural language interactions with robots through direct, explicit
commands -- providing clarity in communication -- the rise of large language
models presents the potential for robots to interpret ISAs. However, empirical
evidence on the effects of ISAs on human-robot collaboration (HRC) remains
limited. To address this, we conducted a Wizard-of-Oz study (N=36), engaging a
participant and a robot in collaborative physical tasks. Our findings indicate
that robots capable of understanding ISAs significantly improve perceived robot
anthropomorphism, team performance, and trust. However, the
effectiveness of ISAs is task- and context-dependent, thus requiring careful
use. These results highlight the importance of appropriately integrating direct
and indirect requests in HRC to enhance collaborative experiences and task
performance.
|
2502.11721
|
Enhancing Recommendation Explanations through User-Centric Refinement
|
cs.IR
|
Generating natural language explanations for recommendations has become
increasingly important in recommender systems. Traditional approaches typically
treat user reviews as ground truth for explanations and focus on improving
review prediction accuracy by designing various model architectures. However,
due to limitations in data scale and model capability, these explanations often
fail to meet key user-centric aspects such as factuality, personalization, and
sentiment coherence, significantly reducing their overall helpfulness to users.
In this paper, we propose a novel paradigm that refines initial explanations
generated by existing explainable recommender models during the inference stage
to enhance their quality in multiple aspects. Specifically, we introduce a
multi-agent collaborative refinement framework based on large language models.
To ensure alignment between the refinement process and user demands, we employ
a plan-then-refine pattern to perform targeted modifications. To enable
continuous improvements, we design a hierarchical reflection mechanism that
provides feedback on the refinement process from both strategic and content
perspectives. Extensive experiments on three datasets demonstrate the
effectiveness of our framework.
|
2502.11723
|
Energy-Conscious LLM Decoding: Impact of Text Generation Strategies on
GPU Energy Consumption
|
cs.AI
|
Decoding strategies significantly influence the quality and diversity of the
generated texts in large language models (LLMs), yet their impact on
computational resource consumption, particularly GPU energy usage, is
insufficiently studied. This paper investigates the relationship between text
generation decoding methods and energy efficiency, focusing on the trade-off
between generation quality and GPU energy consumption across diverse tasks and
decoding configurations. By benchmarking multiple strategies across different
text generation tasks, such as Translation, Code Summarization, and Math
Problem Solving, we reveal how selecting appropriate decoding techniques with
their tuned hyperparameters affects text quality and has measurable
implications for resource utilization, emphasizing the need for balanced
optimization. To the best of our knowledge, this study is among the first to
explore decoding strategies in LLMs through the lens of energy consumption,
offering actionable insights for designing resource-aware applications that
maintain high-quality text generation.
|
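For concreteness, the decoding strategies being benchmarked differ in per-step work and determinism. A minimal sketch of greedy versus temperature-scaled top-k sampling over a single logits vector (illustrative only, unrelated to the paper's measurement setup):

```python
import math
import random

def softmax(logits, temperature=1.0):
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # max-shift for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def greedy(logits):
    # Deterministic: always the highest-logit token; cheapest per step.
    return max(range(len(logits)), key=lambda i: logits[i])

def sample_top_k(logits, k=2, temperature=1.0, seed=0):
    # Keep only the k most likely tokens, renormalize, then sample.
    rng = random.Random(seed)
    top = sorted(range(len(logits)), key=lambda i: -logits[i])[:k]
    probs = softmax([logits[i] for i in top], temperature)
    return rng.choices(top, weights=probs, k=1)[0]

logits = [2.0, 0.5, 1.0, -1.0]
assert greedy(logits) == 0
assert sample_top_k(logits, k=2) in (0, 2)
```

The extra sorting, renormalization, and random draws of sampling-based strategies are one source of the per-token compute differences the paper measures.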
2502.11724
|
Incomplete Modality Disentangled Representation for Ophthalmic Disease
Grading and Diagnosis
|
cs.CV
|
Ophthalmologists typically require multimodal data sources to improve
diagnostic accuracy in clinical decisions. However, due to medical device
shortages, low-quality data and data privacy concerns, missing data modalities
are common in real-world scenarios. Existing deep learning methods tend to
address it by learning an implicit latent subspace representation for different
modality combinations. We identify two significant limitations of these
methods: (1) implicit representation constraints that hinder the model's
ability to capture modality-specific information and (2) modality
heterogeneity, causing distribution gaps and redundancy in feature
representations. To address these, we propose an Incomplete Modality
Disentangled Representation (IMDR) strategy, which disentangles features into
explicit, independent modal-common and modal-specific features under the
guidance of mutual information, distilling informative knowledge and enabling the model to
reconstruct valuable missing semantics and produce robust multimodal
representations. Furthermore, we introduce a joint proxy learning module that
assists IMDR in eliminating intra-modality redundancy by exploiting the
extracted proxies from each class. Experiments on four ophthalmology multimodal
datasets demonstrate that the proposed IMDR outperforms the state-of-the-art
methods significantly.
|
2502.11725
|
Adversarially Robust CLIP Models Can Induce Better (Robust) Perceptual
Metrics
|
cs.CV cs.LG
|
Measuring perceptual similarity is a key tool in computer vision. In recent
years, perceptual metrics based on features extracted from neural networks with
large and diverse training sets, e.g. CLIP, have become popular. At the same
time, the metrics extracted from features of neural networks are not
adversarially robust. In this paper we show that adversarially robust CLIP
models, called R-CLIP$_\textrm{F}$, obtained by unsupervised adversarial
fine-tuning induce a better and adversarially robust perceptual metric that
outperforms existing metrics in a zero-shot setting, and further matches the
performance of state-of-the-art metrics while being robust after fine-tuning.
Moreover, our perceptual metric achieves strong performance on related tasks
such as robust image-to-image retrieval, which becomes especially relevant when
applied to "Not Safe for Work" (NSFW) content detection and dataset filtering.
While standard perceptual metrics can be easily attacked by a small
perturbation completely degrading NSFW detection, our robust perceptual metric
maintains high accuracy under an attack while having similar performance for
unperturbed images. Finally, perceptual metrics induced by robust CLIP models
have higher interpretability: feature inversion can show which images are
considered similar, while text inversion can find which images are associated
with a given prompt. This also allows us to visualize the rich visual concepts
learned by a CLIP model, including memorized persons, paintings and complex
queries.
|
2502.11726
|
No-reference geometry quality assessment for colorless point clouds via
list-wise rank learning
|
cs.CV
|
Geometry quality assessment (GQA) of colorless point clouds is crucial for
evaluating the performance of emerging point cloud-based solutions (e.g.,
watermarking, compression, and 3-Dimensional (3D) reconstruction).
Unfortunately, existing objective GQA approaches are traditional full-reference
metrics, whereas state-of-the-art learning-based point cloud quality assessment
(PCQA) methods target both color and geometry distortions, neither of which are
qualified for the no-reference GQA task. In addition, the lack of large-scale
GQA datasets with subjective scores, which are often imprecise, biased, and
inconsistent, also hinders the development of learning-based GQA metrics.
Driven by these limitations, this paper proposes a no-reference geometry-only
quality assessment approach based on list-wise rank learning, termed LRL-GQA,
which comprises a geometry quality assessment network (GQANet) and a
list-wise rank learning network (LRLNet). The proposed LRL-GQA formulates the
no-reference GQA as a list-wise rank problem, with the objective of directly
optimizing the entire quality ordering. Specifically, a large dataset
containing a variety of geometry-only distortions is first constructed, named
the LRL dataset, in which each sample is label-free but coupled with quality
ranking information. Then, the GQANet is designed to capture intrinsic
multi-scale patch-wise geometric features in order to predict a quality index
for each point cloud. After that, the LRLNet leverages the LRL dataset and a
likelihood loss to train the GQANet and ranks the input list of degraded point
clouds according to their distortion levels. In addition, the pre-trained
GQANet can be fine-tuned further to obtain absolute quality scores.
Experimental results demonstrate the superior performance of the proposed
no-reference LRL-GQA method compared with existing full-reference GQA metrics.
|
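The list-wise rank learning with a likelihood loss mentioned above is commonly instantiated as the Plackett-Luce (ListMLE) negative log-likelihood; the abstract does not specify the exact form, so this is an assumed sketch:

```python
import math

def listmle_loss(scores_in_true_order):
    """Plackett-Luce negative log-likelihood of the ground-truth ranking.

    `scores_in_true_order` holds the model's predicted quality scores,
    listed from best to worst according to the ground-truth ranking.
    """
    loss = 0.0
    for i in range(len(scores_in_true_order)):
        rest = scores_in_true_order[i:]
        m = max(rest)  # max-shift for a stable log-sum-exp
        log_z = m + math.log(sum(math.exp(s - m) for s in rest))
        loss += log_z - scores_in_true_order[i]
    return loss

# Scores that agree with the ranking incur a lower loss than reversed ones.
assert listmle_loss([3.0, 2.0, 1.0]) < listmle_loss([1.0, 2.0, 3.0])
```

Because only the ordering of scores matters, such a loss can be trained from ranking information alone, matching the label-free LRL dataset design.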
2502.11728
|
Matrix Low-dimensional Qubit Casting Based Quantum Electromagnetic
Transient Network Simulation Program
|
quant-ph cs.SY eess.SY
|
In modern power systems, the integration of converter-interfaced generations
requires the development of electromagnetic transient network simulation
programs (EMTP) that can capture rapid fluctuations. However, as the power
system scales, the EMTP's computing complexity increases exponentially, leading
to a curse of dimensionality that hinders its practical application. Facing
this challenge, quantum computing offers a promising approach for achieving
exponential acceleration. To realize this in noisy intermediate-scale quantum
computers, the variational quantum linear solver (VQLS) was advocated because
of its robustness against depolarizing noise. However, it suffers from data
inflation issues in its preprocessing phase, and no prior research has applied
quantum computing to high-frequency switching EMT networks. To address these
issues, this paper first designs the matrix low-dimension qubit casting (MLQC)
method to address the data inflation problem in the preprocessing of the
admittance matrix for VQLS in EMT networks. Besides, we propose a real-only
quantum circuit reduction method tailored to the characteristics of the EMT
network admittance matrices. Finally, the proposed quantum EMTP algorithm
(QEMTP) has been successfully verified for EMT networks containing a large
number of high-frequency switching elements.
|
2502.11731
|
GraphMorph: Tubular Structure Extraction by Morphing Predicted Graphs
|
cs.CV
|
Accurately restoring topology is both challenging and crucial in tubular
structure extraction tasks, such as blood vessel segmentation and road network
extraction. Diverging from traditional approaches based on pixel-level
classification, our proposed method, named GraphMorph, focuses on branch-level
features of tubular structures to achieve more topologically accurate
predictions. GraphMorph comprises two main components: a Graph Decoder and a
Morph Module. Utilizing multi-scale features extracted from an image patch by
the segmentation network, the Graph Decoder facilitates the learning of
branch-level features and generates a graph that accurately represents the
tubular structure in this patch. The Morph Module processes two primary inputs:
the graph and the centerline probability map, provided by the Graph Decoder and
the segmentation network, respectively. Employing a novel SkeletonDijkstra
algorithm, the Morph Module produces a centerline mask that aligns with the
predicted graph. Furthermore, we observe that employing centerline masks
predicted by GraphMorph significantly reduces false positives in the
segmentation task, which is achieved by a simple yet effective post-processing
strategy. The efficacy of our method in the centerline extraction and
segmentation tasks has been substantiated through experimental evaluations
across various datasets. Source code will be released soon.
|
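The SkeletonDijkstra algorithm itself is not detailed in the abstract, but the underlying idea of tracing a centerline as a minimum-cost path over a probability map can be sketched with plain Dijkstra on a grid (costs are one minus the centerline probability; illustrative only):

```python
import heapq

def dijkstra_path(cost, start, goal):
    """Minimum-cost 4-connected path through a 2D cost grid."""
    h, w = len(cost), len(cost[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Reconstruct the path by walking predecessors back from the goal.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Centerline probabilities -> traversal costs (low cost on the centerline).
prob = [[0.9, 0.1, 0.1],
        [0.9, 0.9, 0.1],
        [0.1, 0.9, 0.9]]
cost = [[1.0 - p for p in row] for row in prob]
path = dijkstra_path(cost, (0, 0), (2, 2))
assert path[0] == (0, 0) and path[-1] == (2, 2)
```

The recovered path hugs the high-probability cells, which is the behavior a centerline mask needs before being aligned with the predicted graph.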
2502.11733
|
Plant in Cupboard, Orange on Table, Book on Shelf. Benchmarking
Practical Reasoning and Situation Modelling in a Text-Simulated Situated
Environment
|
cs.CL
|
Large language models (LLMs) have risen to prominence as 'chatbots' for users
to interact via natural language. However, their abilities to capture
common-sense knowledge make them seem promising as language-based planners of
situated or embodied action as well. We have implemented a simple text-based
environment -- similar to others previously used for reinforcement learning of
agents -- that simulates, very abstractly, a
household setting. We use this environment and the detailed error-tracking
capabilities we implemented for targeted benchmarking of LLMs on the problem of
practical reasoning: Going from goals and observations to actions. Our findings
show that environmental complexity and game restrictions hamper performance,
and that concise action planning remains demanding for current LLMs.
|
2502.11735
|
MT-RAIG: Novel Benchmark and Evaluation Framework for
Retrieval-Augmented Insight Generation over Multiple Tables
|
cs.CL
|
Recent advancements in table-based reasoning have expanded beyond
factoid-level QA to address insight-level tasks, where systems should
synthesize implicit knowledge in the table to provide explainable analyses.
Although effective, existing studies remain confined to scenarios where a
single gold table is given alongside the user query, failing to address cases
where users seek comprehensive insights from multiple unknown tables. To bridge
these gaps, we propose MT-RAIG Bench, designed to evaluate systems on
Retrieval-Augmented Insight Generation over Multiple Tables. Additionally, to
tackle the suboptimality of existing automatic evaluation methods in the table
domain, we further introduce a fine-grained evaluation framework MT-RAIG Eval,
which achieves better alignment with human quality judgments on the generated
insights. We conduct extensive experiments and reveal that even frontier LLMs
still struggle with complex multi-table reasoning, establishing our MT-RAIG
Bench as a challenging testbed for future research.
|
2502.11736
|
ReviewEval: An Evaluation Framework for AI-Generated Reviews
|
cs.CL cs.AI
|
The escalating volume of academic research, coupled with a shortage of
qualified reviewers, necessitates innovative approaches to peer review. While
large language models (LLMs) offer potential for automating this process, their
current limitations include superficial critiques, hallucinations, and a lack
of actionable insights. This research addresses these challenges by introducing
a comprehensive evaluation framework for AI-generated reviews that measures
alignment with human evaluations, verifies factual accuracy, assesses
analytical depth, and identifies actionable insights. We also propose a novel
alignment mechanism that tailors LLM-generated reviews to the unique evaluation
priorities of individual conferences and journals. To enhance the quality of
these reviews, we introduce a self-refinement loop that iteratively optimizes
the LLM's review prompts. Our framework establishes standardized metrics for
evaluating AI-based review systems, thereby bolstering the reliability of
AI-generated reviews in academic research.
|
2502.11740
|
Mitigating Visual Knowledge Forgetting in MLLM Instruction-tuning via
Modality-decoupled Gradient Descent
|
cs.LG cs.CV
|
Recent MLLMs have shown emerging visual understanding and reasoning abilities
after being pre-trained on large-scale multimodal datasets. Unlike
pre-training, where MLLMs receive rich visual-text alignment,
instruction-tuning is often text-driven with weaker visual supervision, leading
to the degradation of pre-trained visual understanding and causing visual
forgetting. Existing approaches, such as direct fine-tuning and continual
learning methods, fail to explicitly address this issue, often compressing
visual representations and prioritizing task alignment over visual retention,
which further worsens visual forgetting. To overcome this limitation, we
introduce a novel perspective leveraging effective rank to quantify the
degradation of visual representation richness, interpreting this degradation
through the information bottleneck principle as excessive compression that
leads to the degradation of crucial pre-trained visual knowledge. Building on
this view, we propose a modality-decoupled gradient descent (MDGD) method that
regulates gradient updates to maintain the effective rank of visual
representations while mitigating the over-compression effects described by the
information bottleneck. By explicitly disentangling the optimization of visual
understanding from task-specific alignment, MDGD preserves pre-trained visual
knowledge while enabling efficient task adaptation. To enable lightweight
instruction-tuning, we further develop a memory-efficient fine-tuning approach
using gradient masking, which selectively updates a subset of model parameters
to enable parameter-efficient fine-tuning (PEFT), reducing computational
overhead while preserving rich visual representations. Extensive experiments
across various downstream tasks and backbone MLLMs demonstrate that MDGD
effectively mitigates visual forgetting from pre-trained tasks while enabling
strong adaptation to new tasks.
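The effective-rank measure referenced in this abstract has a standard definition (the exponential of the Shannon entropy of the normalized singular-value spectrum). A minimal illustrative sketch, independent of the paper's MDGD method:

```python
import math

def effective_rank(singular_values):
    """Effective rank: exp of the Shannon entropy of the normalized
    singular-value distribution. A flat spectrum of k non-zero values
    gives k; a fast-decaying (compressed) spectrum gives much less."""
    total = sum(singular_values)
    p = [s / total for s in singular_values if s > 0]
    entropy = -sum(pi * math.log(pi) for pi in p)
    return math.exp(entropy)

# Nominal rank is 4 in both cases, but the compressed spectrum's
# effective rank drops, quantifying the loss of representation richness.
flat = effective_rank([1.0, 1.0, 1.0, 1.0])
compressed = effective_rank([1.0, 0.1, 0.01, 0.001])
print(flat, compressed)
```

Maintaining this quantity during fine-tuning is the intuition behind regulating gradient updates to avoid over-compression.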
|
2502.11741
|
SQL-o1: A Self-Reward Heuristic Dynamic Search Method for Text-to-SQL
|
cs.DB cs.AI
|
The Text-to-SQL (Text2SQL) task aims to convert natural language queries into
executable SQL queries. Thanks to the application of large language models
(LLMs), significant progress has been made in this field. However, challenges
such as model scalability, limited generation space, and coherence issues in
SQL generation still persist. To address these issues, we propose SQL-o1, a
Self-Reward-based heuristic search method designed to enhance the reasoning
ability of LLMs in SQL query generation. SQL-o1 combines Monte Carlo Tree
Search (MCTS) for heuristic process-level search and constructs a Schema-Aware
dataset to help the model better understand database schemas. Extensive
experiments on the Bird and Spider datasets demonstrate that SQL-o1 improves
execution accuracy by 10.8\% on the complex Bird dataset compared to the latest
baseline methods, even outperforming GPT-4-based approaches. Additionally,
SQL-o1 excels in few-shot learning scenarios and shows strong cross-model
transferability. Our code is publicly available
at: https://github.com/ShuaiLyu0110/SQL-o1.
|
2502.11742
|
Range and Bird's Eye View Fused Cross-Modal Visual Place Recognition
|
cs.CV
|
Image-to-point cloud cross-modal Visual Place Recognition (VPR) is a
challenging task where the query is an RGB image, and the database samples are
LiDAR point clouds. Compared to single-modal VPR, this approach benefits from
the widespread availability of RGB cameras and the robustness of point clouds
in providing accurate spatial geometry and distance information. However,
current methods rely on intermediate modalities that capture either the
vertical or horizontal field of view, limiting their ability to fully exploit
the complementary information from both sensors. In this work, we propose an
innovative initial retrieval + re-rank method that effectively combines
information from range (or RGB) images and Bird's Eye View (BEV) images. Our
approach relies solely on a computationally efficient global descriptor
similarity search process to achieve re-ranking. Additionally, we introduce a
novel similarity label supervision technique to maximize the utility of limited
training data. Specifically, we use the average distance between points to approximate
appearance similarity and incorporate an adaptive margin, based on similarity
differences, into the vanilla triplet loss. Experimental results on the KITTI
dataset demonstrate that our method significantly outperforms state-of-the-art
approaches.
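The adaptive-margin idea can be sketched as follows; `base_margin` and `alpha` are hypothetical hyperparameters for illustration, not values from the paper:

```python
def adaptive_triplet_loss(d_ap, d_an, sim_pos, sim_neg,
                          base_margin=0.3, alpha=0.5):
    """Vanilla triplet loss with a margin that grows with the gap in
    appearance similarity between the positive and negative samples
    (illustrative sketch; hyperparameters are assumptions)."""
    margin = base_margin + alpha * (sim_pos - sim_neg)
    return max(0.0, d_ap - d_an + margin)

# When the positive looks much more similar than the negative, the
# larger margin pushes the negative further away in descriptor space.
print(adaptive_triplet_loss(0.2, 0.6, sim_pos=0.9, sim_neg=0.3))
```

The similarity values here would come from the approximated appearance similarity (average point distance) described above.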
|
2502.11743
|
Robust Partial-Label Learning by Leveraging Class Activation Values
|
cs.LG stat.ML
|
Real-world training data is often noisy; for example, human annotators assign
conflicting class labels to the same instances. Partial-label learning (PLL) is
a weakly supervised learning paradigm that allows training classifiers in this
context without manual data cleaning. While state-of-the-art methods have good
predictive performance, their predictions are sensitive to high noise levels,
out-of-distribution data, and adversarial perturbations. We propose a novel PLL
method based on subjective logic, which explicitly represents uncertainty by
leveraging the magnitudes of the underlying neural network's class activation
values. Thereby, we effectively incorporate prior knowledge about the class
labels by using a novel label weight re-distribution strategy that we prove to
be optimal. We empirically show that our method yields more robust predictions
in terms of predictive performance under high PLL noise levels, handling
out-of-distribution examples, and handling adversarial perturbations on the
test instances.
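The standard subjective-logic (evidential) mapping from non-negative per-class evidence to beliefs plus an explicit uncertainty mass can be sketched as follows; this illustrates the general formalism, not the paper's specific PLL method or label weight re-distribution:

```python
def subjective_opinion(evidence):
    """Map non-negative per-class evidence (e.g. derived from class
    activation magnitudes) to subjective-logic beliefs and an explicit
    uncertainty mass. Beliefs and uncertainty sum to one."""
    k = len(evidence)
    strength = sum(evidence) + k          # Dirichlet strength S = sum(e) + K
    beliefs = [e / strength for e in evidence]
    uncertainty = k / strength            # vacuous (uncertain) mass
    return beliefs, uncertainty

# Weak activations yield high uncertainty; strong, peaked activations
# yield low uncertainty -- useful for flagging out-of-distribution inputs.
_, u_weak = subjective_opinion([0.1, 0.1, 0.1])
_, u_strong = subjective_opinion([20.0, 0.5, 0.5])
print(u_weak, u_strong)
```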
|
2502.11744
|
FUNCTO: Function-Centric One-Shot Imitation Learning for Tool
Manipulation
|
cs.RO cs.CV
|
Learning tool use from a single human demonstration video offers a highly
intuitive and efficient approach to robot teaching. While humans can
effortlessly generalize a demonstrated tool manipulation skill to diverse tools
that support the same function (e.g., pouring with a mug versus a teapot),
current one-shot imitation learning (OSIL) methods struggle to achieve this. A
key challenge lies in establishing functional correspondences between
demonstration and test tools, considering significant geometric variations
among tools with the same function (i.e., intra-function variations). To
address this challenge, we propose FUNCTO (Function-Centric OSIL for Tool
Manipulation), an OSIL method that establishes function-centric correspondences
with a 3D functional keypoint representation, enabling robots to generalize
tool manipulation skills from a single human demonstration video to novel tools
with the same function despite significant intra-function variations. With this
formulation, we factorize FUNCTO into three stages: (1) functional keypoint
extraction, (2) function-centric correspondence establishment, and (3)
functional keypoint-based action planning. We evaluate FUNCTO against existing
modular OSIL methods and end-to-end behavioral cloning methods through
real-robot experiments on diverse tool manipulation tasks. The results
demonstrate the superiority of FUNCTO when generalizing to novel tools with
intra-function geometric variations. More details are available at
https://sites.google.com/view/functo.
|
2502.11747
|
Open-Ended and Knowledge-Intensive Video Question Answering
|
cs.IR
|
Video question answering that requires external knowledge beyond the visual
content remains a significant challenge in AI systems. While models can
effectively answer questions based on direct visual observations, they often
falter when faced with questions requiring broader contextual knowledge. To
address this limitation, we investigate knowledge-intensive video question
answering (KI-VideoQA) through the lens of multi-modal retrieval-augmented
generation, with a particular focus on handling open-ended questions rather
than just multiple-choice formats. Our comprehensive analysis examines various
retrieval augmentation approaches using cutting-edge retrieval and vision
language models, testing both zero-shot and fine-tuned configurations. We
investigate several critical dimensions: the interplay between different
information sources and modalities, strategies for integrating diverse
multi-modal contexts, and the dynamics between query formulation and retrieval
result utilization. Our findings reveal that while retrieval augmentation shows
promise in improving model performance, its success is heavily dependent on the
chosen modality and retrieval methodology. The study also highlights the
critical role of query construction and retrieval depth optimization in
effective knowledge integration. Through our proposed approach, we achieve a
substantial 17.5% improvement in accuracy on multiple choice questions in the
KnowIT VQA dataset, establishing new state-of-the-art performance levels.
|
2502.11748
|
ILIAS: Instance-Level Image retrieval At Scale
|
cs.CV
|
This work introduces ILIAS, a new test dataset for Instance-Level Image
retrieval At Scale. It is designed to evaluate the ability of current and
future foundation models and retrieval techniques to recognize particular
objects. The key benefits over existing datasets include large scale, domain
diversity, accurate ground truth, and a performance that is far from saturated.
ILIAS includes query and positive images for 1,000 object instances, manually
collected to capture challenging conditions and diverse domains. Large-scale
retrieval is conducted against 100 million distractor images from YFCC100M. To
avoid false negatives without extra annotation effort, we include only query
objects confirmed to have emerged after 2014, i.e. the compilation date of
YFCC100M. An extensive benchmarking is performed with the following
observations: i) models fine-tuned on specific domains, such as landmarks or
products, excel in that domain but fail on ILIAS; ii) learning a linear
adaptation layer using multi-domain class supervision results in performance
improvements, especially for vision-language models; iii) local descriptors in
retrieval re-ranking are still a key ingredient, especially in the presence of
severe background clutter; iv) the text-to-image performance of the
vision-language foundation models is surprisingly close to the corresponding
image-to-image case. Website: https://vrg.fel.cvut.cz/ilias/
|
2502.11749
|
JotlasNet: Joint Tensor Low-Rank and Attention-based Sparse Unrolling
Network for Accelerating Dynamic MRI
|
cs.CV cs.AI
|
Joint low-rank and sparse unrolling networks have shown superior performance
in dynamic MRI reconstruction. However, existing works mainly utilized matrix
low-rank priors, neglecting the tensor characteristics of dynamic MRI images,
and only a global threshold is applied for the sparse constraint to the
multi-channel data, limiting the flexibility of the network. Additionally, most
of them have inherently complex network structures, with intricate interactions
among variables. In this paper, we propose a novel deep unrolling network,
JotlasNet, for dynamic MRI reconstruction by jointly utilizing tensor low-rank
and attention-based sparse priors. Specifically, we utilize tensor low-rank
prior to exploit the structural correlations in high-dimensional data.
Convolutional neural networks are used to adaptively learn the low-rank and
sparse transform domains. A novel attention-based soft thresholding operator is
proposed to assign a unique learnable threshold to each channel of the data in
the CNN-learned sparse domain. The network is unrolled from the elaborately
designed composite splitting algorithm and thus features a simple yet efficient
parallel structure. Extensive experiments on two datasets (OCMR, CMRxRecon)
demonstrate the superior performance of JotlasNet in dynamic MRI
reconstruction.
|
2502.11751
|
Language Models Can See Better: Visual Contrastive Decoding For LLM
Multimodal Reasoning
|
cs.CV cs.AI
|
Although Large Language Models (LLMs) excel in reasoning and generation for
language tasks, they are not specifically designed for multimodal challenges.
Training Multimodal Large Language Models (MLLMs), however, is
resource-intensive and constrained by various training limitations. In this
paper, we propose the Modular-based Visual Contrastive Decoding (MVCD)
framework to overcome this obstacle. Our framework leverages LLMs' In-Context
Learning (ICL) capability and the proposed visual contrastive-example decoding
(CED), specifically tailored for this framework, without requiring any
additional training. By converting visual signals into text and focusing on
contrastive output distributions during decoding, we can highlight the new
information introduced by contextual examples, explore their connections, and
avoid over-reliance on prior encoded knowledge. MVCD enhances LLMs' visual
perception to make it see and reason over the input visuals. To demonstrate
MVCD's effectiveness, we conduct experiments with four LLMs across five
question answering datasets. Our results not only show consistent improvements
in model accuracy but also explain the effective components of our decoding
strategy. Our code will be available at https://github.com/Pbhgit/MVCD.
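A generic contrastive-decoding step can be sketched as below: tokens whose probability rises when contextual examples are present get amplified. This is a common formulation of contrastive decoding, not necessarily the paper's exact CED; `alpha` is a hypothetical contrast strength:

```python
import math

def contrastive_distribution(logp_with_ctx, logp_without_ctx, alpha=1.0):
    """Contrast two next-token log-distributions (with vs. without
    contextual examples) and renormalize, highlighting the new
    information the context introduces."""
    scores = [(1 + alpha) * lc - alpha * lb
              for lc, lb in zip(logp_with_ctx, logp_without_ctx)]
    m = max(scores)                       # stable softmax
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# Token 1 gains probability only when the context is present, so the
# contrastive distribution boosts it further.
with_ctx = [math.log(0.3), math.log(0.6), math.log(0.1)]
without_ctx = [math.log(0.5), math.log(0.3), math.log(0.2)]
print(contrastive_distribution(with_ctx, without_ctx))
```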
|
2502.11752
|
Early Detection of Human Handover Intentions in Human-Robot
Collaboration: Comparing EEG, Gaze, and Hand Motion
|
cs.RO cs.HC
|
Human-robot collaboration (HRC) relies on accurate and timely recognition of
human intentions to ensure seamless interactions. Among common HRC tasks,
human-to-robot object handovers have been studied extensively for planning the
robot's actions during object reception, assuming the human intention for
object handover. However, distinguishing handover intentions from other actions
has received limited attention. Most research on handovers has focused on
visually detecting motion trajectories, which often results in delays or false
detections when trajectories overlap. This paper investigates whether human
intentions for object handovers are reflected in non-movement-based
physiological signals. We conduct a multimodal analysis comparing three data
modalities: electroencephalogram (EEG), gaze, and hand-motion signals. Our
study aims to distinguish between handover-intended human motions and
non-handover motions in an HRC setting, evaluating each modality's performance
in predicting and classifying these actions before and after human movement
initiation. We develop and evaluate human intention detectors based on these
modalities, comparing their accuracy and timing in identifying handover
intentions. To the best of our knowledge, this is the first study to
systematically develop and test intention detectors across multiple modalities
within the same experimental context of human-robot handovers. Our analysis
reveals that handover intention can be detected from all three modalities.
Nevertheless, gaze signals are the earliest and most accurate for classifying
a motion as handover-intended or not.
|
2502.11753
|
HintsOfTruth: A Multimodal Checkworthiness Detection Dataset with Real
and Synthetic Claims
|
cs.AI
|
Misinformation can be countered with fact-checking, but the process is costly
and slow. Identifying checkworthy claims is the first step, where automation
can help scale fact-checkers' efforts. However, detection methods struggle with
content that is 1) multimodal, 2) from diverse domains, and 3) synthetic. We
introduce HintsOfTruth, a public dataset for multimodal checkworthiness
detection with $27$K real-world and synthetic image/claim pairs. The mix of
real and synthetic data makes this dataset unique and ideal for benchmarking
detection methods. We compare fine-tuned and prompted Large Language Models
(LLMs). We find that well-configured lightweight text-based encoders perform
comparably to multimodal models, but the former focus only on identifying
non-claim-like content. Multimodal LLMs can be more accurate but come at a
significant computational cost, making them impractical for large-scale
applications. When faced with synthetic data, multimodal models perform more
robustly.
|
2502.11756
|
On the Computation of the Fisher Information in Continual Learning
|
cs.LG cs.AI cs.CV stat.ML
|
One of the most popular methods for continual learning with deep neural
networks is Elastic Weight Consolidation (EWC), which involves computing the
Fisher Information. The exact way in which the Fisher Information is computed
is however rarely described, and multiple different implementations for it can
be found online. This blog post discusses and empirically compares several
often-used implementations, which highlights that many currently reported
results for EWC could likely be improved by changing the way the Fisher
Information is computed.
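A minimal sketch of the two implementations most often found online, shown for a single-parameter logistic model: the "empirical" Fisher squares gradients at the observed labels, while the "expected" Fisher averages over the model's own predictive distribution. The two generally disagree, which is the blog post's point:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def empirical_fisher(w, data):
    """Mean squared gradient of the log-likelihood at the *observed*
    labels -- one common EWC implementation."""
    return sum(((y - sigmoid(w * x)) * x) ** 2 for x, y in data) / len(data)

def expected_fisher(w, data):
    """Expectation over the *model's own* label distribution -- the
    other common implementation."""
    total = 0.0
    for x, _ in data:
        p = sigmoid(w * x)
        # E_y[((y - p) x)^2] = p(1-p) x^2 for Bernoulli y ~ p
        total += p * (1 - p) * x ** 2
    return total / len(data)

data = [(1.0, 1), (2.0, 0), (0.5, 1)]
print(empirical_fisher(2.0, data), expected_fisher(2.0, data))
```

For deep networks the same distinction appears as squaring gradients of observed targets versus sampling (or summing over) labels from the model's output distribution.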
|
2502.11763
|
Lightweight Deepfake Detection Based on Multi-Feature Fusion
|
cs.CV cs.AI
|
Deepfake technology utilizes deep learning-based face manipulation techniques
to seamlessly replace faces in videos, creating highly realistic but
artificially generated content. Although this technology has beneficial
applications in media and entertainment, misuse of its capabilities may lead to
serious risks, including identity theft, cyberbullying, and false information.
The integration of DL with visual cognition has resulted in important
technological improvements, particularly in addressing privacy risks caused by
artificially generated deepfake images on digital media platforms. In this
study, we propose an efficient and lightweight method for detecting deepfake
images and videos, making it suitable for devices with limited computational
resources. To reduce the computational burden usually associated with DL
models, our method combines machine learning classifiers with keyframing
approaches and texture analysis. Moreover, features extracted with a histogram
of oriented gradients (HOG), local binary patterns (LBP), and KAZE bands were
fused and evaluated using random forest, extreme gradient boosting, extra
trees, and support vector classifier algorithms. Our findings show that a
feature-level fusion of HOG, LBP, and KAZE features improves accuracy to 92%
and 96% on FaceForensics++ and Celeb-DFv2, respectively.
|
2502.11766
|
Warmup-Distill: Bridge the Distribution Mismatch between Teacher and
Student before Knowledge Distillation
|
cs.CL
|
The widespread deployment of Large Language Models (LLMs) is hindered by their
high computational demands, making knowledge distillation (KD) crucial for
developing compact smaller models. However, conventional KD methods suffer from
a distribution mismatch between the teacher and student models, leading to poor
distillation performance. For instance, the widely used KL-based methods
exhibit mode-averaging and mode-collapsing problems due to the mismatched
probability distributions of the two models. Previous studies mainly address
this issue via different distance measures between the distributions of the two
models. Unfortunately, the distribution mismatch still exists in the early
stage of distillation. Hence, to reduce the impact of distribution mismatch, we
propose a simple yet efficient method, named Warmup-Distill, which aligns the
distribution of the student with that of the teacher before distillation
begins. Specifically, we first probe the distribution of the student model in
practical scenarios using its internal knowledge, and then revise the
low-probability knowledge with the teacher acting as a checker. Consequently,
Warmup-Distill aligns the student's internal knowledge with that of the
teacher, expanding the student's distribution toward the teacher's and helping
the student model learn better in the subsequent distillation. Experiments on
seven benchmarks demonstrate that Warmup-Distill provides a warmed-up student
more suitable for distillation, outperforming the vanilla student by at least
+0.4 averaged score across all benchmarks. Notably, with the assistance of
Warmup-Distill, distillation on the math task yields a further improvement of
up to +1.9% accuracy.
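The mode-averaging versus mode-collapsing behaviour of KL-based objectives can be illustrated on a toy discrete example (the distributions below are invented for illustration):

```python
import math

def kl(p, q):
    """KL(p || q) for discrete distributions over the same support."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher   = [0.49, 0.02, 0.49]   # bimodal teacher distribution
averaged  = [0.34, 0.32, 0.34]   # student that "averages" both modes
collapsed = [0.90, 0.05, 0.05]   # student collapsed onto one mode

# Forward KL(teacher || student) favors the mode-averaging student ...
print(kl(teacher, averaged) < kl(teacher, collapsed))
# ... while reverse KL(student || teacher) favors the collapsed one.
print(kl(collapsed, teacher) < kl(averaged, teacher))
```

When teacher and student distributions are badly mismatched at the start of distillation, both failure modes become pronounced, which motivates aligning the student before distillation.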
|
2502.11767
|
From Selection to Generation: A Survey of LLM-based Active Learning
|
cs.LG cs.CL
|
Active Learning (AL) has been a powerful paradigm for improving model
efficiency and performance by selecting the most informative data points for
labeling and training. In recent active learning frameworks, Large Language
Models (LLMs) have been employed not only for selection but also for generating
entirely new data instances and providing more cost-effective annotations.
Motivated by the increasing importance of high-quality data and efficient model
training in the era of LLMs, we present a comprehensive survey on LLM-based
Active Learning. We introduce an intuitive taxonomy that categorizes these
techniques and discuss the transformative roles LLMs can play in the active
learning loop. We further examine the impact of AL on LLM learning paradigms
and its applications across various domains. Finally, we identify open
challenges and propose future research directions. This survey aims to serve as
an up-to-date resource for researchers and practitioners seeking to gain an
intuitive understanding of LLM-based AL techniques and deploy them to new
applications.
|
2502.11770
|
Cognitive-Aligned Document Selection for Retrieval-augmented Generation
|
cs.AI
|
Large language models (LLMs) inherently display hallucinations since the
precision of generated texts cannot be guaranteed purely by the parametric
knowledge they include. Although retrieval-augmented generation (RAG) systems
enhance the accuracy and reliability of generative models by incorporating
external documents, these retrieved documents often fail to adequately support
the model's responses in practical applications. To address this issue, we
propose GGatrieval (Fine-\textbf{G}rained \textbf{G}rounded \textbf{A}lignment
Re\textbf{trieval} for verifiable generation), which leverages an LLM to
dynamically update queries and filter high-quality, reliable retrieval
documents. Specifically, we parse the user query into its syntactic components
and perform fine-grained grounded alignment with the retrieved documents. For
query components that cannot be individually aligned, we propose a dynamic
semantic compensation mechanism that iteratively refines and rewrites the query
while continuously updating the retrieval results. This iterative process
continues until the retrieved documents sufficiently support the query's
response. Our approach introduces a novel criterion for filtering retrieved
documents, closely emulating human strategies for acquiring targeted
information. This ensures that the retrieved content effectively supports and
verifies the generated outputs. On the ALCE benchmark, our method significantly
surpasses a wide range of baselines, achieving state-of-the-art performance.
|
2502.11771
|
The Validation Gap: A Mechanistic Analysis of How Language Models
Compute Arithmetic but Fail to Validate It
|
cs.CL cs.AI
|
The ability of large language models (LLMs) to validate their output and
identify potential errors is crucial for ensuring robustness and reliability.
However, current research indicates that LLMs struggle with self-correction,
encountering significant challenges in detecting errors. While studies have
explored methods to enhance self-correction in LLMs, relatively little
attention has been given to understanding the models' internal mechanisms
underlying error detection. In this paper, we present a mechanistic analysis of
error detection in LLMs, focusing on simple arithmetic problems. Through
circuit analysis, we identify the computational subgraphs responsible for
detecting arithmetic errors across four smaller-sized LLMs. Our findings reveal
that all models heavily rely on $\textit{consistency heads}$--attention heads
that assess surface-level alignment of numerical values in arithmetic
solutions. Moreover, we observe that the models' internal arithmetic
computation primarily occurs in higher layers, whereas validation takes place
in middle layers, before the final arithmetic results are fully encoded. This
structural dissociation between arithmetic computation and validation seems to
explain why current LLMs struggle to detect even simple arithmetic errors.
|
2502.11774
|
Interpretable Machine Learning for Kronecker Coefficients
|
cs.LG math.CO math.RT stat.ML
|
We analyze the saliency of neural networks and employ interpretable machine
learning models to predict whether the Kronecker coefficients of the symmetric
group are zero or not. Our models use triples of partitions as input features,
as well as b-loadings derived from the principal component of an embedding that
captures the differences between partitions. Across all approaches, we achieve
an accuracy of approximately 83% and derive explicit formulas for a decision
function in terms of b-loadings. Additionally, we develop transformer-based
models for prediction, achieving the highest reported accuracy of over 99%.
|
2502.11775
|
video-SALMONN-o1: Reasoning-enhanced Audio-visual Large Language Model
|
cs.CV
|
While recent advancements in reasoning optimization have significantly
enhanced the capabilities of large language models (LLMs), existing efforts to
improve reasoning have been limited to solving mathematical problems and
focusing on visual graphical inputs, neglecting broader applications in general
video understanding. This paper proposes video-SALMONN-o1, the first open-source
reasoning-enhanced audio-visual LLM designed for general video understanding
tasks. To enhance its reasoning abilities, we develop a reasoning-intensive
dataset featuring challenging audio-visual questions with step-by-step
solutions. We also propose process direct preference optimization (pDPO), which
leverages contrastive step selection to achieve efficient step-level reward
modelling tailored for multimodal inputs. Additionally, we introduce RivaBench,
the first reasoning-intensive video understanding benchmark, featuring over
4,000 high-quality, expert-curated question-answer pairs across scenarios such
as standup comedy, academic presentations, and synthetic video detection.
video-SALMONN-o1 achieves 3-8% accuracy improvements over the LLaVA-OneVision
baseline across different video reasoning benchmarks. Besides, pDPO achieves
6-8% improvements compared to the supervised fine-tuning model on RivaBench.
The enhanced reasoning ability also equips video-SALMONN-o1 with zero-shot
synthetic video detection capabilities.
|
2502.11777
|
Deep Neural Networks for Accurate Depth Estimation with Latent Space
Features
|
cs.CV cs.AI
|
Depth estimation plays a pivotal role in advancing human-robot interactions,
especially in indoor environments where accurate 3D scene reconstruction is
essential for tasks like navigation and object handling. Monocular depth
estimation, which relies on a single RGB camera, offers a more affordable
solution compared to traditional methods that use stereo cameras or LiDAR.
However, despite recent progress, many monocular approaches struggle with
accurately defining depth boundaries, leading to less precise reconstructions.
In response to these challenges, this study introduces a novel depth estimation
framework that leverages latent space features within a deep convolutional
neural network to enhance the precision of monocular depth maps. The proposed
model features a dual encoder-decoder architecture, enabling both color-to-depth
and depth-to-depth transformations. This structure allows for refined depth
estimation through latent space encoding. To further improve the accuracy of
depth boundaries and local features, a new loss function is introduced. This
function combines latent loss with gradient loss, helping the model maintain
the integrity of depth boundaries. The framework is thoroughly tested using the
NYU Depth V2 dataset, where it sets a new benchmark, particularly excelling in
complex indoor scenarios. The results clearly show that this approach
effectively reduces depth ambiguities and blurring, making it a promising
solution for applications in human-robot interaction and 3D scene
reconstruction.
|
2502.11778
|
Private Synthetic Graph Generation and Fused Gromov-Wasserstein Distance
|
stat.ML cs.DS cs.LG math.PR
|
Networks are popular for representing complex data. In particular,
differentially private synthetic networks are much in demand for method and
algorithm development. The network generator should be easy to implement and
should come with theoretical guarantees. Here we start with complex data as
input and jointly provide a network representation as well as a synthetic
network generator. Using a random connection model, we devise an effective
algorithmic approach for generating attributed synthetic graphs which is
$\epsilon$-differentially private at the vertex level, while preserving utility
under an appropriate notion of distance which we develop. We provide
theoretical guarantees for the accuracy of the private synthetic graphs using
the fused Gromov-Wasserstein distance, which extends the Wasserstein metric to
structured data. Our method draws inspiration from the PSMM method of
\citet{he2023}.
|
2502.11779
|
Efficient Response Generation Method Selection for Fine-Tuning Large
Language Models
|
cs.CL
|
The training data for fine-tuning large language models (LLMs) is typically
structured as input-output pairs. However, for many tasks, there can be
multiple equally valid output variations for the same input. Recent studies
have observed that the choice of output variation used in training can affect
the model's performance. This raises an important question: how can we generate
the most effective output from among the many possible response-generation
strategies? Rather than relying on the traditional but resource-intensive
train-and-evaluate approach, this paper proposes a scalable, approximate method
for estimating the quality of a small subset of generated training data derived
from the same input. We then evaluate how well this small subset of generated
output fits the target model we are trying to train. We present a large-scale
benchmark covering diverse reasoning-based datasets to support our study.
The central idea is that a good output should closely resemble the output
generated by the target LLM. We formalize this 'closeness' as the expected
alignment score between a candidate output and the output sampled from the
target LLM. We connect this measurement to the perplexity metric used in
previous literature and demonstrate that leveraging an alignment-based metric
can provide better predictions of model performance. Using this strategy, we
can evaluate a small subset of the generated output from each response
generation strategy option, then select the most effective strategy. We show
that an LLM trained on data generated by the selected strategy could lead to a
significant performance gain in many cases.
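The perplexity proxy mentioned above can be sketched as follows, given per-token log-probabilities of a candidate output under the target model; the paper's alignment-based metric may differ, but the intuition is the same, namely that a good candidate is one the target model itself finds likely:

```python
import math

def perplexity(token_logprobs):
    """Perplexity of a candidate output under the target model,
    computed from per-token log-probabilities: exp of the mean
    negative log-likelihood."""
    avg_nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_nll)

# A candidate the target model finds likely scores lower perplexity,
# so the response-generation strategy that produced it is preferred.
likely   = [math.log(0.5), math.log(0.4), math.log(0.6)]
unlikely = [math.log(0.05), math.log(0.1), math.log(0.02)]
print(perplexity(likely) < perplexity(unlikely))
```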
|
2502.11785
|
Changing the Rules of the Game: Reasoning about Dynamic Phenomena in
Multi-Agent Systems
|
cs.LO cs.MA
|
The design and application of multi-agent systems (MAS) require reasoning
about the effects of modifications on their underlying structure. In
particular, such changes may impact the satisfaction of system specifications
and the strategic abilities of their autonomous components. In this paper, we
are concerned with the problem of verifying and synthesising modifications (or
\textit{updates}) of MAS. We propose an extension of the Alternating-Time
Temporal Logic ($\mathsf{ATL}$) that enables reasoning about the dynamics of
model change, called the \textit{Logic for $\mathsf{ATL}$ Model Building}
($\mathsf{LAMB}$). We show how $\mathsf{LAMB}$ can express various intuitions
and ideas about the dynamics of MAS, from normative updates to mechanism
design. As the main technical result, we prove that, while being strictly more
expressive than $\mathsf{ATL}$, $\mathsf{LAMB}$ enjoys a P-complete
model-checking procedure.
|
2502.11789
|
Personality Editing for Language Models through Relevant Knowledge
Editing
|
cs.CL
|
Large Language Models (LLMs) play a vital role in applications like
conversational agents and content creation, where controlling a model's
personality is crucial for maintaining tone, consistency, and engagement.
However, traditional prompt-based techniques for controlling personality often
fall short, as they do not effectively mitigate the model's inherent biases. In
this paper, we introduce a novel method PALETTE that enhances personality
control through knowledge editing. By generating adjustment queries inspired by
psychological assessments, our approach systematically adjusts responses to
personality-related queries similar to modifying factual knowledge, thereby
achieving controlled shifts in personality traits. Experimental results from
both automatic and human evaluations demonstrate that our method enables more
stable and well-balanced personality control in LLMs.
|
2502.11799
|
Table-Critic: A Multi-Agent Framework for Collaborative Criticism and
Refinement in Table Reasoning
|
cs.AI cs.CL
|
Despite the remarkable capabilities of large language models (LLMs) in
various reasoning tasks, they still struggle with table reasoning tasks,
particularly in maintaining consistency throughout multi-step reasoning
processes. While existing approaches have explored various decomposition
strategies, they often lack effective mechanisms to identify and correct errors
in intermediate reasoning steps, leading to cascading error propagation. To
address these issues, we propose Table-Critic, a novel multi-agent framework
that facilitates collaborative criticism and iterative refinement of the
reasoning process until convergence to correct solutions. Our framework
consists of four specialized agents: a Judge for error identification, a Critic
for comprehensive critiques, a Refiner for process improvement, and a Curator
for pattern distillation. To effectively deal with diverse and unpredictable
error types, we introduce a self-evolving template tree that systematically
accumulates critique knowledge through experience-driven learning and guides
future reflections. Extensive experiments have demonstrated that Table-Critic
achieves substantial improvements over existing methods, with superior
accuracy and error correction rates while maintaining computational efficiency
and a lower solution degradation rate.
|
2502.11800
|
Residual Learning towards High-fidelity Vehicle Dynamics Modeling with
Transformer
|
cs.RO
|
The vehicle dynamics model serves as a vital component of autonomous driving
systems, as it describes the temporal changes in vehicle state. Researchers
have long endeavored to model vehicle dynamics accurately.
Traditional physics-based methods employ mathematical formulae to
model vehicle dynamics, but they are unable to adequately describe complex
vehicle systems due to the simplifications they entail. Recent advancements in
deep learning-based methods have addressed this limitation by directly
regressing vehicle dynamics. However, the performance and generalization
capabilities still require further enhancement. In this letter, we address
these problems by proposing a vehicle dynamics correction system that leverages
deep neural networks to correct the state residuals of a physical model instead
of directly estimating the states. This system greatly reduces the difficulty
of network learning and thus improves the estimation accuracy of vehicle
dynamics. Furthermore, we have developed a novel Transformer-based dynamics
residual correction network, DyTR. This network implicitly represents state
residuals as high-dimensional queries, and iteratively updates the estimated
residuals by interacting with dynamics state features. Simulation experiments
demonstrate that the proposed system works much better than the physics model
alone, and that our DyTR model achieves the best performance on the dynamics
state residual correction task, reducing the state prediction errors of a
simple 3-DoF vehicle model by an average of 92.3% and 59.9% on two datasets,
respectively.
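The residual-learning idea described above can be sketched in a few lines. This is an illustrative toy: the kinematic update and the constant "residual network" are stand-ins, not the paper's 3-DoF model or the DyTR network.

```python
import numpy as np

def physics_step(state, control, dt=0.01):
    # Toy kinematic update standing in for a simple vehicle dynamics model
    # (hypothetical; the paper's actual physical model is not reproduced here).
    x, y, yaw = state
    v, yaw_rate = control
    return np.array([x + v * np.cos(yaw) * dt,
                     y + v * np.sin(yaw) * dt,
                     yaw + yaw_rate * dt])

def corrected_step(state, control, residual_net):
    # Residual learning: the network regresses only the physics model's error,
    # an easier target than the full next state.
    return physics_step(state, control) + residual_net(state, control)

# Stand-in for a trained residual network (a constant map, for illustration).
residual_net = lambda s, c: np.array([0.001, 0.001, 0.001])
next_state = corrected_step(np.zeros(3), np.array([1.0, 0.1]), residual_net)
```

Because the network only has to capture the model mismatch, its target typically has a much smaller magnitude and simpler structure than the raw state trajectory.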
|
2502.11801
|
3D Gaussian Inpainting with Depth-Guided Cross-View Consistency
|
cs.CV cs.LG
|
When performing 3D inpainting using novel-view rendering methods like Neural
Radiance Field (NeRF) or 3D Gaussian Splatting (3DGS), how to achieve texture
and geometry consistency across camera views has been a challenge. In this
paper, we propose a framework of 3D Gaussian Inpainting with Depth-Guided
Cross-View Consistency (3DGIC) for cross-view consistent 3D inpainting. Guided
by the rendered depth information from each training view, our 3DGIC exploits
background pixels visible across different views for updating the inpainting
mask, allowing us to refine the 3DGS for inpainting purposes. Through extensive
experiments on benchmark datasets, we confirm that our 3DGIC outperforms
current state-of-the-art 3D inpainting methods quantitatively and
qualitatively.
|
2502.11806
|
Exploring Translation Mechanism of Large Language Models
|
cs.CL
|
Large language models (LLMs) have succeeded remarkably in multilingual
translation tasks. However, the inherent translation mechanisms of LLMs remain
poorly understood, largely due to sophisticated architectures and vast
parameter scales. In response to this issue, this study explores the
translation mechanism of LLMs from the perspective of computational components
(e.g., attention heads and MLPs). Path patching is utilized to explore causal
relationships between components, detecting those crucial for translation tasks
and subsequently analyzing their behavioral patterns in human-interpretable
terms. Comprehensive analysis reveals that translation is predominantly
facilitated by a sparse subset of specialized attention heads (less than 5\%),
which extract source language, indicator, and positional features. MLPs
subsequently integrate and process these features by transitioning towards
English-centric latent representations. Notably, building on the above
findings, targeted fine-tuning of only 64 heads achieves translation
improvement comparable to full-parameter tuning while preserving general
capabilities.
|
2502.11809
|
Revealing Bias Formation in Deep Neural Networks Through the Geometric
Mechanisms of Human Visual Decoupling
|
cs.CV cs.AI
|
Deep neural networks (DNNs) often exhibit biases toward certain categories
during object recognition, even under balanced training data conditions. The
intrinsic mechanisms underlying these biases remain unclear. Inspired by the
human visual system, which decouples object manifolds through hierarchical
processing to achieve object recognition, we propose a geometric analysis
framework linking the geometric complexity of class-specific perceptual
manifolds in DNNs to model bias. Our findings reveal that differences in
geometric complexity can lead to varying recognition capabilities across
categories, introducing biases. To support this analysis, we present the
Perceptual-Manifold-Geometry library, designed for calculating the geometric
properties of perceptual manifolds.
|
2502.11811
|
FineFilter: A Fine-grained Noise Filtering Mechanism for
Retrieval-Augmented Large Language Models
|
cs.CL
|
Retrieved documents containing noise will hinder Retrieval-Augmented
Generation (RAG) from detecting answer clues, necessitating noise filtering
mechanisms to enhance accuracy. Existing methods use re-ranking or
summarization to identify the most relevant sentences, but directly and
accurately locating answer clues from these large-scale and complex documents
remains challenging. Unlike these document-level operations, we treat noise
filtering as a sentence-level MinMax optimization problem: first identifying
the potential clues from multiple documents using contextual information, then
ranking them by relevance, and finally retaining the fewest necessary clues through
truncation. In this paper, we propose FineFilter, a novel fine-grained noise
filtering mechanism for RAG consisting of a clue extractor, a re-ranker, and a
truncator. We optimize each module to tackle complex reasoning challenges: (1)
The clue extractor first uses sentences containing the answer, together with
similar ones, as fine-tuning targets, aiming to extract sufficient potential clues; (2)
The re-ranker is trained to prioritize effective clues based on real feedback
from the generation module, with clues capable of generating the correct answer
as positive samples and others as negative; (3) The truncator takes the minimum
number of clues needed to answer the question (the truncation point) as its
fine-tuning target, and performs truncation on the re-ranked clues to achieve
fine-grained noise
filtering. Experiments on three QA datasets demonstrate that FineFilter
significantly outperforms baselines in terms of performance and inference cost.
Further analysis on each module shows the effectiveness of our optimizations
for complex reasoning.
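The three-stage extract/rerank/truncate pipeline can be sketched as a function skeleton. The three modules are learned models in the paper; the keyword heuristics below are hypothetical stand-ins for illustration only.

```python
def fine_filter(docs, question, extract, rerank_score, truncation_point):
    # Stage 1 (clue extractor): pull candidate clue sentences from each document.
    clues = [s for d in docs for s in extract(d, question)]
    # Stage 2 (re-ranker): order clues by a learned relevance score.
    ranked = sorted(clues, key=lambda s: rerank_score(s, question), reverse=True)
    # Stage 3 (truncator): keep only the minimal number of clues predicted to suffice.
    return ranked[:truncation_point(ranked, question)]

# Toy stand-ins for the three learned modules (keyword heuristics, not models).
docs = ["Paris is the capital of France. It rains often.",
        "France borders Spain. The capital city is Paris."]
question = "What is the capital of France?"
extract = lambda d, q: [s.strip() for s in d.split(".") if "capital" in s]
rerank_score = lambda s, q: sum(w in s.lower() for w in q.lower().split())
truncation_point = lambda ranked, q: 1
top_clues = fine_filter(docs, question, extract, rerank_score, truncation_point)
```

The MinMax framing maps directly onto this structure: maximize clue coverage in stage 1, then minimize the retained set in stage 3.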
|
2502.11812
|
Towards Understanding Fine-Tuning Mechanisms of LLMs via Circuit
Analysis
|
cs.CL cs.AI cs.LG
|
Fine-tuning significantly improves the performance of Large Language Models
(LLMs), yet its underlying mechanisms remain poorly understood. This paper aims
to provide an in-depth interpretation of the fine-tuning process through
circuit analysis, a popular tool in Mechanistic Interpretability (MI). Unlike
previous studies
\cite{prakash2024finetuningenhancesexistingmechanisms,chhabra2024neuroplasticity}
that focus on tasks where pre-trained models already perform well, we develop a
set of mathematical tasks where fine-tuning yields substantial performance
gains, which are closer to the practical setting. In our experiments, we
identify circuits at various checkpoints during fine-tuning and examine the
interplay between circuit analysis, fine-tuning methods, and task complexities.
First, we find that while circuits maintain high node similarity before and
after fine-tuning, their edges undergo significant changes, which is in
contrast to the previous work
\cite{prakash2024finetuningenhancesexistingmechanisms,chhabra2024neuroplasticity}
that show circuits only add some additional components after fine-tuning. Based
on these observations, we develop a circuit-aware Low-Rank Adaptation (LoRA)
method, which assigns ranks to layers based on edge changes in the circuits.
Experimental results demonstrate that our circuit-based LoRA algorithm achieves
an average performance improvement of 2.46\% over standard LoRA with similar
parameter sizes. Furthermore, we explore how combining circuits from subtasks
can enhance fine-tuning in compositional tasks, providing new insights into the
design of such tasks and deepening the understanding of circuit dynamics and
fine-tuning mechanisms.
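The circuit-aware rank assignment can be illustrated with a simple proportional rule. This is a sketch of the idea only: the proportional-to-edge-change allocation and the layer names are assumptions, not necessarily the paper's exact scheme.

```python
def allocate_lora_ranks(edge_changes, rank_budget, min_rank=1):
    # Circuit-aware LoRA: layers whose circuit edges changed most during
    # fine-tuning receive proportionally larger ranks out of a fixed budget.
    total = sum(edge_changes.values())
    return {layer: max(min_rank, round(rank_budget * change / total))
            for layer, change in edge_changes.items()}

# Hypothetical per-layer edge-change counts measured from circuit analysis.
ranks = allocate_lora_ranks({"layer.0": 10, "layer.1": 30, "layer.2": 60},
                            rank_budget=32)
```

Under this rule, the total adapter parameter count stays comparable to uniform-rank LoRA while capacity is concentrated where the circuits changed.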
|
2502.11816
|
IMTS-Mixer: Mixer-Networks for Irregular Multivariate Time Series
Forecasting
|
cs.LG
|
Forecasting Irregular Multivariate Time Series (IMTS) has recently emerged as
a distinct research field, necessitating specialized models to address its
unique challenges. While most forecasting literature assumes regularly spaced
observations without missing values, many real-world datasets - particularly in
healthcare, climate research, and biomechanics - violate these assumptions.
Time Series (TS)-mixer models have achieved remarkable success in regular
multivariate time series forecasting. However, they remain unexplored for IMTS
due to their requirement for complete and evenly spaced observations. To bridge
this gap, we introduce IMTS-Mixer, a novel forecasting architecture designed
specifically for IMTS. Our approach retains the core principles of TS mixer
models while introducing innovative methods to transform IMTS into fixed-size
matrix representations, enabling their seamless integration with mixer modules.
We evaluate IMTS-Mixer on a benchmark of four real-world datasets from various
domains. Our results demonstrate that IMTS-Mixer establishes a new
state-of-the-art in forecasting accuracy while also improving computational
efficiency.
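One simple way to turn irregular observations into the fixed-size matrix a mixer module expects is bin-averaging with a mask. This is an illustrative assumption, not necessarily IMTS-Mixer's exact transform.

```python
import numpy as np

def imts_to_matrix(times, values, channels, n_channels, n_bins, t_max):
    # Bin irregular (time, value, channel) observations onto a fixed
    # (channels x bins) grid by averaging within each bin; a boolean mask
    # records which cells actually received observations.
    grid = np.zeros((n_channels, n_bins))
    count = np.zeros((n_channels, n_bins))
    for t, v, c in zip(times, values, channels):
        b = min(int(t / t_max * n_bins), n_bins - 1)
        grid[c, b] += v
        count[c, b] += 1
    mask = count > 0
    grid[mask] /= count[mask]
    return grid, mask

grid, mask = imts_to_matrix([0.1, 0.2, 0.9], [1.0, 3.0, 5.0], [0, 0, 1],
                            n_channels=2, n_bins=2, t_max=1.0)
```

The mask lets downstream modules distinguish true zeros from missing observations, which is the core difficulty that regular TS-mixers do not handle.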
|
2502.11817
|
AAKT: Enhancing Knowledge Tracing with Alternate Autoregressive Modeling
|
cs.AI cs.CY cs.LG
|
Knowledge Tracing (KT) aims to predict students' future performances based on
their former exercises and additional information in educational settings. KT
has received significant attention since it facilitates personalized
experiences in educational situations. Simultaneously, the autoregressive
modeling on the sequence of former exercises has been proven effective for this
task. One of the primary challenges in autoregressive modeling for Knowledge
Tracing is effectively representing the anterior (pre-response) and posterior
(post-response) states of learners across exercises. Existing methods often
employ complex model architectures to update learner states using question and
response records. In this study, we propose a novel perspective on knowledge
tracing task by treating it as a generative process, consistent with the
principles of autoregressive models. We demonstrate that knowledge states can
be directly represented through autoregressive encodings on a question-response
alternate sequence, where the model generates the most probable representation in
hidden-state space by analyzing historical interactions. This approach underpins
our framework, termed Alternate Autoregressive Knowledge Tracing (AAKT).
Additionally, we incorporate supplementary educational information, such as
question-related skills, into our framework through an auxiliary task, and
include extra exercise details, like response time, as additional inputs. Our
proposed framework is implemented using advanced autoregressive technologies
from Natural Language Generation (NLG) for both training and prediction.
Empirical evaluations on four real-world KT datasets indicate that AAKT
consistently outperforms all baseline models in terms of AUC, ACC, and RMSE.
Furthermore, extensive ablation studies and visualized analysis validate the
effectiveness of key components in AAKT.
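The question-response alternate sequence at the core of AAKT can be sketched minimally. The tagging scheme below is illustrative; the paper's actual tokenization is not specified here.

```python
def alternate_sequence(questions, responses):
    # Interleave questions and responses into one autoregressive stream:
    # q1, r1, q2, r2, ...  The hidden state after each q_t corresponds to the
    # learner's anterior (pre-response) state, and after r_t to the posterior
    # (post-response) state.
    seq = []
    for q, r in zip(questions, responses):
        seq.append(("Q", q))
        seq.append(("R", r))
    return seq

# Hypothetical skill identifiers and binary correctness responses.
seq = alternate_sequence(["skill_add", "skill_sub"], [1, 0])
```

An autoregressive model trained on such a stream predicts each next element from the history, which is exactly the generative framing the abstract describes.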
|
2502.11821
|
Unitary orthonormal bases of finite dimensional inclusions
|
math.OA cs.IT math-ph math.IT math.MP quant-ph
|
We study unitary orthonormal bases in the sense of Pimsner and Popa for
inclusions $(\mathcal{B}\subseteq \mathcal{A}, E),$ where $\mathcal{A},
\mathcal{B}$ are finite dimensional von Neumann algebras and $E$ is a
conditional expectation map from $\mathcal{A}$ onto $\mathcal{B}$. It is shown
that the existence of such bases requires that the associated inclusion matrix
satisfy a spectral condition forcing the dimension vectors to be Perron-Frobenius
eigenvectors, and that the conditional expectation map preserve the Markov trace.
Subject to these conditions, explicit unitary orthonormal bases are constructed
if either one of the algebras is abelian or simple. They generalize complex
Hadamard matrices, Weyl unitary bases, and a recent work of Crann et al., which
correspond to the special cases of $\mathcal{A}$ being abelian, simple, and a
general multi-matrix algebra, respectively, with $\mathcal{B}$ being the algebra
of complex numbers; here, for the first time, $\mathcal{B}$ is allowed to be
more general. As an
application of these results, it is shown that if $(\mathcal{B}\subseteq
\mathcal{A}, E)$ admits a unitary orthonormal basis, then the Connes-St{\o}rmer
relative entropy $H(\mathcal{A}_1|\mathcal{A})$ equals the logarithm of the
square of the norm of the inclusion matrix, where $\mathcal{A}_1$ denotes the
Jones basic construction of the inclusion. As a further application, we prove
the existence of unitary orthonormal bases for a large class of depth 2
subfactors with abelian relative commutant.
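For the simplest special case mentioned above ($\mathcal{B}=\mathbb{C}$, $\mathcal{A}=M_n(\mathbb{C})$ simple, $E$ the normalized trace $\mathrm{tr}_n$), the Weyl clock-and-shift unitaries give an explicit unitary orthonormal basis; a brief sketch:

```latex
% Clock and shift on C^n:  X e_k = e_{k+1 (mod n)},  Z e_k = \omega^k e_k,
% with \omega = e^{2\pi i/n}.  Set
\[
  U_{j,k} = X^{j} Z^{k}, \qquad 0 \le j,k \le n-1 .
\]
% Orthonormality with respect to the normalized trace
% \mathrm{tr}_n(x) = \mathrm{Tr}(x)/n :
\[
  \mathrm{tr}_n\!\left(U_{j,k}^{*}\, U_{j',k'}\right)
  = \delta_{j j'}\,\delta_{k k'} ,
\]
% so the n^2 unitaries \{U_{j,k}\} form a unitary orthonormal basis of
% M_n(\mathbb{C}) over \mathcal{B} = \mathbb{C} in the sense of Pimsner--Popa.
```

Indeed, for $j \neq j'$ the product $Z^{-k}X^{j'-j}Z^{k'}$ has zero diagonal, and for $j = j'$ one gets $\mathrm{tr}_n(Z^{k'-k}) = \delta_{k k'}$.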
|
2502.11824
|
M-ABSA: A Multilingual Dataset for Aspect-Based Sentiment Analysis
|
cs.CL
|
Aspect-based sentiment analysis (ABSA) is a crucial task in information
extraction and sentiment analysis, aiming to identify aspects with associated
sentiment elements in text. However, existing ABSA datasets are predominantly
English-centric, limiting the scope for multilingual evaluation and research.
To bridge this gap, we present M-ABSA, a comprehensive dataset spanning 7
domains and 21 languages, making it the most extensive multilingual parallel
dataset for ABSA to date. Our primary focus is on triplet extraction, which
involves identifying aspect terms, aspect categories, and sentiment polarities.
The dataset is constructed through an automatic translation process with human
review to ensure quality. We perform extensive experiments using various
baselines to assess performance and compatibility on M-ABSA. Our empirical
findings highlight that the dataset enables diverse evaluation tasks, such as
multilingual and multi-domain transfer learning, and large language model
evaluation, underscoring its inclusivity and its potential to drive
advancements in multilingual ABSA research.
|
2502.11827
|
Influence Operations in Social Networks
|
cs.SI cs.CY
|
An important part of online activity is intended to control public
opinion and behavior, and is currently considered a global threat. This article
identifies and conceptualizes seven online strategies employed in social media
influence operations. These procedures are quantified through the analysis of
80 incidents of foreign information manipulation and interference (FIMI),
estimating their real-world usage and combination. Finally, we suggest future
directions for research on influence operations.
|
2502.11828
|
Intersectional Fairness in Reinforcement Learning with Large State and
Constraint Spaces
|
cs.LG cs.GT
|
In traditional reinforcement learning (RL), the learner aims to solve a
single objective optimization problem: find the policy that maximizes expected
reward. However, in many real-world settings, it is important to optimize over
multiple objectives simultaneously. For example, when we are interested in
fairness, states might have feature annotations corresponding to multiple
(intersecting) demographic groups to whom reward accrues, and our goal might be
to maximize the reward of the group receiving the minimal reward. In this work,
we consider a multi-objective optimization problem in which each objective is
defined by a state-based reweighting of a single scalar reward function. This
generalizes the problem of maximizing the reward of the minimum reward group.
We provide oracle-efficient algorithms to solve these multi-objective RL
problems even when the number of objectives is exponentially large, for tabular
MDPs as well as for large MDPs when the group functions have additional
structure. Finally, we experimentally validate our theoretical results and
demonstrate applications on a preferential attachment graph MDP.
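The state-based reweighting of a single scalar reward can be made concrete with a toy example. The group definitions and trajectory below are illustrative, not from the paper.

```python
def group_rewards(trajectory, group_weights, base_reward):
    # Each objective is a state-based reweighting of the same scalar reward:
    # group g's return weights the reward at each state by g's membership weight.
    totals = {g: 0.0 for g in group_weights}
    for state, action in trajectory:
        r = base_reward(state, action)
        for g, weight in group_weights.items():
            totals[g] += weight(state) * r
    return totals

# Toy example with two (possibly intersecting) groups defined on states;
# the fairness objective is the minimum group reward.
trajectory = [(0, "a"), (1, "a"), (1, "a")]
base_reward = lambda s, a: 1.0
group_weights = {"group0": lambda s: 1.0 if s == 0 else 0.0,
                 "group1": lambda s: 1.0 if s == 1 else 0.0}
totals = group_rewards(trajectory, group_weights, base_reward)
min_group_reward = min(totals.values())
```

With intersecting demographic groups, a single state can carry nonzero weight for several objectives at once, which is why the number of objectives can blow up combinatorially.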
|
2502.11829
|
Code-Vision: Evaluating Multimodal LLMs Logic Understanding and Code
Generation Capabilities
|
cs.CL cs.AI cs.SE
|
This paper introduces Code-Vision, a benchmark designed to evaluate the
logical understanding and code generation capabilities of Multimodal Large
Language Models (MLLMs). It challenges MLLMs to generate a correct program that
fulfills specific functionality requirements based on a given flowchart, which
visually represents the desired algorithm or process. Code-Vision comprises
three subsets: HumanEval-V, Algorithm, and MATH, which evaluate MLLMs' coding
abilities across basic programming, algorithmic, and mathematical
problem-solving domains. Our experiments evaluate 12 MLLMs on Code-Vision.
Experimental results demonstrate that there is a large performance difference
between proprietary and open-source models. On Hard problems, GPT-4o can
achieve 79.3% pass@1, but the best open-source model only achieves 15%. Further
experiments reveal that Code-Vision can pose unique challenges compared to
other multimodal reasoning benchmarks MMCode and MathVista. We also explore the
reason for the poor performance of the open-source models. All data and codes
are available at https://github.com/wanghanbinpanda/CodeVision.
|
2502.11830
|
Text Classification in the LLM Era -- Where do we stand?
|
cs.CL
|
Large Language Models revolutionized NLP and showed dramatic performance
improvements across several tasks. In this paper, we investigated the role of
such language models in text classification and how they compare with other
approaches relying on smaller pre-trained language models. Considering 32
datasets spanning 8 languages, we compared zero-shot classification, few-shot
fine-tuning and synthetic data based classifiers with classifiers built using
the complete human labeled dataset. Our results show that zero-shot approaches
do well for sentiment classification, but are outperformed by other approaches
for the rest of the tasks, and synthetic data sourced from multiple LLMs can
build better classifiers than zero-shot open LLMs. We also see wide performance
disparities across languages in all the classification scenarios. We expect
these findings to guide practitioners working on developing text
classification systems across languages.
|
2502.11831
|
Intuitive physics understanding emerges from self-supervised pretraining
on natural videos
|
cs.CV cs.AI
|
We investigate the emergence of intuitive physics understanding in
general-purpose deep neural network models trained to predict masked regions in
natural videos. Leveraging the violation-of-expectation framework, we find that
video prediction models trained to predict outcomes in a learned representation
space demonstrate an understanding of various intuitive physics properties,
such as object permanence and shape consistency. In contrast, video prediction
in pixel space and multimodal large language models, which reason through text,
achieve performance closer to chance. Our comparisons of these architectures
reveal that jointly learning an abstract representation space while predicting
missing parts of sensory input, akin to predictive coding, is sufficient to
acquire an understanding of intuitive physics, and that even models trained on
one week of unique video achieve above-chance performance. This challenges the
idea that core knowledge -- a set of innate systems to help understand the
world -- needs to be hardwired to develop an understanding of intuitive
physics.
|
2502.11835
|
Neural Chaos: A Spectral Stochastic Neural Operator
|
cs.CE physics.comp-ph stat.ML
|
Building surrogate models with uncertainty quantification capabilities is
essential for many engineering applications where randomness, such as
variability in material properties, is unavoidable. Polynomial Chaos Expansion
(PCE) is widely recognized as a go-to method for constructing stochastic
solutions in both intrusive and non-intrusive ways. Its application becomes
challenging, however, with complex or high-dimensional processes, as achieving
accuracy requires higher-order polynomials, which can increase computational
demands and the risk of overfitting. Furthermore, PCE requires specialized
treatments to manage random variables that are not independent, and these
treatments may be problem-dependent or may fail with increasing complexity. In
this work, we adopt the spectral expansion formalism used in PCE; however, we
replace the classical polynomial basis functions with neural network (NN) basis
functions to leverage their expressivity. To achieve this, we propose an
algorithm that identifies NN-parameterized basis functions in a purely
data-driven manner, without any prior assumptions about the joint distribution
of the random variables involved, whether independent or dependent. The
proposed algorithm identifies each NN-parameterized basis function
sequentially, ensuring they are orthogonal with respect to the data
distribution. The basis functions are constructed directly on the joint
stochastic variables without requiring a tensor product structure. This
approach may offer greater flexibility for complex stochastic models, while
simplifying implementation compared to the tensor product structures typically
used in PCE to handle random vectors. We demonstrate the effectiveness of the
proposed scheme through several numerical examples of varying complexity and
provide comparisons with classical PCE.
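The sequential, data-driven orthogonalization the abstract describes amounts to Gram-Schmidt with respect to the empirical data distribution. In the paper the candidate functions are NN-parameterized; plain polynomials stand in here for illustration.

```python
import numpy as np

def empirical_inner(f_vals, g_vals):
    # Monte Carlo inner product <f, g> ~ E[f(X) g(X)] under the data distribution.
    return float(np.mean(f_vals * g_vals))

def orthogonalize_sequentially(candidate_vals):
    # Sequential identification of basis functions: each new candidate is made
    # orthonormal, w.r.t. the empirical distribution of the samples, to all
    # previously accepted basis functions.
    basis = []
    for vals in candidate_vals:
        v = np.array(vals, dtype=float)
        for b in basis:
            v = v - empirical_inner(v, b) * b
        norm = np.sqrt(empirical_inner(v, v))
        if norm > 1e-12:
            basis.append(v / norm)
    return basis

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
basis = orthogonalize_sequentially([np.ones_like(x), x, x**2])
```

Note that for standard normal samples this recovers (approximately) the first Hermite polynomials, i.e., exactly the classical PCE basis; the point of the NN parameterization is that no such distributional assumption is needed.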
|
2502.11836
|
Model Generalization on Text Attribute Graphs: Principles with Large
Language Models
|
cs.LG
|
Large language models (LLMs) have recently been introduced to graph learning,
aiming to extend their zero-shot generalization success to tasks where labeled
graph data is scarce. Among these applications, inference over text-attributed
graphs (TAGs) presents unique challenges: existing methods struggle with LLMs'
limited context length for processing large node neighborhoods and the
misalignment between node embeddings and the LLM token space. To address these
issues, we establish two key principles for ensuring generalization and derive
the framework LLM-BP accordingly: (1) Unifying the attribute space with
task-adaptive embeddings, where we leverage LLM-based encoders and task-aware
prompting to enhance generalization of the text attribute embeddings; (2)
Developing a generalizable graph information aggregation mechanism, for which
we adopt belief propagation with LLM-estimated parameters that adapt across
graphs. Evaluations on 11 real-world TAG benchmarks demonstrate that LLM-BP
significantly outperforms existing approaches, achieving 8.10% improvement with
task-conditional embeddings and an additional 1.71% gain from adaptive
aggregation.
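The belief-propagation-style aggregation can be sketched in a deliberately simplified, message-free form. The homophily coupling is the quantity LLM-BP would estimate with an LLM; everything else below (the potential shape, the update rule without proper cavity messages) is an illustrative simplification, not the paper's algorithm.

```python
import numpy as np

def propagate_beliefs(priors, edges, coupling, n_iters=10):
    # Simplified loopy propagation of class beliefs over an undirected graph.
    # `coupling` > 1 encodes homophily: a neighbor's belief pulls a node
    # toward the same class. Beliefs are renormalized each iteration.
    n, k = priors.shape
    beliefs = priors.copy()
    # Pairwise potential: `coupling` on the diagonal (same label), 1 elsewhere.
    pot = np.ones((k, k)) + (coupling - 1.0) * np.eye(k)
    for _ in range(n_iters):
        new = priors.copy()
        for i, j in edges:
            new[i] *= beliefs[j] @ pot
            new[j] *= beliefs[i] @ pot
        beliefs = new / new.sum(axis=1, keepdims=True)
    return beliefs

# Chain 0 - 1 - 2: node 0 has a confident prior, the rest are uninformative.
priors = np.array([[0.9, 0.1], [0.5, 0.5], [0.5, 0.5]])
beliefs = propagate_beliefs(priors, edges=[(0, 1), (1, 2)], coupling=2.0)
```

Under homophily, node 0's confident prior diffuses down the chain, so even the far node ends up leaning toward class 0.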
|
2502.11840
|
ChordFormer: A Conformer-Based Architecture for Large-Vocabulary Audio
Chord Recognition
|
cs.SD cs.AI cs.CV cs.IR cs.LG
|
Chord recognition serves as a critical task in music information retrieval
due to the abstract and descriptive nature of chords in music analysis. While
audio chord recognition systems have achieved significant accuracy for small
vocabularies (e.g., major/minor chords), large-vocabulary chord recognition
remains a challenging problem. This complexity also arises from the inherent
long-tail distribution of chords, where rare chord types are underrepresented
in most datasets, leading to insufficient training samples. Effective chord
recognition requires leveraging contextual information from audio sequences,
yet existing models, such as combinations of convolutional neural networks,
bidirectional long short-term memory networks, and bidirectional transformers,
face limitations in capturing long-term dependencies and exhibit suboptimal
performance on large-vocabulary chord recognition tasks. This work proposes
ChordFormer, a novel conformer-based architecture designed to tackle structural
chord recognition (e.g., triads, bass, sevenths) for large vocabularies.
ChordFormer leverages conformer blocks that integrate convolutional neural
networks with transformers, thus enabling the model to capture both local
patterns and global dependencies effectively. By addressing challenges such as
class imbalance through a reweighted loss function and structured chord
representations, ChordFormer outperforms state-of-the-art models, achieving a
2% improvement in frame-wise accuracy and a 6% increase in class-wise accuracy
on large-vocabulary chord datasets. Furthermore, ChordFormer excels in handling
class imbalance, providing robust and balanced recognition across chord types.
This approach bridges the gap between theoretical music knowledge and practical
applications, advancing the field of large-vocabulary chord recognition.
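A reweighted loss for the long-tail chord distribution can be illustrated with effective-number class weighting (Cui et al.'s scheme, used here as an assumed concrete choice; the abstract does not specify ChordFormer's exact weighting).

```python
import numpy as np

def class_balanced_ce(logits, labels, class_counts, beta=0.999):
    # Effective-number reweighting: rare chord classes receive larger weights,
    # countering the long-tail distribution of chord types.
    counts = np.asarray(class_counts, dtype=float)
    weights = (1.0 - beta) / (1.0 - beta ** counts)
    weights /= weights.mean()
    # Numerically stable softmax cross-entropy.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    nll = -log_probs[np.arange(len(labels)), labels]
    return float(np.mean(weights[labels] * nll))

# Hypothetical two-class example: a common chord (1000 samples) vs a rare one (10).
loss = class_balanced_ce(np.array([[2.0, 0.0], [0.0, 2.0]]),
                         labels=np.array([0, 1]),
                         class_counts=[1000, 10])
```

The rare class's per-sample loss is scaled up by orders of magnitude relative to the common class, which is what drives the class-wise accuracy gains reported for imbalanced vocabularies.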
|
2502.11843
|
Can LLM Agents Maintain a Persona in Discourse?
|
cs.CL cs.AI cs.SI
|
Large Language Models (LLMs) are widely used as conversational agents,
exploiting their capabilities in various sectors such as education, law,
medicine, and more. However, LLMs are often subjected to context-shifting
behaviour, resulting in a lack of consistent and interpretable
personality-aligned interactions. Adherence to psychological traits lacks
comprehensive analysis, especially in the case of dyadic (pairwise)
conversations. We examine this challenge from two viewpoints, initially using
two conversation agents to generate a discourse on a certain topic with an
assigned personality from the OCEAN framework (Openness, Conscientiousness,
Extraversion, Agreeableness, and Neuroticism) as High/Low for each trait. This
is followed by using multiple judge agents to infer the original traits
assigned to explore prediction consistency, inter-model agreement, and
alignment with the assigned personality. Our findings indicate that while LLMs
can be guided toward personality-driven dialogue, their ability to maintain
personality traits varies significantly depending on the combination of models
and discourse settings. These inconsistencies emphasise the challenges in
achieving stable and interpretable personality-aligned interactions in LLMs.
|
2502.11844
|
BaxBench: Can LLMs Generate Correct and Secure Backends?
|
cs.CR cs.AI cs.LG cs.PL
|
The automatic generation of programs has long been a fundamental challenge in
computer science. Recent benchmarks have shown that large language models
(LLMs) can effectively generate code at the function level, make code edits,
and solve algorithmic coding tasks. However, to achieve full automation, LLMs
should be able to generate production-quality, self-contained application
modules. To evaluate the capabilities of LLMs in solving this challenge, we
introduce BaxBench, a novel evaluation benchmark consisting of 392 tasks for
the generation of backend applications. We focus on backends for three critical
reasons: (i) they are practically relevant, building the core components of
most modern web and cloud software, (ii) they are difficult to get right,
requiring multiple functions and files to achieve the desired functionality,
and (iii) they are security-critical, as they are exposed to untrusted
third-parties, making secure solutions that prevent deployment-time attacks an
imperative. BaxBench validates the functionality of the generated applications
with comprehensive test cases, and assesses their security exposure by
executing end-to-end exploits. Our experiments reveal key limitations of
current LLMs in both functionality and security: (i) even the best model,
OpenAI o1, achieves a mere 60% on code correctness; (ii) on average, we could
successfully execute security exploits on more than half of the correct
programs generated by each LLM; and (iii) in less popular backend frameworks,
models further struggle to generate correct and secure applications. Progress
on BaxBench signifies important steps towards autonomous and secure software
development with LLMs.
|
2502.11850
|
Steering the LoCoMotif: Using Domain Knowledge in Time Series Motif
Discovery
|
cs.LG cs.AI cs.CV
|
Time Series Motif Discovery (TSMD) identifies repeating patterns in time
series data, but its unsupervised nature might result in motifs that are not
interesting to the user. To address this, we propose a framework that allows
the user to impose constraints on the motifs to be discovered, where
constraints can easily be defined according to the properties of the desired
motifs in the application domain. We also propose an efficient implementation
of the framework, the LoCoMotif-DoK algorithm. We demonstrate that
LoCoMotif-DoK can effectively leverage domain knowledge in real and synthetic
data, outperforming other TSMD techniques which only support a limited form of
domain knowledge.
|
2502.11853
|
StructTransform: A Scalable Attack Surface for Safety-Aligned Large
Language Models
|
cs.LG
|
In this work, we present a series of structure transformation attacks on LLM
alignment, where we encode natural language intent using diverse syntax spaces,
ranging from simple structure formats and basic query languages (e.g., SQL) to
novel spaces and syntaxes created entirely by LLMs. Our extensive
evaluation shows that our simplest attacks can achieve close to 90% success
rate, even on strict LLMs (such as Claude 3.5 Sonnet) using SOTA alignment
mechanisms. We improve the attack performance further by using an adaptive
scheme that combines structure transformations along with existing
\textit{content transformations}, resulting in over 96% ASR with 0% refusals.
To generalize our attacks, we explore numerous structure formats, including
syntaxes purely generated by LLMs. Our results indicate that such novel
syntaxes are easy to generate and result in a high ASR, suggesting that
defending against our attacks is not a straightforward process. Finally, we
develop a benchmark and evaluate existing safety-alignment defenses against it,
showing that most of them fail with 100% ASR. Our results show that existing
safety alignment mostly relies on token-level patterns without recognizing
harmful concepts, highlighting and motivating the need for serious research
efforts in this direction. As a case study, we demonstrate how attackers can
use our attack to easily generate a sample malware, and a corpus of fraudulent
SMS messages, which perform well in bypassing detection.
|
2502.11854
|
Enhanced Anomaly Detection in IoMT Networks using Ensemble AI Models on
the CICIoMT2024 Dataset
|
cs.CR cs.LG
|
The rapid proliferation of Internet of Medical Things (IoMT) devices in
healthcare has introduced unique cybersecurity challenges, primarily due to the
diverse communication protocols and critical nature of these devices. This
research aims to develop an advanced, real-time anomaly detection framework
tailored for IoMT network traffic, leveraging AI/ML models and the CICIoMT2024
dataset. By integrating multi-protocol (MQTT, WiFi), attack-specific (DoS,
DDoS), time-series (active/idle states), and device-specific (Bluetooth) data,
our study captures a comprehensive range of IoMT interactions. As part of our
data analysis, various machine learning techniques are employed, including
an ensemble model using XGBoost for improved performance against specific
attack types, sequential models comprising LSTM and CNN-LSTM that leverage
time dependencies, and unsupervised models such as Autoencoders and Isolation
Forest that are well suited to general anomaly detection. The experimental
results show that the ensemble model lowers false positive rates and reduces
missed detections.
|
2502.11856
|
LLMs as a synthesis between symbolic and continuous approaches to
language
|
cs.CL
|
Since the middle of the 20th century, a fierce battle has been fought between
symbolic and continuous approaches to language and cognition. The success of
deep learning models, and LLMs in particular, has been alternatively taken as
showing that the continuous camp has won, or dismissed as an irrelevant
engineering development. However, in this position paper I argue that deep
learning models for language actually represent a synthesis between the two
traditions. This is because 1) deep learning architectures allow for both
continuous/distributed and symbolic/discrete-like representations and
computations; 2) models trained on language make use of this flexibility. In
particular, I review recent research in mechanistic interpretability that
showcases how a substantial part of morphosyntactic knowledge is encoded in a
near-discrete fashion in LLMs. This line of research suggests that different
behaviors arise in an emergent fashion, and models flexibly alternate between
the two modes (and everything in between) as needed. This is possibly one of
the main reasons for their wild success; and it is also what makes them
particularly interesting for the study of language and cognition. Is it time
for peace?
|
2502.11858
|
Rethinking Audio-Visual Adversarial Vulnerability from Temporal and
Modality Perspectives
|
cs.SD cs.CV
|
While audio-visual learning equips models with a richer understanding of the
real world by leveraging multiple sensory modalities, this integration also
introduces new vulnerabilities to adversarial attacks.
In this paper, we present a comprehensive study of the adversarial robustness
of audio-visual models, considering both temporal and modality-specific
vulnerabilities. We propose two powerful adversarial attacks: 1) a temporal
invariance attack that exploits the inherent temporal redundancy across
consecutive time segments and 2) a modality misalignment attack that introduces
incongruence between the audio and visual modalities. These attacks are
designed to thoroughly assess the robustness of audio-visual models against
diverse threats. Furthermore, to defend against such attacks, we introduce a
novel audio-visual adversarial training framework. This framework addresses key
challenges in vanilla adversarial training by incorporating efficient
adversarial perturbation crafting tailored to multi-modal data and an
adversarial curriculum strategy. Extensive experiments on the Kinetics-Sounds
dataset demonstrate that our proposed temporal and modality-based attacks
achieve state-of-the-art performance in degrading model performance, while our
adversarial training defense largely improves the adversarial robustness as
well as the adversarial training efficiency.
|
2502.11859
|
Defining and Evaluating Visual Language Models' Basic Spatial Abilities:
A Perspective from Psychometrics
|
cs.CV cs.CL
|
The Theory of Multiple Intelligences underscores the hierarchical nature of
cognitive capabilities. To advance Spatial Artificial Intelligence, we pioneer
a psychometric framework defining five Basic Spatial Abilities (BSAs) in Visual
Language Models (VLMs): Spatial Perception, Spatial Relation, Spatial
Orientation, Mental Rotation, and Spatial Visualization. Benchmarking 13
mainstream VLMs through nine validated psychometric experiments reveals
significant gaps versus humans (average score 24.95 vs. 68.38), with three key
findings: 1) VLMs mirror human hierarchies (strongest in 2D orientation,
weakest in 3D rotation) with independent BSAs (Pearson's r<0.4); 2) Smaller
models such as Qwen2-VL-7B surpass larger counterparts, with Qwen leading
(30.82) and InternVL2 lagging (19.6); 3) Interventions like chain-of-thought
(0.100 accuracy gain) and 5-shot training (0.259 improvement) reveal limits
imposed by architectural constraints. Identified barriers include weak geometry encoding
and missing dynamic simulation. By linking psychometric BSAs to VLM
capabilities, we provide a diagnostic toolkit for spatial intelligence
evaluation, methodological foundations for embodied AI development, and a
cognitive science-informed roadmap for achieving human-like spatial
intelligence.
|
2502.11861
|
Exploring Large Language Models in Healthcare: Insights into Corpora
Sources, Customization Strategies, and Evaluation Metrics
|
cs.CL
|
This study reviewed the use of Large Language Models (LLMs) in healthcare,
focusing on their training corpora, customization techniques, and evaluation
metrics. A systematic search of studies from 2021 to 2024 identified 61
articles. Four types of corpora were used: clinical resources, literature,
open-source datasets, and web-crawled data. Common construction techniques
included pre-training, prompt engineering, and retrieval-augmented generation,
with 44 studies combining multiple methods. Evaluation metrics were categorized
into process, usability, and outcome metrics, with outcome metrics divided into
model-based and expert-assessed outcomes. The study identified critical gaps in
corpus fairness, which contributed to biases from geographic, cultural, and
socio-economic factors. The reliance on unverified or unstructured data
highlighted the need for better integration of evidence-based clinical
guidelines. Future research should focus on developing a tiered corpus
architecture with vetted sources and dynamic weighting, while ensuring model
transparency. Additionally, the lack of standardized evaluation frameworks for
domain-specific models called for comprehensive validation of LLMs in
real-world healthcare settings.
|
2502.11862
|
Understanding In-Context Machine Translation for Low-Resource Languages:
A Case Study on Manchu
|
cs.CL
|
In-context machine translation (MT) with large language models (LLMs) is a
promising approach for low-resource MT, as it can readily take advantage of
linguistic resources such as grammar books and dictionaries. Such resources are
usually selectively integrated into the prompt so that LLMs can directly
perform translation without any specific training, via their in-context
learning capability (ICL). However, the relative importance of each type of
resource, e.g., dictionary, grammar book, and retrieved parallel examples, is
not entirely clear. To address this gap, this study systematically investigates
how each resource and its quality affects the translation performance, with the
Manchu language as our case study. To remove any prior knowledge of Manchu
encoded in the LLM parameters and single out the effect of ICL, we also
experiment with an encrypted version of Manchu texts. Our results indicate that
high-quality dictionaries and good parallel examples are very helpful, while
grammars hardly help. In a follow-up study, we showcase a promising application
of in-context MT: parallel data augmentation as a way to bootstrap the
conventional MT model. When monolingual data abound, generating synthetic
parallel data through in-context MT offers a pathway to mitigate data scarcity
and build effective and efficient low-resource neural MT systems.
|
2502.11863
|
FedEAT: A Robustness Optimization Framework for Federated LLMs
|
cs.LG cs.AI
|
Significant advancements have been made by Large Language Models (LLMs) in
the domains of natural language understanding and automated content creation.
However, they still face persistent problems, including substantial
computational costs and inadequate availability of training data. The
combination of Federated Learning (FL) and LLMs (federated LLMs) offers a
solution by leveraging distributed data while protecting privacy, which
positions it as an ideal choice for sensitive domains. However, Federated LLMs
still suffer from robustness challenges, including data heterogeneity,
malicious clients, and adversarial attacks, which greatly hinder their
applications. We first introduce the robustness problems in federated LLMs. To
address these challenges, we propose FedEAT (Federated Embedding space
Adversarial Training), a novel framework that applies adversarial training in
the embedding space of client LLM and employs a robust aggregation approach,
specifically geometric median aggregation, to enhance the robustness of
Federated LLMs. Our experiments demonstrate that FedEAT effectively improves
the robustness of Federated LLMs with minimal performance loss.
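The robust aggregation step named above, the geometric median, can be sketched with the classical Weiszfeld iteration. This is a generic implementation for illustration, not the paper's code.

```python
import numpy as np

def geometric_median(points, iters=100, eps=1e-9):
    """Weiszfeld iteration: finds the point minimizing the sum of Euclidean
    distances to the client updates, robust to a minority of outliers."""
    y = points.mean(axis=0)
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(points - y, axis=1), eps)
        w = 1.0 / d
        y_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < eps:
            return y_new
        y = y_new
    return y
```

A single malicious client pushing an extreme update shifts the mean arbitrarily far, but barely moves the geometric median.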
|
2502.11864
|
Does Knowledge About Perceptual Uncertainty Help an Agent in Automated
Driving?
|
cs.CV cs.RO
|
Agents in real-world scenarios like automated driving deal with uncertainty
in their environment, in particular due to perceptual uncertainty. Although
reinforcement learning is dedicated to autonomous decision-making under
uncertainty, these algorithms are typically not informed about the uncertainty
currently contained in their environment. On the other hand, uncertainty
estimation for perception itself is typically directly evaluated in the
perception domain, e.g., in terms of false positive detection rates or
calibration errors based on camera images. Its use for deciding on
goal-oriented actions remains largely unstudied. In this paper, we investigate
how an agent's behavior is influenced by an uncertain perception and how this
behavior changes if information about this uncertainty is available. Therefore,
we consider a proxy task, where the agent is rewarded for driving a route as
fast as possible without colliding with other road users. For controlled
experiments, we introduce uncertainty in the observation space by perturbing
the perception of the given agent while informing the latter. Our experiments
show that an unreliable observation space modeled by a perturbed perception
leads to a defensive driving behavior of the agent. Furthermore, when adding
the information about the current uncertainty directly to the observation
space, the agent adapts to the specific situation and in general accomplishes
its task faster while, at the same time, accounting for risks.
|
2502.11866
|
Southern Newswire Corpus: A Large-Scale Dataset of Mid-Century Wire
Articles Beyond the Front Page
|
cs.CL
|
I introduce a new large-scale dataset of historical wire articles from U.S.
Southern newspapers, spanning 1960-1975 and covering multiple wire services:
The Associated Press, United Press International, and the Newspaper Enterprise
Association. Unlike prior work focusing on front-page content, this dataset
captures articles across the entire newspaper, offering broader insight into
mid-century Southern coverage. The dataset includes a version that has
undergone an LLM-based text cleanup pipeline to reduce OCR noise, enhancing its
suitability for quantitative text analysis. Additionally, duplicate versions of
articles are retained to enable analysis of editorial differences in language
and framing across newspapers. Each article is tagged by wire service,
facilitating comparative studies of editorial patterns across agencies. This
resource opens new avenues for research in computational social science,
digital humanities, and historical linguistics, providing a detailed
perspective on how Southern newspapers relayed national and international news
during a transformative period in American history. The dataset will be made
available upon publication or request for research purposes.
|
2502.11867
|
On Data-Driven Robust Optimization With Multiple Uncertainty Subsets:
Unified Uncertainty Set Representation and Mitigating Conservatism
|
math.OC cs.SY eess.SY
|
Constructing uncertainty sets as unions of multiple subsets has emerged as an
effective approach for creating compact and flexible uncertainty
representations in data-driven robust optimization (RO). This paper focuses on
two separate research questions. The first concerns the computational challenge
in applying these uncertainty sets in RO-based predictive control. To address
this, a monolithic mixed-integer representation of the uncertainty set is
proposed to uniformly describe the union of multiple subsets, enabling the
computation of the worst-case uncertainty scenario across all subsets within a
single mixed-integer linear programming (MILP) problem. The second research
question focuses on mitigating the conservatism of conventional RO formulations
by leveraging the structure of the uncertainty set. To achieve this, a novel
objective function is proposed to exploit the uncertainty set structure and
integrate the existing RO and distributionally robust optimization (DRO)
formulations, yielding less conservative solutions than conventional RO
formulations while avoiding the high-dimensional continuous uncertainty
distributions and the high computational burden typically associated with
existing DRO formulations. Given the proposed formulations, numerically
efficient computation methods based on column-and-constraint generation (CCG)
are also developed. Extensive simulations across three case studies are
performed to demonstrate the effectiveness of the proposed schemes.
|
2502.11874
|
VAQUUM: Are Vague Quantifiers Grounded in Visual Data?
|
cs.CL
|
Vague quantifiers such as "a few" and "many" are influenced by many
contextual factors, including how many objects are present in a given context.
In this work, we evaluate the extent to which vision-and-language models (VLMs)
are compatible with humans when producing or judging the appropriateness of
vague quantifiers in visual contexts. We release a novel dataset, VAQUUM,
containing 20300 human ratings on quantified statements across a total of 1089
images. Using this dataset, we compare human judgments and VLM predictions
using three different evaluation methods. Our findings show that VLMs, like
humans, are influenced by object counts in vague quantifier use. However, we
find significant inconsistencies across models in different evaluation
settings, suggesting that judging and producing vague quantifiers rely on two
different processes.
|
2502.11877
|
JoLT: Joint Probabilistic Predictions on Tabular Data Using LLMs
|
stat.ML cs.LG
|
We introduce a simple method for probabilistic predictions on tabular data
based on Large Language Models (LLMs) called JoLT (Joint LLM Process for
Tabular data). JoLT uses the in-context learning capabilities of LLMs to define
joint distributions over tabular data conditioned on user-specified side
information about the problem, exploiting the vast repository of latent
problem-relevant knowledge encoded in LLMs. JoLT defines joint distributions
for multiple target variables with potentially heterogeneous data types without
any data conversion, data preprocessing, special handling of missing data, or
model training, making it accessible and efficient for practitioners. Our
experiments show that JoLT outperforms competitive methods on low-shot
single-target and multi-target tabular classification and regression tasks.
Furthermore, we show that JoLT can automatically handle missing data and
perform data imputation by leveraging textual side information. We argue that
due to its simplicity and generality, JoLT is an effective approach for a wide
variety of real prediction problems.
|
2502.11880
|
Bitnet.cpp: Efficient Edge Inference for Ternary LLMs
|
cs.LG cs.AI cs.CL cs.DC
|
The advent of 1-bit large language models (LLMs), led by BitNet b1.58, has
spurred interest in ternary LLMs. Despite this, research and practical
applications focusing on efficient edge inference for ternary LLMs remain
scarce. To bridge this gap, we introduce Bitnet.cpp, an inference system
optimized for BitNet b1.58 and ternary LLMs. Given that mixed-precision matrix
multiplication (mpGEMM) constitutes the bulk of inference time in ternary LLMs,
Bitnet.cpp incorporates a novel mpGEMM library to facilitate
sub-2-bits-per-weight, efficient and lossless inference. The library features
two core solutions: Ternary Lookup Table (TL), which addresses spatial
inefficiencies of previous bit-wise methods, and Int2 with a Scale (I2_S),
which ensures lossless edge inference, both enabling high-speed inference. Our
experiments show that Bitnet.cpp achieves up to a 6.25x increase in speed over
full-precision baselines and up to 2.32x over low-bit baselines, setting new
benchmarks in the field. Additionally, we expand TL to element-wise lookup
table (ELUT) for low-bit LLMs in the appendix, presenting both theoretical and
empirical evidence of its considerable potential. Bitnet.cpp is publicly
available at https://github.com/microsoft/BitNet/tree/paper , offering a
sophisticated solution for the efficient and practical deployment of edge LLMs.
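The Ternary Lookup Table (TL) idea can be illustrated in miniature: precompute, for each group of activations, its dot product with every possible ternary weight pattern, then reduce each weight row to table lookups. The group size, encoding, and data layout below are illustrative assumptions, not Bitnet.cpp's actual kernel.

```python
import numpy as np
from itertools import product

G = 4                                                      # weights per lookup group
PATTERNS = np.array(list(product([-1, 0, 1], repeat=G)))   # all 3^G ternary patterns

def encode(group):
    # base-3 index of a ternary group, matching PATTERNS row order
    idx = 0
    for v in group:
        idx = idx * 3 + (int(v) + 1)
    return idx

def tl_matvec(W, x):
    """Ternary mat-vec via table lookups: each activation group's dot product
    with every ternary pattern is computed once, then each weight row is
    reduced to one lookup per group."""
    xg = x.reshape(-1, G)                # (in/G, G) activation groups
    tables = xg @ PATTERNS.T             # (in/G, 3^G) precomputed partial sums
    out = np.empty(W.shape[0])
    for o in range(W.shape[0]):
        wg = W[o].reshape(-1, G)
        out[o] = sum(tables[g, encode(wg[g])] for g in range(len(wg)))
    return out
```

The payoff in a real kernel comes from sharing the tables across many weight rows and packing the ternary indices compactly, which is where the sub-2-bits-per-weight storage arises.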
|
2502.11881
|
Hypothesis-Driven Theory-of-Mind Reasoning for Large Language Models
|
cs.AI cs.CL
|
Existing LLM reasoning methods have shown impressive capabilities across
various tasks, such as solving math and coding problems. However, applying
these methods to scenarios without ground-truth answers or rule-based
verification methods - such as tracking the mental states of an agent - remains
challenging. Inspired by the sequential Monte Carlo algorithm, we introduce
thought-tracing, an inference-time reasoning algorithm designed to trace the
mental states of specific agents by generating hypotheses and weighting them
based on observations without relying on ground-truth solutions to questions in
datasets. Our algorithm is modeled after the Bayesian theory-of-mind framework,
using LLMs to approximate probabilistic inference over agents' evolving mental
states based on their perceptions and actions. We evaluate thought-tracing on
diverse theory-of-mind benchmarks, demonstrating significant performance
improvements compared to baseline LLMs. Our experiments also reveal interesting
behaviors of the recent reasoning models - e.g., o1 and R1 - on theory-of-mind,
highlighting how social reasoning differs from other domains.
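Abstracting away the LLM calls, the hypothesis-weighting loop described above resembles a particle filter. In the sketch below, `propose` and `score` are toy stand-ins for LLM-based hypothesis generation and likelihood scoring; they are illustrative assumptions, not the paper's prompts.

```python
import random
import itertools

def trace_mental_states(observations, propose, score, n_particles=6):
    """Sequential-Monte-Carlo-style tracing: keep a population of mental-state
    hypotheses, reweight by how well each explains the latest observation,
    and resample in proportion to the weights."""
    particles = [propose(observations[0]) for _ in range(n_particles)]
    for obs in observations:
        weights = [score(h, obs) for h in particles]
        total = sum(weights)
        particles = random.choices(
            particles, weights=[w / total for w in weights], k=n_particles)
    return particles

# Toy stand-ins for the LLM calls: hypotheses about what an agent wants.
hypotheses = itertools.cycle(["wants the key", "wants the exit"])
propose = lambda obs: next(hypotheses)
score = lambda h, obs: 1.0 if h == "wants the key" else 0.0
```

Hypotheses that fail to explain the observations receive zero weight and die out of the population after resampling.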
|
2502.11882
|
Leveraging Dual Process Theory in Language Agent Framework for Real-time
Simultaneous Human-AI Collaboration
|
cs.AI cs.CL cs.HC cs.LG cs.MA
|
Agents built on large language models (LLMs) have excelled in turn-by-turn
human-AI collaboration but struggle with simultaneous tasks requiring real-time
interaction. Latency issues and the challenge of inferring variable human
strategies hinder their ability to make autonomous decisions without explicit
instructions. Through experiments with current independent System 1 and System
2 methods, we validate the necessity of using Dual Process Theory (DPT) in
real-time tasks. We propose DPT-Agent, a novel language agent framework that
integrates System 1 and System 2 for efficient real-time simultaneous human-AI
collaboration. DPT-Agent's System 1 uses a Finite-state Machine (FSM) and
code-as-policy for fast, intuitive, and controllable decision-making.
DPT-Agent's System 2 integrates Theory of Mind (ToM) and asynchronous
reflection to infer human intentions and perform reasoning-based autonomous
decisions. We demonstrate the effectiveness of DPT-Agent through further
experiments with rule-based agents and human collaborators, showing significant
improvements over mainstream LLM-based frameworks. To the best of our
knowledge, DPT-Agent is the first language agent framework that achieves
successful real-time simultaneous human-AI collaboration autonomously. Code of
DPT-Agent can be found in https://github.com/sjtu-marl/DPT-Agent.
|
2502.11883
|
FairDiverse: A Comprehensive Toolkit for Fair and Diverse Information
Retrieval Algorithms
|
cs.IR
|
In modern information retrieval (IR), achieving more than just accuracy is
essential to sustaining a healthy ecosystem, especially when addressing
fairness and diversity considerations. To meet these needs, various datasets,
algorithms, and evaluation frameworks have been introduced. However, these
algorithms are often tested across diverse metrics, datasets, and experimental
setups, leading to inconsistencies and difficulties in direct comparisons. This
highlights the need for a comprehensive IR toolkit that enables standardized
evaluation of fairness- and diversity-aware algorithms across different IR
tasks. To address this challenge, we present FairDiverse, an open-source and
standardized toolkit. FairDiverse offers a framework for integrating fair and
diverse methods, including pre-processing, in-processing, and post-processing
techniques, at different stages of the IR pipeline. The toolkit supports the
evaluation of 28 fairness and diversity algorithms across 16 base models,
covering two core IR tasks (search and recommendation), thereby establishing a
comprehensive benchmark. Moreover, FairDiverse is highly extensible, providing
multiple APIs that empower IR researchers to swiftly develop and evaluate their
own fairness- and diversity-aware models, while ensuring fair comparisons with
existing baselines. The project is open-sourced and available on
https://github.com/XuChen0427/FairDiverse.
|
2502.11886
|
LIMR: Less is More for RL Scaling
|
cs.LG cs.AI cs.CL
|
In this paper, we ask: what truly determines the effectiveness of RL training
data for enhancing language models' reasoning capabilities? While recent
advances like o1, Deepseek R1, and Kimi1.5 demonstrate RL's potential, the lack
of transparency about training data requirements has hindered systematic
progress. Starting directly from base models without distillation, we challenge
the assumption that scaling up RL training data inherently improves
performance. We demonstrate that a strategically selected subset of just 1,389
samples can outperform the full 8,523-sample dataset. We introduce Learning
Impact Measurement (LIM), an automated method to evaluate and prioritize
training samples based on their alignment with model learning trajectories,
enabling efficient resource utilization and scalable implementation. Our method
achieves comparable or even superior performance using only 1,389 samples
versus the full 8,523-sample dataset. Notably, while recent data-efficient
approaches (e.g., LIMO and s1) show promise with 32B-scale models, we find they
significantly underperform at 7B-scale through supervised fine-tuning (SFT).
In contrast, our RL-based LIMR achieves 16.7% higher accuracy on AIME24 and
outperforms LIMO and s1 by 13.0% and 22.2% on MATH500. These results
fundamentally reshape our understanding of RL scaling in LLMs, demonstrating
that precise sample selection, rather than data scale, may be the key to
unlocking enhanced reasoning capabilities. For reproducible research and future
innovation, we are open-sourcing LIMR, including implementation of LIM,
training and evaluation code, curated datasets, and trained models at
https://github.com/GAIR-NLP/LIMR.
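The abstract does not spell out LIM's formula. As one illustrative reading only, the sketch below scores each training sample by how well its per-checkpoint reward curve aligns (cosine similarity) with the model's average learning trajectory, then keeps the top-k; the actual LIM definition may differ.

```python
import numpy as np

def lim_select(sample_curves, k):
    """Hypothetical sample scorer: cosine similarity between each sample's
    reward-over-checkpoints curve and the mean learning trajectory; keep
    the k best-aligned samples."""
    ref = sample_curves.mean(axis=0)                         # learning trajectory
    a = sample_curves - sample_curves.mean(axis=1, keepdims=True)
    b = ref - ref.mean()
    sims = (a @ b) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b) + 1e-12)
    return np.argsort(-sims)[:k]
```

Under this reading, samples whose reward stagnates or moves against the model's overall progress are the first to be discarded.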
|
2502.11887
|
Stonefish: Supporting Machine Learning Research in Marine Robotics
|
cs.RO cs.AI cs.SY eess.SY
|
Simulations are highly valuable in marine robotics, offering a cost-effective
and controlled environment for testing in the challenging conditions of
underwater and surface operations. Given the high costs and logistical
difficulties of real-world trials, simulators capable of capturing the
operational conditions of subsea environments have become key in developing and
refining algorithms for remotely-operated and autonomous underwater vehicles.
This paper highlights recent enhancements to the Stonefish simulator, an
advanced open-source platform supporting development and testing of marine
robotics solutions. Key updates include a suite of additional sensors, such as
an event-based camera, a thermal camera, and an optical flow camera, as well
as visual light communication, support for tethered operations, improved
thruster modelling, more flexible hydrodynamics, and enhanced sonar accuracy.
These developments and an automated annotation tool significantly bolster
Stonefish's role in marine robotics research, especially in the field of
machine learning, where training data with a known ground truth is hard or
impossible to collect.
|
2502.11890
|
Revisiting Classification Taxonomy for Grammatical Errors
|
cs.CL
|
Grammatical error classification plays a crucial role in language learning
systems, but existing classification taxonomies often lack rigorous validation,
leading to inconsistencies and unreliable feedback. In this paper, we revisit
previous classification taxonomies for grammatical errors by introducing a
systematic and qualitative evaluation framework. Our approach examines four
aspects of a taxonomy, i.e., exclusivity, coverage, balance, and usability.
Then, we construct a high-quality grammatical error classification dataset
annotated with multiple classification taxonomies and evaluate them based on
our proposed evaluation framework. Our experiments reveal the drawbacks of
existing taxonomies. Our contributions aim to improve the precision and
effectiveness of error analysis, providing more understandable and actionable
feedback for language learners.
|
2502.11891
|
From Open-Vocabulary to Vocabulary-Free Semantic Segmentation
|
cs.CV
|
Open-vocabulary semantic segmentation enables models to identify novel object
categories beyond their training data. While this flexibility represents a
significant advancement, current approaches still rely on manually specified
class names as input, creating an inherent bottleneck in real-world
applications. This work proposes a Vocabulary-Free Semantic Segmentation
pipeline, eliminating the need for predefined class vocabularies. Specifically,
we address the chicken-and-egg problem where users need knowledge of all
potential objects within a scene to identify them, yet the purpose of
segmentation is often to discover these objects. The proposed approach
leverages Vision-Language Models to automatically recognize objects and
generate appropriate class names, aiming to solve the challenge of class
specification and naming quality. Through extensive experiments on several
public datasets, we highlight the crucial role of the text encoder in model
performance, particularly when the image text classes are paired with generated
descriptions. Despite the challenges introduced by the sensitivity of the
segmentation text encoder to false negatives within the class tagging process,
which adds complexity to the task, we demonstrate that our fully automated
pipeline significantly enhances vocabulary-free segmentation accuracy across
diverse real-world scenarios.
|
2502.11893
|
Rethinking Benign Overfitting in Two-Layer Neural Networks
|
cs.LG stat.ML
|
Recent theoretical studies (Kou et al., 2023; Cao et al., 2022) have revealed
a sharp phase transition from benign to harmful overfitting when the
noise-to-feature ratio exceeds a threshold, a situation common in long-tailed
data distributions where atypical data is prevalent. However, harmful
overfitting rarely happens in overparameterized neural networks. Further
experimental results suggested that memorization is necessary for achieving
near-optimal generalization error in long-tailed data distributions (Feldman &
Zhang, 2020). We argue that this discrepancy between theoretical predictions
and empirical observations arises because previous feature-noise data models
overlook the heterogeneous nature of noise across different data classes. In
this paper, we refine the feature-noise data model by incorporating
class-dependent heterogeneous noise and re-examine the overfitting phenomenon
in neural networks. Through a comprehensive analysis of the training dynamics,
we establish test loss bounds for the refined model. Our findings reveal that
neural networks can leverage "data noise", previously deemed harmful, to learn
implicit features that improve the classification accuracy for long-tailed
data. Experimental validation on both synthetic and real-world datasets
supports our theoretical results.
|
2502.11895
|
Continual Quantization-Aware Pre-Training: When to transition from
16-bit to 1.58-bit pre-training for BitNet language models?
|
cs.LG cs.AI
|
Large language models (LLMs) require immense resources for training and
inference. Quantization, a technique that reduces the precision of model
parameters, offers a promising solution for improving LLM efficiency and
sustainability. While post-training quantization methods typically achieve 4-8
bits per parameter, recent research suggests that training LLMs with 1.58 bits
per weight parameter from scratch can maintain model accuracy while greatly
reducing memory requirements and energy consumption at inference time. Here, we
investigate a training strategy for quantization-aware pre-training, where the
models are first trained with 16-bit precision and then transition into
1.58-bit quantization-aware training. Our results on 11 downstream tasks show
that this 16-to-1.58-bit training strategy is preferable to full 1.58-bit
training and leaves models closer to those which have undergone 16-bit
training. We further investigate the effects of retaining the optimizer state
at the transition point and gradually phasing in quantization strength --
finding that both techniques reduce the magnitude of loss spikes, but also
that these effects can be compensated through further training.
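The 1.58-bit weight format referenced above can be sketched with the BitNet-b1.58-style absmean quantizer: scale by the mean absolute weight, round, and clip to {-1, 0, 1}. This is a minimal forward-pass sketch; the paper's quantization-aware training details (straight-through gradients, transition scheduling) are omitted.

```python
import numpy as np

def absmean_ternary(W, eps=1e-8):
    """Per-tensor absmean quantization to ternary weights."""
    gamma = np.abs(W).mean() + eps               # per-tensor scale
    Wq = np.clip(np.round(W / gamma), -1.0, 1.0)
    return Wq, gamma                             # dequantize as Wq * gamma
```

During quantization-aware training, the forward pass would use `Wq * gamma` while gradients flow to the underlying 16-bit weights.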
|
2502.11896
|
CAMEL: Continuous Action Masking Enabled by Large Language Models for
Reinforcement Learning
|
cs.LG cs.AI
|
Reinforcement learning (RL) in continuous action spaces encounters persistent
challenges, such as inefficient exploration and convergence to suboptimal
solutions. To address these limitations, we propose CAMEL, a novel framework
integrating LLM-generated suboptimal policies into the RL training pipeline.
CAMEL leverages dynamic action masking and an adaptive epsilon-masking
mechanism to guide exploration during early training stages while gradually
enabling agents to optimize policies independently. At the core of CAMEL lies
the integration of Python-executable suboptimal policies generated by LLMs
based on environment descriptions and task objectives. Although simplistic and
hard-coded, these policies offer valuable initial guidance for RL agents. To
effectively utilize these priors, CAMEL employs masking-aware optimization to
dynamically constrain the action space based on LLM outputs. Additionally,
epsilon-masking gradually reduces reliance on LLM-generated guidance, enabling
agents to transition from constrained exploration to autonomous policy
refinement. Experimental validation on Gymnasium MuJoCo environments
demonstrates the effectiveness of CAMEL. In Hopper-v4 and Ant-v4, LLM-generated
policies significantly improve sample efficiency, achieving performance
comparable to or surpassing expert masking baselines. For Walker2d-v4, where
LLMs struggle to accurately model bipedal gait dynamics, CAMEL maintains robust
RL performance without notable degradation, highlighting the framework's
adaptability across diverse tasks. While CAMEL shows promise in enhancing
sample efficiency and mitigating convergence challenges, these issues remain
open for further research. Future work aims to generalize CAMEL to multimodal
LLMs for broader observation-action spaces and automate policy evaluation,
reducing human intervention and enhancing scalability in RL training pipelines.
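The adaptive epsilon-masking mechanism described above might be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function names, the interval-clipping rule, the radius, and the decay schedule are all assumptions.

```python
import random

def epsilon_masked_action(agent_action, llm_action, epsilon, radius=0.2):
    """With probability epsilon, clip the agent's continuous action into an
    interval around the LLM-suggested action; otherwise let it act freely.
    The clipping rule and radius are illustrative guesses, not CAMEL's exact masking."""
    if random.random() < epsilon:
        lo, hi = llm_action - radius, llm_action + radius
        return max(lo, min(hi, agent_action))
    return agent_action

def decay_epsilon(epsilon, rate=0.999, floor=0.0):
    """Gradually hand control back to the agent as training progresses."""
    return max(floor, epsilon * rate)
```

As epsilon decays toward the floor, exploration transitions from being constrained around the LLM prior to fully autonomous policy refinement.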
|
2502.11897
|
DLFR-VAE: Dynamic Latent Frame Rate VAE for Video Generation
|
cs.CV cs.AI
|
In this paper, we propose the Dynamic Latent Frame Rate VAE (DLFR-VAE), a
training-free paradigm that can make use of adaptive temporal compression in
latent space. While existing video generative models apply fixed compression
rates via pretrained VAE, we observe that real-world video content exhibits
substantial temporal non-uniformity, with high-motion segments containing more
information than static scenes. Based on this insight, DLFR-VAE dynamically
adjusts the latent frame rate according to the content complexity.
Specifically, DLFR-VAE comprises two core innovations: (1) A Dynamic Latent
Frame Rate Scheduler that partitions videos into temporal chunks and adaptively
determines optimal frame rates based on information-theoretic content
complexity, and (2) A training-free adaptation mechanism that transforms
pretrained VAE architectures into a dynamic VAE that can process features with
variable frame rates. Our simple but effective DLFR-VAE can function as a
plug-and-play module, seamlessly integrating with existing video generation
models and accelerating the video generation process.
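The idea of a content-adaptive frame-rate scheduler can be sketched as below. All names, the mean-absolute-difference complexity proxy, and the threshold are illustrative assumptions rather than the paper's information-theoretic scheduler.

```python
import numpy as np

def schedule_strides(frames, chunk_size=8, dense_stride=1, sparse_stride=4,
                     threshold=0.1):
    """Toy dynamic frame-rate scheduler: assign a temporal stride to each
    chunk from a motion-complexity proxy. High-motion chunks keep every
    frame; static chunks are subsampled."""
    strides = []
    for start in range(0, len(frames), chunk_size):
        chunk = frames[start:start + chunk_size]
        # Complexity proxy: mean absolute change between consecutive frames
        diffs = [float(np.abs(chunk[i + 1] - chunk[i]).mean())
                 for i in range(len(chunk) - 1)]
        complexity = sum(diffs) / len(diffs) if diffs else 0.0
        strides.append(dense_stride if complexity > threshold else sparse_stride)
    return strides
```

A larger stride for a static chunk means fewer latent frames to encode, which is where the temporal compression savings come from.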
|
2502.11900
|
Ansatz-free Hamiltonian Learning with Heisenberg-limited Scaling

|
quant-ph cs.IT cs.LG math.IT
|
Learning the unknown interactions that govern a quantum system is crucial for
quantum information processing, device benchmarking, and quantum sensing. The
problem, known as Hamiltonian learning, is well understood under the assumption
that interactions are local, but this assumption may not hold for arbitrary
Hamiltonians. Previous methods all require a runtime that scales as a high-order
inverse polynomial in the precision, and are thus unable to surpass the standard
quantum limit and reach the gold-standard Heisenberg-limited scaling. Whether Heisenberg-limited
Hamiltonian learning is possible without prior assumptions about the
interaction structures, a challenge we term \emph{ansatz-free Hamiltonian
learning}, remains an open question. In this work, we present a quantum
algorithm to learn arbitrary sparse Hamiltonians without any structure
constraints using only black-box queries of the system's real-time evolution
and minimal digital controls to attain Heisenberg-limited scaling in estimation
error. Our method is also resilient to state-preparation-and-measurement
errors, enhancing its practical feasibility. Moreover, we establish a
fundamental trade-off between total evolution time and quantum control on
learning arbitrary interactions, revealing the intrinsic interplay between
controllability and total evolution time complexity for any learning algorithm.
These results pave the way for further exploration into Heisenberg-limited
Hamiltonian learning in complex quantum systems under minimal assumptions,
potentially enabling new benchmarking and verification protocols.
|
2502.11901
|
Building A Proof-Oriented Programmer That Is 64% Better Than GPT-4o
Under Data Scarcity
|
cs.CL cs.PL cs.SE
|
Existing LMs struggle with proof-oriented programming due to data scarcity,
which manifests in two key ways: (1) a lack of sufficient corpora for
proof-oriented programming languages such as F*, and (2) the absence of
large-scale, project-level proof-oriented implementations that can teach the
model the intricate reasoning process required for proof-oriented
programming. We present the first study of synthetic data augmentation for
project-level proof-oriented programming, covering both generation and repair.
Our method addresses data scarcity by synthesizing basic proof-oriented
programming problems to build proficiency in the language, incorporating
diverse coding data to elicit reasoning capability, and creating new proof and
repair data within existing repositories. This approach enables language models
to both synthesize and repair proofs for function- and repository-level code.
We show that our fine-tuned 14B-parameter model, PoPilot, exceeds the
performance of models that outperform GPT-4o in project-level proof-oriented
programming by a 64% relative margin, and improves GPT-4o's performance by 54%
when repairing its outputs, compared with GPT-4o's self-repair.
|
2502.11903
|
MMRC: A Large-Scale Benchmark for Understanding Multimodal Large
Language Model in Real-World Conversation
|
cs.CL
|
Recent multimodal large language models (MLLMs) have demonstrated significant
potential in open-ended conversation, generating more accurate and personalized
responses. However, their abilities to memorize, recall, and reason in
sustained interactions within real-world scenarios remain underexplored. This
paper introduces MMRC, a Multi-Modal Real-world Conversation benchmark for
evaluating six core open-ended abilities of MLLMs: information extraction,
multi-turn reasoning, information update, image management, memory recall, and
answer refusal. With data collected from real-world scenarios, MMRC comprises
5,120 conversations and 28,720 corresponding manually labeled questions, posing
a significant challenge to existing MLLMs. Evaluations on 20 MLLMs in MMRC
indicate an accuracy drop during open-ended interactions. We identify four
common failure patterns: long-term memory degradation, inadequacies in updating
factual knowledge, error propagation from accumulated assumptions, and a reluctance
to say no. To mitigate these issues, we propose a simple yet effective
NOTE-TAKING strategy, which can record key information from the conversation
and remind the model during its responses, enhancing conversational
capabilities. Experiments across six MLLMs demonstrate significant performance
improvements.
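A note-taking strategy of this kind might look like the following minimal sketch: record key facts from each turn, then remind the model of them before it answers. The function names and prompt format are assumptions for illustration, not the benchmark's exact implementation.

```python
def record_note(notes, key_fact):
    """Append a key fact extracted from the latest turn to the running notes."""
    notes.append(key_fact)
    return notes

def build_prompt(notes, user_message):
    """Remind the model of recorded notes before it answers the next turn."""
    reminder = "\n".join(f"- {note}" for note in notes)
    return f"Conversation notes:\n{reminder}\n\nUser: {user_message}"
```

Even this simple mechanism addresses long-term memory degradation by keeping earlier facts in the model's context at every turn.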
|
2502.11904
|
A formal implementation of Behavior Trees to act in robotics
|
cs.RO
|
Behavior Trees (BTs) are becoming popular as the acting component of
autonomous robotic systems. We propose to give BTs a formal semantics by
translating them into a formal language, which enables us to verify
programs written with BTs, as well as to perform runtime verification while these BTs
execute. This allows us to formally verify BT correctness without requiring BT
programmers to master a formal language and without compromising the most valuable
features of BTs: modularity, flexibility, and reusability. We present the formal
framework we use: Fiacre, its language, and the produced TTS model; Tina, its
model-checking tools; and Hippo, its runtime verification engine. We then show
how the translation from BTs to Fiacre is done automatically, the types of formal
LTL and CTL properties we can check offline, and how to execute the formal model
online in place of a regular BT engine. We illustrate our approach on two
robotics applications and show how BTs could benefit from other features
available in the Fiacre formal framework (state variables, time, etc.).
|
2502.11909
|
Neural Guided Diffusion Bridges
|
stat.ML cs.LG
|
We propose a novel method for simulating conditioned diffusion processes
(diffusion bridges) in Euclidean spaces. By training a neural network to
approximate bridge dynamics, our approach eliminates the need for
computationally intensive Markov Chain Monte Carlo (MCMC) methods or
reverse-process modeling. Compared to existing methods, it offers greater
robustness across various diffusion specifications and conditioning scenarios.
This applies in particular to rare events and multimodal distributions, which
pose challenges for score-learning- and MCMC-based approaches. We propose a
flexible variational family for approximating the diffusion bridge path measure
which is partially specified by a neural network. Once trained, it enables
efficient independent sampling at a cost comparable to sampling the
unconditioned (forward) process.
|
2502.11910
|
Adversarial Alignment for LLMs Requires Simpler, Reproducible, and More
Measurable Objectives
|
cs.LG
|
Misaligned research objectives have considerably hindered progress in
adversarial robustness research over the past decade. For instance, an
extensive focus on optimizing target metrics, while neglecting rigorous
standardized evaluation, has led researchers to pursue ad-hoc heuristic
defenses that were seemingly effective. Yet, most of these were exposed as
flawed by subsequent evaluations, ultimately contributing little measurable
progress to the field. In this position paper, we illustrate that current
research on the robustness of large language models (LLMs) risks repeating past
patterns with potentially worsened real-world implications. To address this, we
argue that realigned objectives are necessary for meaningful progress in
adversarial alignment. To this end, we build on established cybersecurity
taxonomy to formally define differences between past and emerging threat models
that apply to LLMs. Using this framework, we illustrate that progress requires
disentangling adversarial alignment into addressable sub-problems and returning
to core academic principles, such as measurability, reproducibility, and
comparability. Although the field presents significant challenges, the fresh
start on adversarial robustness offers the unique opportunity to build on past
experience while avoiding previous mistakes.
|