| id | title | categories | abstract |
|---|---|---|---|
2502.04143
|
A data-driven two-microphone method for in-situ sound absorption
measurements
|
cs.SD cs.LG eess.AS
|
This work presents a data-driven approach to estimating the sound absorption
coefficient of an infinite porous slab using a neural network and a
two-microphone measurement on a finite porous sample. A 1D-convolutional
network predicts the sound absorption coefficient from the complex-valued
transfer function between the sound pressure measured at the two microphone
positions. The network is trained and validated with numerical data generated
by a boundary element model using the Delany-Bazley-Miki model, demonstrating
accurate predictions for various numerical samples. The method is
experimentally validated with baffled rectangular samples of a fibrous
material, where sample size and source height are varied. The results show that
the neural network offers the possibility to reliably predict the in-situ sound
absorption of a porous material using the traditional two-microphone method as
if the sample were infinite. The normal-incidence sound absorption coefficient
obtained by the network compares well with that obtained theoretically and in
an impedance tube. The proposed method has promising perspectives for
estimating the sound absorption coefficient of acoustic materials after
installation and in realistic operational conditions.
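As an illustration of the measurement underlying this approach, the complex transfer function between two microphone signals can be estimated from their spectra. This is a minimal sketch of the standard two-microphone (H1-estimator) formulation, not the paper's code; all names and parameters are illustrative.

```python
import numpy as np

def transfer_function(p1, p2):
    """Complex transfer function H12 between two microphone signals,
    estimated as cross-spectrum over the reference auto-spectrum
    (the classical H1 estimator)."""
    P1 = np.fft.rfft(p1)
    P2 = np.fft.rfft(p2)
    return (P2 * np.conj(P1)) / (P1 * np.conj(P1))

# Sanity check: identical signals give H12 = 1 at every frequency bin.
x = np.random.default_rng(0).standard_normal(1024)
H = transfer_function(x, x)
```

A learned model such as the one described above would consume the real and imaginary parts of `H` across frequency as its input features.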
|
2502.04144
|
HD-EPIC: A Highly-Detailed Egocentric Video Dataset
|
cs.CV
|
We present a validation dataset of newly-collected kitchen-based egocentric
videos, manually annotated with highly detailed and interconnected ground-truth
labels covering: recipe steps, fine-grained actions, ingredients with
nutritional values, moving objects, and audio annotations. Importantly, all
annotations are grounded in 3D through digital twinning of the scene, fixtures,
object locations, and primed with gaze. Footage is collected from unscripted
recordings in diverse home environments, making HD-EPIC the first dataset
collected in-the-wild but with detailed annotations matching those in
controlled lab environments.
We show the potential of our highly-detailed annotations through a
challenging VQA benchmark of 26K questions assessing the capability to
recognise recipes, ingredients, nutrition, fine-grained actions, 3D perception,
object motion, and gaze direction. The powerful long-context Gemini Pro only
achieves 38.5% on this benchmark, showcasing its difficulty and highlighting
shortcomings in current VLMs. We additionally assess action recognition, sound
recognition, and long-term video-object segmentation on HD-EPIC.
HD-EPIC is 41 hours of video in 9 kitchens with digital twins of 413 kitchen
fixtures, capturing 69 recipes, 59K fine-grained actions, 51K audio events, 20K
object movements and 37K object masks lifted to 3D. On average, we have 263
annotations per minute of our unscripted videos.
|
2502.04153
|
UltraIF: Advancing Instruction Following from the Wild
|
cs.CL cs.AI
|
Instruction following has made modern large language models (LLMs) helpful
assistants. However, the key to taming LLMs on complex instructions remains
mysterious, as there are large gaps between models trained by the open-source
community and those trained by leading companies. To bridge the gap, we propose
UltraIF, a simple and scalable approach for building LLMs that can follow
complex instructions with open-source data. UltraIF first decomposes real-world
user prompts into simpler queries, constraints, and corresponding evaluation
questions for the constraints. Then, we train an UltraComposer to compose
constraint-associated prompts with evaluation questions. This prompt composer
allows us to synthesize complicated instructions as well as filter responses
with evaluation questions. In our experiment, for the first time, we
successfully align LLaMA-3.1-8B-Base to catch up with its instruct version on 5
instruction-following benchmarks without any benchmark information, using only
an 8B model as the response generator and evaluator. The aligned model also achieved
competitive scores on other benchmarks. Moreover, we show that UltraIF
can further improve LLaMA-3.1-8B-Instruct through self-alignment, motivating
broader use cases for the method. Our code will be available at
https://github.com/kkk-an/UltraIF.
|
2502.04161
|
YOLOv4: A Breakthrough in Real-Time Object Detection
|
cs.CV
|
YOLOv4 achieved the best performance on the COCO dataset by combining
advanced techniques for regression (bounding box positioning) and
classification (object class identification) using the Darknet framework. To
enhance accuracy and adaptability, it employs Cross mini-Batch Normalization,
Cross-Stage-Partial-connections, Self-Adversarial-Training, and
Weighted-Residual-Connections, as well as CIoU loss, Mosaic data augmentation,
and DropBlock regularization. With Mosaic augmentation and multi-resolution
training, YOLOv4 achieves superior detection in diverse scenarios, attaining
43.5\% AP (65.7\% AP50) on a Tesla V100 at roughly 65 frames per second,
ensuring efficiency, affordability, and adaptability for real-world
environments.
|
2502.04162
|
A Pseudo Markov-Chain Model and Time-Elapsed Measures of Mobility from
Collective Data
|
stat.AP cs.LG cs.SI stat.ML
|
In this paper we develop a pseudo Markov-chain model to understand
time-elapsed flows, over multiple intervals, from time- and space-aggregated
collective inter-location trip data given as a time series. Building on the
model, we develop measures of mobility that parallel those known for individual
mobility data, such as the radius of gyration. We apply these measures to the
NetMob 2024 Data Challenge data, and obtain interesting results that are
consistent with published statistics and commuting patterns in cities. Besides
building a new framework, we foresee applications of this approach to an
improved understanding of human mobility in the context of environmental
changes and sustainable development.
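For reference, the radius of gyration mentioned above, the standard individual-mobility measure that the collective measures parallel, can be computed from a set of visited locations. A minimal sketch with illustrative coordinates:

```python
import numpy as np

def radius_of_gyration(points):
    """Root-mean-square distance of visited locations from their
    centroid: the standard radius-of-gyration definition used in
    individual human-mobility studies."""
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)
    return np.sqrt(((pts - center) ** 2).sum(axis=1).mean())

# Four visits at the corners of a unit square: the centroid is
# (0.5, 0.5) and every point lies at distance sqrt(0.5) from it.
rg = radius_of_gyration([(0, 0), (0, 1), (1, 0), (1, 1)])
```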
|
2502.04163
|
Multi-task Online Learning for Probabilistic Load Forecasting
|
stat.ML cs.LG
|
Load forecasting is essential for the efficient, reliable, and cost-effective
management of power systems. Load forecasting performance can be improved by
learning the similarities among multiple entities (e.g., regions, buildings).
Techniques based on multi-task learning obtain predictions by leveraging
consumption patterns from the historical load demand of multiple entities and
their relationships. However, existing techniques cannot effectively assess
inherent uncertainties in load demand or account for dynamic changes in
consumption patterns. This paper proposes a multi-task learning technique for
online and probabilistic load forecasting. This technique provides accurate
probabilistic predictions for the loads of multiple entities by leveraging
their dynamic similarities. The method's performance is evaluated using
datasets that register the load demand of multiple entities and contain diverse
and dynamic consumption patterns. The experimental results show that the
proposed method can significantly enhance the effectiveness of current
multi-task learning approaches across a wide variety of load consumption
scenarios.
|
2502.04164
|
Efficient Distributed Optimization under Heavy-Tailed Noise
|
cs.LG
|
Distributed optimization has become the default training paradigm in modern
machine learning due to the growing scale of models and datasets. To mitigate
communication overhead, local updates are often applied before global
aggregation, resulting in a nested optimization approach with inner and outer
steps. However, heavy-tailed stochastic gradient noise remains a significant
challenge, particularly in attention-based models, hindering effective
training. In this work, we propose TailOPT, an efficient framework designed to
address heavy-tailed noise by leveraging adaptive optimization or clipping
techniques. We establish convergence guarantees for the TailOPT framework under
heavy-tailed noise with potentially unbounded gradient variance and local
updates. Among its variants, we highlight a memory and communication efficient
instantiation which we call $Bi^2Clip$, which performs coordinate-wise clipping
at both the inner and outer optimizers, achieving adaptive-like performance
(e.g., Adam) without the cost of maintaining or transmitting additional
gradient statistics. Empirically, TailOPT, including $Bi^2Clip$, demonstrates
superior performance on several language tasks and models, outperforming
state-of-the-art methods.
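The coordinate-wise clipping idea can be sketched for a single nested step: clip each worker's local gradient entrywise, average, then clip the aggregate again before the outer update. This is a heavily simplified illustration of the general technique; the thresholds, update rule, and function names are assumptions, not the paper's $Bi^2Clip$ algorithm.

```python
import numpy as np

def coord_clip(g, tau):
    """Coordinate-wise clipping: each entry of g is limited to
    [-tau, tau]. Unlike adaptive optimizers, no per-coordinate
    gradient statistics need to be stored or transmitted."""
    return np.clip(g, -tau, tau)

def outer_step(params, local_grads, tau_in=1.0, tau_out=0.5, lr=0.1):
    """One nested step: clip each worker's gradient (inner), average
    across workers, clip the aggregate (outer), then update. All
    constants here are illustrative choices."""
    clipped = [coord_clip(g, tau_in) for g in local_grads]
    agg = coord_clip(np.mean(clipped, axis=0), tau_out)
    return params - lr * agg

# Two workers, each with one heavy-tailed (outlier) coordinate.
p = outer_step(np.zeros(3), [np.array([10.0, -0.2, 0.3]),
                             np.array([-0.4, 8.0, 0.1])])
```

The entrywise clip tames the outlier coordinates (10.0 and 8.0) without rescaling the well-behaved ones, which is the appeal over global norm clipping.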
|
2502.04167
|
Making Sense of Touch: Unsupervised Shapelet Learning in Bag-of-words
Sense
|
cs.LG cs.RO
|
This paper introduces NN-STNE, a neural network using t-distributed
stochastic neighbor embedding (t-SNE) as a hidden layer to reduce input
dimensions by mapping long time-series data into shapelet membership
probabilities. A Gaussian kernel-based mean square error preserves local data
structure, while K-means initializes shapelet candidates due to the non-convex
optimization challenge. Unlike existing methods, our approach uses t-SNE to
address crowding in low-dimensional space and applies L1-norm regularization to
optimize shapelet length. Evaluations on the UCR dataset and an electrical
component manipulation task, such as switching a component on, demonstrate
improved clustering accuracy over state-of-the-art feature-learning methods in
robotics.
|
2502.04170
|
From Configuration-Space Clearance to Feature-Space Margin: Sample
Complexity in Learning-Based Collision Detection
|
cs.RO
|
Motion planning is a central challenge in robotics, with learning-based
approaches gaining significant attention in recent years. Our work focuses on a
specific aspect of these approaches: using machine-learning techniques,
particularly Support Vector Machines (SVM), to evaluate whether robot
configurations are collision free, an operation termed ``collision detection''.
Despite the growing popularity of these methods, there is a lack of theory
supporting their efficiency and prediction accuracy. This is in stark contrast
to the rich theoretical results of machine-learning methods in general and of
SVMs in particular. Our work bridges this gap by analyzing the sample
complexity of an SVM classifier for learning-based collision detection in
motion planning. We bound the number of samples needed to achieve a specified
accuracy at a given confidence level. This result is stated in terms relevant
to robot motion-planning such as the system's clearance. Building on these
theoretical results, we propose a collision-detection algorithm that can also
provide statistical guarantees on the algorithm's error in classifying robot
configurations as collision-free or not.
|
2502.04172
|
Archetypal Analysis for Binary Data
|
cs.LG cs.AI stat.ML
|
Archetypal analysis (AA) is a matrix decomposition method that identifies
distinct patterns using convex combinations of the data points, denoted
archetypes, with each data point in turn reconstructed as a convex combination
of the archetypes. AA thereby forms a polytope representing trade-offs of the
distinct aspects in the data. Most existing methods for AA are designed for
continuous data and do not exploit the structure of the data distribution. In
this paper, we propose two new optimization frameworks for archetypal analysis
for binary data. i) A second order approximation of the AA likelihood based on
the Bernoulli distribution with efficient closed-form updates using an active
set procedure for learning the convex combinations defining the archetypes, and
a sequential minimal optimization strategy for learning the observation
specific reconstructions. ii) A Bernoulli likelihood based version of the
principal convex hull analysis (PCHA) algorithm originally developed for least
squares optimization. We compare these approaches with the only existing binary
AA procedure relying on multiplicative updates and demonstrate their
superiority on both synthetic and real binary data. Notably, the proposed
optimization frameworks for AA can easily be extended to other data
distributions providing generic efficient optimization frameworks for AA based
on tailored likelihood functions reflecting the underlying data distribution.
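The two-sided convexity structure described above can be shown in a few lines: archetypes are convex combinations of data points, and each point is reconstructed as a convex combination of archetypes. The matrices below are hand-picked toy values, not outputs of the proposed optimization frameworks.

```python
import numpy as np

# Three 2D data points (one per row).
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])

# C: columns lie on the simplex; archetypes A = C^T X are convex
# combinations of data points (here, exactly the first two points).
C = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
A = C.T @ X

# S: columns lie on the simplex; each point is reconstructed as a
# convex combination of the archetypes.
S = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5]])
X_hat = S.T @ A
```

Points inside the archetype polytope are reconstructed exactly; the third point falls outside it and is projected onto the segment between the two archetypes, illustrating the trade-off polytope the abstract describes.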
|
2502.04173
|
Lexical Substitution is not Synonym Substitution: On the Importance of
Producing Contextually Relevant Word Substitutes
|
cs.CL
|
Lexical Substitution is the task of replacing a single word in a sentence
with a similar one. The substitute should ideally not only be synonymous but
should also fit well into the surrounding context of the target word, while
preserving the sentence's grammatical structure. Recent advances in
Lexical Substitution have leveraged the masked token prediction task of
Pre-trained Language Models to generate replacements for a given word in a
sentence. With this technique, we introduce ConCat, a simple augmented approach
which utilizes the original sentence to bolster contextual information sent to
the model. Compared to existing approaches, it proves to be very effective in
guiding the model to make contextually relevant predictions for the target
word. Our study includes a quantitative evaluation, measured via sentence
similarity and task performance. In addition, we conduct a qualitative human
analysis to validate that users prefer the substitutions proposed by our
method, as opposed to previous methods. Finally, we test our approach on the
prevailing benchmark for Lexical Substitution, CoInCo, revealing potential
pitfalls of the benchmark. These insights serve as the foundation for a
critical discussion on the way in which Lexical Substitution is evaluated.
|
2502.04174
|
Dense Fixed-Wing Swarming using Receding-Horizon NMPC
|
cs.RO cs.SY eess.SY
|
In this paper, we present an approach for controlling a team of agile
fixed-wing aerial vehicles in close proximity to one another. Our approach
relies on receding-horizon nonlinear model predictive control (NMPC) to plan
maneuvers across an expanded flight envelope to enable inter-agent collision
avoidance. To facilitate robust collision avoidance and characterize the
likelihood of inter-agent collisions, we compute a statistical bound on the
probability of the system leaving a tube around the planned nominal trajectory.
Finally, we propose a metric for evaluating highly dynamic swarms and use this
metric to evaluate our approach. We successfully demonstrated our approach
through both simulation and hardware experiments, and to our knowledge, this
is the first time close-quarters swarming has been achieved with physical
aerobatic fixed-wing vehicles.
|
2502.04176
|
MRAMG-Bench: A BeyondText Benchmark for Multimodal Retrieval-Augmented
Multimodal Generation
|
cs.LG cs.IR
|
Recent advancements in Retrieval-Augmented Generation (RAG) have shown
remarkable performance in enhancing response accuracy and relevance by
integrating external knowledge into generative models. However, existing RAG
methods primarily focus on providing text-only answers, even in multimodal
retrieval-augmented generation scenarios. In this work, we introduce the
Multimodal Retrieval-Augmented Multimodal Generation (MRAMG) task, which aims
to generate answers that combine both text and images, fully leveraging the
multimodal data within a corpus. Despite the importance of this task, there is
a notable absence of a comprehensive benchmark to effectively evaluate MRAMG
performance. To bridge this gap, we introduce the MRAMG-Bench, a carefully
curated, human-annotated dataset comprising 4,346 documents, 14,190 images, and
4,800 QA pairs, sourced from three categories: Web Data, Academic Papers, and
Lifestyle. The dataset incorporates diverse difficulty levels and complex
multi-image scenarios, providing a robust foundation for evaluating multimodal
generation tasks. To facilitate rigorous evaluation, our MRAMG-Bench
incorporates a comprehensive suite of both statistical and LLM-based metrics,
enabling a thorough analysis of the performance of popular generative models in
the MRAMG task. Besides, we propose an efficient multimodal answer generation
framework that leverages both LLMs and MLLMs to generate multimodal responses.
Our datasets are available at: https://huggingface.co/MRAMG.
|
2502.04180
|
Multi-agent Architecture Search via Agentic Supernet
|
cs.LG cs.CL cs.MA
|
Large Language Model (LLM)-empowered multi-agent systems extend the cognitive
boundaries of individual agents through disciplined collaboration and
interaction, while constructing these systems often requires labor-intensive
manual designs. Despite the availability of methods to automate the design of
agentic workflows, they typically seek to identify a static, complex,
one-size-fits-all system, which, however, fails to dynamically allocate
inference resources based on the difficulty and domain of each query. To
address this challenge, we shift away from the pursuit of a monolithic agentic
system, instead optimizing the \textbf{agentic supernet}, a probabilistic and
continuous distribution of agentic architectures. We introduce MaAS, an
automated framework that samples query-dependent agentic systems from the
supernet, delivering high-quality solutions and tailored resource allocation
(\textit{e.g.}, LLM calls, tool calls, token cost). Comprehensive evaluation
across six benchmarks demonstrates that MaAS \textbf{(I)} requires only
$6\sim45\%$ of the inference costs of existing handcrafted or automated
multi-agent systems, \textbf{(II)} surpasses them by $0.54\%\sim11.82\%$, and
\textbf{(III)} enjoys superior cross-dataset and cross-LLM-backbone
transferability.
|
2502.04190
|
Compliant Beaded-String Jamming For Variable Stiffness Anthropomorphic
Fingers
|
cs.RO
|
Achieving human-like dexterity in robotic grippers remains an open challenge,
particularly in ensuring robust manipulation in uncertain environments. Soft
robotic hands try to address this by leveraging passive compliance, a
characteristic that is crucial to the adaptability of the human hand, to
achieve more robust manipulation while reducing reliance on high-resolution
sensing and complex control. Further improvements in terms of precision and
postural stability in manipulation tasks are achieved through the integration
of variable stiffness mechanisms, but these tend to lack residual compliance,
be bulky and have slow response times. To address these limitations, this work
introduces a Compliant Joint Jamming mechanism for anthropomorphic fingers that
exhibits passive residual compliance and adjustable stiffness, while achieving
a range of motion in line with that of human interphalangeal joints. The
stiffness range provided by the mechanism is controllable from 0.48 Nm/rad to
1.95 Nm/rad (a 4x increase). Repeatability, hysteresis and stiffness were also
characterized as a function of the jamming force. To demonstrate the importance
of the passive residual compliance afforded by the proposed system, a
peg-in-hole task was conducted, which showed a 60% higher success rate for a
gripper integrating our joint design when compared to a rigid one.
|
2502.04192
|
PixFoundation: Are We Heading in the Right Direction with Pixel-level
Vision Foundation Models?
|
cs.CV
|
Multiple works have emerged to push the boundaries on multi-modal large
language models (MLLMs) towards pixel-level understanding. Such approaches have
shown strong performance on benchmarks for referring expression segmentation
and grounded conversation generation. The current trend in pixel-level MLLMs is
to train with pixel-level grounding supervision on large-scale labelled data.
However, we show that such MLLMs, when evaluated on recent challenging
vision-centric benchmarks, exhibit a weak ability in visual question answering.
Surprisingly, some of these methods even downgrade the grounding ability of
MLLMs that were never trained with such supervision. In this work, we propose
two novel challenging benchmarks and show that MLLMs without pixel-level
grounding supervision can outperform the state of the art in such tasks when
evaluating both the pixel-level grounding and visual question answering. We
propose simple baselines to extract the grounding information that can be
plugged into any MLLM, which we call PixFoundation. More importantly, we
study the research question of ``When does grounding emerge in MLLMs that are
not trained with pixel-level grounding supervision?'' We show that grounding
can coincide with object parts or location/appearance information. Code
repository is at https://github.com/MSiam/PixFoundation/.
|
2502.04194
|
The Best Instruction-Tuning Data are Those That Fit
|
cs.CL cs.AI cs.LG
|
High-quality supervised fine-tuning (SFT) data are crucial for eliciting
strong capabilities from pretrained large language models (LLMs). Typically,
instructions are paired with multiple responses sampled from other LLMs, which
are often out of the distribution of the target model to be fine-tuned. This,
at scale, can lead to diminishing returns and even hurt the models' performance
and robustness. We propose **GRAPE**, a novel SFT framework that accounts for
the unique characteristics of the target model. For each instruction, it
gathers responses from various LLMs and selects the one with the highest
probability measured by the target model, indicating that it aligns most
closely with the target model's pretrained distribution; it then proceeds with
standard SFT training.
We first evaluate GRAPE with a controlled experiment, where we sample various
solutions for each question in UltraInteract from multiple models and fine-tune
commonly used LMs like LLaMA3.1-8B, Mistral-7B, and Qwen2.5-7B on
GRAPE-selected data. GRAPE significantly outperforms strong baselines,
including distilling from the strongest model with an absolute gain of up to
13.8%, averaged across benchmarks, and training on 3x more data with a maximum
performance improvement of 17.3%. GRAPE's strong performance generalizes to
realistic settings. We experiment with the post-training data used for Tulu3
and Olmo-2. GRAPE outperforms strong baselines trained on 4.5 times more data
by 6.1% and a state-of-the-art data selection approach by 3% on average
performance. Remarkably, using 1/3 of the data and half the number of epochs,
GRAPE enables LLaMA3.1-8B to surpass the performance of Tulu3-SFT by 3.5%.
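The selection step at the heart of this framework, keeping for each instruction the response that the target model assigns the highest probability, can be sketched as follows. The scorer here is a toy stand-in for the target model's sequence log-likelihood; names are illustrative.

```python
def select_response(candidates, log_prob):
    """GRAPE-style selection: among responses gathered from various
    LLMs, keep the one the *target* model scores highest. `log_prob`
    stands in for the target model's sequence log-likelihood."""
    return max(candidates, key=log_prob)

# Toy stand-in scores, as if computed by the target model: the
# higher (less negative) log-probability wins.
scores = {"a long verbose answer": -12.0, "concise answer": -3.5}
best = select_response(list(scores), scores.get)
```

In the real pipeline the selected (instruction, response) pairs then go through standard SFT; only the selection criterion differs from conventional distillation.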
|
2502.04195
|
Integration of Prior Knowledge into Direct Learning for Safe Control of
Linear Systems
|
eess.SY cs.SY
|
This paper integrates prior knowledge into direct learning of safe
controllers for linear uncertain systems under disturbances. To this end, we
characterize the set of all closed-loop systems that can be explained by
available prior knowledge of the system model and the disturbances. We leverage
matrix zonotopes for data-based characterization of closed-loop systems and
show that the explainability of closed-loop systems by prior knowledge can be
formalized by adding an equality conformity constraint to the matrix zonotope.
We then leverage the resulting constraint matrix zonotope and design safe
controllers that conform with both data and prior knowledge. This is achieved
by ensuring the inclusion of a constrained zonotope of all possible next states
in a $\lambda$-scaled level set of the safe set. We consider both polytope and
zonotope safe sets and provide set inclusion conditions using linear
programming.
|
2502.04199
|
Expanding Training Data for Endoscopic Phenotyping of Eosinophilic
Esophagitis
|
eess.IV cs.CV
|
Eosinophilic esophagitis (EoE) is a chronic esophageal disorder marked by
eosinophil-dominated inflammation. Diagnosing EoE usually involves endoscopic
inspection of the esophageal mucosa and obtaining esophageal biopsies for
histologic confirmation. Recent advances have seen AI-assisted endoscopic
imaging, guided by the EREFS system, emerge as a potential alternative to
reduce reliance on invasive histological assessments. Despite these
advancements, significant challenges persist due to the limited availability of
data for training AI models - a common issue even in the development of AI for
more prevalent diseases. This study seeks to improve the performance of deep
learning-based EoE phenotype classification by augmenting our training data
with a diverse set of images from online platforms, public datasets, and
electronic textbooks, increasing our dataset from 435 to 7,050 images. We
utilized the Data-efficient Image Transformer for image classification and
incorporated attention map visualizations to boost interpretability. The
findings show that our expanded dataset and model enhancements improved
diagnostic accuracy, robustness, and comprehensive analysis, enhancing patient
outcomes.
|
2502.04201
|
Safeguarding connected autonomous vehicle communication: Protocols,
intra- and inter-vehicular attacks and defenses
|
cs.CR cs.CV cs.NI
|
The advancements in autonomous driving technology, coupled with the growing
interest from automotive manufacturers and tech companies, suggest a rising
adoption of Connected Autonomous Vehicles (CAVs) in the near future. Despite
some evidence of higher accident rates in AVs, these incidents tend to result
in less severe injuries compared to traditional vehicles due to cooperative
safety measures. However, the increased complexity of CAV systems exposes them
to significant security vulnerabilities, potentially compromising their
performance and communication integrity. This paper contributes by presenting a
detailed analysis of existing security frameworks and protocols, focusing on
intra- and inter-vehicle communications. We systematically evaluate the
effectiveness of these frameworks in addressing known vulnerabilities and
propose a set of best practices for enhancing CAV communication security. The
paper also provides a comprehensive taxonomy of attack vectors in CAV
ecosystems and suggests future research directions for designing more robust
security mechanisms. Our key contributions include the development of a new
classification system for CAV security threats, the proposal of practical
security protocols, and the introduction of use cases that demonstrate how
these protocols can be integrated into real-world CAV applications. These
insights are crucial for advancing secure CAV adoption and ensuring the safe
integration of autonomous vehicles into intelligent transportation systems.
|
2502.04204
|
"Short-length" Adversarial Training Helps LLMs Defend "Long-length"
Jailbreak Attacks: Theoretical and Empirical Evidence
|
cs.LG cs.CR stat.ML
|
Jailbreak attacks against large language models (LLMs) aim to induce harmful
behaviors in LLMs through carefully crafted adversarial prompts. To mitigate
attacks, one way is to perform adversarial training (AT)-based alignment, i.e.,
training LLMs on some of the most adversarial prompts to help them learn how to
behave safely under attacks. During AT, the length of adversarial prompts plays
a critical role in the robustness of aligned LLMs. This paper focuses on
adversarial suffix jailbreak attacks and unveils that to defend against a
jailbreak attack with an adversarial suffix of length $\Theta(M)$, it is enough
to align LLMs on prompts with adversarial suffixes of length
$\Theta(\sqrt{M})$. Theoretically, we analyze the adversarial in-context
learning of linear transformers on linear regression tasks and prove a robust
generalization bound for trained transformers. The bound depends on the term
$\Theta(\sqrt{M_{\text{test}}}/M_{\text{train}})$, where $M_{\text{train}}$ and
$M_{\text{test}}$ are the number of adversarially perturbed in-context samples
during training and testing. Empirically, we conduct AT on popular open-source
LLMs and evaluate their robustness against jailbreak attacks of different
adversarial suffix lengths. Results confirm a positive correlation between the
attack success rate and the ratio of the square root of the adversarial
suffix length at attack time to the suffix length used during AT. Our findings
show that it is
practical to defend "long-length" jailbreak attacks via efficient
"short-length" AT. The code is available at https://github.com/fshp971/adv-icl.
|
2502.04206
|
Ensuring Reliability via Hyperparameter Selection: Review and Advances
|
cs.LG cs.IT math.IT
|
Hyperparameter selection is a critical step in the deployment of artificial
intelligence (AI) models, particularly in the current era of foundational,
pre-trained models. By framing hyperparameter selection as a multiple
hypothesis testing problem, recent research has shown that it is possible to
provide statistical guarantees on population risk measures attained by the
selected hyperparameter. This paper reviews the Learn-Then-Test (LTT)
framework, which formalizes this approach, and explores several extensions
tailored to engineering-relevant scenarios. These extensions encompass
different risk measures and statistical guarantees, multi-objective
optimization, the incorporation of prior knowledge and dependency structures
into the hyperparameter selection process, as well as adaptivity. The paper
also includes illustrative applications for communication systems.
|
2502.04207
|
Enhanced Feature-based Image Stitching for Endoscopic Videos in
Pediatric Eosinophilic Esophagitis
|
cs.CV
|
Video endoscopy represents a major advance in the investigation of
gastrointestinal diseases. Reviewing endoscopy videos often involves frequent
adjustments and reorientations to piece together a complete view, which can be
both time-consuming and prone to errors. Image stitching techniques address
this issue by providing a continuous and complete visualization of the examined
area. However, endoscopic images, particularly those of the esophagus, present
unique challenges. The smooth surface, lack of distinct feature points, and
non-horizontal orientation complicate the stitching process, often rendering
traditional feature-based methods ineffective for these types of images.
In this paper, we propose a novel preprocessing pipeline designed to enhance
endoscopic image stitching through advanced computational techniques. Our
approach converts endoscopic video data into continuous 2D images by following
four key steps: (1) keyframe selection, (2) image rotation adjustment to
correct distortions, (3) surface unwrapping using polar coordinate
transformation to generate a flat image, and (4) feature point matching
enhanced by Adaptive Histogram Equalization for improved feature detection. We
evaluate stitching quality through the assessment of valid feature point match
pairs. Experiments conducted on 20 pediatric endoscopy videos demonstrate that
our method significantly improves image alignment and stitching quality
compared to traditional techniques, laying a robust foundation for more
effective panoramic image creation.
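Step (3) of the pipeline, surface unwrapping via a polar-coordinate transformation, can be sketched generically: sample the image on a grid of radii and angles so that the circular view becomes a flat strip. Grid sizes and nearest-neighbour sampling are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def unwrap_polar(img, center, n_r=64, n_theta=128):
    """Flatten a circular (endoscopic-style) image by sampling it on a
    polar grid: rows index radius, columns index angle. Uses
    nearest-neighbour sampling for simplicity."""
    h, w = img.shape
    cy, cx = center
    r_max = min(cy, cx, h - 1 - cy, w - 1 - cx)
    r = np.linspace(0, r_max, n_r)[:, None]
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)[None, :]
    ys = np.clip(np.round(cy + r * np.sin(theta)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + r * np.cos(theta)).astype(int), 0, w - 1)
    return img[ys, xs]

# A constant image stays constant after unwrapping.
flat = unwrap_polar(np.full((65, 65), 7.0), center=(32, 32))
```

Feature matching (step 4) then operates on the flat strip, where conventional descriptors behave far better than on the original tubular view.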
|
2502.04210
|
Algorithmic causal structure emerging through compression
|
cs.LG cs.AI cs.CC cs.IT math.IT
|
We explore the relationship between causality, symmetry, and compression. We
build on and generalize the known connection between learning and compression
to a setting where causal models are not identifiable. We propose a framework
where causality emerges as a consequence of compressing data across multiple
environments. We define algorithmic causality as an alternative definition of
causality when traditional assumptions for causal identifiability do not hold.
We demonstrate how algorithmic causal and symmetric structures can emerge from
minimizing upper bounds on Kolmogorov complexity, without knowledge of
intervention targets. We hypothesize that these insights may also provide a
novel perspective on the emergence of causality in machine learning models,
such as large language models, where causal relationships may not be explicitly
identifiable.
|
2502.04218
|
Sports and Women's Sports: Gender Bias in Text Generation with Olympic
Data
|
cs.CL
|
Large Language Models (LLMs) have been shown to be biased in prior work, as
they generate text that is in line with stereotypical views of the world or
that is not representative of the viewpoints and values of historically
marginalized demographic groups. In this work, we propose using data from
parallel men's and women's events at the Olympic Games to investigate different
forms of gender bias in language models. We define three metrics to measure
bias, and find that models are consistently biased against women when the
gender is ambiguous in the prompt. In this case, the model frequently retrieves
only the results of the men's event with or without acknowledging them as such,
revealing pervasive gender bias in LLMs in the context of athletics.
|
2502.04219
|
NLP-Based .NET CLR Event Logs Analyzer
|
cs.SE cs.AI
|
In this paper, we present a tool for analyzing .NET CLR event logs based on a
novel method inspired by Natural Language Processing (NLP) approaches. Our
research addresses the growing need for effective monitoring and optimization
of software systems through detailed event log analysis. We utilize a
BERT-based architecture with an enhanced tokenization process customized to
event logs. The tool, developed using Python, its libraries, and an SQLite
database, allows both conducting experiments for academic purposes and
efficiently solving industry-emerging tasks. Our experiments demonstrate the
efficacy of our approach in compressing event sequences, detecting recurring
patterns, and identifying anomalies. The trained model shows promising results,
with a high accuracy rate in anomaly detection, which demonstrates the
potential of NLP methods to improve the reliability and stability of software
systems.
|
2502.04223
|
\'Eclair -- Extracting Content and Layout with Integrated Reading Order
for Documents
|
cs.CV
|
Optical Character Recognition (OCR) technology is widely used to extract text
from images of documents, facilitating efficient digitization and data
retrieval. However, merely extracting text is insufficient when dealing with
complex documents. Fully comprehending such documents requires an understanding
of their structure -- including formatting, formulas, tables, and the reading
order of multiple blocks and columns across multiple pages -- as well as
semantic information for detecting elements like footnotes and image captions.
This comprehensive understanding is crucial for downstream tasks such as
retrieval, document question answering, and data curation for training Large
Language Models (LLMs) and Vision Language Models (VLMs). To address this, we
introduce \'Eclair, a general-purpose text-extraction tool specifically
designed to process a wide range of document types. Given an image, \'Eclair is
able to extract formatted text in reading order, along with bounding boxes and
their corresponding semantic classes. To thoroughly evaluate these novel
capabilities, we introduce our diverse human-annotated benchmark for
document-level OCR and semantic classification. \'Eclair achieves
state-of-the-art accuracy on this benchmark, outperforming other methods across
key metrics. Additionally, we evaluate \'Eclair on established benchmarks,
demonstrating its versatility and strength across several evaluation standards.
|
2502.04226
|
Keep It Light! Simplifying Image Clustering Via Text-Free Adapters
|
cs.CV cs.LG cs.NE stat.CO stat.ML
|
Many competitive clustering pipelines have a multi-modal design, leveraging
large language models (LLMs) or other text encoders, and text-image pairs,
which are often unavailable in real-world downstream applications.
Additionally, such frameworks are generally complicated to train and require
substantial computational resources, making widespread adoption challenging. In
this work, we show that in deep clustering, competitive performance with more
complex state-of-the-art methods can be achieved using a text-free and highly
simplified training pipeline. In particular, our approach, Simple Clustering
via Pre-trained models (SCP), trains only a small cluster head while leveraging
pre-trained vision model feature representations and positive data pairs.
Experiments on benchmark datasets including CIFAR-10, CIFAR-20, CIFAR-100,
STL-10, ImageNet-10, and ImageNet-Dogs, demonstrate that SCP achieves highly
competitive performance. Furthermore, we provide a theoretical result
explaining why, at least under ideal conditions, additional text-based
embeddings may not be necessary to achieve strong clustering performance in
vision.
|
2502.04229
|
Dark Distillation: Backdooring Distilled Datasets without Accessing Raw
Data
|
cs.CR cs.AI
|
Dataset distillation (DD) enhances training efficiency and reduces bandwidth
by condensing large datasets into smaller synthetic ones. It enables models to
achieve performance comparable to that of models trained on the raw full dataset and has
become a widely adopted method for data sharing. However, security concerns in
DD remain underexplored. Existing studies typically assume that malicious
behavior originates from dataset owners during the initial distillation
process, where backdoors are injected into raw datasets. In contrast, this work
is the first to address a more realistic and concerning threat: attackers may
intercept the dataset distribution process, inject backdoors into the distilled
datasets, and redistribute them to users. While distilled datasets were
previously considered resistant to backdoor attacks, we demonstrate that they
remain vulnerable to such attacks. Furthermore, we show that attackers do not
even require access to any raw data to inject the backdoors successfully.
Specifically, our approach reconstructs conceptual archetypes for each class
from the model trained on the distilled dataset. Backdoors are then injected
into these archetypes to update the distilled dataset. Moreover, we ensure the
updated dataset not only retains the backdoor but also preserves the original
optimization trajectory, thus maintaining the knowledge of the raw dataset. To
achieve this, a hybrid loss is designed to integrate backdoor information along
the benign optimization trajectory, ensuring that previously learned
information is not forgotten. Extensive experiments demonstrate that distilled
datasets are highly vulnerable to backdoor attacks, with risks pervasive across
various raw datasets, distillation methods, and downstream training strategies.
Moreover, our attack method is efficient, capable of synthesizing a malicious
distilled dataset in under one minute in certain cases.
|
2502.04230
|
XAttnMark: Learning Robust Audio Watermarking with Cross-Attention
|
cs.SD cs.AI cs.CR cs.LG eess.AS
|
The rapid proliferation of generative audio synthesis and editing
technologies has raised significant concerns about copyright infringement, data
provenance, and the spread of misinformation through deepfake audio.
Watermarking offers a proactive solution by embedding imperceptible,
identifiable, and traceable marks into audio content. While recent neural
network-based watermarking methods like WavMark and AudioSeal have improved
robustness and quality, they struggle to achieve both robust detection and
accurate attribution simultaneously. This paper introduces Cross-Attention
Robust Audio Watermark (XAttnMark), which bridges this gap by leveraging
partial parameter sharing between the generator and the detector, a
cross-attention mechanism for efficient message retrieval, and a temporal
conditioning module for improved message distribution. Additionally, we propose
a psychoacoustic-aligned temporal-frequency masking loss that captures
fine-grained auditory masking effects, enhancing watermark imperceptibility.
Our approach achieves state-of-the-art performance in both detection and
attribution, demonstrating superior robustness against a wide range of audio
transformations, including challenging generative editing with strong editing
strength. The project webpage is available at
https://liuyixin-louis.github.io/xattnmark/.
|
2502.04233
|
Graph machine learning for flight delay prediction due to holding
maneuvers
|
cs.LG cs.SI
|
Flight delays due to holding maneuvers are a critical and costly phenomenon
in aviation, driven by the need to manage air traffic congestion and ensure
safety. Holding maneuvers occur when aircraft are instructed to circle in
designated airspace, often due to factors such as airport congestion, adverse
weather, or air traffic control restrictions. This study models the prediction
of flight delays due to holding maneuvers as a graph problem, leveraging
advanced Graph Machine Learning (Graph ML) techniques to capture complex
interdependencies in air traffic networks. Holding maneuvers, while crucial for
safety, cause increased fuel usage, emissions, and passenger dissatisfaction,
making accurate prediction essential for operational efficiency. Traditional
machine learning models, typically using tabular data, often overlook
spatial-temporal relations within air traffic data. To address this, we model
the problem of predicting holding as edge feature prediction in a directed
(multi)graph where we apply both CatBoost, enriched with graph features
capturing network centrality and connectivity, and Graph Attention Networks
(GATs), which excel in relational data contexts. Our results indicate that
CatBoost outperforms GAT in this imbalanced dataset, effectively predicting
holding events and offering interpretability through graph-based feature
importance. Additionally, we discuss the model's potential operational impact
through a web-based tool that allows users to simulate real-time delay
predictions. This research underscores the viability of graph-based approaches
for predictive analysis in aviation, with implications for enhancing fuel
efficiency, reducing delays, and improving passenger experience.
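The graph features fed to CatBoost can be illustrated with a minimal pure-Python sketch. The feature set here (degree counts and a reciprocity flag) is an assumed simplification of the centrality and connectivity features described above:

```python
from collections import Counter

def edge_graph_features(edges, u, v):
    """Simple graph-derived features for a directed edge (u, v):
    out-degree of the origin, in-degree of the destination, and
    whether the reverse edge exists (a crude reciprocity signal).
    `edges` is a list of (origin, destination) pairs; multi-edges allowed.
    """
    out_deg = Counter(o for o, _ in edges)
    in_deg = Counter(d for _, d in edges)
    return {
        "origin_out_degree": out_deg[u],
        "dest_in_degree": in_deg[v],
        "has_reverse_edge": (v, u) in set(edges),
    }
```

Such per-edge features are then concatenated with tabular flight attributes before training the gradient-boosting model.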
|
2502.04234
|
A Classification System Approach in Predicting Chinese Censorship
|
cs.CL cs.LG cs.SI
|
This paper is dedicated to using a classifier to predict whether a Weibo post
would be censored on the Chinese internet. Through randomized sampling from
\citeauthor{Fu2021} and Chinese tokenizing strategies, we constructed a cleaned
Chinese phrase dataset with binary censorship markings. Utilizing various
probability-based information retrieval methods on the data, we were able to
derive 4 logistic regression models for classification. Furthermore, we
experimented with pre-trained transformers to perform similar classification
tasks. After evaluating both the macro-F1 and ROC-AUC metrics, we concluded
that the fine-tuned BERT model outperforms the other strategies.
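The probability-based retrieval features behind the logistic regression models can be sketched as plain TF-IDF weighting. This is a hedged, minimal stand-in; the paper's exact weighting schemes may differ:

```python
import math
from collections import Counter

def tf_idf(docs):
    """TF-IDF weights for a list of tokenized documents (lists of tokens).

    tf is the within-document term frequency, idf = log(N / df(t)).
    Returns one {token: weight} dict per document.
    """
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # document frequency: count each token once per doc
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights
```

Tokens appearing in every document get weight zero, so only discriminative phrases feed the downstream classifier.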
|
2502.04235
|
MAGA: MAssive Genre-Audience Reformulation to Pretraining Corpus
Expansion
|
cs.CL
|
Despite the remarkable capabilities of large language models across various
tasks, their continued scaling faces a critical challenge: the scarcity of
high-quality pretraining data. While model architectures continue to evolve,
the natural language data struggles to scale up. To tackle this bottleneck, we
propose \textbf{MA}ssive \textbf{G}enre-\textbf{A}udience~(MAGA) reformulation
method, which systematically synthesizes diverse, contextually rich pretraining
data from existing corpora. This work makes three main contributions: (1) We
propose MAGA reformulation method, a lightweight and scalable approach for
pretraining corpus expansion, and build a 770B tokens MAGACorpus. (2) We
evaluate MAGACorpus with different data budget scaling strategies,
demonstrating consistent improvements across various model sizes (134M-13B),
establishing the necessity for next-generation large-scale synthetic
pretraining language models. (3) Through comprehensive analysis, we investigate
prompt engineering's impact on synthetic training collapse and reveal
limitations in conventional collapse detection metrics using validation losses.
Our work shows that MAGA can substantially expand training datasets while
maintaining quality, offering a reliable pathway for scaling models beyond data
limitations.
|
2502.04240
|
Memory-dependent abstractions of stochastic systems through the lens of
transfer operators
|
eess.SY cs.SY
|
With the increasing ubiquity of safety-critical autonomous systems operating
in uncertain environments, there is a need for mathematical methods for formal
verification of stochastic models. Towards formally verifying properties of
stochastic systems, methods based on discrete, finite Markov approximations --
abstractions -- thereof have surged in recent years. These are found in
contexts where: either a) one only has partial, discrete observations of the
underlying continuous stochastic process, or b) the original system is too
complex to analyze, so one partitions the continuous state-space of the
original system to construct a handleable, finite-state model thereof. In both
cases, the abstraction is an approximation of the discrete stochastic process
that arises precisely from the discretization of the underlying continuous
process. The fact that the abstraction is Markov and the discrete process is
not (even though the original one is) leads to approximation errors. Towards
accounting for non-Markovianity, we introduce memory-dependent abstractions for
stochastic systems, capturing dynamics with memory effects. Our contribution is
twofold. First, we provide a formalism for memory-dependent abstractions based
on transfer operators. Second, we quantify the approximation error by upper
bounding the total variation distance between the true continuous state
distribution and its discrete approximation.
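The error bound in the second contribution is stated in total variation distance, which for discrete distributions is simple to compute. A minimal sketch, not the paper's code:

```python
def total_variation(p, q):
    """Total variation distance between two discrete distributions,
    given as dicts mapping states to probabilities:

        TV(p, q) = (1/2) * sum_x |p(x) - q(x)|
    """
    states = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in states)
```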
|
2502.04242
|
A Theoretical Framework for Data Efficient Multi-Source Transfer
Learning Based on Cram\'er-Rao Bound
|
cs.LG cs.AI
|
Multi-source transfer learning provides an effective solution to data
scarcity in real-world supervised learning scenarios by leveraging multiple
source tasks. In this field, existing works typically use all available samples
from sources in training, which constrains their training efficiency and may
lead to suboptimal results. To address this, we propose a theoretical framework
that answers the question: what is the optimal quantity of source samples
needed from each source task to jointly train the target model? Specifically,
we introduce a generalization error measure that aligns with cross-entropy
loss, and minimize it based on the Cram\'er-Rao Bound to determine the optimal
transfer quantity for each source task. Additionally, we develop an
architecture-agnostic and data-efficient algorithm OTQMS to implement our
theoretical results for training deep multi-source transfer learning models.
Experimental studies on diverse architectures and two real-world benchmark
datasets show that our proposed algorithm significantly outperforms
state-of-the-art approaches in both accuracy and data efficiency. The code and
supplementary materials are available at
https://anonymous.4open.science/r/Materials.
|
2502.04244
|
An object detection approach for lane change and overtake detection from
motion profiles
|
cs.CV
|
In the application domain of fleet management and driver monitoring, it is
very challenging to obtain relevant driving events and activities from dashcam
footage while minimizing the amount of information stored and analyzed. In this
paper, we address the identification of overtake and lane change maneuvers with
a novel object detection approach applied to motion profiles, a compact
representation of driving video footage into a single image. To train and test
our model we created an internal dataset of motion profile images obtained from
a heterogeneous set of dashcam videos, manually labeled with overtake and lane
change maneuvers by the ego-vehicle. In addition to a standard object-detection
approach, we show how the inclusion of CoordConvolution layers further improves
the model performance, in terms of mAP and F1 score, yielding state-of-the-art
performance when compared to other baselines from the literature. The extremely
low computational requirements of the proposed solution make it especially
suitable to run on-device.
|
2502.04245
|
TriNER: A Series of Named Entity Recognition Models For Hindi, Bengali &
Marathi
|
cs.CL cs.AI cs.LG
|
India's rich cultural and linguistic diversity poses various challenges in
the domain of Natural Language Processing (NLP), particularly in Named Entity
Recognition (NER). NER is an NLP task that aims to identify and classify tokens
into different entity groups like Person, Location, Organization, Number, etc.
This makes NER very useful for downstream tasks like context-aware
anonymization. This paper details our work to build a multilingual NER model
for the three most spoken languages in India - Hindi, Bengali & Marathi. We
train a custom transformer model and fine-tune a few pretrained models,
achieving an F1 Score of 92.11 for a total of 6 entity groups. Through this
paper, we aim to introduce a single model to perform NER and significantly
reduce the inconsistencies in entity groups and tag names, across the three
languages.
|
2502.04247
|
Student-t processes as infinite-width limits of posterior Bayesian
neural networks
|
stat.ML cs.LG math.PR
|
The asymptotic properties of Bayesian Neural Networks (BNNs) have been
extensively studied, particularly regarding their approximations by Gaussian
processes in the infinite-width limit. We extend these results by showing that
posterior BNNs can be approximated by Student-t processes, which offer greater
flexibility in modeling uncertainty. Specifically, we show that, if the
parameters of a BNN follow a Gaussian prior distribution, and the variance of
both the last hidden layer and the Gaussian likelihood function follows an
Inverse-Gamma prior distribution, then the resulting posterior BNN converges to
a Student-t process in the infinite-width limit. Our proof leverages the
Wasserstein metric to establish control over the convergence rate of the
Student-t process approximation.
|
2502.04248
|
Adapting to Evolving Adversaries with Regularized Continual Robust
Training
|
cs.LG
|
Robust training methods typically defend against specific attack types, such
as Lp attacks with fixed budgets, and rarely account for the fact that
defenders may encounter new attacks over time. A natural solution is to adapt
the defended model to new adversaries as they arise via fine-tuning, a method
which we call continual robust training (CRT). However, when implemented
naively, fine-tuning on new attacks degrades robustness on previous attacks.
This raises the question: how can we improve the initial training and
fine-tuning of the model to simultaneously achieve robustness against previous
and new attacks? We present theoretical results which show that the gap in a
model's robustness against different attacks is bounded by how far each attack
perturbs a sample in the model's logit space, suggesting that regularizing with
respect to this logit space distance can help maintain robustness against
previous attacks. Extensive experiments on 3 datasets (CIFAR-10, CIFAR-100, and
ImageNette) and over 100 attack combinations demonstrate that the proposed
regularization improves robust accuracy with little overhead in training time.
Our findings and open-source code lay the groundwork for the deployment of
models robust to evolving attacks.
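The logit-space regularization can be sketched as follows. The squared-L2 form and the weighting are assumptions for illustration; the paper's exact regularizer may differ:

```python
def logit_distance_penalty(logits_a, logits_b):
    """Squared L2 distance between two logit vectors: how far two attacks
    push the same sample apart in the model's logit space."""
    return sum((a - b) ** 2 for a, b in zip(logits_a, logits_b))

def regularized_loss(task_loss, logits_old_attack, logits_new_attack, lam=0.1):
    """Fine-tuning objective for continual robust training (sketch):
    the task loss on the new attack plus a consistency term that keeps
    old-attack and new-attack logits close, preserving prior robustness."""
    return task_loss + lam * logit_distance_penalty(
        logits_old_attack, logits_new_attack)
```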
|
2502.04249
|
Free Energy Risk Metrics for Systemically Safe AI: Gatekeeping
Multi-Agent Study
|
cs.AI cs.LG cs.MA physics.data-an stat.ML
|
We investigate the Free Energy Principle as a foundation for measuring risk
in agentic and multi-agent systems. From this principle we introduce a
Cumulative Risk Exposure metric that is flexible to differing contexts and
needs. We contrast this to other popular theories for safe AI that hinge on
massive amounts of data or describing arbitrarily complex world models. In our
framework, stakeholders need only specify their preferences over system
outcomes, providing straightforward and transparent decision rules for risk
governance and mitigation. This framework naturally accounts for uncertainty in
both world model and preference model, allowing for decision-making that is
epistemically and axiologically humble, parsimonious, and future-proof. We
demonstrate this novel approach in a simplified autonomous vehicle environment
with multi-agent vehicles whose driving policies are mediated by gatekeepers
that evaluate, in an online fashion, the risk to the collective safety in their
neighborhood, and intervene through each vehicle's policy when appropriate. We
show that the introduction of gatekeepers in an AV fleet, even at low
penetration, can generate significant positive externalities in terms of
increased system safety.
|
2502.04251
|
Combining Language and App UI Analysis for the Automated Assessment of
Bug Reproduction Steps
|
cs.SE cs.LG
|
Bug reports are essential for developers to confirm software problems,
investigate their causes, and validate fixes. Unfortunately, reports often miss
important information or are written unclearly, which can cause delays,
increased issue resolution effort, or even the inability to solve issues. One
of the most common components of reports that are problematic is the steps to
reproduce the bug(s) (S2Rs), which are essential to replicate the described
program failures and reason about fixes. Given the proclivity for deficiencies
in reported S2Rs, prior work has proposed techniques that assist reporters in
writing or assessing the quality of S2Rs. However, automated understanding of
S2Rs is challenging, and requires linking nuanced natural language phrases with
specific, semantically related program information. Prior techniques often
struggle to form such language-to-program connections, due to language
variability and the limited information gleaned from program analyses.
To more effectively tackle the problem of S2R quality annotation, we propose
a new technique called AstroBR, which leverages the language understanding
capabilities of LLMs to identify and extract the S2Rs from bug reports and map
them to GUI interactions in a program state model derived via dynamic analysis.
We compared AstroBR to a related state-of-the-art approach and we found that
AstroBR annotates S2Rs 25.2% better (in terms of F1 score) than the baseline.
Additionally, AstroBR suggests more accurate missing S2Rs than the baseline (by
71.4% in terms of F1 score).
|
2502.04256
|
Work in Progress: AI-Powered Engineering-Bridging Theory and Practice
|
eess.SY cs.SE cs.SY
|
This paper explores how generative AI can help automate and improve key steps
in systems engineering. It examines AI's ability to analyze system requirements
based on INCOSE's "good requirement" criteria, identifying well-formed and
poorly written requirements. The AI does not just classify requirements but
also explains why some do not meet the standards. By comparing AI assessments
with those of experienced engineers, the study evaluates the accuracy and
reliability of AI in identifying quality issues. Additionally, it explores AI's
ability to classify functional and non-functional requirements and generate
test specifications based on these classifications. Through both quantitative
and qualitative analysis, the research aims to assess AI's potential to
streamline engineering processes and improve learning outcomes. It also
highlights the challenges and limitations of AI, ensuring its safe and ethical
use in professional and academic settings.
|
2502.04260
|
Realistic Image-to-Image Machine Unlearning via Decoupling and Knowledge
Retention
|
cs.LG
|
Machine Unlearning allows participants to remove their data from a trained
machine learning model in order to preserve their privacy and security.
However, the machine unlearning literature for generative models is rather
limited. The literature on image-to-image generative models (I2I models)
treats minimizing the distance between Gaussian noise and the I2I model's
output on forget samples as machine unlearning. However, we argue that a
machine learning model performs fairly well on unseen data; i.e., a retrained
model will be able to catch generic patterns in the data and hence will not
generate an output equivalent to Gaussian noise.
consider that the model after unlearning should treat forget samples as
out-of-distribution (OOD) data, i.e., the unlearned model should no longer
recognize or encode the specific patterns found in the forget samples. To
achieve this, we propose a framework which decouples the model parameters with
gradient ascent, ensuring that forget samples are OOD for unlearned model with
theoretical guarantee. We also provide $(\epsilon, \delta)$-unlearning
guarantee for model updates with gradient ascent. The unlearned model is
further fine-tuned on the remaining samples to maintain its performance. We
also propose an attack model to ensure that the unlearned model has effectively
removed the influence of forget samples. Extensive empirical evaluation on two
large-scale datasets, ImageNet-1K and Places365 highlights the superiority of
our approach. To show performance comparable to a retrained model, we also
compare a simple AutoEncoder against various baselines on the CIFAR-10
dataset.
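The decoupled update, gradient ascent on the forget loss followed by descent on the retained data, can be sketched on a toy parameter vector. The function and the interleaving of the two steps are illustrative assumptions, not the paper's algorithm:

```python
def unlearn_step(theta, forget_grad, retain_grad, eta=0.1):
    """One sketched unlearning step: gradient *ascent* on the forget-set
    loss pushes forget samples out of distribution, then a descent step
    on the retained data preserves performance. All arguments are flat
    lists of floats (parameters and their per-loss gradients)."""
    ascended = [t + eta * g for t, g in zip(theta, forget_grad)]
    return [t - eta * g for t, g in zip(ascended, retain_grad)]
```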
|
2502.04262
|
Efficient Randomized Experiments Using Foundation Models
|
cs.LG stat.ME stat.ML
|
Randomized experiments are the preferred approach for evaluating the effects
of interventions, but they are costly and often yield estimates with
substantial uncertainty. On the other hand, in silico experiments leveraging
foundation models offer a cost-effective alternative that can potentially
attain higher statistical precision. However, the benefits of in silico
experiments come with a significant risk: statistical inferences are not valid
if the models fail to accurately predict experimental responses to
interventions. In this paper, we propose a novel approach that integrates the
predictions from multiple foundation models with experimental data while
preserving valid statistical inference. Our estimator is consistent and
asymptotically normal, with asymptotic variance no larger than the standard
estimator based on experimental data alone. Importantly, these statistical
properties hold even when model predictions are arbitrarily biased. Empirical
results across several randomized experiments show that our estimator offers
substantial precision gains, equivalent to a reduction of up to 20% in the
sample size needed to match the same precision as the standard estimator based
on experimental data alone.
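The bias-robustness of such estimators can be illustrated with a minimal mean-estimation sketch in the spirit of prediction-powered inference; this is an assumed simplification of the paper's estimator, not its actual form:

```python
def augmented_estimate(y_exp, f_exp, f_pool):
    """Prediction-augmented mean estimate (sketch): use model predictions
    f_pool on a large unlabeled pool, then debias with the residuals
    y - f on the experimental sample. Because the residual term corrects
    any systematic error, the estimate stays consistent even when the
    model f is arbitrarily biased."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(f_pool) + (mean(y_exp) - mean(f_exp))
```

In the test below the model over-predicts every outcome by 2, yet the residual correction recovers the true pool mean.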
|
2502.04263
|
Cross the Gap: Exposing the Intra-modal Misalignment in CLIP via
Modality Inversion
|
cs.CV cs.AI cs.LG
|
Pre-trained multi-modal Vision-Language Models like CLIP are widely used
off-the-shelf for a variety of applications. In this paper, we show that the
common practice of individually exploiting the text or image encoders of these
powerful multi-modal models is highly suboptimal for intra-modal tasks like
image-to-image retrieval. We argue that this is inherently due to the
CLIP-style inter-modal contrastive loss that does not enforce any intra-modal
constraints, leading to what we call intra-modal misalignment. To demonstrate
this, we leverage two optimization-based modality inversion techniques that map
representations from their input modality to the complementary one without any
need for auxiliary data or additional trained adapters. We empirically show
that, in the intra-modal tasks of image-to-image and text-to-text retrieval,
approaching these tasks inter-modally significantly improves performance with
respect to intra-modal baselines on more than fifteen datasets. Additionally,
we demonstrate that approaching a native inter-modal task (e.g. zero-shot image
classification) intra-modally decreases performance, further validating our
findings. Finally, we show that incorporating an intra-modal term in the
pre-training objective or narrowing the modality gap between the text and image
feature embedding spaces helps reduce the intra-modal misalignment. The code is
publicly available at: https://github.com/miccunifi/Cross-the-Gap.
|
2502.04266
|
Digital Gatekeeping: An Audit of Search Engine Results shows tailoring
of queries on the Israel-Palestine Conflict
|
cs.CY cs.IR
|
Search engines, often viewed as reliable gateways to information, tailor
search results using customization algorithms based on user preferences,
location, and more. While this can be useful for routine queries, it raises
concerns when the topics are sensitive or contentious, possibly limiting
exposure to diverse viewpoints and increasing polarization.
To examine the extent of this tailoring, we focused on the Israel-Palestine
conflict and developed a privacy-protecting tool to audit the behavior of three
search engines: DuckDuckGo, Google and Yahoo. Our study focused on two main
questions: (1) How do search results for the same query about the conflict vary
among different users? and (2) Are these results influenced by the user's
location and browsing history?
Our findings revealed significant customization based on location and
browsing preferences, unlike previous studies that found only mild
personalization for general topics. Moreover, queries related to the conflict
were more customized than unrelated queries, and the results were not neutral
concerning the conflict's portrayal.
|
2502.04268
|
Point2RBox-v2: Rethinking Point-supervised Oriented Object Detection
with Spatial Layout Among Instances
|
cs.CV cs.AI
|
With the rapidly increasing demand for oriented object detection (OOD),
recent research involving weakly-supervised detectors for learning OOD from
point annotations has gained great attention. In this paper, we rethink this
challenging task setting with the layout among instances and present
Point2RBox-v2. At the core are three principles: 1) Gaussian overlap loss. It
learns an upper bound for each instance by treating objects as 2D Gaussian
distributions and minimizing their overlap. 2) Voronoi watershed loss. It
learns a lower bound for each instance through watershed on Voronoi
tessellation. 3) Consistency loss. It learns the size/rotation variation
between two output sets with respect to an input image and its augmented view.
Supplemented by a few devised techniques, e.g. edge loss and copy-paste, the
detector is further enhanced. To our best knowledge, Point2RBox-v2 is the first
approach to explore the spatial layout among instances for learning
point-supervised OOD. Our solution is elegant and lightweight, yet it is
expected to give a competitive performance especially in densely packed scenes:
62.61%/86.15%/34.71% on DOTA/HRSC/FAIR1M. Code is available at
https://github.com/VisionXLab/point2rbox-v2.
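Principle (1) can be illustrated with a Bhattacharyya-coefficient overlap between axis-aligned 2D Gaussians. This is a simplified, hedged sketch: the paper treats general 2D Gaussians and its exact loss may differ:

```python
import math

def gaussian_overlap(mu1, var1, mu2, var2):
    """Bhattacharyya coefficient between two axis-aligned 2D Gaussians
    (diagonal covariances). Returns 1.0 for identical Gaussians and
    values near 0 for well-separated ones; minimizing a loss built on
    this quantity spreads instances apart."""
    d = 0.0
    log_det = 0.0
    for m1, v1, m2, v2 in zip(mu1, var1, mu2, var2):
        v = 0.5 * (v1 + v2)                      # averaged variance per axis
        d += 0.125 * (m1 - m2) ** 2 / v          # mean-separation term
        log_det += 0.5 * math.log(v / math.sqrt(v1 * v2))  # shape term
    return math.exp(-(d + log_det))
```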
|
2502.04269
|
How does a Multilingual LM Handle Multiple Languages?
|
cs.CL cs.AI
|
Multilingual language models (MLMs) have advanced significantly due to rapid
progress in natural language processing. Models like BLOOM 1.7B, trained on
diverse multilingual datasets, aim to bridge linguistic gaps. However, their
effectiveness in capturing linguistic knowledge, particularly for low-resource
languages, remains an open question. This study critically examines MLMs'
capabilities in multilingual understanding, semantic representation, and
cross-lingual knowledge transfer. While these models perform well for
high-resource languages, they struggle with less-represented ones.
Additionally, traditional evaluation methods often overlook their internal
syntactic and semantic encoding.
This research addresses key limitations through three objectives. First, it
assesses semantic similarity by analyzing multilingual word embeddings for
consistency using cosine similarity. Second, it examines BLOOM-1.7B and Qwen2
through Named Entity Recognition and sentence similarity tasks to understand
their linguistic structures. Third, it explores cross-lingual knowledge
transfer by evaluating generalization from high-resource to low-resource
languages in sentiment analysis and text classification.
By leveraging linguistic probing, performance metrics, and visualizations,
this study provides insights into the strengths and limitations of MLMs. The
findings aim to enhance multilingual NLP models, ensuring better support for
both high- and low-resource languages, thereby promoting inclusivity in
language technologies.
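The first objective, checking embedding consistency across languages, reduces to cosine similarity between word vectors. A minimal sketch, with made-up vectors standing in for actual BLOOM embeddings of translation-equivalent words:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical embeddings of translation-equivalent words in two languages;
# a consistent multilingual space should give a similarity close to 1.
emb_en = np.array([0.9, 0.1, 0.3])
emb_hi = np.array([0.8, 0.2, 0.35])
sim = cosine_similarity(emb_en, emb_hi)
```

In practice the vectors would come from the model's embedding layer or hidden states, and similarity would be averaged over a bilingual lexicon.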
|
2502.04270
|
PILAF: Optimal Human Preference Sampling for Reward Modeling
|
cs.LG stat.ML
|
As large language models increasingly drive real-world applications, aligning
them with human values becomes paramount. Reinforcement Learning from Human
Feedback (RLHF) has emerged as a key technique, translating preference data
into reward models when oracle human values remain inaccessible. In practice,
RLHF mostly relies on approximate reward models, which may not consistently
guide the policy toward maximizing the underlying human values. We propose
Policy-Interpolated Learning for Aligned Feedback (PILAF), a novel response
sampling strategy for preference labeling that explicitly aligns preference
learning with maximizing the underlying oracle reward. PILAF is theoretically
grounded, demonstrating optimality from both an optimization and a statistical
perspective. The method is straightforward to implement and demonstrates strong
performance in iterative and online RLHF settings where feedback curation is
critical.
|
2502.04271
|
Variational decision diagrams for quantum-inspired machine learning
applications
|
quant-ph cs.LG
|
Decision diagrams (DDs) have emerged as an efficient tool for simulating
quantum circuits due to their capacity to exploit data redundancies in quantum
states and quantum operations, enabling the efficient computation of
probability amplitudes. However, their application in quantum machine learning
(QML) has remained unexplored. This paper introduces variational decision
diagrams (VDDs), a novel graph structure that combines the structural benefits
of DDs with the adaptability of variational methods for efficiently
representing quantum states. We investigate the trainability of VDDs by
applying them to the ground state estimation problem for transverse-field Ising
and Heisenberg Hamiltonians. Analysis of gradient variance suggests that
training VDDs is possible, as no signs of vanishing gradients--also known as
barren plateaus--are observed. This work provides new insights into the use of
decision diagrams in QML as an alternative to design and train variational
ans\"atze.
|
2502.04273
|
Electrical Impedance Tomography for Anisotropic Media: a Machine
Learning Approach to Classify Inclusions
|
math.NA cs.LG cs.NA
|
We consider the problem in Electrical Impedance Tomography (EIT) of
identifying one or multiple inclusions in a background-conducting body
$\Omega\subset\mathbb{R}^2$, from the knowledge of a finite number of
electrostatic measurements taken on its boundary $\partial\Omega$ and modelled
by the Dirichlet-to-Neumann (D-N) matrix. Once the presence of one inclusion in
$\Omega$ is established, our model, combined with the machine learning
techniques of Artificial Neural Networks (ANN) and Support Vector Machines
(SVM), may be used to determine the size of the inclusion, the presence of
multiple inclusions, and also that of anisotropy within the inclusion(s).
Utilising both real and simulated datasets within a 16-electrode setup, we
achieve a high rate of inclusion detection and show that two measurements are
sufficient to achieve a good level of accuracy when predicting the size of an
inclusion. This underscores the substantial potential of integrating machine
learning approaches with the more classical analysis of EIT and the inverse
inclusion problem to extract critical insights, such as the presence of
anisotropy.
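The classification setup above can be sketched end to end on synthetic data. The D-N matrices below are toy stand-ins (an inclusion adds a small uniform perturbation to an identity-like background response), and a nearest-centroid rule replaces the paper's SVM/ANN classifiers:

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_dn_matrix(has_inclusion, n_electrodes=16):
    """Toy stand-in for a Dirichlet-to-Neumann matrix from a 16-electrode
    setup: an inclusion slightly perturbs the background response."""
    base = np.eye(n_electrodes)
    noise = 0.01 * rng.standard_normal((n_electrodes, n_electrodes))
    bump = 0.2 * has_inclusion * np.ones((n_electrodes, n_electrodes)) / n_electrodes
    return base + bump + noise

# Labelled dataset of flattened D-N matrices: 0 = no inclusion, 1 = inclusion.
X = np.array([synthetic_dn_matrix(y).ravel() for y in [0, 1] * 50])
y = np.array([0, 1] * 50)

# Nearest-centroid classification (a simple stand-in for SVM/ANN).
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
```

The point is only the pipeline shape (boundary measurements in, inclusion label out); the real classifiers and forward model are far richer.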
|
2502.04274
|
Orthogonal Representation Learning for Estimating Causal Quantities
|
cs.LG
|
Representation learning is widely used for estimating causal quantities
(e.g., the conditional average treatment effect) from observational data. While
existing representation learning methods have the benefit of allowing for
end-to-end learning, they do not have favorable theoretical properties of
Neyman-orthogonal learners, such as double robustness and quasi-oracle
efficiency. Also, such representation learning methods often employ additional
constraints, like balancing, which may even lead to inconsistent estimation. In
this paper, we propose a novel class of Neyman-orthogonal learners for causal
quantities defined at the representation level, which we call OR-learners. Our
OR-learners have several practical advantages: they allow for consistent
estimation of causal quantities based on any learned representation, while
offering favorable theoretical properties including double robustness and
quasi-oracle efficiency. In multiple experiments, we show that, under certain
regularity conditions, our OR-learners improve existing representation learning
methods and achieve state-of-the-art performance. To the best of our knowledge,
our OR-learners are the first work to offer a unified framework of
representation learning methods and Neyman-orthogonal learners for causal
quantities estimation.
|
2502.04276
|
Gaussian Process Regression for Inverse Problems in Linear PDEs
|
stat.ML cs.LG math.AC
|
This paper introduces a computationally efficient algorithm in system theory
for solving inverse problems governed by linear partial differential equations
(PDEs). We model solutions of linear PDEs using Gaussian processes with priors
defined based on advanced commutative algebra and algebraic analysis. The
implementation of these priors is algorithmic and achieved using the Macaulay2
computer algebra software. An example application includes identifying the wave
speed from noisy data for classical wave equations, which are widely used in
physics. The method achieves high accuracy while enhancing computational
efficiency.
|
2502.04281
|
DECAF: Learning to be Fair in Multi-agent Resource Allocation
|
cs.LG cs.CY cs.MA
|
A wide variety of resource allocation problems operate under resource
constraints that are managed by a central arbitrator, with agents who evaluate
and communicate preferences over these resources. We formulate this broad class
of problems as Distributed Evaluation, Centralized Allocation (DECA) problems
and propose methods to learn fair and efficient policies in centralized
resource allocation. Our methods are applied to learning long-term fairness in
a novel and general framework for fairness in multi-agent systems. We show
three different methods based on Double Deep Q-Learning: (1) A joint weighted
optimization of fairness and utility, (2) a split optimization, learning two
separate Q-estimators for utility and fairness, and (3) an online policy
perturbation to guide existing black-box utility functions toward fair
solutions. Our methods outperform existing fair MARL approaches on multiple
resource allocation domains, even when evaluated using diverse fairness
functions, and allow for flexible online trade-offs between utility and
fairness.
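Method (1), the joint weighted optimization, can be sketched as scoring each candidate allocation by a convex combination of the two Q-estimates. The Q-values below are hypothetical:

```python
import numpy as np

def allocate(q_utility, q_fairness, w):
    """Joint weighted optimization (method 1, sketched): score candidate
    allocations by a convex combination of utility and fairness
    Q-estimates, then pick the argmax."""
    scores = w * q_utility + (1.0 - w) * q_fairness
    return int(np.argmax(scores))

# Hypothetical Q-values for three candidate allocations.
q_u = np.array([1.0, 0.7, 0.2])   # utility-greedy prefers allocation 0
q_f = np.array([0.1, 0.6, 0.9])   # fairness-greedy prefers allocation 2
choice_util  = allocate(q_u, q_f, w=1.0)  # pure utility
choice_fair  = allocate(q_u, q_f, w=0.0)  # pure fairness
choice_mixed = allocate(q_u, q_f, w=0.5)  # online trade-off between the two
```

Varying `w` at deployment time is what enables the flexible online trade-offs mentioned above; the split-estimator variant (2) differs only in how the two Q-functions are trained.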
|
2502.04286
|
A Methodology for Studying Linguistic and Cultural Change in China,
1900-1950
|
cs.CL
|
This paper presents a quantitative approach to studying linguistic and
cultural change in China during the first half of the twentieth century, a
period that remains understudied in computational humanities research. The
dramatic changes in Chinese language and culture during this time call for
greater reflection on the tools and methods used for text analysis. This
preliminary study offers a framework for analyzing Chinese texts from the late
nineteenth and twentieth centuries, demonstrating how established methods such
as word counts and word embeddings can provide new historical insights into the
complex negotiations between Western modernity and Chinese cultural discourse.
|
2502.04288
|
Leveraging Geolocation in Clinical Records to Improve Alzheimer's
Disease Diagnosis Using DMV Framework
|
cs.LG
|
Alzheimer's Disease (AD) early detection is critical for enabling timely
intervention and improving patient outcomes. This paper presents a DMV
framework using Llama3-70B and GPT-4o as embedding models to analyze clinical
notes and predict a continuous risk score associated with early AD onset.
Framing the task as a regression problem, we model the relationship between
linguistic features in clinical notes (inputs) and a target variable (data
value) that answers specific questions related to AD risk within certain topic
categories. By leveraging a multi-faceted feature set that includes geolocation
data, we capture additional environmental context potentially linked to AD. Our
results demonstrate that the integration of the geolocation information
significantly decreases the error of predicting early AD risk scores over prior
models by 28.57% (Llama3-70B) and 33.47% (GPT-4o). Our findings suggest that
this combined approach can enhance the predictive accuracy of AD risk
assessment, supporting early diagnosis and intervention in clinical settings.
Additionally, the framework's ability to incorporate geolocation data provides
a more comprehensive risk assessment model that could help healthcare providers
better understand and address environmental factors contributing to AD
development.
|
2502.04289
|
Retro-Rank-In: A Ranking-Based Approach for Inorganic Materials
Synthesis Planning
|
physics.chem-ph cs.LG
|
Retrosynthesis strategically plans the synthesis of a chemical target
compound from simpler, readily available precursor compounds. This process is
critical for synthesizing novel inorganic materials, yet traditional methods in
inorganic chemistry continue to rely on trial-and-error experimentation.
Emerging machine-learning approaches struggle to generalize to entirely new
reactions due to their reliance on known precursors, as they frame
retrosynthesis as a multi-label classification task. To address these
limitations, we propose Retro-Rank-In, a novel framework that reformulates the
retrosynthesis problem by embedding target and precursor materials into a
shared latent space and learning a pairwise ranker on a bipartite graph of
inorganic compounds. We evaluate Retro-Rank-In's generalizability on
challenging retrosynthesis dataset splits designed to mitigate data duplicates
and overlaps. For instance, for Cr2AlB2, it correctly predicts the verified
precursor pair CrB + Al despite never seeing them in training, a capability
absent in prior work. Extensive experiments show that Retro-Rank-In sets a new
state-of-the-art, particularly in out-of-distribution generalization and
candidate set ranking, offering a powerful tool for accelerating inorganic
material synthesis.
|
2502.04290
|
Every Call is Precious: Global Optimization of Black-Box Functions with
Unknown Lipschitz Constants
|
cs.LG cs.AI cs.SY eess.SY math.OC stat.ML
|
Optimizing expensive, non-convex, black-box Lipschitz continuous functions
presents significant challenges, particularly when the Lipschitz constant of
the underlying function is unknown. Such problems often demand numerous
function evaluations to approximate the global optimum, which can be
prohibitive in terms of time, energy, or resources. In this work, we introduce
Every Call is Precious (ECP), a novel global optimization algorithm that
minimizes unpromising evaluations by strategically focusing on potentially
optimal regions. Unlike previous approaches, ECP eliminates the need to
estimate the Lipschitz constant, thereby avoiding additional function
evaluations. ECP guarantees no-regret performance for infinite evaluation
budgets and achieves minimax-optimal regret bounds within finite budgets.
Extensive ablation studies validate the algorithm's robustness, while empirical
evaluations show that ECP outperforms 10 benchmark algorithms including
Lipschitz, Bayesian, bandits, and evolutionary methods across 30
multi-dimensional non-convex synthetic and real-world optimization problems,
which positions ECP as a competitive approach for global optimization.
|
2502.04293
|
GCE-Pose: Global Context Enhancement for Category-level Object Pose
Estimation
|
cs.CV
|
A key challenge in model-free category-level pose estimation is the
extraction of contextual object features that generalize across varying
instances within a specific category. Recent approaches leverage foundational
features to capture semantic and geometry cues from data. However, these
approaches fail under partial visibility. We overcome this with a
first-complete-then-aggregate strategy for feature extraction utilizing class
priors. In this paper, we present GCE-Pose, a method that enhances pose
estimation for novel instances by integrating category-level global context
prior. GCE-Pose performs semantic shape reconstruction with a proposed Semantic
Shape Reconstruction (SSR) module. Given an unseen partial RGB-D object
instance, our SSR module reconstructs the instance's global geometry and
semantics by deforming category-specific 3D semantic prototypes through a
learned deep Linear Shape Model. We further introduce a Global Context Enhanced
(GCE) feature fusion module that effectively fuses features from partial RGB-D
observations and the reconstructed global context. Extensive experiments
validate the impact of our global context prior and the effectiveness of the
GCE fusion module, demonstrating that GCE-Pose significantly outperforms
existing methods on challenging real-world datasets HouseCat6D and
NOCS-REAL275. Our project page is available at
https://colin-de.github.io/GCE-Pose/.
|
2502.04294
|
Prediction-Powered E-Values
|
stat.ML cs.LG stat.ME
|
Quality statistical inference requires a sufficient amount of data, which can
be missing or hard to obtain. To this end, prediction-powered inference has
risen as a promising methodology, but existing approaches are largely limited
to Z-estimation problems such as inference of means and quantiles. In this
paper, we apply ideas of prediction-powered inference to e-values. By doing so,
we inherit all the usual benefits of e-values -- such as anytime-validity,
post-hoc validity and versatile sequential inference -- as well as greatly
expand the set of inferences achievable in a prediction-powered manner. In
particular, we show that every inference procedure that can be framed in terms
of e-values has a prediction-powered counterpart, given by our method. We
showcase the effectiveness of our framework across a wide range of inference
tasks, from simple hypothesis testing and confidence intervals to more involved
procedures for change-point detection and causal discovery, which were out of
reach of previous techniques. Our approach is modular and easily integrable
into existing algorithms, making it a compelling choice for practical
applications.
|
2502.04295
|
Beyond Prompt Content: Enhancing LLM Performance via Content-Format
Integrated Prompt Optimization
|
cs.CL
|
Large Language Models (LLMs) have shown significant capability across various
tasks, with their real-world effectiveness often driven by prompt design. While
recent research has focused on optimizing prompt content, the role of prompt
formatting, a critical but often overlooked dimension, has received limited
systematic investigation. In this paper, we introduce Content-Format Integrated
Prompt Optimization (CFPO), an innovative methodology that jointly optimizes
both prompt content and formatting through an iterative refinement process.
CFPO leverages natural language mutations to explore content variations and
employs a dynamic format exploration strategy that systematically evaluates
diverse format options. Our extensive evaluations across multiple tasks and
open-source LLMs demonstrate that CFPO demonstrates measurable performance
improvements compared to content-only optimization methods. This highlights the
importance of integrated content-format optimization and offers a practical,
model-agnostic approach to enhancing LLM performance. Code is available at
https://github.com/HenryLau7/CFPO.
|
2502.04296
|
Learning Real-World Action-Video Dynamics with Heterogeneous Masked
Autoregression
|
cs.RO cs.CV cs.LG
|
We propose Heterogeneous Masked Autoregression (HMA) for modeling
action-video dynamics to generate high-quality data and evaluation in scaling
robot learning. Building interactive video world models and policies for
robotics is difficult due to the challenge of handling diverse settings while
maintaining computational efficiency to run in real time. HMA uses
heterogeneous pre-training from observations and action sequences across
different robotic embodiments, domains, and tasks. HMA uses masked
autoregression to generate quantized or soft tokens for video predictions.
HMA achieves better visual fidelity and controllability than previous robotic
video generation models, with 15 times faster speed in the real world.
After post-training, this model can be used as a video simulator from low-level
action inputs for evaluating policies and generating synthetic data. See this
link https://liruiw.github.io/hma for more information.
|
2502.04297
|
Statistical guarantees for continuous-time policy evaluation: blessing
of ellipticity and new tradeoffs
|
cs.LG math.OC math.PR math.ST stat.TH
|
We study the estimation of the value function for continuous-time Markov
diffusion processes using a single, discretely observed ergodic trajectory. Our
work provides non-asymptotic statistical guarantees for the least-squares
temporal-difference (LSTD) method, with performance measured in the first-order
Sobolev norm. Specifically, the estimator attains an $O(1 / \sqrt{T})$
convergence rate when using a trajectory of length $T$; notably, this rate is
achieved as long as $T$ scales nearly linearly with both the mixing time of the
diffusion and the number of basis functions employed.
A key insight of our approach is that the ellipticity inherent in the
diffusion process ensures robust performance even as the effective horizon
diverges to infinity. Moreover, we demonstrate that the Markovian component of
the statistical error can be controlled by the approximation error, while the
martingale component grows at a slower rate relative to the number of basis
functions. By carefully balancing these two sources of error, our analysis
reveals novel trade-offs between approximation and statistical errors.
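The LSTD estimator at the heart of this analysis is easy to state in code. Below is a discrete-time sketch on a two-state ergodic chain (a stand-in for the paper's continuous-time diffusion setting), using one-hot basis functions and comparing against the exact value function:

```python
import numpy as np

rng = np.random.default_rng(1)

# A discretely observed ergodic trajectory on states {0, 1}.
T = 5000
gamma = 0.9
P = np.array([[0.9, 0.1], [0.2, 0.8]])   # transition matrix
r = np.array([0.0, 1.0])                  # state rewards
states = [0]
for _ in range(T):
    states.append(rng.choice(2, p=P[states[-1]]))
states = np.array(states)

# LSTD: solve A w = b with A = Phi^T (Phi - gamma Phi') / T, b = Phi^T r / T.
phi = np.eye(2)[states]                   # one-hot basis functions
A = phi[:-1].T @ (phi[:-1] - gamma * phi[1:]) / T
b = phi[:-1].T @ r[states[:-1]] / T
w = np.linalg.solve(A, b)                 # LSTD estimate of V = Phi w

# Ground truth for comparison: V = (I - gamma P)^{-1} r.
V_true = np.linalg.solve(np.eye(2) - gamma * P, r)
```

The estimation error here shrinks roughly like $O(1/\sqrt{T})$, the rate the abstract establishes (with much more care) for the diffusion setting.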
|
2502.04299
|
MotionCanvas: Cinematic Shot Design with Controllable Image-to-Video
Generation
|
cs.CV
|
This paper presents a method that allows users to design cinematic video
shots in the context of image-to-video generation. Shot design, a critical
aspect of filmmaking, involves meticulously planning both camera movements and
object motions in a scene. However, enabling intuitive shot design in modern
image-to-video generation systems presents two main challenges: first,
effectively capturing user intentions on the motion design, where both camera
movements and scene-space object motions must be specified jointly; and second,
representing motion information that can be effectively utilized by a video
diffusion model to synthesize the image animations. To address these
challenges, we introduce MotionCanvas, a method that integrates user-driven
controls into image-to-video (I2V) generation models, allowing users to control
both object and camera motions in a scene-aware manner. By connecting insights
from classical computer graphics and contemporary video generation techniques,
we demonstrate the ability to achieve 3D-aware motion control in I2V synthesis
without requiring costly 3D-related training data. MotionCanvas enables users
to intuitively depict scene-space motion intentions, and translates them into
spatiotemporal motion-conditioning signals for video diffusion models. We
demonstrate the effectiveness of our method on a wide range of real-world image
content and shot-design scenarios, highlighting its potential to enhance the
creative workflows in digital content creation and adapt to various image and
video editing applications.
|
2502.04302
|
Strong Equivalence in Answer Set Programming with Constraints
|
cs.AI cs.LO
|
We investigate the concept of strong equivalence within the extended
framework of Answer Set Programming with constraints. Two groups of rules are
considered strongly equivalent if, informally speaking, they have the same
meaning in any context. We demonstrate that, under certain assumptions, strong
equivalence between rule sets in this extended setting can be precisely
characterized by their equivalence in the logic of Here-and-There with
constraints. Furthermore, we present a translation from the language of several
clingo-based answer set solvers that handle constraints into the language of
Here-and-There with constraints. This translation enables us to leverage the
logic of Here-and-There to reason about strong equivalence within the context
of these solvers. We also explore the computational complexity of determining
strong equivalence in this context.
|
2502.04306
|
ScoreFlow: Mastering LLM Agent Workflows via Score-based Preference
Optimization
|
cs.CL
|
Recent research has leveraged large language model multi-agent systems for
complex problem-solving while trying to reduce the manual effort required to
build them, driving the development of automated agent workflow optimization
methods. However, existing methods remain inflexible due to representational
limitations, a lack of adaptability, and poor scalability when relying on
discrete optimization techniques. We address these challenges with ScoreFlow, a
simple yet high-performance framework that leverages efficient gradient-based
optimization in a continuous space. ScoreFlow incorporates Score-DPO, a novel
variant of the direct preference optimization method that accounts for
quantitative feedback. Across six benchmarks spanning question answering,
coding, and mathematical reasoning, ScoreFlow achieves an 8.2% improvement over
existing baselines. Moreover, it empowers smaller models to outperform larger
ones with lower inference costs. Project:
https://github.com/Gen-Verse/ScoreFlow
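A plausible shape for a DPO variant that "accounts for quantitative feedback" is to reweight the standard DPO margin loss by the score gap between responses. This is an illustrative assumption, not the paper's exact Score-DPO formulation:

```python
import math

def score_dpo_loss(logp_w, logp_l, ref_w, ref_l, score_gap, beta=0.1):
    """DPO-style objective reweighted by quantitative feedback: the usual
    implicit-reward margin against a reference model, scaled by the score
    gap between preferred and dispreferred responses. (Sketch only; the
    exact Score-DPO form is defined in the paper.)"""
    margin = beta * ((logp_w - ref_w) - (logp_l - ref_l))
    sigmoid = 1.0 / (1.0 + math.exp(-margin))
    return -score_gap * math.log(sigmoid)

# A wider policy margin over the reference model lowers the loss.
loss_small_margin = score_dpo_loss(-1.0, -2.0, -1.5, -1.5, score_gap=1.0)
loss_large_margin = score_dpo_loss(-0.5, -2.5, -1.5, -1.5, score_gap=1.0)
```

With `score_gap = 1` this reduces to vanilla DPO; larger gaps push the optimizer harder on pairs where the quantitative feedback is more decisive.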
|
2502.04307
|
DexterityGen: Foundation Controller for Unprecedented Dexterity
|
cs.RO cs.AI cs.LG cs.SY eess.SY
|
Teaching robots dexterous manipulation skills, such as tool use, presents a
significant challenge. Current approaches can be broadly categorized into two
strategies: human teleoperation (for imitation learning) and sim-to-real
reinforcement learning. The first approach is difficult as it is hard for
humans to produce safe and dexterous motions on a different embodiment without
touch feedback. The second RL-based approach struggles with the domain gap and
involves highly task-specific reward engineering on complex tasks. Our key
insight is that RL is effective at learning low-level motion primitives, while
humans excel at providing coarse motion commands for complex, long-horizon
tasks. Therefore, the optimal solution might be a combination of both
approaches. In this paper, we introduce DexterityGen (DexGen), which uses RL to
pretrain large-scale dexterous motion primitives, such as in-hand rotation or
translation. We then leverage this learned dataset to train a dexterous
foundational controller. In the real world, we use human teleoperation as a
prompt to the controller to produce highly dexterous behavior. We evaluate the
effectiveness of DexGen in both simulation and real world, demonstrating that
it is a general-purpose controller that can realize input dexterous
manipulation commands and significantly improves stability by 10-100x measured
as duration of holding objects across diverse tasks. Notably, with DexGen we
demonstrate unprecedented dexterous skills including diverse object
reorientation and dexterous tool use such as pen, syringe, and screwdriver for
the first time.
|
2502.04308
|
HOG-Diff: Higher-Order Guided Diffusion for Graph Generation
|
cs.LG cs.AI cs.SI physics.soc-ph
|
Graph generation is a critical yet challenging task as empirical analyses
require a deep understanding of complex, non-Euclidean structures. Although
diffusion models have recently made significant achievements in graph
generation, these models typically adapt from the frameworks designed for image
generation, making them ill-suited for capturing the topological properties of
graphs. In this work, we propose a novel Higher-order Guided Diffusion
(HOG-Diff) model that follows a coarse-to-fine generation curriculum and is
guided by higher-order information, enabling the progressive generation of
plausible graphs with inherent topological structures. We further prove that
our model exhibits a stronger theoretical guarantee than classical diffusion
frameworks. Extensive experiments on both molecular and generic graph
generation tasks demonstrate that our method consistently outperforms or
remains competitive with state-of-the-art baselines. Our code is available at
https://github.com/Yiminghh/HOG-Diff.
|
2502.04309
|
Targeted Learning for Data Fairness
|
cs.LG stat.ML
|
Data and algorithms have the potential to produce and perpetuate
discrimination and disparate treatment. As such, significant effort has been
invested in developing approaches to defining, detecting, and eliminating
unfair outcomes in algorithms. In this paper, we focus on performing
statistical inference for fairness. Prior work in fairness inference has
largely focused on inferring the fairness properties of a given predictive
algorithm. Here, we expand fairness inference by evaluating fairness in the
data generating process itself, referred to here as data fairness. We perform
inference on data fairness using targeted learning, a flexible framework for
nonparametric inference. We derive estimators for demographic parity, equal
opportunity, and conditional mutual information. Additionally, we find that our
estimators for probabilistic metrics exploit double robustness. To validate our
approach, we perform several simulations and apply our estimators to real data.
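The simplest of these targets, the demographic-parity gap of the data-generating process, has an obvious plug-in estimate; the targeted-learning machinery adds double robustness and valid inference on top of this idea (not sketched here):

```python
import numpy as np

rng = np.random.default_rng(2)

def demographic_parity_gap(y, a):
    """Plug-in estimate of the demographic-parity gap of the data itself:
    E[Y | A=1] - E[Y | A=0]."""
    return y[a == 1].mean() - y[a == 0].mean()

# Synthetic data-generating process with a built-in 0.2 disparity.
n = 20000
a = rng.integers(0, 2, n)                       # protected attribute
y = (rng.random(n) < 0.4 + 0.2 * a).astype(float)  # outcome
gap = demographic_parity_gap(y, a)
```

Note the estimand is a property of the data distribution, not of any fitted predictor, which is exactly the shift from algorithm fairness to data fairness described above.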
|
2502.04310
|
Finding Pegasus: Enhancing Unsupervised Anomaly Detection in
High-Dimensional Data using a Manifold-Based Approach
|
cs.LG astro-ph.CO
|
Unsupervised machine learning methods are well suited to searching for
anomalies at scale but can struggle with the high-dimensional representation of
many modern datasets, hence dimensionality reduction (DR) is often performed
first. In this paper we analyse unsupervised anomaly detection (AD) from the
perspective of the manifold created in DR. We present an idealised
illustration, "Finding Pegasus", and a novel formal framework with which we
categorise AD methods and their results into "on manifold" and "off manifold".
We define these terms and show how they differ. We then use this insight to
develop an approach of combining AD methods which significantly boosts AD
recall without sacrificing precision in situations employing high DR. When
tested on MNIST data, our approach of combining AD methods improves recall by
as much as 16 percent compared with simply combining with the best standalone
AD method (Isolation Forest), a result which shows great promise for its
application to real-world data.
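The recall gain from combining detectors can be seen in a tiny example: when an "on manifold" and an "off manifold" detector catch disjoint subsets of the anomalies, a union of their flags recovers both without hurting precision. The flags below are made up for illustration:

```python
import numpy as np

def combine_flags(flags_a, flags_b):
    """Union-combine anomaly flags from two detectors (a sketch of the
    combination idea, not of the paper's specific detectors)."""
    return flags_a | flags_b

# Hypothetical ground truth and two detectors catching different anomalies.
truth   = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)
det_on  = np.array([1, 1, 0, 0, 0, 0, 0, 0], dtype=bool)  # "on manifold"
det_off = np.array([0, 0, 1, 1, 0, 0, 0, 0], dtype=bool)  # "off manifold"

combined = combine_flags(det_on, det_off)
recall_on      = (det_on & truth).sum() / truth.sum()
recall_comb    = (combined & truth).sum() / combined.size * 2  # see note
recall_comb    = (combined & truth).sum() / truth.sum()
precision_comb = (combined & truth).sum() / combined.sum()
```

In real data the detectors' false positives overlap less cleanly, which is why the paper's manifold-based categorization matters for choosing which methods to combine.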
|
2502.04312
|
Consistency of augmentation graph and network approximability in
contrastive learning
|
cs.LG math.AP math.SP
|
Contrastive learning leverages data augmentation to develop feature
representation without relying on large labeled datasets. However, despite its
empirical success, the theoretical foundations of contrastive learning remain
incomplete, with many essential guarantees left unaddressed, particularly the
realizability assumption concerning neural approximability of an optimal
spectral contrastive loss solution. In this work, we overcome these limitations
by analyzing the pointwise and spectral consistency of the augmentation graph
Laplacian. We establish that, under specific conditions for data generation and
graph connectivity, as the augmented dataset size increases, the augmentation
graph Laplacian converges to a weighted Laplace-Beltrami operator on the
natural data manifold. These consistency results ensure that the graph
Laplacian spectrum effectively captures the manifold geometry. Consequently,
they give way to a robust framework for establishing neural approximability,
directly resolving the realizability assumption in a current paradigm.
|
2502.04313
|
Great Models Think Alike and this Undermines AI Oversight
|
cs.LG cs.AI cs.CL
|
As Language Model (LM) capabilities advance, evaluating and supervising them
at scale is getting harder for humans. There is hope that other language models
can automate both these tasks, which we refer to as "AI Oversight". We study
how model similarity affects both aspects of AI oversight by proposing a
probabilistic metric for LM similarity based on overlap in model mistakes.
Using this metric, we first show that LLM-as-a-judge scores favor models
similar to the judge, generalizing recent self-preference results. Then, we
study training on LM annotations, and find complementary knowledge between the
weak supervisor and strong student model plays a crucial role in gains from
"weak-to-strong generalization". As model capabilities increase, it becomes
harder to find their mistakes, and we might defer more to AI oversight.
However, we observe a concerning trend -- model mistakes are becoming more
similar with increasing capabilities, pointing to risks from correlated
failures. Our work underscores the importance of reporting and correcting for
model similarity, especially in the emerging paradigm of AI oversight.
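A simple deterministic stand-in for the paper's probabilistic similarity metric is the Jaccard overlap of two models' mistake sets on a shared benchmark:

```python
import numpy as np

def mistake_overlap(preds_a, preds_b, labels):
    """Jaccard overlap of two models' mistake sets: |A wrong AND B wrong|
    / |A wrong OR B wrong|. A simple stand-in for the paper's
    probabilistic metric."""
    wrong_a = preds_a != labels
    wrong_b = preds_b != labels
    union = (wrong_a | wrong_b).sum()
    return (wrong_a & wrong_b).sum() / union if union else 0.0

labels  = np.array([0, 1, 0, 1, 0, 1])
model_a = np.array([0, 1, 1, 1, 0, 0])  # mistakes at indices 2, 5
model_b = np.array([0, 1, 1, 0, 0, 1])  # mistakes at indices 2, 3
sim = mistake_overlap(model_a, model_b, labels)  # shared {2}, union {2, 3, 5}
```

High overlap on held-out data is exactly the correlated-failure signal the abstract warns about: two such models are weak checks on each other.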
|
2502.04314
|
BOUQuET: dataset, Benchmark and Open initiative for Universal Quality
Evaluation in Translation
|
cs.CL
|
This paper presents BOUQuET, a multicentric and multi-register/domain dataset
and benchmark, and its broader collaborative extension initiative. This dataset
is handcrafted in non-English languages first, each of these source languages
being represented among the 23 languages commonly used by half of the world's
population and therefore having the potential to serve as pivot languages that
will enable more accurate translations. The dataset is specially designed to
avoid contamination and be multicentric, so as to enforce representation of
multilingual language features. In addition, the dataset goes beyond the
sentence level, as it is organized in paragraphs of various lengths. Compared
with related machine translation (MT) datasets, we show that BOUQuET has a
broader representation of domains while simplifying the translation task for
non-experts. Therefore, BOUQuET is especially suitable for the open initiative
and call for translation participation that we are launching to extend it to a
multi-way parallel corpus to any written language.
|
2502.04315
|
ChameleonLLM: Batch-Aware Dynamic Low-Rank Adaptation via Inference-Time
Clusters
|
cs.CL cs.AI cs.LG
|
Recent advances in large language models (LLMs) have shown remarkable
performance across diverse tasks. However, these models are typically deployed
with fixed weights, which limits their ability to adapt dynamically to the
variability inherent in real-world data during inference. This paper introduces
ChameleonLLM, a novel framework that enables inference-time adaptation of LLMs
by leveraging batch-aware clustering and on-the-fly generation of low-rank
updates. Unlike traditional fine-tuning approaches such as Low-Rank Adaptation
(LoRA) or methods that rely on a fixed set of pre-learned uniforms (changeable
masks), our method dynamically generates adaptive modifications to the decoder
weights based on the aggregated statistics of clustered batches. By
intelligently grouping similar inputs and computing context-aware low-rank
updates via a hyper-network, ChameleonLLM achieves significant performance
gains, outperforming conventional LoRA methods while eliminating the overhead
of maintaining multiple expert models. Our experiments highlight the potential
of our approach to serve as a versatile and highly adaptive solution for
language model inference. ChameleonLLM is open-sourced to ensure the
reproducibility of our experiments:
https://anonymous.4open.science/r/ChamaleonLLM/
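The batch-aware low-rank adaptation the abstract describes can be sketched in a few lines: summarize a cluster of similar inputs, let a hypernetwork map that summary to rank-1 factors, and apply the update without materializing it. This is a minimal illustration only; the clustering step, the "hypernetwork" (a fixed linear map here), and all dimensions are invented assumptions, not ChameleonLLM's implementation.

```python
# Toy sketch of batch-aware dynamic low-rank adaptation (pure Python).
# Assumptions: one cluster per batch, rank-1 update, hypothetical hypernetwork.

def centroid(vectors):
    # Aggregate statistic of a cluster: the mean of its input vectors.
    n, dim = len(vectors), len(vectors[0])
    return [sum(v[i] for v in vectors) / n for i in range(dim)]

def hyper_net(c):
    # Hypothetical hypernetwork: derives rank-1 factors a, b from the centroid.
    a = [0.1 * x for x in c]
    b = [0.2 * x for x in reversed(c)]
    return a, b

def adapted_matvec(W, x, a, b):
    # Compute (W + a b^T) x without ever materializing the update matrix.
    bx = sum(bi * xi for bi, xi in zip(b, x))
    return [sum(wij * xj for wij, xj in zip(row, x)) + ai * bx
            for row, ai in zip(W, a)]

W = [[1.0, 0.0], [0.0, 1.0]]          # frozen base weight
batch = [[1.0, 2.0], [3.0, 2.0]]      # one "cluster" of similar inputs
a, b = hyper_net(centroid(batch))     # on-the-fly low-rank factors
print([round(v, 3) for v in adapted_matvec(W, batch[0], a, b)])
```

Applying the update as two inner products keeps the per-token overhead linear in the hidden size, which is what makes generating a fresh update per batch cheap.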
|
2502.04317
|
Factorized Implicit Global Convolution for Automotive Computational
Fluid Dynamics Prediction
|
cs.CV
|
Computational Fluid Dynamics (CFD) is crucial for automotive design,
requiring the analysis of large 3D point clouds to study how vehicle geometry
affects pressure fields and drag forces. However, existing deep learning
approaches for CFD struggle with the computational complexity of processing
high-resolution 3D data. We propose Factorized Implicit Global Convolution
(FIGConv), a novel architecture that efficiently solves CFD problems for very
large 3D meshes with arbitrary input and output geometries. FIGConv achieves
quadratic complexity $O(N^2)$, a significant improvement over existing 3D
neural CFD models that require cubic complexity $O(N^3)$. Our approach combines
Factorized Implicit Grids to approximate high-resolution domains, efficient
global convolutions through 2D reparameterization, and a U-shaped architecture
for effective information gathering and integration. We validate our approach
on the industry-standard Ahmed body dataset and the large-scale DrivAerNet
dataset. In DrivAerNet, our model achieves an $R^2$ value of 0.95 for drag
prediction, outperforming the previous state-of-the-art by a significant
margin. This represents a 40% improvement in relative mean squared error and a
70% improvement in absolute mean squared error over previous methods.
|
2502.04318
|
sshELF: Single-Shot Hierarchical Extrapolation of Latent Features for 3D
Reconstruction from Sparse-Views
|
cs.CV
|
Reconstructing unbounded outdoor scenes from sparse outward-facing views
poses significant challenges due to minimal view overlap. Previous methods
often lack cross-scene understanding and their primitive-centric formulations
overload local features to compensate for missing global context, resulting in
blurriness in unseen parts of the scene. We propose sshELF, a fast, single-shot
pipeline for sparse-view 3D scene reconstruction via hierarchical extrapolation
of latent features. Our key insight is that disentangling information
extrapolation from primitive decoding allows efficient transfer of structural
patterns across training scenes. Our method: (1) learns cross-scene priors to
generate intermediate virtual views to extrapolate to unobserved regions, (2)
offers a two-stage network design separating virtual view generation from 3D
primitive decoding for efficient training and modular model design, and (3)
integrates a pre-trained foundation model for joint inference of latent
features and texture, improving scene understanding and generalization. sshELF
can reconstruct 360 degree scenes from six sparse input views and achieves
competitive results on synthetic and real-world datasets. We find that sshELF
faithfully reconstructs occluded regions, supports real-time rendering, and
provides rich latent features for downstream applications. The code will be
released.
|
2502.04320
|
ConceptAttention: Diffusion Transformers Learn Highly Interpretable
Features
|
cs.CV cs.LG
|
Do the rich representations of multi-modal diffusion transformers (DiTs)
exhibit unique properties that enhance their interpretability? We introduce
ConceptAttention, a novel method that leverages the expressive power of DiT
attention layers to generate high-quality saliency maps that precisely locate
textual concepts within images. Without requiring additional training,
ConceptAttention repurposes the parameters of DiT attention layers to produce
highly contextualized concept embeddings, contributing the major discovery that
performing linear projections in the output space of DiT attention layers
yields significantly sharper saliency maps compared to commonly used
cross-attention mechanisms. Remarkably, ConceptAttention even achieves
state-of-the-art performance on zero-shot image segmentation benchmarks,
outperforming 11 other zero-shot interpretability methods on the
ImageNet-Segmentation dataset and on a single-class subset of PascalVOC. Our
work contributes the first evidence that the representations of multi-modal DiT
models like Flux are highly transferable to vision tasks like segmentation,
even outperforming multi-modal foundation models like CLIP.
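The core operation the abstract attributes to ConceptAttention, scoring image tokens against a concept embedding by a linear projection in the attention layer's output space, reduces to per-token dot products. The toy token grid, embeddings, and dimensions below are made up for illustration.

```python
# Illustrative saliency-by-linear-projection sketch (not the DiT pipeline).

def saliency(token_outputs, concept):
    # One dot product per spatial token = a linear projection in output space.
    return [sum(t * c for t, c in zip(tok, concept)) for tok in token_outputs]

# Four tokens of a 2x2 "image"; the first token matches the concept direction.
tokens = [[1.0, 0.0], [0.1, 0.2], [0.0, 0.1], [0.2, 0.0]]
concept = [1.0, 0.0]   # hypothetical embedding for a textual concept
scores = saliency(tokens, concept)
print(scores.index(max(scores)))   # most salient token
```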
|
2502.04321
|
Variation of sentence length across time and genre
|
cs.CL
|
The goal of this paper is threefold: i) to present some practical aspects of
using full-text version of Corpus of Historical American English (COHA), the
largest diachronic multi-genre corpus of the English language, in the
investigation of a linguistic trend of change; ii) to test a widely held
assumption that sentence length in written English has been steadily decreasing
over the past few centuries; iii) to point to a possible link between the
changes in sentence length and changes in the English syntactic usage. The
empirical proof of concept for iii) is provided by the decline in the frequency
of the non-finite purpose subordinator "in order to". Sentence length, genre,
and the likelihood of occurrence of "in order to" are shown to be interrelated.
|
2502.04322
|
Speak Easy: Eliciting Harmful Jailbreaks from LLMs with Simple
Interactions
|
cs.LG cs.AI cs.CL cs.CY
|
Despite extensive safety alignment efforts, large language models (LLMs)
remain vulnerable to jailbreak attacks that elicit harmful behavior. While
existing studies predominantly focus on attack methods that require technical
expertise, two critical questions remain underexplored: (1) Are jailbroken
responses truly useful in enabling average users to carry out harmful actions?
(2) Do safety vulnerabilities exist in more common, simple human-LLM
interactions? In this paper, we demonstrate that LLM responses most effectively
facilitate harmful actions when they are both actionable and informative--two
attributes easily elicited in multi-step, multilingual interactions. Using this
insight, we propose HarmScore, a jailbreak metric that measures how effectively
an LLM response enables harmful actions, and Speak Easy, a simple multi-step,
multilingual attack framework. Notably, by incorporating Speak Easy into direct
request and jailbreak baselines, we see an average absolute increase of 0.319
in Attack Success Rate and 0.426 in HarmScore in both open-source and
proprietary LLMs across four safety benchmarks. Our work reveals a critical yet
often overlooked vulnerability: Malicious users can easily exploit common
interaction patterns for harmful intentions.
|
2502.04323
|
The Uniformly Rotated Mondrian Kernel
|
cs.LG math.PR
|
First proposed by Rahimi and Recht, random features are used to decrease the
computational cost of kernel machines in large-scale problems. The Mondrian
kernel is one such example of a fast random feature approximation of the
Laplace kernel, generated by a computationally efficient hierarchical random
partition of the input space known as the Mondrian process. In this work, we
study a variation of this random feature map by using uniformly randomly
rotated Mondrian processes to approximate a kernel that is invariant under
rotations. We obtain a closed-form expression for this isotropic kernel, as
well as a uniform convergence rate of the uniformly rotated Mondrian kernel to
this limit. To this end, we utilize techniques from the theory of stationary
random tessellations in stochastic geometry and prove a new result on the
geometry of the typical cell of the superposition of uniformly random rotations
of Mondrian tessellations. Finally, we test the empirical performance of this
random feature map on both synthetic and real-world datasets, demonstrating its
improved performance over the Mondrian kernel on a debiased dataset.
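A drastically simplified Monte Carlo sketch of the construction described here: apply an independent uniformly random rotation before building each random axis-aligned partition, and estimate the kernel as the fraction of partitions in which two points share a cell. A real Mondrian process uses recursive cuts with exponential split times; the random-grid "partition" below is an assumption made to keep the sketch short.

```python
# Simplified rotated random-partition kernel estimate (illustrative only).
import math, random

def random_rotation_2d(rng):
    t = rng.uniform(0, 2 * math.pi)
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

def cell_id(x, cuts_per_dim):
    # Index of the axis-aligned grid cell containing point x.
    return tuple(sum(1 for c in cuts if v > c) for v, cuts in zip(x, cuts_per_dim))

def kernel_estimate(x, y, n_partitions=2000, seed=0):
    rng = random.Random(seed)
    same = 0
    for _ in range(n_partitions):
        R = random_rotation_2d(rng)                 # uniformly random rotation
        cuts = [sorted(rng.uniform(-3, 3) for _ in range(3)) for _ in range(2)]
        rx = [R[0][0] * x[0] + R[0][1] * x[1], R[1][0] * x[0] + R[1][1] * x[1]]
        ry = [R[0][0] * y[0] + R[0][1] * y[1], R[1][0] * y[0] + R[1][1] * y[1]]
        same += cell_id(rx, cuts) == cell_id(ry, cuts)
    # Fraction of partitions where x and y share a cell -> kernel value in [0,1].
    return same / n_partitions

k_close = kernel_estimate([0.0, 0.0], [0.1, 0.1])
k_far = kernel_estimate([0.0, 0.0], [2.0, 2.0])
print(k_close > k_far)
```

Because distances are preserved under rotation and the partitions are averaged over uniformly random rotations, the estimated kernel depends only on the distance between the two points, i.e., it is isotropic.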
|
2502.04324
|
Can Grammarly and ChatGPT accelerate language change? AI-powered
technologies and their impact on the English language: wordiness vs.
conciseness
|
cs.CL cs.CY
|
The proliferation of NLP-powered language technologies, AI-based natural
language generation models, and English as a mainstream means of communication
among both native and non-native speakers make the output of AI-powered tools
especially intriguing to linguists. This paper investigates how Grammarly and
ChatGPT affect the English language regarding wordiness vs. conciseness. A case
study focusing on the purpose subordinator "in order to" is presented to
illustrate the way in which Grammarly and ChatGPT recommend shorter grammatical
structures instead of longer and more elaborate ones. Although the analysed
sentences were produced by native speakers, are perfectly correct, and were
extracted from a language corpus of contemporary English, both Grammarly and
ChatGPT suggest more conciseness and less verbosity, even for relatively short
sentences. The present article argues that technologies such as Grammarly not
only mirror language change but also have the potential to facilitate or
accelerate it.
|
2502.04326
|
WorldSense: Evaluating Real-world Omnimodal Understanding for Multimodal
LLMs
|
cs.CV cs.AI
|
In this paper, we introduce WorldSense, the first benchmark to assess
multi-modal video understanding that simultaneously encompasses visual, audio,
and text inputs. In contrast to existing benchmarks, our WorldSense has several
features: (i) collaboration of omni-modality, we design the evaluation tasks to
feature a strong coupling of audio and video, requiring models to effectively
utilize the synergistic perception of omni-modality; (ii) diversity of videos
and tasks, WorldSense encompasses a diverse collection of 1,662 audio-visual
synchronised videos, systematically categorized into 8 primary domains and 67
fine-grained subcategories to cover the broad scenarios, and 3,172 multi-choice
QA pairs across 26 distinct tasks to enable the comprehensive evaluation; (iii)
high-quality annotations, all the QA pairs are manually labeled by 80 expert
annotators with multiple rounds of correction to ensure quality. Based on our
WorldSense, we extensively evaluate various state-of-the-art models. The
experimental results indicate that existing models face significant challenges
in understanding real-world scenarios (48.0% best accuracy). We hope our
WorldSense can provide a platform for evaluating the ability in constructing
and understanding coherent contexts from omni-modality.
|
2502.04327
|
Value-Based Deep RL Scales Predictably
|
cs.LG
|
Scaling data and compute is critical to the success of machine learning.
However, scaling demands predictability: we want methods to not only perform
well with more compute or data, but also have their performance be predictable
from small-scale runs, without running the large-scale experiment. In this
paper, we show that value-based off-policy RL methods are predictable despite
community lore regarding their pathological behavior. First, we show that data
and compute requirements to attain a given performance level lie on a Pareto
frontier, controlled by the updates-to-data (UTD) ratio. By estimating this
frontier, we can predict this data requirement when given more compute, and
this compute requirement when given more data. Second, we determine the optimal
allocation of a total resource budget across data and compute for a given
performance and use it to determine hyperparameters that maximize performance
for a given budget. Third, this scaling behavior is enabled by first estimating
predictable relationships between hyperparameters, which is used to manage
effects of overfitting and plasticity loss unique to RL. We validate our
approach using three algorithms: SAC, BRO, and PQL on DeepMind Control, OpenAI
gym, and IsaacGym, when extrapolating to higher levels of data, compute,
budget, or performance.
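The extrapolation workflow the abstract describes (fit small-scale runs, predict large-scale requirements) can be illustrated with a power-law fit in log space. The (compute, data) points below are synthetic and the scalar power law is a stand-in for the paper's UTD-controlled Pareto frontier.

```python
# Hedged sketch: fit D = alpha * C^beta to small-scale points that reach a
# target return, then extrapolate the data requirement at larger compute.
import math

def fit_power_law(cs, ds):
    # Linear regression in log space: log D = log alpha + beta * log C.
    n = len(cs)
    lx = [math.log(c) for c in cs]
    ly = [math.log(d) for d in ds]
    mx, my = sum(lx) / n, sum(ly) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
            / sum((x - mx) ** 2 for x in lx))
    alpha = math.exp(my - beta * mx)
    return alpha, beta

cs = [1.0, 2.0, 4.0, 8.0]        # small-scale compute budgets (synthetic)
ds = [100.0, 70.7, 50.0, 35.4]   # data needed at each, roughly D ~ 100 * C^-0.5
alpha, beta = fit_power_law(cs, ds)
pred = alpha * 16.0 ** beta      # predicted data need at a 2x larger budget
print(round(beta, 2), round(pred, 1))
```

The point of the fit is exactly the abstract's claim: once the frontier is estimated from cheap runs, the data requirement at a larger compute budget follows without running the large-scale experiment.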
|
2502.04328
|
Ola: Pushing the Frontiers of Omni-Modal Language Model with Progressive
Modality Alignment
|
cs.CV cs.CL cs.MM cs.SD eess.AS eess.IV
|
Recent advances in large language models, particularly following GPT-4o, have
sparked increasing interest in developing omni-modal models capable of
understanding more modalities. While some open-source alternatives have
emerged, there is still a notable lag behind specialized single-modality models
in performance. In this paper, we present Ola, an Omni-modal language model
that achieves competitive performance across image, video, and audio
understanding compared to specialized counterparts. The core design of Ola lies
in its progressive modality alignment strategy that extends the supporting
modality of the language model progressively. Our training pipeline begins with
the most distinct modalities: image and text, then gradually expands the skill
sets of the model using speech data that connects language and audio knowledge,
and video data that connects all modalities. The progressive learning pipeline
also enables us to maintain a relatively small size of the cross-modal
alignment data, making it easier and less costly to develop omni-modal models
from existing vision-language models. Moreover, to unlock an advanced interactive
experience like GPT-4o, we further design a sentence-wise decoding solution for
streaming speech generation. Extensive experiments demonstrate that Ola
surpasses existing open omni-modal LLMs across all modalities while achieving
highly competitive performance compared to state-of-the-art specialized models
of similar sizes. We aim to make Ola a fully open omni-modal understanding
solution to advance future research in this emerging field. Model weights,
code, and data are open-sourced at https://github.com/Ola-Omni/Ola.
|
2502.04329
|
SMART: Advancing Scalable Map Priors for Driving Topology Reasoning
|
cs.CV cs.RO
|
Topology reasoning is crucial for autonomous driving as it enables
comprehensive understanding of connectivity and relationships between lanes and
traffic elements. While recent approaches have shown success in perceiving
driving topology using vehicle-mounted sensors, their scalability is hindered
by the reliance on training data captured by consistent sensor configurations.
We identify that the key factor in scalable lane perception and topology
reasoning is the elimination of this sensor-dependent feature. To address this,
we propose SMART, a scalable solution that leverages easily available
standard-definition (SD) and satellite maps to learn a map prior model,
supervised by large-scale geo-referenced high-definition (HD) maps independent
of sensor settings. Attributed to scaled training, SMART alone achieves
superior offline lane topology understanding using only SD and satellite
inputs. Extensive experiments further demonstrate that SMART can be seamlessly
integrated into any online topology reasoning methods, yielding significant
improvements of up to 28% on the OpenLane-V2 benchmark.
|
2502.04339
|
Analysis of Diffusion Models for Manifold Data
|
math.ST cond-mat.dis-nn cs.IT cs.LG math.IT math.PR stat.TH
|
We analyze the time reversed dynamics of generative diffusion models. If the
exact empirical score function is used in a regime of large dimension and
exponentially large number of samples, these models are known to undergo
transitions between distinct dynamical regimes. We extend this analysis and
compute the transitions for an analytically tractable manifold model where the
statistical model for the data is a mixture of lower dimensional Gaussians
embedded in higher dimensional space. We compute the so-called speciation and
collapse transition times, as a function of the ratio of manifold-to-ambient
space dimensions, and other characteristics of the data model. An important
tool used in our analysis is the exact formula for the mutual information (or
free energy) of Generalized Linear Models.
|
2502.04341
|
Comparative Analysis of Community Detection Algorithms on the SNAP
Social Circles Dataset
|
cs.SI cs.AI
|
Community detection has long been a topic of significant interest in network
science, with numerous papers and algorithms proposed to uncover the underlying
structures within networks. In this paper,
we conduct a comparative analysis of several prominent community detection
algorithms applied to the SNAP Social Circles Dataset, derived from the
Facebook Social Media network. The algorithms implemented include Louvain,
Girvan-Newman, Spectral Clustering, K-Means Clustering, etc. We evaluate the
performance of these algorithms based on various metrics such as modularity,
normalized cut-ratio, silhouette score, compactness, and separability. Our
findings reveal insights into the effectiveness of each algorithm in detecting
various meaningful communities within the social network, shedding light on
their strengths and limitations. This research contributes to the understanding
of community detection methods and provides valuable guidance for their
application in analyzing real-world social networks.
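Modularity, the first of the evaluation metrics listed above, can be computed directly from its standard definition for an undirected, unweighted graph. The two-triangle graph and the partition below are toy data for illustration.

```python
# Minimal modularity computation: Q = sum_c [ l_c/m - (d_c / 2m)^2 ],
# where l_c is the number of intra-community edges and d_c the degree sum.

def modularity(edges, communities):
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    comm_of = {n: c for c, nodes in enumerate(communities) for n in nodes}
    q = 0.0
    for c, nodes in enumerate(communities):
        l_c = sum(1 for u, v in edges if comm_of[u] == c and comm_of[v] == c)
        d_c = sum(deg[n] for n in nodes)
        q += l_c / m - (d_c / (2 * m)) ** 2
    return q

# Two triangles joined by a single bridge edge; the natural split scores well.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(round(modularity(edges, [{0, 1, 2}, {3, 4, 5}]), 3))
```

Higher Q means more intra-community edges than expected under a degree-preserving random rewiring, which is why modularity is a common yardstick for comparing detection algorithms.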
|
2502.04342
|
Tutorial on Using Machine Learning and Deep Learning Models for Mental
Illness Detection
|
cs.CL cs.AI cs.LG
|
Social media has become an important source for understanding mental health,
providing researchers with a way to detect conditions like depression from
user-generated posts. This tutorial provides practical guidance to address
common challenges in applying machine learning and deep learning methods for
mental health detection on these platforms. It focuses on strategies for
working with diverse datasets, improving text preprocessing, and addressing
issues such as imbalanced data and model evaluation. Real-world examples and
step-by-step instructions demonstrate how to apply these techniques
effectively, with an emphasis on transparency, reproducibility, and ethical
considerations. By sharing these approaches, this tutorial aims to help
researchers build more reliable and widely applicable models for mental health
research, contributing to better tools for early detection and intervention.
|
2502.04343
|
Synergistic Traffic Assignment
|
cs.GT cs.MA math.OC
|
Traffic assignment analyzes traffic flows in road networks that emerge due to
traveler interaction. Traditionally, travelers are assumed to use private cars,
so road costs grow with the number of users due to congestion. However, in
sustainable transit systems, travelers share vehicles, so that more users on a
road lead to higher sharing potential and reduced cost per user. Thus, we invert the
usual avoidant traffic assignment (ATA) and instead consider synergistic
traffic assignment (STA) where road costs decrease with use.
We find that STA is significantly different from ATA from a game-theoretical
point of view. We show that a simple iterative best-response method with
simultaneous updates converges to an equilibrium state. This enables efficient
computation of equilibria using optimized speedup techniques for shortest-path
queries. In contrast, ATA requires slower sequential updates or more
complicated iteration schemes that only approximate an equilibrium. Experiments
with a realistic scenario for the city of Stuttgart indicate that STA indeed
quickly converges to an equilibrium. We envision STA as a part of
software-defined transportation systems that dynamically adapt to current
travel demand. As a first demonstration, we show that an STA equilibrium can be
used to incorporate traveler synergism in a simple bus line planning algorithm
to potentially greatly reduce the required vehicle resources.
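The simple iterative best-response scheme with simultaneous updates can be sketched on a toy network where, in the synergistic setting, per-user road cost falls with the number of users. The two-path network and the specific cost function `base / (1 + users)` are illustrative assumptions, not the paper's model of Stuttgart.

```python
# Sketch of synergistic traffic assignment (STA) via simultaneous best response.

def road_cost(base, users):
    # Synergistic cost: more users on a road -> lower cost per user.
    return base / (1 + users)

def best_response_sta(paths, base_costs, n_travelers, max_iter=50):
    """paths: dict name -> list of road ids; every traveler picks one path."""
    # Start from a mixed assignment to show travelers pooling onto one path.
    choice = {t: "A" if t % 2 == 0 else "B" for t in range(n_travelers)}
    for _ in range(max_iter):
        load = {}                              # current users per road
        for p in choice.values():
            for r in paths[p]:
                load[r] = load.get(r, 0) + 1
        new_choice = {}
        for t in range(n_travelers):           # simultaneous update for all
            def path_cost(p):
                return sum(road_cost(base_costs[r], load.get(r, 0))
                           for r in paths[p])
            new_choice[t] = min(paths, key=path_cost)
        if new_choice == choice:               # fixed point = equilibrium
            break
        choice = new_choice
    return choice

paths = {"A": ["r1"], "B": ["r2"]}             # two parallel routes
base = {"r1": 10.0, "r2": 12.0}
eq = best_response_sta(paths, base, n_travelers=5)
print(set(eq.values()))
```

Because costs fall with use, best responses reinforce the popular route and the simultaneous-update iteration settles quickly, the behavior the abstract contrasts with the slower sequential schemes ATA requires.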
|
2502.04345
|
JingFang: A Traditional Chinese Medicine Large Language Model of
Expert-Level Medical Diagnosis and Syndrome Differentiation-Based Treatment
|
cs.CL cs.AI cs.LG
|
Traditional Chinese medicine (TCM) plays a vital role in health protection
and disease treatment, but its practical application requires extensive medical
knowledge and clinical experience. Existing TCM Large Language Models (LLMs)
exhibit critical limitations of uncomprehensive medical consultation and
diagnoses, and inaccurate syndrome differentiation-based treatment. To address
these issues, this study establishes JingFang (JF): a novel TCM Large Language
Model that demonstrates the expert-level capability of medical diagnosis and
syndrome differentiation-based treatment. We devise a Multi-agent Dynamic
Collaborative Chain-of-Thought Mechanism (MDCCTM) for medical consultation,
enabling JF with effective and accurate diagnostic ability. In addition, a
Syndrome Agent and a Dual-Stage Retrieval Scheme (DSRS) are developed to
significantly enhance the capacity of JF for disease treatment based on
syndrome differentiation. JingFang not only facilitates the application of LLMs
but also promotes the effective practice of TCM in human health protection and
disease treatment.
|
2502.04346
|
Multi-Lingual Cyber Threat Detection in Tweets/X Using ML, DL, and LLM:
A Comparative Analysis
|
cs.CL cs.AI
|
Cyber threat detection has become an important area of focus in today's
digital age due to the growing spread of fake information and harmful content
on social media platforms such as Twitter (now 'X'). These cyber threats, often
disguised within tweets, pose significant risks to individuals, communities,
and even nations, emphasizing the need for effective detection systems. While
previous research has explored tweet-based threats, much of the work is limited
to specific languages, domains, or locations, or relies on single-model
approaches, reducing their applicability to diverse real-world scenarios. To
address these gaps, our study focuses on multi-lingual tweet cyber threat
detection using a variety of advanced models. The research was conducted in
three stages: (1) We collected and labeled tweet datasets in four languages
(English, Chinese, Russian, and Arabic), employing both manual and
polarity-based labeling methods to ensure high-quality annotations. (2) Each dataset was
analyzed individually using machine learning (ML) and deep learning (DL) models
to assess their performance on distinct languages. (3) Finally, we combined all
four datasets into a single multi-lingual dataset and applied DL and large
language model (LLM) architectures to evaluate their efficacy in identifying
cyber threats across various languages. Our results show that among machine
learning models, Random Forest (RF) attained the highest performance; however,
the Bi-LSTM architecture consistently surpassed other DL and LLM architectures
across all datasets. These findings underline the effectiveness of Bi-LSTM in
multilingual cyber threat detection. The code for this paper can be found at
this link: https://github.com/Mmurrad/Tweet-Data-Classification.git.
|
2502.04347
|
SCALM: Detecting Bad Practices in Smart Contracts Through LLMs
|
cs.CL cs.AI
|
As the Ethereum platform continues to mature and gain widespread usage, it is
crucial to maintain high standards of smart contract writing practices. While
bad practices in smart contracts may not directly lead to security issues, they
do elevate the risk of encountering problems. Therefore, to understand and
avoid these bad practices, this paper introduces the first systematic study of
bad practices in smart contracts, delving into over 35 specific issues.
Specifically, we propose a large language model (LLM)-based framework, SCALM.
It combines Step-Back Prompting and Retrieval-Augmented Generation (RAG) to
identify and address various bad practices effectively. Our extensive
experiments using multiple LLMs and datasets have shown that SCALM outperforms
existing tools in detecting bad practices in smart contracts.
|
2502.04348
|
Prompt-based Depth Pruning of Large Language Models
|
cs.CL cs.AI
|
Depth pruning aims to reduce the inference cost of a large language model
without any hardware-specific complications, by simply removing several less
important transformer blocks. However, our empirical findings suggest that the
importance of a transformer block may be highly task-dependent -- a block that
is crucial for a task can be removed without degrading the accuracy on another
task. Based on this observation, we develop a dynamic depth pruning algorithm,
coined PuDDing (Prompt-routed Dynamic Depth Pruning), which determines which
blocks to omit from the model based on the input prompt. PuDDing operates by
training a lightweight router to predict the best omission set among a set of
options, where this option set has also been constructed in a data-driven
manner. Empirical results on commonsense reasoning benchmarks demonstrate that
PuDDing effectively accelerates the inference of language models and achieves
better on-task performance than static depth pruning baselines.
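The routing step described above, a lightweight router picks one pre-built omission set per prompt and the model skips those blocks, can be sketched as follows. The omission sets, router weights, prompt features, and the stand-in "transformer" are all invented toys, not PuDDing's trained components.

```python
# Sketch of prompt-routed dynamic depth pruning.
import math

OMISSION_SETS = [{2, 5}, {7, 9}, set()]   # candidate block subsets to drop

def route(prompt_feats, router_w):
    # Linear router + argmax over omission-set logits.
    logits = [sum(w * f for w, f in zip(row, prompt_feats)) for row in router_w]
    return OMISSION_SETS[max(range(len(logits)), key=lambda i: logits[i])]

def forward(x, n_blocks, omitted):
    # Stand-in "transformer": each kept block applies a toy transformation.
    for b in range(n_blocks):
        if b in omitted:
            continue                       # pruned block: skipped at inference
        x = [math.tanh(v + 0.1 * b) for v in x]
    return x

router_w = [[1.0, -1.0], [-1.0, 1.0], [0.2, 0.2]]
omit = route([0.9, 0.1], router_w)         # prompt features favor set 0
print(sorted(omit))
out = forward([0.5, -0.5], n_blocks=10, omitted=omit)
print(len(out))
```

The router adds only one small matrix-vector product per prompt, which is why a per-prompt choice of omission set can still yield a net speedup.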
|
2502.04349
|
Dynamic benchmarking framework for LLM-based conversational data capture
|
cs.CL cs.AI
|
The rapid evolution of large language models (LLMs) has transformed
conversational agents, enabling complex human-machine interactions. However,
evaluation frameworks often focus on single tasks, failing to capture the
dynamic nature of multi-turn dialogues. This paper introduces a dynamic
benchmarking framework to assess LLM-based conversational agents through
interactions with synthetic users. The framework integrates generative agent
simulation to evaluate performance on key dimensions: information extraction,
context awareness, and adaptive engagement. By simulating various aspects of
user behavior, our work provides a scalable, automated, and flexible
benchmarking approach. Experimental evaluation - within a loan application use
case - demonstrates the framework's effectiveness under one-shot and few-shot
extraction conditions. Results show that adaptive strategies improve data
extraction accuracy, especially when handling ambiguous responses. Future work
will extend its applicability to broader domains and incorporate additional
metrics (e.g., conversational coherence, user engagement). This study
contributes a structured, scalable approach to evaluating LLM-based
conversational agents, facilitating real-world deployment.
|
2502.04350
|
CodeSteer: Symbolic-Augmented Language Models via Code/Text Guidance
|
cs.CL cs.AI cs.LG cs.SC cs.SE
|
Existing methods fail to effectively steer Large Language Models (LLMs)
between textual reasoning and code generation, leaving symbolic computing
capabilities underutilized. We introduce CodeSteer, an effective method for
guiding LLM code/text generation. We construct a comprehensive benchmark
SymBench comprising 37 symbolic tasks with adjustable complexity and also
synthesize datasets of 12k multi-round guidance/generation trajectories and
5.5k guidance comparison pairs. We fine-tune the Llama-3-8B model with a newly
designed multi-round supervised fine-tuning (SFT) and direct preference
optimization (DPO). The resulting model, CodeSteerLLM, augmented with the
proposed symbolic and self-answer checkers, effectively guides the code/text
generation of larger models. Augmenting GPT-4o with CodeSteer raises its
average performance score from 53.3 to 86.4, even outperforming the existing
best LLM OpenAI o1 (82.7), o1-preview (74.8), and DeepSeek R1 (76.8) across all
37 tasks (28 seen, 9 unseen). Trained for GPT-4o, CodeSteer demonstrates
superior generalizability, providing an average 41.8 performance boost on
Claude, Mistral, and GPT-3.5. CodeSteer-guided LLMs fully harness symbolic
computing to maintain strong performance on highly complex tasks. Models,
Datasets, and Codes are available at
https://github.com/yongchao98/CodeSteer-v1.0.
|
2502.04351
|
NER4all or Context is All You Need: Using LLMs for low-effort,
high-performance NER on historical texts. A humanities informed approach
|
cs.CL cs.AI
|
Named entity recognition (NER) is a core task for historical research in
automatically establishing all references to people, places, events and the
like. Yet, do to the high linguistic and genre diversity of sources, only
limited canonisation of spellings, the level of required historical domain
knowledge, and the scarcity of annotated training data, established approaches
to natural language processing (NLP) have been both extremely expensive and
yielded only unsatisfactory results in terms of recall and precision. Our paper
introduces a new approach. We demonstrate how readily-available,
state-of-the-art LLMs significantly outperform two leading NLP frameworks,
spaCy and flair, for NER in historical documents, with seven to twenty-two
percent higher F1-scores. Our ablation study shows how providing historical context to
the task and a bit of persona modelling that turns focus away from a purely
linguistic approach are core to a successful prompting strategy. We also
demonstrate that, contrary to our expectations, providing increasing numbers of
examples in few-shot approaches does not improve recall or precision below a
threshold of 16-shot. In consequence, our approach democratises access to NER
for all historians by removing the barrier of scripting languages and
computational skills required for established NLP tools and instead leveraging
natural language prompts and consumer-grade tools and frontends.
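A prompt in the spirit of the recipe above (historical context plus light persona modelling, phrased in natural language rather than NLP tooling) might look like the sketch below. The wording, the helper name, and the sample passage are our own guesses, not the paper's actual prompt.

```python
# Hypothetical prompt builder for LLM-based NER on historical text.

def build_ner_prompt(text, year, place):
    return (
        f"You are a historian familiar with {place} around {year}, including "
        "period spellings and naming conventions.\n"
        "List every person, place, and event mentioned in the passage below, "
        "one entry per line in the form TYPE: surface form.\n\n"
        f"Passage:\n{text}"
    )

prompt = build_ner_prompt("Anno 1632, Herr Mueller reiste nach Coelln.",
                          1632, "the Rhineland")
print(prompt.splitlines()[0])
```

The persona line shifts the model away from a purely linguistic reading, and the period context helps it treat non-canonical spellings as entity variants rather than noise.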
|
2502.04352
|
Investigating the Robustness of Deductive Reasoning with Large Language
Models
|
cs.CL cs.AI
|
Large Language Models (LLMs) have been shown to achieve impressive results
for many reasoning-based Natural Language Processing (NLP) tasks, suggesting a
degree of deductive reasoning capability. However, it remains unclear to what
extent LLMs, in both informal and autoformalisation methods, are robust on
logical deduction tasks. Moreover, while many LLM-based deduction methods have
been proposed, there is a lack of a systematic study that analyses the impact
of their design components. Addressing these two challenges, we propose the
first study of the robustness of LLM-based deductive reasoning methods. We
devise a framework with two families of perturbations: adversarial noise and
counterfactual statements, which jointly generate seven perturbed datasets. We
organize the landscape of LLM reasoners according to their reasoning format,
formalisation syntax, and feedback for error recovery. The results show that
adversarial noise affects autoformalisation, while counterfactual statements
influence all approaches. Detailed feedback does not improve overall accuracy
despite reducing syntax errors, pointing to the challenge of LLM-based methods
to self-correct effectively.
|
2502.04353
|
CognArtive: Large Language Models for Automating Art Analysis and
Decoding Aesthetic Elements
|
cs.CL cs.AI cs.CV
|
Art, as a universal language, can be interpreted in diverse ways, with
artworks embodying profound meanings and nuances. The advent of Large Language
Models (LLMs) and the availability of Multimodal Large Language Models (MLLMs)
raise the question of how these transformative models can be used to assess and
interpret the artistic elements of artworks. While research has been conducted
in this domain, to the best of our knowledge, a deep and detailed understanding
of the technical and expressive features of artworks using LLMs has not been
explored. In this study, we investigate the automation of a formal art analysis
framework to rapidly analyze a large volume of artworks and examine
how their patterns evolve over time. We explore how LLMs can decode artistic
expressions, visual elements, composition, and techniques, revealing emerging
patterns that develop across periods. Finally, we discuss the strengths and
limitations of LLMs in this context, emphasizing their ability to process vast
quantities of art-related data and generate insightful interpretations. Due to
the exhaustive and granular nature of the results, we have developed
interactive data visualizations, available online
https://cognartive.github.io/, to enhance understanding and accessibility.
|
2502.04354
|
Reviving The Classics: Active Reward Modeling in Large Language Model
Alignment
|
cs.CL cs.AI cs.LG
|
Building neural reward models from human preferences is a pivotal component
in reinforcement learning from human feedback (RLHF) and large language model
alignment research. Given the scarcity and high cost of human annotation, how
to select the most informative pairs to annotate is an essential yet
challenging open problem. In this work, we highlight the insight that an ideal
comparison dataset for reward modeling should balance exploration of the
representation space and make informative comparisons between pairs with
moderate reward differences. Technically, challenges arise in quantifying the
two objectives and efficiently prioritizing the comparisons to be annotated. To
address this, we propose Fisher information-based selection strategies,
adapting theories from the classical experimental design literature and
applying them to the final linear layer of deep neural network-based reward
models. Empirically, our method demonstrates remarkable performance, high
tasks. Empirically, our method demonstrates remarkable performance, high
computational efficiency, and stability compared to other selection methods
from deep learning and classical statistical literature across multiple
open-source LLMs and datasets. Further ablation studies reveal that
incorporating cross-prompt comparisons in active reward modeling significantly
enhances labeling efficiency, shedding light on the potential for improved
annotation strategies in RLHF.
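
The selection criterion described above, balancing coverage of the representation space against pairs with moderate predicted reward differences, can be sketched as a greedy D-optimal design over last-layer features. This is a hypothetical illustration, not the paper's implementation: the function name, the Bradley-Terry p(1-p) weighting, and the greedy rank-one update are all assumptions.

```python
import numpy as np

def greedy_doptimal_pairs(feats, reward_head, n_select, reg=1e-3):
    """Greedy D-optimal selection of comparison pairs for reward-model
    annotation (illustrative sketch; not the paper's exact algorithm).

    feats:        (N, d) last-layer features of candidate responses
    reward_head:  (d,) current linear reward head
    n_select:     number of pairs to select for annotation
    """
    N, d = feats.shape
    rewards = feats @ reward_head
    # Candidate pairs: all i < j (small N assumed, for illustration only).
    pairs = [(i, j) for i in range(N) for j in range(i + 1, N)]
    info = reg * np.eye(d)  # running Fisher information matrix
    chosen = []
    for _ in range(n_select):
        best, best_gain = None, -np.inf
        for (i, j) in pairs:
            if (i, j) in chosen:
                continue
            diff = feats[i] - feats[j]
            # Bradley-Terry Fisher weight p(1-p): largest when the
            # predicted reward difference is moderate (p near 0.5).
            p = 1.0 / (1.0 + np.exp(-(rewards[i] - rewards[j])))
            w = p * (1.0 - p)
            # Log-det gain of a rank-one update, via the matrix
            # determinant lemma: log det(A + w dd^T) - log det(A)
            #   = log(1 + w d^T A^{-1} d).
            gain = np.log1p(w * diff @ np.linalg.solve(info, diff))
            if gain > best_gain:
                best, best_gain = (i, j), gain
        i, j = best
        diff = feats[i] - feats[j]
        p = 1.0 / (1.0 + np.exp(-(rewards[i] - rewards[j])))
        info += p * (1.0 - p) * np.outer(diff, diff)
        chosen.append(best)
    return chosen
```

The p(1-p) weight naturally down-ranks pairs whose outcome is already near-certain, while the log-det objective rewards feature directions the accumulated information matrix has not yet covered, capturing both halves of the stated trade-off.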
|
2502.04355
|
LLM-ProS: Analyzing Large Language Models' Performance in Competitive
Problem Solving
|
cs.CL cs.AI
|
The rapid advancement of large language models has opened new avenues for
automating complex problem-solving tasks such as algorithmic coding and
competitive programming. This paper introduces a novel evaluation technique,
LLM-ProS, to assess the performance of state-of-the-art LLMs on International
Collegiate Programming Contest (ICPC) problems. Using a curated dataset of 166
World Finals problems from 2011 to 2024, we benchmark the models' reasoning,
accuracy, and efficiency. We evaluate five models (GPT-4o, Mistral Large,
Llama-3.1-405B, and the o1 family, consisting of o1-mini and o1-preview) across
critical metrics such as correctness, resource utilization, and response
calibration. Our results reveal significant differences in the models'
abilities to generalize, adapt, and solve novel problems. We also investigate
the impact of training methodologies, dataset contamination, and
chain-of-thought reasoning on model performance. The findings provide new
insights into optimizing LLMs for algorithmic tasks, highlighting both
strengths and limitations of current models.
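
Two of the metrics named above, correctness and response calibration, can be computed from per-problem results in a few lines. The sketch below is an assumption about how such a benchmark might score models (the function name, the `(confidence, correct)` result layout, and the use of expected calibration error are illustrative, not taken from the paper).

```python
def correctness_and_ece(results, n_bins=10):
    """Accuracy and expected calibration error (ECE) over benchmark runs.

    results: list of (confidence, correct) tuples, where confidence is the
             model's stated probability in [0, 1] and correct is 0 or 1.
             This data layout is an assumption for illustration.
    """
    n = len(results)
    acc = sum(c for _, c in results) / n
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Put confidence == 1.0 into the last bin.
        bucket = [(p, c) for p, c in results
                  if lo <= p < hi or (b == n_bins - 1 and p == 1.0)]
        if not bucket:
            continue
        mean_conf = sum(p for p, _ in bucket) / len(bucket)
        bucket_acc = sum(c for _, c in bucket) / len(bucket)
        # ECE: bin-weighted gap between stated confidence and accuracy.
        ece += len(bucket) / n * abs(mean_conf - bucket_acc)
    return acc, ece
```

A well-calibrated model keeps ECE near zero even when raw accuracy is modest; the two numbers separate "solves problems" from "knows when it has solved them".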
|