| id | title | categories | abstract |
|---|---|---|---|
2502.13095
|
Understanding and Rectifying Safety Perception Distortion in VLMs
|
cs.CV cs.CL cs.LG
|
Recent studies reveal that vision-language models (VLMs) become more
susceptible to harmful requests and jailbreak attacks after integrating the
vision modality, exhibiting greater vulnerability than their text-only LLM
backbones. To uncover the root cause of this phenomenon, we conduct an in-depth
analysis and identify a key issue: multimodal inputs introduce a
modality-induced activation shift toward a "safer" direction compared to their
text-only counterparts, leading VLMs to systematically overestimate the safety
of harmful inputs. We refer to this issue as safety perception distortion. To
mitigate such distortion, we propose Activation Shift Disentanglement and
Calibration (ShiftDC), a training-free method that decomposes and calibrates
the modality-induced activation shift to reduce the impact of modality on
safety. By isolating and removing the safety-relevant component, ShiftDC
restores the inherent safety alignment of the LLM backbone while preserving the
vision-language capabilities of VLMs. Empirical results demonstrate that
ShiftDC significantly enhances alignment performance on safety benchmarks
without impairing model utility.
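The decompose-and-project idea can be sketched in a few lines; the function name, shapes, and the way `safety_direction` is obtained (e.g., estimated from mean activations on safe vs. unsafe text prompts) are assumptions for illustration, not the authors' API:

```python
import numpy as np

def calibrate_activation(h_multimodal, h_text_only, safety_direction):
    """Remove the safety-relevant component of the modality-induced
    activation shift, keeping the rest of the shift intact
    (illustrative sketch of the ShiftDC idea)."""
    shift = h_multimodal - h_text_only              # modality-induced shift
    u = safety_direction / np.linalg.norm(safety_direction)
    safety_component = (shift @ u) * u              # projection onto safety axis
    return h_multimodal - safety_component

# toy example: pretend the first coordinate is the safety-relevant axis
h_text = np.array([0.0, 1.0, 2.0])
h_mm = np.array([3.0, 1.5, 2.0])                    # shift = [3.0, 0.5, 0.0]
u = np.array([1.0, 0.0, 0.0])
h_cal = calibrate_activation(h_mm, h_text, u)       # -> [0.0, 1.5, 2.0]
```

The safety-irrelevant part of the shift (here, the second coordinate) is preserved, which is what lets the method keep vision-language capability while restoring the text-only safety behavior.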
|
2502.13103
|
WeedsGalore: A Multispectral and Multitemporal UAV-based Dataset for
Crop and Weed Segmentation in Agricultural Maize Fields
|
cs.CV
|
Weeds are one of the major reasons for crop yield loss but current weeding
practices fail to manage weeds in an efficient and targeted manner. Effective
weed management is especially important for crops with high worldwide
production such as maize, to maximize crop yield for meeting increasing global
demands. Advances in near-sensing and computer vision enable the development of
new tools for weed management. Specifically, state-of-the-art segmentation
models, coupled with novel sensing technologies, can facilitate timely and
accurate weeding and monitoring systems. However, learning-based approaches
require annotated data and generalize poorly to aerial imagery of
different crops. We present a novel dataset for semantic and instance
segmentation of crops and weeds in agricultural maize fields. The multispectral
UAV-based dataset contains images with RGB, red-edge, and near-infrared bands,
a large number of plant instances, dense annotations for maize and four weed
classes, and is multitemporal. We provide extensive baseline results for both
tasks, including probabilistic methods to quantify prediction uncertainty,
improve model calibration, and demonstrate the approach's applicability to
out-of-distribution data. The results show the effectiveness of the two
additional bands compared to RGB only, and better performance in our target
domain than models trained on existing datasets. We hope our dataset advances
research on methods and operational systems for fine-grained weed
identification, enhancing the robustness and applicability of UAV-based weed
management. The dataset and code are available at
https://github.com/GFZ/weedsgalore
|
2502.13105
|
Enhanced uncertainty quantification variational autoencoders for the
solution of Bayesian inverse problems
|
cs.LG cs.NA math.NA
|
Among other uses, neural networks are a powerful tool for solving
deterministic and Bayesian inverse problems in real-time. In the Bayesian
framework, variational autoencoders, a specialized type of neural network,
enable the estimation of model parameters and their distribution based on
observational data, enabling real-time inverse uncertainty
quantification. In this work, we build upon existing research [Goh, H. et al.,
Proceedings of Machine Learning Research, 2022] by proposing a novel loss
function to train variational autoencoders for Bayesian inverse problems. When
the forward map is affine, we provide a theoretical proof of the convergence of
the latent states of variational autoencoders to the posterior distribution of
the model parameters. We validate this theoretical result through numerical
tests and we compare the proposed variational autoencoder with the existing one
in the literature. Finally, we test the proposed variational autoencoder on the
Laplace equation.
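For intuition on the affine case, the posterior that the latent states should converge to has a closed form; this sketch computes it for a linear-Gaussian model (the matrix, noise levels, and function name are illustrative, not taken from the paper):

```python
import numpy as np

def gaussian_posterior(A, y, s2_prior, s2_noise):
    """Posterior of x for y = A x + eps, with x ~ N(0, s2_prior I) and
    eps ~ N(0, s2_noise I): the reference distribution that a variational
    autoencoder's latent states should approach when the forward map is affine."""
    d = A.shape[1]
    precision = A.T @ A / s2_noise + np.eye(d) / s2_prior
    cov = np.linalg.inv(precision)
    mean = cov @ (A.T @ y) / s2_noise
    return mean, cov

A = np.array([[1.0, 0.0],
              [0.0, 2.0]])
y = np.array([1.0, 2.0])
mean, cov = gaussian_posterior(A, y, s2_prior=1.0, s2_noise=0.5)
# diagonal problem: mean = [2/3, 8/9], cov = diag(1/3, 1/9)
```

Comparing a trained autoencoder's latent distribution against this exact posterior is one way to check the convergence result numerically.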
|
2502.13107
|
MatterChat: A Multi-Modal LLM for Material Science
|
cs.AI cs.LG
|
Understanding and predicting the properties of inorganic materials is crucial
for accelerating advancements in materials science and driving applications in
energy, electronics, and beyond. Integrating material structure data with
language-based information through multi-modal large language models (LLMs)
offers great potential to support these efforts by enhancing human-AI
interaction. However, a key challenge lies in integrating atomic structures at
full resolution into LLMs. In this work, we introduce MatterChat, a versatile
structure-aware multi-modal LLM that unifies material structural data and
textual inputs into a single cohesive model. MatterChat employs a bridging
module to effectively align a pretrained machine learning interatomic potential
with a pretrained LLM, reducing training costs and enhancing flexibility. Our
results demonstrate that MatterChat significantly improves performance in
material property prediction and human-AI interaction, surpassing
general-purpose LLMs such as GPT-4. We also demonstrate its usefulness in
applications such as more advanced scientific reasoning and step-by-step
material synthesis.
|
2502.13108
|
Improving Clinical Question Answering with Multi-Task Learning: A Joint
Approach for Answer Extraction and Medical Categorization
|
cs.CL cs.AI cs.LG
|
Clinical Question Answering (CQA) plays a crucial role in medical
decision-making, enabling physicians to extract relevant information from
Electronic Medical Records (EMRs). While transformer-based models such as BERT,
BioBERT, and ClinicalBERT have demonstrated state-of-the-art performance in
CQA, existing models lack the ability to categorize extracted answers, which is
critical for structured retrieval, content filtering, and medical decision
support.
To address this limitation, we introduce a Multi-Task Learning (MTL)
framework that jointly trains CQA models for both answer extraction and medical
categorization. In addition to predicting answer spans, our model classifies
responses into five standardized medical categories: Diagnosis, Medication,
Symptoms, Procedure, and Lab Reports. This categorization enables more
structured and interpretable outputs, making clinical QA models more useful in
real-world healthcare settings.
We evaluate our approach on emrQA, a large-scale dataset for medical question
answering. Results show that MTL improves F1-score by 2.2% compared to standard
fine-tuning, while achieving 90.7% accuracy in answer categorization. These
findings suggest that MTL not only enhances CQA performance but also introduces
an effective mechanism for categorization and structured medical information
retrieval.
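The joint objective can be sketched as a weighted sum of the span-extraction and categorization losses; the function names and the weight `lam` are assumptions for illustration, and the paper's exact loss may differ:

```python
import numpy as np

def cross_entropy(logits, target_idx):
    """Softmax cross-entropy for a single example."""
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[target_idx]

def mtl_loss(start_logits, end_logits, cat_logits,
             start_idx, end_idx, cat_idx, lam=1.0):
    """Multi-task objective: answer-span extraction (start + end positions)
    plus 5-way medical categorization, weighted by lam."""
    span_loss = cross_entropy(start_logits, start_idx) + \
                cross_entropy(end_logits, end_idx)
    cat_loss = cross_entropy(cat_logits, cat_idx)
    return span_loss + lam * cat_loss

loss = mtl_loss(np.array([5., 0., 0.]),          # start-position logits
                np.array([0., 5., 0.]),          # end-position logits
                np.array([0., 0., 5., 0., 0.]),  # 5 category logits
                start_idx=0, end_idx=1, cat_idx=2)
# small loss: all three heads are confident and correct
```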
|
2502.13110
|
MLPs at the EOC: Dynamics of Feature Learning
|
cs.LG
|
Since infinitely wide neural networks in the kernel regime are random feature
models, the success of contemporary deep learning lies in the rich regime,
where a satisfying theory should explain not only the convergence of gradient
descent but the learning of features along the way. Such a theory should also
cover phenomena observed by practitioners, including the Edge of Stability (EOS)
and the catapult mechanism. For a practically relevant theory in the limit,
neural network parameterizations have to efficiently reproduce limiting
behavior as width and depth are scaled up. While widthwise scaling is mostly
settled, depthwise scaling is solved only at initialization by the Edge of
Chaos (EOC). During training, scaling up depth is either done by inversely
scaling the learning rate or adding residual connections. We propose $(1)$ the
Normalized Update Parameterization ($\nu$P) to solve this issue by growing
hidden layer sizes depthwise, inducing the regularized evolution of
preactivations, $(2)$ a hypothetical explanation for feature learning via the
cosine of new and cumulative parameter updates, and $(3)$ a geometry-aware
learning rate schedule that is able to prolong the catapult phase indefinitely.
We support our hypotheses and demonstrate the usefulness of $\nu$P and the
learning rate schedule by empirical evidence.
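The cosine diagnostic in contribution $(2)$ is simple to state; the function name and the interpretation in the comment are our reading of the abstract, not the authors' definitions:

```python
import numpy as np

def update_alignment(new_update, cumulative_update):
    """Cosine between the latest parameter update and the cumulative update
    so far; values near 1 suggest updates keep pushing features in one
    direction, values near 0 suggest diffusive motion (illustrative reading)."""
    a = np.asarray(new_update, dtype=float).ravel()
    b = np.asarray(cumulative_update, dtype=float).ravel()
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
```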
|
2502.13112
|
Constrained Online Convex Optimization with Polyak Feasibility Steps
|
cs.LG math.OC
|
In this work, we study online convex optimization with a fixed constraint
function $g : \mathbb{R}^d \rightarrow \mathbb{R}$. Prior work on this problem
has shown $O(\sqrt{T})$ regret and cumulative constraint satisfaction
$\sum_{t=1}^{T} g(x_t) \leq 0$, while only accessing the constraint value and
subgradient at the played actions $g(x_t), \partial g(x_t)$. Using the same
constraint information, we show a stronger guarantee of anytime constraint
satisfaction $g(x_t) \leq 0 \ \forall t \in [T]$, and matching $O(\sqrt{T})$
regret guarantees. These contributions are thanks to our approach of using
Polyak feasibility steps to ensure constraint satisfaction, without sacrificing
regret. Specifically, after each step of online gradient descent, our algorithm
applies a subgradient descent step on the constraint function where the
step-size is chosen according to the celebrated Polyak step-size. We further
validate this approach with numerical experiments.
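The described procedure is concrete enough to sketch. The toy losses and constraint below are ours, and a single Polyak step on the constraint's linearization need not restore exact feasibility in this simplified version; the paper's analysis is more careful than this sketch:

```python
import numpy as np

def ogd_with_polyak_feasibility(x0, loss_grads, g, g_subgrad, eta, T):
    """Online gradient descent step on the loss f_t, followed by a
    Polyak-step-size subgradient step on the constraint g whenever it is
    violated (illustrative implementation, not the authors' code)."""
    x = np.asarray(x0, dtype=float)
    iterates = []
    for t in range(T):
        x = x - eta * loss_grads(x, t)       # OGD step on f_t
        gx = g(x)
        if gx > 0:                           # constraint violated
            s = g_subgrad(x)
            x = x - (gx / (s @ s)) * s       # Polyak step-size gx / ||s||^2
        iterates.append(x.copy())
    return iterates

# toy run: losses f_t(x) = ||x - c||^2 pull toward c = (2, 0), while the
# constraint g(x) = ||x||^2 - 1 <= 0 keeps iterates near the unit ball
c = np.array([2.0, 0.0])
its = ogd_with_polyak_feasibility(
    x0=[0.0, 0.0],
    loss_grads=lambda x, t: 2 * (x - c),
    g=lambda x: x @ x - 1.0,
    g_subgrad=lambda x: 2 * x,
    eta=0.1, T=50)
```

The iterates settle just outside the unit ball's boundary on the side facing `c`, illustrating how the Polyak step counteracts the loss gradient's pull out of the feasible set.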
|
2502.13114
|
The influence of motion features in temporal perception
|
cs.CL
|
This paper examines the role of manner-of-motion verbs in shaping subjective
temporal perception and emotional resonance. Through four complementary
studies, we explore how these verbs influence the conceptualization of time,
examining their use in literal and metaphorical (temporal) contexts. Our
findings reveal that faster verbs (e.g., fly, zoom) evoke dynamic and engaging
temporal experiences, often linked to positive emotions and greater agency. In
contrast, slower verbs (e.g., crawl, drag) convey passivity, monotony, and
negative emotions, reflecting tedious or constrained experiences of time. These
effects are amplified in metaphorical contexts, where manner verbs encode
emotional and experiential nuances that transcend their literal meanings. We
also find that participants prefer manner verbs over path verbs (e.g., go,
pass) in emotionally charged temporal contexts, as manner verbs capture the
experiential and emotional qualities of time more effectively. These findings
highlight the interplay between language, motion, and emotion in shaping
temporal perception, offering insights into how linguistic framing influences
subjective experiences of time.
|
2502.13115
|
Near-Optimal Private Learning in Linear Contextual Bandits
|
cs.LG cs.AI cs.CR math.ST stat.ML stat.TH
|
We analyze the problem of private learning in generalized linear contextual
bandits. Our approach is based on a novel method of re-weighted regression,
yielding an efficient algorithm with regret of order
$\sqrt{T}+\frac{1}{\alpha}$ and $\sqrt{T}/\alpha$ in the joint and local model
of $\alpha$-privacy, respectively. Further, we provide near-optimal private
procedures that achieve dimension-independent rates in private linear models
and linear contextual bandits. In particular, our results imply that joint
privacy is almost "for free" in all the settings we consider, partially
addressing the open problem posed by Azize and Basu (2024).
|
2502.13117
|
Performance Evaluation of Large Language Models in Statistical
Programming
|
stat.AP cs.AI
|
The programming capabilities of large language models (LLMs) have
revolutionized automatic code generation and opened new avenues for automatic
statistical analysis. However, the validity and quality of these generated
codes need to be systematically evaluated before they can be widely adopted.
Despite their growing prominence, a comprehensive evaluation of statistical
code generated by LLMs remains scarce in the literature. In this paper, we
assess the performance of LLMs, including two versions of ChatGPT and one
version of Llama, in the domain of SAS programming for statistical analysis.
Our study utilizes a set of statistical analysis tasks encompassing diverse
statistical topics and datasets. Each task includes a problem description,
dataset information, and human-verified SAS code. We conduct a comprehensive
assessment of the quality of SAS code generated by LLMs through human expert
evaluation based on correctness, effectiveness, readability, executability, and
the accuracy of output results. The analysis of rating scores reveals that
while LLMs demonstrate usefulness in generating syntactically correct code,
they struggle with tasks requiring deep domain understanding and may produce
redundant or incorrect results. This study offers valuable insights into the
capabilities and limitations of LLMs in statistical programming, providing
guidance for future advancements in AI-assisted coding systems for statistical
analysis.
|
2502.13119
|
STEER-ME: Assessing the Microeconomic Reasoning of Large Language Models
|
cs.CL
|
How should one judge whether a given large language model (LLM) can reliably
perform economic reasoning? Most existing LLM benchmarks focus on specific
applications and fail to present the model with a rich variety of economic
tasks. A notable exception is Raman et al. [2024], who offer an approach for
comprehensively benchmarking strategic decision-making; however, this approach
fails to address the non-strategic settings prevalent in microeconomics, such
as supply-and-demand analysis. We address this gap by taxonomizing
microeconomic reasoning into $58$ distinct elements, focusing on the logic of
supply and demand, each grounded in up to $10$ distinct domains, $5$
perspectives, and $3$ types. The generation of benchmark data across this
combinatorial space is powered by a novel LLM-assisted data generation protocol
that we dub auto-STEER, which generates a set of questions by adapting
handwritten templates to target new domains and perspectives. Because it offers
an automated way of generating fresh questions, auto-STEER mitigates the risk
that LLMs will be trained to over-fit evaluation benchmarks; we thus hope that
it will serve as a useful tool both for evaluating and fine-tuning models for
years to come. We demonstrate the usefulness of our benchmark via a case study
on $27$ LLMs, ranging from small open-source models to the current state of the
art. We examined each model's ability to solve microeconomic problems across
our whole taxonomy and present the results across a range of prompting
strategies and scoring metrics.
|
2502.13120
|
Adapting Psycholinguistic Research for LLMs: Gender-inclusive Language
in a Coreference Context
|
cs.CL cs.AI
|
Gender-inclusive language is often used with the aim of ensuring that all
individuals, regardless of gender, can be associated with certain concepts.
While psycholinguistic studies have examined its effects in relation to human
cognition, it remains unclear how Large Language Models (LLMs) process
gender-inclusive language. Given that commercial LLMs are gaining an
increasingly strong foothold in everyday applications, it is crucial to examine
whether LLMs in fact interpret gender-inclusive language neutrally, because the
language they generate has the potential to influence the language of their
users. This study examines whether LLM-generated coreferent terms align with a
given gender expression or reflect model biases. Adapting psycholinguistic
methods from French to English and German, we find that in English, LLMs
generally maintain the antecedent's gender but exhibit underlying masculine
bias. In German, this bias is much stronger, overriding all tested
gender-neutralization strategies.
|
2502.13124
|
NaturalReasoning: Reasoning in the Wild with 2.8M Challenging Questions
|
cs.CL
|
Scaling reasoning capabilities beyond traditional domains such as math and
coding is hindered by the lack of diverse and high-quality questions. To
overcome this limitation, we introduce a scalable approach for generating
diverse and challenging reasoning questions, accompanied by reference answers.
We present NaturalReasoning, a comprehensive dataset comprising 2.8 million
questions that span multiple domains, including STEM fields (e.g., Physics,
Computer Science), Economics, Social Sciences, and more. We demonstrate the
utility of the questions in NaturalReasoning through knowledge distillation
experiments which show that NaturalReasoning can effectively elicit and
transfer reasoning capabilities from a strong teacher model. Furthermore, we
demonstrate that NaturalReasoning is also effective for unsupervised
self-training using external reward models or self-rewarding.
|
2502.13125
|
RuozhiBench: Evaluating LLMs with Logical Fallacies and Misleading
Premises
|
cs.CL
|
Recent advances in large language models (LLMs) have shown that they can
answer questions requiring complex reasoning. However, their ability to
identify and respond to text containing logical fallacies or deliberately
misleading premises remains less studied. To address this gap, we introduce
RuozhiBench, a bilingual dataset comprising 677 carefully curated questions
that contain various forms of deceptive reasoning, meticulously crafted through
extensive human effort and expert review. In a comprehensive evaluation of 17
LLMs from 5 model series on RuozhiBench using both open-ended and two-choice
formats, we conduct extensive analyses on evaluation protocols and result
patterns. Despite their high scores on conventional benchmarks, these models
showed limited ability to detect and reason correctly about logical fallacies,
with even the best-performing model, Claude-3-haiku, achieving only 62%
accuracy, compared to human accuracy of more than 90%.
|
2502.13127
|
Facilitating Long Context Understanding via Supervised Chain-of-Thought
Reasoning
|
cs.CL
|
Recent advances in Large Language Models (LLMs) have enabled them to process
increasingly longer sequences, ranging from 2K to 2M tokens and even beyond.
However, simply extending the input sequence length does not necessarily lead
to effective long-context understanding. In this study, we integrate
Chain-of-Thought (CoT) reasoning into LLMs in a supervised manner to facilitate
effective long-context understanding. To achieve this, we introduce
LongFinanceQA, a synthetic dataset in the financial domain designed to improve
long-context reasoning. Unlike existing long-context synthetic data,
LongFinanceQA includes intermediate CoT reasoning before the final conclusion,
which encourages LLMs to perform explicit reasoning, improving accuracy and
interpretability in long-context understanding. To generate synthetic CoT
reasoning, we propose Property-driven Agentic Inference (PAI), an agentic
framework that simulates human-like reasoning steps, including property
extraction, retrieval, and summarization. We evaluate PAI's reasoning
capabilities by assessing GPT-4o-mini w/ PAI on the Loong benchmark,
outperforming standard GPT-4o-mini by 20.0%. Furthermore, we fine-tune
LLaMA-3.1-8B-Instruct on LongFinanceQA, achieving a 24.6% gain on Loong's
financial subset.
|
2502.13128
|
SongGen: A Single Stage Auto-regressive Transformer for Text-to-Song
Generation
|
cs.SD cs.AI
|
Text-to-song generation, the task of creating vocals and accompaniment from
textual inputs, poses significant challenges due to domain complexity and data
scarcity. Existing approaches often employ multi-stage generation procedures,
resulting in cumbersome training and inference pipelines. In this paper, we
propose SongGen, a fully open-source, single-stage auto-regressive transformer
designed for controllable song generation. The proposed model facilitates
fine-grained control over diverse musical attributes, including lyrics and
textual descriptions of instrumentation, genre, mood, and timbre, while also
offering an optional three-second reference clip for voice cloning. Within a
unified auto-regressive framework, SongGen supports two output modes: mixed
mode, which generates a mixture of vocals and accompaniment directly, and
dual-track mode, which synthesizes them separately for greater flexibility in
downstream applications. We explore diverse token pattern strategies for each
mode, leading to notable improvements and valuable insights. Furthermore, we
design an automated data preprocessing pipeline with effective quality control.
To foster community engagement and future research, we will release our model
weights, training code, annotated data, and preprocessing pipeline. The
generated samples are showcased on our project page at
https://liuzh-19.github.io/SongGen/ , and the code will be available at
https://github.com/LiuZH-19/SongGen .
|
2502.13129
|
Is Noise Conditioning Necessary for Denoising Generative Models?
|
cs.CV
|
It is widely believed that noise conditioning is indispensable for denoising
diffusion models to work successfully. This work challenges this belief.
Motivated by research on blind image denoising, we investigate a variety of
denoising-based generative models in the absence of noise conditioning. To our
surprise, most models exhibit graceful degradation, and in some cases, they
even perform better without noise conditioning. We provide a theoretical
analysis of the error caused by removing noise conditioning and demonstrate
that our analysis aligns with empirical observations. We further introduce a
noise-unconditional model that achieves a competitive FID of 2.23 on CIFAR-10,
significantly narrowing the gap to leading noise-conditional models. We hope
our findings will inspire the community to revisit the foundations and
formulations of denoising generative models.
|
2502.13130
|
Magma: A Foundation Model for Multimodal AI Agents
|
cs.CV cs.AI cs.HC cs.LG cs.RO
|
We present Magma, a foundation model that serves multimodal AI agentic tasks
in both the digital and physical worlds. Magma is a significant extension of
vision-language (VL) models in that it not only retains the VL understanding
ability (verbal intelligence) of the latter, but is also equipped with the
ability to plan and act in the visual-spatial world (spatial-temporal
intelligence) and complete agentic tasks ranging from UI navigation to robot
manipulation. To endow it with these agentic capabilities, Magma is pretrained
on large amounts of heterogeneous datasets spanning images, videos, and robotics
data, where the actionable visual objects (e.g., clickable buttons in GUI) in
images are labeled by Set-of-Mark (SoM) for action grounding, and the object
movements (e.g., the trace of human hands or robotic arms) in videos are
labeled by Trace-of-Mark (ToM) for action planning. Extensive experiments show
that SoM and ToM achieve strong synergy and facilitate the acquisition of
spatial-temporal intelligence for our Magma model, which is fundamental to a
wide range of tasks as shown in Fig.1. In particular, Magma creates new
state-of-the-art results on UI navigation and robotic manipulation tasks,
outperforming previous models that are specifically tailored to these tasks. On
image and video-related multimodal tasks, Magma also compares favorably to
popular large multimodal models that are trained on much larger datasets. We
make our model and code public for reproducibility at
https://microsoft.github.io/Magma.
|
2502.13131
|
Rethinking Diverse Human Preference Learning through Principal Component
Analysis
|
cs.AI cs.CL
|
Understanding human preferences is crucial for improving foundation models
and building personalized AI systems. However, preferences are inherently
diverse and complex, making it difficult for traditional reward models to
capture their full range. While fine-grained preference data can help,
collecting it is expensive and hard to scale. In this paper, we introduce
Decomposed Reward Models (DRMs), a novel approach that extracts diverse human
preferences from binary comparisons without requiring fine-grained annotations.
Our key insight is to represent human preferences as vectors and analyze them
using Principal Component Analysis (PCA). By constructing a dataset of
embedding differences between preferred and rejected responses, DRMs identify
orthogonal basis vectors that capture distinct aspects of preference. These
decomposed rewards can be flexibly combined to align with different user needs,
offering an interpretable and scalable alternative to traditional reward
models. We demonstrate that DRMs effectively extract meaningful preference
dimensions (e.g., helpfulness, safety, humor) and adapt to new users without
additional training. Our results highlight DRMs as a powerful framework for
personalized and interpretable LLM alignment.
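The embedding-difference-plus-PCA recipe can be illustrated on synthetic data; the dimensions, axis names, and scoring function below are assumptions for illustration, not the paper's setup:

```python
import numpy as np

# Synthetic stand-in for (preferred - rejected) response embedding differences:
# axis 0 carries a strong "helpfulness" signal, axis 1 a weaker "safety" signal.
rng = np.random.default_rng(0)
n, d = 500, 8
diffs = np.zeros((n, d))
diffs[:, 0] = rng.normal(2.0, 1.0, n)        # helpfulness coefficient
diffs[:, 1] = rng.normal(1.0, 0.4, n)        # safety coefficient
diffs += 0.05 * rng.normal(size=(n, d))      # noise

# PCA via SVD of the centered differences: rows of vt are orthogonal
# preference directions, ordered by how much variation they explain.
centered = diffs - diffs.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)

def decomposed_rewards(embedding_diff, components, k=2):
    """Score one response-pair difference along the top-k preference axes."""
    return components[:k] @ embedding_diff
```

On this synthetic data the top two components recover the planted helpfulness and safety axes, and the per-axis scores can be recombined with user-specific weights, which is the interpretability-and-personalization point the abstract makes.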
|
2502.13132
|
Learning to Defer for Causal Discovery with Imperfect Experts
|
cs.LG cs.AI stat.ML
|
Integrating expert knowledge, e.g. from large language models, into causal
discovery algorithms can be challenging when the knowledge is not guaranteed to
be correct. Expert recommendations may contradict data-driven results, and
their reliability can vary significantly depending on the domain or specific
query. Existing methods based on soft constraints or inconsistencies in
predicted causal relationships fail to account for these variations in
expertise. To remedy this, we propose L2D-CD, a method for gauging the
correctness of expert recommendations and optimally combining them with
data-driven causal discovery results. By adapting learning-to-defer (L2D)
algorithms for pairwise causal discovery (CD), we learn a deferral function
that selects whether to rely on classical causal discovery methods using
numerical data or expert recommendations based on textual meta-data. We
evaluate L2D-CD on the canonical T\"ubingen pairs dataset and demonstrate its
superior performance compared to both the causal discovery method and the
expert used in isolation. Moreover, our approach identifies domains where the
expert's performance is strong or weak. Finally, we outline a strategy for
generalizing this approach to causal discovery on graphs with more than two
variables, paving the way for further research in this area.
|
2502.13133
|
AV-Flow: Transforming Text to Audio-Visual Human-like Interactions
|
cs.CV
|
We introduce AV-Flow, an audio-visual generative model that animates
photo-realistic 4D talking avatars given only text input. In contrast to prior
work that assumes an existing speech signal, we synthesize speech and vision
jointly. We demonstrate human-like speech synthesis, synchronized lip motion,
lively facial expressions and head pose; all generated from just text
characters. The core premise of our approach lies in the architecture of our
two parallel diffusion transformers. Intermediate highway connections ensure
communication between the audio and visual modalities, and thus, synchronized
speech intonation and facial dynamics (e.g., eyebrow motion). Our model is
trained with flow matching, leading to expressive results and fast inference.
In the case of dyadic conversations, AV-Flow produces an always-on avatar that
actively listens and reacts to the audio-visual input of a user. Through
extensive experiments, we show that our method outperforms prior work,
synthesizing natural-looking 4D talking avatars. Project page:
https://aggelinacha.github.io/AV-Flow/
|
2502.13134
|
RHINO: Learning Real-Time Humanoid-Human-Object Interaction from Human
Demonstrations
|
cs.RO cs.HC cs.LG
|
Humanoid robots have shown success in locomotion and manipulation. Despite
these basic abilities, humanoids are still required to quickly understand human
instructions and react based on human interaction signals to become valuable
assistants in human daily life. Unfortunately, most existing works only focus
on multi-stage interactions, treating each task separately, and neglecting
real-time feedback. In this work, we aim to empower humanoid robots with
real-time reaction abilities to achieve various tasks, allowing humans to
interrupt robots at any time, and making robots respond to humans immediately.
To support such abilities, we propose a general humanoid-human-object
interaction framework, named RHINO, i.e., Real-time Humanoid-human Interaction
and Object manipulation. RHINO provides a unified view of reactive motion,
instruction-based manipulation, and safety concerns, over multiple human signal
modalities, such as languages, images, and motions. RHINO is a hierarchical
learning framework, enabling humanoids to learn reaction skills from
human-human-object demonstrations and teleoperation data. In particular, it
decouples the interaction process into two levels: 1) a high-level planner
inferring human intentions from real-time human behaviors; and 2) a low-level
controller achieving reactive motion behaviors and object manipulation skills
based on the predicted intentions. We evaluate the proposed framework on a real
humanoid robot and demonstrate its effectiveness, flexibility, and safety in
various scenarios.
|
2502.13135
|
Sleepless Nights, Sugary Days: Creating Synthetic Users with Health
Conditions for Realistic Coaching Agent Interactions
|
cs.LG cs.AI cs.CL
|
We present an end-to-end framework for generating synthetic users for
evaluating interactive agents designed to encourage positive behavior changes,
such as in health and lifestyle coaching. The synthetic users are grounded in
health and lifestyle conditions, specifically sleep and diabetes management in
this study, to ensure realistic interactions with the health coaching agent.
Synthetic users are created in two stages: first, structured data are generated
grounded in real-world health and lifestyle factors in addition to basic
demographics and behavioral attributes; second, full profiles of the synthetic
users are developed conditioned on the structured data. Interactions between
synthetic users and the coaching agent are simulated using generative
agent-based models such as Concordia, or directly by prompting a language
model. Using two independently-developed agents for sleep and diabetes coaching
as case studies, the validity of this framework is demonstrated by analyzing
the coaching agent's understanding of the synthetic users' needs and
challenges. Finally, through multiple blinded evaluations of user-coach
interactions by human experts, we demonstrate that our synthetic users with
health and behavioral attributes more accurately portray real human users with
the same attributes, compared to generic synthetic users not grounded in such
attributes. The proposed framework lays the foundation for efficient
development of conversational agents through extensive, realistic, and grounded
simulated interactions.
|
2502.13137
|
Theorem Prover as a Judge for Synthetic Data Generation
|
cs.AI
|
The demand for synthetic data in mathematical reasoning has increased due to
its potential to enhance the mathematical capabilities of large language models
(LLMs). However, ensuring the validity of intermediate reasoning steps remains
a significant challenge, affecting data quality. While formal verification via
theorem provers effectively validates LLM reasoning, the autoformalisation of
mathematical proofs remains error-prone. In response, we introduce iterative
autoformalisation, an approach that iteratively refines theorem prover
formalisation to mitigate errors, thereby increasing the execution rate on the
Lean prover from 60% to 87%. Building upon that, we introduce Theorem Prover as
a Judge (TP-as-a-Judge), a method that employs theorem prover formalisation to
rigorously assess LLM intermediate reasoning, effectively integrating
autoformalisation with synthetic data generation. Finally, we present
Reinforcement Learning from Theorem Prover Feedback (RLTPF), a framework that
replaces human annotation with theorem prover feedback in Reinforcement
Learning from Human Feedback (RLHF). Across multiple LLMs, applying
TP-as-a-Judge and RLTPF improves benchmark performance with only 3,508 samples,
achieving a
5.56% accuracy gain on Mistral-7B for MultiArith, 6.00% on Llama-2-7B for
SVAMP, and 3.55% on Llama-3.1-8B for AQUA.
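The iterative autoformalisation loop can be sketched as follows; `formalise` and `run_prover` are hypothetical stand-ins for the LLM autoformaliser and a Lean execution harness, not the paper's actual interfaces.

```python
def iterative_autoformalise(step, formalise, run_prover, max_iters=3):
    """Refine a theorem-prover formalisation of one reasoning step until the
    prover accepts it or the iteration budget is exhausted."""
    feedback = None
    for _ in range(max_iters):
        code = formalise(step, feedback)  # LLM autoformalisation attempt
        ok, error = run_prover(code)      # execute on the prover (e.g. Lean)
        if ok:
            return code                   # verified step: keep as synthetic data
        feedback = error                  # refine using the prover's error message
    return None                           # judged invalid: discard the step
```

Under this sketch, raising the execution rate amounts to allowing more refinement rounds, at the cost of extra prover calls.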
|
2502.13138
|
AIDE: AI-Driven Exploration in the Space of Code
|
cs.AI cs.LG
|
Machine learning, the foundation of modern artificial intelligence, has
driven innovations that have fundamentally transformed the world. Yet, behind
these advancements lies a complex and often tedious process requiring labor-
and compute-intensive iteration and experimentation. Engineers and scientists
developing machine learning models spend much of their time on trial-and-error
tasks instead of conceptualizing innovative solutions or research hypotheses.
To address this challenge, we introduce AI-Driven Exploration (AIDE), a machine
learning engineering agent powered by large language models (LLMs). AIDE frames
machine learning engineering as a code optimization problem, and formulates
trial-and-error as a tree search in the space of potential solutions. By
strategically reusing and refining promising solutions, AIDE effectively trades
computational resources for enhanced performance, achieving state-of-the-art
results on multiple machine learning engineering benchmarks, including our
Kaggle evaluations, OpenAI's MLE-Bench, and METR's RE-Bench.
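AIDE's framing of trial-and-error as tree search can be illustrated with a minimal greedy sketch, assuming hypothetical `draft`, `improve`, and `evaluate` callbacks in place of LLM calls and real experiment runs.

```python
import heapq
import itertools

def aide_search(draft, improve, evaluate, budget=10):
    """Greedy tree search over candidate solutions: repeatedly refine the most
    promising node found so far, trading compute for solution quality."""
    counter = itertools.count()            # tie-breaker so the heap never
    root = draft()                         # compares solution objects directly
    best, best_score = root, evaluate(root)
    frontier = [(-best_score, next(counter), root)]  # max-heap via negation
    for _ in range(budget):
        neg_score, _, node = heapq.heappop(frontier)
        child = improve(node)              # refine the most promising solution
        score = evaluate(child)
        if score > best_score:
            best, best_score = child, score
        heapq.heappush(frontier, (neg_score, next(counter), node))  # keep parent
        heapq.heappush(frontier, (-score, next(counter), child))
    return best, best_score
```

Reusing the parent node keeps earlier branches explorable, which is the "strategically reusing and refining promising solutions" behaviour the abstract describes.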
|
2502.13140
|
Towards Quantum Tensor Decomposition in Biomedical Applications
|
q-bio.QM cs.LG
|
Tensor decomposition has emerged as a powerful framework for feature
extraction in multi-modal biomedical data. In this review, we present a
comprehensive analysis of tensor decomposition methods such as Tucker,
CANDECOMP/PARAFAC, spiked tensor decomposition, etc. and their diverse
applications across biomedical domains such as imaging, multi-omics, and
spatial transcriptomics. To systematically investigate the literature, we
applied a topic modeling-based approach that identifies and groups distinct
thematic sub-areas in biomedicine where tensor decomposition has been used,
thereby revealing key trends and research directions. We evaluated challenges
related to the scalability of latent spaces along with obtaining the optimal
rank of the tensor, which often hinder the extraction of meaningful features
from increasingly large and complex datasets. Additionally, we discuss recent
advances in quantum algorithms for tensor decomposition, exploring how quantum
computing can be leveraged to address these challenges. Our study includes a
preliminary resource estimation analysis for quantum computing platforms and
examines the feasibility of implementing quantum-enhanced tensor decomposition
methods on near-term quantum devices. Collectively, this review not only
synthesizes current applications and challenges of tensor decomposition in
biomedical analyses but also outlines promising quantum computing strategies to
enhance its impact on deriving actionable insights from complex biomedical
data.
|
2502.13141
|
UniGuardian: A Unified Defense for Detecting Prompt Injection, Backdoor
Attacks and Adversarial Attacks in Large Language Models
|
cs.CL cs.AI cs.LG
|
Large Language Models (LLMs) are vulnerable to attacks like prompt injection,
backdoor attacks, and adversarial attacks, which manipulate prompts or models
to generate harmful outputs. In this paper, departing from traditional deep
learning attack paradigms, we explore their intrinsic relationship and
collectively term them Prompt Trigger Attacks (PTA). This raises a key
question: Can we determine if a prompt is benign or poisoned? To address this,
we propose UniGuardian, the first unified defense mechanism designed to detect
prompt injection, backdoor attacks, and adversarial attacks in LLMs.
Additionally, we introduce a single-forward strategy to optimize the detection
pipeline, enabling simultaneous attack detection and text generation within a
single forward pass. Our experiments confirm that UniGuardian accurately and
efficiently identifies malicious prompts in LLMs.
|
2502.13142
|
Pre-training Auto-regressive Robotic Models with 4D Representations
|
cs.RO cs.AI
|
Foundation models pre-trained on massive unlabeled datasets have
revolutionized natural language and computer vision, exhibiting remarkable
generalization capabilities, thus highlighting the importance of pre-training.
Yet, efforts in robotics have struggled to achieve similar success, limited by
either the need for costly robotic annotations or the lack of representations
that effectively model the physical world. In this paper, we introduce ARM4R,
an Auto-regressive Robotic Model that leverages low-level 4D Representations
learned from human video data to yield a better pre-trained robotic model.
Specifically, we focus on utilizing 3D point tracking representations from
videos derived by lifting 2D representations into 3D space via monocular depth
estimation across time. These 4D representations maintain a shared geometric
structure between the points and robot state representations up to a linear
transformation, enabling efficient transfer learning from human video data to
low-level robotic control. Our experiments show that ARM4R can transfer
efficiently from human video data to robotics and consistently improves
performance on tasks across various robot environments and configurations.
|
2502.13143
|
SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and
Object Manipulation
|
cs.RO cs.AI cs.CV
|
Spatial intelligence is a critical component of embodied AI, enabling robots
to understand and interact with their environments. While recent advances have
enhanced the ability of VLMs to perceive object locations and positional
relationships, they still lack the capability to precisely understand object
orientations, a key requirement for tasks involving fine-grained manipulation.
Addressing this limitation not only requires geometric reasoning but also an
expressive and intuitive way to represent orientation. In this context, we
propose that natural language offers a more flexible representation space than
canonical frames, making it particularly suitable for instruction-following
robotic systems. In this paper, we introduce the concept of semantic
orientation, which defines object orientations using natural language in a
reference-frame-free manner (e.g., the ''plug-in'' direction of a USB or the
''handle'' direction of a knife). To support this, we construct OrienText300K,
a large-scale dataset of 3D models annotated with semantic orientations that
link geometric understanding to functional semantics. By integrating semantic
orientation into a VLM system, we enable robots to generate manipulation
actions with both positional and orientational constraints. Extensive
experiments in simulation and real world demonstrate that our approach
significantly enhances robotic manipulation capabilities, e.g., 48.7% accuracy
on Open6DOR and 74.9% accuracy on SIMPLER.
|
2502.13144
|
RAD: Training an End-to-End Driving Policy via Large-Scale 3DGS-based
Reinforcement Learning
|
cs.CV cs.RO
|
Existing end-to-end autonomous driving (AD) algorithms typically follow the
Imitation Learning (IL) paradigm, which faces challenges such as causal
confusion and the open-loop gap. In this work, we establish a 3DGS-based
closed-loop Reinforcement Learning (RL) training paradigm. By leveraging 3DGS
techniques, we construct a photorealistic digital replica of the real physical
world, enabling the AD policy to extensively explore the state space and learn
to handle out-of-distribution scenarios through large-scale trial and error. To
enhance safety, we design specialized rewards that guide the policy to
effectively respond to safety-critical events and understand real-world causal
relationships. For better alignment with human driving behavior, IL is
incorporated into RL training as a regularization term. We introduce a
closed-loop evaluation benchmark consisting of diverse, previously unseen 3DGS
environments. Compared to IL-based methods, RAD achieves stronger performance
in most closed-loop metrics, notably a 3x lower collision rate. Abundant
closed-loop results are presented at https://hgao-cv.github.io/RAD.
|
2502.13145
|
Multimodal Mamba: Decoder-only Multimodal State Space Model via
Quadratic to Linear Distillation
|
cs.CV
|
Recent Multimodal Large Language Models (MLLMs) have achieved remarkable
performance but face deployment challenges due to their quadratic computational
complexity, growing Key-Value cache requirements, and reliance on separate
vision encoders. We propose mmMamba, a framework for developing
linear-complexity native multimodal state space models through progressive
distillation from existing MLLMs using moderate academic computational
resources. Our approach enables the direct conversion of trained decoder-only
MLLMs to linear-complexity architectures without requiring pre-trained
RNN-based LLMs or vision encoders. We propose a seeding strategy to carve Mamba
from the trained Transformer and a three-stage distillation recipe, which can
effectively transfer the knowledge from Transformer to Mamba while preserving
multimodal capabilities. Our method also supports flexible hybrid architectures
that combine Transformer and Mamba layers for customizable
efficiency-performance trade-offs. Distilled from the Transformer-based
decoder-only HoVLE, mmMamba-linear achieves competitive performance against
existing linear and quadratic-complexity VLMs, while mmMamba-hybrid further
improves performance significantly, approaching HoVLE's capabilities. At 103K
tokens, mmMamba-linear demonstrates 20.6$\times$ speedup and 75.8% GPU memory
reduction compared to HoVLE, while mmMamba-hybrid achieves 13.5$\times$ speedup
and 60.2% memory savings. Code and models are released at
https://github.com/hustvl/mmMamba
|
2502.13146
|
Re-Align: Aligning Vision Language Models via Retrieval-Augmented Direct
Preference Optimization
|
cs.CV cs.LG
|
The emergence of large Vision Language Models (VLMs) has broadened the scope
and capabilities of single-modal Large Language Models (LLMs) by integrating
visual modalities, thereby unlocking transformative cross-modal applications in
a variety of real-world scenarios. Despite their impressive performance, VLMs
are prone to significant hallucinations, particularly in the form of
cross-modal inconsistencies. Building on the success of Reinforcement Learning
from Human Feedback (RLHF) in aligning LLMs, recent advancements have focused
on applying direct preference optimization (DPO) on carefully curated datasets
to mitigate these issues. Yet, such approaches typically introduce preference
signals in a brute-force manner, neglecting the crucial role of visual
information in the alignment process. In this paper, we introduce Re-Align, a
novel alignment framework that leverages image retrieval to construct a
dual-preference dataset, effectively incorporating both textual and visual
preference signals. We further introduce rDPO, an extension of the standard
direct preference optimization that incorporates an additional visual
preference objective during fine-tuning. Our experimental results demonstrate
that Re-Align not only mitigates hallucinations more effectively than previous
methods but also yields significant performance gains in general visual
question-answering (VQA) tasks. Moreover, we show that Re-Align maintains
robustness and scalability across a wide range of VLM sizes and architectures.
This work represents a significant step forward in aligning multimodal LLMs,
paving the way for more reliable and effective cross-modal applications. We
release all the code in https://github.com/taco-group/Re-Align.
|
2502.13149
|
Bi-Fact: A Bidirectional Factorization-based Evaluation of Intent
Extraction from UI Trajectories
|
cs.AI
|
Evaluating intent extraction from GUIs demands accurate, fine-grained
metrics. This paper introduces Bi-Fact, a novel method that decomposes intents
into atomic facts and performs bidirectional comparisons to assess precision
and recall. Experiments demonstrate Bi-Fact's superior correlation with human
judgments compared to existing metrics, establishing a more robust evaluation
framework for UI-driven intent understanding.
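The bidirectional comparison can be sketched as follows, assuming intents have already been decomposed into atomic facts and that a hypothetical `match` predicate decides whether two facts agree.

```python
def bi_fact(pred_facts, gold_facts, match):
    """Bidirectional fact comparison: precision over predicted facts, recall
    over gold facts, using a pluggable `match(a, b)` fact matcher."""
    matched_pred = sum(any(match(p, g) for g in gold_facts) for p in pred_facts)
    matched_gold = sum(any(match(p, g) for p in pred_facts) for g in gold_facts)
    precision = matched_pred / len(pred_facts) if pred_facts else 0.0
    recall = matched_gold / len(gold_facts) if gold_facts else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

In practice the matcher would be a learned or LLM-based entailment check rather than exact string equality; the scoring structure stays the same.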
|
2502.13160
|
Understanding Dynamic Diffusion Process of LLM-based Agents under
Information Asymmetry
|
cs.MA cs.AI
|
Large language models have been used to simulate human society using
multi-agent systems. Most current social simulation research emphasizes
interactive behaviors in fixed environments, ignoring information opacity,
relationship variability and diffusion diversity. In this paper, we study the
dynamics of information diffusion in 12 asymmetric open environments defined by
information content and distribution mechanisms. We first present a general
framework to capture the features of information diffusion. Then, we designed a
dynamic attention mechanism to help agents allocate attention to different
information, addressing the limitations of LLM-based attention. Agents start by
responding to external information stimuli within a five-agent group; as group
size increases, they form information circles while developing
relationships and sharing information. Additionally, we observe the emergence
of information cocoons, the evolution of information gaps, and the accumulation
of social capital, which are closely linked to psychological, sociological, and
communication theories.
|
2502.13161
|
Noumenal Labs White Paper: How To Build A Brain
|
q-bio.NC cs.AI
|
This white paper describes some of the design principles for artificial or
machine intelligence that guide efforts at Noumenal Labs. These principles are
drawn from both nature and from the means by which we come to represent and
understand it. The end goal of research and development in this field should be
to design machine intelligences that augment our understanding of the world and
enhance our ability to act in it, without replacing us. In the first two
sections, we examine the core motivation for our approach: resolving the
grounding problem. We argue that the solution to the grounding problem rests in
the design of models grounded in the world that we inhabit, not mere word
models. A machine superintelligence that is capable of significantly enhancing
our understanding of the human world must represent the world as we do and be
capable of generating new knowledge, building on what we already know. In other
words, it must be properly grounded and explicitly designed for rational,
empirical inquiry, modeled after the scientific method. A primary implication
of this design principle is that agents must be capable of engaging
autonomously in causal physics discovery. We discuss the pragmatic implications
of this approach, and in particular, the use cases in realistic 3D world
modeling and multimodal, multidimensional time series analysis.
|
2502.13162
|
ShieldLearner: A New Paradigm for Jailbreak Attack Defense in LLMs
|
cs.CR cs.AI cs.CL
|
Large Language Models (LLMs) have achieved remarkable success in various
domains but remain vulnerable to adversarial jailbreak attacks. Existing
prompt-defense strategies, including parameter-modifying and parameter-free
approaches, face limitations in adaptability, interpretability, and
customization, constraining their effectiveness against evolving threats. To
address these challenges, we propose ShieldLearner, a novel paradigm that
mimics human learning in defense. Through trial and error, it autonomously
distills attack signatures into a Pattern Atlas and synthesizes defense
heuristics into a Meta-analysis Framework, enabling systematic and
interpretable threat detection. Furthermore, we introduce Adaptive Adversarial
Augmentation to generate adversarial variations of successfully defended
prompts, enabling continuous self-improvement without model retraining. In
addition to standard benchmarks, we create a hard test set by curating
adversarial prompts from the Wildjailbreak dataset, emphasizing more concealed
malicious intent. Experimental results show that ShieldLearner achieves a
significantly higher defense success rate than existing baselines on both
conventional and hard test sets, while also operating with lower computational
overhead, making it a practical and efficient solution for real-world
adversarial defense.
|
2502.13164
|
Multi-Agent Actor-Critic Generative AI for Query Resolution and Analysis
|
cs.MA cs.AI
|
In this paper, we introduce MASQRAD (Multi-Agent Strategic Query Resolution
and Diagnostic tool), a transformative framework for query resolution based on
the actor-critic model, which utilizes multiple generative AI agents. MASQRAD
is excellent at translating imprecise or ambiguous user inquiries into precise
and actionable requests. This framework generates pertinent visualizations and
responses to these focused queries, as well as thorough analyses and insightful
interpretations for users. MASQRAD addresses the common shortcomings of
existing solutions in domains that demand fast and precise data interpretation,
such as their incapacity to successfully apply AI for generating actionable
insights and their challenges with the inherent ambiguity of user queries.
MASQRAD functions as a sophisticated multi-agent system but "masquerades" to
users as a single AI entity, which lowers errors and enhances data interaction.
This approach makes use of three primary AI agents: Actor Generative AI, Critic
Generative AI, and Expert Analysis Generative AI. Each is crucial for creating,
enhancing, and evaluating data interactions. The Actor AI writes Python
scripts that generate data visualizations from large datasets within operational
constraints, and the Critic AI rigorously refines these scripts through
multi-agent debate. Finally, the Expert Analysis AI contextualizes the outcomes
to aid in decision-making. With an accuracy rate of 87\% when handling tasks
related to natural language visualization, MASQRAD establishes new benchmarks
for automated data interpretation and showcases a noteworthy advancement that
has the potential to revolutionize AI-driven applications.
|
2502.13165
|
HedgeAgents: A Balanced-aware Multi-agent Financial Trading System
|
cs.MA cs.AI q-fin.TR
|
As automated trading gains traction in the financial market, algorithmic
investment strategies are increasingly prominent. While Large Language Models
(LLMs) and Agent-based models exhibit promising potential in real-time market
analysis and trading decisions, they still experience a significant -20% loss
when confronted with rapid declines or frequent fluctuations, impeding their
practical application. Hence, there is an imperative to explore a more robust
and resilient framework. This paper introduces an innovative multi-agent
system, HedgeAgents, aimed at bolstering system robustness via ``hedging''
strategies. In this well-balanced system, an array of hedging agents has been
tailored, where HedgeAgents consist of a central fund manager and multiple
hedging experts specializing in various financial asset classes. These agents
leverage LLMs' cognitive capabilities to make decisions and coordinate through
three types of conferences. Benefiting from the powerful understanding of LLMs,
our HedgeAgents attained a 70% annualized return and a 400% total return over a
period of 3 years. Moreover, we have observed that HedgeAgents can
even formulate investment experience comparable to that of human experts
(https://hedgeagents.github.io/).
|
2502.13166
|
Large Language Models Can Help Mitigate Barren Plateaus
|
quant-ph cs.AI cs.CL cs.LG
|
In the era of noisy intermediate-scale quantum (NISQ) computing, Quantum
Neural Networks (QNNs) have emerged as a promising approach for various
applications, yet their training is often hindered by barren plateaus (BPs),
where gradient variance vanishes exponentially as the model size increases. To
address this challenge, we propose a new Large Language Model (LLM)-driven
search framework, AdaInit, that iteratively searches for optimal initial
parameters of QNNs to maximize gradient variance and therefore mitigate BPs.
Unlike conventional one-time initialization methods, AdaInit dynamically
refines the QNN's initialization using LLMs with adaptive prompting. Theoretical
analysis of the Expected Improvement (EI) proves a supremum for the search,
ensuring this process can eventually identify the optimal initial parameter of
the QNN. Extensive experiments across four public datasets demonstrate that
AdaInit significantly enhances QNN's trainability compared to classic
initialization methods, validating its effectiveness in mitigating BPs.
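The search loop can be sketched as below; `propose` stands in for an LLM queried with adaptive prompts and `grad_variance` for a gradient-variance estimate on the target QNN. Both are hypothetical interfaces, not the paper's code.

```python
def adainit(propose, grad_variance, iters=5):
    """Iteratively search for QNN initial parameters that maximise gradient
    variance, keeping a history that feeds the next adaptive prompt."""
    history, best, best_var = [], None, float("-inf")
    for _ in range(iters):
        params = propose(history)      # LLM proposes params from past feedback
        var = grad_variance(params)    # larger variance => milder barren plateau
        history.append((params, var))  # adaptive prompting context
        if var > best_var:
            best, best_var = params, var
    return best, best_var
```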
|
2502.13167
|
SmartLLM: Smart Contract Auditing using Custom Generative AI
|
cs.CR cs.AI
|
Smart contracts are essential to decentralized finance (DeFi) and blockchain
ecosystems but are increasingly vulnerable to exploits due to coding errors and
complex attack vectors. Traditional static analysis tools and existing
vulnerability detection methods often fail to address these challenges
comprehensively, leading to high false-positive rates and an inability to
detect dynamic vulnerabilities. This paper introduces SmartLLM, a novel
approach leveraging fine-tuned LLaMA 3.1 models with Retrieval-Augmented
Generation (RAG) to enhance the accuracy and efficiency of smart contract
auditing. By integrating domain-specific knowledge from ERC standards and
employing advanced techniques such as QLoRA for efficient fine-tuning, SmartLLM
achieves superior performance compared to static analysis tools like Mythril
and Slither, as well as zero-shot large language model (LLM) prompting methods
such as GPT-3.5 and GPT-4. Experimental results demonstrate a perfect recall of
100% and an accuracy score of 70%, highlighting the model's robustness in
identifying vulnerabilities, including reentrancy and access control issues.
This research advances smart contract security by offering a scalable and
effective auditing solution, supporting the secure adoption of decentralized
applications.
|
2502.13170
|
Unveiling the Magic of Code Reasoning through Hypothesis Decomposition
and Amendment
|
cs.AI cs.LG
|
Reasoning ability is one of the most enigmatic and captivating aspects
of large language models (LLMs). Numerous studies are dedicated to exploring
and expanding the boundaries of this reasoning capability. However, tasks that
embody both reasoning and recall characteristics are often overlooked. In this
paper, we introduce such a novel task, code reasoning, to provide a new
perspective for the reasoning abilities of LLMs. We summarize three
meta-benchmarks based on established forms of logical reasoning, and
instantiate these into eight specific benchmark tasks. Our testing on these
benchmarks reveals that LLMs continue to struggle with identifying satisfactory
reasoning pathways. Additionally, we present a new pathway exploration pipeline
inspired by human intricate problem-solving methods. This Reflective Hypothesis
Decomposition and Amendment (RHDA) pipeline consists of the following iterative
steps: (1) Proposing potential hypotheses based on observations and decomposing
them; (2) Utilizing tools to validate hypotheses and reflection outcomes; (3)
Revising hypotheses in light of observations. Our approach effectively
mitigates logical chain collapses arising from forgetting or hallucination
issues in multi-step reasoning, resulting in performance gains of up to
$3\times$. Finally, we expanded this pipeline by applying it to simulate
complex household tasks in real-world scenarios, specifically in VirtualHome,
enhancing the handling of failure cases. We release our code and all results
at https://github.com/TnTWoW/code_reasoning.
|
2502.13171
|
Web Phishing Net (WPN): A scalable machine learning approach for
real-time phishing campaign detection
|
cs.CR cs.AI cs.LG
|
Phishing is the most prevalent type of cyber-attack today and is recognized
as the leading source of data breaches with significant consequences for both
individuals and corporations. Web-based phishing attacks are the most frequent
with vectors such as social media posts and emails containing links to phishing
URLs that once clicked on render host systems vulnerable to more sinister
attacks. Research efforts to detect phishing URLs have involved the use of
supervised learning techniques that use large amounts of data to train models
and have high computational requirements. They also involve analysis of
features derived from vectors such as email contents, thus affecting user
privacy. Additionally, they lack resilience against the evolution
of threats, especially with the advent of generative AI techniques used to
bypass these systems, as with AI-generated phishing URLs. Unsupervised methods such as
clustering techniques have also been used in phishing detection in the past,
however, they are at times unscalable due to the use of pair-wise comparisons.
They also lack high detection rates while detecting phishing campaigns. In this
paper, we propose an unsupervised learning approach that is not only fast but
also scalable, as it does not involve pair-wise comparisons. It is able to detect
entire campaigns at a time with a high detection rate while preserving user
privacy; this includes the recent surge of campaigns with targeted phishing
URLs generated by malicious entities using generative AI techniques.
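One way to cluster without pair-wise comparisons is single-pass signature bucketing. The sketch below assumes a hypothetical `signature` feature extractor (e.g., structural URL tokens) and only illustrates the scalability idea, not the paper's algorithm.

```python
from collections import defaultdict

def group_campaigns(urls, signature):
    """Group URLs into candidate phishing campaigns by a hashable signature:
    one pass over the data, avoiding O(n^2) pair-wise comparisons."""
    clusters = defaultdict(list)
    for url in urls:
        clusters[signature(url)].append(url)  # bucket by shared structure
    return dict(clusters)
```

Because grouping is a single hash-and-append pass, cost grows linearly with the number of URLs, and an entire campaign surfaces as one bucket.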
|
2502.13172
|
Unveiling Privacy Risks in LLM Agent Memory
|
cs.CR cs.AI
|
Large Language Model (LLM) agents have become increasingly prevalent across
various real-world applications. They enhance decision-making by storing
private user-agent interactions in the memory module as demonstrations, but this
practice introduces new privacy risks for LLM agents. In this work, we systematically
investigate the vulnerability of LLM agents to our proposed Memory EXTRaction
Attack (MEXTRA) under a black-box setting. To extract private information from
memory, we propose an effective attacking prompt design and an automated prompt
generation method based on different levels of knowledge about the LLM agent.
Experiments on two representative agents demonstrate the effectiveness of
MEXTRA. Moreover, we explore key factors influencing memory leakage from both
the agent's and the attacker's perspectives. Our findings highlight the urgent
need for effective memory safeguards in LLM agent design and deployment.
|
2502.13173
|
Thinking Preference Optimization
|
cs.LG cs.AI
|
Supervised Fine-Tuning (SFT) has been a go-to and effective method for
enhancing long chain-of-thought (CoT) reasoning in relatively small LLMs by
fine-tuning them with long CoT responses from larger LLMs. To continually
improve reasoning abilities, we can either collect new high-quality long CoT
reasoning SFT data or repeatedly train on existing SFT datasets. However,
acquiring new long CoT SFT data is costly and limited, while repeated training
often results in a performance plateau or decline. To further boost the
performance with the SFT data, we propose Thinking Preference Optimization
(ThinkPO), a simple yet effective post-SFT method that enhances long CoT
reasoning without requiring new long CoT responses. Instead, ThinkPO utilizes
readily available or easily obtainable short CoT reasoning responses as
rejected answers and long CoT responses as chosen answers for the same
question. It then applies direct preference optimization to encourage the model
to favor longer reasoning outputs. Experiments show that ThinkPO further
improves the reasoning performance of SFT-ed models, e.g. it increases math
reasoning accuracy of SFT-ed models by 8.6% and output length by 25.9%.
Notably, ThinkPO is capable of continually boosting the performance of the
publicly distilled SFT model, e.g., increasing the official
DeepSeek-R1-Distill-Qwen-7B's performance on MATH500 from 87.4% to 91.2%.
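The data recipe and the underlying objective can be sketched as follows. This is a minimal illustration: the pairing of long-CoT "chosen" against short-CoT "rejected" follows the abstract, while the loss shown is the standard DPO form on sequence log-probabilities.

```python
import math

def build_thinkpo_pairs(questions, long_cot, short_cot):
    """ThinkPO preference data: for each question, the long-CoT answer is
    'chosen' and the short-CoT answer is 'rejected'."""
    return [{"prompt": q, "chosen": long_cot[q], "rejected": short_cot[q]}
            for q in questions]

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Standard DPO objective on log-probs: -log sigmoid(beta * margin)."""
    margin = (pi_chosen - ref_chosen) - (pi_rejected - ref_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

Optimising this loss pushes the policy's likelihood margin toward the long-CoT responses, which is what encourages longer reasoning outputs after SFT.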
|
2502.13174
|
Generative Topology Optimization: Exploring Diverse Solutions in
Structural Design
|
cs.LG cond-mat.mtrl-sci cs.AI cs.CV
|
Topology optimization (TO) is a family of computational methods that derive
near-optimal geometries from formal problem descriptions. Despite their
success, established TO methods are limited to generating single solutions,
restricting the exploration of alternative designs. To address this limitation,
we introduce Generative Topology Optimization (GenTO) - a data-free method that
trains a neural network to generate structurally compliant shapes and explores
diverse solutions through an explicit diversity constraint. The network is
trained with a solver-in-the-loop, optimizing the material distribution in each
iteration. The trained model produces diverse shapes that closely adhere to the
design requirements. We validate GenTO on 2D and 3D TO problems. Our results
demonstrate that GenTO produces more diverse solutions than any prior method
while maintaining near-optimality and being an order of magnitude faster due to
inherent parallelism. These findings open new avenues for engineering and
design, offering enhanced flexibility and innovation in structural
optimization.
|
2502.13175
|
Towards Robust and Secure Embodied AI: A Survey on Vulnerabilities and
Attacks
|
cs.CR cs.AI cs.RO
|
Embodied AI systems, including robots and autonomous vehicles, are
increasingly integrated into real-world applications, where they encounter a
range of vulnerabilities stemming from both environmental and system-level
factors. These vulnerabilities manifest through sensor spoofing, adversarial
attacks, and failures in task and motion planning, posing significant
challenges to robustness and safety. Despite the growing body of research,
existing reviews rarely focus specifically on the unique safety and security
challenges of embodied AI systems. Most prior work either addresses general AI
vulnerabilities or focuses on isolated aspects, lacking a dedicated and unified
framework tailored to embodied AI. This survey fills this critical gap by: (1)
categorizing vulnerabilities specific to embodied AI into exogenous (e.g.,
physical attacks, cybersecurity threats) and endogenous (e.g., sensor failures,
software flaws) origins; (2) systematically analyzing adversarial attack
paradigms unique to embodied AI, with a focus on their impact on perception,
decision-making, and embodied interaction; (3) investigating attack vectors
targeting large vision-language models (LVLMs) and large language models (LLMs)
within embodied systems, such as jailbreak attacks and instruction
misinterpretation; (4) evaluating robustness challenges in algorithms for
embodied perception, decision-making, and task planning; and (5) proposing
targeted strategies to enhance the safety and reliability of embodied AI
systems. By integrating these dimensions, we provide a comprehensive framework
for understanding the interplay between vulnerabilities and safety in embodied
AI.
|
2502.13176
|
BaKlaVa -- Budgeted Allocation of KV cache for Long-context Inference
|
cs.LG cs.AI
|
In Large Language Model (LLM) inference, Key-Value (KV) caches
are essential for reducing time complexity. However, they result in a linear
increase in GPU memory as the context length grows. While recent work explores
KV-cache eviction and compression policies to reduce memory usage, they often
consider uniform KV-caches across all attention heads, leading to suboptimal
performance. We introduce BaKlaVa, a method to allocate optimal memory for
individual KV-caches across the model by estimating the importance of each
KV-cache. Our empirical analysis demonstrates that not all KV-caches are
equally critical for LLM performance. Using a one-time profiling approach,
BaKlaVa assigns optimal memory budgets to each KV-cache. We evaluated our
method on LLaMA-3-8B and Qwen2.5-7B, achieving up to a 70\% compression
ratio while maintaining baseline performance and delivering up to an
order-of-magnitude accuracy improvement at higher compression levels.
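The budget-allocation step can be sketched as proportional division of a total KV-cache budget by profiled importance. The `importance` scores and the largest-remainder rounding here are illustrative assumptions, not the paper's exact procedure.

```python
def allocate_kv_budgets(importance, total_budget, floor=1):
    """Split a total KV-cache budget across heads proportionally to one-time
    profiled importance scores; `floor` keeps every head a minimal cache."""
    n = len(importance)
    total = sum(importance)
    budgets = [floor] * n
    remaining = total_budget - floor * n
    shares = [remaining * s / total for s in importance]  # proportional shares
    for i in range(n):
        budgets[i] += int(shares[i])                      # integer part first
    leftover = total_budget - sum(budgets)
    # hand out the rounding remainder to the largest fractional parts
    order = sorted(range(n), key=lambda i: shares[i] - int(shares[i]),
                   reverse=True)
    for i in order[:leftover]:
        budgets[i] += 1
    return budgets
```

Heads judged unimportant then absorb most of the compression, which matches the observation that not all KV-caches are equally critical.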
|
2502.13177
|
KL Penalty Control via Perturbation for Direct Preference Optimization
|
cs.LG cs.AI
|
Direct Preference Optimization (DPO) demonstrates the advantage of aligning a
large language model with human preference using only an offline dataset.
However, DPO has the limitation that the KL penalty, which prevents excessive
deviation from the reference model, is static throughout the training process.
Several methods try to turn this static KL penalty into a dynamic one, but no
approach can adaptively assign different KL penalties for each preference pair.
In this paper, we propose $\varepsilon$-Direct Preference Optimization
($\varepsilon$-DPO), which allows adaptive control of the KL penalty strength
$\beta$ for each preference pair. Specifically, $\varepsilon$-DPO adaptively
controls $\beta$ for each preference pair based on the monotonicity of logits
as a preference model under the perturbation of $\beta$ during training by
simply reusing the logit of the current policy and the reference policy.
Experimental results show that $\varepsilon$-DPO outperforms existing direct
alignment algorithms and KL penalty relaxation methods on general chatbot
benchmarks, highlighting the significance of instance-level adaptive KL penalty
relaxation in DPO.
|
2502.13178
|
Benchmarking Post-Training Quantization in LLMs: Comprehensive Taxonomy,
Unified Evaluation, and Comparative Analysis
|
cs.LG cs.AI
|
Post-training quantization (PTQ) has been extensively adopted for large
language model (LLM) compression owing to its efficiency and low resource
requirements. However, current research lacks an in-depth analysis of the
scenarios in which each PTQ strategy excels and applies. In addition, existing
algorithms focus primarily on performance, overlooking the trade-off among
model size, performance, and quantization bitwidth. To resolve these
ambiguities, we provide a novel benchmark for LLM PTQ in this paper. Firstly,
in order to support our benchmark, we propose a comprehensive taxonomy for
existing mainstream methods by scrutinizing their computational strategies
(e.g., optimization-based, compensation-based, etc.). Then, we conduct
extensive experiments with the baseline within each class, covering models with
various sizes (7B-70B), bitwidths, training levels (LLaMA1/2/3/3.1),
architectures (Mixtral, DeepSeekMoE and Mamba) and modalities (LLaVA1.5 and
VILA1.5) on a wide range of evaluation metrics. Through comparative analysis of
the results, we summarize where each PTQ strategy excels and characterize the
model-size-bitwidth trade-off with respect to performance. For example, our
benchmark reveals that compensation-based techniques demonstrate outstanding
cross-architecture robustness, and that extremely low-bit PTQ for ultra-large
models should be reexamined. Finally, we accordingly claim that a practical
combination of compensation-based and other PTQ strategies can achieve
state-of-the-art robustness across settings. We believe that our benchmark will
provide valuable recommendations for the deployment of LLMs and future research
on PTQ approaches.
|
2502.13179
|
PTQ1.61: Push the Real Limit of Extremely Low-Bit Post-Training
Quantization Methods for Large Language Models
|
cs.LG cs.AI
|
Large Language Models (LLMs) suffer severe performance degradation when
facing extremely low-bit (sub-2-bit) quantization. Several existing sub-2-bit
post-training quantization (PTQ) methods adopt a mixed-precision scheme that
leverages an unstructured, fine-grained mask to explicitly distinguish salient
weights, which introduces an extra 1 bit or more per weight. To explore
the real limit of PTQ, we propose an extremely low-bit PTQ method called
PTQ1.61, which enables weight quantization to 1.61-bit for the first time.
Specifically, we first introduce a one-dimensional structured mask with a
negligible additional 0.0002 bits per weight, derived from input activations
with the aim of reducing the upper bound of the quantization error, to allocate
the corresponding salient weight channels to 4-bit. For binarizing the
non-salient channels, an efficient block-wise scaling-factor optimization
framework is then presented that takes implicit row-wise correlations and
angular biases into account. Unlike prior works that concentrate on adjusting
quantization
methodologies, we further propose a novel paradigm called quantization
preprocessing, where we argue that transforming the weight distribution of the
pretrained model before quantization can alleviate the difficulty in
per-channel extremely low-bit PTQ. Extensive experiments indicate our PTQ1.61
achieves state-of-the-art performance in extremely low-bit quantization. Codes
are available at https://github.com/zjq0455/PTQ1.61.
|
2502.13180
|
Uncertain Multi-Objective Recommendation via Orthogonal Meta-Learning
Enhanced Bayesian Optimization
|
cs.LG cs.AI
|
Recommender systems (RSs) play a crucial role in shaping our digital
interactions, influencing how we access and engage with information across
various domains. Traditional research has predominantly centered on maximizing
recommendation accuracy, often leading to unintended side effects such as echo
chambers and constrained user experiences. Drawing inspiration from autonomous
driving, we introduce a novel framework that categorizes RS autonomy into five
distinct levels, ranging from basic rule-based accuracy-driven systems to
behavior-aware, uncertain multi-objective RSs - where users may have varying
needs, such as accuracy, diversity, and fairness. In response, we propose an
approach that dynamically identifies and optimizes multiple objectives based on
individual user preferences, fostering more ethical and intelligent
user-centric recommendations. To navigate the uncertainty inherent in
multi-objective RSs, we develop a Bayesian optimization (BO) framework that
captures personalized trade-offs between different objectives while accounting
for their uncertain interdependencies. Furthermore, we introduce an orthogonal
meta-learning paradigm to enhance BO efficiency and effectiveness by leveraging
shared knowledge across similar tasks and mitigating conflicts among objectives
through the discovery of orthogonal information. Finally, extensive empirical
evaluations demonstrate the effectiveness of our method in optimizing uncertain
multi-objectives for individual users, paving the way for more adaptive and
user-focused RSs.
|
2502.13181
|
RingFormer: Rethinking Recurrent Transformer with Adaptive Level Signals
|
cs.LG cs.AI
|
Transformers have achieved great success in effectively processing sequential
data such as text. Their architecture, consisting of several attention and
feedforward blocks, can model relations between elements of a sequence in a
parallel manner, which makes them very efficient to train and effective in
sequence modeling. Even though they have shown strong performance in processing
sequential data, their parameter count is considerably larger when
compared to other architectures such as RNN- and CNN-based models. Therefore,
several approaches have explored parameter sharing and recurrence in
Transformer models to address their computational demands. However, such
methods struggle to maintain high performance compared to the original
Transformer model. To address this challenge, we propose our novel approach,
RingFormer, which employs one Transformer layer that processes input repeatedly
in a circular, ring-like manner, while utilizing low-rank matrices to generate
input-dependent level signals. This allows us to reduce the model parameters
substantially while maintaining high performance in a variety of tasks such as
translation and image classification, as validated in the experiments.
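The ring-like reuse of a single layer with input-dependent low-rank level signals can be illustrated with a toy forward pass. Everything here (the stand-in layer, the modulation rule, all dimensions) is an assumption for illustration, not the RingFormer architecture itself:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, steps = 8, 2, 4          # model dim, low rank, number of ring passes

# One shared "layer": a single weight matrix stands in for attention + FFN.
W = rng.normal(scale=0.1, size=(d, d))
# Low-rank factors generating an input-dependent level signal per pass.
A = rng.normal(scale=0.1, size=(steps, d, r))
B = rng.normal(scale=0.1, size=(steps, r, d))

def ring_forward(x):
    """Apply the same layer `steps` times; each pass is modulated by a
    level signal computed from the current input via low-rank matrices.
    Illustrative sketch only, not the authors' exact architecture."""
    h = x
    for t in range(steps):
        level = h @ A[t] @ B[t]           # input-dependent level signal
        h = np.tanh((h + level) @ W) + h  # shared layer with residual
    return h

x = rng.normal(size=(3, d))               # batch of 3 token vectors
out = ring_forward(x)
print(out.shape)                          # (3, 8)
```

The parameter saving is visible even at toy scale: one shared `W` plus per-pass rank-`r` factors costs far fewer parameters than `steps` independent dense layers, while the level signals keep the passes from being identical.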
|
2502.13182
|
Fundus2Globe: Generative AI-Driven 3D Digital Twins for Personalized
Myopia Management
|
eess.IV cs.CV eess.SP
|
Myopia, projected to affect 50% of the global population by 2050, is a leading
cause of vision loss. Eyes with pathological myopia exhibit distinctive shape
distributions, which are closely linked to the progression of
vision-threatening complications. Recent insights into eye-shape-based
biomarkers rely on magnetic resonance imaging (MRI); however, MRI is costly and
impractical in routine ophthalmology clinics. We present Fundus2Globe, the
first AI framework that synthesizes patient-specific 3D eye globes from
ubiquitous 2D color fundus photographs (CFPs) and routine metadata (axial
length, spherical equivalent), bypassing MRI dependency. By integrating a 3D
morphable eye model (encoding biomechanical shape priors) with a latent
diffusion model, our approach achieves submillimeter accuracy in reconstructing
posterior ocular anatomy efficiently. Fundus2Globe uniquely quantifies how
vision-threatening lesions (e.g., staphylomas) in CFPs correlate with
MRI-validated 3D shape abnormalities, enabling clinicians to simulate posterior
segment changes in response to refractive shifts. External validation
demonstrates its robust generation performance, ensuring fairness across
underrepresented groups. By transforming 2D fundus imaging into 3D digital
replicas of ocular structures, Fundus2Globe is a gateway for precision
ophthalmology, laying the foundation for AI-driven, personalized myopia
management.
|
2502.13183
|
Synthetic generation of 2D data records based on Autoencoders
|
eess.IV cs.LG
|
Gas Chromatography coupled with Ion Mobility Spectrometry (GC-IMS) is a
dual-separation analytical technique widely used for identifying components in
gaseous samples by separating and analysing the arrival times of their
constituent species. Data generated by GC-IMS is typically represented as
two-dimensional spectra, providing rich information but posing challenges for
data-driven analysis due to limited labelled datasets. This study introduces a
novel method for generating synthetic 2D spectra using a deep learning
framework based on Autoencoders. Although applied here to GC-IMS data, the
approach is broadly applicable to any two-dimensional spectral measurements
where labelled data are scarce. When performing component classification on a
labelled dataset of GC-IMS records, the addition of synthesized records
significantly improved the classification performance, demonstrating the
method's potential for overcoming dataset limitations in machine learning
frameworks.
|
2502.13185
|
CondensNet: Enabling stable long-term climate simulations via hybrid
deep learning models with adaptive physical constraints
|
physics.ao-ph cs.AI cs.LG
|
Accurate and efficient climate simulations are crucial for understanding
Earth's evolving climate. However, current general circulation models (GCMs)
face challenges in capturing unresolved physical processes, such as clouds and
convection. A common solution is to adopt cloud-resolving models, which provide
more accurate results than the standard subgrid parametrisation schemes
typically used in GCMs. However, cloud-resolving models, also referred to as
superparameterizations, remain computationally prohibitive. Hybrid modeling,
which integrates deep learning with equation-based GCMs, offers a promising
alternative but often struggles with long-term stability and accuracy issues.
In this work, we find that water vapor oversaturation during condensation is a
key factor compromising the stability of hybrid models. To address this, we
introduce CondensNet, a novel neural network architecture that embeds a
self-adaptive physical constraint to correct unphysical condensation processes.
CondensNet effectively mitigates water vapor oversaturation, enhancing
simulation stability while maintaining accuracy and improving computational
efficiency compared to super parameterization schemes.
We integrate CondensNet into a GCM to form PCNN-GCM (Physics-Constrained
Neural Network GCM), a hybrid deep learning framework designed for long-term
stable climate simulations in real-world conditions, including ocean and land.
PCNN-GCM represents a significant milestone in hybrid climate modeling, as it
shows a novel way to incorporate physical constraints adaptively, paving the
way for accurate, lightweight, and stable long-term climate simulations.
|
2502.13186
|
Model selection for behavioral learning data and applications to
contextual bandits
|
stat.ML cs.LG
|
Learning for animals or humans is the process that leads to behaviors better
adapted to the environment. This process highly depends on the individual that
learns and is usually observed only through the individual's actions. This
article presents ways to use this individual behavioral data to find the model
that best explains how the individual learns. We propose two model selection
methods: a general hold-out procedure and an AIC-type criterion, both adapted
to non-stationary dependent data. We provide theoretical error bounds for these
methods that are close to those of the standard i.i.d. case. To compare these
approaches, we apply them to contextual bandit models and illustrate their use
on both synthetic and experimental learning data in a human categorization
task.
|
2502.13187
|
A Survey of Sim-to-Real Methods in RL: Progress, Prospects and
Challenges with Foundation Models
|
cs.LG cs.AI cs.RO
|
Deep Reinforcement Learning (RL) has been explored and verified to be
effective in solving decision-making tasks in various domains, such as
robotics, transportation, recommender systems, etc. It learns from the
interaction with environments and updates the policy using the collected
experience. However, due to limited real-world data and the unbearable
consequences of taking detrimental actions, the learning of RL policies is
mainly restricted to simulators. This practice guarantees safety in learning
but introduces an inevitable sim-to-real gap upon deployment, thus causing
degraded performance and risks in execution. There have been attempts to solve
sim-to-real problems in different domains with various techniques, especially
in the current era, in which emerging techniques such as large foundation
models and language models have cast new light on sim-to-real transfer. This
survey paper, to
the best of our knowledge, is the first taxonomy that formally frames the
sim-to-real techniques from key elements of the Markov Decision Process (State,
Action, Transition, and Reward). Based on the framework, we cover comprehensive
literature from the classic to the most advanced methods including the
sim-to-real techniques empowered by foundation models, and we also discuss the
specialties that are worth attention in different domains of sim-to-real
problems. Then we summarize the formal evaluation process of sim-to-real
performance with accessible code or benchmarks. The challenges and
opportunities are also presented to encourage future exploration of this
direction. We are actively maintaining a repository to include the most
up-to-date sim-to-real research outcomes to help researchers in their work.
|
2502.13188
|
Autonomous Vehicles Using Multi-Agent Reinforcement Learning for Routing
Decisions Can Harm Urban Traffic
|
cs.MA cs.LG cs.RO
|
Autonomous vehicles (AVs) using Multi-Agent Reinforcement Learning (MARL) for
simultaneous route optimization may destabilize traffic environments, with
human drivers possibly experiencing longer travel times. We study this
interaction by simulating human drivers and AVs. Our experiments with standard
MARL algorithms reveal that, even in trivial cases, policies often fail to
converge to an optimal solution or require long training periods. The problem
is amplified by the fact that we cannot rely entirely on simulated training, as
there are no accurate models of human routing behavior. At the same time,
real-world training in cities risks destabilizing urban traffic systems,
increasing externalities, such as $CO_2$ emissions, and introducing
non-stationarity as human drivers adapt unpredictably to AV behaviors.
Centralization can improve convergence in some cases; however, it raises
privacy concerns regarding travelers' destination data. In this position paper,
we argue that future research must prioritize realistic benchmarks, cautious
deployment strategies, and tools for monitoring and regulating AV routing
behaviors to ensure sustainable and equitable urban mobility systems.
|
2502.13189
|
MoBA: Mixture of Block Attention for Long-Context LLMs
|
cs.LG cs.AI cs.CL
|
Scaling the effective context length is essential for advancing large
language models (LLMs) toward artificial general intelligence (AGI). However,
the quadratic increase in computational complexity inherent in traditional
attention mechanisms presents a prohibitive overhead. Existing approaches
either impose strongly biased structures, such as sink or window attention,
which are task-specific, or radically modify the attention mechanism into
linear approximations, whose performance in complex reasoning tasks remains
inadequately explored.
In this work, we propose a solution that adheres to the ``less structure''
principle, allowing the model to determine where to attend autonomously, rather
than introducing predefined biases. We introduce Mixture of Block Attention
(MoBA), an innovative approach that applies the principles of Mixture of
Experts (MoE) to the attention mechanism. This novel architecture demonstrates
superior performance on long-context tasks while offering a key advantage: the
ability to seamlessly transition between full and sparse attention, enhancing
efficiency without the risk of compromising performance. MoBA has already been
deployed to support Kimi's long-context requests and demonstrates significant
advancements in efficient attention computation for LLMs. Our code is available
at https://github.com/MoonshotAI/MoBA.
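The block-attention idea — route each query to a few key/value blocks, MoE-style, and attend only within them — can be sketched as follows. The centroid gating and single-query setup are simplifying assumptions for illustration, not the released MoBA implementation:

```python
import numpy as np

def moba_attention(q, K, V, block, topk):
    """Sparse attention over the top-k key blocks per query, gated by
    the query's affinity to each block's mean key (illustrative sketch
    of the block-attention idea, not the released implementation)."""
    n, d = K.shape
    n_blocks = n // block
    Kb = K[: n_blocks * block].reshape(n_blocks, block, d)
    Vb = V[: n_blocks * block].reshape(n_blocks, block, d)
    gate = q @ Kb.mean(axis=1).T                  # affinity to block centroids
    chosen = np.argsort(-gate)[:topk]             # MoE-style top-k routing
    Ks, Vs = Kb[chosen].reshape(-1, d), Vb[chosen].reshape(-1, d)
    logits = q @ Ks.T / np.sqrt(d)                # attend only within chosen blocks
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return w @ Vs

rng = np.random.default_rng(2)
d, n = 16, 64
q, K, V = rng.normal(size=d), rng.normal(size=(n, d)), rng.normal(size=(n, d))
out = moba_attention(q, K, V, block=8, topk=2)    # attends to 16 of 64 keys
print(out.shape)                                  # (16,)
```

Setting `topk` equal to the number of blocks recovers dense attention exactly, which mirrors the seamless full/sparse transition highlighted in the abstract.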
|
2502.13190
|
Application of machine learning algorithm in temperature field
reconstruction
|
cs.LG physics.flu-dyn
|
This study focuses on the stratification patterns and dynamic evolution of
reservoir water temperatures, aiming to estimate and reconstruct the
temperature field using limited and noisy local measurement data. Due to
complex measurement environments and technical limitations, obtaining complete
temperature information for reservoirs is highly challenging. Therefore,
accurately reconstructing the temperature field from a small number of local
data points has become a critical scientific issue. To address this, the study
employs Proper Orthogonal Decomposition (POD) and sparse representation methods
to reconstruct the temperature field based on temperature data from a limited
number of local measurement points. The results indicate that satisfactory
reconstruction can be achieved when the number of POD basis functions is set to
2 and the number of measurement points is 10. Under different water intake
depths, the reconstruction errors of both POD and sparse representation methods
remain stable at around 0.15, fully validating the effectiveness of these
methods in reconstructing the temperature field based on limited local
temperature data. Additionally, the study further explores the distribution
characteristics of reconstruction errors for POD and sparse representation
methods under different water level intervals, analyzing the optimal
measurement point layout scheme and potential limitations of the reconstruction
methods in this case. This research not only effectively reduces measurement
costs and computational resource consumption but also provides a new technical
approach for reservoir temperature analysis, holding significant theoretical
and practical importance.
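The reconstruction pipeline described above (a POD basis extracted from snapshot data, then a least-squares fit to a handful of sensor readings) can be sketched on synthetic data. The synthetic temperature modes, noise level, and sensor layout are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_grid, n_snap = 200, 50            # spatial points, training snapshots

# Synthetic temperature snapshots: two smooth spatial modes plus noise.
z = np.linspace(0, 1, n_grid)
modes = np.stack([np.sin(np.pi * z), np.cos(2 * np.pi * z)])
coeffs = rng.normal(size=(n_snap, 2))
snapshots = coeffs @ modes + 0.01 * rng.normal(size=(n_snap, n_grid))

# POD basis: leading right singular vectors of the snapshot matrix.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
basis = Vt[:2]                      # 2 POD basis functions, as in the study

# Reconstruct a new field from 10 local measurement points via least squares.
true_field = 1.3 * modes[0] - 0.7 * modes[1]
sensors = rng.choice(n_grid, size=10, replace=False)
a, *_ = np.linalg.lstsq(basis[:, sensors].T, true_field[sensors], rcond=None)
reconstructed = a @ basis

err = np.linalg.norm(reconstructed - true_field) / np.linalg.norm(true_field)
print(f"relative reconstruction error: {err:.3f}")
```

With only 10 sensors and 2 basis functions the full field is recovered to within a few percent here, mirroring the study's finding that a small basis and few measurement points suffice.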
|
2502.13191
|
On the Privacy Risks of Spiking Neural Networks: A Membership Inference
Analysis
|
cs.LG cs.AI
|
Spiking Neural Networks (SNNs) are increasingly explored for their energy
efficiency and robustness in real-world applications, yet their privacy risks
remain largely unexamined. In this work, we investigate the susceptibility of
SNNs to Membership Inference Attacks (MIAs) -- a major privacy threat where an
adversary attempts to determine whether a given sample was part of the training
dataset. While prior work suggests that SNNs may offer inherent robustness due
to their discrete, event-driven nature, we find that this resilience diminishes
as latency (T) increases. Furthermore, we introduce an input dropout strategy
under the black-box setting that significantly enhances membership inference in
SNNs. Our findings challenge the assumption that SNNs are inherently more
secure: even though they are expected to be more robust, our results reveal
that SNNs exhibit privacy vulnerabilities comparable to those of Artificial
Neural Networks (ANNs). Our code is available at
https://anonymous.4open.science/r/MIA_SNN-3610.
|
2502.13193
|
Private Text Generation by Seeding Large Language Model Prompts
|
cs.CL
|
We explore how private synthetic text can be generated by suitably prompting
a large language model (LLM). This addresses a challenge for organizations like
hospitals, which hold sensitive text data like patient medical records, and
wish to share it in order to train machine learning models for medical tasks,
while preserving patient privacy. Methods that rely on training or finetuning a
model may be out of reach, either due to API limits of third-party LLMs, or due
to ethical and legal prohibitions on sharing the private data with the LLM
itself.
We propose Differentially Private Keyphrase Prompt Seeding (DP-KPS), a method
that generates a private synthetic text corpus from a sensitive input corpus,
by accessing an LLM only through privatized prompts. It is based on seeding the
prompts with private samples from a distribution over phrase embeddings, thus
capturing the input corpus while achieving requisite output diversity and
maintaining differential privacy. We evaluate DP-KPS on downstream ML text
classification tasks, and show that the corpora it generates preserve much of
the predictive power of the original ones. Our findings offer hope that
institutions can reap ML insights by privately sharing data with simple prompts
and little compute.
|
2502.13194
|
Conditional Max-Sum for Asynchronous Multiagent Decision Making
|
cs.MA cs.AI
|
In this paper we present a novel approach for multiagent decision making in
dynamic environments based on Factor Graphs and the Max-Sum algorithm,
considering asynchronous variable reassignments and distributed message-passing
among agents. Motivated by the challenging domain of lane-free traffic where
automated vehicles can communicate and coordinate as agents, we propose a more
realistic communication framework for Factor Graph formulations that satisfies
the above-mentioned restrictions, along with Conditional Max-Sum: an extension
of Max-Sum with a revised message-passing process that is better suited for
asynchronous settings. The overall application in lane-free traffic can be
viewed as a hybrid system where the Factor Graph formulation undertakes the
strategic decision making of vehicles, that of desired lateral alignment in a
coordinated manner; and acts on top of a rule-based method we devise that
provides a structured representation of the lane-free environment for the
factors, while also handling the underlying control of vehicles regarding core
operations and safety. Our experimental evaluation showcases the capabilities
of the proposed framework in problems with intense coordination needs when
compared to a domain-specific baseline without communication, and an increased
adeptness of Conditional Max-Sum with respect to the standard algorithm.
|
2502.13195
|
Linguistic Generalizations are not Rules: Impacts on Evaluation of LMs
|
cs.CL
|
Linguistic evaluations of how well LMs generalize to produce or understand
novel text often implicitly take for granted that natural languages are
generated by symbolic rules. Grammaticality is thought to be determined by
whether or not sentences obey such rules. Interpretation is believed to be
compositionally generated by syntactic rules operating on meaningful words.
Semantic parsing is intended to map sentences into formal logic. Failures of
LMs to obey strict rules have been taken to reveal that LMs do not produce or
understand language like humans. Here we suggest that LMs' failures to obey
symbolic rules may be a feature rather than a bug, because natural languages
are not based on rules. New utterances are produced and understood by a
combination of flexible interrelated and context-dependent schemata or
constructions. We encourage researchers to reimagine appropriate benchmarks and
analyses that acknowledge the rich flexible generalizations that comprise
natural languages.
|
2502.13196
|
GS-QA: Comprehensive Quality Assessment Benchmark for Gaussian Splatting
View Synthesis
|
cs.MM cs.CV
|
Gaussian Splatting (GS) offers a promising alternative to Neural Radiance
Fields (NeRF) for real-time 3D scene rendering. Using a set of 3D Gaussians to
represent complex geometry and appearance, GS achieves faster rendering times
and reduced memory consumption compared to the neural network approach used in
NeRF. However, quality assessment of GS-generated static content has not yet
been explored in depth. This paper describes a subjective quality assessment study
that aims to evaluate synthesized videos obtained with several static GS
state-of-the-art methods. The methods were applied to diverse visual scenes,
covering both 360-degree and forward-facing (FF) camera trajectories. Moreover,
the performance of 18 objective quality metrics was analyzed using the scores
resulting from the subjective study, providing insights into their strengths,
limitations, and alignment with human perception. All videos and scores are
made available, providing a comprehensive database that can be used as a
benchmark for GS view synthesis and objective quality metrics.
|
2502.13198
|
Enhancing Machine Learning Performance through Intelligent Data Quality
Assessment: An Unsupervised Data-centric Framework
|
cs.LG cs.AI stat.ML
|
Poor data quality limits the potential of Machine Learning (ML) and
weakens high-performing ML software systems. Nowadays, data are more prone to
the risk of poor quality due to their increasing volume and complexity.
Therefore, tedious and time-consuming work goes into data preparation and
improvement before moving further in the ML pipeline. To address this
challenge, we propose an intelligent data-centric evaluation framework that can
identify high-quality data and improve the performance of an ML system. The
proposed framework combines the curation of quality measurements and
unsupervised learning to distinguish high- and low-quality data. The framework
is designed to integrate flexible and general-purpose methods so that it can be
deployed in various domains and applications. To validate the outcomes of the
designed framework, we implemented it in a real-world use case from the field
of analytical chemistry, where it is tested on three datasets of anti-sense
oligonucleotides. A domain expert is consulted to identify the relevant quality
measurements and evaluate the outcomes of the framework. The results show that
the quality-centric data evaluation framework identifies the characteristics of
high-quality data that guide the conduct of efficient laboratory experiments
and consequently improve the performance of the ML system.
|
2502.13199
|
The Role of GitHub Copilot on Software Development: A Perspective on
Productivity, Security, Best Practices and Future Directions
|
cs.SE cs.AI
|
GitHub Copilot is transforming software development by automating tasks and
boosting productivity through AI-driven code generation. In this paper, we
conduct a literature survey to synthesize insights on Copilot's impact on
productivity and security. We review academic journal databases, industry
reports, and official documentation to highlight key findings and challenges.
While Copilot accelerates coding and prototyping, concerns over security
vulnerabilities and intellectual property risks persist. Drawing from the
literature, we provide a perspective on best practices and future directions
for responsible AI adoption in software engineering, offering actionable
insights for developers and organizations to integrate Copilot effectively
while maintaining high standards of quality and security.
|
2502.13200
|
Learning To Explore With Predictive World Model Via Self-Supervised
Learning
|
cs.LG cs.AI
|
Autonomous artificial agents must be able to learn behaviors in complex
environments without humans designing tasks and rewards. Designing these
functions for each environment is not feasible, thus motivating the
development of intrinsic reward functions. In this paper, we propose using
several cognitive elements that have long been neglected to build an
internal world model for an intrinsically motivated agent. Our agent carries
out satisfactory interactions with the environment, learning complex behaviors
without needing previously designed reward functions. We used 18 Atari games to
evaluate what cognitive skills emerge in games that require reactive and
deliberative behaviors. Our results show superior performance compared to the
state-of-the-art in many test cases with dense and sparse rewards.
|
2502.13207
|
Thinking Outside the (Gray) Box: A Context-Based Score for Assessing
Value and Originality in Neural Text Generation
|
cs.CL cs.AI cs.CY cs.LG
|
Despite the increasing use of large language models for creative tasks, their
outputs often lack diversity. Common solutions, such as sampling at higher
temperatures, can compromise the quality of the results. Drawing on information
theory, we propose a context-based score to quantitatively evaluate value and
originality. This score incentivizes accuracy and adherence to the request
while fostering divergence from the learned distribution. We propose using our
score as a reward in a reinforcement learning framework to fine-tune large
language models for maximum performance. We validate our strategy through
experiments in poetry generation and math problem solving, demonstrating that
it enhances the value and originality of the generated solutions.
|
2502.13220
|
The impact of conformer quality on learned representations of molecular
conformer ensembles
|
cs.LG physics.chem-ph
|
Training machine learning models to predict properties of molecular conformer
ensembles is an increasingly popular strategy to accelerate the conformational
analysis of drug-like small molecules, reactive organic substrates, and
homogeneous catalysts. For high-throughput analyses especially, trained
surrogate models can help circumvent traditional approaches to conformational
analysis that rely on expensive conformer searches and geometry optimizations.
Here, we question how the performance of surrogate models for predicting 3D
conformer-dependent properties (of a single, active conformer) is affected by
the quality of the 3D conformers used as their input. How well do lower-quality
conformers inform the prediction of properties of higher-quality conformers?
Does the fidelity of geometry optimization matter when encoding random
conformers? For models that encode sets of conformers, how does the presence of
the active conformer that induces the target property affect model accuracy?
How do predictions from a surrogate model compare to estimating the properties
from cheap ensembles themselves? We explore these questions in the context of
predicting Sterimol parameters of conformer ensembles optimized with density
functional theory. Although answers will be case-specific, our analyses provide
a valuable perspective on 3D representation learning models and raise practical
considerations regarding when conformer quality matters.
|
2502.13221
|
Two Tickets are Better than One: Fair and Accurate Hiring Under
Strategic LLM Manipulations
|
cs.LG cs.AI cs.CY cs.GT
|
In an era of increasingly capable foundation models, job seekers are turning
to generative AI tools to enhance their application materials. However, unequal
access to and knowledge about generative AI tools can harm both employers and
candidates by reducing the accuracy of hiring decisions and giving some
candidates an unfair advantage. To address these challenges, we introduce a new
variant of the strategic classification framework tailored to manipulations
performed using large language models, accommodating varying levels of
manipulations and stochastic outcomes. We propose a ``two-ticket'' scheme,
where the hiring algorithm applies an additional manipulation to each submitted
resume and considers this manipulated version together with the original
submitted resume. We establish theoretical guarantees for this scheme, showing
improvements for both the fairness and accuracy of hiring decisions when the
true positive rate is maximized subject to a no false positives constraint. We
further generalize this approach to an $n$-ticket scheme and prove that hiring
outcomes converge to a fixed, group-independent decision, eliminating
disparities arising from differential LLM access. Finally, we empirically
validate our framework and the performance of our two-ticket scheme on real
resumes using an open-source resume screening tool.
|
2502.13228
|
Conformal Prediction as Bayesian Quadrature
|
cs.LG cs.AI stat.ML
|
As machine learning-based prediction systems are increasingly used in
high-stakes situations, it is important to understand how such predictive
models will perform upon deployment. Distribution-free uncertainty
quantification techniques such as conformal prediction provide guarantees about
the loss black-box models will incur even when the details of the models are
hidden. However, such methods are based on frequentist probability, which
unduly limits their applicability. We revisit the central aspects of conformal
prediction from a Bayesian perspective and thereby illuminate the shortcomings
of frequentist guarantees. We propose a practical alternative based on Bayesian
quadrature that provides interpretable guarantees and offers a richer
representation of the likely range of losses to be observed at test time.
|
2502.13233
|
SearchRAG: Can Search Engines Be Helpful for LLM-based Medical Question
Answering?
|
cs.CL cs.AI cs.IR cs.IT math.IT
|
Large Language Models (LLMs) have shown remarkable capabilities in general
domains but often struggle with tasks requiring specialized knowledge.
Conventional Retrieval-Augmented Generation (RAG) techniques typically retrieve
external information from static knowledge bases, which can be outdated or
incomplete, missing fine-grained clinical details essential for accurate
medical question answering. In this work, we propose SearchRAG, a novel
framework that overcomes these limitations by leveraging real-time search
engines. Our method employs synthetic query generation to convert complex
medical questions into search-engine-friendly queries and utilizes
uncertainty-based knowledge selection to filter and incorporate the most
relevant and informative medical knowledge into the LLM's input. Experimental
results demonstrate that our method significantly improves response accuracy in
medical question answering tasks, particularly for complex questions requiring
detailed and up-to-date knowledge.
|
2502.13234
|
MotionMatcher: Motion Customization of Text-to-Video Diffusion Models
via Motion Feature Matching
|
cs.CV cs.AI cs.LG
|
Text-to-video (T2V) diffusion models have shown promising capabilities in
synthesizing realistic videos from input text prompts. However, the input text
description alone provides limited control over precise object movements and
camera framing. In this work, we tackle the motion customization problem, where
a reference video is provided as motion guidance. While most existing methods
choose to fine-tune pre-trained diffusion models to reconstruct the frame
differences of the reference video, we observe that such a strategy suffers
from content leakage from the reference video and cannot capture complex motion
accurately.
customization framework that fine-tunes the pre-trained T2V diffusion model at
the feature level. Instead of using pixel-level objectives, MotionMatcher
compares high-level, spatio-temporal motion features to fine-tune diffusion
models, ensuring precise motion learning. For the sake of memory efficiency and
accessibility, we utilize a pre-trained T2V diffusion model, which contains
considerable prior knowledge about video motion, to compute these motion
features. In our experiments, we demonstrate state-of-the-art motion
customization performance, validating the design of our framework.
|
2502.13243
|
Learning the Universe: Learning to Optimize Cosmic Initial Conditions
with Non-Differentiable Structure Formation Models
|
astro-ph.CO astro-ph.GA cs.LG
|
Making the most of next-generation galaxy clustering surveys requires
overcoming challenges in complex, non-linear modelling to access the
significant amount of information at smaller cosmological scales. Field-level
inference has provided a unique opportunity beyond summary statistics to use
all of the information of the galaxy distribution. However, addressing current
challenges often necessitates numerical modelling that incorporates
non-differentiable components, hindering the use of efficient gradient-based
inference methods. In this paper, we introduce Learning the Universe by
Learning to Optimize (LULO), a gradient-free framework for reconstructing the
3D cosmic initial conditions. Our approach advances deep learning to train an
optimization algorithm capable of fitting state-of-the-art non-differentiable
simulators to data at the field level. Importantly, the neural optimizer solely
acts as a search engine in an iterative scheme, always maintaining full physics
simulations in the loop, ensuring scalability and reliability. We demonstrate
the method by accurately reconstructing initial conditions from
$M_{200\mathrm{c}}$ halos identified in a dark matter-only $N$-body simulation
with a spherical overdensity algorithm. The derived dark matter and halo
overdensity fields exhibit $\geq80\%$ cross-correlation with the ground truth
into the non-linear regime $k \sim 1h$ Mpc$^{-1}$. Additional cosmological
tests reveal accurate recovery of the power spectra, bispectra, halo mass
function, and velocities. With this work, we demonstrate a promising path
forward to non-linear field-level inference surpassing the requirement of a
differentiable physics model.
|
2502.13245
|
Range Retrieval with Graph-Based Indices
|
cs.IR
|
Retrieving points based on proximity in a high-dimensional vector space is a
crucial step in information retrieval applications. The approximate nearest
neighbor search (ANNS) problem, which identifies the $k$ nearest neighbors for
a query (approximately, since exact search is hard), has been extensively studied in
recent years. However, comparatively little attention has been paid to the
related problem of finding all points within a given distance of a query, the
range retrieval problem, despite its applications in areas such as duplicate
detection, plagiarism checking, and facial recognition. In this paper, we
present a set of algorithms for range retrieval on graph-based vector indices,
which are known to achieve excellent performance on ANNS queries. Since a range
query may have anywhere from no matching results to thousands of matching
results in the database, we introduce a set of range retrieval algorithms based
on modifications of the standard graph search that adapt to terminate quickly
on queries in the former group, and to put more resources into finding results
for the latter group. Due to the lack of existing benchmarks for range
retrieval, we also undertake a comprehensive study of range characteristics of
existing embedding datasets, and select a suitable range retrieval radius for
eight existing datasets with up to 100 million points in addition to the one
existing benchmark. We test our algorithms on these datasets, and find up to
100x improvement in query throughput over a naive baseline approach, with 5-10x
improvement on average, and strong performance up to 100 million data points.
|
2502.13246
|
When People are Floods: Analyzing Dehumanizing Metaphors in Immigration
Discourse with Large Language Models
|
cs.CL cs.CY
|
Metaphor, discussing one concept in terms of another, is abundant in politics
and can shape how people understand important issues. We develop a
computational approach to measure metaphorical language, focusing on
immigration discourse on social media. Grounded in qualitative social science
research, we identify seven concepts evoked in immigration discourse (e.g.
"water" or "vermin"). We propose and evaluate a novel technique that leverages
both word-level and document-level signals to measure metaphor with respect to
these concepts. We then study the relationship between metaphor, political
ideology, and user engagement in 400K US tweets about immigration. While
conservatives tend to use dehumanizing metaphors more than liberals, this
effect varies widely across concepts. Moreover, creature-related metaphor is
associated with more retweets, especially for liberal authors. Our work
highlights the potential for computational methods to complement qualitative
approaches in understanding subtle and implicit language in political
discourse.
|
2502.13247
|
Grounding LLM Reasoning with Knowledge Graphs
|
cs.CL
|
Knowledge Graphs (KGs) are valuable tools for representing relationships
between entities in a structured format. Traditionally, these knowledge bases
are queried to extract specific information. However, question-answering (QA)
over such KGs poses a challenge due to the intrinsic complexity of natural
language compared to the structured format and the size of these graphs.
Despite these challenges, the structured nature of KGs can provide a solid
foundation for grounding the outputs of Large Language Models (LLMs), offering
organizations increased reliability and control.
Recent advancements in LLMs have introduced reasoning methods at inference
time to improve their performance and maximize their capabilities. In this
work, we propose integrating these reasoning strategies with KGs to anchor
every step or "thought" of the reasoning chains in KG data. Specifically, we
evaluate both agentic and automated search methods across several reasoning
strategies, including Chain-of-Thought (CoT), Tree-of-Thought (ToT), and
Graph-of-Thought (GoT), using GRBench, a benchmark dataset for graph reasoning
with domain-specific graphs. Our experiments demonstrate that this approach
consistently outperforms baseline models, highlighting the benefits of
grounding LLM reasoning processes in structured KG data.
|
2502.13248
|
Communication Strategy on Macro-and-Micro Traffic State in Cooperative
Deep Reinforcement Learning for Regional Traffic Signal Control
|
cs.MA cs.AI cs.LG
|
Adaptive Traffic Signal Control (ATSC) has become a popular research topic in
intelligent transportation systems. Regional Traffic Signal Control (RTSC)
using the Multi-agent Deep Reinforcement Learning (MADRL) technique has become
a promising approach for ATSC due to its ability to achieve the optimum
trade-off between scalability and optimality. Most existing RTSC approaches
partition a traffic network into several disjoint regions, followed by applying
centralized reinforcement learning techniques to each region. However, the
pursuit of cooperation among RTSC agents still remains an open issue and no
communication strategy for RTSC agents has been investigated. In this paper, we
propose communication strategies to capture the correlation of micro-traffic
states among lanes and the correlation of macro-traffic states among
intersections. We first justify that the evolution equation of the RTSC process is
Markovian via a system of store-and-forward queues. Next, based on the
evolution equation, we propose two GAT-Aggregated (GA2) communication
modules--GA2-Naive and GA2-Aug to extract both intra-region and inter-region
correlations between macro and micro traffic states. While GA2-Naive only
considers the movements at each intersection, GA2-Aug also considers the
lane-changing behavior of vehicles. The two proposed communication modules are
then integrated into two existing RTSC frameworks--RegionLight and
Regional-DRL. Experimental results demonstrate that both GA2-Naive and GA2-Aug
effectively improve the performance of existing RTSC frameworks under both real
and synthetic scenarios. Hyperparameter testing also reveals the robustness and
potential of our communication modules in large-scale traffic networks.
|
2502.13249
|
Evidence of Replica Symmetry Breaking under the Nishimori conditions in
epidemic inference on graphs
|
cond-mat.dis-nn cond-mat.stat-mech cs.IT cs.LG math.IT physics.soc-ph
|
In Bayesian inference, computing the posterior distribution from the data is
typically a non-trivial problem, which usually requires approximations such as
mean-field approaches or numerical methods, like Markov chain Monte Carlo.
Being a high-dimensional distribution over a set of correlated variables, the
posterior distribution can undergo the notorious replica symmetry breaking
transition. When it happens, several mean-field methods and virtually every
Monte Carlo scheme cannot provide a reasonable approximation to the posterior
and its marginals. Replica symmetry is believed to be guaranteed whenever the
data is generated with known prior and likelihood distributions, namely under
the so-called Nishimori conditions. In this paper, we break this belief by
providing a counter-example showing that, under the Nishimori conditions,
replica symmetry breaking arises. Introducing a simple, geometrical model that
can be thought of as a patient zero retrieval problem in a highly infectious
regime of the epidemic Susceptible-Infectious model, we show that under the
Nishimori conditions, there is evidence of replica symmetry breaking. We
achieve this result by computing the instability of the replica symmetric
cavity method toward the one step replica symmetry broken phase. The origin of
this phenomenon -- replica symmetry breaking under the Nishimori conditions --
is likely due to the correlated disorder appearing in the epidemic models.
|
2502.13251
|
Neural Attention Search
|
cs.CL cs.AI
|
We present Neural Attention Search (NAtS), a framework that automatically
evaluates the importance of each token within a sequence and determines if the
corresponding token can be dropped after several steps. This approach can
efficiently reduce the KV cache sizes required by transformer-based models
during inference and thus reduce inference costs. In this paper, we design a
search space that contains three token types: (i) Global Tokens will be
preserved and queried by all the following tokens. (ii) Local Tokens survive
until the next global token appears. (iii) Sliding Window Tokens influence the
inference of a fixed-size window of subsequent tokens. Similar to the
One-Shot Neural Architecture Search approach, this token-type information can
be learned jointly with the architecture weights via a learnable attention
mask. Experiments on both training a new transformer from scratch and
fine-tuning existing large language models show that NAtS can efficiently
reduce the KV cache size required for the models while maintaining the models'
performance.
|
2502.13252
|
Multilingual Language Model Pretraining using Machine-translated Data
|
cs.CL
|
High-resource languages such as English enable the pretraining of
high-quality large language models (LLMs). The same cannot be said for most
other languages, as LLMs still underperform for non-English languages, likely
due to a gap in the quality and diversity of the available multilingual
pretraining corpora. In this work, we find that machine-translated texts from a
single high-quality source language can contribute significantly to the
pretraining quality of multilingual LLMs. We translate FineWeb-Edu, a
high-quality English web dataset, into nine languages, resulting in a
1.7-trillion-token dataset, which we call TransWebEdu, and pretrain a
1.3B-parameter model, TransWebLLM, from scratch on it. Across nine
non-English reasoning tasks, we show that TransWebLLM matches or outperforms
state-of-the-art multilingual models trained using closed data, such as
Llama3.2, Qwen2.5, and Gemma, despite using an order of magnitude less data. We
demonstrate that adding less than 5% of TransWebEdu as domain-specific
pretraining data sets a new state-of-the-art in Arabic, Italian, Indonesian,
Swahili, and Welsh understanding and commonsense reasoning tasks. To promote
reproducibility, we release our corpus, models, and training pipeline under
Open Source Initiative-approved licenses.
|
2502.13255
|
PCB Renewal: Iterative Reuse of PCB Substrates for Sustainable
Electronic Making
|
cs.HC cs.CY cs.RO
|
PCB (printed circuit board) substrates are often single-use, leading to
material waste in electronics making. We introduce PCB Renewal, a novel
technique that "erases" and "reconfigures" PCB traces by selectively depositing
conductive epoxy onto outdated areas, transforming isolated paths into
conductive planes that support new traces. We present the PCB Renewal workflow,
evaluate its electrical performance and mechanical durability, and model its
sustainability impact, including material usage, cost, energy consumption, and
time savings. We develop a software plug-in that guides epoxy deposition,
generates updated PCB profiles, and calculates resource usage. To demonstrate
PCB Renewal's effectiveness and versatility, we repurpose a single PCB across
four design iterations spanning three projects: a camera roller, a WiFi radio,
and an ESPboy game console. We also show how an outsourced double-layer PCB can
be reconfigured, transforming it from an LED watch to an interactive cat toy.
The paper concludes with limitations and future directions.
|
2502.13256
|
A Survey of Anomaly Detection in Cyber-Physical Systems
|
cs.CR cs.AI
|
In our increasingly interconnected world, Cyber-Physical Systems (CPS) play a
crucial role in industries like healthcare, transportation, and manufacturing
by combining physical processes with computing power. These systems, however,
face many challenges, especially regarding security and system faults.
Anomalies in CPS may indicate unexpected problems, from sensor malfunctions to
cyber-attacks, and must be detected to prevent failures that can cause harm or
disrupt services. This paper provides an overview of the different ways
researchers have approached anomaly detection in CPS. We categorize and compare
methods like machine learning, deep learning, mathematical models, invariant,
and hybrid techniques. Our goal is to help readers understand the strengths and
weaknesses of these methods and how they can be used to create safer, more
reliable CPS. By identifying the gaps in current solutions, we aim to encourage
future research that will make CPS more secure and adaptive in our increasingly
automated world.
|
2502.13257
|
Random Forest Autoencoders for Guided Representation Learning
|
cs.LG
|
Decades of research have produced robust methods for unsupervised data
visualization, yet supervised visualization--where expert labels guide
representations--remains underexplored, as most supervised
approaches prioritize classification over visualization. Recently, RF-PHATE, a
diffusion-based manifold learning method leveraging random forests and
information geometry, marked significant progress in supervised visualization.
However, its lack of an explicit mapping function limits scalability and
prevents application to unseen data, posing challenges for large datasets and
label-scarce scenarios. To overcome these limitations, we introduce Random
Forest Autoencoders (RF-AE), a neural network-based framework for out-of-sample
kernel extension that combines the flexibility of autoencoders with the
supervised learning strengths of random forests and the geometry captured by
RF-PHATE. RF-AE enables efficient out-of-sample supervised visualization and
outperforms existing methods, including RF-PHATE's standard kernel extension,
in both accuracy and interpretability. Additionally, RF-AE is robust to the
choice of hyper-parameters and generalizes to any kernel-based dimensionality
reduction method.
|
2502.13259
|
HumT DumT: Measuring and controlling human-like language in LLMs
|
cs.CL cs.AI cs.CY
|
Should LLMs generate language that makes them seem human? Human-like language
might improve user experience, but might also lead to overreliance and
stereotyping. Assessing these potential impacts requires a systematic way to
measure human-like tone in LLM outputs. We introduce HumT and SocioT, metrics
for human-like tone and other dimensions of social perceptions in text data
based on relative probabilities from an LLM. By measuring HumT across
preference and usage datasets, we find that users prefer less human-like
outputs from LLMs. HumT also offers insights into the impacts of
anthropomorphism: human-like LLM outputs are highly correlated with warmth,
social closeness, femininity, and low status, which are closely linked to the
aforementioned harms. We introduce DumT, a method using HumT to systematically
control and reduce the degree of human-like tone while preserving model
performance. DumT offers a practical approach for mitigating risks associated
with anthropomorphic language generation.
|
2502.13260
|
Stepwise Perplexity-Guided Refinement for Efficient Chain-of-Thought
Reasoning in Large Language Models
|
cs.CL cs.AI cs.LG
|
Chain-of-Thought (CoT) reasoning, which breaks down complex tasks into
intermediate reasoning steps, has significantly enhanced the performance of
large language models (LLMs) on challenging tasks. However, the detailed
reasoning process in CoT often incurs long generation times and high
computational costs, partly due to the inclusion of unnecessary steps. To
address this, we propose a method to identify critical reasoning steps using
perplexity as a measure of their importance: a step is deemed critical if its
removal causes a significant increase in perplexity. Our method enables models
to focus solely on generating these critical steps. This can be achieved
through two approaches: refining demonstration examples in few-shot CoT or
fine-tuning the model using selected examples that include only critical steps.
Comprehensive experiments validate the effectiveness of our method, which
achieves a better balance between the reasoning accuracy and efficiency of CoT.
|
2502.13263
|
Spectral method for low-dose Poisson and Bernoulli phase retrieval
|
cs.IT math.IT math.PR
|
We consider the problem of phaseless reconstruction from measurements with
Poisson or Bernoulli distributed noise. This is of particular interest in
biological imaging experiments where a low dose of radiation has to be used to
mitigate potential damage of the specimen, resulting in low observed particle
counts. We derive recovery guarantees for the spectral method for these noise
models in the case of Gaussian measurements. Our results give a quantitative
insight into the trade-off between the employed radiation dose per measurement
and the overall sampling complexity.
|
2502.13266
|
A Machine Learning Approach That Beats Large Rubik's Cubes
|
cs.LG cs.DM
|
The paper proposes a novel machine learning-based approach to the pathfinding
problem on extremely large graphs. This method leverages diffusion distance
estimation via a neural network and uses beam search for pathfinding. We
demonstrate its efficiency by finding solutions for 4x4x4 and 5x5x5 Rubik's
cubes with unprecedentedly short solution lengths, outperforming all available
solvers and introducing the first machine learning solver beyond the 3x3x3
case. In particular, it surpasses every single case of the combined best
results in the Kaggle Santa 2023 challenge, which involved over 1,000 teams.
For the 3x3x3 Rubik's cube, our approach achieves an optimality rate exceeding
98%, matching the performance of task-specific solvers and significantly
outperforming prior solutions such as DeepCubeA (60.3%) and EfficientCube
(69.6%). Additionally, our solution is more than 26 times faster in solving
3x3x3 Rubik's cubes while requiring up to 18.5 times less model training time
than the most efficient state-of-the-art competitor.
|
2502.13267
|
BeforeIT.jl: High-Performance Agent-Based Macroeconomics Made Easy
|
cs.MA cs.CE econ.GN q-fin.EC
|
BeforeIT is an open-source software for building and simulating
state-of-the-art macroeconomic agent-based models (macro ABMs) based on the
recently introduced macro ABM developed in [1] and here referred to as the base
model. Written in Julia, it combines extraordinary computational efficiency
with user-friendliness and extensibility. We present the main structure of the
software, demonstrate its ease of use with illustrative examples, and benchmark
its performance. Our benchmarks show that the base model built with BeforeIT is
orders of magnitude faster than a Matlab version, and significantly faster than
Matlab-generated C code. BeforeIT is designed to facilitate reproducibility,
extensibility, and experimentation. As the first open-source, industry-grade
software to build macro ABMs of the type of the base model, BeforeIT can
significantly foster collaboration and innovation in the field of agent-based
macroeconomic modelling. The package, along with its documentation, is freely
available at https://github.com/bancaditalia/BeforeIT.jl under the AGPL-3.0.
|
2502.13268
|
Talking About the Assumption in the Room
|
cs.HC cs.LG
|
The reference to assumptions in how practitioners use or interact with
machine learning (ML) systems is ubiquitous in HCI and responsible ML
discourse. However, what remains unclear from prior works is the
conceptualization of assumptions and how practitioners identify and handle
assumptions throughout their workflows. This leads to confusion about what
assumptions are and what needs to be done with them. We use the concept of an
argument from Informal Logic, a branch of Philosophy, to offer a new
perspective to understand and explicate the confusions surrounding assumptions.
Through semi-structured interviews with 22 ML practitioners, we find what
contributes most to these confusions is how independently assumptions are
constructed, how reactively and reflectively they are handled, and how
nebulously they are recorded. Our study brings the peripheral discussion of
assumptions in ML to the center and presents recommendations for practitioners
to better think about and work with assumptions.
|
2502.13270
|
REALTALK: A 21-Day Real-World Dataset for Long-Term Conversation
|
cs.CL
|
Long-term, open-domain dialogue capabilities are essential for chatbots
aiming to recall past interactions and demonstrate emotional intelligence (EI).
Yet, most existing research relies on synthetic, LLM-generated data, leaving
open questions about real-world conversational patterns. To address this gap,
we introduce REALTALK, a 21-day corpus of authentic messaging app dialogues,
providing a direct benchmark against genuine human interactions.
We first conduct a dataset analysis, focusing on EI attributes and persona
consistency to understand the unique challenges posed by real-world dialogues.
By comparing with LLM-generated conversations, we highlight key differences,
including diverse emotional expressions and variations in persona stability
that synthetic dialogues often fail to capture.
Building on these insights, we introduce two benchmark tasks: (1) persona
simulation where a model continues a conversation on behalf of a specific user
given prior dialogue context; and (2) memory probing where a model answers
targeted questions requiring long-term memory of past interactions.
Our findings reveal that models struggle to simulate a user solely from
dialogue history, while fine-tuning on specific user chats improves persona
emulation. Additionally, existing models face significant challenges in
recalling and leveraging long-term context within real-world conversations.
|
2502.13277
|
HyperGCL: Multi-Modal Graph Contrastive Learning via Learnable
Hypergraph Views
|
cs.LG cs.AI
|
Recent advancements in Graph Contrastive Learning (GCL) have demonstrated
remarkable effectiveness in improving graph representations. However, relying
on predefined augmentations (e.g., node dropping, edge perturbation, attribute
masking) may result in the loss of task-relevant information and a lack of
adaptability to diverse input data. Furthermore, the selection of negative
samples remains rarely explored. In this paper, we introduce HyperGCL, a novel
multimodal GCL framework from a hypergraph perspective. HyperGCL constructs
three distinct hypergraph views by jointly utilizing the input graph's
structure and attributes, enabling a comprehensive integration of multiple
modalities in contrastive learning. A learnable adaptive topology augmentation
technique enhances these views by preserving important relations and filtering
out noise. View-specific encoders capture essential characteristics from each
view, while a network-aware contrastive loss leverages the underlying topology
to define positive and negative samples effectively. Extensive experiments on
benchmark datasets demonstrate that HyperGCL achieves state-of-the-art node
classification performance.
|
2502.13278
|
Performance Evaluation of Sentiment Analysis on Text and Emoji Data
Using End-to-End, Transfer Learning, Distributed and Explainable AI Models
|
cs.CL cs.AI
|
Emojis are frequently used in today's digital world to express thoughts
ranging from the simple to the complex more than ever before. Hence, they are
also used in sentiment analysis and targeted marketing campaigns. In this work,
we performed sentiment analysis on tweets as well as on an emoji dataset from
Kaggle. Since tweets are sentences, we used the Universal Sentence Encoder
(USE) and Sentence Bidirectional Encoder Representations from Transformers
(SBERT) end-to-end sentence embedding models to generate embeddings, which were
then used to train standard fully connected neural network (NN) and LSTM NN
models. We observe that the text classification accuracy was almost the same
for both models, around 98 percent. In contrast, when the validation set was
built using emojis that were not present in the training set, the accuracy of
both models dropped drastically to 70 percent. In addition, the models were
also trained using a distributed training approach instead of a traditional
single-threaded one for better scalability. Using distributed training, we
reduced the run-time by roughly 15% without compromising accuracy. Finally, as
part of explainable AI, the SHAP algorithm was used to explain model behaviour
and check for model biases in the given feature set.
|
2502.13280
|
Value Gradient Sampler: Sampling as Sequential Decision Making
|
cs.LG
|
We propose the Value Gradient Sampler (VGS), a trainable sampler based on the
interpretation of sampling as discrete-time sequential decision-making. VGS
generates samples from a given unnormalized density (i.e., energy) by drifting
and diffusing randomly initialized particles. In VGS, finding the optimal drift
is equivalent to solving an optimal control problem where the cost is the upper
bound of the KL divergence between the target density and the samples. We
employ value-based dynamic programming to solve this optimal control problem,
which gives the gradient of the value function as the optimal drift vector. The
connection to sequential decision making allows VGS to leverage extensively
studied techniques in reinforcement learning, making VGS a fast, adaptive, and
accurate sampler that achieves competitive results in various sampling
benchmarks. Furthermore, VGS can replace MCMC in contrastive divergence
training of energy-based models. We demonstrate the effectiveness of VGS in
training accurate energy-based models in industrial anomaly detection
applications.
|
2502.13283
|
Benefits of Early Stopping in Gradient Descent for Overparameterized
Logistic Regression
|
cs.LG stat.ML
|
In overparameterized logistic regression, gradient descent (GD) iterates
diverge in norm while converging in direction to the maximum $\ell_2$-margin
solution -- a phenomenon known as the implicit bias of GD. This work
investigates additional regularization effects induced by early stopping in
well-specified high-dimensional logistic regression. We first demonstrate that
the excess logistic risk vanishes for early-stopped GD but diverges to infinity
for GD iterates at convergence. This suggests that early-stopped GD is
well-calibrated, whereas asymptotic GD is statistically inconsistent. Second,
we show that to attain a small excess zero-one risk, polynomially many samples
are sufficient for early-stopped GD, while exponentially many samples are
necessary for any interpolating estimator, including asymptotic GD. This
separation underscores the statistical benefits of early stopping in the
overparameterized regime. Finally, we establish nonasymptotic bounds on the
norm and angular differences between early-stopped GD and the
$\ell_2$-regularized empirical risk minimizer, thereby connecting the implicit
regularization of GD with explicit $\ell_2$-regularization.
|
2502.13285
|
Task Shift: From Classification to Regression in Overparameterized
Linear Models
|
stat.ML cs.LG
|
Modern machine learning methods have recently demonstrated remarkable
capability to generalize under task shift, where latent knowledge is
transferred to a different, often more difficult, task under a similar data
distribution. We investigate this phenomenon in an overparameterized linear
regression setting where the task shifts from classification during training to
regression during evaluation. In the zero-shot case, wherein no regression data
is available, we prove that task shift is impossible in both sparse signal and
random signal models for any Gaussian covariate distribution. In the few-shot
case, wherein limited regression data is available, we propose a simple
postprocessing algorithm which asymptotically recovers the ground-truth
predictor. Our analysis leverages a fine-grained characterization of individual
parameters arising from minimum-norm interpolation which may be of independent
interest. Our results show that while minimum-norm interpolators for
classification cannot transfer to regression a priori, they experience
surprisingly structured attenuation which enables successful task shift with
limited additional data.
|
2502.13286
|
BoundPlanner: A convex-set-based approach to bounded manipulator
trajectory planning
|
cs.RO
|
Online trajectory planning enables robot manipulators to react quickly to
changing environments or tasks. Many robot trajectory planners exist for known
environments but are often too slow for online computations. Current methods in
online trajectory planning do not find suitable trajectories in challenging
scenarios that respect the limits of the robot and account for collisions. This
work proposes a trajectory planning framework consisting of the novel Cartesian
path planner based on convex sets, called BoundPlanner, and the online
trajectory planner BoundMPC. BoundPlanner explores and maps the collision-free
space using convex sets to compute a reference path with bounds. BoundMPC is
extended in this work to handle convex sets for path deviations, which allows
the robot to optimally follow the path within the bounds while accounting for
the robot's kinematics. Collisions of the robot's kinematic chain are
considered by a novel convex-set-based collision avoidance formulation
independent of the number of obstacles. Simulations and experiments with a
7-DoF manipulator show the performance of the proposed planner compared to
state-of-the-art methods. The source code is available at
github.com/Thieso/BoundPlanner and videos of the experiments can be found at
www.acin.tuwien.ac.at/42d4
|
2502.13287
|
Breaking the bonds of generative artificial intelligence by minimizing
the maximum entropy
|
cs.LG cond-mat.stat-mech cs.IT math.IT
|
The emergence of generative artificial intelligence (GenAI), comprising large
language models, text-to-image generators, and AI algorithms for medical drug
and material design, had a transformative impact on society. However, despite
an initial exponential growth surpassing Moore's law, progress is now
plateauing, suggesting we are approaching the limits of current technology.
Indeed, these models are notoriously data-hungry, prone to overfitting, and
challenging to direct during the generative process, hampering their effective
professional employment. To cope with these limitations, we propose a paradigm
shift in GenAI by introducing an ab initio method based on the minimal maximum
entropy principle. Our approach does not fit the data. Instead, it compresses
information in the training set by finding a latent representation
parameterized by arbitrary nonlinear functions, such as neural networks. The
result is a general physics-driven model, which is data-efficient, resistant to
overfitting, and flexible, allowing one to control and influence the generative
process. Benchmarking shows that our method outperforms variational
autoencoders (VAEs) with similar neural architectures, particularly on
undersampled datasets. We demonstrate the method's effectiveness in generating
images, even with limited training data, and its unprecedented capability to
customize the generation process a posteriori without the need for any
fine-tuning or retraining.
|
2502.13289
|
Multiple Distribution Shift -- Aerial (MDS-A): A Dataset for Test-Time
Error Detection and Model Adaptation
|
cs.LG
|
Machine learning models assume that training and test samples are drawn from
the same distribution. As such, significant differences between training and
test distributions often lead to degradations in performance. We introduce
Multiple Distribution Shift -- Aerial (MDS-A) -- a collection of inter-related
datasets of the same aerial domain that are perturbed in different ways to
better characterize the effects of distribution shift on performance.
Specifically, MDS-A is a set of simulated aerial datasets collected under
different weather conditions. We include six datasets under different simulated
weather conditions along with six baseline object-detection models, as well as
several test datasets that are a mix of weather conditions that we show have
significant differences from the training data. In this paper, we present
characterizations of MDS-A, provide performance results for the baseline
machine learning models (on both their specific training datasets and the test
data), as well as results of the baselines after employing recent
knowledge-engineering error-detection techniques (EDR) thought to improve
out-of-distribution performance. The dataset is available at
https://lab-v2.github.io/mdsa-dataset-website.
|