| id | title | categories | abstract |
|---|---|---|---|
2502.04356
|
Open Foundation Models in Healthcare: Challenges, Paradoxes, and
Opportunities with GenAI Driven Personalized Prescription
|
cs.CL cs.AI cs.LG
|
In response to the success of proprietary Large Language Models (LLMs) such
as OpenAI's GPT-4, there is a growing interest in developing open,
non-proprietary LLMs and AI foundation models (AIFMs) for transparent use in
academic, scientific, and non-commercial applications. Despite their inability
to match the refined functionalities of their proprietary counterparts, open
models hold immense potential to revolutionize healthcare applications. In this
paper, we examine the prospects of open-source LLMs and AIFMs for developing
healthcare applications and make two key contributions. Firstly, we present a
comprehensive survey of the current state-of-the-art open-source healthcare
LLMs and AIFMs and introduce a taxonomy of these open AIFMs, categorizing their
utility across various healthcare tasks. Secondly, to evaluate the
general-purpose applications of open LLMs in healthcare, we present a case
study on personalized prescriptions. This task is particularly significant due
to its critical role in delivering tailored, patient-specific medications that
can greatly improve treatment outcomes. In addition, we compare the performance
of open-source models with proprietary models in settings with and without
Retrieval-Augmented Generation (RAG). Our findings suggest that, although less
refined, open LLMs can achieve performance comparable to proprietary models
when paired with grounding techniques such as RAG. Furthermore, to highlight
the clinical significance of LLM-empowered personalized prescriptions, we
perform a subjective assessment with an expert clinician. We also elaborate on
ethical considerations and potential risks associated with the misuse of
powerful LLMs and AIFMs, highlighting the need for a cautious and responsible
implementation in healthcare.
|
2502.04357
|
Reusing Embeddings: Reproducible Reward Model Research in Large Language
Model Alignment without GPUs
|
cs.CL cs.AI cs.LG
|
Large Language Models (LLMs) have made substantial strides in structured
tasks through Reinforcement Learning (RL), demonstrating proficiency in
mathematical reasoning and code generation. However, applying RL in broader
domains like chatbots and content generation -- through the process known as
Reinforcement Learning from Human Feedback (RLHF) -- presents unique
challenges. Reward models in RLHF are critical, acting as proxies that evaluate
the alignment of LLM outputs with human intent. Despite advancements, the
development of reward models is hindered by challenges such as computationally
heavy training, costly evaluation, and consequently poor reproducibility. We
advocate for using embedding-based input in reward model research as an
accelerated solution to those challenges. By leveraging embeddings for reward
modeling, we can enhance reproducibility, reduce computational demands on
hardware, improve training stability, and significantly reduce training and
evaluation costs, hence facilitating fair and efficient comparisons in this
active research area. We then show a case study of reproducing existing reward
model ensemble research using embedding-based reward models. We discuss
future avenues for research, aiming to contribute to safer and more effective
LLM deployments.
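The embedding-based setup advocated in this abstract can be illustrated with a toy sketch: a tiny linear reward head trained on frozen embeddings with a Bradley-Terry pairwise loss, cheap enough to run without a GPU. The dimensions, synthetic data, and the linear head are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_pairs = 16, 200                # embedding dim and number of pairs (toy)

# Synthetic frozen embeddings; in practice these come from an LLM encoder.
# "chosen" is shifted so a learnable preference signal exists.
chosen = rng.normal(size=(n_pairs, d)) + 0.5
rejected = rng.normal(size=(n_pairs, d))

w = np.zeros(d)                     # linear reward head: r(x) = w @ x
lr = 0.1
for _ in range(200):
    margin = (chosen - rejected) @ w          # r(chosen) - r(rejected)
    p = 1.0 / (1.0 + np.exp(-margin))         # Bradley-Terry win probability
    # Gradient of the negative log-likelihood -log p w.r.t. w
    grad = ((p - 1.0)[:, None] * (chosen - rejected)).mean(axis=0)
    w -= lr * grad

acc = float(((chosen - rejected) @ w > 0).mean())  # pairwise ranking accuracy
print(f"pairwise ranking accuracy: {acc:.2f}")
```

Because the embeddings are fixed, rerunning this experiment is fully deterministic given the seed, which is the reproducibility benefit the abstract argues for.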
|
2502.04358
|
Position: Scaling LLM Agents Requires Asymptotic Analysis with LLM
Primitives
|
cs.CL cs.AI cs.CC cs.LG cs.NE
|
Decomposing hard problems into subproblems often makes them easier and more
efficient to solve. With large language models (LLMs) crossing critical
reliability thresholds for a growing slate of capabilities, there is an
increasing effort to decompose systems into sets of LLM-based agents, each of
which can be delegated sub-tasks. However, this decomposition (even when
automated) is often intuitive, e.g., based on how a human might assign roles to
members of a human team. How close are these role decompositions to optimal?
This position paper argues that asymptotic analysis with LLM primitives is
needed to reason about the efficiency of such decomposed systems, and that
insights from such analysis will unlock opportunities for scaling them. By
treating the LLM forward pass as the atomic unit of computational cost, one can
separate out the (often opaque) inner workings of a particular LLM from the
inherent efficiency of how a set of LLMs are orchestrated to solve hard
problems. In other words, if we want to scale the deployment of LLMs to the
limit, instead of anthropomorphizing LLMs, asymptotic analysis with LLM
primitives should be used to reason about and develop more powerful
decompositions of large problems into LLM agents.
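The position can be made concrete with a toy cost model: treat one LLM forward pass as the atomic unit and count calls under two orchestrations of the same task, here ranking n items by pairwise LLM comparisons. The counting functions are hypothetical illustrations, not from the paper.

```python
def calls_all_pairs(n: int) -> int:
    """Naive agent design: compare every pair -> Theta(n^2) forward passes."""
    return n * (n - 1) // 2

def calls_merge_sort(n: int) -> int:
    """Comparison-sort orchestration -> O(n log n) forward passes."""
    if n <= 1:
        return 0
    half = n // 2
    # calls in the two recursive sub-sorts plus at most n-1 in the merge
    return calls_merge_sort(half) + calls_merge_sort(n - half) + (n - 1)

for n in (8, 64, 512):
    print(n, calls_all_pairs(n), calls_merge_sort(n))
```

The asymptotic gap is visible regardless of which particular LLM performs each comparison, which is exactly the separation of orchestration cost from model internals that the paper argues for.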
|
2502.04359
|
Exploring Spatial Language Grounding Through Referring Expressions
|
cs.CL cs.AI cs.CV
|
Spatial Reasoning is an important component of human cognition and is an area
in which the latest vision-language models (VLMs) show signs of difficulty.
Current analyses rely on image captioning and visual question answering
tasks. In this work, we propose using the Referring Expression
Comprehension task instead as a platform for the evaluation of spatial
reasoning by VLMs. This platform provides the opportunity for a deeper analysis
of spatial comprehension and grounding abilities when there is 1) ambiguity in
object detection, 2) complex spatial expressions with a longer sentence
structure and multiple spatial relations, and 3) expressions with negation
('not'). In our analysis, we use task-specific architectures as well as large
VLMs and highlight their strengths and weaknesses in dealing with these
specific situations. While all these models face challenges with the task at
hand, the relative behaviors depend on the underlying models and the specific
categories of spatial semantics (topological, directional, proximal, etc.). Our
results highlight these challenges and behaviors and provide insight into
research gaps and future directions.
|
2502.04360
|
MARAGE: Transferable Multi-Model Adversarial Attack for
Retrieval-Augmented Generation Data Extraction
|
cs.CL cs.CR cs.LG
|
Retrieval-Augmented Generation (RAG) offers a solution to mitigate
hallucinations in Large Language Models (LLMs) by grounding their outputs to
knowledge retrieved from external sources. The use of private resources and
data in constructing these external data stores can expose them to risks of
extraction attacks, in which attackers attempt to steal data from these private
databases. Existing RAG extraction attacks often rely on manually crafted
prompts, which limit their effectiveness. In this paper, we introduce a
framework called MARAGE for optimizing an adversarial string that, when
appended to user queries submitted to a target RAG system, causes the system's
outputs to contain the retrieved RAG data verbatim. MARAGE leverages a continuous
optimization scheme that integrates gradients from multiple models with
different architectures simultaneously to enhance the transferability of the
optimized string to unseen models. Additionally, we propose a strategy that
emphasizes the initial tokens in the target RAG data, further improving the
attack's generalizability. Evaluations show that MARAGE consistently
outperforms both manual and optimization-based baselines across multiple LLMs
and RAG datasets, while maintaining robust transferability to previously unseen
models. Moreover, we conduct probing tasks to shed light on the reasons why
MARAGE is more effective compared to the baselines and to analyze the impact of
our approach on the model's internal state.
|
2502.04361
|
Predicting 3D Motion from 2D Video for Behavior-Based VR Biometrics
|
cs.CV cs.AI cs.HC
|
Critical VR applications in domains such as healthcare, education, and
finance that use traditional credentials, such as PIN, password, or
multi-factor authentication, stand the chance of being compromised if a
malicious person acquires the user credentials or if the user hands over their
credentials to an ally. Recently, a number of approaches to user authentication
have emerged that use motions of VR head-mounted displays (HMDs) and hand
controllers during user interactions in VR to represent the user's behavior as
a VR biometric signature. One of the fundamental limitations of behavior-based
approaches is that current on-device tracking for HMDs and controllers lacks the
capability to track full-body joint articulation, losing key
signature data encapsulated by the user articulation. In this paper, we propose
an approach that uses 2D body joints, namely shoulder, elbow, wrist, hip, knee,
and ankle, acquired from the right side of the participants using an external
2D camera. Using a Transformer-based deep neural network, our method uses the
2D data of body joints that are not tracked by the VR device to predict past
and future 3D tracks of the right controller, providing the benefit of
augmenting 3D knowledge in authentication. Our approach provides a minimum
equal error rate (EER) of 0.025, and a maximum EER drop of 0.040 over prior
work that uses single-unit 3D trajectory as the input.
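For readers unfamiliar with the metric reported above, the equal error rate (EER) is the operating point where the false accept rate equals the false reject rate. A minimal sketch of its computation from authentication scores, using toy synthetic score distributions rather than the paper's data:

```python
import numpy as np

def equal_error_rate(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """Sweep thresholds over all observed scores and return the error rate
    where the false accept rate (FAR) and false reject rate (FRR) cross."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    eer, best_gap = 1.0, float("inf")
    for t in thresholds:
        far = float((impostor >= t).mean())  # impostors wrongly accepted
        frr = float((genuine < t).mean())    # genuine users wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap = abs(far - frr)
            eer = (far + frr) / 2.0
    return eer

rng = np.random.default_rng(1)
genuine = rng.normal(2.0, 1.0, 500)   # higher match scores for the true user
impostor = rng.normal(0.0, 1.0, 500)
print(f"EER: {equal_error_rate(genuine, impostor):.3f}")
```

A lower EER means better separation between genuine users and impostors, so the reported minimum EER of 0.025 indicates strong discriminability.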
|
2502.04362
|
LLMs can be easily Confused by Instructional Distractions
|
cs.CL cs.AI
|
Although large language models (LLMs) show exceptional skill at
instruction-following tasks, this strength can turn into a vulnerability when
the models are required to disregard certain instructions.
Instruction-following tasks typically involve a clear task description and
input text containing the target data to be processed. However, when the input
itself resembles an instruction, confusion may arise, even if there is explicit
prompting to distinguish between the task instruction and the input. We refer
to this phenomenon as instructional distraction. In this paper, we introduce a
novel benchmark, named DIM-Bench, specifically designed to assess LLMs'
performance under instructional distraction. The benchmark categorizes
real-world instances of instructional distraction and evaluates LLMs across
four instruction tasks: rewriting, proofreading, translation, and style
transfer -- alongside five input tasks: reasoning, code generation,
mathematical reasoning, bias detection, and question answering. Our
experimental results reveal that even the most advanced LLMs are susceptible to
instructional distraction, often failing to accurately follow user intent in
such cases.
|
2502.04363
|
On-device Sora: Enabling Diffusion-Based Text-to-Video Generation for
Mobile Devices
|
cs.CV
|
We present On-device Sora, a pioneering solution for diffusion-based
on-device text-to-video generation that operates efficiently on
smartphone-grade devices. Building on Open-Sora, On-device Sora applies three
novel techniques to address the challenges of diffusion-based text-to-video
generation on computation- and memory-limited mobile devices. First, Linear
Proportional Leap (LPL) reduces the excessive denoising steps required in video
diffusion through an efficient leap-based approach. Second, Temporal Dimension
Token Merging (TDTM) minimizes intensive token-processing computation in
attention layers by merging consecutive tokens along the temporal dimension.
Third, Concurrent Inference with Dynamic Loading (CI-DL) dynamically partitions
large models into smaller blocks and loads them into memory for concurrent
model inference, effectively addressing the challenges of limited device
memory. We implement On-device Sora on the iPhone 15 Pro, and the experimental
evaluations demonstrate that it is capable of generating high-quality videos on
the device, comparable to those produced by Open-Sora running on high-end GPUs.
These results show that On-device Sora enables efficient and high-quality video
generation on resource-constrained mobile devices, expanding accessibility,
ensuring user privacy, reducing dependence on cloud infrastructure, and
lowering associated costs. We envision the proposed On-device Sora as a
significant first step toward democratizing state-of-the-art generative
technologies, enabling video generation capabilities on commodity mobile and
embedded devices. The code implementation is publicly available in a GitHub
repository: https://github.com/eai-lab/On-device-Sora.
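The idea behind Temporal Dimension Token Merging (TDTM) described above can be sketched simply: averaging consecutive tokens along the temporal axis halves what the attention layers must process. The shapes and the pairwise-mean rule are illustrative assumptions; the actual On-device Sora implementation may differ.

```python
import numpy as np

def merge_temporal_tokens(x: np.ndarray) -> np.ndarray:
    """x: (T, N, D) = (frames, tokens per frame, channels).
    Merge each pair of consecutive frames by averaging -> (T//2, N, D)."""
    T = x.shape[0] - x.shape[0] % 2           # drop a trailing odd frame
    pairs = x[:T].reshape(T // 2, 2, *x.shape[1:])
    return pairs.mean(axis=1)

x = np.arange(24, dtype=float).reshape(4, 3, 2)   # 4 frames, 3 tokens, dim 2
merged = merge_temporal_tokens(x)
print(x.shape, "->", merged.shape)
```

Since self-attention cost grows quadratically in sequence length, halving the temporal token count cuts the attention compute over those tokens by roughly four, which is the kind of saving that makes smartphone-grade inference feasible.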
|
2502.04364
|
Lost in Edits? A $\lambda$-Compass for AIGC Provenance
|
cs.CV cs.AI cs.HC cs.LG
|
Recent advancements in diffusion models have driven the growth of text-guided
image editing tools, enabling precise and iterative modifications of
synthesized content. However, as these tools become increasingly accessible,
they also introduce significant risks of misuse, emphasizing the critical need
for robust attribution methods to ensure content authenticity and traceability.
Despite the creative potential of such tools, they pose significant challenges
for attribution, particularly in adversarial settings where edits can be
layered to obscure an image's origins. We propose LambdaTracer, a novel
latent-space attribution method that robustly identifies and differentiates
authentic outputs from manipulated ones without requiring any modifications to
generative or editing pipelines. By adaptively calibrating reconstruction
losses, LambdaTracer remains effective across diverse iterative editing
processes, whether automated through text-guided editing tools such as
InstructPix2Pix and ControlNet or performed manually with editing software such
as Adobe Photoshop. Extensive experiments reveal that our method consistently
outperforms baseline approaches in distinguishing maliciously edited images,
providing a practical solution to safeguard ownership, creativity, and
credibility in open, fast-evolving AI ecosystems.
|
2502.04365
|
AI-Based Thermal Video Analysis in Privacy-Preserving Healthcare: A Case
Study on Detecting Time of Birth
|
cs.CV cs.AI
|
Approximately 10% of newborns need some assistance to start breathing, and 5%
need proper ventilation. It is crucial that interventions are initiated as soon
as possible after birth. Accurate recording of the Time of Birth (ToB) is
therefore essential for documenting and improving newborn resuscitation
performance.
However, current clinical practices rely on manual recording of ToB, typically
with minute precision. In this study, we present an AI-driven, video-based
system for automated ToB detection using thermal imaging, designed to preserve
the privacy of healthcare providers and mothers by avoiding the use of
identifiable visual data. Our approach achieves 91.4% precision and 97.4%
recall in detecting ToB within thermal video clips during performance
evaluation. Additionally, our system successfully identifies ToB in 96% of test
cases with an absolute median deviation of 1 second compared to manual
annotations. This method offers a reliable solution for improving ToB
documentation and enhancing newborn resuscitation outcomes.
|
2502.04366
|
Contrastive Token-level Explanations for Graph-based Rumour Detection
|
cs.CL cs.AI cs.LG
|
The widespread use of social media has accelerated the dissemination of
information, but it has also facilitated the spread of harmful rumours, which
can disrupt economies, influence political outcomes, and exacerbate public
health crises, such as the COVID-19 pandemic. While Graph Neural Network
(GNN)-based approaches have shown significant promise in automated rumour
detection, they often lack transparency, making their predictions difficult to
interpret. Existing graph explainability techniques fall short in addressing
the unique challenges posed by the dependencies among feature dimensions in
high-dimensional text embeddings used in GNN-based models. In this paper, we
introduce Contrastive Token Layerwise Relevance Propagation (CT-LRP), a novel
framework designed to enhance the explainability of GNN-based rumour detection.
CT-LRP extends current graph explainability methods by providing token-level
explanations that offer greater granularity and interpretability. We evaluate
the effectiveness of CT-LRP across multiple GNN models trained on three
publicly available rumour detection datasets, demonstrating that it
consistently produces high-fidelity, meaningful explanations, paving the way
for more robust and trustworthy rumour detection systems.
|
2502.04367
|
Hybrid Deep Learning Framework for Classification of Kidney CT Images:
Diagnosis of Stones, Cysts, and Tumors
|
eess.IV cs.CV cs.LG
|
Medical image classification is a vital research area that utilizes advanced
computational techniques to improve disease diagnosis and treatment planning.
Deep learning models, especially Convolutional Neural Networks (CNNs), have
transformed this field by providing automated and precise analysis of complex
medical images. This study introduces a hybrid deep learning model that
integrates a pre-trained ResNet101 with a custom CNN to classify kidney CT
images into four categories: normal, stone, cyst, and tumor. The proposed model
leverages feature fusion to enhance classification accuracy, achieving 99.73%
training accuracy and 100% testing accuracy. Using a dataset of 12,446 CT
images and advanced feature mapping techniques, the hybrid CNN model
outperforms standalone ResNet101. This architecture delivers a robust and
efficient solution for automated kidney disease diagnosis, providing improved
precision, recall, and reduced testing time, making it highly suitable for
clinical applications.
|
2502.04369
|
HSI: A Holistic Style Injector for Arbitrary Style Transfer
|
cs.CV cs.LG eess.IV
|
Attention-based arbitrary style transfer methods have gained significant
attention recently due to their impressive ability to synthesize style details.
However, the point-wise matching within the attention mechanism may overly
focus on local patterns and thus neglect the notable global features of
style images. Additionally, when processing large images, the quadratic
complexity of the attention mechanism imposes a high computational load. To
alleviate the above problems, we propose the Holistic Style Injector (HSI), a novel
style transformation module that delivers the artistic expression of the target
style. Specifically, HSI performs stylization only based on global style
representation that is more in line with the characteristics of style transfer,
to avoid generating local disharmonious patterns in stylized images. Moreover,
we propose a dual relation learning mechanism inside the HSI to dynamically
render images by leveraging semantic similarity in content and style, ensuring
the stylized images preserve the original content and improve style fidelity.
Note that the proposed HSI achieves linear computational complexity because it
establishes feature mapping through element-wise multiplication rather than
matrix multiplication. Qualitative and quantitative results demonstrate that
our method outperforms state-of-the-art approaches in both effectiveness and
efficiency.
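The complexity claim in this abstract can be illustrated with a stand-in: point-wise attention matching builds an n x n affinity matrix over spatial positions (quadratic in n), while injecting a global style statistic via an element-wise (Hadamard) product is linear in n. Both functions below are hypothetical simplifications, not the paper's exact modules.

```python
import numpy as np

def attention_stylize(content: np.ndarray, style: np.ndarray) -> np.ndarray:
    """Point-wise matching: an (n, m) affinity matrix -> O(n*m*d) cost."""
    affinity = content @ style.T                            # (n, m)
    weights = np.exp(affinity - affinity.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)           # row-wise softmax
    return weights @ style                                  # (n, d)

def elementwise_stylize(content: np.ndarray, style: np.ndarray) -> np.ndarray:
    """Global style statistic injected element-wise -> O(n*d) cost."""
    g = style.mean(axis=0)                                  # global style (d,)
    return content * g                                      # Hadamard product

rng = np.random.default_rng(0)
content = rng.normal(size=(6, 4))   # 6 content positions, 4 channels (toy)
style = rng.normal(size=(5, 4))     # 5 style positions
print(attention_stylize(content, style).shape,
      elementwise_stylize(content, style).shape)
```

The element-wise path never materializes the affinity matrix, which is why its cost scales linearly with image size.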
|
2502.04370
|
DreamDPO: Aligning Text-to-3D Generation with Human Preferences via
Direct Preference Optimization
|
cs.CL cs.GR cs.LG
|
Text-to-3D generation automates 3D content creation from textual
descriptions, which offers transformative potential across various fields.
However, existing methods often struggle to align generated content with human
preferences, limiting their applicability and flexibility. To address these
limitations, in this paper, we propose DreamDPO, an optimization-based
framework that integrates human preferences into the 3D generation process,
through direct preference optimization. Practically, DreamDPO first constructs
pairwise examples, then compares their alignment with human preferences using
reward or large multimodal models, and lastly optimizes the 3D representation
with a preference-driven loss function. By leveraging pairwise comparison to
reflect preferences, DreamDPO reduces reliance on precise pointwise quality
evaluations while enabling fine-grained controllability through
preference-guided optimization. Experiments demonstrate that DreamDPO achieves
competitive results, and provides higher-quality and more controllable 3D
content compared to existing methods. The code and models will be open-sourced.
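The pairwise step above follows the standard direct-preference-optimization pattern: a logistic loss on the reward margin between the preferred and rejected example. The scalar scores below stand in for rendered-view rewards; this is an illustration of the loss shape, not DreamDPO itself.

```python
import numpy as np

def preference_loss(score_win: float, score_lose: float,
                    beta: float = 1.0) -> float:
    """-log sigmoid(beta * (score_win - score_lose)): small when the
    preferred example already scores higher by a wide margin."""
    margin = beta * (score_win - score_lose)
    return float(np.log1p(np.exp(-margin)))   # numerically stable form

print(round(preference_loss(2.0, 0.5), 3))   # clear preference -> low loss
print(round(preference_loss(0.5, 2.0), 3))   # wrong ordering -> high loss
```

Because the loss depends only on the comparison between the two examples, no pointwise quality score is needed, which is the reduced-reliance property the abstract highlights.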
|
2502.04371
|
PerPO: Perceptual Preference Optimization via Discriminative Rewarding
|
cs.AI cs.CL cs.LG
|
This paper presents Perceptual Preference Optimization (PerPO), a perception
alignment method aimed at addressing the visual discrimination challenges in
generative pre-trained multimodal large language models (MLLMs). To align MLLMs
with the human visual perception process, PerPO employs discriminative rewarding to
gather diverse negative samples, followed by listwise preference optimization
to rank them. By utilizing the reward as a quantitative margin for ranking, our
method effectively bridges generative preference optimization and
discriminative empirical risk minimization. PerPO significantly enhances MLLMs'
visual discrimination capabilities while maintaining their generative
strengths, mitigates image-unconditional reward hacking, and ensures consistent
performance across visual tasks. This work marks a crucial step towards more
perceptually aligned and versatile MLLMs. We also hope that PerPO will
encourage the community to rethink MLLM alignment strategies.
|
2502.04372
|
Mining Unstructured Medical Texts With Conformal Active Learning
|
cs.CL cs.LG stat.ML
|
The extraction of relevant data from Electronic Health Records (EHRs) is
crucial to identifying symptoms and automating epidemiological surveillance
processes. By harnessing the vast amount of unstructured text in EHRs, we can
detect patterns that indicate the onset of disease outbreaks, enabling faster,
more targeted public health responses. Our proposed framework provides a
flexible and efficient solution for mining data from unstructured texts,
significantly reducing the need for extensive manual labeling by specialists.
Experiments show that our framework achieves strong performance with as few as
200 manually labeled texts, even for complex classification problems.
Additionally, our approach can function with simple lightweight models,
achieving competitive and occasionally even better results compared to more
resource-intensive deep learning models. This capability not only accelerates
processing times but also preserves patient privacy, as the data can be
processed on weaker on-site hardware rather than being transferred to external
systems. Our methodology, therefore, offers a practical, scalable, and
privacy-conscious approach to real-time epidemiological monitoring, equipping
health institutions to respond rapidly and effectively to emerging health
threats.
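The querying loop described above can be sketched on toy data: a lightweight model is fit on a small labeled pool, an ambiguity score (a simplified stand-in for a conformal nonconformity measure) ranks the unlabeled texts, and the most ambiguous ones are routed to the specialist for labeling. The nearest-centroid model, the score, and all sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic two-class "text embeddings" standing in for EHR notes.
X = np.concatenate([rng.normal(-1, 1, (100, 8)), rng.normal(1, 1, (100, 8))])
y = np.array([0] * 100 + [1] * 100)
labeled = list(range(5)) + list(range(100, 105))   # tiny seed set, both classes
unlabeled = [i for i in range(200) if i not in labeled]

def ambiguity(i: int) -> float:
    """Gap between distances to the two class centroids: near zero means
    the point is ambiguous and worth sending for a manual label."""
    c0 = X[[j for j in labeled if y[j] == 0]].mean(axis=0)
    c1 = X[[j for j in labeled if y[j] == 1]].mean(axis=0)
    return abs(np.linalg.norm(X[i] - c0) - np.linalg.norm(X[i] - c1))

for _ in range(5):                         # five specialist-querying rounds
    pick = min(unlabeled, key=ambiguity)   # most ambiguous unlabeled text
    labeled.append(pick)                   # "specialist" supplies y[pick]
    unlabeled.remove(pick)

print(f"labeled pool: {len(labeled)} texts")
```

Because the model and scoring are this cheap, the whole loop can run on weak on-site hardware, matching the privacy argument in the abstract.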
|
2502.04375
|
An Analysis for Reasoning Bias of Language Models with Small
Initialization
|
cs.CL cs.LG
|
Transformer-based Large Language Models (LLMs) have revolutionized Natural
Language Processing by demonstrating exceptional performance across diverse
tasks. This study investigates the impact of the parameter initialization scale
on the training behavior and task preferences of LLMs. We discover that smaller
initialization scales encourage models to favor reasoning tasks, whereas larger
initialization scales lead to a preference for memorization tasks. We validate
this reasoning bias via real datasets and meticulously designed anchor
functions. Further analysis of initial training dynamics suggests that specific
model components, particularly the embedding space and self-attention
mechanisms, play pivotal roles in shaping these learning biases. We provide a
theoretical framework from the perspective of model training dynamics to
explain these phenomena. Additionally, experiments on real-world language tasks
corroborate our theoretical insights. This work enhances our understanding of
how initialization strategies influence LLM performance on reasoning tasks and
offers valuable guidelines for training models.
|
2502.04376
|
MEETING DELEGATE: Benchmarking LLMs on Attending Meetings on Our Behalf
|
cs.CL cs.AI
|
In contemporary workplaces, meetings are essential for exchanging ideas and
ensuring team alignment but often face challenges such as time consumption,
scheduling conflicts, and inefficient participation. Recent advancements in
Large Language Models (LLMs) have demonstrated their strong capabilities in
natural language generation and reasoning, prompting the question: can LLMs
effectively stand in for participants in meetings? To explore this, we develop a
prototype LLM-powered meeting delegate system and create a comprehensive
benchmark using real meeting transcripts. Our evaluation reveals that GPT-4/4o
maintain balanced performance between active and cautious engagement
strategies. In contrast, Gemini 1.5 Pro tends to be more cautious, while Gemini
1.5 Flash and Llama3-8B/70B display more active tendencies. Overall, about 60%
of responses address at least one key point from the ground truth. However,
improvements are needed to reduce irrelevant or repetitive content and enhance
tolerance for transcription errors commonly found in real-world settings.
Additionally, we implement the system in practical settings and collect
real-world feedback from demos. Our findings underscore the potential and
challenges of utilizing LLMs as meeting delegates, offering valuable insights
into their practical application for alleviating the burden of meetings.
|
2502.04377
|
MapFusion: A Novel BEV Feature Fusion Network for Multi-modal Map
Construction
|
cs.CV cs.AI
|
The map construction task plays a vital role in providing precise and
comprehensive static environmental information essential for autonomous driving
systems. Primary sensors include cameras and LiDAR, with configurations varying
between camera-only, LiDAR-only, or camera-LiDAR fusion, based on
cost-performance considerations. While fusion-based methods typically perform
best, existing approaches often neglect modality interaction and rely on simple
fusion strategies, which suffer from the problems of misalignment and
information loss. To address these issues, we propose MapFusion, a novel
multi-modal Bird's-Eye View (BEV) feature fusion method for map construction.
Specifically, to solve the semantic misalignment problem between camera and
LiDAR BEV features, we introduce the Cross-modal Interaction Transform (CIT)
module, enabling interaction between two BEV feature spaces and enhancing
feature representation through a self-attention mechanism. Additionally, we
propose an effective Dual Dynamic Fusion (DDF) module to adaptively select
valuable information from different modalities, which can take full advantage
of the inherent information between different modalities. Moreover, MapFusion
is designed to be simple and plug-and-play, easily integrated into existing
pipelines. We evaluate MapFusion on two map construction tasks, including
High-definition (HD) map and BEV map segmentation, to show its versatility and
effectiveness. Compared with the state-of-the-art methods, MapFusion achieves
3.6% and 6.2% absolute improvements on the HD map construction and BEV map
segmentation tasks on the nuScenes dataset, respectively, demonstrating the
superiority of our approach.
|
2502.04378
|
DILLEMA: Diffusion and Large Language Models for Multi-Modal
Augmentation
|
cs.CV cs.GR cs.LG cs.SE
|
Ensuring the robustness of deep learning models requires comprehensive and
diverse testing. Existing approaches, often based on simple data augmentation
techniques or generative adversarial networks, are limited in producing
realistic and varied test cases. To address these limitations, we present a
novel framework for testing vision neural networks that leverages Large
Language Models and control-conditioned Diffusion Models to generate synthetic,
high-fidelity test cases. Our approach begins by translating images into
detailed textual descriptions using a captioning model, allowing the language
model to identify modifiable aspects of the image and generate counterfactual
descriptions. These descriptions are then used to produce new test images
through a text-to-image diffusion process that preserves spatial consistency
and maintains the critical elements of the scene. We demonstrate the
effectiveness of our method using two datasets: ImageNet1K for image
classification and SHIFT for semantic segmentation in autonomous driving. The
results show that our approach can generate significant test cases that reveal
weaknesses and improve the robustness of the model through targeted retraining.
We conducted a human assessment using Mechanical Turk to validate the generated
images. The responses from the participants confirmed, with high agreement
among the voters, that our approach produces valid and realistic images.
|
2502.04379
|
Can Large Language Models Capture Video Game Engagement?
|
cs.CV cs.AI cs.CL cs.HC
|
Can out-of-the-box pretrained Large Language Models (LLMs) detect human
affect successfully when observing a video? To address this question, for the
first time, we evaluate comprehensively the capacity of popular LLMs to
annotate and successfully predict continuous affect annotations of videos when
prompted by a sequence of text and video frames in a multimodal fashion.
Particularly in this paper, we test LLMs' ability to correctly label changes of
in-game engagement in 80 minutes of annotated videogame footage from 20
first-person shooter games of the GameVibe corpus. We run over 2,400
experiments to investigate the impact of LLM architecture, model size, input
modality, prompting strategy, and ground truth processing method on engagement
prediction. Our findings suggest that while LLMs rightfully claim human-like
performance across multiple domains, they generally fall behind in capturing
continuous experience annotations provided by humans. We examine some of the
underlying causes for the relatively poor overall performance, highlight the
cases where LLMs exceed expectations, and draw a roadmap for the further
exploration of automated emotion labelling via LLMs.
|
2502.04380
|
Diversity as a Reward: Fine-Tuning LLMs on a Mixture of
Domain-Undetermined Data
|
cs.CL cs.AI cs.LG
|
Fine-tuning large language models (LLMs) using diverse datasets is crucial
for enhancing their overall performance across various domains. In practical
scenarios, existing methods based on modeling the mixture proportions of data
composition often struggle with data whose domain labels are missing, imprecise
or non-normalized, while methods based on data selection usually encounter
difficulties in balancing multi-domain performance. To address these
challenges, in this paper, we study the role of data diversity in enhancing the
overall abilities of LLMs by empirically constructing contrastive data pools
and theoretically deriving explanations for both inter- and intra-diversity.
Building upon the insights gained, we propose a new method that gives the LLM a
dual identity: an output model to cognitively probe and select data based on
diversity reward, as well as an input model to be tuned with the selected data.
Extensive experiments show that the proposed method notably boosts performance
across domain-undetermined data and a series of foundational downstream tasks
when applied to various advanced LLMs. We release our code and hope this study
can shed light on the understanding of data diversity and advance
feedback-driven data-model co-development for LLMs.
|
2502.04381
|
Limitations of Large Language Models in Clinical Problem-Solving Arising
from Inflexible Reasoning
|
cs.CL cs.AI
|
Large Language Models (LLMs) have attained human-level accuracy on medical
question-answer (QA) benchmarks. However, their limitations in navigating
open-ended clinical scenarios have recently been shown, raising concerns about
the robustness and generalizability of LLM reasoning across diverse, real-world
medical tasks. To probe potential LLM failure modes in clinical
problem-solving, we present the medical abstraction and reasoning corpus
(M-ARC). M-ARC assesses clinical reasoning through scenarios designed to
exploit the Einstellung effect -- the fixation of thought arising from prior
experience, targeting LLM inductive biases toward inflexible pattern matching
from their training data rather than engaging in flexible reasoning. We find
that LLMs, including current state-of-the-art o1 and Gemini models, perform
poorly compared to physicians on M-ARC, often demonstrating lack of commonsense
medical reasoning and a propensity to hallucinate. In addition, uncertainty
estimation analyses indicate that LLMs exhibit overconfidence in their answers,
despite their limited accuracy. The failure modes revealed by M-ARC in LLM
medical reasoning underscore the need to exercise caution when deploying these
models in clinical settings.
|
2502.04382
|
Sparse Autoencoders for Hypothesis Generation
|
cs.CL cs.AI cs.CY
|
We describe HypotheSAEs, a general method to hypothesize interpretable
relationships between text data (e.g., headlines) and a target variable (e.g.,
clicks). HypotheSAEs has three steps: (1) train a sparse autoencoder on text
embeddings to produce interpretable features describing the data distribution,
(2) select features that predict the target variable, and (3) generate a
natural language interpretation of each feature (e.g., "mentions being
surprised or shocked") using an LLM. Each interpretation serves as a hypothesis
about what predicts the target variable. Compared to baselines, our method
better identifies reference hypotheses on synthetic datasets (at least +0.06 in
F1) and produces more predictive hypotheses on real datasets (~twice as many
significant findings), despite requiring 1-2 orders of magnitude less compute
than recent LLM-based methods. HypotheSAEs also produces novel discoveries on
two well-studied tasks: explaining partisan differences in Congressional
speeches and identifying drivers of engagement with online headlines.
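Step (2) of the pipeline can be sketched with a simple stand-in (correlation-based ranking is an assumption here; the paper may use a different selector such as a sparse linear probe): rank sparse-autoencoder feature activations by how strongly they predict the target.

```python
import numpy as np

def select_predictive_features(activations: np.ndarray, target: np.ndarray, k: int = 3):
    """Rank sparse-feature activations by |Pearson correlation| with the target
    and return the indices of the top-k features."""
    a = activations - activations.mean(axis=0)    # center each feature column
    t = target - target.mean()                    # center the target
    denom = np.sqrt((a ** 2).sum(axis=0) * (t ** 2).sum()) + 1e-12
    corr = (a * t[:, None]).sum(axis=0) / denom   # per-feature Pearson r
    return np.argsort(-np.abs(corr))[:k]
```

Each selected feature index is then handed to step (3), where an LLM turns it into a natural-language hypothesis.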
|
2502.04384
|
Enhancing Reasoning to Adapt Large Language Models for Domain-Specific
Applications
|
cs.CL cs.AI cs.LG cs.SY eess.SY
|
This paper presents SOLOMON, a novel Neuro-inspired Large Language Model
(LLM) Reasoning Network architecture that enhances the adaptability of
foundation models for domain-specific applications. Through a case study in
semiconductor layout design, we demonstrate how SOLOMON enables swift
adaptation of general-purpose LLMs to specialized tasks by leveraging Prompt
Engineering and In-Context Learning techniques. Our experiments reveal the
challenges LLMs face in spatial reasoning and applying domain knowledge to
practical problems. Results show that SOLOMON instances significantly
outperform their baseline LLM counterparts and achieve performance comparable
to a state-of-the-art reasoning model, o1-preview. We discuss future research
directions for developing more adaptive AI systems that can continually learn,
adapt, and evolve in response to new information and changing requirements.
|
2502.04385
|
TexLiDAR: Automated Text Understanding for Panoramic LiDAR Data
|
cs.CV cs.AI
|
Efforts to connect LiDAR data with text, such as LidarCLIP, have primarily
focused on embedding 3D point clouds into CLIP text-image space. However, these
approaches rely on 3D point clouds, which present challenges in encoding
efficiency and neural network processing. With the advent of advanced LiDAR
sensors like the Ouster OS1, which, in addition to 3D point clouds, produce
fixed-resolution depth, signal, and ambient panoramic 2D images, new
opportunities emerge for LiDAR-based tasks. In this work, we propose an
alternative approach
to connect LiDAR data with text by leveraging 2D imagery generated by the OS1
sensor instead of 3D point clouds. Using the Florence 2 large model in a
zero-shot setting, we perform image captioning and object detection. Our
experiments demonstrate that Florence 2 generates more informative captions and
achieves superior performance in object detection tasks compared to existing
methods like CLIP. By combining advanced LiDAR sensor data with a large
pre-trained model, our approach provides a robust and accurate solution for
challenging detection scenarios, including real-time applications requiring
high accuracy and robustness.
|
2502.04386
|
Towards Fair Medical AI: Adversarial Debiasing of 3D CT Foundation
Embeddings
|
cs.CV cs.AI cs.LG
|
Self-supervised learning has revolutionized medical imaging by enabling
efficient and generalizable feature extraction from large-scale unlabeled
datasets. Recently, self-supervised foundation models have been extended to
three-dimensional (3D) computed tomography (CT) data, generating compact,
information-rich embeddings with 1408 features that achieve state-of-the-art
performance on downstream tasks such as intracranial hemorrhage detection and
lung cancer risk forecasting. However, these embeddings have been shown to
encode demographic information, such as age, sex, and race, which poses a
significant risk to the fairness of clinical applications.
In this work, we propose a Variational Autoencoder (VAE) based adversarial
debiasing framework to transform these embeddings into a new latent space where
demographic information is no longer encoded, while maintaining the performance
of critical downstream tasks. We validated our approach on the NLST lung cancer
screening dataset, demonstrating that the debiased embeddings effectively
eliminate multiple encoded demographic attributes and improve fairness without
compromising predictive accuracy for lung cancer risk at 1-year and 2-year
intervals. Additionally, our approach ensures the embeddings are robust against
adversarial bias attacks. These results highlight the potential of adversarial
debiasing techniques to ensure fairness and equity in clinical applications of
self-supervised 3D CT embeddings, paving the way for their broader adoption in
unbiased medical decision-making.
|
2502.04387
|
FedP$^2$EFT: Federated Learning to Personalize Parameter Efficient
Fine-Tuning for Multilingual LLMs
|
cs.CL cs.AI
|
Federated learning (FL) has enabled the training of multilingual large
language models (LLMs) on diverse and decentralized multilingual data,
especially on low-resource languages. To improve client-specific performance,
personalization via the use of parameter-efficient fine-tuning (PEFT) modules
such as LoRA is common. This involves a personalization strategy (PS), such as
the design of the PEFT adapter structures (e.g., in which layers to add LoRAs
and what ranks) and the choice of hyperparameters (e.g., learning rates) for
fine-tuning. Instead of manual PS configuration, we propose FedP$^2$EFT, a
federated learning-to-personalize method for multilingual LLMs in cross-device
FL settings. Unlike most existing PEFT structure selection methods, which are
prone to overfitting in low-data regimes, FedP$^2$EFT collaboratively learns the
optimal personalized PEFT structure for each client via Bayesian sparse rank
selection. Evaluations on both simulated and real-world multilingual FL
benchmarks demonstrate that FedP$^2$EFT largely outperforms existing
personalized fine-tuning methods, while complementing a range of existing FL
methods.
|
2502.04388
|
Position: Emergent Machina Sapiens Urge Rethinking Multi-Agent Paradigms
|
cs.MA cs.AI
|
Artificially intelligent (AI) agents that are capable of autonomous learning
and independent decision-making hold great promise for addressing complex
challenges across domains like transportation, energy systems, and
manufacturing. However, the surge in AI systems' design and deployment driven
by various stakeholders with distinct and unaligned objectives introduces a
crucial challenge: how can uncoordinated AI systems coexist and evolve
harmoniously in shared environments without creating chaos? To address this, we
advocate for a fundamental rethinking of existing multi-agent frameworks, such
as multi-agent systems and game theory, which are largely limited to predefined
rules and static objective structures. We posit that AI agents should be
empowered to dynamically adjust their objectives, make compromises, form
coalitions, and safely compete or cooperate through evolving relationships and
social feedback. Through this paper, we call for a shift toward the emergent,
self-organizing, and context-aware nature of these systems.
|
2502.04389
|
Overcoming Vision Language Model Challenges in Diagram Understanding: A
Proof-of-Concept with XML-Driven Large Language Models Solutions
|
cs.SE cs.AI
|
Diagrams play a crucial role in visually conveying complex relationships and
processes within business documentation. Despite recent advances in
Vision-Language Models (VLMs) for various image understanding tasks, accurately
identifying and extracting the structures and relationships depicted in
diagrams continues to pose significant challenges. This study addresses these
challenges by proposing a text-driven approach that bypasses reliance on VLMs'
visual recognition capabilities. Instead, it utilizes the editable source
files--such as xlsx, pptx or docx--where diagram elements (e.g., shapes, lines,
annotations) are preserved as textual metadata. In our proof-of-concept, we
extracted diagram information from xlsx-based system design documents and
transformed the extracted shape data into textual input for Large Language
Models (LLMs). This approach allowed the LLM to analyze relationships and
generate responses to business-oriented questions without the bottleneck of
image-based processing. Experimental comparisons with a VLM-based method
demonstrated that the proposed text-driven framework yielded more accurate
answers for questions requiring detailed comprehension of diagram
structures. The results obtained in this study are not limited to the tested
.xlsx files but can also be extended to diagrams in other documents with source
files, such as Office pptx and docx formats. These findings highlight the
feasibility of circumventing VLM constraints through direct textual extraction
from original source files. By enabling robust diagram understanding through
LLMs, our method offers a promising path toward enhanced workflow efficiency
and information analysis in real-world business scenarios.
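The extraction step can be sketched with the standard library alone (the XML schema below is a toy, loosely inspired by Office drawing metadata, not the real DrawingML format): shapes and connectors are flattened into sentences an LLM can reason over.

```python
import xml.etree.ElementTree as ET

# Toy diagram XML; element names are illustrative, not the real DrawingML schema.
DIAGRAM_XML = """
<diagram>
  <shape id="s1" text="Web Server"/>
  <shape id="s2" text="Database"/>
  <connector from="s1" to="s2" label="queries"/>
</diagram>
"""

def diagram_to_text(xml_source: str) -> str:
    """Flatten shapes and connectors into plain-text statements for an LLM."""
    root = ET.fromstring(xml_source)
    names = {s.get("id"): s.get("text") for s in root.iter("shape")}
    lines = [f"Shape: {text}" for text in names.values()]
    for c in root.iter("connector"):
        lines.append(f'{names[c.get("from")]} --{c.get("label")}--> {names[c.get("to")]}')
    return "\n".join(lines)
```

The resulting text preserves the diagram's structure and relationships exactly, sidestepping visual recognition entirely.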
|
2502.04390
|
In Praise of Stubbornness: The Case for Cognitive-Dissonance-Aware
Knowledge Updates in LLMs
|
cs.CL cs.AI cs.LG q-bio.NC
|
Despite remarkable capabilities, large language models (LLMs) struggle to
continually update their knowledge without catastrophic forgetting. In
contrast, humans effortlessly integrate new information, detect conflicts with
existing beliefs, and selectively update their mental models. This paper
introduces a cognitive-inspired investigation paradigm to study continual
knowledge updating in LLMs. We implement two key components inspired by human
cognition: (1) Dissonance and Familiarity Awareness, analyzing model behavior
to classify information as novel, familiar, or dissonant; and (2) Targeted
Network Updates, which track neural activity to identify frequently used
(stubborn) and rarely used (plastic) neurons. Through carefully designed
experiments in controlled settings, we uncover a number of empirical findings
demonstrating the potential of this approach. First, dissonance detection is
feasible using simple activation and gradient features, suggesting potential
for cognitive-inspired training. Second, we find that non-dissonant updates
largely preserve prior knowledge regardless of targeting strategy, revealing
inherent robustness in LLM knowledge integration. Most critically, we discover
that dissonant updates prove catastrophically destructive to the model's
knowledge base, indiscriminately affecting even information unrelated to the
current updates. This suggests fundamental limitations in how neural networks
handle contradictions and motivates the need for new approaches to knowledge
updating that better mirror human cognitive mechanisms.
|
2502.04391
|
Towards Fair and Robust Face Parsing for Generative AI: A
Multi-Objective Approach
|
cs.CV cs.AI
|
Face parsing is a fundamental task in computer vision, enabling applications
such as identity verification, facial editing, and controllable image
synthesis. However, existing face parsing models often lack fairness and
robustness, leading to biased segmentation across demographic groups and errors
under occlusions, noise, and domain shifts. These limitations affect downstream
face synthesis, where segmentation biases can degrade generative model outputs.
We propose a multi-objective learning framework that optimizes accuracy,
fairness, and robustness in face parsing. Our approach introduces a
homotopy-based loss function that dynamically adjusts the importance of these
objectives during training. To evaluate its impact, we compare multi-objective
and single-objective U-Net models in a GAN-based face synthesis pipeline
(Pix2PixHD). Our results show that fairness-aware and robust segmentation
improves photorealism and consistency in face generation. Additionally, we
conduct preliminary experiments using ControlNet, a structured conditioning
model for diffusion-based synthesis, to explore how segmentation quality
influences guided image generation. Our findings demonstrate that
multi-objective face parsing improves demographic consistency and robustness,
leading to higher-quality GAN-based synthesis.
|
2502.04392
|
Division-of-Thoughts: Harnessing Hybrid Language Model Synergy for
Efficient On-Device Agents
|
cs.CL cs.AI
|
The rapid expansion of web content has made on-device AI assistants
indispensable for helping users manage the increasing complexity of online
tasks. The emergent reasoning ability of large language models offers a
promising path for next-generation on-device AI agents. However, deploying
full-scale Large Language Models (LLMs) on resource-limited local devices is
challenging. In this paper, we propose Division-of-Thoughts (DoT), a
collaborative reasoning framework leveraging the synergy between locally
deployed Smaller-scale Language Models (SLMs) and cloud-based LLMs. DoT
leverages a Task Decomposer to elicit the inherent planning abilities in
language models to decompose user queries into smaller sub-tasks, which allows
hybrid language models to fully exploit their respective strengths. In addition,
DoT employs a Task Scheduler to analyze the pair-wise dependency of sub-tasks
and create a dependency graph, facilitating parallel reasoning of sub-tasks and
the identification of key steps. To allocate the appropriate model based on the
difficulty of sub-tasks, DoT leverages a Plug-and-Play Adapter, which is an
additional task head attached to the SLM that does not alter the SLM's
parameters. To boost the adapter's task allocation capability, we propose a
self-reinforced training method that relies solely on task execution feedback.
Extensive experiments on various benchmarks demonstrate that our DoT
significantly reduces LLM costs while maintaining competitive reasoning
accuracy. Specifically, DoT reduces the average reasoning time and API costs by
66.12% and 83.57%, while achieving comparable reasoning accuracy with the best
baseline methods.
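The Task Scheduler's dependency analysis can be sketched as standard topological grouping (a generic illustration, not DoT's actual code): sub-tasks whose dependencies are all satisfied form a level and can be dispatched in parallel.

```python
def parallel_levels(deps: dict[str, list[str]]) -> list[list[str]]:
    """Group sub-tasks into levels: every task in a level has all of its
    dependencies satisfied, so tasks within a level can run in parallel."""
    remaining = {task: set(d) for task, d in deps.items()}
    levels = []
    while remaining:
        ready = sorted(t for t, d in remaining.items() if not d)
        if not ready:
            raise ValueError("cyclic dependency among sub-tasks")
        levels.append(ready)
        for t in ready:
            del remaining[t]
        for d in remaining.values():      # mark completed tasks as satisfied
            d.difference_update(ready)
    return levels
```

For a query decomposed into four sub-tasks where B and C both depend on A, and D depends on B and C, the schedule is three levels with B and C running concurrently.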
|
2502.04393
|
UniCP: A Unified Caching and Pruning Framework for Efficient Video
Generation
|
cs.CV
|
Diffusion Transformers (DiT) excel in video generation but encounter
significant computational challenges due to the quadratic complexity of
attention. Notably, attention differences between adjacent diffusion steps
follow a U-shaped pattern. Current methods leverage this property by caching
attention blocks, however, they still struggle with sudden error spikes and
large discrepancies. To address these issues, we propose UniCP, a unified
caching and pruning framework for efficient video generation. UniCP optimizes
both temporal and spatial dimensions through two mechanisms: (1) an Error-Aware
Dynamic Cache Window (EDCW), which dynamically adjusts cache window sizes for
different blocks at various timesteps, adapting to abrupt error changes; and
(2) PCA-based Slicing (PCAS) with Dynamic Weight Shift (DWS), where PCAS prunes
redundant attention components and DWS integrates caching and pruning by
enabling dynamic switching between pruned and cached outputs. By adjusting
cache windows and pruning redundant components,
UniCP enhances computational efficiency and maintains video detail fidelity.
Experimental results show that UniCP outperforms existing methods in both
performance and efficiency.
|
2502.04394
|
DECT: Harnessing LLM-assisted Fine-Grained Linguistic Knowledge and
Label-Switched and Label-Preserved Data Generation for Diagnosis of
Alzheimer's Disease
|
cs.CL cs.AI
|
Alzheimer's Disease (AD) is an irreversible neurodegenerative disease
affecting 50 million people worldwide. Low-cost, accurate identification of key
markers of AD is crucial for timely diagnosis and intervention. Language
impairment is one of the earliest signs of cognitive decline, which can be used
to discriminate AD patients from normal control individuals.
Patient-interviewer dialogues may be used to detect such impairments, but they
are often mixed with ambiguous, noisy, and irrelevant information, making the
AD detection task difficult. Moreover, the limited availability of AD speech
samples and variability in their speech styles pose significant challenges in
developing robust speech-based AD detection models. To address these
challenges, we propose DECT, a novel speech-based domain-specific approach
leveraging large language models (LLMs) for fine-grained linguistic analysis
and label-switched and label-preserved data generation. Our study presents four
novelties: (1) we harness the summarizing capabilities of LLMs to identify and
distill key cognitive-linguistic information from noisy speech transcripts,
effectively filtering irrelevant information; (2) we leverage the inherent
linguistic knowledge of LLMs to extract linguistic markers from unstructured
and heterogeneous audio transcripts; (3) we exploit the compositional ability
of LLMs to generate AD speech transcripts with diverse linguistic patterns,
overcoming the speech data scarcity challenge and enhancing the robustness of
AD detection models; and (4) we use the augmented AD textual speech transcript
dataset and a more fine-grained representation of AD textual speech transcript
data to fine-tune the AD detection model. The results have shown
that DECT demonstrates superior model performance with an 11% improvement in AD
detection accuracy on the datasets from DementiaBank compared to the baselines.
|
2502.04395
|
Time-VLM: Exploring Multimodal Vision-Language Models for Augmented Time
Series Forecasting
|
cs.CV cs.LG
|
Recent advancements in time series forecasting have explored augmenting
models with text or vision modalities to improve accuracy. While text provides
contextual understanding, it often lacks fine-grained temporal details.
Conversely, vision captures intricate temporal patterns but lacks semantic
context, limiting the complementary potential of these modalities. To address
this, we propose Time-VLM, a novel multimodal framework that leverages
pre-trained Vision-Language Models (VLMs) to bridge temporal, visual, and
textual modalities for enhanced forecasting. Our framework comprises three key
components: (1) a Retrieval-Augmented Learner, which extracts enriched temporal
features through memory bank interactions; (2) a Vision-Augmented Learner,
which encodes time series as informative images; and (3) a Text-Augmented
Learner, which generates contextual textual descriptions. These components
collaborate with frozen pre-trained VLMs to produce multimodal embeddings,
which are then fused with temporal features for final prediction. Extensive
experiments across diverse datasets demonstrate that Time-VLM achieves superior
performance, particularly in few-shot and zero-shot scenarios, thereby
establishing a new direction for multimodal time series forecasting.
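One common way to render a series as an "informative image" (offered here as an illustration; the paper's Vision-Augmented Learner may use a different encoding) is the Gramian Angular Summation Field, which preserves temporal correlations in a 2-D texture a vision encoder can process.

```python
import numpy as np

def gramian_angular_field(series: np.ndarray) -> np.ndarray:
    """Encode a 1-D series as a 2-D image via the Gramian Angular Summation
    Field: rescale to [-1, 1], map values to angles, take the cosine of
    pairwise angle sums."""
    lo, hi = series.min(), series.max()
    x = 2.0 * (series - lo) / (hi - lo) - 1.0        # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))           # angular encoding
    return np.cos(phi[:, None] + phi[None, :])       # T x T "image"
```

The resulting matrix is symmetric, with the main diagonal encoding the (rescaled) series itself and off-diagonal cells encoding pairwise temporal relationships.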
|
2502.04397
|
Multimodal Medical Code Tokenizer
|
cs.CL cs.AI cs.LG
|
Foundation models trained on patient electronic health records (EHRs) require
tokenizing medical data into sequences of discrete vocabulary items. Existing
tokenizers treat medical codes from EHRs as isolated textual tokens. However,
each medical code is defined by its textual description, its position in
ontological hierarchies, and its relationships to other codes, such as disease
co-occurrences and drug-treatment associations. Medical vocabularies contain
more than 600,000 codes with critical information for clinical reasoning. We
introduce MedTok, a multimodal medical code tokenizer that uses the text
descriptions and relational context of codes. MedTok processes text using a
language model encoder and encodes the relational structure with a graph
encoder. It then quantizes both modalities into a unified token space,
preserving modality-specific and cross-modality information. We integrate
MedTok into five EHR models and evaluate it on operational and clinical tasks
across in-patient and out-patient datasets, including outcome prediction,
diagnosis classification, drug recommendation, and risk stratification.
Replacing standard EHR tokenizers with MedTok improves AUPRC across all EHR
models, by 4.10% on MIMIC-III, 4.78% on MIMIC-IV, and 11.30% on EHRShot, with
the largest gains in drug recommendation. Beyond EHR modeling, we demonstrate
the use of the MedTok tokenizer with medical QA systems. Our results show the
potential of MedTok as a unified tokenizer for medical codes, improving
tokenization for medical foundation models.
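The core quantization step can be sketched as nearest-neighbor lookup in a shared codebook (concatenation below stands in for MedTok's learned cross-modal fusion, which the abstract does not fully specify):

```python
import numpy as np

def quantize(embedding: np.ndarray, codebook: np.ndarray) -> int:
    """Map a fused code embedding to the index of its nearest codebook
    vector, i.e. its discrete token id."""
    dists = np.linalg.norm(codebook - embedding, axis=1)
    return int(np.argmin(dists))

def tokenize(text_emb: np.ndarray, graph_emb: np.ndarray, codebook: np.ndarray) -> int:
    """Fuse the text and graph views of a medical code (simplified here as
    concatenation) and quantize to a token id in the unified token space."""
    fused = np.concatenate([text_emb, graph_emb])
    return quantize(fused, codebook)
```

Because the codebook is shared across modalities, codes with similar descriptions and similar ontological neighborhoods land on nearby (or identical) token ids.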
|
2502.04398
|
XMTC: Explainable Early Classification of Multivariate Time Series in
Reach-to-Grasp Hand Kinematics
|
cs.LG cs.GR cs.HC
|
Hand kinematics can be measured in Human-Computer Interaction (HCI) with the
intention to predict the user's intention in a reach-to-grasp action. Using
multiple hand sensors, multivariate time series data are being captured. Given
a number of possible actions on a number of objects, the goal is to classify
the multivariate time series data, where the class shall be predicted as early
as possible. Many machine-learning methods have been developed for such
classification tasks, where different approaches produce favorable solutions on
different data sets. We, therefore, employ an ensemble approach that includes
and weights different approaches. To provide a trustworthy classification
production, we present the XMTC tool that incorporates coordinated
multiple-view visualizations to analyze the predictions. Temporal accuracy
plots, confusion matrix heatmaps, temporal confidence heatmaps, and partial
dependence plots allow for the identification of the best trade-off between
early prediction and prediction quality, the detection and analysis of
challenging classification conditions, and the investigation of the prediction
evolution in an overview and detail manner. We employ XMTC to real-world HCI
data in multiple scenarios and show that good classification predictions can be
achieved early on with our classifier as well as which conditions are easy to
distinguish, which multivariate time series measurements impose challenges, and
which features have most impact.
|
2502.04399
|
Online Location Planning for AI-Defined Vehicles: Optimizing Joint Tasks
of Order Serving and Spatio-Temporal Heterogeneous Model Fine-Tuning
|
cs.LG cs.AI cs.SY eess.SY
|
Advances in artificial intelligence (AI) including foundation models (FMs),
are increasingly transforming human society, with smart cities driving the
evolution of urban living. Meanwhile, vehicle crowdsensing (VCS) has emerged as
a key enabler, leveraging vehicles' mobility and sensor-equipped capabilities.
In particular, ride-hailing vehicles can effectively facilitate flexible data
collection and contribute towards urban intelligence, despite resource
limitations. Therefore, this work explores a promising scenario in which
edge-assisted vehicles perform the joint tasks of order serving and emerging
foundation-model fine-tuning using various urban data. However, integrating the
VCS AI task with the conventional order serving task is challenging, due to
their inconsistent spatio-temporal characteristics: (i) The distributions of
ride orders and data point-of-interests (PoIs) may not coincide in geography,
both following a priori unknown patterns; (ii) they have distinct forms of
temporal effects, i.e., prolonged waiting makes orders become instantly invalid
while data with increased staleness gradually reduces its utility for model
fine-tuning. To overcome these obstacles, we propose an online framework based
on multi-agent reinforcement learning (MARL) with careful augmentation. A new
quality-of-service (QoS) metric is designed to characterize and balance the
utility of the two joint tasks, under the effects of varying data volumes and
staleness. We also integrate graph neural networks (GNNs) with MARL to enhance
state representations, capturing graph-structured, time-varying dependencies
among vehicles and across locations. Extensive experiments on our testbed
simulator, utilizing various real-world foundation model fine-tuning tasks and
the New York City Taxi ride order dataset, demonstrate the advantage of our
proposed method.
|
2502.04400
|
Adaptive Prototype Knowledge Transfer for Federated Learning with Mixed
Modalities and Heterogeneous Tasks
|
cs.LG cs.AI cs.CR cs.MM
|
Multimodal Federated Learning (MFL) enables multiple clients to
collaboratively train models on multimodal data while ensuring clients'
privacy. However, modality and task heterogeneity hinder clients from learning
a unified representation, weakening local model generalization, especially in
MFL with mixed modalities where only some clients have multimodal data. In this
work, we propose an Adaptive prototype-based Multimodal Federated Learning
(AproMFL) framework for mixed modalities and heterogeneous tasks to address the
aforementioned issues. Our AproMFL transfers knowledge through
adaptively-constructed prototypes without a prior public dataset. Clients
adaptively select prototype construction methods in line with their tasks; the
server converts client prototypes into unified multimodal prototypes and
aggregates them to form global prototypes, avoiding the need for clients to
maintain unified labels. We divide
the model into various modules and only aggregate mapping modules to reduce
communication and computation overhead. To address aggregation issues in
heterogeneity, we develop a client relationship graph-based scheme to
dynamically adjust aggregation weights. Extensive experiments on representative
datasets demonstrate the effectiveness of AproMFL.
|
2502.04402
|
Beyond Interpolation: Extrapolative Reasoning with Reinforcement
Learning and Graph Neural Networks
|
cs.LG cs.AI
|
Despite incredible progress, many neural architectures fail to properly
generalize beyond their training distribution. As such, learning to reason in a
correct and generalizable way is one of the current fundamental challenges in
machine learning. In this respect, logic puzzles provide a great testbed, as we
can fully understand and control the learning environment. Thus, they allow us
to evaluate performance on previously unseen, larger, and more difficult puzzles
that follow the same underlying rules. Since traditional approaches often
struggle to represent such scalable logical structures, we propose to model
these puzzles using a graph-based approach. Then, we investigate the key
factors enabling the proposed models to learn generalizable solutions in a
reinforcement learning setting. Our study focuses on the impact of the
inductive bias of the architecture, different reward systems and the role of
recurrent modeling in enabling sequential reasoning. Through extensive
experiments, we demonstrate how these elements contribute to successful
extrapolation on increasingly complex puzzles. These insights and frameworks
offer a systematic way to design learning-based systems capable of
generalizable reasoning beyond interpolation.
|
2502.04403
|
Agency Is Frame-Dependent
|
cs.AI
|
Agency is a system's capacity to steer outcomes toward a goal, and is a
central topic of study across biology, philosophy, cognitive science, and
artificial intelligence. Determining if a system exhibits agency is a
notoriously difficult question: Dennett (1989), for instance, highlights the
puzzle of determining which principles can decide whether a rock, a thermostat,
or a robot each possess agency. We here address this puzzle from the viewpoint
of reinforcement learning by arguing that agency is fundamentally
frame-dependent: Any measurement of a system's agency must be made relative to
a reference frame. We support this claim by presenting a philosophical argument
that each of the essential properties of agency proposed by Barandiaran et al.
(2009) and Moreno (2018) are themselves frame-dependent. We conclude that any
basic science of agency requires frame-dependence, and discuss the implications
of this claim for reinforcement learning.
|
2502.04404
|
Step Back to Leap Forward: Self-Backtracking for Boosting Reasoning of
Language Models
|
cs.CL cs.AI
|
The integration of slow-thinking mechanisms into large language models (LLMs)
offers a promising way toward achieving Level 2 AGI Reasoners, as exemplified
by systems like OpenAI's o1. However, several significant challenges remain,
including inefficient overthinking and an overreliance on auxiliary reward
models. We point out that these limitations stem from LLMs' inability to
internalize the search process, a key component of effective reasoning. A
critical step toward addressing this issue is enabling LLMs to autonomously
determine when and where to backtrack, a fundamental operation in traditional
search algorithms. To this end, we propose a self-backtracking mechanism that
equips LLMs with the ability to backtrack during both training and inference.
This mechanism not only enhances reasoning ability but also improves efficiency
by transforming slow-thinking processes into fast-thinking ones through
self-improvement. Empirical evaluations demonstrate that our proposal
significantly enhances the reasoning capabilities of LLMs, achieving a
performance gain of over 40 percent compared to the optimal-path supervised
fine-tuning method. We believe this study introduces a novel and promising
pathway for developing more advanced and robust Reasoners.
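The backtracking operation the paper asks LLMs to internalize can be illustrated with a classical search sketch (the scoring function and threshold here are hypothetical, standing in for the model's own judgment of a partial reasoning chain):

```python
def backtracking_search(state, expand, score, is_goal, threshold=0.5):
    """Depth-first reasoning with explicit backtracking: extend the chain
    while partial scores stay above the threshold; otherwise back up and
    try a sibling step. Returns a goal-reaching chain, or None."""
    if is_goal(state):
        return [state]
    for nxt in expand(state):
        if score(nxt) < threshold:
            continue                      # prune this step: triggers backtracking
        tail = backtracking_search(nxt, expand, score, is_goal, threshold)
        if tail is not None:
            return [state] + tail
    return None                           # no viable continuation: backtrack
```

In the paper's framing, the model itself learns when to emit the equivalent of the `continue` above, rather than relying on an external reward model to prune branches.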
|
2502.04405
|
FAS: Fast ANN-SNN Conversion for Spiking Large Language Models
|
cs.LG cs.AI cs.CL
|
Spiking Large Language Models have been shown as a good alternative to LLMs
in various scenarios. Existing methods for creating Spiking LLMs, i.e., direct
training and ANN-SNN conversion, often suffer from performance degradation and
relatively high computational costs. To address these issues, we propose a
novel Fast ANN-SNN conversion strategy (FAS) that transforms LLMs into spiking
LLMs in two stages. The first stage employs a full-parameter fine-tuning of
pre-trained models, so it does not need any direct training from scratch. The
second stage introduces a coarse-to-fine calibration method to reduce
conversion errors and improve accuracy. Our experiments on both language and
vision-language tasks across four different scales of LLMs demonstrate that FAS
can achieve state-of-the-art performance yet with significantly reduced
inference latency and computational costs. For example, FAS takes only 8
timesteps to achieve an accuracy 3% higher than that of the OPT-7B model,
while reducing energy consumption by 96.63%.
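The rate-coding idea underlying ANN-SNN conversion (a generic sketch, not FAS's two-stage fine-tuning and calibration) replaces each ReLU activation with a spike count accumulated over T timesteps by an integrate-and-fire neuron:

```python
import numpy as np

def spike_counts(activations: np.ndarray, timesteps: int = 8, v_thresh: float = 1.0):
    """Rate-code non-negative activations as spike counts: each step the
    membrane integrates the activation and fires (resetting by subtraction)
    whenever it crosses the threshold."""
    a = np.clip(activations, 0.0, None)   # ReLU: negative inputs never spike
    v = np.zeros_like(a)                  # membrane potential
    counts = np.zeros_like(a)
    for _ in range(timesteps):
        v += a                            # integrate constant input
        fired = v >= v_thresh
        counts += fired
        v -= fired * v_thresh             # soft reset on spike
    return counts

def decode(counts: np.ndarray, timesteps: int = 8, v_thresh: float = 1.0):
    """Recover the approximate activation: firing rate times threshold."""
    return counts * v_thresh / timesteps
```

Conversion error shrinks as T grows, which is why cutting the timestep budget to 8 while preserving accuracy (as FAS reports) requires the calibration stage rather than rate coding alone.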
|
2502.04406
|
Calibrated Physics-Informed Uncertainty Quantification
|
cs.LG cs.AI physics.comp-ph
|
Neural PDEs offer efficient alternatives to computationally expensive
numerical PDE solvers for simulating complex physical systems. However, their
lack of robust uncertainty quantification (UQ) limits deployment in critical
applications. We introduce a model-agnostic, physics-informed conformal
prediction (CP) framework that provides guaranteed uncertainty estimates
without requiring labelled data. By utilising a physics-based approach, we are
able to quantify and calibrate the model's inconsistencies with the PDE rather
than the uncertainty arising from the data. Our approach uses convolutional
layers as finite-difference stencils and leverages physics residual errors as
nonconformity scores, enabling data-free UQ with marginal and joint coverage
guarantees across prediction domains for a range of complex PDEs. We further
validate the efficacy of our method on neural PDE models for plasma modelling
and shot design in fusion reactors.
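The core idea, data-free conformal calibration with a physics residual as the nonconformity score, can be sketched as follows. This is an illustrative toy on a 1D heat equation, not the paper's framework; all function names, grid sizes, and the 90% coverage level are invented for the sketch.

```python
import numpy as np

def heat_residual(u, dx, dt, alpha):
    # Finite-difference residual of u_t - alpha * u_xx for a space-time
    # field u of shape (n_time, n_space); acts like a stencil convolution.
    u_t = (u[1:, 1:-1] - u[:-1, 1:-1]) / dt
    u_xx = (u[:-1, 2:] - 2.0 * u[:-1, 1:-1] + u[:-1, :-2]) / dx ** 2
    return u_t - alpha * u_xx

def conformal_quantile(scores, alpha_level):
    # Finite-sample-corrected empirical quantile: marginal coverage of at
    # least 1 - alpha_level on exchangeable future scores.
    n = len(scores)
    k = int(np.ceil((n + 1) * (1.0 - alpha_level)))
    return np.sort(scores)[min(k, n) - 1]

# Calibration needs only model predictions, no labelled PDE solutions:
rng = np.random.default_rng(0)
cal_preds = [rng.normal(size=(20, 32)) for _ in range(50)]  # stand-in surrogate outputs
scores = np.array([np.abs(heat_residual(u, dx=0.1, dt=0.01, alpha=0.05)).max()
                   for u in cal_preds])
q = conformal_quantile(scores, alpha_level=0.1)  # calibrated residual bound
```

At test time, any prediction whose residual score exceeds `q` falls outside the calibrated uncertainty band, flagging inconsistency with the PDE rather than with data.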
|
2502.04407
|
Illuminating Spaces: Deep Reinforcement Learning and Laser-Wall
Partitioning for Architectural Layout Generation
|
cs.LG cs.AI
|
  Space layout design (SLD) occurs in the early stages of the design process,
yet it influences both the functionality and aesthetics of the ultimate
architectural outcome. The complexity of SLD necessitates innovative
approaches to efficiently explore vast solution spaces. While image-based
generative AI has emerged as a potential solution, it often relies on
pixel-based space composition methods that lack an intuitive representation of
architectural processes. This paper leverages deep Reinforcement Learning (RL),
as it offers a procedural approach that intuitively mimics the process of human
designers. Effectively using RL for SLD requires an explorative space composing
method to generate desirable design solutions. We introduce "laser-wall", a
novel space partitioning method that conceptualizes walls as emitters of
imaginary light beams to partition spaces. This approach bridges vector-based
and pixel-based partitioning methods, offering both flexibility and exploratory
power in generating diverse layouts. We present two planning strategies:
one-shot planning, which generates entire layouts in a single pass, and dynamic
planning, which allows for adaptive refinement by continuously transforming
laser-walls. Additionally, we introduce on-light and off-light wall
transformations for smooth and fast layout refinement, as well as identity-less
and identity-full walls for versatile room assignment. We developed
SpaceLayoutGym, an open-source OpenAI Gym compatible simulator for generating
and evaluating space layouts. The RL agent processes the input design scenarios
and generates solutions following a reward function that balances geometrical
and topological requirements. Our results demonstrate that the RL-based
laser-wall approach can generate diverse and functional space layouts that
satisfy both geometric constraints and topological requirements, while
remaining architecturally intuitive.
|
2502.04408
|
Transforming Multimodal Models into Action Models for Radiotherapy
|
cs.LG cs.AI
|
Radiotherapy is a crucial cancer treatment that demands precise planning to
balance tumor eradication and preservation of healthy tissue. Traditional
treatment planning (TP) is iterative, time-consuming, and reliant on human
expertise, which can potentially introduce variability and inefficiency. We
propose a novel framework to transform a large multimodal foundation model
(MLM) into an action model for TP using a few-shot reinforcement learning (RL)
approach. Our method leverages the MLM's extensive pre-existing knowledge of
physics, radiation, and anatomy, enhancing it through a few-shot learning
process. This allows the model to iteratively improve treatment plans using a
Monte Carlo simulator. Our results demonstrate that this method outperforms
conventional RL-based approaches in both quality and efficiency, achieving
higher reward scores and more optimal dose distributions in simulations on
prostate cancer data. This proof-of-concept suggests a promising direction for
integrating advanced AI models into clinical workflows, potentially enhancing
the speed, quality, and standardization of radiotherapy treatment planning.
|
2502.04409
|
Learning low-dimensional representations of ensemble forecast fields
using autoencoder-based methods
|
cs.LG physics.ao-ph
|
Large-scale numerical simulations often produce high-dimensional gridded data
that is challenging to process for downstream applications. A prime example is
numerical weather prediction, where atmospheric processes are modeled using
discrete gridded representations of the physical variables and dynamics.
Uncertainties are assessed by running the simulations multiple times, yielding
ensembles of simulated fields as a high-dimensional stochastic representation
of the forecast distribution. The high dimensionality and large volume of
ensemble datasets pose major computing challenges for subsequent forecasting
stages. Data-driven dimensionality reduction techniques could help to reduce
the data volume before further processing by learning meaningful and compact
representations. However, existing dimensionality reduction methods are
typically designed for deterministic and single-valued inputs, and thus cannot
handle ensemble data from multiple randomized simulations. In this study, we
propose novel dimensionality reduction approaches specifically tailored to the
format of ensemble forecast fields. We present two alternative frameworks,
which yield low-dimensional representations of ensemble forecasts while
respecting their probabilistic character. The first approach derives a
distribution-based representation of an input ensemble by applying standard
dimensionality reduction techniques in a member-by-member fashion and merging
the member representations into a joint parametric distribution model. The
second approach achieves a similar representation by encoding all members
jointly using a tailored variational autoencoder. We evaluate and compare both
approaches in a case study using 10 years of temperature and wind speed
forecasts over Europe. The approaches preserve key spatial and statistical
characteristics of the ensemble and enable probabilistic reconstructions of the
forecast fields.
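The first approach (member-by-member encoding merged into a joint parametric distribution) can be illustrated with PCA and a Gaussian fit. This is a minimal sketch of the general recipe, not the paper's models; the member count, grid size, and component count are arbitrary toy values.

```python
import numpy as np

def ensemble_to_gaussian(ensemble, n_components):
    # Encode an ensemble of flattened forecast fields (n_members, n_grid)
    # member by member with a shared PCA basis, then merge the member codes
    # into a joint Gaussian distribution model.
    mean = ensemble.mean(axis=0)
    X = ensemble - mean
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:n_components]          # leading principal directions
    codes = X @ basis.T                # member-by-member low-dim encoding
    return codes.mean(axis=0), np.cov(codes, rowvar=False), basis, mean

def sample_reconstructions(mu, cov, basis, mean, n_samples, rng):
    # Probabilistic reconstructions: draw codes from the fitted Gaussian
    # and decode them back to full fields.
    z = rng.multivariate_normal(mu, cov, size=n_samples)
    return z @ basis + mean

rng = np.random.default_rng(1)
ens = rng.normal(size=(32, 100))       # 32 toy members on a 100-cell grid
mu, cov, basis, mean = ensemble_to_gaussian(ens, n_components=5)
samples = sample_reconstructions(mu, cov, basis, mean, n_samples=10, rng=rng)
```

The low-dimensional Gaussian `(mu, cov)` is the compact probabilistic representation; the second approach in the paper replaces the PCA encoder with a tailored variational autoencoder.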
|
2502.04411
|
Mediator: Memory-efficient LLM Merging with Less Parameter Conflicts and
Uncertainty Based Routing
|
cs.LG cs.AI cs.CL
|
Model merging aggregates Large Language Models (LLMs) finetuned on different
tasks into a stronger one. However, parameter conflicts between models lead to
performance degradation during averaging. While model routing addresses this issue
by selecting individual models during inference, it imposes excessive storage
and compute costs, and fails to leverage the common knowledge from different
models. In this work, we observe that different layers exhibit varying levels
of parameter conflicts. Building on this insight, we average layers with
minimal parameter conflicts and use a novel task-level expert routing for
layers with significant conflicts. To further reduce storage costs, inspired by
task arithmetic sparsity, we decouple multiple fine-tuned experts into a dense
expert and several sparse experts. Considering the out-of-distribution samples,
we select and merge appropriate experts based on the task uncertainty of the
input data. We conduct extensive experiments on both LLaMA and Qwen with
varying parameter scales, and evaluate on real-world reasoning tasks. Results
demonstrate that our method consistently achieves significant performance
improvements while requiring less system cost compared to existing methods.
|
2502.04412
|
Decoder-Only LLMs are Better Controllers for Diffusion Models
|
cs.CV cs.AI cs.CL
|
Groundbreaking advancements in text-to-image generation have recently been
achieved with the emergence of diffusion models. These models exhibit a
remarkable ability to generate highly artistic and intricately detailed images
based on textual prompts. However, obtaining desired generation outcomes often
necessitates repetitive trials of manipulating text prompts just like casting
spells on a magic mirror, and the reason behind that is the limited capability
of semantic understanding inherent in current image generation models.
Specifically, existing diffusion models encode the text prompt input with a
pre-trained encoder structure, which is usually trained on a limited number of
image-caption pairs. The state-of-the-art large language models (LLMs) based on
the decoder-only structure have shown a powerful semantic understanding
capability as their architectures are more suitable for training on very
large-scale unlabeled data. In this work, we propose to enhance text-to-image
diffusion models by borrowing the strength of semantic understanding from large
language models, and devise a simple yet effective adapter to allow the
diffusion models to be compatible with the decoder-only structure. Meanwhile,
we also provide a supporting theoretical analysis with various architectures
(e.g., encoder-only, encoder-decoder, and decoder-only), and conduct extensive
empirical evaluations to verify its effectiveness. The experimental results
show that the enhanced models with our adapter module are superior to the
state-of-the-art models in terms of text-to-image generation quality and
reliability.
|
2502.04413
|
MedRAG: Enhancing Retrieval-augmented Generation with Knowledge
Graph-Elicited Reasoning for Healthcare Copilot
|
cs.CL cs.AI cs.IR
|
Retrieval-augmented generation (RAG) is a well-suited technique for
retrieving privacy-sensitive Electronic Health Records (EHR). It can serve as a
key module of the healthcare copilot, helping reduce misdiagnosis for
healthcare practitioners and patients. However, the diagnostic accuracy and
specificity of existing heuristic-based RAG models used in the medical domain
are inadequate, particularly for diseases with similar manifestations. This
paper proposes MedRAG, a RAG model enhanced by knowledge graph (KG)-elicited
reasoning for the medical domain that retrieves diagnosis and treatment
recommendations based on manifestations. MedRAG systematically constructs a
comprehensive four-tier hierarchical diagnostic KG encompassing critical
diagnostic differences of various diseases. These differences are dynamically
integrated with similar EHRs retrieved from an EHR database, and reasoned
within a large language model. This process enables more accurate and specific
decision support, while also proactively providing follow-up questions to
enhance personalized medical decision-making. MedRAG is evaluated on both a
public dataset DDXPlus and a private chronic pain diagnostic dataset (CPDD)
collected from Tan Tock Seng Hospital, and its performance is compared against
various existing RAG methods. Experimental results show that, leveraging the
information integration and relational abilities of the KG, our MedRAG provides
more specific diagnostic insights and outperforms state-of-the-art models in
reducing misdiagnosis rates. Our code will be available at
https://github.com/SNOWTEAM2023/MedRAG
|
2502.04415
|
TerraQ: Spatiotemporal Question-Answering on Satellite Image Archives
|
cs.CV cs.AI
|
TerraQ is a spatiotemporal question-answering engine for satellite image
archives. It is a natural language processing system that is built to process
requests for satellite images satisfying certain criteria. The requests can
refer to image metadata and entities from a specialized knowledge base (e.g.,
the Emilia-Romagna region). With it, users can make requests like "Give me a
hundred images of rivers near ports in France, with less than 20% snow coverage
and more than 10% cloud coverage", thus making Earth Observation data more
easily accessible, in line with the current landscape of digital assistants.
|
2502.04416
|
CMoE: Fast Carving of Mixture-of-Experts for Efficient LLM Inference
|
cs.LG cs.AI
|
Large language models (LLMs) achieve impressive performance by scaling model
parameters, but this comes with significant inference overhead. Feed-forward
networks (FFNs), which dominate LLM parameters, exhibit high activation
sparsity in hidden neurons. To exploit this, researchers have proposed using a
mixture-of-experts (MoE) architecture, where only a subset of parameters is
activated. However, existing approaches often require extensive training data
and resources, limiting their practicality. We propose CMoE (Carved MoE), a
novel framework to efficiently carve MoE models from dense models. CMoE
achieves remarkable performance through efficient expert grouping and
lightweight adaptation. First, neurons are grouped into shared and routed
experts based on activation rates. Next, we construct a routing mechanism
without training from scratch, incorporating a differentiable routing process
and load balancing. Using modest data, CMoE produces a well-designed, usable
MoE from a 7B dense model within five minutes. With lightweight fine-tuning, it
achieves high-performance recovery in under an hour. We make our code publicly
available at https://github.com/JarvisPei/CMoE.
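The first carving step, grouping FFN neurons into shared and routed experts by activation rate, can be sketched in a few lines. This is a hypothetical illustration of that grouping idea only (CMoE's actual routing construction and adaptation are not shown); the rates and sizes are toy values.

```python
import numpy as np

def carve_experts(activation_rates, n_shared, n_routed_experts):
    # Rank FFN neurons by how often they activate: the hottest neurons go
    # into a shared (always-on) expert, the remainder are partitioned
    # round-robin into routed experts activated per token.
    order = np.argsort(activation_rates)[::-1]   # indices, most active first
    shared = order[:n_shared]
    routed = [order[n_shared + i::n_routed_experts]
              for i in range(n_routed_experts)]
    return shared, routed

rates = np.array([0.9, 0.1, 0.8, 0.3, 0.5, 0.2])  # toy activation rates
shared, routed = carve_experts(rates, n_shared=2, n_routed_experts=2)
```

Every neuron lands in exactly one expert, so the dense FFN's parameters are reused rather than retrained from scratch.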
|
2502.04417
|
NeuralMOVES: A lightweight and microscopic vehicle emission estimation
model based on reverse engineering and surrogate learning
|
cs.LG cs.AI
|
The transportation sector significantly contributes to greenhouse gas
emissions, necessitating accurate emission models to guide mitigation
strategies. Despite its field validation and certification, the
industry-standard Motor Vehicle Emission Simulator (MOVES) faces challenges
related to complexity in usage, high computational demands, and its
unsuitability for microscopic real-time applications. To address these
limitations, we present NeuralMOVES, a comprehensive suite of high-performance,
lightweight surrogate models for vehicle CO2 emissions. Developed based on
reverse engineering and Neural Networks, NeuralMOVES achieves a remarkable
6.013% Mean Absolute Percentage Error relative to MOVES across extensive tests
spanning over two million scenarios with diverse trajectories and varied
environmental and vehicle factors. NeuralMOVES is only 2.4 MB, largely
condensing the original MOVES and the reverse engineered MOVES into a compact
representation, while maintaining high accuracy. Therefore, NeuralMOVES
significantly enhances accessibility while maintaining the accuracy of MOVES,
simplifying CO2 evaluation for transportation analyses and enabling real-time,
microscopic applications across diverse scenarios without reliance on complex
software or extensive computational resources. Moreover, this paper provides,
for the first time, a framework for reverse engineering industrial-grade
software tailored specifically to transportation scenarios, going beyond MOVES.
The surrogate models are available at https://github.com/edgar-rs/neuralMOVES.
|
2502.04418
|
Autotelic Reinforcement Learning: Exploring Intrinsic Motivations for
Skill Acquisition in Open-Ended Environments
|
cs.LG cs.AI
|
This paper presents a comprehensive overview of autotelic Reinforcement
Learning (RL), emphasizing the role of intrinsic motivations in the open-ended
formation of skill repertoires. We delineate the distinctions between
knowledge-based and competence-based intrinsic motivations, illustrating how
these concepts inform the development of autonomous agents capable of
generating and pursuing self-defined goals. The typology of Intrinsically
Motivated Goal Exploration Processes (IMGEPs) is explored, with a focus on the
implications for multi-goal RL and developmental robotics. The autotelic
learning problem is framed within a reward-free Markov Decision Process (MDP),
where agents must autonomously represent, generate, and master their own goals.
We address the unique challenges in evaluating such agents, proposing various
metrics for measuring exploration, generalization, and robustness in complex
environments. This work aims to advance the understanding of autotelic RL
agents and their potential for enhancing skill acquisition in a diverse and
dynamic setting.
|
2502.04419
|
Understanding and Mitigating the Bias Inheritance in LLM-based Data
Augmentation on Downstream Tasks
|
cs.LG cs.AI cs.CL
|
Generating synthetic datasets via large language models (LLMs) themselves has
emerged as a promising approach to improve LLM performance. However, LLMs
inherently reflect biases present in their training data, leading to a critical
challenge: when these models generate synthetic data for training, they may
propagate and amplify their inherent biases that can significantly impact model
fairness and robustness on downstream tasks--a phenomenon we term bias
inheritance. This work presents the first systematic investigation into
understanding, analyzing, and mitigating bias inheritance. We study this
problem by fine-tuning LLMs with a combined dataset consisting of original and
LLM-augmented data, where bias ratio represents the proportion of augmented
data. Through systematic experiments across 10 classification and generation
tasks, we analyze how 6 different types of biases manifest at varying bias
ratios. Our results reveal that bias inheritance has nuanced effects on
downstream tasks, influencing both classification tasks and generation tasks
differently. Then, our analysis identifies three key misalignment factors:
misalignment of values, group data, and data distributions. Based on these
insights, we propose three mitigation strategies: token-based, mask-based, and
loss-based approaches. Experiments demonstrate that these strategies work
differently across tasks and bias types, indicating the substantial challenge
of fully mitigating bias inheritance. We hope this work can provide valuable
insights to the research of LLM data augmentation.
|
2502.04420
|
KVTuner: Sensitivity-Aware Layer-wise Mixed Precision KV Cache
Quantization for Efficient and Nearly Lossless LLM Inference
|
cs.LG cs.AI cs.CL
|
KV cache quantization can improve Large Language Models (LLMs) inference
throughput and latency in long-context and large batch-size scenarios while
preserving LLMs' effectiveness. However, current methods have three unsolved
issues: overlooking layer-wise sensitivity to KV cache quantization, high
overhead of online fine-grained decision-making, and low flexibility to
different LLMs and constraints. Therefore, we thoroughly analyze the inherent
correlation of layer-wise transformer attention patterns to KV cache
quantization errors and study why key cache is more important than value cache
for quantization error reduction. We further propose a simple yet effective
framework KVTuner to adaptively search for the optimal hardware-friendly
layer-wise KV quantization precision pairs for coarse-grained KV cache with
multi-objective optimization and directly utilize the offline searched
configurations during online inference. To reduce the computational cost of
offline calibration, we utilize the intra-layer KV precision pair pruning and
inter-layer clustering to reduce the search space. Experimental results show
that we can achieve nearly lossless 3.25-bit mixed precision KV cache
quantization for LLMs like Llama-3.1-8B-Instruct and 4.0-bit for sensitive
models like Qwen2.5-7B-Instruct on mathematical reasoning tasks. The maximum
inference throughput can be improved by 38.3% compared with KV8 quantization
over various context lengths.
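The offline search idea, picking a per-layer (key-bits, value-bits) pair that trades quantization error against a bit budget, can be sketched with a toy brute-force. This is not KVTuner's actual algorithm (which uses multi-objective optimization, pruning, and clustering); every name and number below is illustrative.

```python
import numpy as np

def fake_quant(x, bits):
    # Simulated symmetric uniform quantization to `bits` bits.
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(x).max() / qmax
    return np.round(x / scale) * scale

def search_layer_precisions(keys, values, bit_options, budget_bits):
    # For each layer, enumerate (key_bits, value_bits) pairs, keep those
    # whose average width fits the budget, and pick the pair with the
    # smallest KV reconstruction error on calibration activations.
    best = []
    for k, v in zip(keys, values):
        candidates = []
        for kb in bit_options:
            for vb in bit_options:
                err = (np.mean((fake_quant(k, kb) - k) ** 2)
                       + np.mean((fake_quant(v, vb) - v) ** 2))
                candidates.append(((kb + vb) / 2, err, (kb, vb)))
        feasible = [c for c in candidates if c[0] <= budget_bits]
        best.append(min(feasible, key=lambda c: c[1])[2])
    return best

rng = np.random.default_rng(0)
keys = [rng.normal(size=(64, 16)) for _ in range(4)]    # toy calibration KVs
values = [rng.normal(size=(64, 16)) for _ in range(4)]
pairs = search_layer_precisions(keys, values, bit_options=[2, 4, 8],
                                budget_bits=4.0)
```

Because keys contribute more to quantization error than values, such a search tends to grant keys the wider precision within the budget, matching the paper's observation.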
|
2502.04421
|
Assessing and Prioritizing Ransomware Risk Based on Historical Victim
Data
|
cs.CR cs.AI cs.LG
|
We present an approach to identifying which ransomware adversaries are most
likely to target specific entities, thereby assisting these entities in
formulating better protection strategies. Ransomware poses a formidable
cybersecurity threat characterized by profit-driven motives, a complex
underlying economy supporting criminal syndicates, and the overt nature of its
attacks. This type of malware has consistently ranked among the most prevalent,
with a rapid escalation in activity observed. Recent estimates indicate that
approximately two-thirds of organizations experienced ransomware attacks in
2023 \cite{Sophos2023Ransomware}. A central tactic in ransomware campaigns is
publicizing attacks to coerce victims into paying ransoms. Our study utilizes
public disclosures from ransomware victims to predict the likelihood of an
entity being targeted by a specific ransomware variant. We employ a Large
Language Model (LLM) architecture that uses a unique chain-of-thought,
multi-shot prompt methodology to define adversary SKRAM (Skills, Knowledge,
Resources, Authorities, and Motivation) profiles from ransomware bulletins,
threat reports, and news items. This analysis is enriched with publicly
available victim data and is further enhanced by a heuristic for generating
synthetic data that reflects victim profiles. Our work culminates in the
development of a machine learning model that assists organizations in
prioritizing ransomware threats and formulating defenses based on the tactics,
techniques, and procedures (TTP) of the most likely attackers.
|
2502.04423
|
Primary Care Diagnoses as a Reliable Predictor for Orthopedic Surgical
Interventions
|
cs.LG cs.AI cs.CL
|
Referral workflow inefficiencies, including misaligned referrals and delays,
contribute to suboptimal patient outcomes and higher healthcare costs. In this
study, we investigated the possibility of predicting procedural needs based on
primary care diagnostic entries, thereby improving referral accuracy,
streamlining workflows, and providing better care to patients. A de-identified
dataset of 2,086 orthopedic referrals from the University of Texas Health at
Tyler was analyzed using machine learning models built on Base General
Embeddings (BGE) for semantic extraction. To ensure real-world applicability,
noise tolerance experiments were conducted, and oversampling techniques were
employed to mitigate class imbalance. The selected optimal and parsimonious
embedding model demonstrated high predictive accuracy (ROC-AUC: 0.874, Matthews
Correlation Coefficient (MCC): 0.540), effectively distinguishing patients
requiring surgical intervention. Dimensionality reduction techniques confirmed
the model's ability to capture meaningful clinical relationships. A threshold
sensitivity analysis identified an optimal decision threshold (0.30) to balance
precision and recall, maximizing referral efficiency. In the predictive
modeling analysis, the procedure rate increased from 11.27% to an optimal
60.1%, representing a 433% improvement with significant implications for
operational efficiency and healthcare revenue.
The results of our study demonstrate that referral optimization can enhance
primary and surgical care integration. Through this approach, precise and
timely predictions of procedural requirements can be made, thereby minimizing
delays, improving surgical planning, and reducing administrative burdens. In
addition, the findings highlight the potential of clinical decision support as
a scalable solution for improving patient outcomes and the efficiency of the
healthcare system.
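A threshold sensitivity analysis of the kind described can be sketched as a precision/recall sweep over candidate decision thresholds. This is a generic toy illustration, not the study's analysis; the probabilities, labels, and thresholds below are invented.

```python
import numpy as np

def threshold_sweep(probs, labels, thresholds):
    # Evaluate precision and recall at each decision threshold and return
    # the threshold that maximises F1 as the precision/recall balance point.
    rows = []
    for t in thresholds:
        pred = probs >= t
        tp = np.sum(pred & (labels == 1))
        precision = tp / max(pred.sum(), 1)
        recall = tp / max((labels == 1).sum(), 1)
        f1 = 2 * precision * recall / max(precision + recall, 1e-12)
        rows.append((t, precision, recall, f1))
    return max(rows, key=lambda r: r[3])[0]

# Toy predicted probabilities of needing a procedure, with true labels:
probs = np.array([0.1, 0.2, 0.35, 0.6, 0.8, 0.9])
labels = np.array([0, 0, 1, 1, 1, 1])
best = threshold_sweep(probs, labels, thresholds=[0.3, 0.5, 0.7])
```

In the study, the same kind of sweep identified 0.30 as the operating threshold that best balanced referral precision and recall.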
|
2502.04424
|
EmoBench-M: Benchmarking Emotional Intelligence for Multimodal Large
Language Models
|
cs.CL cs.AI
|
With the integration of Multimodal large language models (MLLMs) into robotic
systems and various AI applications, embedding emotional intelligence (EI)
capabilities into these models is essential for enabling robots to effectively
address human emotional needs and interact seamlessly in real-world scenarios.
Existing static, text-based, or text-image benchmarks overlook the multimodal
complexities of real-world interactions and fail to capture the dynamic,
multimodal nature of emotional expressions, making them inadequate for
evaluating MLLMs' EI. Based on established psychological theories of EI, we
build EmoBench-M, a novel benchmark designed to evaluate the EI capability of
MLLMs across 13 evaluation scenarios from three key dimensions: foundational
emotion recognition, conversational emotion understanding, and socially complex
emotion analysis. Evaluations of both open-source and closed-source MLLMs on
EmoBench-M reveal a significant performance gap between them and humans,
highlighting the need to further advance their EI capabilities. All benchmark
resources, including code and datasets, are publicly available at
https://emo-gml.github.io/.
|
2502.04426
|
Decoding AI Judgment: How LLMs Assess News Credibility and Bias
|
cs.CL cs.AI cs.CY
|
Large Language Models (LLMs) are increasingly used to assess news
credibility, yet little is known about how they make these judgments. While
prior research has examined political bias in LLM outputs or their potential
for automated fact-checking, their internal evaluation processes remain largely
unexamined. Understanding how LLMs assess credibility provides insights into AI
behavior and how credibility is structured and applied in large-scale language
models. This study benchmarks the reliability and political classifications of
state-of-the-art LLMs - Gemini 1.5 Flash (Google), GPT-4o mini (OpenAI), and
LLaMA 3.1 (Meta) - against structured, expert-driven rating systems such as
NewsGuard and Media Bias Fact Check. Beyond assessing classification
performance, we analyze the linguistic markers that shape LLM decisions,
identifying which words and concepts drive their evaluations. We uncover
patterns in how LLMs associate credibility with specific linguistic features by
examining keyword frequency, contextual determinants, and rank distributions.
Beyond static classification, we introduce a framework in which LLMs refine
their credibility assessments by retrieving external information, querying
other models, and adapting their responses. This allows us to investigate
whether their assessments reflect structured reasoning or rely primarily on
prior learned associations.
|
2502.04428
|
Confident or Seek Stronger: Exploring Uncertainty-Based On-device LLM
Routing From Benchmarking to Generalization
|
cs.CL cs.AI cs.LG
|
Large language models (LLMs) are increasingly deployed and democratized on
edge devices. To improve the efficiency of on-device deployment, small language
models (SLMs) are often adopted due to their efficient decoding latency and
reduced energy consumption. However, these SLMs often generate inaccurate
responses when handling complex queries. One promising solution is
uncertainty-based SLM routing: offloading high-stakes queries to stronger LLMs
whenever the SLM produces a low-confidence response. This follows the principle
of "If you lack confidence, seek stronger support" to enhance reliability.
Relying on more powerful LLMs is effective, yet it increases invocation costs.
Therefore, striking a routing balance between efficiency and efficacy remains a
critical challenge. Additionally, efficiently generalizing the routing strategy
to new datasets remains under-explored. In this paper, we conduct a
comprehensive investigation into benchmarking and generalization of
uncertainty-driven routing strategies from SLMs to LLMs over 1500+ settings.
Our findings highlight: First, uncertainty-correctness alignment in different
uncertainty quantification (UQ) methods significantly impacts routing
performance. Second, uncertainty distributions depend more on the specific SLM
and the chosen UQ method than on the downstream data. Building on this
insight, we propose a calibration data construction instruction pipeline and
open-source a constructed hold-out set to enhance routing generalization on new
downstream scenarios. The experimental results indicate that the calibration
data effectively bootstraps routing performance without any new data.
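The basic accuracy/cost trade-off of a confidence-threshold router can be sketched in a few lines. This is a generic illustration of uncertainty-based routing, not the paper's benchmark; all arrays and the threshold are toy values.

```python
import numpy as np

def routing_tradeoff(conf, slm_correct, llm_correct, threshold, llm_cost=1.0):
    # "If you lack confidence, seek stronger support": queries whose SLM
    # confidence falls below the threshold are offloaded to the LLM.
    # Returns overall accuracy and the LLM invocation cost incurred.
    to_llm = conf < threshold
    accuracy = float(np.where(to_llm, llm_correct, slm_correct).mean())
    cost = llm_cost * float(to_llm.mean())
    return accuracy, cost

# Toy per-query confidences and correctness indicators:
conf = np.array([0.9, 0.2, 0.6, 0.1])
slm_correct = np.array([1, 0, 1, 0])
llm_correct = np.array([1, 1, 1, 1])
accuracy, cost = routing_tradeoff(conf, slm_correct, llm_correct, threshold=0.5)
```

Sweeping the threshold traces the efficiency/efficacy frontier; how well that works depends on the uncertainty-correctness alignment of the chosen UQ method, which is exactly what the paper benchmarks.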
|
2502.04457
|
"In order that" -- a data driven study of symptoms and causes of
obsolescence
|
cs.CL cs.CY
|
The paper is an empirical case study of grammatical obsolescence in progress.
The main studied variable is the purpose subordinator in order that, which is
shown to be steadily decreasing in the frequency of use starting from the
beginning of the twentieth century. This work applies a data-driven approach
for the investigation and description of obsolescence, recently developed by
Rudnicka (2019). The methodology combines philological analysis with
statistical methods used on data acquired from mega-corpora. Moving from the
description of possible symptoms of obsolescence to different causes for it,
the paper aims at presenting a comprehensive account of the studied phenomenon.
Interestingly, a very significant role in the decline of in order that can be
ascribed to the so-called higher-order processes, understood as processes
influencing the constructional level from above. Two kinds of higher-order
processes are shown to play an important role, namely i) an
externally-motivated higher-order process exemplified by the drastic
socio-cultural changes of the 19th and 20th centuries; ii) an
internally-motivated higher-order process instantiated by the rise of the
to-infinitive (rise of infinite clauses).
|
2502.04463
|
Training Language Models to Reason Efficiently
|
cs.LG cs.CL
|
Scaling model size and training data has led to great advances in the
performance of Large Language Models (LLMs). However, the diminishing returns
of this approach necessitate alternative methods to improve model capabilities,
particularly in tasks requiring advanced reasoning. Large reasoning models,
which leverage long chain-of-thoughts, bring unprecedented breakthroughs in
problem-solving capabilities, but at a substantial deployment cost associated
with longer generations. Reducing inference costs is crucial for the economic
feasibility, user experience, and environmental sustainability of these models.
In this work, we propose to train large reasoning models to reason
efficiently. More precisely, we use reinforcement learning (RL) to train
reasoning models to dynamically allocate inference-time compute based on task
complexity. Our method incentivizes models to minimize unnecessary
computational overhead while maintaining accuracy, thereby achieving
substantial efficiency gains. It enables the derivation of a family of
reasoning models with varying efficiency levels, controlled via a single
hyperparameter. Experiments on two open-weight large reasoning models
demonstrate significant reductions in inference cost while preserving most of
the accuracy.
|
2502.04465
|
FocalCodec: Low-Bitrate Speech Coding via Focal Modulation Networks
|
cs.LG cs.AI cs.SD eess.AS
|
Large language models have revolutionized natural language processing through
self-supervised pretraining on massive datasets. Inspired by this success,
researchers have explored adapting these methods to speech by discretizing
continuous audio into tokens using neural audio codecs. However, existing
approaches face limitations, including high bitrates, the loss of either
semantic or acoustic information, and the reliance on multi-codebook designs
when trying to capture both, which increases architectural complexity for
downstream tasks. To address these challenges, we introduce FocalCodec, an
efficient low-bitrate codec based on focal modulation that utilizes a single
binary codebook to compress speech between 0.16 and 0.65 kbps. FocalCodec
delivers competitive performance in speech resynthesis and voice conversion at
lower bitrates than the current state-of-the-art, while effectively handling
multilingual speech and noisy environments. Evaluation on downstream tasks
shows that FocalCodec successfully preserves sufficient semantic and acoustic
information, while also being well-suited for generative modeling. Demo
samples, code and checkpoints are available at
https://lucadellalib.github.io/focalcodec-web/.
|
2502.04467
|
Efficient variable-length hanging tether parameterization for marsupial
robot planning in 3D environments
|
cs.RO
|
This paper presents a novel approach to efficiently parameterize and estimate
the state of a hanging tether for path and trajectory planning of a UGV tied to
a UAV in a marsupial configuration. Most implementations in the state of the
art assume a taut tether or make use of the catenary curve to model the shape
of the hanging tether. The catenary model is complex to compute and must be
instantiated thousands of times during the planning process, becoming a
time-consuming task; the taut-tether assumption, in contrast, simplifies the
problem but may overly restrict the movement of the platforms. In order to accelerate
the planning process, this paper proposes defining an analytical model to
efficiently compute the hanging tether state, and a method to get a tether
state parameterization free of collisions. We exploit the existing similarity
between the catenary and parabola curves to derive analytical expressions of
the tether state.
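The parabola-for-catenary substitution can be sketched as follows (an illustrative reconstruction, not the paper's exact expressions): model the tether between the UGV at (0, 0) and the UAV at (d, h) as a parabola through both endpoints, and choose the sag coefficient so the curve's arc length matches the tether length.

```python
import math

def tether_y(a, d, h, x):
    """Parabolic tether model y(x) = a*x*(x - d) + (h/d)*x; it passes through
    (0, 0) and (d, h) for any sag coefficient a, and sags below the chord
    whenever a > 0."""
    return a * x * (x - d) + (h / d) * x

def arc_length(a, d, h, n=2000):
    """Polyline estimate of the parabola's arc length from x = 0 to x = d."""
    s, px, py = 0.0, 0.0, 0.0
    for i in range(1, n + 1):
        x = d * i / n
        y = tether_y(a, d, h, x)
        s += math.hypot(x - px, y - py)
        px, py = x, y
    return s

def fit_sag(d, h, length, tol=1e-8):
    """Find the sag coefficient whose parabola has the given arc length.
    Arc length grows monotonically with a, so bisection suffices."""
    chord = math.hypot(d, h)
    if length <= chord:          # taut tether: straight-line limit
        return 0.0
    lo, hi = 0.0, 1.0
    while arc_length(hi, d, h) < length:
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if arc_length(mid, d, h) < length:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Once the sag coefficient is known, collision checks against the 3D map reduce to sampling `tether_y` along the span, which is far cheaper than repeatedly instantiating the transcendental catenary model.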
|
2502.04468
|
Iterative Importance Fine-tuning of Diffusion Models
|
cs.LG eess.IV math.PR
|
Diffusion models are an important tool for generative modelling, serving as
effective priors in applications such as imaging and protein design. A key
challenge in applying diffusion models for downstream tasks is efficiently
sampling from resulting posterior distributions, which can be addressed using
the $h$-transform. This work introduces a self-supervised algorithm for
fine-tuning diffusion models by estimating the $h$-transform, enabling
amortised conditional sampling. Our method iteratively refines the
$h$-transform using a synthetic dataset resampled with path-based importance
weights. We demonstrate the effectiveness of this framework on
class-conditional sampling and reward fine-tuning for text-to-image diffusion
models.
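The resampling step can be sketched as a generic self-normalized importance scheme (the paper's exact path-based estimator is not given in the abstract; the function below only illustrates the mechanism): given synthetic samples and per-path log importance weights, draw a refined dataset with probability proportional to the weights.

```python
import math
import random

def importance_resample(samples, log_weights, k, seed=0):
    """Draw k samples (with replacement) with probability proportional to
    exp(log_weight). Subtracting the max log-weight before exponentiating
    keeps the computation numerically stable; random.choices normalizes."""
    assert len(samples) == len(log_weights)
    m = max(log_weights)
    weights = [math.exp(lw - m) for lw in log_weights]
    rng = random.Random(seed)
    return rng.choices(samples, weights=weights, k=k)
```

Iterating this step — generate paths, weight them, resample, fine-tune on the resampled set — progressively concentrates the synthetic dataset on high-weight regions of the target posterior.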
|
2502.04469
|
No Images, No Problem: Retaining Knowledge in Continual VQA with
Questions-Only Memory
|
cs.CV cs.AI
|
Continual Learning in Visual Question Answering (VQACL) requires models to
learn new visual-linguistic tasks (plasticity) while retaining knowledge from
previous tasks (stability). The multimodal nature of VQACL presents unique
challenges, requiring models to balance stability across visual and textual
domains while maintaining plasticity to adapt to novel objects and reasoning
tasks. Existing methods, predominantly designed for unimodal tasks, often
struggle to balance these demands effectively. In this work, we introduce
QUestion-only replay with Attention Distillation (QUAD), a novel approach for
VQACL that leverages only past task questions for regularisation, eliminating
the need to store visual data and addressing both memory and privacy concerns.
QUAD achieves stability by introducing a question-only replay mechanism that
selectively uses questions from previous tasks to prevent overfitting to the
current task's answer space, thereby mitigating the out-of-answer-set problem.
Complementing this, we propose attention consistency distillation, which
uniquely enforces both intra-modal and inter-modal attention consistency across
tasks, preserving essential visual-linguistic associations. Extensive
experiments on VQAv2 and NExT-QA demonstrate that QUAD significantly
outperforms state-of-the-art methods, achieving robust performance in continual
VQA.
|
2502.04470
|
Color in Visual-Language Models: CLIP deficiencies
|
cs.CV cs.AI
|
This work explores how color is encoded in CLIP (Contrastive Language-Image
Pre-training), currently the most influential vision-language model (VLM) in
Artificial Intelligence. After performing different experiments on synthetic
datasets created for this task, we conclude that CLIP is able to attribute
correct color labels to colored visual stimuli, but we come across two main
deficiencies: (a) a clear bias on achromatic stimuli, which are poorly related
to the color concept, so that white, gray, and black are rarely assigned as
color labels; and (b) a tendency to prioritize text over other visual
information, which we show to be highly significant in color labelling through
an exhaustive Stroop-effect test. To find the causes of these color
deficiencies, we analyse the internal representation at the neuron level. We
conclude that CLIP contains a substantial number of neurons selective to text,
especially in the deepest layers of the network, and a smaller number of
multimodal color neurons, which could be the key to understanding the concept
of color properly. Our investigation underscores the need to refine color
representation mechanisms in neural networks so that they comprehend colors
more as humans understand them, thereby advancing the efficacy and versatility
of multimodal models like CLIP in real-world scenarios.
|
2502.04471
|
Identifying Flaky Tests in Quantum Code: A Machine Learning Approach
|
cs.SE cs.LG
|
Testing and debugging quantum software pose significant challenges due to the
inherent complexities of quantum mechanics, such as superposition and
entanglement. One challenge is indeterminacy, a fundamental characteristic of
quantum systems, which increases the likelihood of flaky tests in quantum
programs. To the best of our knowledge, there is a lack of comprehensive
studies on quantum flakiness in the existing literature. In this paper, we
present a novel machine learning platform that leverages multiple machine
learning models to automatically detect flaky tests in quantum programs. Our
evaluation shows that the extreme gradient boosting and decision tree-based
models outperform other models (i.e., random forest, k-nearest neighbors, and
support vector machine), achieving the highest F1 score and Matthews
Correlation Coefficient in a balanced dataset and an imbalanced dataset,
respectively. Furthermore, we expand the currently limited dataset for
researchers interested in quantum flaky tests. In the future, we plan to
explore the development of unsupervised learning techniques to detect and
classify quantum flaky tests more effectively. These advancements aim to
improve the reliability and robustness of quantum software testing.
|
2502.04475
|
Augmented Conditioning Is Enough For Effective Training Image Generation
|
cs.CV cs.AI cs.LG
|
Image generation abilities of text-to-image diffusion models have
significantly advanced, yielding highly photo-realistic images from descriptive
text and increasing the viability of leveraging synthetic images to train
computer vision models. To serve as effective training data, generated images
must be highly realistic while also sufficiently diverse within the support of
the target data distribution. Yet, state-of-the-art conditional image
generation models have been primarily optimized for creative applications,
prioritizing image realism and prompt adherence over conditional diversity. In
this paper, we investigate how to improve the diversity of generated images
with the goal of increasing their effectiveness to train downstream image
classification models, without fine-tuning the image generation model. We find
that conditioning the generation process on an augmented real image and text
prompt produces generations that serve as effective synthetic datasets for
downstream training. Conditioning on real training images contextualizes the
generation process to produce images that are in-domain with the real image
distribution, while data augmentations introduce visual diversity that improves
the performance of the downstream classifier. We validate
augmentation-conditioning on a total of five established long-tail and few-shot
image classification benchmarks and show that leveraging augmentations to
condition the generation process results in consistent improvements over the
state-of-the-art on the long-tailed benchmark and remarkable gains in extreme
few-shot regimes of the remaining four benchmarks. These results constitute an
important step towards effectively leveraging synthetic data for downstream
training.
|
2502.04476
|
ADIFF: Explaining audio difference using natural language
|
cs.SD cs.AI eess.AS
|
Understanding and explaining differences between audio recordings is crucial
for fields like audio forensics, quality assessment, and audio generation. This
involves identifying and describing audio events, acoustic scenes, signal
characteristics, and their emotional impact on listeners. This paper is the
first work to comprehensively study the task of explaining audio differences
and to propose a benchmark and baselines for it. First, we
present two new datasets for audio difference explanation derived from the
AudioCaps and Clotho audio captioning datasets. Using Large Language Models
(LLMs), we generate three levels of difference explanations: (1) concise
descriptions of audio events and objects, (2) brief sentences about audio
events, acoustic scenes, and signal properties, and (3) comprehensive
explanations that include semantics and listener emotions. For the baseline, we
use prefix tuning where audio embeddings from two audio files are used to
prompt a frozen language model. Our empirical analysis and ablation studies
reveal that the naive baseline struggles to distinguish perceptually similar
sounds and generate detailed tier 3 explanations. To address these limitations,
we propose ADIFF, which introduces a cross-projection module, position
captioning, and a three-step training process to enhance the model's ability to
produce detailed explanations. We evaluate our model using objective metrics
and human evaluation and show our model enhancements lead to significant
improvements in performance over naive baseline and SoTA Audio-Language Model
(ALM) Qwen Audio. Lastly, we conduct multiple ablation studies to study the
effects of cross-projection, language model parameters, position captioning,
third stage fine-tuning, and present our findings. Our benchmarks, findings,
and strong baseline pave the way for nuanced and human-like explanations of
audio differences.
|
2502.04478
|
OneTrack-M: A multitask approach to transformer-based MOT models
|
cs.CV cs.LG
|
Multi-Object Tracking (MOT) is a critical problem in computer vision,
essential for understanding how objects move and interact in videos. This field
faces significant challenges such as occlusions and complex environmental
dynamics, impacting model accuracy and efficiency. While traditional approaches
have relied on Convolutional Neural Networks (CNNs), introducing transformers
has brought substantial advancements. This work introduces OneTrack-M, a
transformer-based MOT model designed to enhance tracking computational
efficiency and accuracy. Our approach simplifies the typical transformer-based
architecture by eliminating the need for a decoder model for object detection
and tracking. Instead, the encoder alone serves as the backbone for temporal
data interpretation, significantly reducing processing time and increasing
inference speed. Additionally, we employ innovative data pre-processing and
multitask training techniques to address occlusion and diverse objective
challenges within a single set of weights. Experimental results demonstrate
that OneTrack-M achieves at least 25% faster inference times compared to
state-of-the-art models in the literature while maintaining or improving
tracking accuracy metrics. These improvements highlight the potential of the
proposed solution for real-time applications such as autonomous vehicles,
surveillance systems, and robotics, where rapid responses are crucial for
system effectiveness.
|
2502.04480
|
On Techniques for Barely Coupled Multiphysics
|
cs.CE
|
A technique to combine codes to solve barely coupled multiphysics problems
has been developed. Each field is advanced separately until a stop is
triggered. This could be due to a preset time increment, a preset number of
timesteps, a preset decrease of residuals, a preset change in unknowns, a
preset change in geometry, or any other physically meaningful quantity. The
technique allows for a simple implementation in coupled codes using the loose
coupling approach. Examples from the evaporative cooling of electric motors, a
problem that has come to the forefront with the rise of electric propulsion in
the aerospace sector (drones and air taxis in particular), show the viability
and accuracy of the proposed procedure.
|
2502.04483
|
Measuring Physical Plausibility of 3D Human Poses Using Physics
Simulation
|
cs.CV
|
Modeling humans in physical scenes is vital for understanding
human-environment interactions for applications involving augmented reality or
assessment of human actions from video (e.g. sports or physical
rehabilitation). State-of-the-art literature begins with a 3D human pose, from
monocular or multiple views, and uses this representation to ground the person
within a 3D world space. While standard metrics for accuracy capture joint
position errors, they do not consider physical plausibility of the 3D pose.
This limitation has motivated researchers to propose other metrics evaluating
jitter, floor penetration, and unbalanced postures. Yet, these approaches
measure independent instances of errors and are not representative of balance
or stability during motion. In this work, we propose measuring physical
plausibility from within physics simulation. We introduce two metrics to
capture the physical plausibility and stability of predicted 3D poses from any
3D Human Pose Estimation model. Using physics simulation, we discover
correlations with existing plausibility metrics and measure stability during
motion. We evaluate and compare the performances of two state-of-the-art
methods, a multi-view triangulated baseline, and ground truth 3D markers from
the Human3.6m dataset.
|
2502.04484
|
The ML Supply Chain in the Era of Software 2.0: Lessons Learned from
Hugging Face
|
cs.SE cs.LG
|
The last decade has seen widespread adoption of Machine Learning (ML)
components in software systems. This has occurred in nearly every domain, from
natural language processing to computer vision. These ML components range from
relatively simple neural networks to complex and resource-intensive large
language models. However, despite this widespread adoption, little is known
about the supply chain relationships that produce these models, which can have
implications for compliance and security. In this work, we conduct an extensive
analysis of 760,460 models and 175,000 datasets mined from the popular
model-sharing site Hugging Face. First, we evaluate the current state of
documentation in the Hugging Face supply chain, report real-world examples of
shortcomings, and offer actionable suggestions for improvement. Next, we
analyze the underlying structure of the extant supply chain. Finally, we
explore the current licensing landscape against what was reported in prior work
and discuss the unique challenges posed in this domain. Our results motivate
multiple research avenues, including the need for better license management for
ML models/datasets, better support for model documentation, and automated
inconsistency checking and validation. We make our research infrastructure and
dataset available to facilitate future research.
|
2502.04485
|
Active Task Disambiguation with LLMs
|
cs.CL cs.AI cs.LG
|
Despite the impressive performance of large language models (LLMs) across
various benchmarks, their ability to address ambiguously specified
problems--frequent in real-world interactions--remains underexplored. To
address this gap, we introduce a formal definition of task ambiguity and frame
the problem of task disambiguation through the lens of Bayesian Experimental
Design. By posing clarifying questions, LLM agents can acquire additional task
specifications, progressively narrowing the space of viable solutions and
reducing the risk of generating unsatisfactory outputs. Yet, generating
effective clarifying questions requires LLM agents to engage in a form of
meta-cognitive reasoning, an ability LLMs may presently lack. Our proposed
approach of active task disambiguation enables LLM agents to generate targeted
questions maximizing the information gain. Effectively, this approach shifts
the load from implicit to explicit reasoning about the space of viable
solutions. Empirical results demonstrate that this form of question selection
leads to more effective task disambiguation in comparison to approaches relying
on reasoning solely within the space of questions.
|
2502.04488
|
Building A Unified AI-centric Language System: analysis, framework and
future work
|
cs.CL cs.AI
|
Recent advancements in large language models have demonstrated that extended
inference techniques can markedly improve performance, yet these gains
come with increased computational costs and the propagation of inherent biases
found in natural languages. This paper explores the design of a unified
AI-centric language system that addresses these challenges by offering a more
concise, unambiguous, and computationally efficient alternative to traditional
human languages. We analyze the limitations of natural language, such as gender
bias, morphological irregularities, and contextual ambiguities, and examine how
these issues are exacerbated within current Transformer architectures, where
redundant attention heads and token inefficiencies prevail. Drawing on insights
from emergent artificial communication systems and constructed languages like
Esperanto and Lojban, we propose a framework that translates diverse natural
language inputs into a streamlined AI-friendly language, enabling more
efficient model training and inference while reducing memory footprints.
Finally, we outline a pathway for empirical validation through controlled
experiments, paving the way for a universal interchange format that could
revolutionize AI-to-AI and human-to-AI interactions by enhancing clarity,
fairness, and overall performance.
|
2502.04489
|
CNN Autoencoders for Hierarchical Feature Extraction and Fusion in
Multi-sensor Human Activity Recognition
|
cs.LG cs.AI
|
Deep learning methods have been widely used for Human Activity Recognition
(HAR) using recorded signals from Inertial Measurement Unit (IMU) sensors
installed on various parts of the human body. For this type of HAR, several
challenges exist, the most significant of which is the analysis of
multivariate IMU sensor data. Here, we introduce a Hierarchically Unsupervised
Fusion (HUF) model designed to extract and fuse features from IMU sensor data
via a hybrid structure of Convolutional Neural Networks (CNNs) and Autoencoders
(AEs). First, we design a stacked CNN-AE to embed short-time signals into sets
of high-dimensional features. Second, we develop another CNN-AE network to
locally fuse the extracted features from each sensor unit. Finally, we unify
all the sensor features through a third CNN-AE architecture for global feature
fusion, creating a unique feature set. Additionally, we analyze the effects of
varying the model hyperparameters. The best results are achieved with eight
convolutional layers in each AE. Furthermore, we find that an overcomplete AE
with 256 kernels in the code layer is suitable for feature extraction in the
first block of the proposed HUF model; this number reduces to 64 in the last
block of the model to match the size of the features fed to the classifier. The
tuned model is applied to the UCI-HAR, DaLiAc, and Parkinson's disease gait
datasets, achieving classification accuracies of 97%, 97%, and 88%,
respectively, nearly 3% better than state-of-the-art supervised methods.
|
2502.04491
|
Provable Sample-Efficient Transfer Learning Conditional Diffusion Models
via Representation Learning
|
cs.LG math.ST stat.ML stat.TH
|
While conditional diffusion models have achieved remarkable success in
various applications, they require abundant data to train from scratch, which
is often infeasible in practice. To address this issue, transfer learning has
emerged as an essential paradigm in small data regimes. Despite its empirical
success, the theoretical underpinnings of transfer learning conditional
diffusion models remain unexplored. In this paper, we take the first step
towards understanding the sample efficiency of transfer learning conditional
diffusion models through the lens of representation learning. Inspired by
practical training procedures, we assume that there exists a low-dimensional
representation of conditions shared across all tasks. Our analysis shows that
with a well-learned representation from source tasks, the sample complexity of
target tasks can be reduced substantially. In addition, we investigate the
practical implications of our theoretical results in several real-world
applications of conditional diffusion models. Numerical experiments are also
conducted to verify our results.
|
2502.04492
|
Multi-Agent Reinforcement Learning with Focal Diversity Optimization
|
cs.CL
|
The advancement of Large Language Models (LLMs) and their fine-tuning
strategies has triggered renewed interest in multi-agent reinforcement
learning. In this paper, we introduce a focal diversity-optimized multi-agent
reinforcement learning approach, coined as MARL-Focal, with three unique
characteristics. First, we develop an agent-fusion framework for encouraging
multiple LLM based agents to collaborate in producing the final inference
output for each LLM query. Second, we develop a focal-diversity optimized agent
selection algorithm that can choose a small subset of the available agents
based on how well they can complement one another to generate the query output.
Finally, we design a conflict-resolution method to detect output inconsistency
among multiple agents and produce our MARL-Focal output through reward-aware
and policy-adaptive inference fusion. Extensive evaluations on five benchmarks
show that MARL-Focal is cost-efficient and adversarial-robust. Our multi-agent
fusion model achieves a performance improvement of 5.51\% compared to the best
individual LLM-agent and offers stronger robustness over the TruthfulQA
benchmark. Code is available at https://github.com/sftekin/rl-focal
|
2502.04493
|
LUND-PROBE -- LUND Prostate Radiotherapy Open Benchmarking and
Evaluation dataset
|
physics.med-ph cs.CV eess.IV
|
Radiotherapy treatment for prostate cancer relies on computed tomography (CT)
and/or magnetic resonance imaging (MRI) for segmentation of target volumes and
organs at risk (OARs). Manual segmentation of these volumes is regarded as the
gold standard for ground truth in machine learning applications but to acquire
such data is tedious and time-consuming. A publicly available clinical dataset
is presented, comprising MRI- and synthetic CT (sCT) images, target and OARs
segmentations, and radiotherapy dose distributions for 432 prostate cancer
patients treated with MRI-guided radiotherapy. An extended dataset with 35
patients is also included, with the addition of deep learning (DL)-generated
segmentations, DL segmentation uncertainty maps, and DL segmentations manually
adjusted by four radiation oncologists. The publication of these resources aims
to aid research within the fields of automated radiotherapy treatment planning,
segmentation, inter-observer analyses, and DL model uncertainty investigation.
The dataset is hosted on the AIDA Data Hub and offers a free-to-use resource
for the scientific community, valuable for the advancement of medical imaging
and prostate cancer radiotherapy research.
|
2502.04495
|
Discovering Physics Laws of Dynamical Systems via Invariant Function
Learning
|
cs.LG
|
We consider learning underlying laws of dynamical systems governed by
ordinary differential equations (ODE). A key challenge is how to discover
intrinsic dynamics across multiple environments while circumventing
environment-specific mechanisms. Unlike prior work, we tackle more complex
environments where changes extend beyond function coefficients to entirely
different function forms. For example, we demonstrate the discovery of the
ideal pendulum's natural motion $\alpha^2 \sin{\theta_t}$ by observing pendulum
dynamics in different environments, such as the damped environment $\alpha^2
\sin(\theta_t) - \rho \omega_t$ and powered environment $\alpha^2
\sin(\theta_t) + \rho \frac{\omega_t}{\left|\omega_t\right|}$. Here, we
formulate this problem as an \emph{invariant function learning} task and
propose a new method, known as \textbf{D}isentanglement of \textbf{I}nvariant
\textbf{F}unctions (DIF), that is grounded in causal analysis. We propose a
causal graph and design an encoder-decoder hypernetwork that explicitly
disentangles invariant functions from environment-specific dynamics. The
discovery of invariant functions is guaranteed by our information-based
principle that enforces the independence between extracted invariant functions
and environments. Quantitative comparisons with meta-learning and invariant
learning baselines on three ODE systems demonstrate the effectiveness and
efficiency of our method. Furthermore, symbolic regression explanation results
highlight the ability of our framework to uncover intrinsic laws.
|
2502.04497
|
Distributed Resilient Asymmetric Bipartite Consensus: A Data-Driven
Event-Triggered Mechanism
|
eess.SY cs.SY
|
The problem of asymmetric bipartite consensus control is investigated within
the context of nonlinear, discrete-time, networked multi-agent systems (MAS)
subject to aperiodic denial-of-service (DoS) attacks. To address the challenges
posed by these aperiodic DoS attacks, a data-driven event-triggered (DDET)
mechanism has been developed. This mechanism is specifically designed to
synchronize the states of the follower agents with the leader's state, even in
the face of aperiodic communication disruptions and data losses. Despite
unavailable agent states and data packet loss during these attacks, the
DDET control framework resiliently achieves leader-follower
consensus. The effectiveness of the proposed framework is validated through two
numerical examples, which showcase its ability to adeptly handle the
complexities arising from aperiodic DoS attacks in nonlinear MAS settings.
|
2502.04498
|
Verifiable Format Control for Large Language Model Generations
|
cs.CL
|
Recent Large Language Models (LLMs) have demonstrated satisfying general
instruction-following ability. However, small LLMs with about 7B parameters
still struggle with fine-grained format following (e.g., JSON format), which
seriously hinders the advancement of their applications. Most existing methods
focus on benchmarking general instruction following while overlooking how to
improve the specific format-following ability of small LLMs. Besides, these
methods often rely on evaluations based on advanced LLMs (e.g., GPT-4), which
can introduce the intrinsic biases of LLMs and be costly due to API calls. In
this paper, we first curate a fully verifiable format following dataset VFF. In
contrast to existing works often adopting external LLMs for
instruction-following validations, every sample of VFF can be easily validated
with a Python function. Further, we propose to leverage this verifiable feature
to synthesize massive data for progressively training small LLMs, in order to
improve their format following abilities. Experimental results highlight the
prevalent limitations in the format following capabilities of 7B level
open-source LLMs and demonstrate the effectiveness of our method in enhancing
this essential ability.
|
2502.04499
|
Revisiting Intermediate-Layer Matching in Knowledge Distillation:
Layer-Selection Strategy Doesn't Matter (Much)
|
cs.LG cs.AI cs.CL
|
Knowledge distillation (KD) is a popular method of transferring knowledge
from a large "teacher" model to a small "student" model. KD can be divided into
two categories: prediction matching and intermediate-layer matching. We explore
an intriguing phenomenon: layer-selection strategy does not matter (much) in
intermediate-layer matching. In this paper, we show that seemingly nonsensical
matching strategies such as matching the teacher's layers in reverse still
result in surprisingly good student performance. We provide an interpretation
for this phenomenon by examining the angles between teacher layers viewed from
the student's perspective.
|
2502.04501
|
ULPT: Prompt Tuning with Ultra-Low-Dimensional Optimization
|
cs.CL
|
Large language models achieve state-of-the-art performance but are costly to
fine-tune due to their size. Parameter-efficient fine-tuning methods, such as
prompt tuning, address this by reducing trainable parameters while maintaining
strong performance. However, prior methods tie prompt embeddings to the model's
dimensionality, which may not scale well to larger and more customized LLMs. In
this paper, we propose Ultra-Low-dimensional Prompt Tuning (ULPT), which
optimizes prompts in a low-dimensional space (e.g., 2D) and uses a random but
frozen matrix for the up-projection. To enhance alignment, we introduce
learnable shift and scale embeddings. ULPT drastically reduces the trainable
parameters; e.g., the 2D variant uses only 2% of the parameters of vanilla
prompt tuning while retaining most of the performance across 21 NLP tasks. Our
theoretical analysis shows that random projections can capture high-rank
structures effectively, and experimental results demonstrate ULPT's competitive
performance over existing parameter-efficient methods.
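The core mechanism can be sketched as follows (an illustrative reconstruction from the abstract; dimensions, initialization, and the sharing of shift/scale across tokens are my assumptions): only the low-dimensional prompt z plus the shift and scale embeddings are trainable, while the up-projection matrix is sampled once and frozen.

```python
import random

def frozen_projection(r, d, seed=0):
    """Random r x d up-projection, fixed at initialization and never trained."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0 / r ** 0.5) for _ in range(d)]
            for _ in range(r)]

def ulpt_prompt(z, proj, scale, shift):
    """Map a trainable low-dim prompt z (length r) into the model dimension:
    embedding_j = scale_j * (z . proj[:, j]) + shift_j."""
    r, d = len(proj), len(proj[0])
    assert len(z) == r and len(scale) == d and len(shift) == d
    return [scale[j] * sum(z[i] * proj[i][j] for i in range(r)) + shift[j]
            for j in range(d)]
```

Because `proj` is frozen, the per-token trainable state is just the r-dimensional vector z (e.g., r = 2), with the shift and scale embeddings amortized across all prompt tokens in this sketch.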
|
2502.04506
|
When One LLM Drools, Multi-LLM Collaboration Rules
|
cs.CL
|
This position paper argues that in many realistic (i.e., complex,
contextualized, subjective) scenarios, one LLM is not enough to produce a
reliable output. We challenge the status quo of relying solely on a single
general-purpose LLM and argue for multi-LLM collaboration to better represent
the extensive diversity of data, skills, and people. We first posit that a
single LLM underrepresents real-world data distributions, heterogeneous skills,
and pluralistic populations, and that such representation gaps cannot be
trivially patched by further training a single LLM. We then organize existing
multi-LLM collaboration methods into a hierarchy, based on the level of access
and information exchange, ranging from API-level, text-level, logit-level, to
weight-level collaboration. Based on these methods, we highlight how multi-LLM
collaboration addresses challenges that a single LLM struggles with, such as
reliability, democratization, and pluralism. Finally, we identify the
limitations of existing multi-LLM methods and motivate future work. We envision
multi-LLM collaboration as an essential path toward compositional intelligence
and collaborative AI development.
|
2502.04507
|
Fast Video Generation with Sliding Tile Attention
|
cs.CV
|
Diffusion Transformers (DiTs) with 3D full attention power state-of-the-art
video generation, but suffer from prohibitive compute cost -- when generating
just a 5-second 720P video, attention alone takes 800 out of 945 seconds of
total inference time. This paper introduces sliding tile attention (STA) to
address this challenge. STA leverages the observation that attention scores in
pretrained video diffusion models predominantly concentrate within localized 3D
windows. By sliding and attending over the local spatial-temporal region, STA
eliminates redundancy from full attention. Unlike traditional token-wise
sliding window attention (SWA), STA operates tile-by-tile with a novel
hardware-aware sliding window design, preserving expressiveness while being
hardware-efficient. With careful kernel-level optimizations, STA offers the
first efficient 2D/3D sliding-window-like attention implementation, achieving
58.79% MFU. Specifically, STA accelerates attention by 2.8-17x over
FlashAttention-2 (FA2) and 1.6-10x over FlashAttention-3 (FA3). On the leading
video DiT, HunyuanVideo, STA reduces end-to-end latency from 945s (FA3) to 685s
without quality degradation, requiring no training. Enabling finetuning further
lowers latency to 268s with only a 0.09% drop on VBench.
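The tile-wise locality can be illustrated with a 1D sketch (the paper operates on 3D spatio-temporal tiles; this flattened version is my simplification): tokens are grouped into tiles, and a query attends to a key iff their tiles are within a fixed tile window, so the mask is uniform over each tile pair and whole tiles can be kept or skipped as dense blocks.

```python
def sliding_tile_mask(n_tokens, tile_size, window):
    """mask[i][j] is True iff token j is visible to token i, i.e. their tiles
    are at most `window` tiles apart. Unlike token-wise sliding windows, the
    mask is constant within any tile pair, which is what makes the pattern
    amenable to block-dense hardware kernels."""
    def tile(t):
        return t // tile_size
    return [[abs(tile(i) - tile(j)) <= window for j in range(n_tokens)]
            for i in range(n_tokens)]
```

In the actual kernel this block structure means attention over out-of-window tiles is never computed at all, rather than computed and masked out.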
|
2502.04510
|
Heterogeneous Swarms: Jointly Optimizing Model Roles and Weights for
Multi-LLM Systems
|
cs.CL
|
We propose Heterogeneous Swarms, an algorithm to design multi-LLM systems by
jointly optimizing model roles and weights. We represent multi-LLM systems as
directed acyclic graphs (DAGs) of LLMs with topological message passing for
collaborative generation. Given a pool of LLM experts and a utility function,
Heterogeneous Swarms employs two iterative steps: role-step and weight-step.
For role-step, we interpret model roles as learning a DAG that specifies the
flow of inputs and outputs between LLMs. Starting from a swarm of random
continuous adjacency matrices, we decode them into discrete DAGs, call the LLMs
in topological order, evaluate on the utility function (e.g. accuracy on a
task), and optimize the adjacency matrices with particle swarm optimization
based on the utility score. For weight-step, we assess the contribution of
individual LLMs in the multi-LLM systems and optimize model weights with swarm
intelligence. We propose JFK-score to quantify the individual contribution of
each LLM in the best-found DAG of the role-step, then optimize model weights
with particle swarm optimization based on the JFK-score. Experiments
demonstrate that Heterogeneous Swarms outperforms 15 role- and/or weight-based
baselines by 18.5% on average across 12 tasks. Further analysis reveals that
Heterogeneous Swarms discovers multi-LLM systems with heterogeneous model roles
and substantial collaborative gains, and benefits from the diversity of
language models.
|
2502.04511
|
Beyond Sample-Level Feedback: Using Reference-Level Feedback to Guide
Data Synthesis
|
cs.CL
|
LLMs demonstrate remarkable capabilities in following natural language
instructions, largely due to instruction-tuning on high-quality datasets. While
synthetic data generation has emerged as a scalable approach for creating such
datasets, maintaining consistent quality standards remains challenging. Recent
approaches incorporate feedback to improve data quality, but typically operate
at the sample level, generating and applying feedback for each response
individually. In this work, we propose Reference-Level Feedback, a novel
methodology that instead collects feedback based on high-quality reference
samples from carefully curated seed data. We use this feedback to capture rich
signals of desirable characteristics and propagate it throughout the data
synthesis process. We present REFED, a dataset of 10K instruction-response
pairs synthesized using such feedback. We demonstrate the effectiveness of our
approach by showing that Llama-3.1-8B-Instruct finetuned on REFED achieves
state-of-the-art performance among similar-sized SFT-based models on AlpacaEval
2.0 and strong results on Arena-Hard. Through extensive experiments, we show
that our approach consistently outperforms traditional sample-level feedback
methods with significantly fewer feedback collections and improves performance
across different model architectures.
|
2502.04512
|
Safety is Essential for Responsible Open-Ended Systems
|
cs.AI
|
AI advancements have been significantly driven by a combination of foundation
models and curiosity-driven learning aimed at increasing capability and
adaptability. A growing area of interest within this field is Open-Endedness -
the ability of AI systems to continuously and autonomously generate novel and
diverse artifacts or solutions. This has become relevant for accelerating
scientific discovery and enabling continual adaptation in AI agents. This
position paper argues that the inherently dynamic and self-propagating nature
of Open-Ended AI introduces significant, underexplored risks, including
challenges in maintaining alignment, predictability, and control. This paper
systematically examines these challenges, proposes mitigation strategies, and
calls on different stakeholders to support the safe, responsible,
and successful development of Open-Ended AI.
|
2502.04515
|
MedGNN: Towards Multi-resolution Spatiotemporal Graph Learning for
Medical Time Series Classification
|
cs.LG cs.AI
|
Medical time series play a vital role in real-world healthcare
systems, providing valuable information for monitoring patients' health
conditions. Accurate classification of medical time series, e.g.,
Electrocardiography (ECG) signals, can support early detection and diagnosis.
Traditional approaches to medical time series classification rely on
handcrafted feature extraction and statistical methods; with recent advances
in artificial intelligence, machine learning and deep learning methods have
become more popular. However, existing methods often fail to fully model the
complex spatial dynamics at different scales, ignoring the dynamic
multi-resolution joint spatial-temporal inter-dependencies. Moreover, they
rarely account for the baseline wander problem or the multi-view
characteristics of medical time series, which largely hinders their
prediction performance. To address these limitations, we propose a
Multi-resolution Spatiotemporal Graph Learning framework, MedGNN, for medical
time series classification. Specifically, we first propose to construct
multi-resolution adaptive graph structures to learn dynamic multi-scale
embeddings. Then, to address the baseline wander problem, we propose Difference
Attention Networks to operate self-attention mechanisms on the finite
difference for temporal modeling. Moreover, to learn the multi-view
characteristics, we utilize the Frequency Convolution Networks to capture
complementary information of medical time series from the frequency domain. In
addition, we introduce the Multi-resolution Graph Transformer architecture to
model the dynamic dependencies and fuse the information from different
resolutions. Finally, we have conducted extensive experiments on multiple
medical real-world datasets that demonstrate the superior performance of our
method. Our code is available.
|
2502.04517
|
Towards Cost-Effective Reward Guided Text Generation
|
cs.LG cs.CL
|
Reward-guided text generation (RGTG) has emerged as a viable alternative to
offline reinforcement learning from human feedback (RLHF). RGTG methods can
align baseline language models with human preferences without the additional
training required by standard RLHF methods. However, they rely on a reward model to score
each candidate token generated by the language model at inference, incurring
significant test-time overhead. Additionally, the reward model is usually only
trained to score full sequences, which can lead to sub-optimal choices for
partial sequences. In this work, we present a novel reward model architecture
that is trained, using a Bradley-Terry loss, to prefer the optimal expansion of
a sequence with just a \emph{single call} to the reward model at each step of
the generation process. That is, a score for all possible candidate tokens is
generated simultaneously, leading to efficient inference. We theoretically
analyze various RGTG reward models and demonstrate that prior techniques prefer
sub-optimal sequences compared to our method during inference. Empirically, our
reward model leads to significantly faster inference than other RGTG methods.
It requires fewer calls to the reward model and performs competitively compared
to previous RGTG and offline RLHF methods.
|
2502.04519
|
GenVC: Self-Supervised Zero-Shot Voice Conversion
|
eess.AS cs.LG
|
Zero-shot voice conversion has recently made substantial progress, but many
models still depend on external supervised systems to disentangle speaker
identity and linguistic content. Furthermore, current methods often use
parallel conversion, where the converted speech inherits the source utterance's
temporal structure, restricting speaker similarity and privacy. To overcome
these limitations, we introduce GenVC, a generative zero-shot voice conversion
model. GenVC learns to disentangle linguistic content and speaker style in a
self-supervised manner, eliminating the need for external models and enabling
efficient training on large, unlabeled datasets. Experimental results show that
GenVC achieves state-of-the-art speaker similarity while maintaining
naturalness competitive with leading approaches. Its autoregressive generation
also allows the converted speech to deviate from the source utterance's
temporal structure. This feature makes GenVC highly effective for voice
anonymization, as it minimizes the preservation of source prosody and speaker
characteristics, enhancing privacy protection.
|
2502.04520
|
Linear Correlation in LM's Compositional Generalization and
Hallucination
|
cs.CL
|
The generalization of language models (LMs) is the subject of active debate,
contrasting their potential for general intelligence with their struggles with
basic knowledge composition (e.g., reverse/transition curse). This paper
uncovers the phenomenon of linear correlations in LMs during knowledge
composition. Specifically, there exists a linear transformation between
certain related knowledge that maps the next token prediction logits from one
prompt to another, e.g., "X lives in the city of" $\rightarrow$ "X lives in the
country of" for every given X. This mirrors the linearity in human knowledge
composition, such as Paris $\rightarrow$ France. Our findings indicate that the
linear transformation is resilient to large-scale fine-tuning, generalizing
updated knowledge when aligned with real-world relationships, but causing
hallucinations when it deviates. Empirical results suggest that linear
correlation can serve as a potential identifier of LM's generalization.
Finally, we show such linear correlations can be learned with a single
feedforward network and pre-trained vocabulary representations, indicating LM
generalization heavily relies on the latter.
|
2502.04521
|
Generative Autoregressive Transformers for Model-Agnostic Federated MRI
Reconstruction
|
eess.IV cs.CV
|
Although learning-based models hold great promise for MRI reconstruction,
single-site models built on limited local datasets often suffer from poor
generalization. This challenge has spurred interest in collaborative model
training on multi-site datasets via federated learning (FL) -- a
privacy-preserving framework that aggregates model updates instead of sharing
imaging data. Conventional FL builds a global model by aggregating locally
trained model weights, inherently constraining all sites to a homogeneous model
architecture. This rigid homogeneity requirement forces sites to forgo
architectures tailored to their compute infrastructure and application-specific
demands. Consequently, existing FL methods for MRI reconstruction fail to
support model-heterogeneous settings, where individual sites are allowed to use
distinct architectures. To overcome this fundamental limitation, here we
introduce FedGAT, a novel model-agnostic FL technique based on generative
autoregressive transformers. FedGAT decentralizes the training of a global
generative prior that captures the distribution of multi-site MR images. For
enhanced fidelity, we propose a novel site-prompted GAT prior that controllably
synthesizes MR images from desired sites via autoregressive prediction across
spatial scales. Each site then trains its site-specific reconstruction model --
using its preferred architecture -- on a hybrid dataset comprising the local
MRI dataset and GAT-generated synthetic MRI datasets for other sites.
Comprehensive experiments on multi-institutional datasets demonstrate that
FedGAT supports flexible collaborations while enjoying superior within-site and
across-site reconstruction performance compared to state-of-the-art FL
baselines.
|
2502.04522
|
ImprovNet: Generating Controllable Musical Improvisations with Iterative
Corruption Refinement
|
cs.SD cs.AI eess.AS
|
Deep learning has enabled remarkable advances in style transfer across
various domains, offering new possibilities for creative content generation.
However, in the realm of symbolic music, generating controllable and expressive
performance-level style transfers for complete musical works remains
challenging due to limited datasets, especially for genres such as jazz, and
the lack of unified models that can handle multiple music generation tasks.
This paper presents ImprovNet, a transformer-based architecture that generates
expressive and controllable musical improvisations through a self-supervised
corruption-refinement training strategy. ImprovNet unifies multiple
capabilities within a single model: it can perform cross-genre and intra-genre
improvisations, harmonize melodies with genre-specific styles, and execute
short prompt continuation and infilling tasks. The model's iterative generation
framework allows users to control the degree of style transfer and structural
similarity to the original composition. Objective and subjective evaluations
demonstrate ImprovNet's effectiveness in generating musically coherent
improvisations while maintaining structural relationships with the original
pieces. The model outperforms Anticipatory Music Transformer in short
continuation and infilling tasks and successfully achieves recognizable genre
conversion, with 79\% of participants correctly identifying jazz-style
improvisations. Our code and demo page can be found at
https://github.com/keshavbhandari/improvnet.
|
2502.04528
|
Group-Adaptive Threshold Optimization for Robust AI-Generated Text
Detection
|
cs.CL cs.LG
|
The advancement of large language models (LLMs) has made it difficult to
differentiate human-written text from AI-generated text. Several AI-text
detectors have been developed in response, which typically utilize a fixed
global threshold (e.g., {\theta} = 0.5) to classify machine-generated text.
However, we find that a single universal threshold can fail to account for
subgroup-specific distributional variations. For example, with a fixed
threshold, detectors make more false-positive errors on shorter human-written
texts than on longer ones and, among long texts, more positive classifications
on neurotic writing styles than on open ones. These discrepancies can lead to misclassification
that disproportionately affects certain groups. We address this critical
limitation by introducing FairOPT, an algorithm for group-specific threshold
optimization in AI-generated content classifiers. Our approach partitions data
into subgroups based on attributes (e.g., text length and writing style) and
learns decision thresholds for each group, which enables careful balancing of
performance and fairness metrics within each subgroup. In experiments with four
AI text classifiers on three datasets, FairOPT enhances overall F1 score and
decreases balanced error rate (BER) discrepancy across subgroups. Our framework
paves the way for more robust and fair classification criteria in AI-generated
output detection.
|
2502.04529
|
Agricultural Field Boundary Detection through Integration of "Simple
Non-Iterative Clustering (SNIC) Super Pixels" and "Canny Edge Detection
Method"
|
cs.LG cs.CV
|
Efficient use of cultivated areas is essential for the sustainable
development of agriculture and for ensuring food security. Alongside the rapid
development of satellite technologies in developed countries, new methods are
being sought for the accurate and timely identification of cultivated areas.
In this context, identification of cropland boundaries based on spectral
analysis of data obtained from satellite images is considered one of the most
optimal and accurate methods in modern agriculture. This article proposes a new
approach to determine the suitability and green index of cultivated areas using
satellite data obtained through the "Google Earth Engine" (GEE) platform. In
this approach, two powerful algorithms, "SNIC (Simple Non-Iterative Clustering)
Super Pixels" and "Canny Edge Detection Method", are combined. The SNIC
algorithm combines pixels in a satellite image into larger regions (super
pixels) with similar characteristics, thereby providing better image analysis.
The Canny Edge Detection Method detects sharp changes (edges) in the image to
determine the precise boundaries of agricultural fields. This study, carried
out using high-resolution multispectral data from the Sentinel-2 satellite and
the Google Earth Engine JavaScript API, has shown that the proposed method is
effective in accurately and reliably classifying randomly selected agricultural
fields. The combined use of these two tools allows for more accurate
determination of the boundaries of agricultural fields by minimizing the
effects of outliers in satellite images. As a result, more accurate and
reliable maps can be created for agricultural monitoring and resource
management over large areas based on the obtained data, thereby expanding the
application of cloud-based platforms and artificial intelligence methods in
the agricultural domain.
|