| id | title | categories | abstract |
|---|---|---|---|
2502.01534
|
Preference Leakage: A Contamination Problem in LLM-as-a-judge
|
cs.LG cs.AI cs.CL
|
Large Language Models (LLMs) as judges and LLM-based data synthesis have
emerged as two fundamental LLM-driven data annotation methods in model
development. While their combination significantly enhances the efficiency of
model training and evaluation, little attention has been given to the potential
contamination brought by this new model development paradigm. In this work, we
expose preference leakage, a contamination problem in LLM-as-a-judge caused by
the relatedness between the synthetic data generators and LLM-based evaluators.
To study this issue, we first define three common types of relatedness between
the data generator LLM and the judge LLM: being the same model, having an inheritance
relationship, and belonging to the same model family. Through extensive
experiments, we empirically confirm the bias of judges towards their related
student models caused by preference leakage across multiple LLM baselines and
benchmarks. Further analysis suggests that preference leakage is a pervasive
issue that is harder to detect compared to previously identified biases in
LLM-as-a-judge scenarios. All of these findings imply that preference leakage
is a widespread and challenging problem in the area of LLM-as-a-judge. We
release all codes and data at:
https://github.com/David-Li0406/Preference-Leakage.
|
2502.01535
|
VisTA: Vision-Text Alignment Model with Contrastive Learning using
Multimodal Data for Evidence-Driven, Reliable, and Explainable Alzheimer's
Disease Diagnosis
|
cs.CV cs.CL q-bio.QM
|
Objective: Assessing Alzheimer's disease (AD) using high-dimensional
radiology images is clinically important but challenging. Although Artificial
Intelligence (AI) has advanced AD diagnosis, it remains unclear how to design
AI models embracing predictability and explainability. Here, we propose VisTA,
a multimodal language-vision model assisted by contrastive learning, to
optimize disease prediction and evidence-based, interpretable explanations for
clinical decision-making.
Methods: We developed VisTA (Vision-Text Alignment Model) for AD diagnosis.
Architecturally, we built VisTA from BiomedCLIP and fine-tuned it using
contrastive learning to align images with verified abnormalities and their
descriptions. To train VisTA, we used a constructed reference dataset
containing images, abnormality types, and descriptions verified by medical
experts. VisTA produces four outputs: predicted abnormality type, similarity to
reference cases, evidence-driven explanation, and final AD diagnoses. To
illustrate VisTA's efficacy, we reported accuracy metrics for abnormality
retrieval and dementia prediction. To demonstrate VisTA's explainability, we
compared its explanations with human experts' explanations.
Results: Compared to 15 million images used for baseline pretraining, VisTA
only used 170 samples for fine-tuning and obtained significant improvement in
abnormality retrieval and dementia prediction. For abnormality retrieval, VisTA
reached 74% accuracy and an AUC of 0.87 (26% and 0.74, respectively, from
baseline models). For dementia prediction, VisTA achieved 88% accuracy and an
AUC of 0.82 (30% and 0.57, respectively, from baseline models). The generated
explanations agreed strongly with human experts' and provided insights into the
diagnostic process. Taken together, VisTA optimizes prediction, clinical
reasoning, and explanation.
|
2502.01536
|
VR-Robo: A Real-to-Sim-to-Real Framework for Visual Robot Navigation and
Locomotion
|
cs.RO cs.CV
|
Recent success in legged robot locomotion is attributed to the integration of
reinforcement learning and physical simulators. However, these policies often
encounter challenges when deployed in real-world environments due to
sim-to-real gaps, as simulators typically fail to replicate visual realism and
complex real-world geometry. Moreover, the lack of realistic visual rendering
limits the ability of these policies to support high-level tasks requiring
RGB-based perception like ego-centric navigation. This paper presents a
Real-to-Sim-to-Real framework that generates photorealistic and physically
interactive "digital twin" simulation environments for visual navigation and
locomotion learning. Our approach leverages 3D Gaussian Splatting (3DGS) based
scene reconstruction from multi-view images and integrates these environments
into simulations that support ego-centric visual perception and mesh-based
physical interactions. To demonstrate its effectiveness, we train a
reinforcement learning policy within the simulator to perform a visual
goal-tracking task. Extensive experiments show that our framework achieves
RGB-only sim-to-real policy transfer. Additionally, our framework facilitates
the rapid adaptation of robot policies with effective exploration capability in
complex new environments, highlighting its potential for applications in
households and factories.
|
2502.01538
|
FedGES: A Federated Learning Approach for BN Structure Learning
|
cs.LG
|
Bayesian Network (BN) structure learning traditionally centralizes data,
raising privacy concerns when data is distributed across multiple entities.
This research introduces Federated GES (FedGES), a novel Federated Learning
approach tailored for BN structure learning in decentralized settings using the
Greedy Equivalence Search (GES) algorithm. FedGES uniquely addresses privacy
and security challenges by exchanging only evolving network structures, not
parameters or data. It realizes collaborative model development, using
structural fusion to combine the limited models generated by each client in
successive iterations. A controlled structural fusion is also proposed to
enhance client consensus when adding any edge. Experimental results on various
BNs from the bnlearn BN Repository validate the effectiveness of FedGES,
particularly in high-dimensional (a large number of variables) and sparse data
scenarios, offering a practical and privacy-preserving solution for real-world
BN structure learning.
|
2502.01540
|
What is a Number, That a Large Language Model May Know It?
|
cs.CL cs.AI
|
Numbers are a basic part of how humans represent and describe the world
around them. As a consequence, learning effective representations of numbers is
critical for the success of large language models as they become more
integrated into everyday decisions. However, these models face a challenge:
depending on context, the same sequence of digit tokens, e.g., 911, can be
treated as a number or as a string. What kind of representations arise from
this duality, and what are its downstream implications? Using a
similarity-based prompting technique from cognitive science, we show that LLMs
learn representational spaces that blend string-like and numerical
representations. In particular, we show that elicited similarity judgments from
these models over integer pairs can be captured by a combination of Levenshtein
edit distance and numerical Log-Linear distance, suggesting an entangled
representation. In a series of experiments we show how this entanglement is
reflected in the latent embeddings, how it can be reduced but not entirely
eliminated by context, and how it can propagate into a realistic decision
scenario. These results shed light on a representational tension in transformer
models that must learn what a number is from text input.
|
2502.01543
|
Unsupervised anomaly detection in large-scale estuarine acoustic
telemetry data
|
cs.LG
|
Acoustic telemetry data plays a vital role in understanding the behaviour and
movement of aquatic animals. However, these datasets, which often consist of
millions of individual data points, frequently contain anomalous movements that
pose significant challenges. Traditionally, anomalous movements are identified
either manually or through basic statistical methods, approaches that are
time-consuming and prone to high rates of unidentified anomalies in large
datasets. This study focuses on the development of automated classifiers for a
large telemetry dataset comprising detections from fifty acoustically tagged
dusky kob monitored in the Breede Estuary, South Africa. Using an array of 16
acoustic receivers deployed throughout the estuary between 2016 and 2021, we
collected over three million individual data points. We present detailed
guidelines for data pre-processing, resampling strategies, labelling process,
feature engineering, data splitting methodologies, and the selection and
interpretation of machine learning and deep learning models for anomaly
detection. Among the evaluated models, the neural network autoencoder (NN-AE)
demonstrated superior performance, aided by our proposed threshold-finding
algorithm. NN-AE achieved a high recall with no false normal (i.e., no
misclassifications of anomalous movements as normal patterns), a critical
factor in ensuring that no true anomalies are overlooked. In contrast, other
models exhibited false normal fractions exceeding 0.9, indicating they failed
to detect the majority of true anomalies; a significant limitation for
telemetry studies where undetected anomalies can distort interpretations of
movement patterns. While the NN-AE's performance highlights its reliability and
robustness in detecting anomalies, it faced challenges in accurately learning
normal movement patterns when these patterns gradually deviated from anomalous
ones.
|
2502.01545
|
Towards Data-Driven Multi-Stage OPF
|
eess.SY cs.SY
|
The operation of large-scale power systems is usually scheduled ahead via
numerical optimization. However, this requires models of grid topology, line
parameters, and bus specifications. Classic approaches first identify the
network topology, i.e., the graph of interconnections and the associated
impedances. The power generation schedules are then computed by solving a
multi-stage optimal power flow (OPF) problem built around the model. In this
paper, we explore the prospect of data-driven approaches to multi-stage optimal
power flow. Specifically, we leverage recent findings from systems and control
to bypass the identification step and to construct the optimization problem
directly from data. We illustrate the performance of our method on a 118-bus
system and compare it with the classical identification-based approach.
|
2502.01546
|
Dynamic object goal pushing with mobile manipulators through model-free
constrained reinforcement learning
|
cs.RO cs.LG cs.SY eess.SY
|
Non-prehensile pushing to move and reorient objects to a goal is a versatile
loco-manipulation skill. In the real world, the object's physical properties
and friction with the floor contain significant uncertainties, which makes the
task challenging for a mobile manipulator. In this paper, we develop a
learning-based controller for a mobile manipulator to move an unknown object to
a desired position and yaw orientation through a sequence of pushing actions.
The proposed controller for the robotic arm and the mobile base motion is
trained using a constrained Reinforcement Learning (RL) formulation. We
demonstrate its capability in experiments with a quadrupedal robot equipped
with an arm. The learned policy achieves a success rate of 91.35% in simulation
and at least 80% on hardware in challenging scenarios. Through our extensive
hardware experiments, we show that the approach demonstrates high robustness
against unknown objects of different masses, materials, sizes, and shapes. It
reactively discovers the pushing location and direction, thus achieving
contact-rich behavior while observing only the pose of the object.
Additionally, we demonstrate the adaptive behavior of the learned policy
towards preventing the object from toppling.
|
2502.01547
|
mWhisper-Flamingo for Multilingual Audio-Visual Noise-Robust Speech
Recognition
|
eess.AS cs.CV cs.SD
|
Audio-Visual Speech Recognition (AVSR) combines lip-based video with audio
and can improve performance in noise, but most methods are trained only on
English data. One limitation is the lack of large-scale multilingual video
data, which makes it hard to train models from scratch. In this work, we
propose mWhisper-Flamingo for multilingual AVSR, which combines the strengths of
a pre-trained audio model (Whisper) and video model (AV-HuBERT). To enable
better multi-modal integration and improve the noisy multilingual performance,
we introduce decoder modality dropout where the model is trained both on paired
audio-visual inputs and separate audio/visual inputs. mWhisper-Flamingo
achieves state-of-the-art WER on MuAViC, an AVSR dataset of 9 languages.
Audio-visual mWhisper-Flamingo consistently outperforms audio-only Whisper on
all languages in noisy conditions.
|
2502.01549
|
VideoRAG: Retrieval-Augmented Generation with Extreme Long-Context
Videos
|
cs.IR cs.AI cs.CV
|
Retrieval-Augmented Generation (RAG) has demonstrated remarkable success in
enhancing Large Language Models (LLMs) through external knowledge integration,
yet its application has primarily focused on textual content, leaving the rich
domain of multi-modal video knowledge predominantly unexplored. This paper
introduces VideoRAG, the first retrieval-augmented generation framework
specifically designed for processing and understanding extremely long-context
videos. Our core innovation lies in its dual-channel architecture that
seamlessly integrates (i) graph-based textual knowledge grounding for capturing
cross-video semantic relationships, and (ii) multi-modal context encoding for
efficiently preserving visual features. This novel design empowers VideoRAG to
process unlimited-length videos by constructing precise knowledge graphs that
span multiple videos while maintaining semantic dependencies through
specialized multi-modal retrieval paradigms. Through comprehensive empirical
evaluation on our proposed LongerVideos benchmark (comprising over 160 videos
totaling 134+ hours across lecture, documentary, and entertainment
categories), VideoRAG demonstrates substantial performance gains compared to existing
RAG alternatives and long video understanding methods. The source code of
VideoRAG implementation and the benchmark dataset are openly available at:
https://github.com/HKUDS/VideoRAG.
|
2502.01550
|
FireCastNet: Earth-as-a-Graph for Seasonal Fire Prediction
|
cs.CV cs.AI
|
With climate change expected to exacerbate fire weather conditions, the
accurate and timely anticipation of wildfires becomes increasingly crucial for
disaster mitigation. In this study, we utilize SeasFire, a comprehensive global
wildfire dataset with climate, vegetation, oceanic indices, and human-related
variables, to enable seasonal wildfire forecasting with machine learning. For
the predictive analysis, we present FireCastNet, a novel architecture which
combines a 3D convolutional encoder with GraphCast, originally developed for
global short-term weather forecasting using graph neural networks. FireCastNet
is trained to capture the context leading to wildfires, at different spatial
and temporal scales. Our investigation focuses on assessing the effectiveness
of our model in predicting the presence of burned areas at varying forecasting
time horizons globally, extending up to six months into the future, and on how
different spatial and/or temporal contexts affect the performance. Our findings
demonstrate the potential of deep learning models in seasonal fire forecasting;
longer input time-series leads to more robust predictions, while integrating
spatial information to capture wildfire spatio-temporal dynamics boosts
performance. Finally, our results suggest that enhancing performance at longer
forecasting horizons requires considering a larger spatial receptive field.
|
2502.01553
|
Virtual Stars, Real Fans: Understanding the VTuber Ecosystem
|
cs.SI cs.CY cs.HC
|
Livestreaming by VTubers -- animated 2D/3D avatars controlled by real
individuals -- has recently garnered substantial global followings and
achieved significant monetary success. Despite prior research highlighting the
importance of realism in audience engagement, VTubers deliberately conceal
their identities, cultivating dedicated fan communities through virtual
personas. While previous studies underscore that building a core fan community
is essential to a streamer's success, we lack an understanding of the
characteristics of viewers of this new type of streamer. Gaining a deeper
insight into these viewers is critical for VTubers to enhance audience
engagement, foster a more robust fan base, and attract a larger viewership. To
address this gap, we conduct a comprehensive analysis of VTuber viewers on
Bilibili, a leading livestreaming platform where nearly all VTubers in China
stream. By compiling a first-of-its-kind dataset covering 2.7M livestreaming
sessions, we investigate the characteristics, engagement patterns, and
influence of VTuber viewers. Our research yields several valuable insights,
which we then leverage to develop a tool to "recommend" future subscribers to
VTubers. By reversing the typical approach of recommending streams to viewers,
this tool assists VTubers in pinpointing potential future fans to pay more
attention to, and thereby effectively growing their fan community.
|
2502.01555
|
Query Brand Entity Linking in E-Commerce Search
|
cs.IR cs.AI cs.LG
|
In this work, we address the brand entity linking problem for e-commerce
search queries. The entity linking task is done by either i) a two-stage process
consisting of entity mention detection followed by entity disambiguation or ii)
an end-to-end linking approach that directly fetches the target entity given
the input text. The task presents unique challenges: queries are extremely
short (averaging 2.4 words), lack natural language structure, and must handle a
massive space of unique brands. We present a two-stage approach combining
named-entity recognition with matching, and a novel end-to-end solution using
extreme multi-class classification. We validate our solutions with both offline
benchmarks and online A/B tests.
|
2502.01556
|
Observation Noise and Initialization in Wide Neural Networks
|
cs.LG stat.ML
|
Performing gradient descent in a wide neural network is equivalent to
computing the posterior mean of a Gaussian Process with the Neural Tangent
Kernel (NTK-GP), for a specific choice of prior mean and with zero observation
noise. However, existing formulations of this result have two limitations: i)
the resultant NTK-GP assumes no noise in the observed target variables, which
can result in suboptimal predictions with noisy data; ii) it is unclear how to
extend the equivalence to an arbitrary prior mean, a crucial aspect of
formulating a well-specified model. To address the first limitation, we
introduce a regularizer into the neural network's training objective, formally
showing its correspondence to incorporating observation noise into the NTK-GP
model. To address the second, we introduce a *shifted network* that
enables arbitrary prior mean functions. This approach allows us to perform
gradient descent on a single neural network, without expensive ensembling or
kernel matrix inversion. Our theoretical insights are validated empirically,
with experiments exploring different values of observation noise and network
architectures.
|
2502.01557
|
Training in reverse: How iteration order influences convergence and
stability in deep learning
|
cs.LG math.DS stat.ML
|
Despite exceptional achievements, training neural networks remains
computationally expensive and is often plagued by instabilities that can
degrade convergence. While learning rate schedules can help mitigate these
issues, finding optimal schedules is time-consuming and resource-intensive.
This work explores theoretical issues concerning training stability in the
constant-learning-rate (i.e., without schedule) and small-batch-size regime.
Surprisingly, we show that the order of gradient updates affects stability and
convergence in gradient-based optimizers. We illustrate this new line of
thinking using backward-SGD, which processes batch gradient updates like SGD
but in reverse order. Our theoretical analysis shows that in contractive
regions (e.g., around minima) backward-SGD converges to a point while the
standard forward-SGD generally only converges to a distribution. This leads to
improved stability and convergence which we demonstrate experimentally. While
full backward-SGD is computationally intensive in practice, it highlights
opportunities to exploit reverse training dynamics (or more generally alternate
iteration orders) to improve training. To our knowledge, this represents a new
and unexplored avenue in deep learning optimization.
|
2502.01558
|
Search-Based Adversarial Estimates for Improving Sample Efficiency in
Off-Policy Reinforcement Learning
|
cs.LG cs.AI
|
Sample inefficiency is a long-lasting challenge in deep reinforcement
learning (DRL). Although dramatic improvements have been made, the problem is
far from solved and is especially challenging in environments with sparse
or delayed rewards. In our work, we propose to use Adversarial Estimates as a
new, simple and efficient approach to mitigate this problem for a class of
feedback-based DRL algorithms. Our approach leverages latent similarity search
from a small set of human-collected trajectories to boost learning, using only
five minutes of human-recorded experience. The results of our study show that
algorithms trained with Adversarial Estimates converge faster than their
original version. Moreover, we discuss how our approach could enable learning
in feedback-based algorithms in extreme scenarios with very sparse rewards.
|
2502.01562
|
Memento No More: Coaching AI Agents to Master Multiple Tasks via Hints
Internalization
|
cs.LG
|
As the general capabilities of artificial intelligence (AI) agents continue
to evolve, their ability to learn to master multiple complex tasks through
experience remains a key challenge. Current LLM agents, particularly those
based on proprietary language models, typically rely on prompts to incorporate
knowledge about the target tasks. This approach does not allow the agent to
internalize this information and instead relies on ever-expanding prompts to
sustain its functionality in diverse scenarios. This resembles a system of
notes used by a person affected by anterograde amnesia, the inability to form
new memories. In this paper, we propose a novel method to train AI agents to
incorporate knowledge and skills for multiple tasks without the need for either
cumbersome note systems or prior high-quality demonstration data. Our approach
employs an iterative process where the agent collects new experiences, receives
corrective feedback from humans in the form of hints, and integrates this
feedback into its weights via a context distillation training procedure. We
demonstrate the efficacy of our approach by implementing it in a Llama-3-based
agent which, after only a few rounds of feedback, outperforms advanced models
GPT-4o and DeepSeek-V3 in a taskset requiring correct sequencing of information
retrieval, tool use, and question answering.
|
2502.01563
|
Massive Values in Self-Attention Modules are the Key to Contextual
Knowledge Understanding
|
cs.CL
|
Large language models (LLMs) have achieved remarkable success in contextual
knowledge understanding. In this paper, we show that concentrated massive
values consistently emerge in specific regions of attention queries (Q) and
keys (K) while not having such patterns in values (V) in various modern
transformer-based LLMs (Q, K, and V mean the representations output by the
query, key, and value layers respectively). Through extensive experiments, we
further demonstrate that these massive values play a critical role in
interpreting contextual knowledge (knowledge obtained from the current context
window) rather than in retrieving parametric knowledge stored within the
model's parameters. Our further investigation of quantization strategies
reveals that ignoring these massive values leads to a pronounced drop in
performance on tasks requiring rich contextual understanding, aligning with our
analysis. Finally, we trace the emergence of concentrated massive values and
find that such concentration is caused by Rotary Positional Encoding (RoPE) and
appears as early as the first layers. These findings shed new light on how
Q and K operate in LLMs and offer practical insights for model design and
optimization. The code is available at
https://github.com/MingyuJ666/Rope_with_LLM.
|
2502.01564
|
MeetMap: Real-Time Collaborative Dialogue Mapping with LLMs in Online
Meetings
|
cs.HC cs.AI
|
Video meeting platforms display conversations linearly through transcripts or
summaries. However, ideas during a meeting do not emerge linearly. We leverage
LLMs to create dialogue maps in real time to help people visually structure and
connect ideas. Balancing the need to reduce the cognitive load on users during
the conversation while giving them sufficient control when using AI, we explore
two system variants that encompass different levels of AI assistance. In
Human-Map, AI generates summaries of conversations as nodes, and users create
dialogue maps with the nodes. In AI-Map, AI produces dialogue maps where users
can make edits. We ran a within-subject experiment with ten pairs of users,
comparing the two MeetMap variants and a baseline. Users preferred MeetMap over
traditional methods for taking notes, which aligned better with their mental
models of conversations. Users liked the ease of use for AI-Map due to the low
effort demands and appreciated the hands-on opportunity in Human-Map for
sense-making.
|
2502.01565
|
GauCho: Gaussian Distributions with Cholesky Decomposition for Oriented
Object Detection
|
cs.CV
|
Oriented Object Detection (OOD) has received increased attention in the past
years, being a suitable solution for detecting elongated objects in remote
sensing analysis. In particular, using regression loss functions based on
Gaussian distributions has become attractive since they yield simple and
differentiable terms. However, existing solutions are still based on regression
heads that produce Oriented Bounding Boxes (OBBs), and the known problem of
angular boundary discontinuity persists. In this work, we propose a regression
head for OOD that directly produces Gaussian distributions based on the
Cholesky matrix decomposition. The proposed head, named GauCho, theoretically
mitigates the boundary discontinuity problem and is fully compatible with
recent Gaussian-based regression loss functions. Furthermore, we advocate using
Oriented Ellipses (OEs) to represent oriented objects, which relates to GauCho
through a bijective function and alleviates the encoding ambiguity problem for
circular objects. Our experimental results show that GauCho can be a viable
alternative to the traditional OBB head, achieving results comparable to or
better than state-of-the-art detectors on the challenging DOTA dataset.
|
2502.01567
|
Scalable Language Models with Posterior Inference of Latent Thought
Vectors
|
cs.CL cs.LG stat.ML
|
We propose a novel family of language models, Latent-Thought Language Models
(LTMs), which incorporate explicit latent thought vectors that follow an
explicit prior model in latent space. These latent thought vectors guide the
autoregressive generation of ground tokens through a Transformer decoder.
Training employs a dual-rate optimization process within the classical
variational Bayes framework: fast learning of local variational parameters for
the posterior distribution of latent vectors, and slow learning of global
decoder parameters. Empirical studies reveal that LTMs possess additional
scaling dimensions beyond traditional LLMs, yielding a structured design space.
Higher sample efficiency can be achieved by increasing training compute per
token, with further gains possible by trading model size for more inference
steps. Designed based on these scaling properties, LTMs demonstrate superior
sample and parameter efficiency compared to conventional autoregressive models
and discrete diffusion models. They significantly outperform these counterparts
in validation perplexity and zero-shot language modeling. Additionally, LTMs
exhibit emergent few-shot in-context reasoning capabilities that scale with
model and latent size, and achieve competitive performance in conditional and
unconditional text generation.
|
2502.01568
|
Visual Theory of Mind Enables the Invention of Writing Systems
|
cs.CL cs.AI
|
Abstract symbolic writing systems are semiotic codes that are ubiquitous in
modern society but are otherwise absent in the animal kingdom. Anthropological
evidence suggests that the earliest forms of some writing systems originally
consisted of iconic pictographs, which signify their referent via visual
resemblance. While previous studies have examined the emergence and,
separately, the evolution of pictographic writing systems through a
computational lens, most employ non-naturalistic methodologies that make it
difficult to draw clear analogies to human and animal cognition. We develop a
multi-agent reinforcement learning testbed for emergent communication called a
Signification Game, and formulate a model of inferential communication that
enables agents to leverage visual theory of mind to communicate actions using
pictographs. Our model, which is situated within a broader formalism for animal
communication, sheds light on the cognitive and cultural processes that led to
the development of early writing systems.
|
2502.01572
|
MakeAnything: Harnessing Diffusion Transformers for Multi-Domain
Procedural Sequence Generation
|
cs.CV
|
A hallmark of human intelligence is the ability to create complex artifacts
through structured multi-step processes. Generating procedural tutorials with
AI is a longstanding but challenging goal, facing three key obstacles: (1)
scarcity of multi-task procedural datasets, (2) maintaining logical continuity
and visual consistency between steps, and (3) generalizing across multiple
domains. To address these challenges, we propose a multi-domain dataset
covering 21 tasks with over 24,000 procedural sequences. Building upon this
foundation, we introduce MakeAnything, a framework based on the diffusion
transformer (DIT), which leverages fine-tuning to activate the in-context
capabilities of DIT for generating consistent procedural sequences. We
introduce asymmetric low-rank adaptation (LoRA) for image generation, which
balances generalization capabilities and task-specific performance by freezing
encoder parameters while adaptively tuning decoder layers. Additionally, our
ReCraft model enables image-to-process generation through spatiotemporal
consistency constraints, allowing static images to be decomposed into plausible
creation sequences. Extensive experiments demonstrate that MakeAnything
surpasses existing methods, setting new performance benchmarks for procedural
generation tasks.
|
2502.01573
|
Next Steps in LLM-Supported Java Verification
|
cs.SE cs.AI cs.LG cs.LO
|
Recent work has shown that Large Language Models (LLMs) are not only a
suitable tool for code generation but also capable of generating
annotation-based code specifications. Scaling these methodologies may allow us
to deduce provable correctness guarantees for large-scale software systems. In
comparison to other LLM tasks, the application field of deductive verification
has the notable advantage of providing a rigorous toolset to check
LLM-generated solutions. This short paper provides early results on how this
rigorous toolset can be used to reliably elicit correct specification
annotations from an unreliable LLM oracle.
|
2502.01575
|
Heterogeneous Treatment Effect in Time-to-Event Outcomes: Harnessing
Censored Data with Recursively Imputed Trees
|
stat.ML cs.LG
|
Tailoring treatments to individual needs is a central goal in fields such as
medicine. A key step toward this goal is estimating Heterogeneous Treatment
Effects (HTE) - the way treatments impact different subgroups. While crucial,
HTE estimation is challenging with survival data, where time until an event
(e.g., death) is key. Existing methods often assume complete observation, an
assumption violated in survival data due to right-censoring, leading to bias
and inefficiency. Cui et al. (2023) proposed a doubly-robust method for HTE
estimation in survival data under no hidden confounders, combining a causal
survival forest with an augmented inverse-censoring weighting estimator.
However, we find it struggles under heavy censoring, which is common in
rare-outcome problems such as Amyotrophic lateral sclerosis (ALS). Moreover,
most current methods cannot handle instrumental variables, which are a crucial
tool in the causal inference arsenal. We introduce Multiple Imputation for
Survival Treatment Response (MISTR), a novel, general, and non-parametric
method for estimating HTE in survival data. MISTR uses recursively imputed
survival trees to handle censoring without directly modeling the censoring
mechanism. Through extensive simulations and analysis of two real-world
datasets - the AIDS Clinical Trials Group Protocol 175 and the Illinois
unemployment dataset - we show that MISTR outperforms prior methods under heavy
censoring in the no-hidden-confounders setting, and extends to the instrumental
variable setting. To our knowledge, MISTR is the first non-parametric approach
for HTE estimation with unobserved confounders via instrumental variables.
|
2502.01576
|
Robust-LLaVA: On the Effectiveness of Large-Scale Robust Image Encoders
for Multi-modal Large Language Models
|
cs.CV
|
Multi-modal Large Language Models (MLLMs) excel in vision-language tasks but
remain vulnerable to visual adversarial perturbations that can induce
hallucinations, manipulate responses, or bypass safety mechanisms. Existing
methods seek to mitigate these risks by applying constrained adversarial
fine-tuning to CLIP vision encoders on ImageNet-scale data, ensuring their
generalization ability is preserved. However, this limited adversarial training
restricts robustness and broader generalization. In this work, we explore an
alternative approach of leveraging existing vision classification models that
have been adversarially pre-trained on large-scale data. Our analysis reveals
two principal contributions: (1) the extensive scale and diversity of
adversarial pre-training enables these models to demonstrate superior
robustness against diverse adversarial threats, ranging from imperceptible
perturbations to advanced jailbreaking attempts, without requiring additional
adversarial training, and (2) end-to-end MLLM integration with these robust
models facilitates enhanced adaptation of language components to robust visual
features, outperforming existing plug-and-play methodologies on complex
reasoning tasks. Through systematic evaluation across visual
question-answering, image captioning, and jail-break attacks, we demonstrate
that MLLMs trained with these robust models achieve superior adversarial
robustness while maintaining favorable clean performance. Our framework
achieves 2x and 1.5x average robustness gains in captioning and VQA tasks,
respectively, and delivers over 10% improvement against jailbreak attacks. Code
and pretrained models will be available at
https://github.com/HashmatShadab/Robust-LLaVA.
|
2502.01578
|
ReGLA: Refining Gated Linear Attention
|
cs.CL
|
Recent advancements in Large Language Models (LLMs) have set themselves apart
with their exceptional performance in complex language modelling tasks.
However, these models are also known for their significant computational and
storage requirements, primarily due to the quadratic computation complexity of
softmax attention. To mitigate this issue, linear attention has been designed
to reduce the quadratic space-time complexity that is inherent in standard
transformers. In this work, we embarked on a comprehensive exploration of three
key components that substantially impact the performance of the Gated Linear
Attention module: feature maps, normalization, and the gating mechanism. We
developed a feature mapping function to address some crucial issues that
previous suggestions overlooked. Then we offered further rationale for the
integration of normalization layers to stabilize the training process.
Moreover, we explored the saturation phenomenon of the gating mechanism and
augmented it with a refining module. We conducted extensive experiments and
showed our architecture outperforms previous Gated Linear Attention mechanisms
in extensive tasks including training from scratch and post-linearization with
continual pre-training.
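A minimal sketch of the generic gated linear attention recurrence this abstract builds on (the textbook form, not the paper's refined feature maps, normalization, or gate module; all names are illustrative):

```python
import numpy as np

def gated_linear_attention(Q, K, V, G):
    """Generic gated linear attention recurrence (illustrative sketch).

    Q, K: (T, d_k); V: (T, d_v); G: (T, d_k) per-step gates in (0, 1].
    State update: S_t = diag(g_t) S_{t-1} + k_t v_t^T; output: o_t = S_t^T q_t.
    Runs in O(T) time and O(d_k * d_v) memory, avoiding the quadratic
    cost of softmax attention.
    """
    T, d_k = Q.shape
    d_v = V.shape[1]
    S = np.zeros((d_k, d_v))
    out = np.zeros((T, d_v))
    for t in range(T):
        S = G[t][:, None] * S + np.outer(K[t], V[t])  # gated state decay + write
        out[t] = S.T @ Q[t]                           # read with the query
    return out
```

With all gates fixed at 1 this reduces to plain (unnormalized) linear attention; gates below 1 let the model forget stale state, which is where the saturation issue discussed in the abstract arises.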
|
2502.01583
|
Spectral Estimators for Multi-Index Models: Precise Asymptotics and
Optimal Weak Recovery
|
stat.ML cs.IT cs.LG math.IT math.PR math.ST stat.TH
|
Multi-index models provide a popular framework to investigate the
learnability of functions with low-dimensional structure and, also due to their
connections with neural networks, they have been the object of recent intensive
study. In this paper, we focus on recovering the subspace spanned by the
signals via spectral estimators -- a family of methods that are routinely used
in practice, often as a warm-start for iterative algorithms. Our main technical
contribution is a precise asymptotic characterization of the performance of
spectral methods, when sample size and input dimension grow proportionally and
the dimension $p$ of the space to recover is fixed. Specifically, we locate the
top-$p$ eigenvalues of the spectral matrix and establish the overlaps between
the corresponding eigenvectors (which give the spectral estimators) and a basis
of the signal subspace. Our analysis unveils a phase transition phenomenon in
which, as the sample complexity grows, eigenvalues escape from the bulk of the
spectrum and, when that happens, eigenvectors recover directions of the desired
subspace. The precise characterization we put forward enables the optimization
of the data preprocessing, thus allowing to identify the spectral estimator
that requires the minimal sample size for weak recovery.
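A sketch of the standard spectral-estimator construction the abstract refers to: build a preprocessed second-moment matrix and take its top eigenvectors as the subspace estimate. The preprocessing `T` is exactly the degree of freedom the paper optimizes; the code below uses a generic placeholder, not their tuned choice:

```python
import numpy as np

def spectral_estimator(X, y, preproc, p):
    """Generic spectral estimator for multi-index models (sketch).

    Builds D = (1/n) * sum_i T(y_i) x_i x_i^T and returns its top-p
    eigenvectors as an estimate of the signal subspace. `preproc` (the
    map T) is an assumed placeholder for the data preprocessing.
    """
    n, d = X.shape
    t = preproc(np.asarray(y, dtype=float))   # (n,) preprocessed responses
    D = (X * t[:, None]).T @ X / n            # weighted empirical second moment
    eigvals, V = np.linalg.eigh(D)            # eigenvalues in ascending order
    return V[:, -p:]                          # columns: estimated subspace basis
```

Above the phase transition described in the abstract, the top-p eigenvalues of D separate from the bulk and the returned eigenvectors overlap with the true signal directions.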
|
2502.01584
|
PhD Knowledge Not Required: A Reasoning Challenge for Large Language
Models
|
cs.AI cs.LG
|
Existing benchmarks for frontier models often test specialized, ``PhD-level''
knowledge that is difficult for non-experts to grasp. In contrast, we present a
benchmark based on the NPR Sunday Puzzle Challenge that requires only general
knowledge. Our benchmark is challenging for both humans and models; however,
correct solutions are easy to verify, and models' mistakes are easy to spot.
Our work reveals capability gaps that are not evident in existing benchmarks:
OpenAI o1 significantly outperforms other reasoning models that are on par on
benchmarks that test specialized knowledge. Furthermore, our analysis of
reasoning outputs uncovers new kinds of failures. DeepSeek R1, for instance,
often concedes with ``I give up'' before providing an answer that it knows is
wrong. R1 can also be remarkably ``uncertain'' in its output and in rare cases,
it does not ``finish thinking,'' which suggests the need for an inference-time
technique to ``wrap up'' before the context window limit is reached. We also
quantify the effectiveness of reasoning longer with R1 and Gemini Thinking to
identify the point beyond which more reasoning is unlikely to improve accuracy
on our benchmark.
|
2502.01585
|
Re-examining Double Descent and Scaling Laws under Norm-based Capacity
via Deterministic Equivalence
|
stat.ML cs.LG math.ST stat.TH
|
We investigate double descent and scaling laws in terms of weights rather
than the number of parameters. Specifically, we analyze linear and random
features models using the deterministic equivalence approach from random matrix
theory. We precisely characterize how the weight norms concentrate around
deterministic quantities and elucidate the relationship between the expected
test error and the norm-based capacity (complexity). Our results rigorously
answer whether double descent exists under norm-based capacity and reshape the
corresponding scaling laws. Moreover, they prompt a rethinking of the
data-parameter paradigm - from under-parameterized to over-parameterized
regimes - by shifting the focus to norms (weights) rather than parameter count.
|
2502.01586
|
SubTrack your Grad: Gradient Subspace Tracking for Memory and Time
Efficient Full-Parameter LLM Training
|
cs.LG
|
Training Large Language Models (LLMs) demands significant time and
computational resources due to their large model sizes and optimizer states. To
overcome these challenges, recent methods, such as BAdam, employ partial weight
updates to enhance time and memory efficiency, though sometimes at the cost of
performance. Others, like GaLore, focus on maintaining performance while
optimizing memory usage through full parameter training, but may incur higher
time complexity. By leveraging the low-rank structure of the gradient and the
Grassmannian geometry, we propose SubTrack-Grad, a subspace tracking-based
optimization method that efficiently tracks the evolving gradient subspace by
incorporating estimation errors and previously identified subspaces.
SubTrack-Grad delivers better or on-par results compared to GaLore, while
significantly outperforming BAdam, which, despite being time-efficient,
compromises performance. SubTrack-Grad reduces wall-time by up to 20.57% on
GLUE tasks (15% average reduction) and up to 65% on SuperGLUE tasks (22%
average reduction) compared to GaLore. Notably, for a 3B parameter model,
GaLore incurred a substantial 157% increase in wall-time compared to full-rank
training, whereas SubTrack-Grad exhibited a 31% increase, representing a 49%
reduction in wall-time, while enjoying the same memory reductions as GaLore.
|
2502.01587
|
Verbalized Bayesian Persuasion
|
cs.GT cs.AI cs.LG
|
Information design (ID) explores how a sender influences the optimal behavior
of receivers to achieve specific objectives. While ID originates from everyday
human communication, existing game-theoretic and machine learning methods often
model information structures as numbers, which limits many applications to toy
games. This work leverages LLMs and proposes a verbalized framework in Bayesian
persuasion (BP), which extends classic BP to real-world games involving human
dialogues for the first time. Specifically, we map BP to a verbalized
mediator-augmented extensive-form game, where LLMs instantiate the sender and
receiver. To efficiently solve the verbalized game, we propose a generalized
equilibrium-finding algorithm combining LLM and game solver. The algorithm is
reinforced with techniques including verbalized commitment assumptions,
verbalized obedience constraints, and information obfuscation. Numerical
experiments in dialogue scenarios, such as recommendation letters, courtroom
interactions, and law enforcement, validate that our framework can both
reproduce theoretical results in classic BP and discover effective persuasion
strategies in more complex natural language and multi-stage scenarios.
|
2502.01588
|
A Differentiable Alignment Framework for Sequence-to-Sequence Modeling
via Optimal Transport
|
cs.LG cs.SD eess.AS stat.ML
|
Accurate sequence-to-sequence (seq2seq) alignment is critical for
applications like medical speech analysis and language learning tools relying
on automatic speech recognition (ASR). State-of-the-art end-to-end (E2E) ASR
systems, such as the Connectionist Temporal Classification (CTC) and
transducer-based models, suffer from peaky behavior and alignment inaccuracies.
In this paper, we propose a novel differentiable alignment framework based on
one-dimensional optimal transport, enabling the model to learn a single
alignment and perform ASR in an E2E manner. We introduce a pseudo-metric,
called Sequence Optimal Transport Distance (SOTD), over the sequence space and
discuss its theoretical properties. Based on the SOTD, we propose Optimal
Temporal Transport Classification (OTTC) loss for ASR and contrast its behavior
with CTC. Experimental results on the TIMIT, AMI, and LibriSpeech datasets show
that our method considerably improves alignment performance, though with a
trade-off in ASR performance when compared to CTC. We believe this work opens
new avenues for seq2seq alignment research, providing a solid foundation for
further exploration and development within the community.
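In one dimension, optimal transport between equal-size empirical distributions has a closed form: sort both samples and match order statistics. This is the basic fact the framework builds on; the sketch below shows only this generic W1 computation, not the paper's SOTD pseudo-metric over sequences:

```python
import numpy as np

def wasserstein_1d(a, b):
    """Closed-form 1-D optimal transport (W1) between two equal-size
    samples: sorting both sides yields the optimal coupling, so the cost
    is the mean absolute difference of order statistics. Illustrative
    background only, not the paper's alignment loss."""
    a_sorted = np.sort(np.asarray(a, dtype=float))
    b_sorted = np.sort(np.asarray(b, dtype=float))
    return np.mean(np.abs(a_sorted - b_sorted))
```

Because the 1-D transport plan is given by sorting, the whole computation is differentiable almost everywhere and cheap, which is what makes an OT-based alignment loss practical for end-to-end ASR training.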
|
2502.01590
|
Downlink Beamforming with Pinching-Antenna Assisted MIMO Systems
|
cs.IT eess.SP math.IT
|
Pinching antennas have been recently proposed as a promising flexible-antenna
technology, which can be implemented by attaching low-cost pinching elements to
dielectric waveguides. This work explores the potential of employing pinching
antenna systems (PASs) for downlink transmission in a multiuser MIMO setting.
We consider the problem of hybrid beamforming, where the digital precoder at
the access point and the activated locations of the pinching elements are
jointly optimized to maximize the achievable weighted sum-rate. Invoking
fractional programming, a novel low-complexity algorithm is developed to
iteratively update the precoding matrix and the locations of the pinching
antennas. We validate the proposed scheme through extensive numerical
experiments. Our investigations demonstrate that with PAS, the system
throughput can be significantly boosted compared with conventional
fixed-location antenna systems, highlighting the potential of PAS as an
enabling candidate for next-generation wireless networks.
|
2502.01591
|
Improving Transformer World Models for Data-Efficient RL
|
cs.LG cs.AI
|
We present an approach to model-based RL that achieves new state-of-the-art
performance on the challenging Craftax-classic benchmark, an open-world 2D
survival game that requires agents to exhibit a wide range of general abilities
-- such as strong generalization, deep exploration, and long-term reasoning.
With a series of careful design choices aimed at improving sample efficiency,
our MBRL algorithm achieves a reward of 67.4% after only 1M environment steps,
significantly outperforming DreamerV3, which achieves 53.2%, and, for the first
time, exceeds human performance of 65.0%. Our method starts by constructing a
SOTA model-free baseline, using a novel policy architecture that combines CNNs
and RNNs. We then add three improvements to the standard MBRL setup: (a) "Dyna
with warmup", which trains the policy on real and imaginary data, (b) "nearest
neighbor tokenizer" on image patches, which improves the scheme to create the
transformer world model (TWM) inputs, and (c) "block teacher forcing", which
allows the TWM to reason jointly about the future tokens of the next timestep.
|
2502.01594
|
Faster Adaptive Optimization via Expected Gradient Outer Product
Reparameterization
|
cs.LG math.OC
|
Adaptive optimization algorithms -- such as Adagrad, Adam, and their variants
-- have found widespread use in machine learning, signal processing and many
other settings. Several methods in this family are not rotationally
equivariant, meaning that simple reparameterizations (i.e. change of basis) can
drastically affect their convergence. However, their sensitivity to the choice
of parameterization has not been systematically studied; it is not clear how to
identify a "favorable" change of basis in which these methods perform best. In
this paper we propose a reparameterization method and demonstrate both
theoretically and empirically its potential to improve their convergence
behavior. Our method is an orthonormal transformation based on the expected
gradient outer product (EGOP) matrix, which can be approximated using either
full-batch or stochastic gradient oracles. We show that for a broad class of
functions, the sensitivity of adaptive algorithms to choice-of-basis is
influenced by the decay of the EGOP matrix spectrum. We illustrate the
potential impact of EGOP reparameterization by presenting empirical evidence
and theoretical arguments that common machine learning tasks with "natural"
data exhibit EGOP spectral decay.
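The EGOP-based reparameterization described above can be sketched directly: estimate M = E[∇f(x) ∇f(x)^T] by Monte Carlo, eigendecompose it, and optimize in the resulting orthonormal basis. The oracle and sampler names below are illustrative placeholders, not the paper's API:

```python
import numpy as np

def egop_basis(grad_oracle, sampler, n_samples=1000):
    """Estimate the expected gradient outer product (EGOP) matrix
    M = E[grad f(x) grad f(x)^T] by Monte Carlo and return its
    orthonormal eigenbasis (illustrative sketch; `grad_oracle` and
    `sampler` are assumed callables, not from the paper)."""
    g0 = grad_oracle(sampler())
    M = np.zeros((g0.size, g0.size))
    for _ in range(n_samples):
        g = grad_oracle(sampler())
        M += np.outer(g, g)
    M /= n_samples
    # Orthonormal change of basis: columns of V are EGOP eigenvectors.
    eigvals, V = np.linalg.eigh(M)
    return V, eigvals
```

An adaptive optimizer would then be run on the reparameterized objective h(z) = f(Vz); fast decay of `eigvals` is the regime where the abstract predicts the change of basis helps most.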
|
2502.01597
|
FutureVision: A methodology for the investigation of future cognition
|
cs.CL
|
This paper presents a methodology combining multimodal semantic analysis with
an eye-tracking experimental protocol to investigate the cognitive effort
involved in understanding the communication of future scenarios. To demonstrate
the methodology, we conduct a pilot study examining how visual fixation
patterns vary during the evaluation of valence and counterfactuality in
fictional ad pieces describing futuristic scenarios, using a portable eye
tracker. Participants' eye movements are recorded while evaluating the stimuli
and describing them to a conversation partner. Gaze patterns are analyzed
alongside semantic representations of the stimuli and participants'
descriptions, constructed from a frame semantic annotation of both linguistic
and visual modalities. Preliminary results show that far-future and pessimistic
scenarios are associated with longer fixations and more erratic saccades,
supporting the hypothesis that fractures in the base spaces underlying the
interpretation of future scenarios increase cognitive load for comprehenders.
|
2502.01600
|
Reinforcement Learning for Long-Horizon Interactive LLM Agents
|
cs.LG cs.AI
|
Interactive digital agents (IDAs) leverage APIs of stateful digital
environments to perform tasks in response to user requests. While IDAs powered
by instruction-tuned large language models (LLMs) can react to feedback from
interface invocations in multi-step exchanges, they have not been trained in
their respective digital environments. Prior methods accomplish less than half
of tasks in sophisticated benchmarks such as AppWorld. We present a
reinforcement learning (RL) approach that trains IDAs directly in their target
environments. We formalize this training as a partially observable Markov
decision process and derive LOOP, a data- and memory-efficient variant of
proximal policy optimization. LOOP uses no value network and maintains exactly
one copy of the underlying LLM in memory, making its implementation
straightforward and as memory-efficient as fine-tuning a single LLM. A
32-billion-parameter agent trained with LOOP in the AppWorld environment
outperforms the much larger OpenAI o1 agent by 9 percentage points (15%
relative). To our knowledge, this is the first reported application of RL to
IDAs that interact with a stateful, multi-domain, multi-app environment via
direct API calls. Our analysis sheds light on the effectiveness of RL in this
area, showing that the agent learns to consult the API documentation, avoid
unwarranted assumptions, minimize confabulation, and recover from setbacks.
|
2502.01609
|
Breaking Focus: Contextual Distraction Curse in Large Language Models
|
cs.CL
|
Recent advances in Large Language Models (LLMs) have revolutionized
generative systems, achieving excellent performance across diverse domains.
Although these models perform well in controlled environments, their real-world
applications frequently encounter inputs containing both essential and
irrelevant details. Our investigation has revealed a critical vulnerability in
LLMs, which we term Contextual Distraction Vulnerability (CDV). This phenomenon
arises when models fail to maintain consistent performance on questions
modified with semantically coherent but irrelevant context. To systematically
investigate this vulnerability, we propose an efficient tree-based search
methodology to automatically generate CDV examples. Our approach successfully
generates CDV examples across four datasets, causing an average performance
degradation of approximately 45% in state-of-the-art LLMs. To address this
critical issue, we explore various mitigation strategies and find that
post-targeted training approaches can effectively enhance model robustness
against contextual distractions. Our findings highlight the fundamental nature
of CDV as an ability-level challenge rather than a knowledge-level issue since
models demonstrate the necessary knowledge by answering correctly in the
absence of distractions. This calls the community's attention to address CDV
during model development to ensure reliability. The code is available at
https://github.com/wyf23187/LLM_CDV.
|
2502.01612
|
Self-Improving Transformers Overcome Easy-to-Hard and Length
Generalization Challenges
|
cs.LG cs.AI
|
Large language models often struggle with length generalization and solving
complex problem instances beyond their training distribution. We present a
self-improvement approach where models iteratively generate and learn from
their own solutions, progressively tackling harder problems while maintaining a
standard transformer architecture. Across diverse tasks including arithmetic,
string manipulation, and maze solving, self-improvement enables models to solve
problems far beyond their initial training distribution - for instance,
generalizing from 10-digit to 100-digit addition without apparent saturation.
We observe that in some cases filtering for correct self-generated examples
leads to exponential improvements in out-of-distribution performance across
training rounds. Additionally, starting from pretrained models significantly
accelerates this self-improvement process for several tasks. Our results
demonstrate how controlled weak-to-strong curricula can systematically teach a
model logical extrapolation without any changes to the positional embeddings,
or the model architecture.
|
2502.01615
|
Large Language Models Are Human-Like Internally
|
cs.CL
|
Recent cognitive modeling studies have reported that larger language models
(LMs) exhibit a poorer fit to human reading behavior, leading to claims of
their cognitive implausibility. In this paper, we revisit this argument through
the lens of mechanistic interpretability and argue that prior conclusions were
skewed by an exclusive focus on the final layers of LMs. Our analysis reveals
that next-word probabilities derived from internal layers of larger LMs align
with human sentence processing data as well as, or better than, those from
smaller LMs. This alignment holds consistently across behavioral (self-paced
reading times, gaze durations, MAZE task processing times) and
neurophysiological (N400 brain potentials) measures, challenging earlier mixed
results and suggesting that the cognitive plausibility of larger LMs has been
underestimated. Furthermore, we first identify an intriguing relationship
between LM layers and human measures: earlier layers correspond more closely
with fast gaze durations, while later layers better align with relatively
slower signals such as N400 potentials and MAZE processing times. Our work
opens new avenues for interdisciplinary research at the intersection of
mechanistic interpretability and cognitive modeling.
|
2502.01616
|
Preference VLM: Leveraging VLMs for Scalable Preference-Based
Reinforcement Learning
|
cs.LG
|
Preference-based reinforcement learning (RL) offers a promising approach for
aligning policies with human intent but is often constrained by the high cost
of human feedback. In this work, we introduce PrefVLM, a framework that
integrates Vision-Language Models (VLMs) with selective human feedback to
significantly reduce annotation requirements while maintaining performance. Our
method leverages VLMs to generate initial preference labels, which are then
filtered to identify uncertain cases for targeted human annotation.
Additionally, we adapt VLMs using a self-supervised inverse dynamics loss to
improve alignment with evolving policies. Experiments on Meta-World
manipulation tasks demonstrate that PrefVLM achieves comparable or superior
success rates to state-of-the-art methods while using up to 2x fewer human
annotations. Furthermore, we show that adapted VLMs enable efficient knowledge
transfer across tasks, further minimizing feedback needs. Our results highlight
the potential of combining VLMs with selective human supervision to make
preference-based RL more scalable and practical.
|
2502.01618
|
A Probabilistic Inference Approach to Inference-Time Scaling of LLMs
using Particle-Based Monte Carlo Methods
|
cs.LG cs.AI
|
Large language models (LLMs) have achieved significant performance gains via
scaling up model sizes and/or data. However, recent evidence suggests
diminishing returns from such approaches, motivating scaling the computation
spent at inference time. Existing inference-time scaling methods, usually with
reward models, cast the task as a search problem, which tends to be vulnerable
to reward hacking as a consequence of approximation errors in reward models. In
this paper, we instead cast inference-time scaling as a probabilistic inference
task and leverage sampling-based techniques to explore the typical set of the
state distribution of a state-space model with an approximate likelihood,
rather than optimize for its mode directly. We propose a novel inference-time
scaling approach by adapting particle-based Monte Carlo methods to this task.
Our empirical evaluation demonstrates that our methods have a 4-16x better
scaling rate over our deterministic search counterparts on various challenging
mathematical reasoning tasks. Using our approach, we show that
Qwen2.5-Math-1.5B-Instruct can surpass GPT-4o accuracy in only 4 rollouts,
while Qwen2.5-Math-7B-Instruct scales to o1 level accuracy in only 32 rollouts.
Our work not only presents an effective method for inference-time scaling, but
also connects the rich literature in probabilistic inference with
inference-time scaling of LLMs to develop more robust algorithms in future
work. Code, videos, and further information available at
https://probabilistic-inference-scaling.github.io.
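The particle-based idea above can be sketched as a generic sequential importance resampling loop: propagate partial solutions, weight them by an exponentiated reward (the approximate likelihood), and resample. The `init`/`step`/`reward` callables stand in for an LLM sampler and a reward model; they are assumptions for illustration, not the paper's interface:

```python
import math
import random

def _choice(probs, rng):
    """Sample an index from a discrete distribution via inverse CDF."""
    u, c = rng.random(), 0.0
    for i, p in enumerate(probs):
        c += p
        if u < c:
            return i
    return len(probs) - 1

def particle_inference_scaling(init, step, reward, n_particles=8,
                               n_steps=4, temp=1.0, rng=None):
    """Sketch of particle filtering for inference-time scaling: explore
    the typical set of high-reward states rather than greedily optimizing
    the reward's mode (all names are illustrative placeholders)."""
    rng = rng or random.Random(0)
    particles = [init() for _ in range(n_particles)]
    for _ in range(n_steps):
        particles = [step(p, rng) for p in particles]          # propagate
        weights = [math.exp(reward(p) / temp) for p in particles]
        total = sum(weights)
        probs = [w / total for w in weights]
        # Multinomial resampling concentrates particles on promising states.
        particles = [particles[_choice(probs, rng)]
                     for _ in range(n_particles)]
    return max(particles, key=reward)
```

Sampling proportionally to exp(reward/temp) rather than taking an argmax is what makes the procedure more robust to reward-model approximation errors than deterministic search.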
|
2502.01619
|
Learning to Generate Unit Tests for Automated Debugging
|
cs.SE cs.AI cs.CL cs.LG
|
Unit tests (UTs) play an instrumental role in assessing code correctness as
well as providing feedback to a large language model (LLM) as it iteratively
debugs faulty code, motivating automated test generation. However, we uncover a
trade-off between generating unit test inputs that reveal errors when given a
faulty code and correctly predicting the unit test output without access to the
gold solution. To address this trade-off, we propose UTGen, which teaches LLMs
to generate unit test inputs that reveal errors along with their correct
expected outputs based on task descriptions and candidate code. We integrate
UTGen into UTDebug, a robust debugging pipeline that uses generated tests to
help LLMs debug effectively. Since model-generated tests can provide noisy
signals (e.g., from incorrectly predicted outputs), UTDebug (i) scales UTGen
via test-time compute to improve UT output prediction, and (ii) validates and
back-tracks edits based on multiple generated UTs to avoid overfitting. We show
that UTGen outperforms UT generation baselines by 7.59% based on a metric
measuring the presence of both error-revealing UT inputs and correct UT
outputs. When used with UTDebug, we find that feedback from UTGen's unit tests
improves pass@1 accuracy of Qwen-2.5 7B on HumanEvalFix and our own harder
debugging split of MBPP+ by over 3% and 12.35% (respectively) over other
LLM-based UT generation baselines.
|
2502.01620
|
LLM-TA: An LLM-Enhanced Thematic Analysis Pipeline for Transcripts from
Parents of Children with Congenital Heart Disease
|
cs.CL cs.HC
|
Thematic Analysis (TA) is a fundamental method in healthcare research for
analyzing transcript data, but it is resource-intensive and difficult to scale
for large, complex datasets. This study investigates the potential of large
language models (LLMs) to augment the inductive TA process in high-stakes
healthcare settings. Focusing on interview transcripts from parents of children
with Anomalous Aortic Origin of a Coronary Artery (AAOCA), a rare congenital
heart disease, we propose an LLM-Enhanced Thematic Analysis (LLM-TA) pipeline.
Our pipeline integrates an affordable state-of-the-art LLM (GPT-4o mini),
LangChain, and prompt engineering with chunking techniques to analyze nine
detailed transcripts following the inductive TA framework. We evaluate the
LLM-generated themes against human-generated results using thematic similarity
metrics, LLM-assisted assessments, and expert reviews. Results demonstrate that
our pipeline outperforms existing LLM-assisted TA methods significantly. While
the pipeline alone has not yet reached human-level quality in inductive TA, it
shows great potential to improve scalability, efficiency, and accuracy while
reducing analyst workload when working collaboratively with domain experts. We
provide practical recommendations for incorporating LLMs into high-stakes TA
workflows and emphasize the importance of close collaboration with domain
experts to address challenges related to real-world applicability and dataset
complexity. https://github.com/jiaweixu98/LLM-TA
|
2502.01626
|
MFP-VTON: Enhancing Mask-Free Person-to-Person Virtual Try-On via
Diffusion Transformer
|
cs.CV
|
The garment-to-person virtual try-on (VTON) task, which aims to generate
fitting images of a person wearing a reference garment, has made significant
strides. However, obtaining a standard garment is often more challenging than
using the garment already worn by the person. To improve ease of use, we
propose MFP-VTON, a Mask-Free framework for Person-to-Person VTON. Recognizing
the scarcity of person-to-person data, we adapt a garment-to-person model and
dataset to construct a specialized dataset for this task. Our approach builds
upon a pretrained diffusion transformer, leveraging its strong generative
capabilities. During mask-free model fine-tuning, we introduce a Focus
Attention loss to emphasize the garment of the reference person and the details
outside the garment of the target person. Experimental results demonstrate that
our model excels in both person-to-person and garment-to-person VTON tasks,
generating high-fidelity fitting images.
|
2502.01627
|
A Poisson Process AutoDecoder for X-ray Sources
|
astro-ph.IM astro-ph.HE cs.LG stat.AP
|
X-ray observing facilities, such as the Chandra X-ray Observatory and
eROSITA, have detected millions of astronomical sources associated with
high-energy phenomena. The arrival of photons as a function of time follows a
Poisson process and can vary by orders-of-magnitude, presenting obstacles for
common tasks such as source classification, physical property derivation, and
anomaly detection. Previous work has either failed to directly capture the
Poisson nature of the data or has focused only on Poisson rate function
reconstruction. In this work, we present Poisson Process AutoDecoder (PPAD).
PPAD is a neural field decoder that maps fixed-length latent features to
continuous Poisson rate functions across energy band and time via unsupervised
learning. PPAD reconstructs the rate function and yields a representation at
the same time. We demonstrate the efficacy of PPAD via reconstruction,
regression, classification and anomaly detection experiments using the Chandra
Source Catalog.
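The likelihood underlying any Poisson-rate reconstruction of this kind is the standard inhomogeneous Poisson process NLL: the integral of the rate minus the log-rate summed over observed photon arrivals. The sketch below shows this generic objective, not PPAD's neural field decoder itself:

```python
import numpy as np

def poisson_process_nll(rate_fn, event_times, t_max, n_grid=1000):
    """Negative log-likelihood of an inhomogeneous Poisson process:
    NLL = integral_0^T rate(t) dt - sum_i log rate(t_i).
    `rate_fn` maps an array of times to nonnegative rates; the integral
    is approximated with the trapezoid rule on a uniform grid."""
    grid = np.linspace(0.0, t_max, n_grid)
    r = rate_fn(grid)
    integral = np.sum(0.5 * (r[1:] + r[:-1]) * np.diff(grid))
    return integral - np.sum(np.log(rate_fn(np.asarray(event_times))))
```

Minimizing this loss over a parameterized rate function (e.g., a decoder conditioned on a latent code, as in PPAD) recovers the rate while naturally handling order-of-magnitude variation in photon counts.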
|
2502.01628
|
Harmonic Loss Trains Interpretable AI Models
|
cs.LG
|
In this paper, we introduce **harmonic loss** as an alternative to the
standard cross-entropy loss for training neural networks and large language
models (LLMs). Harmonic loss enables improved interpretability and faster
convergence, owing to its scale invariance and finite convergence point by
design, which can be interpreted as a class center. We first validate the
performance of harmonic models across algorithmic, vision, and language
datasets. Through extensive experiments, we demonstrate that models trained
with harmonic loss outperform standard models by: (a) enhancing
interpretability, (b) requiring less data for generalization, and (c) reducing
grokking. Moreover, we compare a GPT-2 model trained with harmonic loss to the
standard GPT-2, illustrating that the harmonic model develops more
interpretable representations. Looking forward, we believe harmonic loss has
the potential to become a valuable tool in domains with limited data
availability or in high-stakes applications where interpretability and
reliability are paramount, paving the way for more robust and efficient neural
network models.
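As a hedged sketch of the idea (the exact formulation is in the paper), class scores can be derived from Euclidean distances to per-class weight vectors interpreted as class centers, with a harmonic form that is invariant to rescaling inputs and centers together:

```python
import numpy as np

def harmonic_loss(x, centers, y, n=2):
    """Sketch of a harmonic loss: distances to class centers are turned into
    probabilities via the harmonic form p_i ~ d_i^(-n); the loss is the
    negative log-probability of the true class y."""
    d = np.linalg.norm(centers - x, axis=1)      # distance to each class center
    p = d ** (-n) / np.sum(d ** (-n))            # harmonic "softmax"
    return -np.log(p[y])

centers = np.array([[0.0, 0.0], [3.0, 3.0]])
x = np.array([0.1, -0.1])                        # near class 0's center
loss_near = harmonic_loss(x, centers, y=0)
loss_far = harmonic_loss(x, centers, y=1)
assert loss_near < loss_far                      # closer center -> lower loss
# scale invariance: jointly rescaling x and the centers leaves the loss unchanged
assert np.isclose(harmonic_loss(2 * x, 2 * centers, y=0), loss_near)
```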
|
2502.01630
|
TReMu: Towards Neuro-Symbolic Temporal Reasoning for LLM-Agents with
Memory in Multi-Session Dialogues
|
cs.AI
|
Temporal reasoning in multi-session dialogues presents a significant
challenge which has been under-studied in previous temporal reasoning
benchmarks. To bridge this gap, we propose a new evaluation task for temporal
reasoning in multi-session dialogues and introduce an approach to construct a
new benchmark by augmenting dialogues from LoCoMo and creating multi-choice
QAs. Furthermore, we present TReMu, a new framework aimed at enhancing the
temporal reasoning capabilities of LLM-agents in this context. Specifically,
the framework employs \textit{time-aware memorization} through timeline
summarization, generating retrievable memory by summarizing events in each
dialogue session with their inferred dates. Additionally, we integrate
\textit{neuro-symbolic temporal reasoning}, where LLMs generate Python code to
perform temporal calculations and select answers. Experimental evaluations on
popular LLMs demonstrate that our benchmark is challenging, and the proposed
framework significantly improves temporal reasoning performance compared to
baseline methods, raising GPT-4o's score from 29.83 with standard prompting to
77.67 with our approach, highlighting its effectiveness in addressing temporal
reasoning in multi-session dialogues.
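The neuro-symbolic step can be illustrated with a minimal sketch (session names, dates, and events below are hypothetical): the timeline memory stores inferred session dates, and an LLM emits Python so the temporal arithmetic is computed symbolically rather than guessed.

```python
from datetime import date

# Retrievable memory from timeline summarization (hypothetical entries)
memory = {
    "session_1": {"date": date(2023, 5, 2), "event": "Alice adopted a puppy"},
    "session_2": {"date": date(2023, 6, 14), "event": "the puppy's first vet visit"},
}

# Code an LLM might generate to answer "How many weeks after adopting the
# puppy was the first vet visit?" -- exact date arithmetic, not estimation.
delta_days = (memory["session_2"]["date"] - memory["session_1"]["date"]).days
answer_weeks = delta_days // 7
print(answer_weeks)
```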
|
2502.01633
|
Adversarial Reasoning at Jailbreaking Time
|
cs.LG cs.AI
|
As large language models (LLMs) are becoming more capable and widespread, the
study of their failure cases is becoming increasingly important. Recent
advances in standardizing, measuring, and scaling test-time compute suggest new
methodologies for optimizing models to achieve high performance on hard tasks.
In this paper, we apply these advances to the task of model jailbreaking:
eliciting harmful responses from aligned LLMs. We develop an adversarial
reasoning approach to automatic jailbreaking via test-time computation that
achieves SOTA attack success rates (ASR) against many aligned LLMs, even the
ones that aim to trade inference-time compute for adversarial robustness. Our
approach introduces a new paradigm in understanding LLM vulnerabilities, laying
the foundation for the development of more robust and trustworthy AI systems.
|
2502.01634
|
Online Gradient Boosting Decision Tree: In-Place Updates for Efficient
Adding/Deleting Data
|
cs.LG cs.AI cs.CR stat.ML
|
Gradient Boosting Decision Tree (GBDT) is one of the most popular machine
learning models in various applications. However, in the traditional setting,
all data must be accessible simultaneously during training: the model does not
allow adding or deleting data instances after training. In this paper, we
propose an efficient online learning framework for GBDT supporting both
incremental and decremental learning. To the best of our knowledge, this is the
first work that considers an in-place unified incremental and decremental
learning on GBDT. To reduce the learning cost, we present a collection of
optimizations for our framework, so that it can add or delete a small fraction
of data on the fly. We theoretically show the relationship between the
hyper-parameters of the proposed optimizations, which enables trading off
accuracy and cost on incremental and decremental learning. The backdoor attack
results show that our framework can successfully inject and remove backdoor in
a well-trained model using incremental and decremental learning, and the
empirical results on public datasets confirm the effectiveness and efficiency
of our proposed online learning framework and optimizations.
|
2502.01635
|
The AI Agent Index
|
cs.SE cs.AI
|
Leading AI developers and startups are increasingly deploying agentic AI
systems that can plan and execute complex tasks with limited human involvement.
However, there is currently no structured framework for documenting the
technical components, intended uses, and safety features of agentic systems. To
fill this gap, we introduce the AI Agent Index, the first public database to
document information about currently deployed agentic AI systems. For each
system that meets the criteria for inclusion in the index, we document the
system's components (e.g., base model, reasoning implementation, tool use),
application domains (e.g., computer use, software engineering), and risk
management practices (e.g., evaluation results, guardrails), based on publicly
available information and correspondence with developers. We find that while
developers generally provide ample information regarding the capabilities and
applications of agentic systems, they currently provide limited information
regarding safety and risk management practices. The AI Agent Index is available
online at https://aiagentindex.mit.edu/
|
2502.01636
|
Lifelong Sequential Knowledge Editing without Model Degradation
|
cs.CL cs.AI cs.LG
|
Prior work in parameter-modifying knowledge editing has shown that
large-scale sequential editing leads to significant model degradation. In this
paper, we study the reasons behind this and scale sequential knowledge editing
to 10,000 sequential edits, while maintaining the downstream performance of the
original model. We first show that locate-then-edit knowledge editing methods
lead to overfitting on the edited facts. We also show that continuous knowledge
editing using these methods leads to disproportionate growth in the norm of the
edited matrix. We then provide a crucial insight into the inner workings of
locate-then-edit methods. We show that norm-growth is a hidden trick employed
by these methods that gives larger importance to the output activations
produced from the edited layers. With this "importance hacking", the edited
layers provide a much larger contribution to the model's output. To mitigate
these issues, we present ENCORE - Early stopping and Norm-Constrained Robust
knowledge Editing. ENCORE controls for overfitting and the disproportionate
norm-growth to enable long-term sequential editing, where we are able to
perform up to 10,000 sequential edits without loss of downstream performance.
ENCORE is also 61% faster than MEMIT and 64% faster than AlphaEdit on
Llama3-8B.
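A minimal sketch of the norm-constraint idea (the threshold and projection rule here are illustrative choices, not ENCORE's exact mechanism): after applying an edit, the updated matrix is rescaled so its norm cannot grow disproportionately.

```python
import numpy as np

def norm_constrained_update(W, delta, max_growth=1.05):
    """Apply a knowledge edit `delta` to weight matrix `W`, then rescale the
    result so its Frobenius norm grows by at most `max_growth` relative to
    the original matrix, curbing the norm inflation seen in sequential
    locate-then-edit methods."""
    W_new = W + delta
    limit = max_growth * np.linalg.norm(W)
    norm_new = np.linalg.norm(W_new)
    if norm_new > limit:
        W_new *= limit / norm_new      # project back onto the norm ball
    return W_new

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
delta = rng.normal(size=(8, 8)) * 5.0  # a large, norm-inflating edit
W_edited = norm_constrained_update(W, delta)
growth = np.linalg.norm(W_edited) / np.linalg.norm(W)
assert growth <= 1.05 + 1e-9
```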
|
2502.01637
|
Scaling Embedding Layers in Language Models
|
cs.CL cs.LG
|
We propose SCONE ($\textbf{S}$calable, $\textbf{C}$ontextualized,
$\textbf{O}$ffloaded, $\textbf{N}$-gram $\textbf{E}$mbedding), a method for
extending input embedding layers to enhance language model performance as layer
size scales. To avoid increased decoding costs, SCONE retains the original
vocabulary while introducing embeddings for a set of frequent $n$-grams. These
embeddings provide contextualized representation for each input token and are
learned with a separate model during training. During inference, they are
precomputed and stored in off-accelerator memory with minimal impact on
inference speed. SCONE enables two new scaling strategies: increasing the
number of cached $n$-gram embeddings and scaling the model used to learn them,
all while maintaining fixed inference-time FLOPS. We show that scaling both
aspects allows SCONE to outperform a 1.9B parameter baseline across diverse
corpora, while using only half the inference-time FLOPS.
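The lookup mechanics can be sketched as follows (dimensions, vocabulary, and the additive combination are illustrative assumptions, not SCONE's exact design): a precomputed, off-accelerator table of frequent n-gram embeddings augments each token's input embedding via a cheap lookup.

```python
import numpy as np

EMB_DIM = 4
rng = np.random.default_rng(0)

# Precomputed cache of frequent n-gram embeddings (hypothetical entries;
# in SCONE these are learned by a separate model and stored off-accelerator).
ngram_cache = {
    ("new", "york"): rng.normal(size=EMB_DIM),
    ("machine", "learning"): rng.normal(size=EMB_DIM),
}
token_emb = {w: rng.normal(size=EMB_DIM)
             for w in ["i", "love", "new", "york", "machine", "learning"]}

def contextualized_embed(tokens):
    """Each token's input embedding is augmented with the embedding of the
    n-gram (here a bigram) ending at that token, if it is in the cache."""
    out = []
    for i, tok in enumerate(tokens):
        e = token_emb[tok].copy()
        if i > 0 and (tokens[i - 1], tok) in ngram_cache:
            e += ngram_cache[(tokens[i - 1], tok)]   # cheap table lookup
        out.append(e)
    return np.stack(out)

embs = contextualized_embed(["i", "love", "new", "york"])
assert embs.shape == (4, EMB_DIM)
# "york" received the ("new", "york") bigram embedding on top of its own
assert np.allclose(embs[3], token_emb["york"] + ngram_cache[("new", "york")])
```

Because the cache is a plain table, growing it adds no inference-time FLOPS, which is the scaling axis the abstract emphasizes.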
|
2502.01639
|
SliderSpace: Decomposing the Visual Capabilities of Diffusion Models
|
cs.CV cs.GR cs.LG
|
We present SliderSpace, a framework for automatically decomposing the visual
capabilities of diffusion models into controllable and human-understandable
directions. Unlike existing control methods that require a user to specify
attributes for each edit direction individually, SliderSpace discovers multiple
interpretable and diverse directions simultaneously from a single text prompt.
Each direction is trained as a low-rank adaptor, enabling compositional control
and the discovery of surprising possibilities in the model's latent space.
Through extensive experiments on state-of-the-art diffusion models, we
demonstrate SliderSpace's effectiveness across three applications: concept
decomposition, artistic style exploration, and diversity enhancement. Our
quantitative evaluation shows that SliderSpace-discovered directions decompose
the visual structure of the model's knowledge effectively, offering insights into
the latent capabilities encoded within diffusion models. User studies further
validate that our method produces more diverse and useful variations compared
to baselines. Our code, data and trained weights are available at
https://sliderspace.baulab.info
|
2502.01643
|
FruitPAL: An IoT-Enabled Framework for Automatic Monitoring of Fruit
Consumption in Smart Healthcare
|
cs.CV
|
Fruits are rich sources of essential vitamins and nutrients that are vital
for human health. This study introduces two fully automated devices, FruitPAL
and its updated version, FruitPAL 2.0, which aim to promote safe fruit
consumption while reducing health risks. Both devices leverage a high-quality
dataset of fifteen fruit types and use advanced models, YOLOv8 and YOLOv5 V6.0,
to enhance detection accuracy. The original FruitPAL device can identify
various fruit types and notify caregivers if an allergic reaction is detected,
thanks to YOLOv8's improved accuracy and rapid response time. Notifications are
transmitted via the cloud to mobile devices, ensuring real-time updates and
immediate accessibility. FruitPAL 2.0 builds upon this by not only detecting
fruit but also estimating its nutritional value, thereby encouraging healthy
consumption. Trained on the YOLOv5 V6.0 model, FruitPAL 2.0 analyzes fruit
intake to provide users with valuable dietary insights. This study aims to
promote fruit consumption by helping individuals make informed choices,
balancing health benefits with allergy awareness. By alerting users to
potential allergens while encouraging the consumption of nutrient-rich fruits,
these devices support both health maintenance and dietary awareness.
|
2502.01648
|
UA-1 PH2 DECISIVE Testing Handbook: Test Methods and Benchmarking
Performance Results for sUAS in Dense Urban Environments
|
cs.RO cs.SY eess.SY
|
This report outlines all test methods and reviews all results derived from
performance benchmarking of small unmanned aerial systems (sUAS) in dense urban
environments conducted during Phase 2 of the Development and Execution of
Comprehensive and Integrated Systematic Intelligent Vehicle Evaluations
(DECISIVE) project by the University of Massachusetts Lowell (HEROES Project
UA-1). Using 9 of the developed test methods, over 100 tests were conducted to
benchmark the performance of 8 sUAS platforms: Cleo Robotics Dronut X1P (P =
prototype), FLIR Black Hornet 3 PRS, Flyability Elios 2 GOV, Lumenier Nighthawk
V3, Parrot ANAFI USA GOV, Skydio X2D, Teal Golden Eagle, and Vantage Robotics
Vesper.
|
2502.01649
|
Privacy-Preserving Edge Speech Understanding with Tiny Foundation Models
|
eess.AS cs.LG cs.SD
|
Robust speech recognition systems rely on cloud service providers for
inference, so it must be ensured that an untrustworthy provider cannot deduce
sensitive content from the speech. Speech content can be sanitized, provided
the sanitization does not compromise transcription accuracy. Realizing the
underutilized capabilities of tiny speech foundation models (FMs), for the
first time we propose a novel use: enhancing speech privacy on
resource-constrained devices. We introduce XYZ, an edge/cloud
privacy-preserving speech inference engine that can filter sensitive entities
without compromising transcript accuracy. We use a timestamp-based on-device
masking approach that employs a token-to-entity prediction model to filter
sensitive entities. Our choice of mask strategically conceals parts of the
input and hides sensitive data. The masked input is sent to a trusted cloud
service or to a local hub to generate the masked output. The effectiveness of
XYZ hinges on how well the entity time segments are masked. Our recovery is a
confidence-score-based approach that chooses the best prediction between the
cloud and on-device models. We implement XYZ on a 64-bit Raspberry Pi 4B.
Experiments show that our solution leads to robust speech recognition without
forsaking privacy. XYZ, with < 100 MB of memory, achieves state-of-the-art
(SOTA) speech transcription performance while filtering about 83% of private
entities directly on-device. XYZ is 16x smaller in memory and 17x more
compute-efficient than prior privacy-preserving speech frameworks, and it
achieves a relative reduction in word error rate (WER) of 38.8-77.5% compared
to existing offline transcription services.
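The timestamp-based masking step can be sketched in a few lines (entity labels, the `[MASK]` token, and the tuple layout are hypothetical stand-ins for the system's actual tagger output): sensitive words are replaced before anything leaves the device, and their time spans identify the audio segments to conceal.

```python
# Entity types treated as sensitive (illustrative set)
SENSITIVE = {"PERSON", "LOCATION", "DATE"}

def mask_transcript(tokens):
    """tokens: list of (word, start_sec, end_sec, entity_label) from an
    on-device token-to-entity prediction model. Returns the masked text and
    the time spans whose audio should also be concealed."""
    masked, spans = [], []
    for word, start, end, label in tokens:
        if label in SENSITIVE:
            masked.append("[MASK]")
            spans.append((start, end))     # audio segments to conceal
        else:
            masked.append(word)
    return " ".join(masked), spans

tokens = [("meet", 0.0, 0.4, "O"), ("alice", 0.4, 0.9, "PERSON"),
          ("in", 0.9, 1.0, "O"), ("boston", 1.0, 1.5, "LOCATION")]
text, spans = mask_transcript(tokens)
assert text == "meet [MASK] in [MASK]"
assert spans == [(0.4, 0.9), (1.0, 1.5)]
```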
|
2502.01651
|
Fine-tuning LLaMA 2 inference: a comparative study of language
implementations for optimal efficiency
|
cs.LG cs.AI
|
This paper presents a comparative study aimed at optimizing Llama2 inference,
a critical aspect of machine learning and natural language processing (NLP). We
evaluate various programming languages and frameworks, including TensorFlow,
PyTorch, Python, Mojo, C++, and Java, analyzing their performance in terms of
speed, memory consumption, and ease of implementation through extensive
benchmarking. Strengths and limitations of each approach are highlighted, along
with proposed optimization strategies for parallel processing and hardware
utilization. Furthermore, we investigate the Mojo SDK, a novel framework
designed for large language model (LLM) inference on Apple Silicon,
benchmarking its performance against implementations in C, C++, Rust, Zig, Go,
and Julia. Our experiments, conducted on an Apple M1 Max, demonstrate Mojo
SDK's competitive performance, ease of use, and seamless Python compatibility,
positioning it as a strong alternative for LLM inference on Apple Silicon. We
also discuss broader implications for LLM deployment on resource-constrained
hardware and identify potential directions for future research.
|
2502.01652
|
Hybrid Group Relative Policy Optimization: A Multi-Sample Approach to
Enhancing Policy Optimization
|
cs.LG cs.AI
|
Hybrid Group Relative Policy Optimization (Hybrid GRPO) is a reinforcement
learning framework that extends Proximal Policy Optimization (PPO) and Group
Relative Policy Optimization (GRPO) by incorporating empirical multi-sample
action evaluation while preserving the stability of value function-based
learning. Unlike DeepSeek GRPO, which eliminates the value function in favor of
purely empirical reward estimation, Hybrid GRPO introduces a structured
advantage computation method that balances empirical action sampling with
bootstrapped value estimation. This approach enhances sample efficiency,
improves learning stability, and mitigates variance amplification observed in
purely empirical methods. A detailed mathematical comparison between PPO,
DeepSeek GRPO, and Hybrid GRPO is presented, highlighting key differences in
advantage estimation and policy updates. Experimental validation in a
controlled reinforcement learning environment demonstrates that Hybrid GRPO
achieves superior convergence speed, more stable policy updates, and improved
sample efficiency compared to existing methods. Several extensions to Hybrid
GRPO are explored, including entropy-regularized sampling, hierarchical
multi-step sub-sampling, adaptive reward normalization, and value-based action
selection. Beyond reinforcement learning in simulated environments, Hybrid GRPO
provides a scalable framework for bridging the gap between large language
models (LLMs) and real-world agent-based decision-making. By integrating
structured empirical sampling with reinforcement learning stability mechanisms,
Hybrid GRPO has potential applications in autonomous robotics, financial
modeling, and AI-driven control systems. These findings suggest that Hybrid
GRPO serves as a robust and adaptable reinforcement learning methodology,
paving the way for further advancements in policy optimization.
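A minimal sketch of the hybrid advantage idea follows; the blending weight, the value input, and the normalization are illustrative choices, not the paper's exact formula. For a group of actions sampled from one state, the baseline blends the empirical group mean (GRPO-style) with a learned value estimate (PPO-style).

```python
import numpy as np

def hybrid_advantage(rewards, value, mix=0.5):
    """Blend the empirical group baseline (mean reward over the sampled
    group) with a bootstrapped value estimate, then normalize to control
    variance. `mix=1` recovers a purely empirical GRPO-like baseline,
    `mix=0` a purely value-based PPO-like baseline."""
    rewards = np.asarray(rewards, dtype=float)
    baseline = mix * rewards.mean() + (1 - mix) * value
    adv = rewards - baseline
    return adv / (rewards.std() + 1e-8)            # variance control

adv = hybrid_advantage(rewards=[1.0, 0.5, 2.0, 0.1], value=0.8)
print(adv)
```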
|
2502.01654
|
Predicting concentration levels of air pollutants by transfer learning
and recurrent neural network
|
cs.LG cs.NE physics.ao-ph
|
Air pollution (AP) poses a great threat to human health, and people are
paying more attention than ever to its prediction. Accurate prediction of AP
helps people plan their outdoor activities and aids in protecting human
health. In this paper, long short-term memory (LSTM) recurrent neural networks
(RNNs) have been used to predict the future concentration of air pollutants
(APS) in Macau. Additionally, meteorological data and data on the concentration
of APS have been utilized. Moreover, in Macau some air quality monitoring
stations (AQMSs) have fewer observations overall, and some AQMSs recorded
fewer observations of certain types of APS. Therefore, transfer learning and
pre-trained neural networks have been employed to help AQMSs with fewer
observations build neural networks with high prediction accuracy. The
experimental sample covers a period of more than 12 years and includes daily
measurements of several APS as well as more classical meteorological values.
Records from five stations, four of which are AQMSs and the remaining one an
automatic weather station, were prepared from this period and processed with
computational intelligence techniques to build a prediction knowledge-based
system. As shown by the experiments, LSTM RNNs initialized with transfer
learning achieve higher prediction accuracy and incur shorter training time
than randomly initialized recurrent neural networks.
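The transfer-learning initialization can be sketched minimally (the weight layout and sizes are hypothetical, and the training loops are omitted): a model for a data-poor station starts from weights pre-trained on a data-rich station instead of random values, then is fine-tuned on the limited local records.

```python
import numpy as np

rng = np.random.default_rng(0)

def new_lstm_weights(n_in, n_hidden):
    """Randomly initialized weights for a single LSTM layer (4 gates stacked)."""
    return {"W": rng.normal(scale=0.1, size=(4 * n_hidden, n_in + n_hidden)),
            "b": np.zeros(4 * n_hidden)}

# 1) "Pre-train" on the data-rich station (training loop omitted here).
source_model = new_lstm_weights(n_in=6, n_hidden=32)

# 2) Transfer: initialize the data-poor station's model from the source
#    weights rather than at random, then fine-tune on its own records.
target_model = {k: v.copy() for k, v in source_model.items()}

assert np.allclose(target_model["W"], source_model["W"])
```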
|
2502.01655
|
A binary PSO based ensemble under-sampling model for rebalancing
imbalanced training data
|
cs.LG cs.AI cs.NE
|
Ensemble techniques and under-sampling techniques are both effective tools
for imbalanced classification problems. In this paper, a novel ensemble
method is proposed that combines the advantages of ensemble learning for
biasing classifiers with a new under-sampling method. The under-sampling
method, named Binary PSO instance selection, works together with ensemble
classifiers to find the most suitable size and combination of majority-class
samples to pair with the minority-class samples in a new dataset. The
proposed method adopts a multi-objective strategy; its contribution is a
notable improvement in imbalanced-classification performance while preserving
the integrity of the original dataset as much as possible. We evaluated the
proposed method on imbalanced datasets and compared its performance with
several conventional ensemble methods. Experiments were also conducted on
these datasets using an improved version in which the ensemble classifiers
are wrapped inside the Binary PSO instance selection. According to the
experimental results, our proposed methods outperform single ensemble
methods, state-of-the-art under-sampling methods, and combinations of these
methods with the traditional PSO instance selection algorithm.
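The selection mechanism can be sketched with a minimal binary PSO (the inertia and acceleration constants, iteration budget, and the toy fitness below are illustrative assumptions; the paper's fitness wraps ensemble classifiers): each particle is a 0/1 mask over the majority-class instances.

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_pso_select(n_majority, fitness, n_particles=10, iters=30):
    """Minimal binary PSO: each particle is a 0/1 mask over majority-class
    instances; a sigmoid of its velocity gives each bit's probability of
    being 1 (the standard binary-PSO update rule)."""
    X = (rng.random((n_particles, n_majority)) < 0.5).astype(float)
    V = np.zeros_like(X)
    pbest = X.copy()
    pbest_fit = np.array([fitness(x) for x in X])
    gbest = pbest[np.argmax(pbest_fit)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gbest - X)
        X = (rng.random(X.shape) < 1.0 / (1.0 + np.exp(-V))).astype(float)
        fit = np.array([fitness(x) for x in X])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = X[improved], fit[improved]
        gbest = pbest[np.argmax(pbest_fit)].copy()
    return gbest

# Toy fitness standing in for ensemble-classifier performance: prefer masks
# whose number of selected majority samples matches the minority-class size.
n_majority, n_minority = 100, 20
mask = binary_pso_select(n_majority,
                         fitness=lambda x: -abs(x.sum() - n_minority))
assert mask.shape == (n_majority,)
assert set(np.unique(mask)) <= {0.0, 1.0}
```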
|
2502.01656
|
Imperfect Knowledge Management -- A Case Study in a Chilean
Manufacturing Company
|
cs.DB cs.CY
|
To conceptualize living systems based on the processes that create them,
rather than on their interactions with the environment as in systems theory,
Maturana and Varela (1969) at the University of Chile introduced the term
autopoiesis (from the Greek for self and production). This concept emphasizes
autonomy as the defining feature of living systems, describing them as
self-sustaining entities that preserve their identity and unity through
continuous self-renewal. Furthermore, such systems can only be understood in
reference to themselves, as all internal activities are inherently
self-determined through self-production and self-referentiality. This thesis
introduces the Fuzzy Autopoietic Knowledge Management (FAKM) model, which
integrates the system theory of living systems, the cybernetic theory of viable
systems, and the autopoiesis theory of autopoietic systems. The goal is to move
beyond traditional knowledge management models that rely on Cartesian dualism
(cognition/action) where knowledge is treated as symbolic information
processing. Instead, the FAKM model adopts a dualism of organization/structure
to define an autopoietic system within a sociotechnical approach. The model is
experimentally applied to a manufacturing company in the Maule Region, south of
Santiago, Chile.
|
2502.01657
|
Improving Rule-based Reasoning in LLMs via Neurosymbolic Representations
|
cs.LG cs.AI
|
Large language models (LLMs) continue to face challenges in reliably solving
reasoning tasks, particularly those that involve precise rule following, as
often found in mathematical reasoning tasks. This paper introduces a novel
neurosymbolic method that improves LLM reasoning by encoding hidden states into
neurosymbolic vectors, allowing for problem-solving within a neurosymbolic
vector space. The results are decoded and combined with the original hidden
state, boosting the model's performance on numerical reasoning tasks. By
offloading computation through neurosymbolic representations, this method
improves efficiency, reliability, and interpretability. Our experimental
results demonstrate an average of $82.86\%$ lower cross entropy loss and
$24.50$ times more problems correctly solved on a suite of mathematical
reasoning problems compared to chain-of-thought prompting and supervised
fine-tuning (LoRA), while at the same time not hindering the performance of the
LLM on other tasks.
|
2502.01658
|
Large Language Models' Accuracy in Emulating Human Experts' Evaluation
of Public Sentiments about Heated Tobacco Products on Social Media
|
cs.CL cs.CY cs.SI
|
Sentiment analysis of alternative tobacco products on social media is
important for tobacco control research. Large Language Models (LLMs) can help
streamline the labor-intensive human sentiment analysis process. This study
examined the accuracy of LLMs in replicating human sentiment evaluation of
social media messages about heated tobacco products (HTPs).
The research used GPT-3.5 and GPT-4 Turbo to classify 500 Facebook and 500
Twitter messages, including anti-HTPs, pro-HTPs, and neutral messages. The
models evaluated each message up to 20 times, and their majority label was
compared to human evaluators.
Results showed that GPT-3.5 accurately replicated human sentiment 61.2% of
the time for Facebook messages and 57.0% for Twitter messages. GPT-4 Turbo
performed better, with 81.7% accuracy for Facebook and 77.0% for Twitter. Using
three response instances, GPT-4 Turbo achieved 99% of the accuracy of twenty
instances. GPT-4 Turbo also had higher accuracy for anti- and pro-HTPs messages
compared to neutral ones. Misclassifications by GPT-3.5 often involved anti- or
pro-HTPs messages being labeled as neutral or irrelevant, while GPT-4 Turbo
showed improvements across all categories.
In conclusion, LLMs can be used for sentiment analysis of HTP-related social
media messages, with GPT-4 Turbo reaching around 80% accuracy compared to human
experts. However, there is a risk of misrepresenting overall sentiment due to
differences in accuracy across sentiment categories.
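The aggregation step described above (repeated LLM classifications reduced to a majority label, then compared to human evaluators) can be sketched as:

```python
from collections import Counter

def majority_label(llm_labels):
    """Aggregate repeated LLM classifications of one message by majority
    vote; the resulting label is what gets compared to the human label."""
    return Counter(llm_labels).most_common(1)[0][0]

# Hypothetical repeated classifications of a single message
votes = ["anti", "anti", "neutral", "anti", "pro"]
assert majority_label(votes) == "anti"
```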
|
2502.01659
|
Longer Attention Span: Increasing Transformer Context Length with Sparse
Graph Processing Techniques
|
cs.LG cs.AI cs.DC cs.PF
|
Transformers have demonstrated great success in numerous domains including
natural language processing and bioinformatics. This success stems from the use
of the attention mechanism by these models in order to represent and propagate
pairwise interactions between individual tokens of sequential data. However,
the primary limitation of this operation is its quadratic memory and time
complexity in relation to the input's context length - the length of a sequence
over which the interactions need to be captured. This significantly limits the
length of sequences that can be inferred upon by these models. Extensive
research has been conducted to reduce the number of pairwise interactions to
sub-quadratic in relation to the context length by introducing sparsity into
the attention mechanism through the development of sparse attention masks.
However, efficient implementations that achieve "true sparsity" are lacking.
In this work, we address this issue by proposing a graph computing view of
attention where tokens are perceived as nodes of the graph and the attention
mask determines the edges of the graph. Using this view, we develop graph
processing algorithms to implement the attention mechanism. Both theoretically
and empirically, we demonstrate that our algorithms only perform the needed
computations, i.e., they are work optimal. We also perform extensive
experimentation using popular attention masks to explore the impact of sparsity
on execution time and achievable context length. Our experiments demonstrate
significant speedups in execution times compared to state-of-the-art attention
implementations such as FlashAttention for large sequence lengths. We also
demonstrate that our algorithms are able to achieve extremely long sequence
lengths of as high as 160 million on a single NVIDIA A100 GPU (SXM4 80GB).
|
2502.01660
|
Employee Turnover Prediction: A Cross-component Attention Transformer
with Consideration of Competitor Influence and Contagious Effect
|
cs.LG cs.AI
|
Employee turnover refers to an individual's termination of employment from
the current organization. It is one of the most persistent challenges for
firms, especially those in the Information Technology (IT) industry, which
confront high turnover rates. Effective prediction of potential employee
turnovers benefits multiple stakeholders such as firms and online recruiters.
Prior studies have focused on either the turnover prediction within a single
firm or the aggregated employee movement among firms. Predicting individual
employees' turnover across multiple firms has gained little attention in the
literature and thus remains a great research challenge. In this
study, we propose a novel deep learning approach based on job embeddedness
theory to predict the turnovers of individual employees across different firms.
Through extensive experimental evaluations using a real-world dataset, our
developed method demonstrates superior performance over several
state-of-the-art benchmark methods. Additionally, we estimate the cost savings
for recruiters from using our turnover prediction solution and interpret the
attributions of various driving factors to employee turnover to showcase its
practical business value.
|
2502.01662
|
Speculative Ensemble: Fast Large Language Model Ensemble via Speculation
|
cs.CL cs.AI cs.LG
|
Ensemble methods enhance Large Language Models (LLMs) by combining multiple
models but suffer from high computational costs. In this paper, we introduce
Speculative Ensemble, a novel framework that accelerates LLM ensembles without
sacrificing performance, inspired by Speculative Decoding, where a small
proposal model generates tokens sequentially, and a larger target model
verifies them in parallel. Our approach builds on two key insights: (1) the
verification distribution can be the ensemble distribution of both the proposal
and target models, and (2) alternating each model as the proposer and verifier
can further enhance efficiency. We generalize this method to ensembles with n
models and theoretically prove that Speculative Ensemble (SE) is never slower
than a standard ensemble, typically achieving faster speed. Extensive
experiments demonstrate
speed improvements of 1.11x-2.23x over standard ensemble techniques without
compromising generation quality. Our code is available at
https://github.com/Kamichanw/Speculative-Ensemble/
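Insight (1) above, that the verification target can be the ensemble distribution, can be sketched with single-token speculative sampling (toy fixed distributions; the real system works over sequences of proposed tokens). The accept/resample rule guarantees the output exactly follows the ensemble distribution q.

```python
import numpy as np

rng = np.random.default_rng(0)

def speculative_ensemble_step(p_small, p_large):
    """One token of speculative sampling whose verification target is the
    ensemble distribution q = (p_small + p_large) / 2. The small model
    proposes from p_small; accept with prob min(1, q/p_small), otherwise
    resample from the residual max(0, q - p_small). This preserves q
    exactly (standard speculative-sampling guarantee)."""
    q = (p_small + p_large) / 2
    tok = rng.choice(len(p_small), p=p_small)
    if rng.random() < min(1.0, q[tok] / p_small[tok]):
        return tok
    resid = np.maximum(q - p_small, 0)
    return rng.choice(len(q), p=resid / resid.sum())

p_small = np.array([0.7, 0.2, 0.1])
p_large = np.array([0.1, 0.6, 0.3])
samples = [speculative_ensemble_step(p_small, p_large) for _ in range(20000)]
freq = np.bincount(samples, minlength=3) / len(samples)
q = (p_small + p_large) / 2
assert np.allclose(freq, q, atol=0.02)   # empirical dist matches the ensemble
```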
|
2502.01663
|
Explainable AI for Sentiment Analysis of Human Metapneumovirus (HMPV)
Using XLNet
|
cs.CL
|
In 2024, the outbreak of Human Metapneumovirus (HMPV) in China, which later
spread to the UK and other countries, raised significant public concern. While
HMPV typically causes mild symptoms, its effects on vulnerable individuals
prompted health authorities to emphasize preventive measures. This paper
explores how sentiment analysis can enhance our understanding of public
reactions to HMPV by analyzing social media data. We apply transformer models,
particularly XLNet, achieving 93.50% accuracy in sentiment classification.
Additionally, we use explainable AI (XAI) through SHAP to improve model
transparency.
|
2502.01665
|
Entropy-based measure of rock sample heterogeneity derived from micro-CT
images
|
eess.IV cs.CV
|
This study presents an automated method for objectively measuring rock
heterogeneity via raw X-ray micro-computed tomography (micro-CT) images,
thereby addressing the limitations of traditional methods, which are
time-consuming, costly, and subjective. Unlike approaches that rely on image
segmentation, the proposed method processes micro-CT images directly,
identifying textural heterogeneity. The image is partitioned into subvolumes,
where attributes are calculated for each one, with entropy serving as a measure
of uncertainty. This method adapts to varying sample characteristics and
enables meaningful comparisons across distinct sets of samples. It was applied
to a dataset consisting of 4,935 images of cylindrical plug samples derived
from Brazilian reservoirs. The results showed that the selected attributes play
a key role in producing desirable outcomes, such as strong correlations with
structural heterogeneity. To assess the effectiveness of our method, we used
evaluations provided by four experts who classified 175 samples as either
heterogeneous or homogeneous, where each expert assessed a different number of
samples. One of the presented attributes demonstrated a statistically
significant difference between the homogeneous and heterogeneous samples
labelled by all the experts, whereas the other two attributes yielded
nonsignificant differences for three out of the four experts. The method was
shown to better align with the expert choices than traditional textural
attributes known for extracting heterogeneous properties from images. This
textural heterogeneity measure provides an additional parameter that can assist
in rock characterization, and the automated approach ensures easy reproduction
and high cost-effectiveness.
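A minimal 2D sketch of the approach follows; the block size, bin count, and the mean as the summary statistic are illustrative choices, not the paper's attribute definitions. The image is partitioned into subvolumes, a grey-level histogram entropy is computed per subvolume, and the per-block entropies are summarized.

```python
import numpy as np

def heterogeneity_entropy(volume, block=8, bins=32):
    """Entropy-based heterogeneity sketch: split the image into blocks,
    compute each block's grey-level histogram entropy (in bits), and
    return the mean over blocks."""
    entropies = []
    h, w = volume.shape
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            sub = volume[i:i + block, j:j + block]
            hist, _ = np.histogram(sub, bins=bins, range=(0.0, 1.0))
            p = hist / hist.sum()
            p = p[p > 0]
            entropies.append(-np.sum(p * np.log2(p)))
    return float(np.mean(entropies))

rng = np.random.default_rng(0)
homogeneous = np.full((64, 64), 0.5) + rng.normal(scale=0.01, size=(64, 64))
heterogeneous = rng.random((64, 64))
# A textured (uniform-random) image scores higher than a near-constant one
assert heterogeneity_entropy(heterogeneous) > heterogeneity_entropy(homogeneous)
```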
|
2502.01666
|
Leveraging Stable Diffusion for Monocular Depth Estimation via Image
Semantic Encoding
|
cs.CV cs.LG
|
Monocular depth estimation involves predicting depth from a single RGB image
and plays a crucial role in applications such as autonomous driving, robotic
navigation, 3D reconstruction, etc. Recent advancements in learning-based
methods have significantly improved depth estimation performance. Generative
models, particularly Stable Diffusion, have shown remarkable potential in
recovering fine details and reconstructing missing regions through large-scale
training on diverse datasets. However, models like CLIP, which rely on textual
embeddings, face limitations in complex outdoor environments where rich context
information is needed. These limitations reduce their effectiveness in such
challenging scenarios. Here, we propose a novel image-based semantic embedding
that extracts contextual information directly from visual features,
significantly improving depth prediction in complex environments. Evaluated on
the KITTI and Waymo datasets, our method achieves performance comparable to
state-of-the-art models while addressing the shortcomings of CLIP embeddings in
handling outdoor scenes. By leveraging visual semantics directly, our method
demonstrates enhanced robustness and adaptability in depth estimation tasks,
showcasing its potential for application to other visual perception tasks.
|
2502.01667
|
Refining Alignment Framework for Diffusion Models with Intermediate-Step
Preference Ranking
|
cs.LG cs.AI
|
Direct preference optimization (DPO) has shown success in aligning diffusion
models with human preference. Previous approaches typically assume a consistent
preference label between final generations and noisy samples at intermediate
steps, and directly apply DPO to these noisy samples for fine-tuning. However,
we theoretically identify inherent issues in this assumption and its impacts on
the effectiveness of preference alignment. We first demonstrate the inherent
issues from two perspectives: gradient direction and preference order, and then
propose a Tailored Preference Optimization (TailorPO) framework for aligning
diffusion models with human preference, underpinned by theoretical
insights. Our approach directly ranks intermediate noisy samples based on their
step-wise reward, and effectively resolves the gradient direction issues
through a simple yet efficient design. Additionally, we incorporate the
gradient guidance of diffusion models into preference alignment to further
enhance the optimization effectiveness. Experimental results demonstrate that
our method significantly improves the model's ability to generate aesthetically
pleasing and human-preferred images.
|
2502.01669
|
Addressing Delayed Feedback in Conversion Rate Prediction via Influence
Functions
|
cs.LG cs.AI cs.IR
|
In the realm of online digital advertising, conversion rate (CVR) prediction
plays a pivotal role in maximizing revenue under cost-per-conversion (CPA)
models, where advertisers are charged only when users complete specific
actions, such as making a purchase. A major challenge in CVR prediction lies in
the delayed feedback problem-conversions may occur hours or even weeks after
initial user interactions. This delay complicates model training, as recent
data may be incomplete, leading to biases and diminished performance. Although
existing methods attempt to address this issue, they often fall short in
adapting to evolving user behaviors and depend on auxiliary models, which
introduces computational inefficiencies and the risk of model inconsistency. In
this work, we propose an Influence Function-empowered framework for Delayed
Feedback Modeling (IF-DFM). IF-DFM leverages influence functions to estimate
how newly acquired and delayed conversion data impact model parameters,
enabling efficient parameter updates without the need for full retraining.
Additionally, we present a scalable algorithm that efficiently computes
parameter updates by reframing the inverse Hessian-vector product as an
optimization problem, striking a balance between computational efficiency and
effectiveness. Extensive experiments on benchmark datasets demonstrate that
IF-DFM consistently surpasses state-of-the-art methods, significantly enhancing
both prediction accuracy and model adaptability.
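IF-DFM's central computational step, reframing the inverse Hessian-vector product as an optimization problem, is commonly realized with conjugate gradient: rather than inverting H, one solves H x = g iteratively using only Hessian-vector products. A minimal pure-Python sketch under that standard formulation (illustrative, not the paper's implementation; the toy `hvp` and gradient `g` are assumptions):

```python
def dot(u, v):
    """Inner product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def conjugate_gradient(hvp, g, iters=100, tol=1e-10):
    """Approximate x = H^{-1} g given only a Hessian-vector product hvp(v).

    Assumes H is symmetric positive definite, as regularized Hessians are
    near a local minimum of the training loss.
    """
    x = [0.0] * len(g)
    r = list(g)          # residual r = g - H x (x starts at zero)
    p = list(r)          # search direction
    rs = dot(r, r)
    for _ in range(iters):
        hp = hvp(p)
        alpha = rs / dot(p, hp)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * hpi for ri, hpi in zip(r, hp)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Toy 2x2 Hessian H = [[4, 1], [1, 3]] supplied only through its HVP.
def hvp(v):
    return [4 * v[0] + 1 * v[1], 1 * v[0] + 3 * v[1]]

x = conjugate_gradient(hvp, [1.0, 2.0])  # solves H x = g
```

Because each step needs only an HVP (obtainable with one extra backward pass), the solver avoids ever materializing the O(d^2) Hessian.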
|
2502.01670
|
A Hardware-Efficient Photonic Tensor Core: Accelerating Deep Neural
Networks with Structured Compression
|
cs.AR cs.ET cs.LG
|
Recent advancements in artificial intelligence (AI) and deep neural networks
(DNNs) have revolutionized numerous fields, enabling complex tasks by
extracting intricate features from large datasets. However, the exponential
growth in computational demands has outstripped the capabilities of traditional
electrical hardware accelerators. Optical computing offers a promising
alternative due to its inherent advantages of parallelism, high computational
speed, and low power consumption. Yet, current photonic integrated circuits
(PICs) designed for general matrix multiplication (GEMM) are constrained by
large footprints, high costs of electro-optical (E-O) interfaces, and high
control complexity, limiting their scalability. To overcome these challenges,
we introduce a block-circulant photonic tensor core (CirPTC) for a
structure-compressed optical neural network (StrC-ONN) architecture. By
applying a structured compression strategy to weight matrices, StrC-ONN
significantly reduces model parameters and hardware requirements while
preserving the universal representability of networks and maintaining
comparable expressivity. Additionally, we propose a hardware-aware training
framework to compensate for on-chip nonidealities to improve model robustness
and accuracy. We experimentally demonstrate image processing and classification
tasks, achieving up to a 74.91% reduction in trainable parameters while
maintaining competitive accuracies. Performance analysis expects a
computational density of 5.84 tera operations per second (TOPS) per mm^2 and a
power efficiency of 47.94 TOPS/W, marking a 6.87-times improvement achieved
through the hardware-software co-design approach. By reducing both hardware
requirements and control complexity across multiple dimensions, this work
explores a new pathway to push the limits of optical computing in the pursuit
of high efficiency and scalability.
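For context on the structured compression: a circulant block is fully determined by its first column, so an n x n weight block stores n parameters instead of n^2, and its matrix-vector product can be computed directly (or in O(n log n) via FFT). A small generic sketch of the circulant product (illustrative only, not the photonic implementation):

```python
def circulant_matvec(c, x):
    """Multiply the circulant matrix with first column c by vector x.

    Entry (i, j) of the matrix is c[(i - j) mod n], so n parameters
    represent an n x n block -- the saving block-circulant layers exploit.
    """
    n = len(c)
    return [sum(c[(i - j) % n] * x[j] for j in range(n)) for i in range(n)]

# Multiplying by the first basis vector recovers the stored first column.
y = circulant_matvec([1.0, 2.0, 3.0], [1.0, 0.0, 0.0])
```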
|
2502.01671
|
Life-Cycle Emissions of AI Hardware: A Cradle-To-Grave Approach and
Generational Trends
|
cs.AR cs.AI
|
Specialized hardware accelerators aid the rapid advancement of artificial
intelligence (AI), and their efficiency impacts AI's environmental
sustainability. This study presents the first published comprehensive AI
accelerator life-cycle assessment (LCA) of greenhouse gas emissions, including
the first published manufacturing emissions of an AI accelerator.
Our analysis of five Tensor Processing Units (TPUs) encompasses all stages of
the hardware lifespan - from raw material extraction, manufacturing, and
disposal, to energy consumption during development, deployment, and serving of
AI models. Using first-party data, it offers the most comprehensive evaluation
to date of AI hardware's environmental impact. We include detailed descriptions
of our LCA to act as a tutorial, road map, and inspiration for other computer
engineers to perform similar LCAs to help us all understand the environmental
impacts of our chips and of AI.
A byproduct of this study is the new metric compute carbon intensity (CCI)
that is helpful in evaluating AI hardware sustainability and in estimating the
carbon footprint of training and inference. This study shows that CCI improves
3x from TPU v4i to TPU v6e.
Moreover, while this paper's focus is on hardware, software advancements
leverage and amplify these gains.
|
2502.01672
|
Doubly Robust Monte Carlo Tree Search
|
stat.ML cs.AI cs.LG
|
We present Doubly Robust Monte Carlo Tree Search (DR-MCTS), a novel algorithm
that integrates Doubly Robust (DR) off-policy estimation into Monte Carlo Tree
Search (MCTS) to enhance sample efficiency and decision quality in complex
environments. Our approach introduces a hybrid estimator that combines MCTS
rollouts with DR estimation, offering theoretical guarantees of unbiasedness
and variance reduction under specified conditions. Empirical evaluations in
Tic-Tac-Toe and the partially observable VirtualHome environment demonstrate
DR-MCTS's superior performance over standard MCTS. In Tic-Tac-Toe, DR-MCTS
achieves an 88% win rate compared to a 10% win rate for standard MCTS. In
compound VirtualHome tasks, DR-MCTS attains a 20.7% success rate versus 10.3%
for standard MCTS. Our scaling analysis reveals that DR-MCTS exhibits better
sample efficiency, notably outperforming standard MCTS with larger language
models while using a smaller model. These results underscore DR-MCTS's
potential for efficient decision-making in complex, real-world scenarios where
sample efficiency is paramount.
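The hybrid estimator builds on the textbook step-wise doubly robust recursion, which corrects a learned value estimate with an importance-weighted residual. A minimal sketch of that standard recursion (an illustration, not the paper's exact hybrid; `v_hat`, `q_hat`, and the per-step ratio `rho` are stand-ins):

```python
def doubly_robust(trajectory, v_hat, q_hat, gamma=1.0):
    """Step-wise doubly robust off-policy value estimate.

    trajectory: list of (state, action, reward, rho) tuples, where rho is
    the per-step importance ratio pi_target(a|s) / pi_behavior(a|s).
    v_hat / q_hat: learned state- and state-action-value estimates. The
    correction term has zero mean under the target policy, so the estimate
    stays unbiased even with imperfect value models ("doubly robust").
    """
    est = 0.0  # DR estimate of the return from the step after the last one
    for state, action, reward, rho in reversed(trajectory):
        est = v_hat[state] + rho * (reward + gamma * est - q_hat[(state, action)])
    return est

# One-step example: the model undervalues the return, and the observed
# reward corrects it.
val = doubly_robust([("s", "a", 1.0, 1.0)], {"s": 0.5}, {("s", "a"): 0.5})
```

With `rho = 0` the estimate falls back to the model term `v_hat` alone, which is the mechanism that lets model rollouts substitute for unlikely actions.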
|
2502.01673
|
Multilingual State Space Models for Structured Question Answering in
Indic Languages
|
cs.CL cs.AI
|
The diversity and complexity of Indic languages present unique challenges for
natural language processing (NLP) tasks, particularly in the domain of question
answering (QA). To address these challenges, this paper explores the application
of State Space Models (SSMs) to build efficient and contextually aware QA
systems tailored for Indic languages. SSMs are particularly suited for this
task due to their ability to model long-term and short-term dependencies in
sequential data, making them well-equipped to handle the rich morphology,
complex syntax, and contextual intricacies characteristic of Indian languages.
We evaluated multiple SSM architectures across diverse datasets representing
various Indic languages and conducted a comparative analysis of their
performance. Our results demonstrate that these models effectively capture
linguistic subtleties, leading to significant improvements in question
interpretation, context alignment, and answer generation. This work represents
the first application of SSMs to question answering tasks in Indic languages,
establishing a foundational benchmark for future research in this domain. We
propose enhancements to existing SSM frameworks, optimizing their applicability
to low-resource settings and multilingual scenarios prevalent in Indic
languages.
|
2502.01674
|
Efficient Brain Tumor Classification with Lightweight CNN Architecture:
A Novel Approach
|
eess.IV cs.CV
|
Brain tumor classification using MRI images is critical in medical
diagnostics, where early and accurate detection significantly impacts patient
outcomes. While recent advancements in deep learning (DL), particularly CNNs,
have shown promise, many models struggle with balancing accuracy and
computational efficiency and often lack robustness across diverse datasets. To
address these challenges, we propose a novel model architecture integrating
separable convolutions and squeeze and excitation (SE) blocks, designed to
enhance feature extraction while maintaining computational efficiency. Our
model further incorporates batch normalization and dropout to prevent
overfitting, ensuring stable and reliable performance. The proposed model is
lightweight because it uses separable convolutions, which reduce the number of
parameters, and incorporates global average pooling instead of fully connected
layers to minimize computational complexity while maintaining high accuracy.
Extensive experiments show that our model outperforms comparable models by
about 0.5% to 1.0% in accuracy and 1.5% to 2.5% in loss reduction, achieving a
validation accuracy of 99.22% and a test accuracy of 98.44%. These results
highlight the model's ability to generalize effectively across different brain
tumor types, offering a robust tool for clinical applications. Our work sets
a new benchmark in the field, providing a foundation for future research in
optimizing the accuracy and efficiency of DL models for medical image analysis.
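The lightweight claim follows from simple arithmetic: a standard k x k convolution needs k*k*C_in*C_out weights, while a depthwise-separable one needs k*k*C_in (depthwise) plus C_in*C_out (pointwise). A quick sketch of the generic formulas (the layer sizes below are illustrative, not the paper's architecture):

```python
def standard_conv_params(k, c_in, c_out):
    """Weight count of a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Weight count of a depthwise-separable convolution:
    one k x k filter per input channel, then a 1x1 pointwise mix."""
    return k * k * c_in + c_in * c_out

std = standard_conv_params(3, 64, 128)   # 73,728 weights
sep = separable_conv_params(3, 64, 128)  # 8,768 weights, roughly 8.4x fewer
```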
|
2502.01675
|
Semantic Communication based on Generative AI: A New Approach to Image
Compression and Edge Optimization
|
cs.CV cs.AI cs.LG
|
As digital technologies advance, communication networks face challenges in
handling the vast data generated by intelligent devices. Autonomous vehicles,
smart sensors, and IoT systems necessitate new paradigms. This thesis addresses
these challenges by integrating semantic communication and generative models
for optimized image compression and edge network resource allocation. Unlike
bit-centric systems, semantic communication prioritizes transmitting meaningful
data specifically selected to convey meaning rather than to obtain a faithful
representation of the original data. The communication infrastructure can thus
benefit from significant improvements in bandwidth efficiency and latency
reduction. Central to this work is the design of semantic-preserving image
compression using Generative Adversarial Networks and Denoising Diffusion
Probabilistic Models. These models compress images by encoding only
semantically relevant features, allowing for high-quality reconstruction with
minimal transmission. Additionally, a Goal-Oriented edge network optimization
framework is introduced, leveraging the Information Bottleneck principle and
stochastic optimization to dynamically allocate resources and enhance
efficiency. By integrating semantic communication into edge networks, this
approach balances computational efficiency and communication effectiveness,
making it suitable for real-time applications. The thesis compares
semantic-aware models with conventional image compression techniques using
classical and semantic evaluation metrics. Results demonstrate the potential of
combining generative AI and semantic communication to create more efficient
semantic-goal-oriented communication networks that meet the demands of modern
data-driven applications.
|
2502.01676
|
Benchmark on Peer Review Toxic Detection: A Challenging Task with a New
Dataset
|
cs.CL cs.CY
|
Peer review is crucial for advancing and improving science through
constructive criticism. However, toxic feedback can discourage authors and
hinder scientific progress. This work explores an important but underexplored
area: detecting toxicity in peer reviews. We first define toxicity in peer
reviews across four distinct categories and curate a dataset of peer reviews
from the OpenReview platform, annotated by human experts according to these
definitions. Leveraging this dataset, we benchmark a variety of models,
including a dedicated toxicity detection model, a sentiment analysis model,
several open-source large language models (LLMs), and two closed-source LLMs.
Our experiments explore the impact of different prompt granularities, from
coarse to fine-grained instructions, on model performance. Notably,
state-of-the-art LLMs like GPT-4 exhibit low alignment with human judgments
under simple prompts but achieve improved alignment with detailed instructions.
Moreover, the model's confidence score is a good indicator of better alignment
with human judgments. For example, GPT-4 achieves a Cohen's Kappa score of 0.56
with human judgments, which increases to 0.63 when using only predictions with
a confidence score higher than 95%. Overall, our dataset and benchmarks
underscore the need for continued research to enhance toxicity detection
capabilities of LLMs. By addressing this issue, our work aims to contribute to
a healthy and responsible environment for constructive academic discourse and
scientific collaboration.
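The agreement figures above are Cohen's kappa, kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and p_e is the agreement expected by chance; the confidence-filtering step simply recomputes kappa on the subset of predictions above a threshold. A self-contained sketch (the toy labels and threshold are illustrative, not the paper's data):

```python
def cohens_kappa(a, b):
    """Cohen's kappa between two equal-length label sequences."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    labels = set(a) | set(b)
    # Chance agreement from each rater's marginal label frequencies.
    p_e = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

def kappa_above_confidence(human, model, conf, threshold=0.95):
    """Kappa restricted to predictions the model is confident about."""
    kept = [(h, m) for h, m, c in zip(human, model, conf) if c >= threshold]
    return cohens_kappa([p[0] for p in kept], [p[1] for p in kept])
```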
|
2502.01677
|
AI Scaling: From Up to Down and Out
|
cs.LG cs.AI
|
AI Scaling has traditionally been synonymous with Scaling Up, which builds
larger and more powerful models. However, the growing demand for efficiency,
adaptability, and collaboration across diverse applications necessitates a
broader perspective. This position paper presents a holistic framework for AI
scaling, encompassing Scaling Up, Scaling Down, and Scaling Out. It argues that
while Scaling Up of models faces inherent bottlenecks, the future trajectory of
AI scaling lies in Scaling Down and Scaling Out. These paradigms address
critical technical and societal challenges, such as reducing carbon footprint,
ensuring equitable access, and enhancing cross-domain collaboration. We explore
transformative applications in healthcare, smart manufacturing, and content
creation, demonstrating how AI Scaling can enable breakthroughs in efficiency,
personalization, and global connectivity. Additionally, we highlight key
challenges, including balancing model complexity with interpretability,
managing resource constraints, and fostering ethical development. By
synthesizing these approaches, we propose a unified roadmap that redefines the
future of AI research and application, paving the way for advancements toward
Artificial General Intelligence (AGI).
|
2502.01678
|
LEAD: Large Foundation Model for EEG-Based Alzheimer's Disease Detection
|
cs.LG cs.AI cs.CE eess.SP
|
Electroencephalogram (EEG) provides a non-invasive, highly accessible, and
cost-effective solution for Alzheimer's Disease (AD) detection. However,
existing methods, whether based on manual feature extraction or deep learning,
face two major challenges: the lack of large-scale datasets for robust feature
learning and evaluation, and poor detection performance due to inter-subject
variations. To address these challenges, we curate an EEG-AD corpus containing
813 subjects, which forms the world's largest EEG-AD dataset to the best of our
knowledge. Using this unique dataset, we propose LEAD, the first large
foundation model for EEG-based AD detection. Our method encompasses an entire
pipeline, from data selection and preprocessing to self-supervised contrastive
pretraining, fine-tuning, and key setups such as subject-independent evaluation
and majority voting for subject-level detection. We pre-train the model on 11
EEG datasets and perform unified fine-tuning on 5 AD datasets. Our self-supervised
pre-training design includes sample-level and subject-level contrasting to
extract useful general EEG features. Fine-tuning is performed on 5
channel-aligned datasets together. The backbone encoder incorporates temporal
and channel embeddings to capture features across both temporal and spatial
dimensions. Our method demonstrates outstanding AD detection performance,
achieving up to a 9.86% increase in F1 score at the sample level and up to a
9.31% increase at the subject level compared to state-of-the-art methods. The results of
our model strongly confirm the effectiveness of contrastive pre-training and
channel-aligned unified fine-tuning for addressing inter-subject variation. The
source code is at https://github.com/DL4mHealth/LEAD.
|
2502.01679
|
LIBRA: Measuring Bias of Large Language Model from a Local Context
|
cs.CY cs.CL cs.LG
|
Large Language Models (LLMs) have significantly advanced natural language
processing applications, yet their widespread use raises concerns regarding
inherent biases that may reduce utility for, or cause harm to, particular social groups.
Despite the advancement in addressing LLM bias, existing research has two major
limitations. First, existing LLM bias evaluation focuses on the U.S. cultural
context, making it challenging to reveal stereotypical biases of LLMs toward
other cultures, leading to unfair development and use of LLMs. Second, current
bias evaluation often assumes models are familiar with the target social
groups. When LLMs encounter words beyond their knowledge boundaries that are
unfamiliar in their training data, they produce irrelevant results in the local
context due to hallucinations and overconfidence, which are not necessarily
indicative of inherent bias. This research addresses these limitations with a
Local Integrated Bias Recognition and Assessment Framework (LIBRA) for
measuring bias using datasets sourced from local corpora without crowdsourcing.
Implementing this framework, we develop a dataset comprising over 360,000 test
cases in the New Zealand context. Furthermore, we propose the Enhanced
Idealized CAT Score (EiCAT), integrating the iCAT score with a beyond knowledge
boundary score (bbs) and a distribution divergence-based bias measurement to
tackle the challenge of LLMs encountering words beyond knowledge boundaries.
Our results show that the BERT family, GPT-2, and Llama-3 models seldom
understand local words in different contexts. While Llama-3 exhibits larger
bias, it responds better to different cultural contexts. The code and dataset
are available at: https://github.com/ipangbo/LIBRA.
|
2502.01680
|
Neurosymbolic AI for Travel Demand Prediction: Integrating Decision Tree
Rules into Neural Networks
|
cs.LG cs.AI
|
Travel demand prediction is crucial for optimizing transportation planning,
resource allocation, and infrastructure development, ensuring efficient
mobility and economic sustainability. This study introduces a Neurosymbolic
Artificial Intelligence (Neurosymbolic AI) framework that integrates decision
tree (DT)-based symbolic rules with neural networks (NNs) to predict travel
demand, leveraging the interpretability of symbolic reasoning and the
predictive power of neural learning. The framework utilizes data from diverse
sources, including geospatial, economic, and mobility datasets, to build a
comprehensive feature set. DTs are employed to extract interpretable if-then
rules that capture key patterns, which are then incorporated as additional
features into a NN to enhance its predictive capabilities. Experimental results
show that the combined dataset, enriched with symbolic rules, consistently
outperforms standalone datasets across multiple evaluation metrics, including
Mean Absolute Error (MAE), \(R^2\), and Common Part of Commuters (CPC). Rules
selected at finer variance thresholds (e.g., 0.0001) demonstrate superior
effectiveness in capturing nuanced relationships, reducing prediction errors,
and aligning with observed commuter patterns. By merging symbolic and neural
learning paradigms, this Neurosymbolic approach achieves both interpretability
and accuracy.
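The integration step described above, turning decision-tree paths into extra inputs, can be sketched as follows: each extracted if-then rule becomes a binary indicator feature concatenated with the raw feature vector before it reaches the network (a minimal illustration; the rule tuples and example values are invented, not taken from the study):

```python
# A rule is a list of (feature_index, threshold, is_leq) conditions, i.e.
# the conjunction of splits along one root-to-leaf path of a fitted tree.
def rule_fires(x, rule):
    return all((x[i] <= t) if is_leq else (x[i] > t) for i, t, is_leq in rule)

def augment_with_rules(x, rules):
    """Append one 0/1 indicator per symbolic rule to the raw feature vector."""
    return list(x) + [1.0 if rule_fires(x, rule) else 0.0 for rule in rules]

# Example rule: "feature 0 <= 50 AND feature 1 > 10".
rules = [[(0, 50.0, True), (1, 10.0, False)]]
aug = augment_with_rules([42.0, 12.5], rules)
```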
|
2502.01681
|
DeepGate4: Efficient and Effective Representation Learning for Circuit
Design at Scale
|
cs.LG cs.AR
|
Circuit representation learning has become pivotal in electronic design
automation, enabling critical tasks such as testability analysis, logic
reasoning, power estimation, and SAT solving. However, existing models face
significant challenges in scaling to large circuits due to limitations like
over-squashing in graph neural networks and the quadratic complexity of
transformer-based models. To address these issues, we introduce DeepGate4, a
scalable and efficient graph transformer specifically designed for large-scale
circuits. DeepGate4 incorporates several key innovations: (1) an update
strategy tailored for circuit graphs, which reduces memory complexity to
sub-linear and is adaptable to any graph transformer; (2) a GAT-based sparse
transformer with global and local structural encodings for AIGs; and (3) an
inference acceleration CUDA kernel that fully exploits the unique sparsity
patterns of AIGs. Our extensive experiments on the ITC99 and EPFL benchmarks
show that DeepGate4 significantly surpasses state-of-the-art methods, achieving
15.5% and 31.1% performance improvements over the next-best models.
Furthermore, the Fused-DeepGate4 variant reduces runtime by 35.1% and memory
usage by 46.8%, making it highly efficient for large-scale circuit analysis.
These results demonstrate the potential of DeepGate4 to handle complex EDA
tasks while offering superior scalability and efficiency.
|
2502.01682
|
The exception of humour: Iconicity, Phonemic Surprisal, Memory Recall,
and Emotional Associations
|
cs.CL
|
This meta-study explores the relationships between humor, phonemic bigram
surprisal, emotional valence, and memory recall. Prior research indicates that
words with higher phonemic surprisal are more readily remembered, suggesting
that unpredictable phoneme sequences promote long-term memory recall. Emotional
valence is another well-documented factor influencing memory, with negative
experiences and stimuli typically being remembered more easily than positive
ones. Building on existing findings, this study highlights that words with
negative associations often exhibit greater surprisal and are easier to recall.
Humor, however, presents an exception: while associated with positive emotions,
humorous words also display heightened surprisal and enhanced memorability.
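Phonemic bigram surprisal in the sense used here is the negative log probability of each phoneme given the previous one, summed over the word. A toy sketch with made-up counts (the counts and phoneme strings are illustrative assumptions, not data from the meta-study):

```python
from math import log2

def word_surprisal(phonemes, bigram_counts, unigram_counts):
    """Total bigram surprisal: sum of -log2 P(b | a) over adjacent phonemes."""
    total = 0.0
    for a, b in zip(phonemes, phonemes[1:]):
        p = bigram_counts[(a, b)] / unigram_counts[a]
        total += -log2(p)
    return total

# Toy corpus: after "k", "a" and "i" are equally likely (1 bit each);
# "t" always follows "a" (0 bits).
bigrams = {("k", "a"): 1, ("k", "i"): 1, ("a", "t"): 2}
unigrams = {"k": 2, "a": 2}
s = word_surprisal(["k", "a", "t"], bigrams, unigrams)
```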
|
2502.01683
|
LLM-Powered Benchmark Factory: Reliable, Generic, and Efficient
|
cs.CL cs.AI
|
The rapid advancement of large language models (LLMs) has led to a surge in
both model supply and application demands. To facilitate effective matching
between them, reliable, generic and efficient benchmark generators are widely
needed. However, human annotators are constrained by inefficiency, and current
LLM benchmark generators not only lack generalizability but also struggle with
limited reliability, as they lack a comprehensive evaluation framework for
validation and optimization. To fill this gap, we first propose an automated
and unbiased evaluation framework, structured around four dimensions and ten
criteria. Under this framework, we carefully analyze the advantages and
weaknesses of directly prompting LLMs as generic benchmark generators. To
enhance the reliability, we introduce a series of methods to address the
identified weaknesses and integrate them as BenchMaker. Experiments across
multiple LLMs and tasks confirm that BenchMaker achieves superior or comparable
performance to human-annotated benchmarks on all metrics, highlighting its
generalizability and reliability. More importantly, it delivers highly
consistent evaluation results across 12 LLMs (0.967 Pearson correlation against
MMLU-Pro), while taking only $0.005 and 0.38 minutes per sample.
|
2502.01684
|
Leveraging Joint Predictive Embedding and Bayesian Inference in Graph
Self Supervised Learning
|
cs.LG cs.AI cs.SI
|
Graph representation learning has emerged as a cornerstone for tasks like
node classification and link prediction, yet prevailing self-supervised
learning (SSL) methods face challenges such as computational inefficiency,
reliance on contrastive objectives, and representation collapse. Existing
approaches often depend on feature reconstruction, negative sampling, or
complex decoders, which introduce training overhead and hinder generalization.
Further, current techniques that address such limitations fail to account for
the contribution of node embeddings to a given prediction in the absence of
labeled nodes. To address these limitations, we propose a novel joint embedding
predictive framework for graph SSL that eliminates contrastive objectives and
negative sampling while preserving semantic and structural information.
Additionally, we introduce a semantic-aware objective term that incorporates
pseudo-labels derived from Gaussian Mixture Models (GMMs), enhancing node
discriminability by evaluating latent feature contributions. Extensive
experiments demonstrate that our framework outperforms state-of-the-art graph
SSL methods across benchmarks, achieving superior performance without
contrastive loss or complex decoders. Key innovations include (1) a
non-contrastive, view-invariant joint embedding predictive architecture, (2)
leveraging the single-context, multiple-target relationship between subgraphs,
and (3) GMM-based pseudo-label scoring to capture semantic contributions. This
work advances graph SSL by offering a computationally efficient,
collapse-resistant paradigm that bridges spatial and semantic graph features
for downstream tasks. The code for our paper can be found at
https://github.com/Deceptrax123/JPEB-GSSL
|
2502.01685
|
Automated Extraction of Spatio-Semantic Graphs for Identifying Cognitive
Impairment
|
cs.AI cs.CL cs.CV cs.SD eess.AS
|
Existing methods for analyzing linguistic content from picture descriptions
for assessment of cognitive-linguistic impairment often overlook the
participant's visual narrative path, which typically requires eye tracking to
assess. Spatio-semantic graphs are a useful tool for analyzing this narrative
path from transcripts alone; however, they are limited by the need for manual
tagging of content information units (CIUs). In this paper, we propose an
automated approach for estimation of spatio-semantic graphs (via automated
extraction of CIUs) from the Cookie Theft picture commonly used in
cognitive-linguistic analyses. The method enables the automatic
characterization of the visual semantic path during picture description.
Experiments demonstrate that the automatic spatio-semantic graphs effectively
differentiate between cognitively impaired and unimpaired speakers. Statistical
analyses reveal that the features derived by the automated method produce
comparable results to the manual method, with even greater group differences
between clinical groups of interest. These results highlight the potential of
the automated approach for extracting spatio-semantic features in developing
clinical speech models for cognitive impairment assessment.
|
2502.01688
|
BrainOOD: Out-of-distribution Generalizable Brain Network Analysis
|
cs.LG q-bio.NC
|
In neuroscience, identifying distinct patterns linked to neurological
disorders, such as Alzheimer's and Autism, is critical for early diagnosis and
effective intervention. Graph Neural Networks (GNNs) have shown promise in
analyzing brain networks, but there are two major challenges in using GNNs: (1)
distribution shifts in multi-site brain network data, leading to poor
Out-of-Distribution (OOD) generalization, and (2) limited interpretability in
identifying key brain regions critical to neurological disorders. Existing
graph OOD methods, while effective in other domains, struggle with the unique
characteristics of brain networks. To bridge these gaps, we introduce BrainOOD,
a novel framework tailored for brain networks that enhances GNNs' OOD
generalization and interpretability. The BrainOOD framework consists of a feature
selector and a structure extractor, which incorporates various auxiliary losses
including an improved Graph Information Bottleneck (GIB) objective to recover
causal subgraphs. By aligning structure selection across brain networks and
filtering noisy features, BrainOOD offers reliable interpretations of critical
brain regions. Our approach outperforms 16 existing methods and improves
generalization to OOD subjects by up to 8.5%. Case studies highlight the
scientific validity of the patterns extracted, which aligns with the findings
in known neuroscience literature. We also propose the first OOD brain network
benchmark, which provides a foundation for future research in this field. Our
code is available at https://github.com/AngusMonroe/BrainOOD.
|
2502.01689
|
scGSDR: Harnessing Gene Semantics for Single-Cell Pharmacological
Profiling
|
q-bio.GN cs.AI
|
The rise of single-cell sequencing technologies has revolutionized the
exploration of drug resistance, revealing the crucial role of cellular
heterogeneity in advancing precision medicine. By building computational models
from existing single-cell drug response data, we can rapidly annotate cellular
responses to drugs in subsequent trials. To this end, we developed scGSDR, a
model that integrates two computational pipelines grounded in the knowledge of
cellular states and gene signaling pathways, both essential for understanding
biological gene semantics. scGSDR enhances predictive performance by
incorporating gene semantics and employs an interpretability module to identify
key pathways contributing to drug resistance phenotypes. Our extensive
validation, which included 16 experiments covering 11 drugs, demonstrates
scGSDR's superior predictive accuracy, when trained with either bulk-seq or
scRNA-seq data, achieving high AUROC, AUPR, and F1 Scores. The model's
application has extended from single-drug predictions to scenarios involving
drug combinations. Leveraging pathways of known drug target genes, we found
that scGSDR's cell-pathway attention scores are biologically interpretable,
which helped us identify other potential drug-related genes. Literature review
of top-ranking genes in our predictions such as BCL2, CCND1, the AKT family,
and PIK3CA for PLX4720; and ICAM1, VCAM1, NFKB1, NFKBIA, and RAC1 for
Paclitaxel confirmed their relevance. In conclusion, scGSDR, by incorporating
gene semantics, enhances predictive modeling of cellular responses to diverse
drugs, proving invaluable for scenarios involving both single drug and
combination therapies and effectively identifying key resistance-related
pathways, thus advancing precision medicine and targeted therapy development.
|
2502.01690
|
HuViDPO: Enhancing Video Generation through Direct Preference
Optimization for Human-Centric Alignment
|
cs.CV
|
With the rapid development of AIGC technology, significant progress has been
made in diffusion model-based technologies for text-to-image (T2I) and
text-to-video (T2V). In recent years, a few studies have introduced the
strategy of Direct Preference Optimization (DPO) into T2I tasks, significantly
enhancing human preferences in generated images. However, existing T2V
generation methods lack a well-formed pipeline with an exact loss function to
guide the alignment of generated videos with human preferences using DPO
strategies. Additionally, challenges such as the scarcity of paired video
preference data hinder effective model training. At the same time, the lack of
training datasets poses a risk of insufficient flexibility and poor video
generation quality in the generated videos. To address these problems, our work
proposes three targeted solutions. 1) Our work is the first to introduce the
DPO strategy into T2V tasks. By deriving a carefully structured loss function,
we utilize human feedback to align video generation with human preferences. We
refer to this new method as HuViDPO. 2) Our work constructs small-scale human
preference datasets for each action category and fine-tunes the model on them,
improving the aesthetic quality of the generated videos while reducing training
costs. 3) We adopt a First-Frame-Conditioned strategy, leveraging the rich
information in the first frame to guide the generation of subsequent frames,
enhancing flexibility in video generation. At the same time, we employ a
SparseCausal Attention mechanism to enhance the quality of the generated
videos. More details and examples can be accessed on our website:
https://tankowa.github.io/HuViDPO.github.io/.
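The DPO objective the abstract builds on can be illustrated with the standard pairwise form. This is the generic DPO loss, not HuViDPO's exact video-specific derivation, and all names and numbers below are illustrative:

```python
import math

def dpo_loss(logp_w_policy, logp_l_policy, logp_w_ref, logp_l_ref, beta=0.1):
    """Generic DPO loss for one preference pair: -log sigmoid(beta * margin),
    where the margin compares how much more the policy prefers the chosen
    sample (w) over the rejected one (l), relative to a frozen reference."""
    margin = beta * ((logp_w_policy - logp_w_ref) - (logp_l_policy - logp_l_ref))
    # Numerically stable -log(sigmoid(margin)) = log(1 + exp(-margin)).
    return math.log1p(math.exp(-margin))

# If the policy has learned nothing beyond the reference, the loss is log 2;
# preferring the chosen sample over the rejected one pushes it below that.
loss = dpo_loss(-1.0, -3.0, -2.0, -2.5, beta=0.5)
```

Minimizing such a loss over (preferred, dispreferred) video pairs is what aligns generation with collected human feedback.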
|
2502.01691
|
Agent-Based Uncertainty Awareness Improves Automated Radiology Report
Labeling with an Open-Source Large Language Model
|
cs.CL cs.AI
|
Reliable extraction of structured data from radiology reports using Large
Language Models (LLMs) remains challenging, especially for complex, non-English
texts like Hebrew. This study introduces an agent-based uncertainty-aware
approach to improve the trustworthiness of LLM predictions in medical
applications. We analyzed 9,683 Hebrew radiology reports from Crohn's disease
patients (from 2010 to 2023) across three medical centers. A subset of 512
reports was manually annotated for six gastrointestinal organs and 15
pathological findings, while the remaining reports were automatically annotated
using HSMP-BERT. Structured data extraction was performed using Llama 3.1
(Llama 3-8b-instruct) with Bayesian Prompt Ensembles (BayesPE), which employed
six semantically equivalent prompts to estimate uncertainty. An Agent-Based
Decision Model integrated multiple prompt outputs into five confidence levels
for calibrated uncertainty and was compared against three entropy-based models.
Performance was evaluated using accuracy, F1 score, precision, recall, and
Cohen's Kappa before and after filtering high-uncertainty cases. The
agent-based model outperformed the baseline across all metrics, achieving an F1
score of 0.3967, recall of 0.6437, and Cohen's Kappa of 0.3006. After filtering
high-uncertainty cases (greater than or equal to 0.5), the F1 score improved to
0.4787, and Kappa increased to 0.4258. Uncertainty histograms demonstrated
clear separation between correct and incorrect predictions, with the
agent-based model providing the most well-calibrated uncertainty estimates. By
incorporating uncertainty-aware prompt ensembles and an agent-based decision
model, this approach enhances the performance and reliability of LLMs in
structured data extraction from radiology reports, offering a more
interpretable and trustworthy solution for high-stakes medical applications.
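The filtering step can be sketched with a simple normalized-entropy uncertainty over a prompt ensemble. This is a toy stand-in; the paper's BayesPE weighting and five-level agent-based calibration are more involved, and the reports below are illustrative:

```python
from collections import Counter
import math

def ensemble_prediction(votes):
    """Majority label plus normalized vote entropy (0 = unanimous, 1 = even
    split) over semantically equivalent prompts for one report."""
    counts = Counter(votes)
    label, _ = counts.most_common(1)[0]
    if len(counts) == 1:
        return label, 0.0
    n = len(votes)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return label, entropy / math.log2(len(counts))

# (votes from six prompts, gold label) for three toy reports.
reports = [
    (["yes"] * 6, "yes"),              # unanimous and correct
    (["no"] * 6, "no"),                # unanimous and correct
    (["yes"] * 3 + ["no"] * 3, "no"),  # split vote, majority is wrong
]
scored = [(ensemble_prediction(v), gold) for v, gold in reports]
acc_all = sum(pred == gold for (pred, _), gold in scored) / len(scored)
# Dropping high-uncertainty cases (>= 0.5) trades coverage for reliability.
kept = [(pred, gold) for (pred, unc), gold in scored if unc < 0.5]
acc_filtered = sum(pred == gold for pred, gold in kept) / len(kept)
```

The split-vote report is exactly the kind of case whose removal drove the F1 and Kappa gains reported above.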
|
2502.01692
|
Fast Direct: Query-Efficient Online Black-box Guidance for
Diffusion-model Target Generation
|
cs.LG cs.AI
|
Guided diffusion-model generation is a promising direction for customizing
the generation process of a pre-trained diffusion-model to address the specific
downstream tasks. Existing guided diffusion models either rely on training of
the guidance model with pre-collected datasets or require the objective
functions to be differentiable. However, for most real-world tasks, the offline
datasets are often unavailable, and their objective functions are often not
differentiable, such as image generation with human preferences, molecular
generation for drug discovery, and material design. Thus, we need an
$\textbf{online}$ algorithm capable of collecting data during runtime and
supporting a $\textbf{black-box}$ objective function. Moreover, the
$\textbf{query efficiency}$ of the algorithm is also critical because the
objective evaluation of the query is often expensive in the real-world
scenarios. In this work, we propose a novel and simple algorithm, $\textbf{Fast
Direct}$, for query-efficient online black-box target generation. Our Fast
Direct builds a pseudo-target on the data manifold to update the noise sequence
of the diffusion model with a universal direction, which is promising to
perform query-efficient guided generation. Extensive experiments on twelve
high-resolution ($\small {1024 \times 1024}$) image target generation tasks and
six 3D-molecule target generation tasks show $\textbf{6}\times$ to
$\textbf{10}\times$ and $\textbf{11}\times$ to $\textbf{44}\times$ query
efficiency improvements, respectively. Our
implementation is publicly available at:
https://github.com/kimyong95/guide-stable-diffusion/tree/fast-direct
|
2502.01693
|
Predicting Steady-State Behavior in Complex Networks with Graph Neural
Networks
|
cs.LG cs.AI nlin.AO
|
In complex systems, information propagation can be classified as diffused
(delocalized), weakly localized, or strongly localized. This study investigates
the application of graph neural network models to learn the behavior of a
linear dynamical system on networks. A graph convolution and attention-based
neural network framework has been developed to identify the steady-state
behavior of the linear dynamical system. We reveal that our trained model
distinguishes the different states with high accuracy. Furthermore, we have
evaluated model performance with real-world data. In addition, to understand
the explainability of our model, we provide an analytical derivation for the
forward and backward propagation of our framework.
|
2502.01694
|
Metastable Dynamics of Chain-of-Thought Reasoning: Provable Benefits of
Search, RL and Distillation
|
cs.AI cs.LG stat.ML
|
A key paradigm to improve the reasoning capabilities of large language models
(LLMs) is to allocate more inference-time compute to search against a verifier
or reward model. This process can then be utilized to refine the pretrained
model or distill its reasoning patterns into more efficient models. In this
paper, we study inference-time compute by viewing chain-of-thought (CoT)
generation as a metastable Markov process: easy reasoning steps (e.g.,
algebraic manipulations) form densely connected clusters, while hard reasoning
steps (e.g., applying a relevant theorem) create sparse, low-probability edges
between clusters, leading to phase transitions at longer timescales. Under this
framework, we prove that implementing a search protocol that rewards sparse
edges improves CoT by decreasing the expected number of steps to reach
different clusters. In contrast, we establish a limit on reasoning capability
when the model is restricted to local information of the pretrained graph. We
also show that the information gained by search can be utilized to obtain a
better reasoning model: (1) the pretrained model can be directly finetuned to
favor sparse edges via policy gradient methods, and moreover (2) a compressed
metastable representation of the reasoning dynamics can be distilled into a
smaller, more efficient model.
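The core claim, that upweighting sparse inter-cluster edges shortens the time to reach a new cluster, can be illustrated with a toy simulation; the crossing probabilities and trial counts are illustrative, not from the paper:

```python
import random

def hitting_time(p_cross, rng, max_steps=100_000):
    """Steps until a walker leaves its cluster, when each step follows the
    sparse inter-cluster edge with probability p_cross and otherwise keeps
    mixing among the densely connected easy-reasoning states."""
    for t in range(1, max_steps + 1):
        if rng.random() < p_cross:
            return t
    return max_steps

rng = random.Random(0)
trials = 2000
base = sum(hitting_time(0.01, rng) for _ in range(trials)) / trials
# A search protocol that rewards sparse edges effectively raises p_cross.
boosted = sum(hitting_time(0.05, rng) for _ in range(trials)) / trials
```

The hitting time is geometric with mean roughly 1/p_cross, so rewarding sparse edges cuts the expected number of steps about fivefold in this toy setting.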
|
2502.01695
|
A Novel Real-Time Full-Color 3D Holographic (Diffractive) Video Capture,
Processing, and Transmission Pipeline Using Off-The-Shelf Hardware
|
eess.IV cs.CV
|
This paper details the world's first live 3D holographic (diffractive) video
call using off-the-shelf hardware. We introduce a novel pipeline that
facilitates the capture, processing, and transmission of RGBZ data, using an
iPhone for image and depth capture, with VividQ's SDK for hologram generation
and hardware for display.
|
2502.01697
|
BARE: Combining Base and Instruction-Tuned Language Models for Better
Synthetic Data Generation
|
cs.CL cs.AI cs.LG
|
As the demand for high-quality data in model training grows, researchers and
developers are increasingly generating synthetic data to tune and train LLMs. A
common assumption about synthetic data is that sampling from instruct-tuned
models is sufficient; however, these models struggle to produce diverse
outputs, a key requirement for generalization. Despite various prompting
methods, in this work we show that achieving meaningful diversity from
instruct-tuned models remains challenging. In contrast, we find base models
without post-training exhibit greater diversity, but are less capable at
instruction following and hence of lower quality. Leveraging this insight, we
propose Base-Refine (BARE), a synthetic data generation method that combines
the diversity of base models with the quality of instruct-tuned models through
a two-stage process. With minimal few-shot examples and curation, BARE
generates diverse and high-quality datasets, improving downstream task
performance. We show that fine-tuning with as few as 1,000 BARE-generated
samples can reach performance comparable to the best similarly sized models on
LiveCodeBench tasks. Furthermore, fine-tuning with BARE-generated data achieves
a 101% improvement over instruct-only data on GSM8K and an 18.4% improvement
over SOTA methods on RAFT.
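The diversity gap motivating BARE can be made concrete with a distinct-n-gram measure, a common diversity proxy; the toy samples below are illustrative, not from the paper:

```python
def distinct_n(samples, n=2):
    """Fraction of unique n-grams across samples; higher means more diverse.
    A proxy for the output diversity BARE's base-model stage is meant to
    supply before the instruct-tuned refinement stage improves quality."""
    ngrams = []
    for text in samples:
        toks = text.split()
        ngrams.extend(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

# Instruct-tuned outputs often collapse to one template, while base-model
# samples vary more in phrasing.
instruct_samples = ["the answer is 4", "the answer is 5", "the answer is 6"]
base_samples = ["four is the result", "i think it's 5", "answer: six total"]
```

In BARE, candidates with the higher-diversity profile are generated first, then each is refined for quality by the instruct-tuned model.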
|
2502.01699
|
Multimodal Inverse Attention Network with Intrinsic Discriminant Feature
Exploitation for Fake News Detection
|
cs.LG cs.CL cs.CV cs.IR cs.MM
|
Multimodal fake news detection has garnered significant attention due to its
profound implications for social security. While existing approaches have
contributed to understanding cross-modal consistency, they often fail to
leverage modal-specific representations and explicit discrepant features. To
address these limitations, we propose a Multimodal Inverse Attention Network
(MIAN), a novel framework that explores intrinsic discriminative features based
on news content to advance fake news detection. Specifically, MIAN introduces a
hierarchical learning module that captures diverse intra-modal relationships
through local-to-global and local-to-local interactions, thereby generating
enhanced unimodal representations to improve the identification of fake news at
the intra-modal level. Additionally, a cross-modal interaction module employs a
co-attention mechanism to establish and model dependencies between the refined
unimodal representations, facilitating seamless semantic integration across
modalities. To explicitly extract inconsistency features, we propose an inverse
attention mechanism that effectively highlights the conflicting patterns and
semantic deviations introduced by fake news in both intra- and inter-modality.
Extensive experiments on benchmark datasets demonstrate that MIAN significantly
outperforms state-of-the-art methods, underscoring its pivotal contribution to
advancing social security through enhanced multimodal fake news detection.
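The inverse-attention idea, negating similarity scores so weight concentrates on the least-aligned evidence, can be sketched for a single query. This is a minimal illustration of the mechanism, not MIAN's full module:

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def attention_pair(q, keys):
    """Standard and inverse attention for one query: the inverse head
    negates the similarity scores, shifting weight to the least-aligned
    keys -- the conflicting evidence inverse attention is meant to surface."""
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
    return softmax(scores), softmax([-s for s in scores])

q = [1.0, 0.0, 1.0, 0.0]
keys = [[1.0, 0.0, 1.0, 0.0],    # aligned with the query
        [0.0, 1.0, 0.0, 1.0],    # orthogonal
        [-1.0, 0.0, -1.0, 0.0]]  # opposed
attn, inv_attn = attention_pair(q, keys)
```

Standard attention peaks on the aligned key, while the inverse head peaks on the opposed one, which is the kind of semantic deviation a fake-news detector wants highlighted.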
|
2502.01700
|
EdgeMark: An Automation and Benchmarking System for Embedded Artificial
Intelligence Tools
|
cs.LG
|
The integration of artificial intelligence (AI) into embedded devices, a
paradigm known as embedded artificial intelligence (eAI) or tiny machine
learning (TinyML), is transforming industries by enabling intelligent data
processing at the edge. However, the many tools available in this domain leave
researchers and developers wondering which one is best suited to their needs.
This paper provides a review of existing eAI tools, highlighting their
features, trade-offs, and limitations. Additionally, we introduce EdgeMark, an
open-source automation system designed to streamline the workflow for deploying
and benchmarking machine learning (ML) models on embedded platforms. EdgeMark
simplifies model generation, optimization, conversion, and deployment while
promoting modularity, reproducibility, and scalability. Experimental
benchmarking results showcase the performance of widely used eAI tools,
including TensorFlow Lite Micro (TFLM), Edge Impulse, Ekkono, and Renesas eAI
Translator, across a wide range of models, revealing insights into their
relative strengths and weaknesses. The findings provide guidance for
researchers and developers in selecting the most suitable tools for specific
application requirements, while EdgeMark lowers the barriers to adoption of eAI
technologies.
|