| id | title | categories | abstract |
|---|---|---|---|
2502.13290
|
Prediction of Clinical Complication Onset using Neural Point Processes
|
cs.LG cs.AI
|
Predicting medical events in advance within critical care settings is
paramount for patient outcomes and resource management. Utilizing predictive
models, healthcare providers can anticipate issues such as cardiac arrest,
sepsis, or respiratory failure before they manifest. Recently, there has been a
surge in research focusing on forecasting adverse medical event onsets prior to
clinical manifestation using machine learning. However, while these models
provide temporal prognostic predictions for the occurrence of a specific
adverse event of interest within defined time intervals, their interpretability
often remains a challenge. In this work, we explore the applicability of neural
temporal point processes in the context of adverse event onset prediction, with
the aim of explaining clinical pathways and providing interpretable insights.
Our experiments span six state-of-the-art neural point processes and six
critical care datasets, each focusing on the onset of distinct adverse events.
This work represents a novel application class of neural temporal point
processes in event prediction.
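For readers unfamiliar with temporal point processes, the core object is a conditional intensity over the event history. A minimal classical (Hawkes) sketch, which neural TPPs generalize by replacing the fixed kernel with a learned network:

```python
import math

def hawkes_intensity(t, history, mu=0.2, alpha=0.8, beta=1.0):
    """Conditional intensity lambda(t) of an exponential-kernel Hawkes process:
    lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i)).
    Neural TPPs replace this fixed parametric form with a learned network."""
    return mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in history if ti < t)

# Past events excite future intensity: shortly after events at t=1 and t=2,
# the intensity exceeds the baseline rate mu and then decays back toward it.
events = [1.0, 2.0]
print(hawkes_intensity(2.1, events))  # > 0.2
```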
|
2502.13295
|
Demonstrating specification gaming in reasoning models
|
cs.AI
|
We demonstrate LLM agent specification gaming by instructing models to win
against a chess engine. We find that reasoning models like o1-preview and
DeepSeek-R1 will often hack the benchmark by default, while language models
like GPT-4o and Claude 3.5 Sonnet need to be told that normal play won't work
before they hack.
We improve upon prior work (Hubinger et al., 2024; Meinke et al., 2024;
Weij et al., 2024) by using realistic task prompts and avoiding excess nudging.
Our results suggest reasoning models may resort to hacking to solve difficult
problems, as observed in OpenAI (2024)'s o1 Docker escape during cyber
capabilities testing.
|
2502.13297
|
Understanding and Tackling Label Errors in Individual-Level Natural
Language Understanding
|
cs.CL cs.AI
|
Natural language understanding (NLU) is a task that enables machines to
understand human language. Some tasks, such as stance detection and sentiment
analysis, are closely related to individual subjective perspectives, thus
termed individual-level NLU. Previously, these tasks were often simplified to
text-level NLU tasks, ignoring individual factors. This not only makes
inference difficult and unexplainable but often results in a large number of
label errors when creating datasets. To address the above limitations, we
propose a new NLU annotation guideline based on individual-level factors.
Specifically, we incorporate other posts by the same individual and then
annotate individual subjective perspectives after considering all individual
posts. We use this guideline to expand and re-annotate the stance detection and
topic-based sentiment analysis datasets. We find that error rates in the
samples were as high as 31.7% and 23.3%. We further use large language models
to conduct experiments on the re-annotated datasets and find that the large
language models perform well on both datasets after adding individual factors.
Both GPT-4o and Llama3-70B can achieve an accuracy greater than 87% on the
re-annotated datasets. We also verify the effectiveness of individual factors
through ablation studies. We call on future researchers to add individual
factors when creating such datasets. Our re-annotated dataset can be found at
https://github.com/24yearsoldstudent/Individual-NLU
|
2502.13298
|
Improving Multi-turn Task Completion in Task-Oriented Dialog Systems via
Prompt Chaining and Fine-Grained Feedback
|
cs.CL
|
Task-oriented dialog (TOD) systems facilitate users in accomplishing complex,
multi-turn tasks through natural language. While traditional approaches rely on
extensive fine-tuning and annotated data for each domain, instruction-tuned
large language models (LLMs) offer a more flexible alternative. However, LLMs
struggle to reliably handle multi-turn task completion, particularly with
accurately generating API calls and adapting to new domains without explicit
demonstrations. To address these challenges, we propose RealTOD, a novel
framework that enhances TOD systems through prompt chaining and fine-grained
feedback mechanisms. Prompt chaining enables zero-shot domain adaptation via a
two-stage prompting strategy, eliminating the need for human-curated
demonstrations. Meanwhile, the fine-grained feedback mechanism improves task
completion by verifying API calls against domain schemas and providing precise
corrective feedback when errors are detected. We conduct extensive experiments
on the SGD and BiTOD benchmarks using four LLMs. RealTOD improves API accuracy,
surpassing AutoTOD by 37.74% on SGD and SimpleTOD by 11.26% on BiTOD. Human
evaluations further confirm that LLMs integrated with RealTOD achieve superior
task completion, fluency, and informativeness compared to existing methods.
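The fine-grained feedback idea, verifying a generated API call against the domain schema and returning a precise corrective message, can be sketched as follows (the function and schema format are illustrative, not RealTOD's actual interface):

```python
def verify_api_call(call: dict, schema: dict):
    """Check a generated API call against a domain schema and return precise
    corrective feedback; None means the call passed. `call` is
    {"name": ..., "args": {...}}; `schema` maps API name ->
    {"required": [...], "optional": [...]}. Format is illustrative."""
    spec = schema.get(call["name"])
    if spec is None:
        return f"Unknown API '{call['name']}'; valid APIs: {sorted(schema)}"
    missing = [p for p in spec["required"] if p not in call["args"]]
    if missing:
        return f"Missing required parameters: {missing}"
    allowed = set(spec["required"]) | set(spec.get("optional", []))
    extra = [p for p in call["args"] if p not in allowed]
    if extra:
        return f"Unrecognized parameters: {extra}"
    return None

schema = {"FindRestaurants": {"required": ["city", "cuisine"],
                              "optional": ["price_range"]}}
print(verify_api_call({"name": "FindRestaurants", "args": {"city": "SF"}},
                      schema))  # Missing required parameters: ['cuisine']
```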
|
2502.13301
|
Application of Context-dependent Interpretation of Biosignals
Recognition to Control a Bionic Multifunctional Hand Prosthesis
|
cs.LG
|
The paper presents an original method for controlling a
surface-electromyography-driven (sEMG) prosthesis. A context-dependent
recognition system is proposed in which the same class of sEMG signals may have
a different interpretation, depending on the context. This allowed the
repertoire of performed movements to be increased. The proposed structure of
the context-dependent recognition system includes unambiguously defined
decision sequences covering the overall action of the prosthesis, i.e. the
so-called boxes. Because the boxes are mutually isolated environments, each box
has its own interpretation of the recognition result, as well as a separate
local-recognition-task-focused classifier.
Due to the freedom to assign contextual meanings to classes of biosignals,
the construction procedure of the classifier can be optimised in terms of the
local classification quality in a given box or the classification quality of
the entire system. In the paper, two optimisation problems are formulated,
differing in the adopted constraints on the optimisation variables, and methods
for solving them, based on an exhaustive search and an evolutionary algorithm,
are developed.
Experimental studies were conducted using signals from 1 able-bodied person
with simulation of amputation and 10 volunteers with transradial amputations.
The study compared the classical recognition system and the context-dependent
system for various classifier models. An unusual testing strategy was adopted
in the research, taking into account the specificity of the considered
recognition task, with two original quality measures resulting from this scheme
then being applied. The results obtained confirm the hypothesis that the
application of the context-dependent classifier led to an improvement in
classification quality.
|
2502.13308
|
A Label-Free Heterophily-Guided Approach for Unsupervised Graph Fraud
Detection
|
cs.LG
|
Graph fraud detection (GFD) has rapidly advanced in protecting online
services by identifying malicious fraudsters. Recent supervised GFD research
highlights that heterophilic connections between fraudsters and users can
greatly impact detection performance, since fraudsters tend to camouflage
themselves by building more connections to benign users. Despite the promising
performance of supervised GFD methods, the reliance on labels limits their
applications to unsupervised scenarios. Additionally, accurately capturing
complex and diverse heterophily patterns without labels poses a further
challenge. To fill the gap, we propose a Heterophily-guided Unsupervised Graph
fraud dEtection approach (HUGE) for unsupervised GFD, which contains two
essential components: a heterophily estimation module and an alignment-based
fraud detection module. In the heterophily estimation module, we design a novel
label-free heterophily metric called HALO, which captures the critical graph
properties for GFD, enabling its outstanding ability to estimate heterophily
from node attributes. In the alignment-based fraud detection module, we develop
a joint MLP-GNN architecture with ranking loss and asymmetric alignment loss.
The ranking loss aligns the predicted fraud score with the relative order of
HALO, providing an extra robustness guarantee by comparing heterophily among
non-adjacent nodes. Moreover, the asymmetric alignment loss effectively
utilizes structural information while alleviating the feature-smoothing effects
of GNNs. Extensive experiments on 6 datasets demonstrate that HUGE significantly
outperforms competitors, showcasing its effectiveness and robustness. The
source code of HUGE is at https://github.com/CampanulaBells/HUGE-GAD.
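The abstract does not define HALO; a common label-free heterophily proxy, the mean attribute dissimilarity across each node's edges, conveys the idea it stands in for:

```python
import numpy as np

def attribute_heterophily(adj, X):
    """Label-free heterophily proxy: for each node, the mean cosine distance
    between its attributes and those of its neighbors. A generic stand-in for
    the paper's HALO metric, whose exact form the abstract does not give."""
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    sim = Xn @ Xn.T                                # pairwise cosine similarity
    deg = adj.sum(axis=1)
    # average dissimilarity (1 - cos) over each node's neighborhood
    return np.where(deg > 0, ((1 - sim) * adj).sum(axis=1) / np.maximum(deg, 1), 0.0)

# Node 2's attributes are orthogonal to its only neighbor's, so it scores highest.
adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
X = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(attribute_heterophily(adj, X))  # [0.5, 0.0, 1.0]
```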
|
2502.13310
|
Evaluating and Enhancing Out-of-Domain Generalization of Task-Oriented
Dialog Systems for Task Completion without Turn-level Dialog Annotations
|
cs.CL
|
Traditional task-oriented dialog (ToD) systems rely heavily on
labor-intensive turn-level annotations, such as dialogue states and policy
labels, for training. This work explores whether large language models (LLMs)
can be fine-tuned solely on natural language dialogs to perform ToD tasks,
without requiring such annotations. We evaluate their ability to generalize to
unseen domains and compare their performance with models trained on fully
annotated data. Through extensive experiments with three open-source LLMs of
varying sizes and two diverse ToD datasets, we find that models fine-tuned
without turn-level annotations generate coherent and contextually appropriate
responses. However, their task completion performance - measured by accurate
execution of API calls - remains suboptimal, with the best models achieving
only around 53% success in unseen domains. To improve task completion, we
propose ZeroToD, a framework that incorporates a schema augmentation mechanism
to enhance API call accuracy and overall task completion rates, particularly in
out-of-domain settings. We also compare ZeroToD with fine-tuning-free
alternatives, such as prompting off-the-shelf LLMs, and find that our framework
enables smaller, fine-tuned models that outperform large-scale proprietary LLMs
in task completion. Additionally, a human study evaluating informativeness,
fluency, and task completion confirms our empirical findings. These findings
suggest the feasibility of developing cost-effective, scalable, and zero-shot
generalizable ToD systems for real-world applications.
|
2502.13311
|
Training Turn-by-Turn Verifiers for Dialogue Tutoring Agents: The
Curious Case of LLMs as Your Coding Tutors
|
cs.CL cs.AI
|
Intelligent tutoring agents powered by large language models (LLMs) have been
increasingly explored to deliver personalized guidance in areas such as
language learning and science education. However, their capabilities in guiding
users to solve complex real-world tasks remain underexplored. To address this
limitation, in this work, we focus on coding tutoring, a challenging problem
that requires tutors to proactively guide students toward completing predefined
coding tasks. We propose a novel agent workflow, Trace-and-Verify (TRAVER),
which combines knowledge tracing to estimate a student's knowledge state and
turn-by-turn verification to ensure effective guidance toward task completion.
We introduce DICT, an automatic evaluation protocol that assesses tutor agents
holistically using controlled student simulation and code generation tests.
Extensive experiments reveal the challenges of coding tutoring and demonstrate
that TRAVER achieves a significantly higher success rate. Although we use code
tutoring as an example in this paper, our results and findings can be extended
beyond coding, providing valuable insights into advancing tutoring agents for a
variety of tasks.
|
2502.13313
|
Revisiting Privacy, Utility, and Efficiency Trade-offs when Fine-Tuning
Large Language Models
|
cs.AI cs.LG
|
We study the inherent trade-offs in minimizing privacy risks and maximizing
utility, while maintaining high computational efficiency, when fine-tuning
large language models (LLMs). A number of recent works in privacy research have
attempted to mitigate privacy risks posed by memorizing fine-tuning data by
using differentially private training methods (e.g., DP), albeit at a
significantly higher computational cost (inefficiency). In parallel, several
works in systems research have focused on developing parameter-efficient
fine-tuning methods (e.g., LoRA), but few works, if any, investigated whether
such efficient methods enhance or diminish privacy risks. In this paper, we
investigate this gap and arrive at a surprising conclusion: efficient
fine-tuning methods like LoRA mitigate privacy risks similar to private
fine-tuning methods like DP. Our empirical finding directly contradicts
prevailing wisdom that privacy and efficiency objectives are at odds during
fine-tuning. Our finding is established by (a) carefully defining measures of
privacy and utility that distinguish between memorizing sensitive and
non-sensitive tokens in training and test datasets used in fine-tuning and (b)
extensive evaluations using multiple open-source language models from Pythia,
Gemma, and Llama families and different domain-specific datasets.
|
2502.13316
|
Increasing NWP Thunderstorm Predictability Using Ensemble Data and
Machine Learning
|
physics.ao-ph cs.LG
|
While numerical weather prediction (NWP) models are essential for forecasting
thunderstorms hours in advance, NWP uncertainty, which increases with lead
time, limits the predictability of thunderstorm occurrence. This study
investigates how ensemble NWP data and machine learning (ML) can enhance the
skill of thunderstorm forecasts. Using our recently introduced neural network
model, SALAMA 1D, which identifies thunderstorm occurrence in operational
forecasts of the convection-permitting ICON-D2-EPS model for Central Europe, we
demonstrate that ensemble-averaging significantly improves forecast skill.
Notably, an 11-hour ensemble forecast matches the skill level of a 5-hour
deterministic forecast. To explain this improvement, we derive an analytic
expression linking skill differences to correlations between ensemble members,
which aligns with observed performance gains. This expression generalizes to
any binary classification model that processes ensemble members individually.
Additionally, we show that ML models like SALAMA 1D can identify patterns of
thunderstorm occurrence which remain predictable for longer lead times compared
to raw NWP output. Our findings quantitatively explain the benefits of
ensemble-averaging and encourage the development of ML methods for thunderstorm
forecasting and beyond.
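Why ensemble-averaging helps can be illustrated with synthetic member forecasts whose errors are partly independent (toy numbers, not SALAMA 1D or ICON-D2-EPS output): averaging cancels the independent noise while keeping the shared predictable signal.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = rng.random(5000) < 0.3          # thunderstorm occurrence (30% base rate)
# Each member's probability = shared true signal + independent noise,
# mimicking NWP ensemble members that share a predictable component.
signal = np.where(truth, 0.7, 0.2)
members = np.clip(signal + rng.normal(0, 0.25, (11, truth.size)), 0, 1)

def brier(p, y):
    """Brier score: mean squared error of a probabilistic forecast (lower = better)."""
    return float(np.mean((p - y) ** 2))

single = brier(members[0], truth)
ensemble = brier(members.mean(axis=0), truth)  # ensemble mean averages out noise
print(single, ensemble)  # the 11-member mean attains the lower (better) score
```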
|
2502.13318
|
VUS: Effective and Efficient Accuracy Measures for Time-Series Anomaly
Detection
|
cs.LG
|
Anomaly detection (AD) is a fundamental task for time-series analytics with
important implications for the downstream performance of many applications. In
contrast to other domains where AD mainly focuses on point-based anomalies
(i.e., outliers in standalone observations), AD for time series is also
concerned with range-based anomalies (i.e., outliers spanning multiple
observations). Nevertheless, it is common to use traditional point-based
information retrieval measures, such as Precision, Recall, and F-score, to
assess the quality of methods by thresholding the anomaly score to mark each
point as an anomaly or not. However, mapping discrete labels into continuous
data introduces unavoidable shortcomings, complicating the evaluation of
range-based anomalies. Notably, the choice of evaluation measure may
significantly bias the experimental outcome. Despite over six decades of
attention, there has never been a large-scale systematic quantitative and
qualitative analysis of time-series AD evaluation measures. This paper
extensively evaluates quality measures for time-series AD to assess their
robustness under noise, misalignments, and different anomaly cardinality
ratios. Our results indicate that measures producing quality values
independently of a threshold (i.e., AUC-ROC and AUC-PR) are more suitable for
time-series AD. Motivated by this observation, we first extend the AUC-based
measures to account for range-based anomalies. Then, we introduce a new family
of parameter-free and threshold-independent measures, Volume Under the Surface
(VUS), to evaluate methods while varying parameters. We also introduce two
optimized implementations for VUS that significantly reduce the execution time
of the initial implementation. Our findings demonstrate that our four measures
are significantly more robust in assessing the quality of time-series AD
methods.
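The threshold-independent evaluation the paper builds on can be sketched with a small AUC-ROC computation; VUS additionally varies a range-tolerance parameter around anomaly boundaries, which is omitted here:

```python
import numpy as np

def auc_roc(scores, labels):
    """Threshold-independent AUC-ROC: the probability that a randomly chosen
    anomaly receives a higher anomaly score than a randomly chosen normal
    point (ties count 1/2). Equivalent to sweeping all thresholds."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

labels = np.array([0, 0, 0, 1, 1, 0, 0])
scores = np.array([0.1, 0.2, 0.3, 0.9, 0.7, 0.2, 0.4])
print(auc_roc(scores, labels))  # 1.0: every anomaly outscores every normal point
```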
|
2502.13319
|
Elucidating Mechanisms of Demographic Bias in LLMs for Healthcare
|
cs.CL
|
We know from prior work that LLMs encode social biases, and that this
manifests in clinical tasks. In this work we adopt tools from mechanistic
interpretability to unveil sociodemographic representations and biases within
LLMs in the context of healthcare. Specifically, we ask: Can we identify
activations within LLMs that encode sociodemographic information (e.g., gender,
race)? We find that gender information is highly localized in middle MLP layers
and can be reliably manipulated at inference time via patching. Such
interventions can surgically alter generated clinical vignettes for specific
conditions, and also influence downstream clinical predictions which correlate
with gender, e.g., patient risk of depression. We find that representation of
patient race is somewhat more distributed, but can also be intervened upon, to
a degree. To our knowledge, this is the first application of mechanistic
interpretability methods to LLMs for healthcare.
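Activation patching, the intervention used in the abstract, caches an activation from one run and overwrites it in another. This toy two-layer network only illustrates the cache-and-overwrite mechanic; the paper applies it to middle MLP layers of real LLMs via forward hooks.

```python
import numpy as np

# Toy stand-in for patching: run a two-layer network, cache the hidden
# activation from a "source" input, then re-run a "target" input with that
# activation patched in. The network and inputs are illustrative only.
rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 2))

def forward(x, patch=None):
    h = np.maximum(W1.T @ x, 0)   # hidden activation (the patch site)
    if patch is not None:
        h = patch                 # overwrite with the cached activation
    return W2.T @ h, h

x_src, x_tgt = rng.normal(size=4), rng.normal(size=4)
_, h_src = forward(x_src)
out_patched, _ = forward(x_tgt, patch=h_src)
out_src, _ = forward(x_src)
# Patching the full hidden layer makes the downstream output follow the source,
# showing that the intervention site determines the behavior.
print(np.allclose(out_patched, out_src))  # True
```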
|
2502.13321
|
Adjust for Trust: Mitigating Trust-Induced Inappropriate Reliance on AI
Assistance
|
cs.HC cs.AI cs.CL
|
Trust biases how users rely on AI recommendations in AI-assisted
decision-making tasks, with low and high levels of trust resulting in increased
under- and over-reliance, respectively. We propose that AI assistants should
adapt their behavior through trust-adaptive interventions to mitigate such
inappropriate reliance. For instance, when user trust is low, providing an
explanation can elicit more careful consideration of the assistant's advice by
the user. In two decision-making scenarios -- laypeople answering science
questions and doctors making medical diagnoses -- we find that providing
supporting and counter-explanations during moments of low and high trust,
respectively, yields up to 38% reduction in inappropriate reliance and 20%
improvement in decision accuracy. We are similarly able to reduce over-reliance
by adaptively inserting forced pauses to promote deliberation. Our results
highlight how AI adaptation to user trust facilitates appropriate reliance,
presenting exciting avenues for improving human-AI collaboration.
|
2502.13322
|
Community Notes Moderate Engagement With and Diffusion of False
Information Online
|
cs.SI cs.CY physics.soc-ph
|
Social networks scaffold the diffusion of information on social media. Much
attention has been given to the spread of true vs. false content on online
social platforms, including the structural differences between their diffusion
patterns. However, much less is known about how platform interventions on false
content alter the engagement with and diffusion of such content. In this work,
we estimate the causal effects of Community Notes, a novel fact-checking
feature adopted by X (formerly Twitter) to solicit and vet crowd-sourced
fact-checking notes for false content. We gather detailed time series data for
40,074 posts for which notes have been proposed and use synthetic control
methods to estimate a range of counterfactual outcomes. We find that attaching
fact-checking notes significantly reduces the engagement with and diffusion of
false content. We estimate that, on average, the notes resulted in reductions
of 45.7% in reposts, 43.5% in likes, 22.9% in replies, and 14.0% in views after
being attached. Over the posts' entire lifespans, these reductions amount to
11.4% fewer reposts, 13.0% fewer likes, 7.3% fewer replies, and 5.7% fewer
views on average. In reducing reposts, we observe that diffusion cascades for
fact-checked content are less deep, but not less broad, than synthetic control
estimates for non-fact-checked content with similar reach. This structural
difference contrasts notably with differences between false vs. true content
diffusion itself, where false information diffuses farther, but with structural
patterns that are otherwise indistinguishable from those of true information,
conditional on reach.
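The synthetic control idea can be sketched on toy engagement series: weight donor (non-noted) posts to reproduce the treated post's pre-note trajectory, then read the causal effect off the post-note gap. Real synthetic control also constrains weights to a simplex; plain least squares is used here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
T0, T1 = 12, 6                                    # pre/post-treatment steps
donors = rng.random((8, T0 + T1)).cumsum(axis=1)  # donor engagement paths
true_w = np.array([0.5, 0.3, 0.2, 0, 0, 0, 0, 0])
treated = true_w @ donors                         # treated post's trajectory
treated_post = treated[T0:] * 0.6                 # note cuts engagement by 40%

# Fit weights on the pre-treatment window only, then extrapolate them
# forward as the counterfactual "no note" trajectory.
w, *_ = np.linalg.lstsq(donors[:, :T0].T, treated[:T0], rcond=None)
counterfactual = w @ donors[:, T0:]
effect = treated_post / counterfactual - 1
print(effect.round(2))  # ~ -0.4 at each post-treatment step
```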
|
2502.13326
|
Capturing Human Cognitive Styles with Language: Towards an Experimental
Evaluation Paradigm
|
cs.CL
|
While NLP models often seek to capture cognitive states via language, the
validity of predicted states is determined by comparing them to annotations
created without access to the cognitive states of the authors. In behavioral
sciences, cognitive states are instead measured via experiments. Here, we
introduce an experiment-based framework for evaluating language-based cognitive
style models against human behavior. We explore the phenomenon of decision
making and its relationship to the linguistic style of an individual talking
about a recent decision they made. Participants then complete a classical
decision-making experiment that captures their cognitive style, determined by
how preferences change during a decision exercise. We find that language
features, intended to capture cognitive style, can predict participants'
decision style with moderate-to-high accuracy (AUC ~ 0.8), demonstrating that
cognitive style can be partly captured and revealed by discourse patterns.
|
2502.13328
|
Observability-Blocking Controls for Double-Integrator and Higher Order
Integrator Networks
|
eess.SY cs.SY
|
The design of state-feedback controls to block observability at remote nodes
is studied for double integrator network (DIN) and higher order integrator
network models. A preliminary design algorithm is presented first for DIN that
requires $m+2$ actuation nodes to block observability for the measurement
obtained from a set of $m$ nodes. The algorithm is based on an eigenstructure
assignment technique and leverages the properties of the eigenvectors in DIN.
Next, the topological structure of the network is exploited to reduce the
number of controllers required for blocking observability. The number of
actuation nodes in the sparser design depends on the cardinality of a cutset
separating the actuation and measurement locations. Later, the design
principles are generalized for blocking observability in $N$-th order
integrator network models.
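Whether a given measurement set renders a system observable is checked with the standard Kalman rank test, which the blocking designs above aim to defeat. A minimal sketch on a single double integrator:

```python
import numpy as np

def observable(A, C):
    """Kalman rank test: the pair (A, C) is observable iff the observability
    matrix [C; CA; ...; CA^{n-1}] has full column rank n."""
    n = A.shape[0]
    O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    return np.linalg.matrix_rank(O) == n

# Double integrator: measuring position is observable (velocity is its
# derivative), but measuring velocity alone loses the position offset.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
print(observable(A, np.array([[1.0, 0.0]])))  # True  (position output)
print(observable(A, np.array([[0.0, 1.0]])))  # False (velocity output)
```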
|
2502.13329
|
Language Models Can Predict Their Own Behavior
|
cs.CL cs.AI cs.LG
|
Autoregressive Language Models output text by sequentially predicting the
next token to generate, with modern methods like Chain-of-Thought (CoT)
prompting achieving state-of-the-art reasoning capabilities by scaling the
number of generated tokens. However, are there times when we can infer how the
model will behave (e.g. abstain from answering a question) early in the
computation, making generation unnecessary? We show that the internal
representations of input tokens alone can often precisely predict, not just the
next token, but eventual behavior over the entire output sequence. We leverage
this capacity and learn probes on internal states to create early warning (and
exit) systems. Specifically, if the probes can confidently estimate the way the
LM is going to behave, then the system will avoid generating tokens altogether
and return the estimated behavior instead. On 27 text classification datasets
spanning five different tasks, we apply this method to estimate the eventual
answer of an LM under CoT prompting, reducing inference costs by 65% (average)
while suffering an accuracy loss of no more than 1.4% (worst case). We
demonstrate the potential of this method to pre-emptively identify when a model
will abstain from answering a question, fail to follow output format
specifications, or give a low-confidence response. We explore the limits of
this capability, showing that probes generalize to unseen datasets, but perform
worse when LM outputs are longer and struggle to predict properties that
require access to knowledge that the models themselves lack. Encouragingly,
performance scales with model size, suggesting applicability to the largest of
models.
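The early-exit mechanism can be sketched as a linear probe on an internal state plus a confidence threshold. The probe weights below are toy values; in the paper they are trained on pairs of hidden states and eventual model behavior.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def early_exit(hidden, probe_W, probe_b, threshold=0.9):
    """Probe an internal representation for the model's eventual behavior.
    If the probe is confident, skip generation and return its prediction;
    otherwise fall back to full CoT decoding."""
    p = softmax(probe_W @ hidden + probe_b)
    if p.max() >= threshold:
        return int(p.argmax()), "exit"   # behavior predicted, no tokens generated
    return None, "generate"              # not confident: run the LM as usual

probe_W = np.array([[4.0, -4.0], [-4.0, 4.0]])   # toy trained probe
probe_b = np.zeros(2)
print(early_exit(np.array([1.0, -1.0]), probe_W, probe_b))  # (0, 'exit')
print(early_exit(np.array([0.05, 0.0]), probe_W, probe_b))  # (None, 'generate')
```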
|
2502.13333
|
An Uncertainty-Aware Data-Driven Predictive Controller for Hybrid Power
Plants
|
eess.SY cs.CE cs.SY math.OC
|
Given the advancements in data-driven modeling for complex engineering and
scientific applications, this work utilizes a data-driven predictive control
method, namely subspace predictive control, to coordinate hybrid power plant
components and meet a desired power demand despite the presence of weather
uncertainties. An uncertainty-aware data-driven predictive controller is
proposed, and its potential is analyzed using real-world electricity demand
profiles. For the analysis, a hybrid power plant with wind, solar, and
co-located energy storage capacity of 4 MW each is considered. The analysis
shows that the predictive controller can track a real-world-inspired
electricity demand profile despite the presence of weather-induced
uncertainties, and act as an intelligent forecaster for hybrid power plant
(HPP) performance.
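Subspace predictive control builds its predictor from block-Hankel matrices of logged trajectories (per Willems' fundamental lemma); a minimal construction of that basic data object:

```python
import numpy as np

def hankel(signal, depth):
    """Block-Hankel matrix of a measured trajectory, the basic object in
    subspace predictive control: columns are overlapping length-`depth`
    windows, and future outputs are predicted as linear combinations of
    such past windows."""
    cols = len(signal) - depth + 1
    return np.column_stack([signal[i:i + depth] for i in range(cols)])

u = np.arange(8.0)       # e.g., logged power set-points (toy data)
H = hankel(u, 3)
print(H.shape)           # (3, 6)
print(H[:, 0])           # first window: [0. 1. 2.]
```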
|
2502.13335
|
Geometry-Aware Diffusion Models for Multiview Scene Inpainting
|
cs.CV
|
In this paper, we focus on 3D scene inpainting, where parts of an input image
set, captured from different viewpoints, are masked out. The main challenge
lies in generating plausible image completions that are geometrically
consistent across views. Most recent work addresses this challenge by combining
generative models with a 3D radiance field to fuse information across
viewpoints. However, a major drawback of these methods is that they often
produce blurry images due to the fusion of inconsistent cross-view images. To
avoid blurry inpaintings, we eschew the use of an explicit or implicit radiance
field altogether and instead fuse cross-view information in a learned space. In
particular, we introduce a geometry-aware conditional generative model, capable
of inpainting multi-view consistent images based on both geometric and
appearance cues from reference images. A key advantage of our approach over
existing methods is its unique ability to inpaint masked scenes with a limited
number of views (i.e., few-view inpainting), whereas previous methods require
relatively large image sets for their 3D model fitting step. Empirically, we
evaluate and compare our scene-centric inpainting method on two datasets,
SPIn-NeRF and NeRFiller, which contain images captured at narrow and wide
baselines, respectively, and achieve state-of-the-art 3D inpainting performance
on both. Additionally, we demonstrate the efficacy of our approach in the
few-view setting compared to prior methods.
|
2502.13337
|
Language Models are Few-Shot Graders
|
cs.CL cs.AI
|
Providing evaluations to student work is a critical component of effective
student learning, and automating its process can significantly reduce the
workload on human graders. Automatic Short Answer Grading (ASAG) systems,
enabled by advancements in Large Language Models (LLMs), offer a promising
solution for assessing and providing instant feedback for open-ended student
responses. In this paper, we present an ASAG pipeline leveraging
state-of-the-art LLMs. Our new LLM-based ASAG pipeline achieves better
performance than existing custom-built models on the same datasets. We also
compare the grading performance of three OpenAI models: GPT-4, GPT-4o, and
o1-preview. Our results demonstrate that GPT-4o achieves the best balance
between accuracy and cost-effectiveness. On the other hand, o1-preview, despite
higher accuracy, exhibits a larger variance in error that makes it less
practical for classroom use. We investigate the effects of incorporating
instructor-graded examples into prompts using no examples, random selection,
and Retrieval-Augmented Generation (RAG)-based selection strategies. Our
findings indicate that providing graded examples enhances grading accuracy,
with RAG-based selection outperforming random selection. Additionally,
integrating grading rubrics improves accuracy by offering a structured standard
for evaluation.
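The RAG-based selection strategy can be sketched as nearest-neighbor retrieval over embeddings of previously graded answers; the vectors below are toy placeholders that any sentence encoder would supply.

```python
import numpy as np

def select_examples(query_emb, example_embs, k=2):
    """RAG-style selection: pick the k graded examples whose embeddings are
    most cosine-similar to the new student answer, to include in the prompt."""
    q = query_emb / np.linalg.norm(query_emb)
    E = example_embs / np.linalg.norm(example_embs, axis=1, keepdims=True)
    sims = E @ q
    return np.argsort(-sims)[:k]     # indices of the most similar graded answers

bank = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])  # toy example embeddings
print(select_examples(np.array([1.0, 0.05]), bank))    # examples 0 and 1
```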
|
2502.13339
|
How Expressive are Knowledge Graph Foundation Models?
|
cs.LG cs.AI
|
Knowledge Graph Foundation Models (KGFMs) are at the frontier for deep
learning on knowledge graphs (KGs), as they can generalize to completely novel
knowledge graphs with different relational vocabularies. Despite their
empirical success, our theoretical understanding of KGFMs remains very limited.
In this paper, we conduct a rigorous study of the expressive power of KGFMs.
Specifically, we show that the expressive power of KGFMs directly depends on
the motifs that are used to learn the relation representations. We then observe
that the most typical motifs used in the existing literature are binary, as the
representations are learned based on how pairs of relations interact, which
limits the model's expressiveness. As part of our study, we design more
expressive KGFMs using richer motifs, which necessitate learning relation
representations based on, e.g., how triples of relations interact with each
other. Finally, we empirically validate our theoretical findings, showing that
the use of richer motifs results in better performance on a wide range of
datasets drawn from different domains.
|
2502.13342
|
Beyond De-Identification: A Structured Approach for Defining and
Detecting Indirect Identifiers in Medical Texts
|
cs.CL
|
Sharing sensitive texts for scientific purposes requires appropriate
techniques to protect the privacy of patients and healthcare personnel.
Anonymizing textual data is particularly challenging due to the presence of
diverse unstructured direct and indirect identifiers. To mitigate the risk of
re-identification, this work introduces a schema of nine categories of indirect
identifiers designed to account for different potential adversaries, including
acquaintances, family members and medical staff. Using this schema, we annotate
100 MIMIC-III discharge summaries and propose baseline models for identifying
indirect identifiers. We will release the annotation guidelines, annotation
spans (6,199 annotations in total) and the corresponding MIMIC-III document IDs
to support further research in this area.
|
2502.13344
|
K-Paths: Reasoning over Graph Paths for Drug Repurposing and Drug
Interaction Prediction
|
cs.LG cs.CL q-bio.BM
|
Drug discovery is a complex and time-intensive process that requires
identifying and validating new therapeutic candidates. Computational approaches
using large-scale biomedical knowledge graphs (KGs) offer a promising solution
to accelerate this process. However, extracting meaningful insights from
large-scale KGs remains challenging due to the complexity of graph traversal.
Existing subgraph-based methods are tailored to graph neural networks (GNNs),
making them incompatible with other models, such as large language models
(LLMs). We introduce K-Paths, a retrieval framework that extracts structured,
diverse, and biologically meaningful paths from KGs. Integrating these paths
enables LLMs and GNNs to effectively predict unobserved drug-drug and
drug-disease interactions. Unlike traditional path-ranking approaches, K-Paths
retrieves and transforms paths into a structured format that LLMs can directly
process, facilitating explainable reasoning. K-Paths employs a diversity-aware
adaptation of Yen's algorithm to retrieve the K shortest loopless paths between
entities in an interaction query, prioritizing biologically relevant and
diverse relationships. Our experiments on benchmark datasets show that K-Paths
improves the zero-shot F1-score of Llama 3.1 8B by 12.45 points on
drug repurposing and 13.42 points on interaction severity prediction. We also
show that Llama 70B achieves F1-score gains of 6.18 and 8.46 points,
respectively. K-Paths also improves the supervised training efficiency of
EmerGNN, a state-of-the-art GNN, by reducing KG size by 90% while maintaining
strong predictive performance. Beyond its scalability and efficiency, K-Paths
uniquely bridges the gap between KGs and LLMs, providing explainable rationales
for predicted interactions. These capabilities show that K-Paths is a valuable
tool for efficient data-driven drug discovery.
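The retrieval step in the abstract centers on the K shortest loopless paths between query entities. A minimal sketch of that idea (not the authors' code: a brute-force DFS enumeration stands in for Yen's algorithm, and the toy drug-disease graph is invented for illustration):

```python
def k_shortest_loopless_paths(adj, source, target, k):
    """Brute-force stand-in for Yen's algorithm: enumerate all simple
    (loopless) paths with DFS, then return the k shortest by hop count."""
    paths = []

    def dfs(node, path):
        if node == target:
            paths.append(path[:])
            return
        for nxt in adj.get(node, []):
            if nxt not in path:          # loopless: no repeated nodes
                path.append(nxt)
                dfs(nxt, path)
                path.pop()

    dfs(source, [source])
    paths.sort(key=len)
    return paths[:k]

# Toy biomedical KG: drug -- gene/pathway -- disease (invented entities)
adj = {
    "drugA": ["geneX", "pathwayP", "diseaseY"],
    "geneX": ["diseaseY"],
    "pathwayP": ["diseaseY"],
}
paths = k_shortest_loopless_paths(adj, "drugA", "diseaseY", 3)
```

Yen's algorithm avoids this full enumeration by deriving spur paths from previously found shortest paths; the brute-force version above is only practical on small KG neighborhoods.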
|
2502.13345
|
Secure and Efficient Watermarking for Latent Diffusion Models in Model
Distribution Scenarios
|
cs.CR cs.AI
|
Latent diffusion models have exhibited considerable potential in generative
tasks. Watermarking is considered to be an alternative to safeguard the
copyright of generative models and prevent their misuse. However, in the
context of model distribution scenarios, the accessibility of models to a large
number of users brings new challenges to the security, efficiency, and
robustness of existing watermark solutions. To address these issues, we propose
a secure and efficient watermarking solution. A new security mechanism is
designed to prevent watermark leakage and watermark escape, which considers
watermark randomness and watermark-model association as two constraints for
mandatory watermark injection. To reduce the time cost of training the security
module, watermark injection and the security mechanism are decoupled, ensuring
that fine-tuning the VAE implements only the security mechanism without the
burden of learning watermark patterns. A watermark distribution-based
verification strategy is proposed to enhance the robustness against diverse
attacks in model distribution scenarios. Experimental results show that
our watermarking consistently outperforms six existing baselines in
effectiveness and in robustness against ten image-processing and
adversarial attacks, while enhancing security in distribution scenarios.
|
2502.13347
|
Crawl4LLM: Efficient Web Crawling for LLM Pretraining
|
cs.CL
|
Web crawls are a primary source of pretraining data for large language models
(LLMs), but the majority of crawled web pages are discarded in pretraining due
to low data quality. This paper presents Crawl4LLM, an efficient web crawling method
that explores the web graph based on the preference of LLM pretraining.
Specifically, it leverages the influence of a webpage in LLM pretraining as the
priority score of the web crawler's scheduler, replacing the standard graph
connectivity based priority. Our experiments on a web graph containing 900
million webpages from a commercial search engine's index demonstrate the
efficiency of Crawl4LLM in obtaining high-quality pretraining data. With just
21% of URLs crawled, LLMs pretrained on Crawl4LLM data reach the same downstream
performance as previous crawls, significantly reducing crawling waste and
alleviating the burdens on websites. Our code is publicly available at
https://github.com/cxcscmu/Crawl4LLM.
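The core scheduler change described above, replacing connectivity-based priority with a pretraining-influence score, can be sketched as a best-first crawl. This is a hedged illustration, not the released Crawl4LLM code; `get_links` and `score` are hypothetical stand-ins for the web graph and the influence estimator:

```python
import heapq

def crawl(seeds, get_links, score, budget):
    """Best-first crawl that always pops the highest-scoring URL, using a
    pretraining-influence score in place of graph-connectivity priority."""
    frontier = [(-score(u), u) for u in seeds]   # min-heap, so negate scores
    heapq.heapify(frontier)
    seen = set(seeds)
    crawled = []
    while frontier and len(crawled) < budget:
        _, url = heapq.heappop(frontier)
        crawled.append(url)
        for nxt in get_links(url):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (-score(nxt), nxt))
    return crawled

# Toy web graph and invented influence scores
links = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}
order = crawl(["a"], lambda u: links[u],
              {"a": 0.9, "b": 0.2, "c": 0.8, "d": 0.95}.get, budget=3)
```

With the scores above, the crawler visits "c" (score 0.8) before "b" (score 0.2) even though both were discovered at the same step.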
|
2502.13348
|
System-level Analysis of Dual-Mode Networked Sensing: ISAC Integration &
Coordination Gains
|
cs.IT cs.SY eess.SY math.IT
|
This paper characterizes integration and coordination gains in dense
millimeter-wave ISAC networks through a dual-mode framework that combines
monostatic and multistatic sensing. A comprehensive system-level analysis is
conducted, accounting for base station (BS) density, power allocation, antenna
misalignment, radar cross-section (RCS) fluctuations, clutter, bistatic
geometry, channel fading, and self-interference cancellation (SIC) efficiency.
Using stochastic geometry, coverage probabilities and ergodic rates for sensing
and communication are derived, revealing tradeoffs among BS density, beamwidth,
and power allocation. It is shown that communication performance sustains
reliable operation despite the overlaid sensing functionality. In contrast, the
results reveal the foundational role of spatial sensing diversity, driven by
the dual-mode operation, to compensate for the weak sensing reflections and
vulnerability to imperfect SIC along with interference and clutter. To this
end, we identify a system transition from monostatic to multistatic-dominant
sensing operation as a function of the SIC efficiency. In the latter case,
using six multistatic BSs instead of a single bistatic receiver improved
sensing coverage probability by over 100%, highlighting the coordination gain.
Moreover, comparisons with pure communication networks confirm substantial
integration gain. Specifically, dual-mode networked sensing with four
cooperative BSs can double throughput, while multistatic sensing alone improves
throughput by over 50%.
|
2502.13349
|
Event Segmentation Applications in Large Language Model Enabled
Automated Recall Assessments
|
cs.CL
|
Understanding how individuals perceive and recall information in their
natural environments is critical to understanding potential failures in
perception (e.g., sensory loss) and memory (e.g., dementia). Event
segmentation, the process of identifying distinct events within dynamic
environments, is central to how we perceive, encode, and recall experiences.
This cognitive process not only influences moment-to-moment comprehension but
also shapes event specific memory. Despite the importance of event segmentation
and event memory, current research methodologies rely heavily on human
judgements for assessing segmentation patterns and recall ability, which are
subjective and time-consuming. A few approaches have been introduced to
automate event segmentation and recall scoring, but validity with human
responses and ease of implementation require further advancements. To address
these concerns, we leverage Large Language Models (LLMs) to automate event
segmentation and assess recall, employing chat completion and text-embedding
models, respectively. We validated these models against human annotations and
determined that LLMs can accurately identify event boundaries, and that human
event segmentation is more consistent with LLMs than among humans themselves.
Using this framework, we advanced an automated approach for recall assessment,
which revealed that the semantic similarity between segmented narrative events
and participant recall can estimate recall performance. Our findings demonstrate
that LLMs can effectively simulate human segmentation patterns and provide
recall evaluations that are a scalable alternative to manual scoring. This
research opens novel avenues for studying the intersection between perception,
memory, and cognitive impairment using methodologies driven by artificial
intelligence.
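The recall-assessment step above reduces to comparing embeddings of segmented events with embeddings of the participant's recall. A minimal sketch under assumed inputs (the embedding vectors are invented; any text-embedding model could produce them):

```python
import math

def cosine(u, v):
    """Cosine similarity of two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def recall_scores(event_embs, recall_embs):
    """Score each segmented narrative event by its maximum cosine similarity
    to any sentence embedding from the participant's recall."""
    return [max(cosine(e, r) for r in recall_embs) for e in event_embs]

# Invented 2-D embeddings: event 0 is recalled, event 1 is not
events = [[1.0, 0.0], [0.0, 1.0]]
recall = [[1.0, 0.0]]
scores = recall_scores(events, recall)
```

Averaging such per-event scores gives one simple estimate of overall recall performance.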
|
2502.13358
|
Bridging the Editing Gap in LLMs: FineEdit for Precise and Targeted Text
Modifications
|
cs.CL
|
Large Language Models (LLMs) have transformed natural language processing,
yet they still struggle with direct text editing tasks that demand precise,
context-aware modifications. While models like ChatGPT excel in text generation
and analysis, their editing abilities often fall short, addressing only
superficial issues rather than deeper structural or logical inconsistencies. In
this work, we introduce a dual approach to enhance LLMs' editing performance.
First, we present InstrEditBench, a high-quality benchmark dataset comprising
over 20,000 structured editing tasks spanning Wiki articles, LaTeX documents,
code, and database Domain-specific Languages (DSL). InstrEditBench is generated
using an innovative automated workflow that accurately identifies and evaluates
targeted edits, ensuring that modifications adhere strictly to specified
instructions without altering unrelated content. Second, we propose FineEdit, a
specialized model trained on this curated benchmark. Experimental results
demonstrate that FineEdit achieves significant improvements of around 10%
over Gemini on direct editing tasks, convincingly validating its
effectiveness.
|
2502.13361
|
RGAR: Recurrence Generation-augmented Retrieval for Factual-aware
Medical Question Answering
|
cs.CL cs.AI
|
Medical question answering requires extensive access to specialized
conceptual knowledge. The current paradigm, Retrieval-Augmented Generation
(RAG), acquires expert medical knowledge through large-scale corpus
retrieval and uses this knowledge to guide a general-purpose large language
model (LLM) for generating answers. However, existing retrieval approaches
often overlook the importance of factual knowledge, which limits the relevance
of retrieved conceptual knowledge and restricts its applicability in real-world
scenarios, such as clinical decision-making based on Electronic Health Records
(EHRs). This paper introduces RGAR, a recurrence generation-augmented retrieval
framework that retrieves both relevant factual and conceptual knowledge from
dual sources (i.e., EHRs and the corpus), allowing them to interact and refine
each other. Through extensive evaluation across three factual-aware medical
question answering benchmarks, RGAR establishes a new state-of-the-art
performance among medical RAG systems. Notably, the Llama-3.1-8B-Instruct model
with RGAR surpasses the considerably larger, RAG-enhanced GPT-3.5. Our findings
demonstrate the benefit of extracting factual knowledge for retrieval, which
consistently yields improved generation quality.
|
2502.13362
|
Dynamic directed functional connectivity as a neural biomarker for
objective motor skill assessment
|
q-bio.NC cs.LG
|
Objective motor skill assessment plays a critical role in fields such as
surgery, where proficiency is vital for certification and patient safety.
Existing assessment methods, however, rely heavily on subjective human
judgment, which introduces bias and limits reproducibility. While recent
efforts have leveraged kinematic data and neural imaging to provide more
objective evaluations, these approaches often overlook the dynamic neural
mechanisms that differentiate expert and novice performance. This study
proposes a novel method for motor skill assessment based on dynamic directed
functional connectivity (dFC) as a neural biomarker. By using
electroencephalography (EEG) to capture brain dynamics and employing an
attention-based Long Short-Term Memory (LSTM) model for non-linear Granger
causality analysis, we compute dFC among key brain regions involved in
psychomotor tasks. Coupled with hierarchical task analysis (HTA), our approach
enables subtask-level evaluation of motor skills, offering detailed insights
into neural coordination that underpins expert proficiency. A convolutional
neural network (CNN) is then used to classify skill levels, achieving greater
accuracy and specificity than established performance metrics in laparoscopic
surgery. This methodology provides a reliable, objective framework for
assessing motor skills, contributing to the development of tailored training
protocols and enhancing the certification process.
|
2502.13363
|
Pretrained Image-Text Models are Secretly Video Captioners
|
cs.CV cs.LG
|
Developing video captioning models is computationally expensive. The dynamic
nature of video also complicates the design of multimodal models that can
effectively caption these sequences. However, we find that by using minimal
computational resources and without complex modifications to address video
dynamics, an image-based model can be repurposed to outperform several
specialised video captioning systems. Our adapted model demonstrates top-tier
performance on major benchmarks, ranking 2nd on MSRVTT and MSVD, and 3rd on
VATEX. We transform it into a competitive video captioner by post-training a
typical image captioning model, BLIP-2, with only 6,000 video-text pairs and
simple frame concatenation; this is significantly less data than other methods,
which use 2.5 to 144 million pairs. From a resource optimization perspective,
this video captioning study focuses on three fundamental factors: optimizing
model scale, maximizing data efficiency, and incorporating reinforcement
learning. This extensive study demonstrates that a lightweight, image based
adaptation strategy can rival state-of-the-art video captioning systems,
offering a practical solution for low-resource scenarios.
|
2502.13366
|
Low-Complexity Cooperative Payload Transportation for Nonholonomic
Mobile Robots Under Scalable Constraints
|
cs.RO cs.SY eess.SY
|
Cooperative transportation, a key aspect of logistics cyber-physical systems
(CPS), is typically approached using distributed control or optimization-based
methods. Distributed control methods consume less time but handle multiple
constraints poorly and extend to them with difficulty. Optimization-based
methods handle constraints effectively, but they are usually centralized and
time-consuming, and thus do not scale easily to numerous robots. To overcome
the drawbacks of both, we propose a novel cooperative transportation method for
nonholonomic mobile robots that improves conventional formation control; it is
distributed, has low time-complexity, and accommodates scalable constraints.
The proposed control-based method is tested on a cable-suspended payload and
divided into two parts: robot trajectory generation and trajectory tracking.
Unlike most time-consuming trajectory generation methods, ours generates
trajectories with only constant time-complexity, with no need for global maps.
As for trajectory tracking, our control-based method not only scales as easily
to multiple constraints as optimization-based methods, but also reduces their
time-complexity from polynomial to linear. Simulations and experiments verify
the feasibility of our method.
|
2502.13368
|
A Note on Structural Controllability and Observability Indices
|
eess.SY cs.SY
|
In this note, we investigate the structural controllability and observability
indices of structured systems. We provide counter-examples showing that an
existing graph-theoretic characterization for the structural controllability
index (SCOI) may not hold, even for systems with self-loop at every state node.
We further demonstrate that this characterization actually provides upper
bounds, and extend them to new graph-theoretic characterizations applicable to
systems that are not necessarily structurally controllable. Additionally, we
reveal that an existing method may fail to obtain the exact SCOI. Consequently,
complete graph-theoretic characterizations and polynomial-time computation of
SCOI remain open. Given this, we present an efficiently computable tight lower
bound, whose tightness is validated by numerical simulations. All these results
apply to the structural observability index by the duality between
controllability and observability.
|
2502.13369
|
Reducing Hallucinations in Language Model-based SPARQL Query Generation
Using Post-Generation Memory Retrieval
|
cs.CL
|
The ability to generate SPARQL queries from natural language questions is
crucial for ensuring efficient and accurate retrieval of structured data from
knowledge graphs (KG). While large language models (LLMs) have been widely
adopted for SPARQL query generation, they are often susceptible to
hallucinations and out-of-distribution errors when producing KG elements like
Uniform Resource Identifiers (URIs) based on internal parametric knowledge.
This often results in content that appears plausible but is factually
incorrect, posing significant challenges for their use in real-world
information retrieval (IR) applications. This has led to increased research
aimed at detecting and mitigating such errors. In this paper, we introduce PGMR
(Post-Generation Memory Retrieval), a modular framework that incorporates a
non-parametric memory module to retrieve KG elements and enhance LLM-based
SPARQL query generation. Our experimental results indicate that PGMR
consistently delivers strong performance across diverse datasets, data
distributions, and LLMs. Notably, PGMR significantly mitigates URI
hallucinations, nearly eliminating the problem in several scenarios.
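The post-generation repair idea can be sketched with string matching as a stand-in for the paper's non-parametric memory module; the URIs and query below are invented, and `difflib` replaces whatever retriever PGMR actually uses:

```python
import difflib
import re

def repair_uris(query, memory):
    """Post-generation repair: swap each URI-like token in the generated
    SPARQL query for its closest match in a memory of valid KG URIs."""
    def fix(m):
        candidates = difflib.get_close_matches(m.group(0), memory,
                                               n=1, cutoff=0.0)
        return candidates[0] if candidates else m.group(0)
    return re.sub(r"<[^<>\s]+>", fix, query)

# Invented memory of valid URIs and a query with hallucinated spellings
memory = ["<http://dbpedia.org/resource/Berlin>",
          "<http://dbpedia.org/ontology/capital>"]
q = ("SELECT ?c WHERE { <http://dbpedia.org/resource/Berlinn> "
     "<http://dbpedia.org/ontology/captial> ?c }")
fixed = repair_uris(q, memory)
```

The structure of the generated query is kept; only the out-of-distribution KG elements are replaced by grounded ones.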
|
2502.13370
|
Quantum Recurrent Neural Networks with Encoder-Decoder for
Time-Dependent Partial Differential Equations
|
cs.LG cs.NA math.NA quant-ph
|
Nonlinear time-dependent partial differential equations are essential in
modeling complex phenomena across diverse fields, yet they pose significant
challenges due to their computational complexity, especially in higher
dimensions. This study explores Quantum Recurrent Neural Networks within an
encoder-decoder framework, integrating Variational Quantum Circuits into Gated
Recurrent Units and Long Short-Term Memory networks. Using this architecture,
the model efficiently compresses high-dimensional spatiotemporal data into a
compact latent space, facilitating more efficient temporal evolution. We
evaluate the algorithms on the Hamilton-Jacobi-Bellman equation, Burgers'
equation, the Gray-Scott reaction-diffusion system, and the three dimensional
Michaelis-Menten reaction-diffusion equation. The results demonstrate the
superior performance of the quantum-based algorithms in capturing nonlinear
dynamics, handling high-dimensional spaces, and providing stable solutions,
highlighting their potential as an innovative tool in solving challenging and
complex systems.
|
2502.13372
|
MoVer: Motion Verification for Motion Graphics Animations
|
cs.GR cs.CV
|
While large vision-language models can generate motion graphics animations
from text prompts, they regularly fail to include all of the spatio-temporal
properties described in the prompt. We introduce MoVer, a motion verification
DSL based on first-order logic that can check spatio-temporal properties of a
motion graphics animation. We identify a general set of such properties that
people commonly use to describe animations (e.g., the direction and timing of
motions, the relative positioning of objects, etc.). We implement these
properties as predicates in MoVer and provide an execution engine that can
apply a MoVer program to any input SVG-based motion graphics animation. We then
demonstrate how MoVer can be used in an LLM-based synthesis and verification
pipeline for iteratively refining motion graphics animations. Given a text
prompt, our pipeline synthesizes a motion graphics animation and a
corresponding MoVer program. Executing the verification program on the
animation yields a report of the predicates that failed and the report can be
automatically fed back to LLM to iteratively correct the animation. To evaluate
our pipeline, we build a synthetic dataset of 5600 text prompts paired with
ground truth MoVer verification programs. We find that while our LLM-based
pipeline is able to automatically generate a correct motion graphics animation
for 58.8% of the test prompts without any iteration, this number rises to
93.6% with up to 50 correction iterations. Project website:
https://mover-dsl.github.io/
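A MoVer-style spatio-temporal predicate can be illustrated in a few lines. This is a hypothetical sketch, not the actual DSL: `moves` checks one of the properties the abstract mentions (direction of motion) over keyframed positions:

```python
def moves(trace, axis, direction):
    """First-order-style predicate: the object translates monotonically
    along `axis` ('x' or 'y') in `direction` (+1 or -1) across keyframes."""
    vals = [frame[axis] for frame in trace]
    deltas = [b - a for a, b in zip(vals, vals[1:])]
    # every step is in the given direction, and at least one step is nonzero
    return all(direction * d >= 0 for d in deltas) and any(d != 0 for d in deltas)

# Invented keyframes for "the circle moves right"
circle = [{"x": 0, "y": 5}, {"x": 2, "y": 5}, {"x": 4, "y": 5}]
```

A failed predicate like this, reported per property, is the kind of signal the pipeline feeds back to the LLM for correction.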
|
2502.13373
|
Fighter Jet Navigation and Combat using Deep Reinforcement Learning with
Explainable AI
|
cs.AI
|
This paper presents the development of an Artificial Intelligence (AI) based
fighter jet agent within a customized Pygame simulation environment, designed
to solve multi-objective tasks via deep reinforcement learning (DRL). The jet's
primary objectives include efficiently navigating the environment, reaching a
target, and selectively engaging or evading an enemy. A reward function
balances these goals while optimized hyperparameters enhance learning
efficiency. Results show a task completion rate of more than 80%, demonstrating
effective decision-making. To enhance transparency, the jet's action choices
are analyzed by comparing the rewards of the actual chosen action (factual
action) with those of alternate actions (counterfactual actions), providing
insights into the decision-making rationale. This study illustrates DRL's
potential for multi-objective problem-solving with explainable AI. Project
page: https://github.com/swatikar95/Autonomous-Fighter-Jet-Navigation-and-Combat.
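The factual-versus-counterfactual analysis described above can be sketched as a reward comparison at a single decision step. All names here are hypothetical; the real agent's reward function and action set come from the paper's Pygame environment:

```python
def explain_step(state, actions, reward_fn):
    """Explain one decision by comparing the reward of the chosen (factual)
    action against each alternative (counterfactual) action."""
    rewards = {a: reward_fn(state, a) for a in actions}
    factual = max(rewards, key=rewards.get)   # greedy choice for illustration
    regret = {a: rewards[factual] - r for a, r in rewards.items()}
    return factual, regret

# Invented one-step rewards for three candidate maneuvers
reward_fn = lambda state, a: {"evade": 1.0, "engage": 0.4, "navigate": 0.7}[a]
factual, regret = explain_step(None, ["evade", "engage", "navigate"], reward_fn)
```

The per-action regret values quantify how much worse each counterfactual action would have been, which is the rationale the paper surfaces for transparency.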
|
2502.13374
|
Task-agnostic Prompt Compression with Context-aware Sentence Embedding
and Reward-guided Task Descriptor
|
cs.CL
|
The rise of Large Language Models (LLMs) has led to significant interest in
prompt compression, a technique aimed at reducing the length of input prompts
while preserving critical information. However, the prominent approaches in
prompt compression often require explicit questions or handcrafted templates
for compression, limiting their generalizability. We propose Task-agnostic
Prompt Compression (TPC), a novel framework that generalizes compression across
tasks and domains without requiring input questions or templates. TPC generates
a context-relevant task description using a task descriptor trained on a
curated dataset of context and query pairs, and fine-tuned via reinforcement
learning with a reward function designed to capture the most relevant
information. The task descriptor is then utilized to compute the relevance of
each sentence in the prompt to generate the compressed prompt. We introduce
three model sizes (Base, Large, and Huge); the largest model outperforms
existing state-of-the-art methods on the LongBench and ZeroSCROLLS benchmarks,
and our smallest model performs comparably to existing solutions while being
considerably smaller.
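The sentence-relevance step described above can be sketched as embedding-based filtering. This is not the TPC model: `embed` below is a toy stand-in for the trained task descriptor's sentence embeddings, invented for illustration:

```python
def compress_prompt(sentences, embed, task_description, keep_ratio=0.5):
    """Keep the sentences most similar to the task description, preserving
    their original order; `embed` is any sentence-embedding function."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / ((sum(a * a for a in u) ** 0.5) *
                      (sum(b * b for b in v) ** 0.5))
    t = embed(task_description)
    scores = [cos(embed(s), t) for s in sentences]
    k = max(1, int(len(sentences) * keep_ratio))
    cutoff = sorted(scores, reverse=True)[k - 1]
    kept, out = 0, []
    for s, sc in zip(sentences, scores):
        if sc >= cutoff and kept < k:
            out.append(s)
            kept += 1
    return out

# Toy embedding: "law" sentences point one way, everything else the other
sentences = ["first law of motion", "the weather was nice",
             "second law stated", "he drank coffee"]
embed = lambda s: [1.0, 0.0] if "law" in s else [0.0, 1.0]
kept = compress_prompt(sentences, embed, "physics law", keep_ratio=0.5)
```

No input question or template is needed: only the generated task description steers which sentences survive compression.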
|
2502.13376
|
Learning Symbolic Task Decompositions for Multi-Agent Teams
|
cs.MA cs.AI cs.LG
|
One approach for improving sample efficiency in cooperative multi-agent
learning is to decompose overall tasks into sub-tasks that can be assigned to
individual agents. We study this problem in the context of reward machines:
symbolic tasks that can be formally decomposed into sub-tasks. In order to
handle settings without a priori knowledge of the environment, we introduce a
framework that can learn the optimal decomposition from model-free interactions
with the environment. Our method uses a task-conditioned architecture to
simultaneously learn an optimal decomposition and the corresponding agents'
policies for each sub-task. In doing so, we remove the need for a human to
manually design the optimal decomposition while maintaining the
sample-efficiency benefits of improved credit assignment. We provide
experimental results in several deep reinforcement learning settings,
demonstrating the efficacy of our approach. Our results indicate that our
approach succeeds even in environments with codependent agent dynamics,
enabling synchronous multi-agent learning not achievable in previous works.
|
2502.13383
|
MM-Verify: Enhancing Multimodal Reasoning with Chain-of-Thought
Verification
|
cs.CL cs.CV cs.LG
|
In line with test-time scaling, integrating external slow thinking with a
verification mechanism has been shown to enhance multi-round
reasoning in large language models (LLMs). However, in the multimodal (MM)
domain, there is still a lack of a strong MM-Verifier. In this paper, we
introduce MM-Verifier and MM-Reasoner to enhance multimodal reasoning through
longer inference and more robust verification. First, we propose a two-step MM
verification data synthesis method, which combines a simulation-based tree
search with verification and uses rejection sampling to generate high-quality
Chain-of-Thought (COT) data. This data is then used to fine-tune the
verification model, MM-Verifier. Additionally, we present a more efficient
method for synthesizing MMCOT data, bridging the gap between text-based and
multimodal reasoning. The synthesized data is used to fine-tune MM-Reasoner.
Our MM-Verifier outperforms all larger models on the MathCheck, MathVista, and
MathVerse benchmarks. Moreover, MM-Reasoner demonstrates strong effectiveness
and scalability, with performance improving as data size increases. Finally,
our approach achieves strong performance when combining MM-Reasoner and
MM-Verifier, reaching an accuracy of 65.3 on MathVista, surpassing GPT-4o
(63.8) with 12 rollouts.
|
2502.13385
|
SNN-Driven Multimodal Human Action Recognition via Event Camera and
Skeleton Data Fusion
|
cs.CV
|
Multimodal human action recognition based on RGB and skeleton data fusion,
while effective, is constrained by significant limitations such as high
computational complexity, excessive memory consumption, and substantial energy
demands, particularly when implemented with Artificial Neural Networks (ANN).
These limitations restrict its applicability in resource-constrained scenarios.
To address these challenges, we propose a novel Spiking Neural Network
(SNN)-driven framework for multimodal human action recognition, utilizing event
camera and skeleton data. Our framework is centered on two key innovations: (1)
a novel multimodal SNN architecture that employs distinct backbone networks for
each modality-an SNN-based Mamba for event camera data and a Spiking Graph
Convolutional Network (SGN) for skeleton data-combined with a spiking semantic
extraction module to capture deep semantic representations; and (2) a
pioneering SNN-based discretized information bottleneck mechanism for modality
fusion, which effectively balances the preservation of modality-specific
semantics with efficient information compression. To validate our approach, we
propose a novel method for constructing a multimodal dataset that integrates
event camera and skeleton data, enabling comprehensive evaluation. Extensive
experiments demonstrate that our method achieves superior performance in both
recognition accuracy and energy efficiency, offering a promising solution for
practical applications.
|
2502.13388
|
Reflection of Episodes: Learning to Play Game from Expert and Self
Experiences
|
cs.AI
|
StarCraft II is a complex and dynamic real-time strategy (RTS) game
environment, which is very suitable for artificial intelligence and
reinforcement learning research. To address the problem of Large Language
Model(LLM) learning in complex environments through self-reflection, we propose
a Reflection of Episodes(ROE) framework based on expert experience and
self-experience. This framework first obtains key information in the game
through a keyframe selection method, then makes decisions based on expert
experience and self-experience. After a game is completed, it reflects on the
previous experience to obtain new self-experience. Finally, in the experiment,
our method beat the built-in bot at the Very Hard difficulty in TextStarCraft
II. We analyze the data produced by the LLM over the course of the game in
detail and verify the method's effectiveness.
|
2502.13389
|
Reasoning with Reinforced Functional Token Tuning
|
cs.AI
|
In this work, we propose Reinforced Functional Token Tuning (RFTT), a novel
reinforced fine-tuning framework that empowers Large Language Models (LLMs)
with self-play learn-to-reason capabilities. Unlike prior prompt-driven
reasoning efforts, RFTT embeds a rich set of learnable functional tokens (e.g.,
<analyze>, <verify>, <refine>) directly into the model vocabulary, enabling
chain-of-thought construction with diverse human-like reasoning behaviors.
Specifically, RFTT comprises two phases: (1) supervised fine-tuning performs
prompt-driven tree search to obtain self-generated training data annotated with
functional tokens, which warms up the model to learn these tokens for
reasoning; and (2) online reinforcement learning further allows the model to
explore different reasoning pathways through functional token sampling without
relying on prompts, thereby facilitating effective self-improvement for
functional reasoning. Extensive experiments demonstrate the superiority of the
proposed RFTT on mathematical benchmarks, significantly boosting
Qwen-2.5-7B-Instruct (70.6% to 79.8%) and LLaMA-3.1-8B-Instruct (32.2% to
60.2%) on the MATH dataset. Moreover, the performance of RFTT consistently
improves with more search rollouts at inference time. Our code is available at
https://github.com/sastpg/RFTT.
|
2502.13390
|
Deep-Unfolded Massive Grant-Free Transmission in Cell-Free Wireless
Communication Systems
|
eess.SP cs.IT cs.LG math.IT
|
Grant-free transmission and cell-free communication are vital in improving
coverage and quality-of-service for massive machine-type communication. This
paper proposes a novel framework of joint active user detection, channel
estimation, and data detection (JACD) for massive grant-free transmission in
cell-free wireless communication systems. We formulate JACD as an optimization
problem and solve it approximately using forward-backward splitting. To deal
with the discrete symbol constraint, we relax the discrete constellation to its
convex hull and propose two approaches that promote solutions from the
constellation set. To reduce complexity, we replace costly computations with
approximate shrinkage operations and approximate posterior mean estimator
computations. To improve active user detection (AUD) performance, we introduce
a soft-output AUD module that considers both the data estimates and channel
conditions. To jointly optimize all algorithm hyper-parameters and to improve
JACD performance, we further deploy deep unfolding together with a momentum
strategy, resulting in two algorithms called DU-ABC and DU-POEM. Finally, we
demonstrate the efficacy of the proposed JACD algorithms via extensive system
simulations.
|
2502.13392
|
Atomic Proximal Policy Optimization for Electric Robo-Taxi Dispatch and
Charger Allocation
|
cs.AI
|
Pioneering companies such as Waymo have deployed robo-taxi services in
several U.S. cities. These robo-taxis are electric vehicles, and their
operations require the joint optimization of ride matching, vehicle
repositioning, and charging scheduling in a stochastic environment. We model
the operations of the ride-hailing system with robo-taxis as a discrete-time,
average reward Markov Decision Process with infinite horizon. As the fleet size
grows, the dispatching is challenging as the set of system state and the fleet
dispatching action set grow exponentially with the number of vehicles. To
address this, we introduce a scalable deep reinforcement learning algorithm,
called Atomic Proximal Policy Optimization (Atomic-PPO), that reduces the
action space using atomic action decomposition. We evaluate our algorithm using
real-world NYC for-hire vehicle data and we measure the performance using the
long-run average reward achieved by the dispatching policy relative to a
fluid-based reward upper bound. Our experiments demonstrate the superior
performance of our Atomic-PPO compared to benchmarks. Furthermore, we conduct
extensive numerical experiments to analyze the efficient allocation of charging
facilities and assess the impact of vehicle range and charger speed on fleet
performance.
|
2502.13394
|
Flow-based generative models as iterative algorithms in probability
space
|
cs.LG math.ST stat.ML stat.TH
|
Generative AI (GenAI) has revolutionized data-driven modeling by enabling the
synthesis of high-dimensional data across various applications, including image
generation, language modeling, biomedical signal processing, and anomaly
detection. Flow-based generative models provide a powerful framework for
capturing complex probability distributions, offering exact likelihood
estimation, efficient sampling, and deterministic transformations between
distributions. These models leverage invertible mappings governed by Ordinary
Differential Equations (ODEs), enabling precise density estimation and
likelihood evaluation. This tutorial presents an intuitive mathematical
framework for flow-based generative models, formulating them as neural
network-based representations of continuous probability densities. We explore
key theoretical principles, including the Wasserstein metric, gradient flows,
and density evolution governed by ODEs, to establish convergence guarantees and
bridge empirical advancements with theoretical insights. By providing a
rigorous yet accessible treatment, we aim to equip researchers and
practitioners with the necessary tools to effectively apply flow-based
generative models in signal processing and machine learning.
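As a minimal sketch of the exact-likelihood property mentioned above (a toy affine flow, not the tutorial's models), the change-of-variables formula gives the density of x = a*z + b with z ~ N(0, 1):

```python
import math

def base_logpdf(z):
    # standard normal log-density of the base distribution
    return -0.5 * (z * z + math.log(2.0 * math.pi))

def flow_logpdf(x, a=2.0, b=1.0):
    # log p_x(x) = log p_z(f^{-1}(x)) - log|det df/dz|, here |det| = |a|
    z = (x - b) / a
    return base_logpdf(z) - math.log(abs(a))

# The flow density matches N(b, a^2) exactly, e.g. at x = b:
print(flow_logpdf(1.0))
```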
|
2502.13395
|
Unsupervised CP-UNet Framework for Denoising DAS Data with Decay Noise
|
cs.SD cs.LG eess.AS eess.SP physics.optics
|
Distributed acoustic sensor (DAS) technology leverages optical fiber cables
to detect acoustic signals, providing cost-effective and dense monitoring
capabilities. It offers several advantages including resistance to extreme
conditions, immunity to electromagnetic interference, and accurate detection.
However, DAS typically exhibits a lower signal-to-noise ratio (S/N) compared to
geophones and is susceptible to various noise types, such as random noise,
erratic noise, level noise, and long-period noise. This reduced S/N can
negatively impact data analyses such as inversion and interpretation. While
artificial intelligence has demonstrated excellent denoising capabilities, most
existing methods rely on supervised learning with labeled data, which imposes
stringent requirements on the quality of the labels. To address this issue, we
develop a label-free unsupervised learning (UL) network model based on
Context-Pyramid-UNet (CP-UNet) to suppress erratic and random noises in DAS
data. The CP-UNet utilizes the Context Pyramid Module in the encoding and
decoding process to extract features and reconstruct the DAS data. To enhance
the connectivity between shallow and deep features, we add a Connected Module
(CM) to both the encoding and decoding sections. Layer Normalization (LN) is
utilized to replace the commonly employed Batch Normalization (BN),
accelerating the convergence of the model and preventing gradient explosion
during training. The Huber loss is adopted as our loss function, with its
parameters determined experimentally. We apply the network to both 2-D
synthetic and field data. Compared to traditional denoising methods and the latest UL
framework, our proposed method demonstrates superior noise reduction
performance.
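The Huber objective mentioned above can be sketched as follows (a generic NumPy version with an assumed delta; the paper determines its parameters experimentally):

```python
import numpy as np

def huber_loss(pred, target, delta=1.0):
    # quadratic for small residuals, linear for large ones (outlier-robust)
    r = np.abs(pred - target)
    quadratic = 0.5 * r ** 2
    linear = delta * (r - 0.5 * delta)
    return np.where(r <= delta, quadratic, linear).mean()

print(huber_loss(np.array([0.0, 3.0]), np.array([0.5, 0.0])))  # 1.3125
```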
|
2502.13396
|
Prompting a Weighting Mechanism into LLM-as-a-Judge in Two-Step: A Case
Study
|
cs.CL
|
While Large Language Models (LLMs) have emerged as promising tools for
evaluating Natural Language Generation (NLG) tasks, their effectiveness is
limited by their inability to appropriately weigh the importance of different
topics, often overemphasizing minor details while undervaluing critical
information, leading to misleading assessments. Our work proposes an efficient
prompt design mechanism to address this specific limitation and provide a case
study. Through strategic prompt engineering that incorporates explicit
importance weighting mechanisms, we enhance the LLM-as-a-Judge's ability to
prioritize relevant information effectively, as demonstrated by an average
improvement of 6% in the Human Alignment Rate (HAR) metric.
|
2502.13398
|
$\mathtt{GeLLM^3O}$: Generalizing Large Language Models for
Multi-property Molecule Optimization
|
cs.LG cs.AI cs.CL physics.chem-ph q-bio.QM
|
Despite recent advancements, most computational methods for molecule
optimization are constrained to single- or double-property optimization tasks
and suffer from poor scalability and generalizability to novel optimization
tasks. Meanwhile, Large Language Models (LLMs) demonstrate remarkable
out-of-domain generalizability to novel tasks. To demonstrate LLMs' potential
for molecule optimization, we introduce $\mathtt{MoMUInstruct}$, the first
high-quality instruction-tuning dataset specifically focused on complex
multi-property molecule optimization tasks. Leveraging $\mathtt{MoMUInstruct}$,
we develop $\mathtt{GeLLM^3O}$s, a series of instruction-tuned LLMs for
molecule optimization. Extensive evaluations across 5 in-domain and 5
out-of-domain tasks demonstrate that $\mathtt{GeLLM^3O}$s consistently
outperform state-of-the-art baselines. $\mathtt{GeLLM^3O}$s also exhibit
outstanding zero-shot generalization to unseen tasks, significantly
outperforming powerful closed-source LLMs. Such strong generalizability
demonstrates the tremendous potential of $\mathtt{GeLLM^3O}$s as foundational
models for molecule optimization, thereby tackling novel optimization tasks
without resource-intensive retraining. $\mathtt{MoMUInstruct}$, models, and
code are accessible through https://github.com/ninglab/GeLLMO.
|
2502.13399
|
MaizeEar-SAM: Zero-Shot Maize Ear Phenotyping
|
cs.CV
|
Quantifying the variation in yield component traits of maize (Zea mays L.),
which together determine the overall productivity of this globally important
crop, plays a critical role in plant genetics research, plant breeding, and the
development of improved farming practices. Grain yield per acre is calculated
by multiplying the number of plants per acre, ears per plant, number of kernels
per ear, and the average kernel weight. The number of kernels per ear is
determined by the number of kernel rows per ear multiplied by the number of
kernels per row. Traditional manual methods for measuring these two traits are
time-consuming, limiting large-scale data collection. Recent automation efforts
using image processing and deep learning encounter challenges such as high
annotation costs and uncertain generalizability.
We tackle these issues by exploring Large Vision Models for zero-shot,
annotation-free maize kernel segmentation. By using an open-source large vision
model, the Segment Anything Model (SAM), we segment individual kernels in RGB
images of maize ears and apply a graph-based algorithm to calculate the number
of kernels per row. Our approach successfully identifies the number of kernels
per row across a wide range of maize ears, showing the potential of zero-shot
learning with foundation vision models combined with image processing
techniques to improve automation and reduce subjectivity in agronomic data
collection. All our code is open-sourced to make these affordable phenotyping
methods accessible to everyone.
|
2502.13403
|
Object-Pose Estimation With Neural Population Codes
|
cs.RO cs.LG
|
Robotic assembly tasks require object-pose estimation, particularly for tasks
that avoid costly mechanical constraints. Object symmetry complicates the
direct mapping of sensory input to object rotation, as the rotation becomes
ambiguous and lacks a unique training target. Some proposed solutions involve
evaluating multiple pose hypotheses against the input or predicting a
probability distribution, but these approaches suffer from significant
computational overhead. Here, we show that representing object rotation with a
neural population code overcomes these limitations, enabling a direct mapping
to rotation and end-to-end learning. As a result, population codes facilitate
fast and accurate pose estimation. On the T-LESS dataset, we achieve inference
in 3.2 milliseconds on an Apple M1 CPU and a Maximum Symmetry-Aware Surface
Distance accuracy of 84.7% using only gray-scale image input, compared to 69.7%
accuracy when directly mapping to pose.
|
2502.13406
|
Generative Predictive Control: Flow Matching Policies for Dynamic and
Difficult-to-Demonstrate Tasks
|
cs.RO cs.AI cs.SY eess.SY
|
Generative control policies have recently unlocked major progress in
robotics. These methods produce action sequences via diffusion or flow
matching, with training data provided by demonstrations. But despite enjoying
considerable success on difficult manipulation problems, generative policies
come with two key limitations. First, behavior cloning requires expert
demonstrations, which can be time-consuming and expensive to obtain. Second,
existing methods are limited to relatively slow, quasi-static tasks. In this
paper, we leverage a tight connection between sampling-based predictive control
and generative modeling to address each of these issues. In particular, we
introduce generative predictive control, a supervised learning framework for
tasks with fast dynamics that are easy to simulate but difficult to
demonstrate. We then show how trained flow-matching policies can be
warm-started at run-time, maintaining temporal consistency and enabling fast
feedback rates. We believe that generative predictive control offers a
complementary approach to existing behavior cloning methods, and hope that it
paves the way toward generalist policies that extend beyond quasi-static
demonstration-oriented tasks.
|
2502.13407
|
JL1-CD: A New Benchmark for Remote Sensing Change Detection and a Robust
Multi-Teacher Knowledge Distillation Framework
|
cs.CV cs.AI
|
Deep learning has achieved significant success in the field of remote sensing
image change detection (CD), yet two major challenges remain: the scarcity of
sub-meter, all-inclusive open-source CD datasets, and the difficulty of
achieving consistent and satisfactory detection results across images with
varying change areas. To address these issues, we introduce the JL1-CD dataset,
which contains 5,000 pairs of 512 x 512 pixel images with a resolution of 0.5
to 0.75 meters. Additionally, we propose a multi-teacher knowledge distillation
(MTKD) framework for CD. Experimental results on the JL1-CD and SYSU-CD
datasets demonstrate that the MTKD framework significantly improves the
performance of CD models with various network architectures and parameter
sizes, achieving new state-of-the-art results. The code is available at
https://github.com/circleLZY/MTKD-CD.
|
2502.13410
|
Tell Me Why: Incentivizing Explanations
|
cs.GT cs.AI econ.TH
|
Common sense suggests that when individuals explain why they believe
something, we can arrive at more accurate conclusions than when they simply
state what they believe. Yet, there is no known mechanism that provides
incentives to elicit explanations for beliefs from agents. This likely stems
from the fact that standard Bayesian models make assumptions (like conditional
independence of signals) that preempt the need for explanations, in order to
show efficient information aggregation. A natural justification for the value
of explanations is that agents' beliefs tend to be drawn from overlapping
sources of information, so agents' belief reports do not reveal all that needs
to be known. Indeed, this work argues that rationales (explanations of an
agent's private information) lead to more efficient aggregation by allowing
agents to efficiently identify what information they share and what information
is new. Building on this model of rationales, we present a novel 'deliberation
mechanism' to elicit rationales from agents in which truthful reporting of
beliefs and rationales is a perfect Bayesian equilibrium.
|
2502.13412
|
Explore-Construct-Filter: An Automated Framework for Rich and Reliable
API Knowledge Graph Construction
|
cs.SE cs.AI
|
The API Knowledge Graph (API KG) is a structured network that models API
entities and their relations, providing essential semantic insights for tasks
such as API recommendation, code generation, and API misuse detection. However,
constructing a knowledge-rich and reliable API KG presents several challenges.
Existing schema-based methods rely heavily on manual annotations to design KG
schemas, leading to excessive manual overhead. On the other hand, schema-free
methods, due to the lack of schema guidance, are prone to introducing noise,
reducing the KG's reliability. To address these issues, we propose the
Explore-Construct-Filter framework, an automated approach for API KG
construction based on large language models (LLMs). This framework consists of
three key modules: 1) KG exploration: LLMs simulate the workflow of annotators
to automatically design a schema with comprehensive type triples, minimizing
human intervention; 2) KG construction: Guided by the schema, LLMs extract
instance triples to construct a rich yet unreliable API KG; 3) KG filtering:
Removing invalid type triples and suspicious instance triples to construct a
rich and reliable API KG. Experimental results demonstrate that our method
surpasses the state-of-the-art method, achieving a 25.2% improvement in F1
score. Moreover, the Explore-Construct-Filter framework proves effective, with
the KG exploration module increasing KG richness by 133.6% and the KG filtering
module improving reliability by 26.6%. Finally, cross-model experiments confirm
the generalizability of our framework.
|
2502.13416
|
Detecting LLM Fact-conflicting Hallucinations Enhanced by
Temporal-logic-based Reasoning
|
cs.CL
|
Large language models (LLMs) face the challenge of hallucinations -- outputs
that seem coherent but are actually incorrect. A particularly damaging type is
fact-conflicting hallucination (FCH), where generated content contradicts
established facts. Addressing FCH presents three main challenges: 1)
Automatically constructing and maintaining large-scale benchmark datasets is
difficult and resource-intensive; 2) Generating complex and efficient test
cases that the LLM has not been trained on -- especially those involving
intricate temporal features -- is challenging, yet crucial for eliciting
hallucinations; and 3) Validating the reasoning behind LLM outputs is
inherently difficult, particularly with complex logical relationships, as it
requires transparency in the model's decision-making process.
This paper presents Drowzee, an innovative end-to-end metamorphic testing
framework that utilizes temporal logic to identify fact-conflicting
hallucinations (FCH) in large language models (LLMs). Drowzee builds a
comprehensive factual knowledge base by crawling sources like Wikipedia and
uses automated temporal-logic reasoning to convert this knowledge into a large,
extensible set of test cases with ground truth answers. LLMs are tested using
these cases through template-based prompts, which require them to generate both
answers and reasoning steps. To validate the reasoning, we propose two
semantic-aware oracles that compare the semantic structure of LLM outputs to
the ground truths. Across nine LLMs in nine different knowledge domains,
experimental results show that Drowzee effectively identifies rates of
non-temporal-related hallucinations ranging from 24.7% to 59.8%, and rates of
temporal-related hallucinations ranging from 16.7% to 39.2%.
|
2502.13417
|
RLTHF: Targeted Human Feedback for LLM Alignment
|
cs.CL cs.AI cs.LG
|
Fine-tuning large language models (LLMs) to align with user preferences is
challenging due to the high cost of quality human annotations in Reinforcement
Learning from Human Feedback (RLHF) and the generalizability limitations of AI
Feedback. To address these challenges, we propose RLTHF, a human-AI hybrid
framework that combines LLM-based initial alignment with selective human
annotations to achieve full-human annotation alignment with minimal effort.
RLTHF identifies hard-to-annotate samples mislabeled by LLMs using a reward
model's reward distribution and iteratively enhances alignment by integrating
strategic human corrections while leveraging LLM's correctly labeled samples.
Evaluations on HH-RLHF and TL;DR datasets show that RLTHF reaches full-human
annotation-level alignment with only 6-7% of the human annotation effort.
Furthermore, models trained on RLTHF's curated datasets for downstream tasks
outperform those trained on fully human-annotated datasets, underscoring the
effectiveness of RLTHF's strategic data curation.
|
2502.13418
|
Empirical Study of Dynamic Regret in Online Model Predictive Control for
Linear Time-Varying Systems
|
eess.SY cs.SY
|
Model Predictive Control (MPC) is a widely used technique for managing
time-varying systems, supported by extensive theoretical analysis. While
theoretical studies employing dynamic regret frameworks have established robust
performance guarantees, their empirical validation remains sparse. This paper
investigates the practical applicability of MPC by empirically evaluating the
assumptions and theoretical results proposed by Lin et al. [2022].
Specifically, we analyze the performance of online MPC under varying prediction
errors and prediction horizons in Linear Time-Varying (LTV) systems. Our study
examines the relationship between dynamic regret, prediction errors, and
prediction horizons, providing insights into the trade-offs involved. By
bridging theory and practice, this work advances the understanding and
application of MPC in real-world scenarios.
|
2502.13420
|
Probabilistically Robust Uncertainty Analysis and Optimal Control of
Continuous Lyophilization via Polynomial Chaos Theory
|
cs.CE cs.SY eess.SY math.OC
|
Lyophilization, also known as freeze drying, is a process commonly used to increase the
stability of various drug products in biotherapeutics manufacturing, e.g., mRNA
vaccines, allowing for higher storage temperature. While the current trends in
the industry are moving towards continuous manufacturing, the majority of
industrial lyophilization processes are still being operated in a batch mode.
This article presents a framework that accounts for the probabilistic
uncertainty during the primary and secondary drying steps in continuous
lyophilization. The probabilistic uncertainty is incorporated into the
mechanistic model via polynomial chaos theory (PCT). The resulting PCT-based
model is able to accurately and efficiently quantify the effects of uncertainty
on several critical process variables, including the temperature, sublimation
front, and concentration of bound water. The integration of the PCT-based model
into stochastic optimization and control is demonstrated. The proposed
framework and case studies can be used to guide the design and control of
continuous lyophilization while accounting for probabilistic uncertainty.
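For intuition, here is a deliberately tiny polynomial chaos example (a toy linear model with assumed coefficients, not the article's lyophilization model): with x ~ N(0, 1) and Hermite basis H0 = 1, H1 = x, the mean and variance of y follow directly from the expansion coefficients.

```python
# y = c0*H0(x) + c1*H1(x) with x ~ N(0, 1); orthogonality of the Hermite
# basis gives E[y] = c0 and Var[y] = sum_{k>=1} c_k^2 * k!.
coeffs = [5.0, 0.3]        # assumed toy values: c0 = 5.0, c1 = 0.3
mean = coeffs[0]
variance = coeffs[1] ** 2  # only the k = 1 term here, 1! = 1
print(mean, variance)
```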
|
2502.13422
|
TabSD: Large Free-Form Table Question Answering with SQL-Based Table
Decomposition
|
cs.CL cs.AI cs.DB
|
Question answering on free-form tables (TableQA) is challenging due to the
absence of predefined schemas and the presence of noise in large tables. While
Large Language Models (LLMs) have shown promise in TableQA, they struggle with
large free-form tables and noise sensitivity. To address these challenges, we
propose TabSD, a SQL-based decomposition model that enhances LLMs' ability to
process large free-form tables. TabSD generates SQL queries to guide table
decomposition and noise removal, then processes the resulting sub-tables for
better answer generation. Additionally, an SQL Verifier refines SQL outputs to enhance
decomposition accuracy. We introduce two new TableQA datasets, SLQA and SEQA,
which consist solely of large free-form tables and will be made publicly
available. Experimental results on four benchmark datasets
demonstrate that TabSD outperforms the best existing baseline models by 23.07%,
2.84%, 23.24% and 9.32% in accuracy, respectively, highlighting its
effectiveness in handling large and noisy free-form tables.
|
2502.13428
|
MCTS-KBQA: Monte Carlo Tree Search for Knowledge Base Question Answering
|
cs.CL cs.AI
|
This study explores how to enhance the reasoning capabilities of large
language models (LLMs) in knowledge base question answering (KBQA) by
leveraging Monte Carlo Tree Search (MCTS). Semantic parsing-based KBQA methods
are particularly challenging as these approaches require locating elements from
knowledge bases and generating logical forms, demanding not only extensive
annotated data but also strong reasoning capabilities. Although recent
approaches leveraging LLMs as agents have demonstrated considerable potential,
these studies are inherently constrained by their linear decision-making
processes. To address this limitation, we propose an MCTS-based framework that
enhances LLMs' reasoning capabilities through tree search methodology. We
design a step-wise reward mechanism that requires only
direct prompting of open-source instruction LLMs without additional
fine-tuning. Experimental results demonstrate that our approach significantly
outperforms linear decision-making methods, particularly in low-resource
scenarios. Additionally, we contribute new data resources to the KBQA community
by annotating intermediate reasoning processes for existing question-SPARQL
datasets using distant supervision. Experimental results on the extended
dataset demonstrate that our method achieves comparable performance to fully
supervised models while using significantly less training data.
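The tree-search loop the abstract builds on can be sketched generically (a toy game, not the paper's KBQA agent or reward design):

```python
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = [], 0, 0.0

def actions(state):
    # toy game: add 1 or 2 until reaching at least 6
    return [state + 1, state + 2] if state < 6 else []

def rollout(state):
    # random playout; reward favors ending on a larger number
    while actions(state):
        state = random.choice(actions(state))
    return state / 10.0

def mcts(root_state, iters=200, c=1.4):
    random.seed(0)                      # deterministic toy run
    root = Node(root_state)
    for _ in range(iters):
        node = root
        while node.children:            # 1) selection by UCT
            node = max(node.children,
                       key=lambda n: n.value / (n.visits + 1e-9)
                       + c * math.sqrt(math.log(node.visits + 1) / (n.visits + 1e-9)))
        for a in actions(node.state):   # 2) expansion
            node.children.append(Node(a, node))
        reward = rollout(node.state)    # 3) simulation
        while node is not None:         # 4) backpropagation
            node.visits += 1
            node.value += reward
            node = node.parent
    return max(root.children, key=lambda n: n.visits).state

print(mcts(0))  # best first action found by the search
```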
|
2502.13430
|
Vision-Based Generic Potential Function for Policy Alignment in
Multi-Agent Reinforcement Learning
|
cs.AI cs.LG
|
Guiding the policy of multi-agent reinforcement learning to align with human
common sense is a difficult problem, largely due to the complexity of modeling
common sense as a reward, especially in complex and long-horizon multi-agent
tasks. Recent works have shown the effectiveness of reward shaping, such as
potential-based rewards, to enhance policy alignment. The existing works,
however, primarily rely on experts to design rule-based rewards, which are
often labor-intensive and lack a high-level semantic understanding of common
sense. To solve this problem, we propose a hierarchical vision-based reward
shaping method. At the bottom layer, a visual-language model (VLM) serves as a
generic potential function, guiding the policy to align with human common sense
through its intrinsic semantic understanding. To help the policy adapt to
uncertainty and changes in long-horizon tasks, the top layer features an
adaptive skill selection module based on a visual large language model (vLLM).
The module uses instructions, video replays, and training records to
dynamically select a suitable potential function from a pre-designed pool.
Moreover, our method is theoretically proven to preserve the optimal policy.
Extensive experiments conducted in the Google Research Football environment
demonstrate that our method not only achieves a higher win rate but also
effectively aligns the policy with human common sense.
|
2502.13436
|
On Qualitative Preference in Alternating-time Temporal Logic with
Strategy Contexts
|
cs.LO cs.MA
|
We show how to add and eliminate binary preference on plays in
Alternating-time Temporal Logic (ATL) with strategy contexts on Concurrent Game
Models (CGMs) by means of a translation which preserves satisfaction in models
where preference-indiscernibility between plays is an equivalence relation of
finite index. The elimination technique also works for a companion second-order
path quantifier, which makes quantified path variables range over sets of plays
that are closed under preference-indiscernibility. We argue that the preference
operator and the specialized quantifier facilitate formulating interesting
solution concepts such as Nash equilibrium and secure equilibrium in a
straightforward way. We also present a novel translation from ATL with strategy
contexts to Quantified Computation Tree Logic (QCTL). Together with the
translation which eliminates preference and the specialized form of
quantification, this translation allows reasoning about infinite multiplayer
synchronous games on CGMs to be translated from the proposed extension of ATL
with strategy contexts into QCTL. The setting is related to that of ordered
objectives in the works of Bouyer, Brenguier, Markey and Ummels, except that
our focus is on the use of the temporal logic languages mentioned above, and we
rely on translations into QCTL for the algorithmic solutions.
|
2502.13440
|
Semi-supervised classification of bird vocalizations
|
cs.SD cs.AI cs.CV eess.AS q-bio.QM
|
Changes in bird populations can indicate broader changes in ecosystems,
making birds one of the most important animal groups to monitor. Combining
machine learning and passive acoustics enables continuous monitoring over
extended periods without direct human involvement. However, most existing
techniques require extensive expert-labeled datasets for training and cannot
easily detect time-overlapping calls in busy soundscapes. We propose a
semi-supervised acoustic bird detector designed to allow both the detection of
time-overlapping calls (when separated in frequency) and the use of few labeled
training samples. The classifier is trained and evaluated on a combination of
community-recorded open-source data and long-duration soundscape recordings
from Singapore. It achieves a mean F0.5 score of 0.701 across 315 classes from
110 bird species on a hold-out test set, with an average of 11 labeled training
samples per class. It outperforms the state-of-the-art BirdNET classifier on a
test set of 103 bird species despite significantly fewer labeled training
samples. The detector is further tested on 144 microphone-hours of continuous
soundscape data. The rich soundscape in Singapore makes suppression of false
positives a challenge on raw, continuous data streams. Nevertheless, we
demonstrate that achieving high precision in such environments with minimal
labeled training data is possible.
|
2502.13441
|
The Self-Improvement Paradox: Can Language Models Bootstrap Reasoning
Capabilities without External Scaffolding?
|
cs.CL cs.AI
|
Self-improving large language models (LLMs) -- i.e., to improve the
performance of an LLM by fine-tuning it with synthetic data generated by itself
-- is a promising way to advance the capabilities of LLMs while avoiding
extensive supervision. Existing approaches to self-improvement often rely on
external supervision signals in the form of seed data and/or assistance from
third-party models. This paper presents Crescent -- a simple yet effective
framework for generating high-quality synthetic question-answer data in a fully
autonomous manner. Crescent first elicits the LLM to generate raw questions via
a bait prompt, then diversifies these questions via rejection
sampling-based self-deduplication, and finally feeds the questions to the LLM
and collects the corresponding answers by means of majority voting. We show
that Crescent sheds light on the potential of true self-improvement with zero
external supervision signals for math reasoning; in particular,
Crescent-generated question-answer pairs suffice to (i) improve the reasoning
capabilities of an LLM while preserving its general performance (especially in
the 0-shot setting); and (ii) distil LLM knowledge to weaker models more
effectively than existing methods based on seed-dataset augmentation.
|
2502.13442
|
TreeCut: A Synthetic Unanswerable Math Word Problem Dataset for LLM
Hallucination Evaluation
|
cs.CL cs.AI cs.LG
|
Large language models (LLMs) now achieve near-human performance on standard
math word problem benchmarks (e.g., GSM8K), yet their true reasoning ability
remains disputed. A key concern is that models often produce confident, yet
unfounded, answers to unanswerable problems. We introduce TreeCut, a synthetic
dataset that systematically generates infinite unanswerable math word problems
and their answerable counterparts, by representing each question as a tree and
removing chosen necessary conditions. Experiments show TreeCut effectively
induces hallucinations in large language models, including GPT-4o and o3-mini,
with rates of 61% and 42% in their respective worst-case scenarios. Further
analysis highlights that deeper or more complex trees, composite item names,
and removing a necessary condition near the middle of a path all increase the
likelihood of hallucinations, underscoring the persistent challenges LLMs face
in identifying unanswerable math problems.
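The cutting operation can be sketched on a toy dependency tree (illustrative names only, not the dataset's generator): a question is answerable iff every quantity the target depends on is either given or derivable; deleting one necessary fact makes it unanswerable.

```python
# Toy sketch: quantities form a tree; "total" needs both of its parts.
deps = {"total": ["apples", "pears"], "apples": [], "pears": []}
facts = {"apples": 3, "pears": 4}

def answerable(target, facts, deps):
    # a quantity is known if given directly or if all its parts are known
    if target in facts:
        return True
    kids = deps.get(target, [])
    return bool(kids) and all(answerable(k, facts, deps) for k in kids)

cut = dict(facts)
cut.pop("pears")                      # remove a chosen necessary condition
print(answerable("total", facts, deps), answerable("total", cut, deps))
```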
|
2502.13443
|
Physics-Aware Robotic Palletization with Online Masking Inference
|
cs.RO
|
The efficient planning of stacking boxes, especially in the online setting
where the sequence of item arrivals is unpredictable, remains a critical
challenge in modern warehouse and logistics management. Existing solutions
often address box size variations, but overlook their intrinsic and physical
properties, such as density and rigidity, which are crucial for real-world
applications. We use reinforcement learning (RL) to solve this problem by
employing action space masking to direct the RL policy toward valid actions.
Unlike previous methods that rely on heuristic stability assessments, which are
difficult to verify in physical scenarios, our framework utilizes online
learning to dynamically train the action space mask, eliminating the need for
manual heuristic design. Extensive experiments demonstrate that our proposed
method outperforms existing state-of-the-art approaches. Furthermore, we deploy our
learned task planner in a real-world robotic palletizer, validating its
practical applicability in operational settings.
|
2502.13446
|
Adopting Whisper for Confidence Estimation
|
eess.AS cs.LG
|
Recent research on word-level confidence estimation for speech recognition
systems has primarily focused on lightweight models known as Confidence
Estimation Modules (CEMs), which rely on hand-engineered features derived from
Automatic Speech Recognition (ASR) outputs. In contrast, we propose a novel
end-to-end approach that leverages the ASR model itself (Whisper) to generate
word-level confidence scores. Specifically, we introduce a method in which the
Whisper model is fine-tuned to produce scalar confidence scores given an audio
input and its corresponding hypothesis transcript. Our experiments demonstrate
that the fine-tuned Whisper-tiny model, comparable in size to a strong CEM
baseline, achieves similar performance on the in-domain dataset and surpasses
the CEM baseline on eight out-of-domain datasets, whereas the fine-tuned
Whisper-large model consistently outperforms the CEM baseline by a substantial
margin across all datasets.
|
2502.13447
|
Enhancing Chest X-ray Classification through Knowledge Injection in
Cross-Modality Learning
|
cs.CV cs.CL
|
The integration of artificial intelligence in medical imaging has shown
tremendous potential, yet the relationship between pre-trained knowledge and
performance in cross-modality learning remains unclear. This study investigates
how explicitly injecting medical knowledge into the learning process affects
the performance of cross-modality classification, focusing on Chest X-ray (CXR)
images. We introduce a novel Set Theory-based knowledge injection framework
that generates captions for CXR images with controllable knowledge granularity.
Using this framework, we fine-tune the CLIP model on captions with varying levels
of medical information. We evaluate the model's performance through zero-shot
classification on the CheXpert dataset, a benchmark for CXR classification. Our
results demonstrate that injecting fine-grained medical knowledge substantially
improves classification accuracy, achieving 72.5\% compared to 49.9\% when
using human-generated captions. This highlights the crucial role of
domain-specific knowledge in medical cross-modality learning. Furthermore, we
explore the influence of knowledge density and the use of domain-specific Large
Language Models (LLMs) for caption generation, finding that denser knowledge
and specialized LLMs contribute to enhanced performance. This research advances
medical image analysis by demonstrating the effectiveness of knowledge
injection for improving automated CXR classification, paving the way for more
accurate and reliable diagnostic tools.
|
2502.13449
|
Mol-LLaMA: Towards General Understanding of Molecules in Large Molecular
Language Model
|
cs.LG physics.chem-ph
|
Understanding molecules is key to understanding organisms and driving
advances in drug discovery, requiring interdisciplinary knowledge across
chemistry and biology. Although large molecular language models have achieved
notable success in interpreting molecular structures, their instruction
datasets are limited to the specific knowledge from task-oriented datasets and
do not fully cover the fundamental characteristics of molecules, hindering
their abilities as general-purpose molecular assistants. To address this issue,
we propose Mol-LLaMA, a large molecular language model that grasps the general
knowledge centered on molecules via multi-modal instruction tuning. To this
end, we design key data types that encompass the fundamental features of
molecules, incorporating essential knowledge from molecular structures. In
addition, to improve understanding of molecular features, we introduce a module
that integrates complementary information from different molecular encoders,
leveraging the distinct advantages of different molecular representations. Our
experimental results demonstrate that Mol-LLaMA is capable of comprehending the
general features of molecules and generating relevant responses to users'
queries with detailed explanations, implying its potential as a general-purpose
assistant for molecular analysis.
|
2502.13450
|
Interleaved Gibbs Diffusion for Constrained Generation
|
cs.LG cs.AI
|
We introduce Interleaved Gibbs Diffusion (IGD), a novel generative modeling
framework for mixed continuous-discrete data, focusing on constrained
generation problems. Prior works on discrete and continuous-discrete diffusion
models assume factorized denoising distribution for fast generation, which can
hinder the modeling of strong dependencies between random variables encountered
in constrained generation. IGD moves beyond this by interleaving continuous and
discrete denoising algorithms via a discrete time Gibbs sampling type Markov
chain. IGD provides flexibility in the choice of denoisers, allows conditional
generation via state-space doubling and inference time scaling via the
ReDeNoise method. Empirical evaluations on three challenging tasks (solving
3-SAT, generating molecule structures, and generating layouts) demonstrate
state-of-the-art performance. Notably, IGD achieves a 7% improvement on 3-SAT
out of the box and achieves state-of-the-art results in molecule generation
without relying on equivariant diffusion or domain-specific architectures. We
explore a wide range of modeling and interleaving strategies, along with
hyperparameters, in each of these problems.
|
2502.13451
|
MapNav: A Novel Memory Representation via Annotated Semantic Maps for
VLM-based Vision-and-Language Navigation
|
cs.RO
|
Vision-and-language navigation (VLN) is a key task in Embodied AI, requiring
agents to navigate diverse and unseen environments while following natural
language instructions. Traditional approaches rely heavily on historical
observations as spatio-temporal contexts for decision making, leading to
significant storage and computational overhead. In this paper, we introduce
MapNav, a novel end-to-end VLN model that leverages Annotated Semantic Map
(ASM) to replace historical frames. Specifically, our approach constructs a
top-down semantic map at the start of each episode and updates it at each
timestep, allowing for precise object mapping and structured navigation
information. We then enhance this map with explicit textual labels for key
regions, transforming abstract semantics into clear navigation cues, to
generate our ASM. The MapNav agent uses the constructed ASM as input and
leverages the powerful end-to-end capabilities of VLMs to empower VLN. Extensive experiments
demonstrate that MapNav achieves state-of-the-art (SOTA) performance in both
simulated and real-world environments, validating the effectiveness of our
method. Moreover, we will release our ASM generation source code and dataset to
ensure reproducibility, contributing valuable resources to the field. We
believe that our proposed MapNav can be used as a new memory representation
method in VLN, paving the way for future research in this field.
|
2502.13452
|
Ephemerality meets LiDAR-based Lifelong Mapping
|
cs.RO
|
Lifelong mapping is crucial for the long-term deployment of robots in dynamic
environments. In this paper, we present ELite, an ephemerality-aided
LiDAR-based lifelong mapping framework which can seamlessly align multiple
session data, remove dynamic objects, and update maps in an end-to-end fashion.
Map elements are typically classified as static or dynamic, but cases like
parked cars indicate the need for more detailed categories than binary. Central
to our approach is the probabilistic modeling of the world into two-stage
$\textit{ephemerality}$, which represents the transiency of points in the map
within two different time scales. By leveraging the spatiotemporal context
encoded in ephemeralities, ELite can accurately infer transient map elements,
maintain a reliable up-to-date static map, and improve robustness in aligning
the new data in a more fine-grained manner. Extensive real-world experiments on
long-term datasets demonstrate the robustness and effectiveness of our system.
The source code is publicly available for the robotics community:
https://github.com/dongjae0107/ELite.
|
2502.13457
|
Provably Efficient Multi-Objective Bandit Algorithms under
Preference-Centric Customization
|
cs.LG
|
Multi-objective multi-armed bandit (MO-MAB) problems traditionally aim to
achieve Pareto optimality. However, real-world scenarios often involve users
with varying preferences across objectives, resulting in a Pareto-optimal arm
that may score high for one user but perform quite poorly for another. This
highlights the need for customized learning, a factor often overlooked in prior
research. To address this, we study a preference-aware MO-MAB framework in the
presence of explicit user preference. It shifts the focus from achieving Pareto
optimality to further optimizing within the Pareto front under
preference-centric customization. To our knowledge, this is the first
theoretical study of customized MO-MAB optimization with explicit user
preferences. Motivated by practical applications, we explore two scenarios:
unknown preference and hidden preference, each presenting unique challenges for
algorithm design and analysis. At the core of our algorithms are preference
estimation and preference-aware optimization mechanisms to adapt to user
preferences effectively. We further develop novel analytical techniques to
establish near-optimal regret of the proposed algorithms. Strong empirical
performance confirms the effectiveness of our approach.
|
2502.13458
|
ThinkGuard: Deliberative Slow Thinking Leads to Cautious Guardrails
|
cs.CL cs.AI cs.CR cs.LG
|
Ensuring the safety of large language models (LLMs) is critical as they are
deployed in real-world applications. Existing guardrails rely on rule-based
filtering or single-pass classification, limiting their ability to handle
nuanced safety violations. To address this, we propose ThinkGuard, a
critique-augmented guardrail model that distills knowledge from high-capacity
LLMs by generating structured critiques alongside safety labels. Fine-tuned on
critique-augmented data, ThinkGuard captures a deliberative thinking ability
that drastically enhances the guardrail's cautiousness and interpretability. Evaluated on
multiple safety benchmarks, ThinkGuard achieves the highest average F1 and
AUPRC, outperforming all baselines. Compared to LLaMA Guard 3, ThinkGuard
improves accuracy by 16.1% and macro F1 by 27.0%. Moreover, it surpasses
label-only fine-tuned models, confirming that structured critiques enhance both
classification precision and nuanced safety reasoning while maintaining
computational efficiency.
|
2502.13459
|
Poisoned Source Code Detection in Code Models
|
cs.CR cs.LG
|
Deep learning models have gained popularity for conducting various tasks
involving source code. However, their black-box nature raises concerns about
potential risks. One such risk is a poisoning attack, where an attacker
intentionally contaminates the training set with malicious samples to mislead
the model's predictions in specific scenarios. To protect source code models
from poisoning attacks, we introduce CodeGarrison (CG), a hybrid deep-learning
model that relies on code embeddings to identify poisoned code samples. We
evaluated CG against the state-of-the-art technique ONION for detecting
poisoned samples generated by DAMP, MHM, ALERT, as well as a novel poisoning
technique named CodeFooler. Results showed that CG significantly outperformed
ONION with an accuracy of 93.5%. We also tested CG's robustness against unknown
attacks and achieved an average accuracy of 85.6% in identifying poisoned
samples across the four attacks mentioned above.
|
2502.13464
|
Estimating Commonsense Plausibility through Semantic Shifts
|
cs.CL cs.AI
|
Commonsense plausibility estimation is critical for evaluating language
models (LMs), yet existing generative approaches--reliant on likelihoods or
verbalized judgments--struggle with fine-grained discrimination. In this paper,
we propose ComPaSS, a novel discriminative framework that quantifies
commonsense plausibility by measuring semantic shifts when augmenting sentences
with commonsense-related information. Plausible augmentations induce minimal
shifts in semantics, while implausible ones result in substantial deviations.
Evaluations on two types of fine-grained commonsense plausibility estimation
tasks across different backbones, including LLMs and vision-language models
(VLMs), show that ComPaSS consistently outperforms baselines. It demonstrates
the advantage of discriminative approaches over generative methods in
fine-grained commonsense plausibility evaluation. Experiments also show that
(1) VLMs, when integrated with ComPaSS, yield superior performance to LMs on
vision-grounded commonsense tasks; (2) contrastive pre-training sharpens
backbone models' ability to capture semantic nuances, thereby further enhancing
ComPaSS.
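The semantic-shift idea can be illustrated with a minimal sketch: embed a sentence before and after a commonsense augmentation and score implausibility as the drop in cosine similarity. The embeddings below are made up for illustration; ComPaSS itself relies on a trained backbone encoder.

```python
import numpy as np

def semantic_shift(base_emb, augmented_emb):
    """Implausibility score: 1 - cosine similarity between the embedding of a
    sentence and the embedding of its commonsense-augmented version.
    A smaller shift suggests a more plausible augmentation."""
    cos = np.dot(base_emb, augmented_emb) / (
        np.linalg.norm(base_emb) * np.linalg.norm(augmented_emb))
    return 1.0 - cos

base = np.array([1.0, 0.2, 0.0])          # e.g. "He cut the bread"
plausible = np.array([0.95, 0.25, 0.05])  # e.g. "... with a knife"
implausible = np.array([0.1, 0.9, 0.4])   # e.g. "... with a pillow"

# The plausible augmentation barely moves the sentence in embedding space:
assert semantic_shift(base, plausible) < semantic_shift(base, implausible)
```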
|
2502.13465
|
HawkBench: Investigating Resilience of RAG Methods on Stratified
Information-Seeking Tasks
|
cs.IR cs.AI cs.CL
|
In real-world information-seeking scenarios, users have dynamic and diverse
needs, requiring RAG systems to demonstrate adaptable resilience. To
comprehensively evaluate the resilience of current RAG methods, we introduce
HawkBench, a human-labeled, multi-domain benchmark designed to rigorously
assess RAG performance across categorized task types. By stratifying tasks
based on information-seeking behaviors, HawkBench provides a systematic
evaluation of how well RAG systems adapt to diverse user needs.
Unlike existing benchmarks, which focus primarily on specific task types
(mostly factoid queries) and rely on varying knowledge bases, HawkBench offers:
(1) systematic task stratification to cover a broad range of query types,
including both factoid and rationale queries, (2) integration of multi-domain
corpora across all task types to mitigate corpus bias, and (3) rigorous
annotation for high-quality evaluation.
HawkBench includes 1,600 high-quality test samples, evenly distributed across
domains and task types. Using this benchmark, we evaluate representative RAG
methods, analyzing their performance in terms of answer quality and response
latency. Our findings highlight the need for dynamic task strategies that
integrate decision-making, query interpretation, and global knowledge
understanding to improve RAG generalizability. We believe HawkBench serves as a
pivotal benchmark for advancing the resilience of RAG methods and their ability
to achieve general-purpose information seeking.
|
2502.13467
|
Continuous K-Max Bandits
|
cs.LG
|
We study the $K$-Max combinatorial multi-armed bandits problem with
continuous outcome distributions and weak value-index feedback: each base arm
has an unknown continuous outcome distribution, and in each round the learning
agent selects $K$ arms, obtains the maximum value sampled from these $K$ arms
as reward and observes this reward together with the corresponding arm index as
feedback. This setting captures critical applications in recommendation
systems, distributed computing, server scheduling, etc. The continuous $K$-Max
bandits introduce unique challenges, including discretization error from
continuous-to-discrete conversion, non-deterministic tie-breaking under limited
feedback, and biased estimation due to partial observability. Our key
contribution is the computationally efficient algorithm DCK-UCB, which combines
adaptive discretization with bias-corrected confidence bounds to tackle these
challenges. For general continuous distributions, we prove that DCK-UCB
achieves a $\widetilde{\mathcal{O}}(T^{3/4})$ regret upper bound, establishing
the first sublinear regret guarantee for this setting. Furthermore, we identify
an important special case with exponential distributions under full-bandit
feedback. In this case, our proposed algorithm MLE-Exp enables
$\widetilde{\mathcal{O}}(\sqrt{T})$ regret upper bound through maximal
log-likelihood estimation, achieving near-minimax optimality.
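The weak value-index feedback model described above can be simulated in a few lines. This is a hypothetical simulator for the problem setting only, not an implementation of DCK-UCB or MLE-Exp; the uniform arm distributions are placeholders.

```python
import random

def k_max_feedback(arm_samplers, chosen, rng):
    """Simulate one round of continuous K-Max bandit feedback: draw one
    outcome per chosen arm, return (max value, index of the arm attaining
    it). Only this pair is observed, not the individual draws."""
    draws = [(arm_samplers[i](rng), i) for i in chosen]
    value, index = max(draws)  # tuple max; ties broken by higher arm index
    return value, index

rng = random.Random(0)
# Three base arms with continuous outcome distributions (uniform stand-ins).
arms = [lambda r: r.uniform(0, 0.5),
        lambda r: r.uniform(0, 1.0),
        lambda r: r.uniform(0, 1.5)]
value, index = k_max_feedback(arms, chosen=[0, 2], rng=rng)
```

The partial observability the paper addresses is visible here: the learner sees which arm won and with what value, but nothing about the losing arms' draws.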
|
2502.13471
|
Some Insights of Construction of Feature Graph to Learn Pairwise Feature
Interactions with Graph Neural Networks
|
cs.LG cs.AI stat.ML
|
Feature interaction is crucial in predictive machine learning models, as it
captures the relationships between features that influence model performance.
In this work, we focus on pairwise interactions and investigate their
importance in constructing feature graphs for Graph Neural Networks (GNNs).
Rather than proposing new methods, we leverage existing GNN models and tools to
explore the relationship between feature graph structures and their
effectiveness in modeling interactions. Through experiments on synthesized
datasets, we uncover that edges between interacting features are important for
enabling GNNs to model feature interactions effectively. We also observe that
including non-interaction edges can act as noise, degrading model performance.
Furthermore, we provide theoretical support for sparse feature graph selection
using the Minimum Description Length (MDL) principle. We prove that feature
graphs retaining only necessary interaction edges yield a more efficient and
interpretable representation than complete graphs, aligning with Occam's Razor.
Our findings offer both theoretical insights and practical guidelines for
designing feature graphs that improve the performance and interpretability of
GNN models.
|
2502.13472
|
FlexDuo: A Pluggable System for Enabling Full-Duplex Capabilities in
Speech Dialogue Systems
|
cs.CL cs.HC
|
Full-Duplex Speech Dialogue Systems (Full-Duplex SDS) have significantly
enhanced the naturalness of human-machine interaction by enabling real-time
bidirectional communication. However, existing approaches face challenges such
as difficulties in independent module optimization and contextual noise
interference due to highly coupled architectural designs and oversimplified
binary state modeling. This paper proposes FlexDuo, a flexible full-duplex
control module that decouples duplex control from spoken dialogue systems
through a plug-and-play architectural design. Furthermore, inspired by human
information-filtering mechanisms in conversations, we introduce an explicit
Idle state. On one hand, the Idle state filters redundant noise and irrelevant
audio to enhance dialogue quality. On the other hand, it establishes a semantic
integrity-based buffering mechanism, reducing the risk of mutual interruptions
while ensuring accurate response transitions. Experimental results on the
Fisher corpus demonstrate that FlexDuo reduces the false interruption rate by
24.9% and improves response accuracy by 7.6% compared to integrated full-duplex
dialogue system baselines. It also outperforms voice activity detection (VAD)
controlled baseline systems in both Chinese and English dialogue quality. The
proposed modular architecture and state-based dialogue model provide a novel
technical pathway for building flexible and efficient duplex dialogue systems.
|
2502.13474
|
Towards Lightweight, Adaptive and Attribute-Aware Multi-Aspect
Controllable Text Generation with Large Language Models
|
cs.CL
|
Multi-aspect controllable text generation aims to control text generation in
attributes from multiple aspects, making it a complex but powerful task in
natural language processing. Supervised fine-tuning methods are often employed
for this task due to their simplicity and effectiveness. However, they still
have some limitations: low rank adaptation (LoRA) only fine-tunes a few
parameters and has suboptimal control effects, while full fine-tuning (FFT)
requires significant computational resources and is susceptible to overfitting,
particularly when data is limited. Moreover, existing works typically train
multi-aspect controllable text generation models using only single-aspect
annotated data, which results in discrepancies in data distribution; at the
same time, accurately generating text with specific attributes is a challenge
that requires strong attribute-aware capabilities. To address these
limitations, we propose a lightweight, adaptive and attribute-aware framework
for multi-aspect controllable text generation. Our framework can dynamically
adjust model parameters according to different aspects of data to achieve
controllable text generation, aiming to optimize performance across multiple
aspects. Experimental results show that our framework outperforms other strong
baselines, achieves state-of-the-art performance, adapts well to data
discrepancies, and is more accurate in attribute perception.
|
2502.13475
|
LLM should think and action as a human
|
cs.CL cs.AI
|
It has become popular lately to train large language models as chat
assistants, but in conversations between the user and the chat assistant, some
prompts require multiple turns between the chat assistant and the user. Such
multi-turn conversations raise several issues: the chat assistant's responses
are prone to errors and may fail to help users achieve their goals; it is
difficult for the chat assistant to generate responses that follow different
processes, based on actual needs, for the same command or request; and the
chat assistant requires the use of tools, but current approaches are neither
elegant nor efficient, and the number of tool calls they can support is
limited. The main reason for these issues is that large language models lack
human-like thinking, reasoning, and planning abilities, as well as the ability
to execute plans. To solve these issues, we propose a thinking method based on
a built-in chain of thought: in a multi-turn conversation, for each user
prompt, the large language model thinks over elements such as the chat
history, thinking context, action calls, memory, and knowledge, performs
detailed reasoning and planning, and acts according to the plan. We also
explore how the large language model enhances its thinking ability through
this method: collect training datasets according to the thinking method and
fine-tune the large language model through supervised learning; then train a
consistency reward model and use it as a reward function to fine-tune the
large language model with reinforcement learning, so that the reinforced model
produces outputs that follow this way of thinking. Our experimental results
show that the reasoning and planning abilities of the large language model are
enhanced, and the issues in multi-turn conversations are resolved.
|
2502.13476
|
Integration of Agentic AI with 6G Networks for Mission-Critical
Applications: Use-case and Challenges
|
cs.AI cs.NI
|
We are in a transformative era, and advances in Artificial Intelligence (AI),
especially the foundational models, are constantly in the news. AI has been an
integral part of many applications that rely on automation for service
delivery, and one of them is mission-critical public safety applications. The
problem with AI-oriented mission-critical applications is the human-in-the-loop
system and the lack of adaptability to dynamic conditions while maintaining
situational awareness. Agentic AI (AAI) has gained a lot of attention recently
due to its ability to analyze textual data through a contextual lens while
quickly adapting to conditions. In this context, this paper proposes an AAI
framework for mission-critical applications. We propose a novel framework with
a multi-layer architecture to realize the AAI. We also present a detailed
implementation of the AAI layer that bridges the gap between network
infrastructure and mission-critical applications. Our preliminary analysis shows that the AAI
reduces initial response time by 5.6 minutes on average, while alert generation
time is reduced by 15.6 seconds on average and resource allocation is improved
by up to 13.4%. We also show that the AAI methods improve the number of
concurrent operations by 40, which reduces the recovery time by up to 5.2
minutes. Finally, we highlight some of the issues and challenges that need to
be considered when implementing AAI frameworks.
|
2502.13477
|
An Enhancement of Cuckoo Search Algorithm for Optimal Earthquake
Evacuation Space Allocation in Intramuros, Manila City
|
cs.NE
|
The Cuckoo Search Algorithm (CSA), while effective in solving complex
optimization problems, faces limitations in random population initialization
and reliance on fixed parameters. Random initialization of the population often
produces clustered solutions, leading to uneven exploration of the search
space and hindering effective global optimization. Furthermore, the use of
fixed values for discovery rate and step size creates a trade-off between
solution accuracy and convergence speed. To address these limitations, an
Enhanced Cuckoo Search Algorithm (ECSA) is proposed. This algorithm utilizes
the Sobol Sequence to generate a more uniformly distributed initial population
and incorporates Cosine Annealing with Warm Restarts to dynamically adjust the
parameters. The performance of the algorithms was evaluated on 13 benchmark
functions (7 unimodal, 6 multimodal). Statistical analyses were conducted to
determine the significance and consistency of the results. The ECSA outperforms
the CSA in 11 out of 13 benchmark functions with a mean fitness improvement of
30% across all functions, achieving 35% for unimodal functions and 24% for
multimodal functions. The enhanced algorithm demonstrated increased convergence
efficiency, indicating its superiority to the CSA in solving a variety of
optimization problems. The ECSA is subsequently applied to optimize earthquake
evacuation space allocation in Intramuros, Manila.
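The dynamic parameter adjustment via Cosine Annealing with Warm Restarts can be sketched as follows. The cycle length and the discovery-rate range are illustrative; the exact restart schedule used in ECSA may differ (and its Sobol-based initialization is not shown here).

```python
import math

def cosine_annealing_warm_restarts(p_max, p_min, t, cycle_len):
    """Cosine-annealed parameter value at iteration t, with a warm restart
    every cycle_len iterations: the value decays from p_max to p_min within
    a cycle, then jumps back to p_max at the start of the next cycle."""
    t_cur = t % cycle_len
    return p_min + 0.5 * (p_max - p_min) * (1 + math.cos(math.pi * t_cur / cycle_len))

# E.g. annealing the cuckoo discovery rate between 0.5 and 0.1:
rate_start = cosine_annealing_warm_restarts(0.5, 0.1, t=0, cycle_len=50)     # 0.5
rate_mid = cosine_annealing_warm_restarts(0.5, 0.1, t=25, cycle_len=50)      # 0.3
rate_restart = cosine_annealing_warm_restarts(0.5, 0.1, t=50, cycle_len=50)  # 0.5 again
```

Large early values favor exploration; the cosine decay within each cycle favors exploitation, and the restart re-injects exploration, which is the trade-off the fixed-parameter CSA cannot make.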
|
2502.13480
|
Astra: Efficient and Money-saving Automatic Parallel Strategies Search
on Heterogeneous GPUs
|
cs.DC cs.AI
|
In this paper, we introduce an efficient and money-saving automatic parallel
strategies search framework on heterogeneous GPUs: Astra. First, Astra searches
for the efficiency-optimal parallel strategy in both GPU configurations search
space (GPU types and GPU numbers) and the parallel parameters search space. Then,
Astra provides a solution for heterogeneous GPUs by mathematically
modeling the time consumption of heterogeneous training. Finally, Astra is the
first to incorporate monetary cost into automatic parallel strategy search. The
experiment results demonstrate that Astra can achieve better throughput than
expert-designed strategies. The search time cost for Astra can also be limited
to 1.27 seconds in a single-GPU setting and less than 1.35 minutes in a
heterogeneous-GPU setting on average with an accuracy of over 95%.
|
2502.13481
|
LLM4Tag: Automatic Tagging System for Information Retrieval via Large
Language Models
|
cs.IR
|
Tagging systems play an essential role in various information retrieval
applications such as search engines and recommender systems. Recently, Large
Language Models (LLMs) have been applied in tagging systems due to their
extensive world knowledge, semantic understanding, and reasoning capabilities.
Despite achieving remarkable performance, existing methods still have
limitations, including difficulties in retrieving relevant candidate tags
comprehensively, challenges in adapting to emerging domain-specific knowledge,
and the lack of reliable tag confidence quantification. To address these three
limitations above, we propose an automatic tagging system LLM4Tag. First, a
graph-based tag recall module is designed to effectively and comprehensively
construct a small-scale highly relevant candidate tag set. Subsequently, a
knowledge-enhanced tag generation module is employed to generate accurate tags
with long-term and short-term knowledge injection. Finally, a tag confidence
calibration module is introduced to generate reliable tag confidence scores.
Extensive experiments over three large-scale industrial datasets show that
LLM4Tag significantly outperforms the state-of-the-art baselines and LLM4Tag
has been deployed online for content tagging to serve hundreds of millions of
users.
|
2502.13482
|
Smoothed Normalization for Efficient Distributed Private Optimization
|
cs.LG cs.CR cs.DC math.OC stat.ML
|
Federated learning enables training machine learning models while preserving
the privacy of participants. Surprisingly, there is no differentially private
distributed method for smooth, non-convex optimization problems. The reason is
that standard privacy techniques require bounding the participants'
contributions, usually enforced via $\textit{clipping}$ of the updates.
Existing literature typically ignores the effect of clipping by assuming the
boundedness of gradient norms or analyzes distributed algorithms with clipping
but ignores DP constraints. In this work, we study an alternative approach via
$\textit{smoothed normalization}$ of the updates motivated by its favorable
performance in the single-node setting. By integrating smoothed normalization
with an error-feedback mechanism, we design a new distributed algorithm
$\alpha$-$\sf NormEC$. We prove that our method achieves a superior convergence
rate over prior works. By extending $\alpha$-$\sf NormEC$ to the DP setting, we
obtain the first differentially private distributed optimization algorithm with
provable convergence guarantees. Finally, our empirical results from neural
network training indicate robust convergence of $\alpha$-$\sf NormEC$ across
different parameter settings.
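A minimal sketch of the idea, assuming the common form g/(α + ||g||) of smoothed normalization (the paper's exact operator may differ), contrasted with the hard clipping it replaces:

```python
import numpy as np

def smoothed_normalize(g, alpha):
    """Smoothed normalization of an update g: g / (alpha + ||g||).
    Unlike hard clipping, this is a smooth function of g, and the output
    norm is always strictly below 1 (sketch only; the paper's exact
    operator may differ)."""
    return g / (alpha + np.linalg.norm(g))

def clip(g, c):
    """Standard clipping used to bound contributions in DP training."""
    return g * min(1.0, c / np.linalg.norm(g))

g = np.array([3.0, 4.0])                # ||g|| = 5
out = smoothed_normalize(g, alpha=1.0)  # norm 5/6, strictly below 1
```

Either operator bounds a participant's contribution, which is what the privacy analysis needs; the smoothed version avoids the non-smooth kink of clipping at ||g|| = c.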
|
2502.13484
|
2.5D U-Net with Depth Reduction for 3D CryoET Object Identification
|
cs.CV
|
Cryo-electron tomography (cryoET) is a crucial technique for unveiling the
structure of protein complexes. Automatically analyzing tomograms captured by
cryoET is an essential step toward understanding cellular structures. In this
paper, we introduce the 4th place solution from the CZII - CryoET Object
Identification competition, which was organized to advance the development of
automated tomogram analysis techniques. Our solution adopted a heatmap-based
keypoint detection approach, utilizing an ensemble of two different types of
2.5D U-Net models with depth reduction. Despite its highly unified and simple
architecture, our method achieved 4th place, demonstrating its effectiveness.
|
2502.13486
|
Kernel Mean Embedding Topology: Weak and Strong Forms for Stochastic
Kernels and Implications for Model Learning
|
eess.SY cs.LG cs.SY math.OC math.ST stat.TH
|
We introduce a novel topology, called Kernel Mean Embedding Topology, for
stochastic kernels, in a weak and strong form. This topology, defined on the
spaces of Bochner integrable functions from a signal space to a space of
probability measures endowed with a Hilbert space structure, allows for a
versatile formulation, yielding both a strong and a weak form. (i) For the weak
formulation, we highlight its utility for relaxed policy spaces and investigate
connections with the Young narrow
topology and Borkar (or $w^*$)-topology, and establish equivalence properties.
We report that, while both the $w^*$-topology and kernel mean embedding
topology are relatively compact, they are not closed. Conversely, while the
Young narrow topology is closed, it lacks relative compactness. (ii) We show
that the strong form provides an appropriate formulation for placing topologies
on spaces of models characterized by stochastic kernels with explicit
robustness and learning theoretic implications on optimal stochastic control
under discounted or average cost criteria. (iii) We show that this topology
possesses several properties making it ideal to study optimality,
approximations, robustness and continuity properties. In particular, the kernel
mean embedding topology has a Hilbert space structure, which is particularly
useful for approximating stochastic kernels through simulation data.
|
2502.13487
|
Transferring Textual Preferences to Vision-Language Understanding
through Model Merging
|
cs.CL cs.AI cs.CV cs.LG
|
Large vision-language models (LVLMs) perform outstandingly across various
multimodal tasks. However, their ability to evaluate generated content remains
limited, and training vision-language reward models (VLRMs) with preference
data is computationally expensive. This paper explores a training-free
alternative by merging text-based reward models (RMs) with LVLMs to create
VLRMs. Our approach shows that integrating these models leads to improved
performance over LVLMs' scoring and text-based RMs, offering an efficient
method for incorporating textual preferences into LVLMs.
|
2502.13490
|
What are Models Thinking about? Understanding Large Language Model
Hallucinations "Psychology" through Model Inner State Analysis
|
cs.CL cs.AI
|
Large language model (LLM) systems suffer from the models' unstable ability
to generate valid and factual content, resulting in hallucination generation.
Current hallucination detection methods heavily rely on out-of-model
information sources, such as RAG, to assist detection, thus incurring heavy
additional latency. Recently, the internal states of LLM inference have been
widely used in numerous research works, such as prompt injection detection,
etc. Considering the interpretability of LLM internal states and the fact that
they do not require external information sources, we introduce such states into
LLM hallucination detection. In this paper, we systematically analyze the
revealing features of different internal states during the forward pass and
comprehensively evaluate their ability in hallucination detection.
Specifically, we divide the forward process of a large language model into
three stages, understanding, query, and generation, and extract the internal
states from these stages. By analyzing these states, we provide a deep
understanding of why hallucinated content is generated and what happens in the
internal states of the models. Then, we introduce these internal states into
hallucination detection and conduct comprehensive experiments to discuss the
advantages and limitations.
|
2502.13495
|
A Study on Monthly Marine Heatwave Forecasts in New Zealand: An
Investigation of Imbalanced Regression Loss Functions with Neural Network
Models
|
physics.ao-ph cs.LG stat.AP
|
Marine heatwaves (MHWs) are extreme ocean-temperature events with significant
impacts on marine ecosystems and related industries. Accurate forecasts (one to
six months ahead) of MHWs would aid in mitigating these impacts. However,
forecasting MHWs presents a challenging imbalanced regression task due to the
rarity of extreme temperature anomalies in comparison to more frequent moderate
conditions. In this study, we examine monthly MHW forecasts for 12 locations
around New Zealand. We use a fully-connected neural network and compare
standard and specialized regression loss functions, including the mean squared
error (MSE), the mean absolute error (MAE), the Huber, the weighted MSE, the
focal-R, the balanced MSE, and a proposed scaling-weighted MSE. Results show
that (i) short lead times (one month) are considerably more predictable than
three- and six-month leads, (ii) models trained with the standard MSE or MAE
losses excel at forecasting average conditions but struggle to capture
extremes, and (iii) specialized loss functions such as the balanced MSE and our
scaling-weighted MSE substantially improve forecasting of MHW and suspected MHW
events. These findings underscore the importance of tailored loss functions for
imbalanced regression, particularly in forecasting rare but impactful events
such as MHWs.
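The motivation for weighting the loss toward extremes can be illustrated with a simple weighted MSE. The threshold and weight values here are hypothetical; the study's scaling-weighted MSE is not reproduced exactly.

```python
import numpy as np

def weighted_mse(y_true, y_pred, threshold=1.0, extreme_weight=5.0):
    """MSE with larger weights on extreme targets, so rare warm anomalies
    (e.g. MHW months) contribute more to the loss than the frequent
    moderate conditions. Illustrative weighting only."""
    w = np.where(np.abs(y_true) > threshold, extreme_weight, 1.0)
    return float(np.mean(w * (y_true - y_pred) ** 2))

anoms = np.array([0.2, -0.1, 1.8])              # last value: an extreme anomaly
pred_miss_extreme = np.array([0.2, -0.1, 0.8])  # misses the extreme by 1.0
pred_miss_moderate = np.array([1.2, -0.1, 1.8]) # misses a moderate by 1.0
# Same residual magnitude, but missing the extreme costs 5x more:
assert weighted_mse(anoms, pred_miss_extreme) > weighted_mse(anoms, pred_miss_moderate)
```

Under a plain MSE the two prediction vectors above would incur identical loss, which is exactly why models trained with it regress toward average conditions.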
|
2502.13497
|
Towards Geo-Culturally Grounded LLM Generations
|
cs.CL cs.AI
|
Generative large language models (LLMs) have been demonstrated to have gaps
in diverse cultural knowledge across the globe. We investigate the effect of
retrieval augmented generation and search-grounding techniques on the ability
of LLMs to display familiarity with a diverse range of national cultures.
Specifically, we compare the performance of standard LLMs, LLMs augmented with
retrievals from a bespoke knowledge base (i.e., KB grounding), and LLMs
augmented with retrievals from a web search (i.e., search grounding) on a
series of cultural familiarity benchmarks. We find that search grounding
significantly improves the LLM performance on multiple-choice benchmarks that
test propositional knowledge (e.g., the norms, artifacts, and institutions of
national cultures), while KB grounding's effectiveness is limited by inadequate
knowledge base coverage and a suboptimal retriever. However, search grounding
also increases the risk of stereotypical judgments by language models, while
failing to improve evaluators' judgments of cultural familiarity in a human
evaluation with adequate statistical power. These results highlight the
distinction between propositional knowledge about a culture and open-ended
cultural fluency when it comes to evaluating the cultural familiarity of
generative LLMs.
|
2502.13498
|
Improving Collision-Free Success Rate For Object Goal Visual Navigation
Via Two-Stage Training With Collision Prediction
|
cs.RO cs.CV
|
Object goal visual navigation is the task of navigating to a specific
target object using egocentric visual observations. Recent end-to-end
navigation models based on deep reinforcement learning have achieved remarkable
performance in finding and reaching target objects. However, the collision
problem of these models during navigation remains unresolved, since the
collision is typically neglected when evaluating the success. Although
incorporating a negative reward for collision during training appears
straightforward, it results in a more conservative policy, thereby limiting the
agent's ability to reach targets. In addition, many of these models utilize
only RGB observations, further increasing the difficulty of collision avoidance
without depth information. To address these limitations, a new concept,
collision-free success, is introduced to evaluate the ability of navigation
models to find a collision-free path towards the target object. A two-stage
training method with collision prediction is proposed to improve the
collision-free success rate of the existing navigation models using RGB
observations. In the first training stage, the collision prediction module
supervises the agent's collision states during exploration to learn to predict
the possible collision. In the second stage, leveraging the trained collision
prediction, the agent learns to navigate to the target without collision. The
experimental results in the AI2-THOR environment demonstrate that the proposed
method greatly improves the collision-free success rate of different navigation
models and outperforms other comparable collision-avoidance methods.
|
2502.13499
|
Hidden Darkness in LLM-Generated Designs: Exploring Dark Patterns in
Ecommerce Web Components Generated by LLMs
|
cs.HC cs.AI cs.LG
|
Recent work has highlighted the risks of LLM-generated content for a wide
range of harmful behaviors, including incorrect and harmful code. In this work,
we extend this by studying whether LLM-generated web design contains dark
patterns. This work evaluated designs of ecommerce web components generated by
four popular LLMs: Claude, GPT, Gemini, and Llama. We tested 13 commonly used
ecommerce components (e.g., search, product reviews) and used them as prompts
to generate a total of 312 components across all models. Over one-third of
generated components contain at least one dark pattern. The majority of dark
pattern strategies involve hiding crucial information, limiting users' actions,
and manipulating them into making decisions through a sense of urgency. Dark
patterns are also more frequently produced in components that are related to
company interests. These findings highlight the need for interventions to
prevent dark patterns during front-end code generation with LLMs and emphasize
the importance of expanding ethical design education to a broader audience.
|
2502.13502
|
PLDR-LLMs Learn A Generalizable Tensor Operator That Can Replace Its Own
Deep Neural Net At Inference
|
cs.CL cs.AI cs.LG
|
We show that the Large Language Model from Power Law Decoder Representations
(PLDR-LLM) is a foundational model whose deductive outputs are invariant
tensors up to a small perturbation. PLDR-LLM learns a singularity condition for
the deductive outputs that enables the once-inferred energy-curvature tensor
$\mathbf{G}_{LM}$ to replace the deep neural network of power law graph
attention (PLGA) generating the deductive outputs at inference. We demonstrate
that a cache for $\mathbf{G}_{LM}$ (G-cache) and KV-cache can be implemented in
a straightforward manner to improve the inference time. The invariance and
generalizability of the deductive outputs hold at very high fidelity: after
caching, the deductive outputs have the same RMSE and determinant values up to
15 decimal places, and zero-shot benchmark scores remain unchanged. Ablation
studies show that the learned deductive outputs have loss and accuracy
characteristics distinct from those of models pretrained with transferred,
randomly initialized, or identity tensors as a constant tensor operator, and
that an LLM with scaled dot-product attention (SDPA) is a special case of
PLDR-LLM where $\mathbf{G}_{LM}$ is predefined as the identity. The observed
invariance characteristic introduces a
novel asymmetry between training and inference phases with caching. We outline
observed common characteristics of the deductive outputs for the learned
singularity condition. We provide an implementation of a training and inference
framework for PLDR-LLM with KV-cache and G-cache.
|
2502.13506
|
Reproducing NevIR: Negation in Neural Information Retrieval
|
cs.IR
|
Negation is a fundamental aspect of human communication, yet it remains a
challenge for Language Models (LMs) in Information Retrieval (IR). Despite the
heavy reliance of modern neural IR systems on LMs, little attention has been
given to their handling of negation. In this study, we reproduce and extend the
findings of NevIR, a benchmark study that revealed most IR models perform at or
below the level of random ranking when dealing with negation. We replicate
NevIR's original experiments and evaluate newly developed state-of-the-art IR
models. Our findings show that a recently emerging category, listwise Large
Language Model (LLM) rerankers, outperforms other models but still falls
short of human performance. Additionally, we leverage ExcluIR, a benchmark
dataset designed for exclusionary queries with extensive negation, to assess
the generalizability of negation understanding. Our findings suggest that
fine-tuning on one dataset does not reliably improve performance on the other,
indicating notable differences in their data distributions. Furthermore, we
observe that only cross-encoders and listwise LLM rerankers achieve reasonable
performance across both negation tasks.
|
2502.13508
|
VLAS: Vision-Language-Action Model With Speech Instructions For
Customized Robot Manipulation
|
cs.RO
|
Vision-language-action models (VLAs) have become increasingly popular in
robot manipulation for their end-to-end design and remarkable performance.
However, existing VLAs rely heavily on vision-language models (VLMs) that only
support text-based instructions, neglecting the more natural speech modality
for human-robot interaction. Traditional speech integration methods usually
involves a separate speech recognition system, which complicates the model and
introduces error propagation. Moreover, the transcription procedure would lose
non-semantic information in the raw speech, such as voiceprint, which may be
crucial for robots to successfully complete customized tasks. To overcome
these challenges, we propose VLAS, a novel end-to-end VLA that integrates speech
recognition directly into the robot policy model. VLAS allows the robot to
understand spoken commands through inner speech-text alignment and produces
corresponding actions to fulfill the task. We also present two new datasets,
SQA and CSI, to support a three-stage tuning process for speech instructions,
which empowers VLAS with the ability of multimodal interaction across text,
image, speech, and robot actions. Taking a step further, a voice
retrieval-augmented generation (RAG) paradigm is designed to enable our model
to effectively handle tasks that require individual-specific knowledge. Our
extensive experiments show that VLAS can effectively accomplish robot
manipulation tasks with diverse speech commands, offering a seamless and
customized interaction experience.
|
2502.13509
|
Unlocking Multimodal Integration in EHRs: A Prompt Learning Framework
for Language and Time Series Fusion
|
cs.CL cs.AI cs.LG
|
Large language models (LLMs) have shown remarkable performance in
vision-language tasks, but their application in the medical field remains
underexplored, particularly for integrating structured time series data with
unstructured clinical notes. In clinical practice, dynamic time series data
such as lab test results capture critical temporal patterns, while clinical
notes provide rich semantic context. Merging these modalities is challenging
due to the inherent differences between continuous signals and discrete text.
To bridge this gap, we introduce ProMedTS, a novel self-supervised multimodal
framework that employs prompt-guided learning to unify these heterogeneous data
types. Our approach leverages lightweight anomaly detection to generate anomaly
captions that serve as prompts, guiding the encoding of raw time series data
into informative embeddings. These embeddings are aligned with textual
representations in a shared latent space, preserving fine-grained temporal
nuances alongside semantic insights. Furthermore, our framework incorporates
tailored self-supervised objectives to enhance both intra- and inter-modal
alignment. We evaluate ProMedTS on disease diagnosis tasks using real-world
datasets, and the results demonstrate that our method consistently outperforms
state-of-the-art approaches.
|