| id | title | categories | abstract |
|---|---|---|---|
2502.09591
|
Censor Dependent Variational Inference
|
cs.LG stat.ML
|
This paper provides a comprehensive analysis of variational inference in
latent variable models for survival analysis, emphasizing the distinctive
challenges associated with applying variational methods to survival data. We
identify a critical weakness in the existing methodology, demonstrating how a
poorly designed variational distribution may hinder the objective of survival
analysis tasks--modeling time-to-event distributions. We prove that the optimal
variational distribution, which perfectly bounds the log-likelihood, may depend
on the censoring mechanism. To address this issue, we propose censor-dependent
variational inference (CDVI), tailored for latent variable models in survival
analysis. More practically, we introduce CD-CVAE, a V-structure Variational
Autoencoder (VAE) designed for the scalable implementation of CDVI. Further
discussion extends some existing theories and training techniques to survival
analysis. Extensive experiments validate our analysis and demonstrate
significant improvements in the estimation of individual survival
distributions.
|
2502.09592
|
A Data-Driven Method for Microgrid System Identification: Physically
Consistent Sparse Identification of Nonlinear Dynamics
|
eess.SY cs.SY
|
Microgrids (MGs) play a crucial role in utilizing distributed energy
resources (DERs) like solar and wind power, enhancing the sustainability and
flexibility of modern power systems. However, the inherent variability in MG
topology, power flow, and DER operating modes poses significant challenges to
the accurate system identification of MGs, which is crucial for designing
robust control strategies and ensuring MG stability. This paper proposes a
Physically Consistent Sparse Identification of Nonlinear Dynamics (PC-SINDy)
method for accurate MG system identification. By leveraging an analytically
derived library of candidate functions, PC-SINDy extracts accurate dynamic
models using only phasor measurement unit (PMU) data. Simulations on a 4-bus
system demonstrate that PC-SINDy can reliably and accurately predict frequency
trajectories under large disturbances, including scenarios not encountered
during the identification/training phase, even when using noisy, low-sampled
PMU data.
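The sparse-regression machinery that SINDy-style methods build on can be sketched in a few lines: select terms from a library of candidate functions via sequentially thresholded least squares. The following is a minimal, hypothetical illustration on a toy 1-D system, not the paper's PC-SINDy code, and without its physical-consistency constraints.

```python
import numpy as np

# Minimal SINDy-style sketch (toy illustration, not the paper's PC-SINDy
# code): recover sparse coefficients Xi such that dx/dt ~= Theta(x) @ Xi,
# where Theta(x) is a library of candidate functions.

def library(x):
    """Candidate-function library [1, x, x^2] for a 1-D state."""
    return np.column_stack([np.ones_like(x), x, x**2])

def stlsq(theta, dxdt, threshold=0.1, iters=10):
    """Sequentially thresholded least squares: promote sparsity in Xi."""
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        if (~small).any():
            xi[~small] = np.linalg.lstsq(theta[:, ~small], dxdt, rcond=None)[0]
    return xi

# Toy measurements of dx/dt = -2x, so the true coefficients are [0, -2, 0].
x = np.linspace(-1.0, 1.0, 200)
dxdt = -2.0 * x
xi = stlsq(library(x), dxdt)
print(xi)  # close to [0, -2, 0]
```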
|
2502.09596
|
KIMAs: A Configurable Knowledge Integrated Multi-Agent System
|
cs.AI cs.MA
|
Knowledge-intensive conversations supported by large language models (LLMs)
have become one of the most popular and useful applications, assisting people
across many domains. Many current knowledge-intensive applications are
centered on retrieval-augmented generation (RAG) techniques. While many
open-source RAG frameworks facilitate the development of RAG-based
applications, they often fall short in handling practical scenarios complicated
by heterogeneous data in topics and formats, conversational context management,
and the requirement of low-latency response times. This technical report
presents a configurable knowledge integrated multi-agent system, KIMAs, to
address these challenges. KIMAs features a flexible and configurable system for
integrating diverse knowledge sources with 1) context management and query
rewrite mechanisms to improve retrieval accuracy and multi-turn conversational
coherency, 2) efficient knowledge routing and retrieval, 3) simple but
effective filter and reference generation mechanisms, and 4) optimized
parallelizable multi-agent pipeline execution. Our work provides a scalable
framework for advancing the deployment of LLMs in real-world settings. To show
how KIMAs can help developers build knowledge-intensive applications with
different scales and emphases, we demonstrate how we configure the system to
three applications already running in practice with reliable performance.
|
2502.09597
|
Do LLMs Recognize Your Preferences? Evaluating Personalized Preference
Following in LLMs
|
cs.LG cs.CL
|
Large Language Models (LLMs) are increasingly used as chatbots, yet their
ability to personalize responses to user preferences remains limited. We
introduce PrefEval, a benchmark for evaluating LLMs' ability to infer, memorize
and adhere to user preferences in a long-context conversational setting.
PrefEval comprises 3,000 manually curated user preference and query pairs
spanning 20 topics. PrefEval contains user personalization or preference
information in both explicit and implicit forms, and evaluates LLM performance
using a generation and a classification task. With PrefEval, we evaluated the
preference-following capabilities of 10 open-source and
proprietary LLMs in multi-session conversations with varying context lengths up
to 100k tokens. We benchmark with various prompting, iterative feedback, and
retrieval-augmented generation methods. Our benchmarking effort reveals that
state-of-the-art LLMs face significant challenges in proactively following
users' preferences during conversations. In particular, in zero-shot settings,
preference following accuracy falls below 10% at merely 10 turns (~3k tokens)
across most evaluated models. Even with advanced prompting and retrieval
methods, preference following still deteriorates in long-context conversations.
Furthermore, we show that fine-tuning on PrefEval significantly improves
performance. We believe PrefEval serves as a valuable resource for measuring,
understanding, and enhancing LLMs' preference following abilities, paving the
way for personalized conversational agents. Our code and dataset are available
at https://prefeval.github.io/.
|
2502.09598
|
GAIA: A Global, Multi-modal, Multi-scale Vision-Language Dataset for
Remote Sensing Image Analysis
|
cs.CV
|
The continuous operation of Earth-orbiting satellites generates vast and
ever-growing archives of Remote Sensing (RS) images. Natural language presents
an intuitive interface for accessing, querying, and interpreting the data from
such archives. However, existing Vision-Language Models (VLMs) are
predominantly trained on web-scraped, noisy image-text data, exhibiting limited
exposure to the specialized domain of RS. This deficiency results in poor
performance on RS-specific tasks, as commonly used datasets often lack
detailed, scientifically accurate textual descriptions and instead focus
solely on attributes like date and location. To bridge this critical gap, we
introduce GAIA, a novel dataset designed for multi-scale, multi-sensor, and
multi-modal RS image analysis. GAIA comprises 205,150 meticulously curated
RS image-text pairs, representing a diverse range of RS modalities associated
with different spatial resolutions. Unlike existing vision-language datasets in
RS, GAIA specifically focuses on capturing a diverse range of RS applications,
providing unique information about environmental changes, natural disasters,
and various other dynamic phenomena. The dataset provides a spatially and
temporally balanced distribution of observations, spanning the globe and
covering the last 25 years. GAIA's
construction involved a two-stage process: (1) targeted web-scraping of images
and accompanying text from reputable RS-related sources, and (2) generation of
five high-quality, scientifically grounded synthetic captions for each image
using carefully crafted prompts that leverage the advanced vision-language
capabilities of GPT-4o. Our extensive experiments, including fine-tuning of
CLIP and BLIP2 models, demonstrate that GAIA significantly improves performance
on RS image classification, cross-modal retrieval and image captioning tasks.
|
2502.09601
|
CoT-Valve: Length-Compressible Chain-of-Thought Tuning
|
cs.AI cs.CL
|
Chain-of-Thought significantly enhances a model's reasoning capability, but
it also comes with a considerable increase in inference costs due to long
chains. Observing that the reasoning path can be easily compressed for easy
tasks but resists compression on hard ones, we explore the feasibility of
elastically controlling the length of reasoning paths with only one model,
thereby reducing the inference overhead of reasoning models dynamically based
on task difficulty. We introduce a new tuning and inference strategy named
CoT-Valve, designed to allow models to generate reasoning chains of varying
lengths. To achieve this, we propose to identify a direction in the parameter
space that, when manipulated, can effectively control the length of generated
CoT. Moreover, we show that this property is valuable for compressing the
reasoning chain. We construct datasets with chains from long to short for the
same questions and explore two enhanced strategies for CoT-Valve: (1) a precise
length-compressible CoT tuning method, and (2) a progressive chain length
compression approach. Our experiments show that CoT-Valve successfully enables
controllability and compressibility of the chain and shows better performance
than the prompt-based control. We applied this method to QwQ-32B-Preview,
reducing reasoning chains on GSM8K from 741 to 225 tokens with a minor
performance drop (95.07% to 94.92%) and on AIME from 6827 to 4629 tokens, with
only one additional incorrect answer.
|
2502.09604
|
SelfCite: Self-Supervised Alignment for Context Attribution in Large
Language Models
|
cs.CL cs.AI cs.LG
|
We introduce SelfCite, a novel self-supervised approach that aligns LLMs to
generate high-quality, fine-grained, sentence-level citations for the
statements in their generated responses. Instead of only relying on costly and
labor-intensive annotations, SelfCite leverages a reward signal provided by the
LLM itself through context ablation: If a citation is necessary, removing the
cited text from the context should prevent the same response; if sufficient,
retaining the cited text alone should preserve the same response. This reward
can guide the inference-time best-of-N sampling strategy to improve citation
quality significantly, as well as be used in preference optimization to
directly fine-tune the models for generating better citations. The
effectiveness of SelfCite is demonstrated by increasing citation F1 up to 5.3
points on the LongBench-Cite benchmark across five long-form question answering
tasks.
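The context-ablation reward described above can be made concrete. The sketch below is our reading of the abstract, not the authors' implementation; `logprob` is a toy stand-in for an LLM's log-probability of a response given a context, and all names are our own.

```python
# Toy stand-in for an LLM scorer: counts response words found in the context.
def logprob(response, context):
    words = set(context.split())
    return sum(1.0 for w in response.split() if w in words)

def citation_reward(response, context_sents, cited_idx):
    """Necessity + sufficiency reward via context ablation."""
    cited = " ".join(context_sents[i] for i in cited_idx)
    rest = " ".join(s for i, s in enumerate(context_sents) if i not in cited_idx)
    full = " ".join(context_sents)
    # Necessity: removing the cited text should make the response less likely.
    necessity = logprob(response, full) - logprob(response, rest)
    # Sufficiency: the cited text alone should keep the response likely.
    sufficiency = logprob(response, cited) - logprob(response, full)
    return necessity + sufficiency

sents = ["the sky is blue", "cats sleep a lot"]
print(citation_reward("the sky is blue", sents, {0}))  # good citation: 4.0
print(citation_reward("the sky is blue", sents, {1}))  # bad citation: -4.0
```

A real implementation would replace `logprob` with the serving LLM's own token log-probabilities, so the model scores its own citations without human labels.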
|
2502.09606
|
Human-LLM Coevolution: Evidence from Academic Writing
|
cs.CL cs.AI cs.CY cs.DL cs.LG
|
With a statistical analysis of arXiv paper abstracts, we report a marked drop
in the frequency of several words previously identified as overused by ChatGPT,
such as "delve", starting soon after they were pointed out in early 2024. The
frequency of certain other words favored by ChatGPT, such as "significant", has
instead kept increasing. These phenomena suggest that some authors of academic
papers have adapted their use of large language models (LLMs), for example, by
selecting outputs or applying modifications to the LLM-generated content. Such
coevolution and cooperation of humans and LLMs thus introduce additional
challenges to the detection of machine-generated text in real-world scenarios.
Estimating the impact of LLMs on academic writing by examining word frequency
remains feasible, and more attention should be paid to words that were already
frequently employed, including those that have decreased in frequency due to
LLMs' disfavor.
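The word-frequency statistic this kind of study relies on is easy to reproduce on any abstract corpus. A toy sketch follows, with invented mini-corpora standing in for two years of abstracts (not the paper's data):

```python
from collections import Counter

def freq_per_million(abstracts, word):
    """Occurrences of `word` per million tokens across a list of abstracts."""
    counts, total = Counter(), 0
    for text in abstracts:
        tokens = text.lower().split()
        counts.update(tokens)
        total += len(tokens)
    return 1e6 * counts[word] / total

# Invented mini-corpora standing in for abstracts from two years.
corpus_2023 = ["we delve into the details", "models delve deeper"]
corpus_2024 = ["we examine the details", "models go deeper"]
drop = freq_per_million(corpus_2023, "delve") - freq_per_million(corpus_2024, "delve")
print(drop > 0)  # True: "delve" declined year over year
```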
|
2502.09608
|
Instance Segmentation of Scene Sketches Using Natural Image Priors
|
cs.CV cs.GR
|
Sketch segmentation involves grouping pixels within a sketch that belong to
the same object or instance. It serves as a valuable tool for sketch editing
tasks, such as moving, scaling, or removing specific components. While image
segmentation models have demonstrated remarkable capabilities in recent years,
sketches present unique challenges for these models due to their sparse nature
and wide variation in styles. We introduce SketchSeg, a method for instance
segmentation of raster scene sketches. Our approach adapts state-of-the-art
image segmentation and object detection models to the sketch domain by
employing class-agnostic fine-tuning and refining segmentation masks using
depth cues. Furthermore, our method organizes sketches into sorted layers,
where occluded instances are inpainted, enabling advanced sketch editing
applications. As existing datasets in this domain lack variation in sketch
styles, we construct a synthetic scene sketch segmentation dataset featuring
sketches with diverse brush strokes and varying levels of detail. We use this
dataset to demonstrate the robustness of our approach and will release it to
promote further research in the field.
Project webpage: https://sketchseg.github.io/sketch-seg/
|
2502.09609
|
Score-of-Mixture Training: Training One-Step Generative Models Made
Simple via Score Estimation of Mixture Distributions
|
cs.LG cs.AI stat.ML
|
We propose Score-of-Mixture Training (SMT), a novel framework for training
one-step generative models by minimizing a class of divergences called the
$\alpha$-skew Jensen-Shannon divergence. At its core, SMT estimates the score
of mixture distributions between real and fake samples across multiple noise
levels. Similar to consistency models, our approach supports both training from
scratch (SMT) and distillation using a pretrained diffusion model, which we
call Score-of-Mixture Distillation (SMD). It is simple to implement, requires
minimal hyperparameter tuning, and ensures stable training. Experiments on
CIFAR-10 and ImageNet 64x64 show that SMT/SMD are competitive with and can even
outperform existing methods.
|
2502.09611
|
Designing a Conditional Prior Distribution for Flow-Based Generative
Models
|
cs.LG cs.CV
|
Flow-based generative models have recently shown impressive performance for
conditional generation tasks, such as text-to-image generation. However,
current methods transform a general unimodal noise distribution to a specific
mode of the target data distribution. As such, every point in the initial
source distribution can be mapped to every point in the target distribution,
resulting in long average paths. To this end, in this work, we tap into a
non-utilized property of conditional flow-based models: the ability to design a
non-trivial prior distribution. Given an input condition, such as a text
prompt, we first map it to a point lying in data space, representing an
"average" data point with the minimal average distance to all data points of
the same conditional mode (e.g., class). We then utilize the flow matching
formulation to map samples from a parametric distribution centered around this
point to the conditional target distribution. Experimentally, our method
significantly improves training times and generation efficiency (FID, KID and
CLIP alignment scores) compared to baselines, producing high quality samples
using fewer sampling steps.
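The prior-design idea can be sketched as follows (our reading of the abstract, with toy 2-D data and our own names): the point with minimal average squared distance to all samples of a class is simply the class mean, so the conditional source distribution is centered there instead of at the origin.

```python
import numpy as np

# Sketch: center a class-conditional source distribution on each class's
# mean, shortening the average path the flow must traverse. Toy data.
rng = np.random.default_rng(0)
data = {
    "cat": rng.normal(loc=[5.0, 5.0], scale=0.5, size=(100, 2)),
    "dog": rng.normal(loc=[-5.0, -5.0], scale=0.5, size=(100, 2)),
}
centers = {c: x.mean(axis=0) for c, x in data.items()}

def sample_conditional_prior(cls, n, sigma=1.0):
    """Draw source samples near the class center instead of from N(0, I)."""
    return centers[cls] + sigma * rng.normal(size=(n, 2))

z = sample_conditional_prior("cat", 4)
print(z.shape)  # (4, 2); samples start close to the "cat" mode
```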
|
2502.09613
|
Latent Radiance Fields with 3D-aware 2D Representations
|
cs.CV
|
Latent 3D reconstruction has shown great promise in empowering 3D semantic
understanding and 3D generation by distilling 2D features into the 3D space.
However, existing approaches struggle with the domain gap between 2D feature
space and 3D representations, resulting in degraded rendering performance. To
address this challenge, we propose a novel framework that integrates 3D
awareness into the 2D latent space. The framework consists of three stages: (1)
a correspondence-aware autoencoding method that enhances the 3D consistency of
2D latent representations, (2) a latent radiance field (LRF) that lifts these
3D-aware 2D representations into 3D space, and (3) a VAE-Radiance Field
(VAE-RF) alignment strategy that improves image decoding from the rendered 2D
representations. Extensive experiments demonstrate that our method outperforms
the state-of-the-art latent 3D reconstruction approaches in terms of synthesis
performance and cross-dataset generalizability across diverse indoor and
outdoor scenes. To our knowledge, this is the first work showing the radiance
field representations constructed from 2D latent representations can yield
photorealistic 3D reconstruction performance.
|
2502.09614
|
DexTrack: Towards Generalizable Neural Tracking Control for Dexterous
Manipulation from Human References
|
cs.RO cs.AI cs.CV cs.LG
|
We address the challenge of developing a generalizable neural tracking
controller for dexterous manipulation from human references. This controller
aims to manage a dexterous robot hand to manipulate diverse objects for various
purposes defined by kinematic human-object interactions. Developing such a
controller is complicated by the intricate contact dynamics of dexterous
manipulation and the need for adaptivity, generalizability, and robustness.
Current reinforcement learning and trajectory optimization methods often fall
short due to their dependence on task-specific rewards or precise system
models. We introduce an approach that curates large-scale successful robot
tracking demonstrations, comprising pairs of human references and robot
actions, to train a neural controller. Utilizing a data flywheel, we
iteratively enhance the controller's performance, as well as the number and
quality of successful tracking demonstrations. We exploit available tracking
demonstrations and carefully integrate reinforcement learning and imitation
learning to boost the controller's performance in dynamic environments. At the
same time, to obtain high-quality tracking demonstrations, we individually
optimize per-trajectory tracking by leveraging the learned tracking controller
in a homotopy optimization method. The homotopy optimization, mimicking
chain-of-thought, aids in solving challenging trajectory tracking problems to
increase demonstration diversity. We showcase our success by training a
generalizable neural controller and evaluating it in both simulation and real
world. Our method achieves over a 10% improvement in success rates compared to
leading baselines. The project website with animated results is available at
https://meowuu7.github.io/DexTrack/.
|
2502.09615
|
RigAnything: Template-Free Autoregressive Rigging for Diverse 3D Assets
|
cs.CV
|
We present RigAnything, a novel autoregressive transformer-based model, which
makes 3D assets rig-ready by probabilistically generating joints, skeleton
topologies, and assigning skinning weights in a template-free manner. Unlike
most existing auto-rigging methods, which rely on a predefined skeleton
template and are limited to specific categories such as humanoids, RigAnything approaches
the rigging problem in an autoregressive manner, iteratively predicting the
next joint based on the global input shape and the previous prediction. While
autoregressive models are typically used to generate sequential data,
RigAnything extends their application to effectively learn and represent
skeletons, which are inherently tree structures. To achieve this, we organize
the joints in a breadth-first search (BFS) order, enabling the skeleton to be
defined as a sequence of 3D locations and the parent index. Furthermore, our
model improves the accuracy of position prediction by leveraging diffusion
modeling, ensuring precise and consistent placement of joints within the
hierarchy. This formulation allows the autoregressive model to efficiently
capture both spatial and hierarchical relationships within the skeleton.
Trained end-to-end on both RigNet and Objaverse datasets, RigAnything
demonstrates state-of-the-art performance across diverse object types,
including humanoids, quadrupeds, marine creatures, insects, and many more,
surpassing prior methods in quality, robustness, generalizability, and
efficiency. Please check our website for more details:
https://www.liuisabella.com/RigAnything.
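The BFS serialization described above can be sketched directly (toy data and our own names, not the authors' code): a skeleton tree becomes a sequence of (3-D position, parent index) pairs that an autoregressive model can predict one joint at a time.

```python
from collections import deque

def bfs_sequence(joints, children, root=0):
    """joints: {id: (x, y, z)}; children: {id: [child ids]}."""
    order, parent_of = [], {root: -1}
    queue = deque([root])
    while queue:
        j = queue.popleft()
        order.append((joints[j], parent_of[j]))
        for c in children.get(j, []):
            # The parent index refers to the parent's slot in the sequence.
            parent_of[c] = len(order) - 1
            queue.append(c)
    return order

joints = {0: (0, 0, 0), 1: (0, 1, 0), 2: (1, 0, 0), 3: (0, 2, 0)}
children = {0: [1, 2], 1: [3]}  # root has two children; joint 1 has joint 3
print(bfs_sequence(joints, children))
```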
|
2502.09616
|
Variational Rectified Flow Matching
|
cs.LG cs.CV
|
We study Variational Rectified Flow Matching, a framework that enhances
classic rectified flow matching by modeling multi-modal velocity vector-fields.
At inference time, classic rectified flow matching 'moves' samples from a
source distribution to the target distribution by solving an ordinary
differential equation via integration along a velocity vector-field. At
training time, the velocity vector-field is learnt by linearly interpolating
between randomly coupled samples, one drawn from the source and one from the
target distribution. This leads to "ground-truth" velocity vector-fields
that point in different directions at the same location, i.e., the velocity
vector-fields are multi-modal/ambiguous. However, since training uses a
standard mean-squared-error loss, the learnt velocity vector-field averages
"ground-truth" directions and is not multi-modal. In contrast, variational
rectified flow matching learns and samples from multi-modal flow directions. We
show on synthetic data, MNIST, CIFAR-10, and ImageNet that variational
rectified flow matching leads to compelling results.
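The averaging effect described in the abstract can be seen with toy numbers: two couplings pass through the same location at the same time with opposite "ground-truth" velocities, and a single MSE-trained field averages them to zero.

```python
import numpy as np

# Two couplings whose interpolants collide at the same point x_t, so the
# MSE-optimal velocity prediction there is the mean of conflicting targets.
x0 = np.array([-1.0, 1.0])   # two source samples
x1 = np.array([1.0, -1.0])   # their coupled target samples
t = 0.5

x_t = (1 - t) * x0 + t * x1      # both interpolants land at the same point, 0.0
v_target = x1 - x0               # "ground-truth" velocities: +2 and -2
v_mse_optimal = v_target.mean()  # the MSE-optimal prediction at x_t is 0
print(x_t, v_target, v_mse_optimal)
```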
|
2502.09617
|
LIFe-GoM: Generalizable Human Rendering with Learned Iterative Feedback
Over Multi-Resolution Gaussians-on-Mesh
|
cs.CV
|
Generalizable rendering of an animatable human avatar from sparse inputs
relies on data priors and inductive biases extracted from training on large
data to avoid scene-specific optimization and to enable fast reconstruction.
This raises two main challenges: First, unlike iterative gradient-based
adjustment in scene-specific optimization, generalizable methods must
reconstruct the human shape representation in a single pass at inference time.
Second, rendering is preferably computationally efficient yet of high
resolution. To address both challenges we augment the recently proposed dual
shape representation, which combines the benefits of a mesh and Gaussian
points, in two ways. To improve reconstruction, we propose an iterative
feedback update framework, which successively improves the canonical human
shape representation during reconstruction. To achieve computationally
efficient yet high-resolution rendering, we study a coupled-multi-resolution
Gaussians-on-Mesh representation. We evaluate the proposed approach on the
challenging THuman2.0, XHuman and AIST++ data. Our approach reconstructs an
animatable representation from sparse inputs in less than 1 s, renders views
at 95.1 FPS at $1024 \times 1024$, and achieves PSNR/LPIPS*/FID of
24.65/110.82/51.27 on THuman2.0, outperforming the state-of-the-art in
rendering quality.
|
2502.09619
|
Can this Model Also Recognize Dogs? Zero-Shot Model Search from Weights
|
cs.LG cs.CV
|
With the increasing number of publicly available models, pretrained models are
likely already available online for most tasks users require. However, current
model search methods are rudimentary, essentially a text-based search over
documentation, so users often cannot find the relevant models. This paper presents
ProbeLog, a method for retrieving classification models that can recognize a
target concept, such as "Dog", without access to model metadata or training
data. Differently from previous probing methods, ProbeLog computes a descriptor
for each output dimension (logit) of each model, by observing its responses on
a fixed set of inputs (probes). Our method supports both logit-based retrieval
("find more logits like this") and zero-shot, text-based retrieval ("find all
logits corresponding to dogs"). As probing-based representations require
multiple costly feedforward passes through the model, we develop a method,
based on collaborative filtering, that reduces the cost of encoding
repositories by 3x. We demonstrate that ProbeLog achieves high retrieval
accuracy, both in real-world and fine-grained search tasks and is scalable to
full-size repositories.
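The descriptor idea can be sketched with toy linear heads (our reading of the abstract, our own names, not the authors' code): describe each output logit by its responses to a fixed probe set, then retrieve by cosine similarity of descriptors.

```python
import numpy as np

# Each logit's descriptor is its response vector over a shared probe set;
# logits for the same concept in different models get similar descriptors.
rng = np.random.default_rng(0)
probes = rng.normal(size=(50, 16))  # fixed probe inputs shared by all models

def logit_descriptor(weight_row):
    """Responses of one logit (a linear head here) to all probes."""
    return probes @ weight_row

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

w_dog = rng.normal(size=16)                   # "dog" logit of model A
w_dog_b = w_dog + 0.01 * rng.normal(size=16)  # "dog" logit of model B
w_car = rng.normal(size=16)                   # unrelated logit

query = logit_descriptor(w_dog)
same = cosine(query, logit_descriptor(w_dog_b))
diff = cosine(query, logit_descriptor(w_car))
print(same > diff)  # True: matching concepts have closer descriptors
```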
|
2502.09620
|
Exploring the Potential of Encoder-free Architectures in 3D LMMs
|
cs.CV cs.AI cs.CL
|
Encoder-free architectures have been preliminarily explored in the 2D visual
domain, yet it remains an open question whether they can be effectively applied
to 3D understanding scenarios. In this paper, we present the first
comprehensive investigation into the potential of encoder-free architectures to
overcome the challenges of encoder-based 3D Large Multimodal Models (LMMs).
These challenges include the failure to adapt to varying point cloud
resolutions and the point features from the encoder not meeting the semantic
needs of Large Language Models (LLMs). We identify key aspects for 3D LMMs to
remove the encoder and enable the LLM to assume the role of the 3D encoder: 1)
We propose the LLM-embedded Semantic Encoding strategy in the pre-training
stage, exploring the effects of various point cloud self-supervised losses,
and present the Hybrid Semantic Loss to extract high-level semantics. 2) We
introduce the Hierarchical Geometry Aggregation strategy in the instruction
tuning stage. This incorporates inductive bias into the LLM early layers to
focus on the local details of the point clouds. Finally, we present the
first Encoder-free 3D LMM, ENEL. Our 7B model rivals the current
state-of-the-art model, ShapeLLM-13B, achieving 55.0%, 50.92%, and 42.7% on the
classification, captioning, and VQA tasks, respectively. Our results
demonstrate that the encoder-free architecture is highly promising for
replacing encoder-based architectures in the field of 3D understanding. The
code is released at https://github.com/Ivan-Tang-3D/ENEL
|
2502.09621
|
MME-CoT: Benchmarking Chain-of-Thought in Large Multimodal Models for
Reasoning Quality, Robustness, and Efficiency
|
cs.CV cs.AI cs.CL
|
Answering questions with Chain-of-Thought (CoT) has significantly enhanced
the reasoning capabilities of Large Language Models (LLMs), yet its impact on
Large Multimodal Models (LMMs) still lacks a systematic assessment and in-depth
investigation. In this paper, we introduce MME-CoT, a specialized benchmark
evaluating the CoT reasoning performance of LMMs, spanning six domains: math,
science, OCR, logic, space-time, and general scenes. As the first comprehensive
study in this area, we propose a thorough evaluation suite incorporating three
novel metrics that assess the reasoning quality, robustness, and efficiency at
a fine-grained level. Leveraging curated high-quality data and a unique
evaluation strategy, we conduct an in-depth analysis of state-of-the-art LMMs,
uncovering several key insights: 1) Models with a reflection mechanism
demonstrate a superior CoT quality, with Kimi k1.5 outperforming GPT-4o and
demonstrating the highest quality results; 2) CoT prompting often degrades LMM
performance on perception-heavy tasks, suggesting a potentially harmful
overthinking behavior; and 3) Although the CoT quality is high, LMMs with
reflection exhibit significant inefficiency in both normal response and
self-correction phases. We hope MME-CoT serves as a foundation for advancing
multimodal reasoning in LMMs. Project Page: https://mmecot.github.io/
|
2502.09622
|
Theoretical Benefit and Limitation of Diffusion Language Model
|
cs.LG cs.AI cs.CL stat.ML
|
Diffusion language models have emerged as a promising approach for text
generation. One would naturally expect this method to be an efficient
replacement for autoregressive models since multiple tokens can be sampled in
parallel during each diffusion step. However, its efficiency-accuracy trade-off
is not yet well understood. In this paper, we present a rigorous theoretical
analysis of a widely used type of diffusion language model, the Masked
Diffusion Model (MDM), and find that its effectiveness heavily depends on the
target evaluation metric. Under mild conditions, we prove that when using
perplexity as the metric, MDMs can achieve near-optimal perplexity in sampling
steps regardless of sequence length, demonstrating that efficiency can be
achieved without sacrificing performance. However, when using the sequence
error rate--which is important for understanding the "correctness" of a
sequence, such as a reasoning chain--we show that the required sampling steps
must scale linearly with sequence length to obtain "correct" sequences, thereby
eliminating MDM's efficiency advantage over autoregressive models. Our analysis
establishes the first theoretical foundation for understanding the benefits and
limitations of MDMs. All theoretical findings are supported by empirical
studies.
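The gap between per-token quality and sequence-level correctness that the analysis formalizes can be illustrated with toy arithmetic: with (near-)independent per-token errors, a per-token accuracy p compounds to roughly p**L over a length-L sequence.

```python
# Even a 99%-per-token model rarely emits a fully "correct" long sequence,
# which is why sequence error rate behaves so differently from perplexity.
p_token = 0.99  # high per-token accuracy (low-perplexity regime)
for L in (10, 100, 1000):
    print(L, round(p_token ** L, 3))  # 10 0.904 / 100 0.366 / 1000 0.0
```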
|
2502.09623
|
Embed Any NeRF: Graph Meta-Networks for Neural Tasks on Arbitrary NeRF
Architectures
|
cs.CV
|
Neural Radiance Fields (NeRFs) have emerged as a groundbreaking paradigm for
representing 3D objects and scenes by encoding shape and appearance information
into the weights of a neural network. Recent works have shown how such weights
can be used as input to frameworks processing them to solve deep learning
tasks. Yet, these frameworks can only process NeRFs with a specific, predefined
architecture. In this paper, we present the first framework that can ingest
NeRFs with multiple architectures and perform inference on architectures unseen
at training time. We achieve this goal by training a Graph Meta-Network in a
representation learning framework. Moreover, we show how a contrastive
objective is conducive to obtaining an architecture-agnostic latent space. In
experiments on both MLP-based and tri-planar NeRFs, our approach demonstrates
robust performance in classification and retrieval tasks that either matches or
exceeds that of existing frameworks constrained to single architectures, thus
providing the first architecture-agnostic method to perform tasks on NeRFs by
processing their weights.
|
2502.09624
|
Efficient and Trustworthy Block Propagation for Blockchain-enabled
Mobile Embodied AI Networks: A Graph Resfusion Approach
|
cs.AI cs.CR
|
By synergistically integrating mobile networks and embodied artificial
intelligence (AI), Mobile Embodied AI Networks (MEANETs) represent an advanced
paradigm that facilitates autonomous, context-aware, and interactive behaviors
within dynamic environments. Nevertheless, the rapid development of MEANETs is
accompanied by challenges in trustworthiness and operational efficiency.
Fortunately, blockchain technology, with its decentralized and immutable
characteristics, offers promising solutions for MEANETs. However, existing
block propagation mechanisms suffer from low propagation efficiency and weak
security, which result in delayed transmission of vehicular messages or
vulnerability to malicious tampering,
potentially causing severe traffic accidents in blockchain-enabled MEANETs.
Moreover, current block propagation strategies cannot effectively adapt to
real-time changes of dynamic topology in MEANETs. Therefore, in this paper, we
propose a graph Resfusion model-based trustworthy block propagation
optimization framework for consortium blockchain-enabled MEANETs. Specifically,
we propose an innovative trust calculation mechanism based on the trust cloud
model, which comprehensively accounts for randomness and fuzziness in the miner
trust evaluation. Furthermore, by leveraging the strengths of graph neural
networks and diffusion models, we develop a graph Resfusion model to
effectively and adaptively generate the optimal block propagation trajectory.
Simulation results demonstrate that the proposed model outperforms other
routing mechanisms in terms of block propagation efficiency and
trustworthiness. Additionally, the results highlight its strong adaptability to
dynamic environments, making it particularly suitable for rapidly changing
MEANETs.
|
2502.09625
|
Transformer Based Time-Series Forecasting for Stock
|
q-fin.CP cs.LG
|
To the naked eye, stock prices appear chaotic, dynamic, and unpredictable.
Indeed, stock forecasting is one of the most difficult tasks, one that
hundreds of millions of retail and professional traders around the world
attempt every second, even before the market opens. With recent advances in
machine learning and the volume of data the market has generated over the
years, applying techniques such as deep neural networks has become
unavoidable. In this work, we model the task as a multivariate forecasting
problem rather than a naive autoregression problem. The multivariate analysis
uses the attention mechanism, applied via "Stockformer", a modified
Transformer variant that we created.
|
2502.09626
|
On the Bias, Fairness, and Bias Mitigation for a Wearable-based Freezing
of Gait Detection in Parkinson's Disease
|
eess.SP cs.LG
|
Freezing of gait (FOG) is a debilitating feature of Parkinson's disease (PD),
which is a cause of injurious falls among PD patients. Recent advances in
wearable-based human activity recognition (HAR) technology have enabled the
detection of FOG subtypes across benchmark datasets. Since FOG manifestation is
heterogeneous, developing models that quantify FOG consistently across patients
with varying demographics, FOG types, and PD conditions is important. Bias and
fairness in FOG models remain understudied in HAR, with research focused mainly
on FOG detection using single benchmark datasets. We evaluated the bias and
fairness of HAR models for wearable-based FOG detection across demographics and
PD conditions using multiple datasets and the effectiveness of transfer
learning as a potential bias mitigation approach. Our evaluation using
demographic parity ratio (DPR) and equalized odds ratio (EOR) showed model bias
(DPR & EOR < 0.8) for all stratified demographic variables, including age, sex,
and disease duration. Our experiments demonstrated that transfer learning from
multi-site datasets and generic human activity representations significantly
improved fairness (average change in DPR +0.027, +0.039, respectively) and
performance (average change in F1-score +0.026, +0.018, respectively) across
attributes, supporting the hypothesis that generic human activity
representations learn fairer representations applicable to health analytics.
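The two fairness criteria above (DPR and EOR, with 0.8 as the bias threshold) can be computed directly from per-group prediction statistics. The following is a minimal illustrative sketch with hypothetical data, not the study's implementation:

```python
# Minimal sketch (not the study's code) of the two fairness metrics:
# DPR = min/max of per-group positive-prediction rates;
# EOR = worst min/max ratio over per-group TPRs and FPRs.
# All data below are hypothetical.

def rates(y_true, y_pred):
    """Per-group (positive-prediction rate, TPR, FPR)."""
    n = len(y_true)
    pos_rate = sum(y_pred) / n
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    npos = sum(y_true)
    tpr = tp / npos if npos else 0.0
    fpr = fp / (n - npos) if n - npos else 0.0
    return pos_rate, tpr, fpr

def min_max_ratio(vals):
    return min(vals) / max(vals) if max(vals) > 0 else 1.0

def dpr_eor(groups):
    """groups: one (y_true, y_pred) pair per demographic group."""
    stats = [rates(yt, yp) for yt, yp in groups]
    dpr = min_max_ratio([s[0] for s in stats])
    # Equalized odds takes the worst ratio across TPR and FPR.
    eor = min(min_max_ratio([s[1] for s in stats]),
              min_max_ratio([s[2] for s in stats]))
    return dpr, eor

# Two groups with identical labels but different prediction behaviour:
dpr, eor = dpr_eor([([1, 1, 0, 0], [1, 1, 1, 0]),
                    ([1, 1, 0, 0], [1, 0, 0, 0])])
```

A value below 0.8 on either metric corresponds to the bias criterion used in the abstract.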
|
2502.09635
|
CORRECT: Context- and Reference-Augmented Reasoning and Prompting for
Fact-Checking
|
cs.CL cs.AI
|
Fact-checking the truthfulness of claims usually requires reasoning over
multiple evidence sentences. Oftentimes, evidence sentences may not be always
self-contained, and may require additional contexts and references from
elsewhere to understand coreferential expressions, acronyms, and the scope of a
reported finding. For example, evidence sentences from an academic paper may
need contextual sentences in the paper and descriptions in its cited papers to
determine the scope of a research discovery. However, most fact-checking models
mainly focus on the reasoning within evidence sentences, and ignore the
auxiliary contexts and references. To address this problem, we propose a novel
method, Context- and Reference-augmented Reasoning and Prompting. For evidence
reasoning, we construct a three-layer evidence graph with evidence, context,
and reference layers. We design intra- and cross-layer reasoning to integrate
three graph layers into a unified evidence embedding. For verdict prediction,
we design evidence-conditioned prompt encoder, which produces unique prompt
embeddings for each claim. These evidence-conditioned prompt embeddings and
claims are unified for fact-checking. Experiments verify the strength of our
model.
|
2502.09636
|
Reading between the Lines: Can LLMs Identify Cross-Cultural
Communication Gaps?
|
cs.CL cs.AI
|
In a rapidly globalizing and digital world, content such as book and product
reviews created by people from diverse cultures is read and consumed by others
from different corners of the world. In this paper, we investigate the extent
and patterns of gaps in understandability of book reviews due to the presence
of culturally-specific items and elements that might be alien to users from
another culture. Our user study on 57 book reviews from Goodreads reveals that
83% of the reviews had at least one culture-specific, difficult-to-understand
element. We also evaluate the efficacy of GPT-4o in identifying such items,
given the cultural background of the reader; the results are mixed, implying a
significant scope for improvement. Our datasets are available here:
https://github.com/sougata-ub/reading_between_lines
|
2502.09637
|
Meta-Cultural Competence: Climbing the Right Hill of Cultural Awareness
|
cs.CY cs.AI cs.CL
|
Numerous recent studies have shown that Large Language Models (LLMs) are
biased towards a Western and Anglo-centric worldview, which compromises their
usefulness in non-Western cultural settings. However, "culture" is a complex,
multifaceted topic, and its awareness, representation, and modeling in LLMs and
LLM-based applications can be defined and measured in numerous ways. In this
position paper, we ask what it means for an LLM to possess "cultural
awareness", and through a thought experiment that extends the
Octopus test proposed by Bender and Koller (2020), we argue that it is not
cultural awareness or knowledge, rather meta-cultural competence, which is
required of an LLM and LLM-based AI system that will make it useful across
various, including completely unseen, cultures. We lay out the principles of
meta-cultural competence for AI systems, and discuss ways to measure and model
it.
|
2502.09638
|
Jailbreaking to Jailbreak
|
cs.CL cs.AI
|
Refusal training on Large Language Models (LLMs) prevents harmful outputs,
yet this defense remains vulnerable to both automated and human-crafted
jailbreaks. We present a novel LLM-as-red-teamer approach in which a human
jailbreaks a refusal-trained LLM to make it willing to jailbreak itself or
other LLMs. We refer to the jailbroken LLMs as $J_2$ attackers, which can
systematically evaluate target models using various red teaming strategies and
improve their performance via in-context learning from previous failures. Our
experiments demonstrate that Sonnet 3.5 and Gemini 1.5 Pro outperform other
LLMs as $J_2$, achieving 93.0% and 91.0% attack success rates (ASRs)
respectively against GPT-4o (and similar results across other capable LLMs) on
Harmbench. Our work not only introduces a scalable approach to strategic red
teaming, drawing inspiration from human red teamers, but also highlights
jailbreaking-to-jailbreak as an overlooked failure mode of the safeguard.
Specifically, an LLM can bypass its own safeguards by employing a jailbroken
version of itself that is willing to assist in further jailbreaking. To prevent
any direct misuse with $J_2$, while advancing research in AI safety, we
publicly share our methodology while keeping specific prompting details
private.
|
2502.09640
|
Online Social Support Detection in Spanish Social Media Texts
|
cs.CL cs.AI
|
The advent of social media has transformed communication, enabling
individuals to share their experiences, seek support, and participate in
diverse discussions. While extensive research has focused on identifying
harmful content like hate speech, the recognition and promotion of positive and
supportive interactions remain largely unexplored. This study proposes an
innovative approach to detecting online social support in Spanish-language
social media texts. We introduce the first annotated dataset specifically
created for this task, comprising 3,189 YouTube comments classified as
supportive or non-supportive. To address data imbalance, we employed GPT-4o to
generate paraphrased comments and create a balanced dataset. We then evaluated
social support classification using traditional machine learning models, deep
learning architectures, and transformer-based models, including GPT-4o, but
only on the unbalanced dataset. Subsequently, we utilized a transformer model
to compare the performance between the balanced and unbalanced datasets. Our
findings indicate that the balanced dataset yielded improved results for Task 2
(Individual and Group) and Task 3 (Nation, Other, LGBTQ, Black Community,
Women, Religion), whereas GPT-4o performed best for Task 1 (Social Support and
Non-Support). This study highlights the significance of fostering a supportive
online environment and lays the groundwork for future research in automated
social support detection.
|
2502.09642
|
Krutrim LLM: Multilingual Foundational Model for over a Billion People
|
cs.CL cs.AI
|
India is a diverse society with unique challenges in developing AI systems,
including linguistic diversity, oral traditions, data accessibility, and
scalability. Existing foundation models are primarily trained on English,
limiting their effectiveness for India's population. Indic languages comprise
only 1 percent of Common Crawl corpora despite India representing 18 percent of
the global population, leading to linguistic biases. Thousands of regional
languages, dialects, and code mixing create additional representation
challenges due to sparse training data.
We introduce Krutrim LLM, a 2 trillion token multilingual model designed for
India's linguistic landscape. It incorporates the largest known Indic dataset,
mitigating data scarcity and ensuring balanced performance across dialects.
Krutrim outperforms or matches state-of-the-art models on Indic benchmarks
while maintaining competitive English performance. Despite requiring
significantly fewer training FLOPs, Krutrim LLM matches or exceeds models like
LLAMA-2 on 10 out of 16 tasks, with an average score of 0.57 versus 0.55. This
demonstrates Krutrim's flexible multilingual fluency across diverse linguistic
contexts.
Krutrim is integrated with real-time search to improve factual accuracy in
conversational AI applications. This enhances accessibility for over 1 billion
users worldwide. Through intentional design choices addressing data imbalances,
Krutrim LLM signifies meaningful progress in building ethical, globally
representative AI models.
|
2502.09644
|
From Argumentation to Deliberation: Perspectivized Stance Vectors for
Fine-grained (Dis)agreement Analysis
|
cs.CL cs.AI cs.CY
|
Debating over conflicting issues is a necessary first step towards resolving
conflicts. However, an arguer's intrinsic perspectives are difficult to
overcome through persuasive argumentation alone. Proceeding from a debate to a
deliberative process, where we can identify actionable options for resolving a
conflict, requires a deeper analysis of arguments and the perspectives they are
grounded in, as it is only from there that one can derive mutually agreeable
resolution steps. In this work we develop a framework for a deliberative
analysis of arguments in a computational argumentation setup. We conduct a
fine-grained analysis of perspectivized stances expressed in the arguments of
different arguers or stakeholders on a given issue, aiming not only to identify
their opposing views, but also shared perspectives arising from their
attitudes, values or needs. We formalize this analysis in Perspectivized Stance
Vectors that characterize the individual perspectivized stances of all arguers
on a given issue. We construct these vectors by determining issue- and
argument-specific concepts, and predict an arguer's stance relative to each of
them. The vectors allow us to measure a modulated (dis)agreement between
arguers, structured by perspectives, which allows us to identify actionable
points for conflict resolution, as a first step towards deliberation.
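As an illustrative sketch of the idea (not the paper's exact formalization), a perspectivized stance vector can be encoded as a mapping from issue-specific concepts to stances in {-1, 0, +1}, with a modulated (dis)agreement score compared only over concepts where both arguers take a stance. The concepts and values below are hypothetical:

```python
# Hypothetical sketch: stance vectors over issue-specific concepts,
# with -1 = oppose, 0 = no expressed stance, +1 = favor.

def modulated_agreement(v1, v2):
    """Return (agreement score in [-1, 1], shared-stance concepts),
    comparing only concepts where both arguers take a stance."""
    common = [c for c in v1 if c in v2 and v1[c] != 0 and v2[c] != 0]
    if not common:
        return 0.0, []
    score = sum(v1[c] * v2[c] for c in common) / len(common)
    shared = [c for c in common if v1[c] == v2[c]]  # common ground
    return score, shared

arguer_a = {"cost": -1, "safety": +1, "jobs": +1, "climate": 0}
arguer_b = {"cost": +1, "safety": +1, "jobs": 0, "climate": -1}
score, shared = modulated_agreement(arguer_a, arguer_b)
# They disagree on "cost" but share "safety" as an actionable point.
```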
|
2502.09645
|
From No to Know: Taxonomy, Challenges, and Opportunities for Negation
Understanding in Multimodal Foundation Models
|
cs.CL cs.AI
|
Negation, a linguistic construct conveying absence, denial, or contradiction,
poses significant challenges for multilingual multimodal foundation models.
These models excel in tasks like machine translation, text-guided generation,
image captioning, audio interactions, and video processing but often struggle
to accurately interpret negation across diverse languages and cultural
contexts. In this perspective paper, we propose a comprehensive taxonomy of
negation constructs, illustrating how structural, semantic, and cultural
factors influence multimodal foundation models. We present open research
questions and highlight key challenges, emphasizing the importance of
addressing these issues to achieve robust negation handling. Finally, we
advocate for specialized benchmarks, language-specific tokenization,
fine-grained attention mechanisms, and advanced multimodal architectures. These
strategies can foster more adaptable and semantically precise multimodal
foundation models, better equipped to navigate and accurately interpret the
complexities of negation in multilingual, multimodal environments.
|
2502.09646
|
Language Shift or Maintenance? An Intergenerational Study of the Tibetan
Community in Saudi Arabia
|
cs.CL cs.CY
|
The present study provides the first-ever report on the language shift from
Tibetan to Arabic among descendants of Tibetan families who migrated from the
Tibet region to Saudi Arabia around 70 years ago. The aim of this study was to
determine whether three age groups had adopted different practices in terms of
maintaining Tibetan or shifting to Hijazi Arabic. To this end, 96 male and
female members of the Tibetan community responded to a questionnaire in which
they were asked about their code choice in different domains (home,
neighbourhood, friends and relatives, expressing emotion, and performing
religious rituals). The data revealed significant intergenerational differences
between members of the community in terms of the extent of the shift to Arabic,
with Tibetan rarely used by younger members and older members making only
slightly more use of it. The difference between the three age groups was
significant, at a p-value of .001.
|
2502.09647
|
Unveiling Simplicities of Attention: Adaptive Long-Context Head
Identification
|
cs.CL cs.LG
|
The ability to process long contexts is crucial for many natural language
processing tasks, yet it remains a significant challenge. While substantial
progress has been made in enhancing the efficiency of attention mechanisms,
there is still a gap in understanding how attention heads function in
long-context settings. In this paper, we observe that while certain heads
consistently attend to local information only, others swing between attending
to local and long-context information depending on the query. This raises the
question: can we identify which heads require long-context information to
predict the next token accurately? We demonstrate that it is possible to
predict which heads are crucial for long-context processing using only local
keys. The
core idea here is to exploit a simple model for the long-context scores via
second moment approximations. These findings unveil simple properties of
attention in the context of long sequences, and open the door to potentially
significant gains in efficiency.
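One way to picture the local-versus-long-context head distinction is the hedged sketch below, using made-up scores; note the paper predicts this from local keys alone via second moment approximations, whereas this illustration uses full attention scores only to make the distinction concrete:

```python
import math

# Hedged illustration with made-up scores: flag heads whose softmax
# attention mass lies mostly outside the last `window` positions.

def long_context_heads(head_scores, window=4, threshold=0.3):
    flagged = []
    for name, scores in head_scores.items():
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]  # numerically stable softmax
        distant = sum(exps[:-window]) / sum(exps) if len(exps) > window else 0.0
        if distant > threshold:
            flagged.append(name)
    return flagged

heads = {
    "A": [0, 0, 0, 0, 5, 5, 5, 5],  # attends almost only locally
    "B": [1] * 8,                   # spreads mass over the full context
}
flagged = long_context_heads(heads)
```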
|
2502.09648
|
UKTA: Unified Korean Text Analyzer
|
cs.CL cs.AI
|
Evaluating writing quality is complex and time-consuming, often delaying
feedback to learners. While automated writing evaluation tools are effective
for English, Korean automated writing evaluation tools face challenges due to
their inability to address multi-view analysis, error propagation, and
evaluation explainability. To overcome these challenges, we introduce UKTA
(Unified Korean Text Analyzer), a comprehensive Korean text analysis and writing
evaluation system. UKTA provides accurate low-level morpheme analysis, key
lexical features for mid-level explainability, and transparent high-level
rubric-based writing scores. Our approach enhances accuracy and quadratic
weighted kappa over existing baselines, positioning UKTA as a leading
multi-perspective tool for Korean text analysis and writing evaluation.
|
2502.09649
|
Imit Diff: Semantics Guided Diffusion Transformer with Dual Resolution
Fusion for Imitation Learning
|
cs.AI cs.CV cs.LG cs.RO
|
Visuomotor imitation learning enables embodied agents to effectively acquire
manipulation skills from video demonstrations and robot proprioception.
However, as scene complexity and visual distractions increase, existing methods
that perform well in simple scenes tend to degrade in performance. To address
this challenge, we introduce Imit Diff, a semantics-guided diffusion
transformer with dual resolution fusion for imitation learning. Our approach
leverages prior knowledge from vision language foundation models to translate
high-level semantic instruction into pixel-level visual localization. This
information is explicitly integrated into a multi-scale visual enhancement
framework, constructed with a dual resolution encoder. Additionally, we
introduce an implementation of Consistency Policy within the diffusion
transformer architecture to improve both real-time performance and motion
smoothness in embodied agent control. We evaluate Imit Diff on several
challenging real-world tasks. Due to its task-oriented visual localization and
fine-grained scene perception, it significantly outperforms state-of-the-art
methods, especially in complex scenes with visual distractions, including
zero-shot experiments focused on visual distraction and category
generalization. The code will be made publicly available.
|
2502.09650
|
Principled Data Selection for Alignment: The Hidden Risks of Difficult
Examples
|
cs.CL cs.AI cs.LG
|
The alignment of large language models (LLMs) often assumes that using more
clean data yields better outcomes, overlooking the match between model capacity
and example difficulty. Challenging this, we propose a new principle:
Preference data vary in difficulty, and overly difficult examples hinder
alignment by exceeding the model's capacity. Through systematic
experimentation, we validate this principle with three key findings: (1)
preference examples vary in difficulty, as evidenced by consistent learning
orders across alignment runs; (2) overly difficult examples significantly
degrade performance across four LLMs and two datasets; and (3) the capacity of
a model dictates its threshold for handling difficult examples, underscoring a
critical relationship between data selection and model capacity. Building on
this principle, we introduce Selective DPO, which filters out overly difficult
examples. This simple adjustment improves alignment performance by 9-16% in win
rates on the AlpacaEval 2 benchmark compared to the DPO baseline, surpassing a
series of DPO variants with different algorithmic adjustments. Together, these
results illuminate the importance of aligning data difficulty with model
capacity, offering a transformative perspective for improving alignment
strategies in LLMs. Code is available at
https://github.com/glorgao/SelectiveDPO.
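The filtering step can be sketched as follows. The difficulty proxy used here, a frozen reference model's log-probability margin between chosen and rejected responses, is an assumption for illustration and not necessarily the paper's exact criterion:

```python
# Hedged sketch of the filtering idea: rank preference pairs by a
# difficulty proxy and drop the hardest before DPO training. The proxy
# (reference-model log-prob margin, chosen minus rejected) is an
# illustrative assumption; a small or negative margin suggests a pair
# is hard for a model of this capacity.

def select_easy_pairs(pairs, keep_fraction=0.8):
    """pairs: dicts with 'chosen_logp' and 'rejected_logp' (sequence
    log-probs under a frozen reference model). Returns the easiest
    keep_fraction of pairs; the hardest are filtered out."""
    scored = sorted(
        pairs,
        key=lambda p: p["chosen_logp"] - p["rejected_logp"],
        reverse=True,  # largest margin first = easiest first
    )
    k = max(1, int(len(scored) * keep_fraction))
    return scored[:k]

data = [
    {"id": 0, "chosen_logp": -12.0, "rejected_logp": -20.0},  # easy
    {"id": 1, "chosen_logp": -15.0, "rejected_logp": -15.5},  # medium
    {"id": 2, "chosen_logp": -18.0, "rejected_logp": -10.0},  # hard
]
kept = select_easy_pairs(data, keep_fraction=0.67)
```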
|
2502.09651
|
AI-VERDE: A Gateway for Egalitarian Access to Large Language Model-Based
Resources For Educational Institutions
|
cs.CL cs.CY
|
We present AI-VERDE, a unified LLM-as-a-platform service designed to
facilitate seamless integration of commercial, cloud-hosted, and on-premise
open LLMs in academic settings. AI-VERDE streamlines access management for
instructional and research groups by providing features such as robust access
control, privacy-preserving mechanisms, native Retrieval-Augmented Generation
(RAG) support, budget management for third-party LLM services, and both a
conversational web interface and API access. In a pilot deployment at a large
public university, AI-VERDE demonstrated significant engagement across diverse
educational and research groups, enabling activities that would typically
require substantial budgets for commercial LLM services with limited user and
team management capabilities. To the best of our knowledge, AI-VERDE is the
first platform to address both academic and research needs for LLMs within a
higher education institutional framework.
|
2502.09652
|
GraphCompNet: A Position-Aware Model for Predicting and Compensating
Shape Deviations in 3D Printing
|
cs.CV cs.LG
|
This paper introduces a data-driven algorithm for modeling and compensating
shape deviations in additive manufacturing (AM), addressing challenges in
geometric accuracy and batch production. While traditional methods, such as
analytical models and metrology, laid the groundwork for geometric precision,
they are often impractical for large-scale production. Recent advancements in
machine learning (ML) have improved compensation precision, but issues remain
in generalizing across complex geometries and adapting to position-dependent
variations. We present a novel approach for powder bed fusion (PBF) processes
using GraphCompNet, a computational framework that combines graph-based
neural networks with a generative adversarial network (GAN)-inspired training
process. By leveraging point cloud data and dynamic graph convolutional neural
networks (DGCNNs), GraphCompNet models complex shapes and incorporates
position-specific thermal and mechanical factors. A two-stage adversarial
training procedure iteratively refines compensated designs via a
compensator-predictor architecture, offering real-time feedback and
optimization. Experimental validation across diverse shapes and positions shows
the framework significantly improves compensation accuracy (35 to 65 percent)
across the entire print space, adapting to position-dependent variations. This
work advances the development of Digital Twin technology for AM, enabling
scalable, real-time monitoring and compensation, and addressing critical gaps
in AM process control. The proposed method supports high-precision, automated
industrial-scale design and manufacturing systems.
|
2502.09653
|
SASVi -- Segment Any Surgical Video
|
eess.IV cs.CV
|
Purpose: Foundation models, trained on multitudes of public datasets, often
require additional fine-tuning or re-prompting mechanisms to be applied to
visually distinct target domains such as surgical videos. Further, without
domain knowledge, they cannot model the specific semantics of the target
domain. Hence, when applied to surgical video segmentation, they fail to
generalise to sections where previously tracked objects leave the scene or new
objects enter. Methods: We propose SASVi, a novel re-prompting mechanism based
on a frame-wise Mask R-CNN Overseer model, which is trained on a minimal amount
of scarcely available annotations for the target domain. This model
automatically re-prompts the foundation model SAM2 when the scene constellation
changes, allowing for temporally smooth and complete segmentation of full
surgical videos. Results: Re-prompting based on our Overseer model
significantly improves the temporal consistency of surgical video segmentation
compared to similar prompting techniques and especially frame-wise
segmentation, which neglects temporal information, by at least 1.5%. Our
proposed approach allows us to successfully deploy SAM2 to surgical videos,
which we quantitatively and qualitatively demonstrate for three different
cholecystectomy and cataract surgery datasets. Conclusion: SASVi can serve as a
new baseline for smooth and temporally consistent segmentation of surgical
videos with scarcely available annotation data. Our method allows us to
leverage scarce annotations and obtain complete annotations for full videos of
the large-scale counterpart datasets. We make those annotations publicly
available, providing extensive annotation data for the future development of
surgical data science models.
|
2502.09654
|
Heterogeneous Mixture of Experts for Remote Sensing Image
Super-Resolution
|
eess.IV cs.CV
|
Remote sensing image super-resolution (SR) aims to reconstruct
high-resolution remote sensing images from low-resolution inputs, thereby
addressing limitations imposed by sensors and imaging conditions. However, the
inherent characteristics of remote sensing images, including diverse ground
object types and complex details, pose significant challenges to achieving
high-quality reconstruction. Existing methods typically employ a uniform
structure to process various types of ground objects without distinction,
making it difficult to adapt to the complex characteristics of remote sensing
images. To address this issue, we introduce a Mixture of Experts (MoE) model
and design a set of heterogeneous experts. These experts are organized into
multiple expert groups, where experts within each group are homogeneous while
being heterogeneous across groups. This design ensures that specialized
activation parameters can be employed to handle the diverse and intricate
details of ground objects effectively. To better accommodate the heterogeneous
experts, we propose a multi-level feature aggregation strategy to guide the
routing process. Additionally, we develop a dual-routing mechanism to
adaptively select the optimal expert for each pixel. Experiments conducted on
the UCMerced and AID datasets demonstrate that our proposed method achieves
superior SR reconstruction accuracy compared to state-of-the-art methods. The
code will be available at https://github.com/Mr-Bamboo/MFG-HMoE.
|
2502.09655
|
Bidirectional Diffusion Bridge Models
|
cs.CV cs.AI
|
Diffusion bridges have shown potential in paired image-to-image (I2I)
translation tasks. However, existing methods are limited by their
unidirectional nature, requiring separate models for forward and reverse
translations. This not only doubles the computational cost but also restricts
their practicality. In this work, we introduce the Bidirectional Diffusion
Bridge Model (BDBM), a scalable approach that facilitates bidirectional
translation between two coupled distributions using a single network. BDBM
leverages the Chapman-Kolmogorov Equation for bridges, enabling it to model
data distribution shifts across timesteps in both forward and backward
directions by exploiting the interchangeability of the initial and target
timesteps within this framework. Notably, when the marginal distribution given
endpoints is Gaussian, BDBM's transition kernels in both directions possess
analytical forms, allowing for efficient learning with a single network. We
demonstrate the connection between BDBM and existing bridge methods, such as
Doob's h-transform and variational approaches, and highlight its advantages.
Extensive experiments on high-resolution I2I translation tasks demonstrate that
BDBM not only enables bidirectional translation with minimal additional cost
but also outperforms state-of-the-art bridge models. Our source code is
available at https://github.com/kvmduc/BDBM.
|
2502.09656
|
Multi-Omics Fusion with Soft Labeling for Enhanced Prediction of Distant
Metastasis in Nasopharyngeal Carcinoma Patients after Radiotherapy
|
q-bio.QM cs.CV eess.IV
|
Omics fusion has emerged as a crucial preprocessing approach in the field of
medical image processing, providing significant assistance to several studies.
One of the challenges encountered in the integration of omics data is the
presence of unpredictability arising from disparities in data sources and
medical imaging equipment. In order to overcome this challenge and facilitate
the integration of their joint application to specific medical objectives, this
study aims to develop a fusion methodology that mitigates the disparities
inherent in omics data. The utilization of the multi-kernel late-fusion method
has gained significant popularity as an effective strategy for addressing this
particular challenge. An efficient representation of the data may be achieved
by utilizing a suitable single-kernel function to map the inherent features and
afterward merging them in a space with a high number of dimensions. This
approach effectively addresses the disparities noted above. However, the
inflexibility of label fitting constrains the use of multi-kernel late-fusion
methods on complex nasopharyngeal carcinoma (NPC) datasets, limiting the
efficacy of general classifiers on high-dimensional features. Our soft-labeling
methodology increases the disparity between the two cohorts, providing a more
flexible structure for the allocation of labels. The examination of the
NPC-ContraParotid dataset
demonstrates the model's robustness and efficacy, indicating its potential as a
valuable tool for predicting distant metastases in patients with nasopharyngeal
carcinoma (NPC).
|
2502.09657
|
Integrating Spatiotemporal Vision Transformer into Digital Twins for
High-Resolution Heat Stress Forecasting in Campus Environments
|
cs.CV
|
Extreme heat events exacerbated by climate change pose significant challenges
to urban resilience and planning. This study introduces a climate-responsive
digital twin framework integrating the Spatiotemporal Vision Transformer
(ST-ViT) model to enhance heat stress forecasting and decision-making. Using a
Texas campus as a testbed, we synthesized high-resolution physical model
simulations with spatial and meteorological data to develop fine-scale human
thermal predictions. The ST-ViT-powered digital twin enables efficient,
data-driven insights for planners, policymakers, and campus stakeholders,
supporting targeted heat mitigation strategies and advancing climate-adaptive
urban design.
|
2502.09658
|
Neuro-Conceptual Artificial Intelligence: Integrating OPM with Deep
Learning to Enhance Question Answering Quality
|
cs.CL cs.AI
|
Knowledge representation and reasoning are critical challenges in Artificial
Intelligence (AI), particularly in integrating neural and symbolic approaches
to achieve explainable and transparent AI systems. Traditional knowledge
representation methods often fall short of capturing complex processes and
state changes. We introduce Neuro-Conceptual Artificial Intelligence (NCAI), a
specialization of the neuro-symbolic AI approach that integrates conceptual
modeling using Object-Process Methodology (OPM) ISO 19450:2024 with deep
learning to enhance question-answering (QA) quality. By converting natural
language text into OPM models using in-context learning, NCAI leverages the
expressive power of OPM to represent complex OPM elements (processes, objects,
and states) beyond what traditional triplet-based knowledge graphs can easily
capture. This rich structured knowledge representation improves reasoning
transparency and answer accuracy in an OPM-QA system. We further propose
transparency evaluation metrics to quantitatively measure how faithfully the
predicted reasoning aligns with OPM-based conceptual logic. Our experiments
demonstrate that NCAI outperforms traditional methods, highlighting its
potential for advancing neuro-symbolic AI by providing rich knowledge
representations, measurable transparency, and improved reasoning.
|
2502.09659
|
Cancer Vaccine Adjuvant Name Recognition from Biomedical Literature
using Large Language Models
|
cs.CL cs.AI cs.CY
|
Motivation: An adjuvant is a chemical incorporated into vaccines that
enhances their efficacy by improving the immune response. Identifying adjuvant
names from cancer vaccine studies is essential for furthering research and
enhancing immunotherapies. However, the manual curation from the constantly
expanding biomedical literature poses significant challenges. This study
explores the automated recognition of vaccine adjuvant names using Large
Language Models (LLMs), specifically Generative Pretrained Transformers (GPT)
and Large Language Model Meta AI (Llama). Methods: We utilized two datasets: 97
clinical trial records from AdjuvareDB and 290 abstracts annotated with the
Vaccine Adjuvant Compendium (VAC). GPT-4o and Llama 3.2 were employed in
zero-shot and few-shot learning paradigms with up to four examples per prompt.
Prompts explicitly targeted adjuvant names, testing the impact of contextual
information such as substances or interventions. Outputs underwent automated
and manual validation for accuracy and consistency. Results: GPT-4o attained
100% Precision in all settings while exhibiting notable improvements in Recall
and F1-scores, particularly when incorporating interventions. On the VAC
dataset, GPT-4o achieved a maximum F1-score of 77.32% with interventions,
surpassing Llama-3.2-3B by approximately 2%. On the AdjuvareDB dataset, GPT-4o
reached an F1-score of 81.67% for three-shot prompting with interventions,
surpassing Llama-3.2-3B's maximum F1-score of 65.62%. Conclusion: Our findings
demonstrate that LLMs excel at identifying adjuvant names, including rare
variations of naming representation. This study emphasizes the capability of
LLMs to enhance cancer vaccine development by efficiently extracting insights.
Future work aims to broaden the framework to encompass various biomedical
literature and enhance model generalizability across various vaccines and
adjuvants.
|
2502.09660
|
Towards Fine-grained Interactive Segmentation in Images and Videos
|
cs.CV eess.IV
|
The recent Segment Anything Models (SAMs) have emerged as foundational visual
models for general interactive segmentation. Despite demonstrating robust
generalization abilities, they still suffer performance degradations in
scenarios demanding accurate masks. Existing methods for high-precision
interactive segmentation face a trade-off between the ability to perceive
intricate local details and maintaining stable prompting capability, which
hinders the applicability and effectiveness of foundational segmentation
models. To this end, we present SAM2Refiner, a framework built upon the SAM2
backbone. This architecture allows SAM2 to generate fine-grained segmentation
masks for both images and videos while preserving its inherent strengths.
Specifically, we design a localization augment module, which incorporates local
contextual cues to enhance global features via a cross-attention mechanism,
thereby exploiting potential detailed patterns and maintaining semantic
information. Moreover, to strengthen the prompting ability toward the enhanced
object embedding, we introduce a prompt retargeting module to renew the
embedding with spatially aligned prompt features. In addition, to obtain
accurate high resolution segmentation masks, a mask refinement module is
devised by employing a multi-scale cascaded structure to fuse mask features
with hierarchical representations from the encoder. Extensive experiments
demonstrate the effectiveness of our approach, revealing that the proposed
method can produce highly precise masks for both images and videos, surpassing
state-of-the-art methods.
|
2502.09662
|
Generalizable Cervical Cancer Screening via Large-scale Pretraining and
Test-Time Adaptation
|
q-bio.QM cs.CV eess.IV
|
Cervical cancer is a leading malignancy of the female reproductive system. While
AI-assisted cytology offers a cost-effective and non-invasive screening
solution, current systems struggle with generalizability in complex clinical
scenarios. To address this issue, we introduced Smart-CCS, a generalizable
Cervical Cancer Screening paradigm based on pretraining and adaptation to
create robust and generalizable screening systems. To develop and validate
Smart-CCS, we first curated a large-scale, multi-center dataset named CCS-127K,
which comprises a total of 127,471 cervical cytology whole-slide images
collected from 48 medical centers. By leveraging large-scale self-supervised
pretraining, our CCS models are equipped with strong generalization capability,
potentially generalizing across diverse scenarios. Then, we incorporated
test-time adaptation to specifically optimize the trained CCS model for complex
clinical settings, which adapts and refines predictions, improving real-world
applicability. We conducted large-scale system evaluation among various
cohorts. In retrospective cohorts, Smart-CCS achieved an overall area under the
curve (AUC) value of 0.965 and sensitivity of 0.913 for cancer screening on 11
internal test datasets. In external testing, system performance remained high
at 0.950 AUC across 6 independent test datasets. In prospective cohorts, our
Smart-CCS achieved AUCs of 0.947, 0.924, and 0.986 in three prospective
centers, respectively. Moreover, the system demonstrated superior sensitivity
in diagnosing cervical cancer, confirming the accuracy of our cancer screening
results by using histology findings for validation. Interpretability analysis
with cell and slide predictions further indicated that the system's
decision-making aligns with clinical practice. Smart-CCS represents a
significant advancement in cancer screening across diverse clinical contexts.
|
2502.09663
|
DiffEx: Explaining a Classifier with Diffusion Models to Identify
Microscopic Cellular Variations
|
cs.CV cs.AI cs.LG q-bio.CB
|
In recent years, deep learning models have been extensively applied to
biological data across various modalities. Discriminative deep learning models
have excelled at classifying images into categories (e.g., healthy versus
diseased, treated versus untreated). However, these models are often perceived
as black boxes due to their complexity and lack of interpretability, limiting
their application in real-world biological contexts. In biological research,
explainability is essential: understanding classifier decisions and identifying
subtle differences between conditions are critical for elucidating the effects
of treatments, disease progression, and biological processes. To address this
challenge, we propose DiffEx, a method for generating visually interpretable
attributes to explain classifiers and identify microscopic cellular variations
between different conditions. We demonstrate the effectiveness of DiffEx in
explaining classifiers trained on natural and biological images. Furthermore,
we use DiffEx to uncover phenotypic differences within microscopy datasets. By
offering insights into cellular variations through classifier explanations,
DiffEx has the potential to advance the understanding of diseases and aid drug
discovery by identifying novel biomarkers.
|
2502.09664
|
Image Super-Resolution with Guarantees via Conformal Generative Models
|
cs.CV cs.LG stat.ML
|
The increasing use of generative ML foundation models for image
super-resolution calls for robust and interpretable uncertainty quantification
methods. We address this need by presenting a novel approach based on conformal
prediction techniques to create a "confidence mask" capable of reliably and
intuitively communicating where the generated image can be trusted. Our method
is adaptable to any black-box generative model, including those locked behind
an opaque API, requires only easily attainable data for calibration, and is
highly customizable via the choice of a local image similarity metric. We prove
strong theoretical guarantees for our method that span fidelity error control
(according to our local image similarity metric), reconstruction quality, and
robustness in the face of data leakage. Finally, we empirically evaluate these
results and establish our method's solid performance.
|
2502.09665
|
Revealing Subtle Phenotypes in Small Microscopy Datasets Using Latent
Diffusion Models
|
cs.CV
|
Identifying subtle phenotypic variations in cellular images is critical for
advancing biological research and accelerating drug discovery. These variations
are often masked by the inherent cellular heterogeneity, making it challenging
to distinguish differences between experimental conditions. Recent advancements
in deep generative models have demonstrated significant potential for revealing
these nuanced phenotypes through image translation, opening new frontiers in
cellular and molecular biology as well as the identification of novel
biomarkers. Among these generative models, diffusion models stand out for their
ability to produce high-quality, realistic images. However, training diffusion
models typically requires large datasets and substantial computational
resources, both of which can be limited in biological research. In this work,
we propose a novel approach that leverages pre-trained latent diffusion models
to uncover subtle phenotypic changes. We validate our approach qualitatively
and quantitatively on several small datasets of microscopy images. Our findings
reveal that our approach enables effective detection of phenotypic variations,
capturing both visually apparent and imperceptible differences. Ultimately, our
results highlight the promising potential of this approach for phenotype
detection, especially in contexts constrained by limited data and computational
capacity.
|
2502.09667
|
k-LLMmeans: Summaries as Centroids for Interpretable and Scalable
LLM-Based Text Clustering
|
cs.CL cs.LG stat.ML
|
We introduce k-LLMmeans, a novel modification of the k-means clustering
algorithm that utilizes LLMs to generate textual summaries as cluster
centroids, thereby capturing contextual and semantic nuances often lost when
relying on purely numerical means of document embeddings. This modification
preserves the properties of k-means while offering greater interpretability:
the cluster centroid is represented by an LLM-generated summary, whose
embedding guides cluster assignments. We also propose a mini-batch variant,
enabling efficient online clustering for streaming text data and providing
real-time interpretability of evolving cluster centroids. Through extensive
simulations, we show that our methods outperform vanilla k-means on multiple
metrics while incurring only modest LLM usage that does not scale with dataset
size. Finally, we present a case study showcasing the interpretability of
evolving cluster centroids in sequential text streams. As part of our
evaluation, we compile a new dataset from StackExchange, offering a benchmark
for text-stream clustering.
|
2502.09669
|
Meta-INR: Efficient Encoding of Volumetric Data via Meta-Learning
Implicit Neural Representation
|
cs.CV cs.AI cs.GR
|
Implicit neural representation (INR) has emerged as a promising solution for
encoding volumetric data, offering continuous representations and seamless
compatibility with the volume rendering pipeline. However, optimizing an INR
network from randomly initialized parameters for each new volume is
computationally inefficient, especially for large-scale time-varying or
ensemble volumetric datasets where volumes share similar structural patterns
but require independent training. To close this gap, we propose Meta-INR, a
pretraining strategy adapted from meta-learning algorithms to learn initial INR
parameters from partial observation of a volumetric dataset. Compared to
training an INR from scratch, the learned initial parameters provide a strong
prior that enhances INR generalizability, allowing significantly faster
convergence with just a few gradient updates when adapting to a new volume and
better interpretability when analyzing the parameters of the adapted INRs. We
demonstrate that Meta-INR can effectively extract high-quality generalizable
features that help encode unseen similar volume data across diverse datasets.
Furthermore, we highlight its utility in tasks such as simulation parameter
analysis and representative timestep selection. The code is available at
https://github.com/spacefarers/MetaINR.
|
2502.09670
|
The Science of Evaluating Foundation Models
|
cs.CL cs.AI
|
The emergent phenomena of large foundation models have revolutionized natural
language processing. However, evaluating these models presents significant
challenges due to their size, capabilities, and deployment across diverse
applications. Existing literature often focuses on individual aspects, such as
benchmark performance or specific tasks, but fails to provide a cohesive
process that integrates the nuances of diverse use cases with broader ethical
and operational considerations. This work focuses on three key aspects: (1)
Formalizing the Evaluation Process by providing a structured framework tailored
to specific use-case contexts, (2) Offering Actionable Tools and Frameworks
such as checklists and templates to ensure thorough, reproducible, and
practical evaluations, and (3) Surveying Recent Work with a targeted review of
advancements in LLM evaluation, emphasizing real-world applications.
|
2502.09672
|
IMM-MOT: A Novel 3D Multi-object Tracking Framework with Interacting
Multiple Model Filter
|
cs.CV cs.RO
|
3D Multi-Object Tracking (MOT) provides the trajectories of surrounding
objects, assisting robots or vehicles in smarter path planning and obstacle
avoidance. Existing 3D MOT methods based on the Tracking-by-Detection framework
typically use a single motion model to track an object throughout its entire
tracking process. However, objects may change their motion patterns due to
variations in the surrounding environment. In this paper, we introduce the
Interacting Multiple Model filter in IMM-MOT, which accurately fits the complex
motion patterns of individual objects, overcoming the limitation of
single-model tracking in existing approaches. In addition, we incorporate a
Damping Window mechanism into the trajectory lifecycle management, leveraging
the continuous association status of trajectories to control their creation and
termination, reducing the occurrence of overlooked low-confidence true targets.
Furthermore, we propose the Distance-Based Score Enhancement module, which
enhances the differentiation between false positives and true positives by
adjusting detection scores, thereby improving the effectiveness of the Score
Filter. On the NuScenes Val dataset, IMM-MOT outperforms most other
single-modal models using 3D point clouds, achieving an AMOTA of 73.8%. Our
project is available at https://github.com/Ap01lo/IMM-MOT.
|
2502.09673
|
Are Smarter LLMs Safer? Exploring Safety-Reasoning Trade-offs in
Prompting and Fine-Tuning
|
cs.CL cs.AI
|
Large Language Models (LLMs) have demonstrated remarkable success across
various NLP benchmarks. However, excelling in complex tasks that require
nuanced reasoning and precise decision-making demands more than raw language
proficiency--LLMs must reason, i.e., think logically, draw from past
experiences, and synthesize information to reach conclusions and take action.
To enhance reasoning abilities, approaches such as prompting and fine-tuning
have been widely explored. While these methods have led to clear improvements
in reasoning, their impact on LLM safety remains less understood. In this work,
we investigate the interplay between reasoning and safety in LLMs. We highlight
the latent safety risks that arise as reasoning capabilities improve, shedding
light on previously overlooked vulnerabilities. At the same time, we explore
how reasoning itself can be leveraged to enhance safety, uncovering potential
mitigation strategies. By examining both the risks and opportunities in
reasoning-driven LLM safety, our study provides valuable insights for
developing models that are not only more capable but also more trustworthy in
real-world deployments.
|
2502.09674
|
The Hidden Dimensions of LLM Alignment: A Multi-Dimensional Safety
Analysis
|
cs.CL cs.AI
|
Large Language Models' safety-aligned behaviors, such as refusing harmful
queries, can be represented by linear directions in activation space. Previous
research modeled safety behavior with a single direction, limiting mechanistic
understanding to an isolated safety feature. In this work, we discover that
safety-aligned behavior is jointly controlled by multi-dimensional directions.
Namely, we study the vector space of representation shifts during safety
fine-tuning on Llama 3 8B for refusing jailbreaks. By studying orthogonal
directions in the space, we first find that a dominant direction governs the
model's refusal behavior, while multiple smaller directions represent distinct
and interpretable features like hypothetical narrative and role-playing. We
then measure how different directions promote or suppress the dominant
direction, showing the important role of secondary directions in shaping the
model's refusal representation. Finally, we demonstrate that removing certain
trigger tokens in harmful queries can mitigate these directions to bypass the
learned safety capability, providing new insights on understanding safety
alignment vulnerability from a multi-dimensional perspective. Code and
artifacts are available at https://github.com/BMPixel/safety-residual-space.
|
2502.09675
|
Multi-level Conflict-Aware Network for Multi-modal Sentiment Analysis
|
cs.CL cs.AI cs.LG
|
Multimodal Sentiment Analysis (MSA) aims to recognize human emotions by
exploiting textual, acoustic, and visual modalities, and thus how to make full
use of the interactions between different modalities is a central challenge of
MSA. Interaction contains alignment and conflict aspects. Current works mainly
emphasize alignment and the inherent differences between unimodal modalities,
neglecting the fact that there are also potential conflicts between bimodal
combinations. Additionally, multi-task learning-based conflict modeling methods
often rely on the unstable generated labels. To address these challenges, we
propose a novel multi-level conflict-aware network (MCAN) for multimodal
sentiment analysis, which progressively segregates alignment and conflict
constituents from unimodal and bimodal representations, and further exploits
the conflict constituents with the conflict modeling branch. In the conflict
modeling branch, we conduct discrepancy constraints at both the representation
and predicted output levels, avoiding dependence on the generated labels.
Experimental results on the CMU-MOSI and CMU-MOSEI datasets demonstrate the
effectiveness of the proposed MCAN.
|
2502.09680
|
Object-Centric Latent Action Learning
|
cs.CV cs.AI
|
Leveraging vast amounts of internet video data for Embodied AI is currently
bottle-necked by the lack of action annotations and the presence of
action-correlated distractors. We propose a novel object-centric latent action
learning approach, based on VideoSaur and LAPO, that employs self-supervised
decomposition of scenes into object representations and annotates video data
with proxy-action labels. This method effectively disentangles causal
agent-object interactions from irrelevant background noise and reduces the
performance degradation of latent action learning approaches caused by
distractors. Our preliminary experiments with the Distracting Control Suite
show that latent action pretraining based on object decompositions improves the
quality of inferred latent actions by 2.7x and the efficiency of downstream
fine-tuning with a small set of labeled actions, increasing return by 2.6x on
average.
|
2502.09682
|
Lifespan tree of brain anatomy: diagnostic values for motor and
cognitive neurodegenerative diseases
|
eess.IV cs.LG
|
The differential diagnosis of neurodegenerative diseases, characterized by
overlapping symptoms, may be challenging. Brain imaging coupled with artificial
intelligence has been previously proposed for diagnostic support, but most of
these methods have been trained to discriminate only isolated diseases from
controls. Here, we develop a novel machine learning framework, named lifespan
tree of brain anatomy, dedicated to the differential diagnosis between multiple
diseases simultaneously. It integrates the modeling of volume changes for 124
brain structures during the lifespan with non-linear dimensionality reduction
and synthetic sampling techniques to create easily interpretable
representations of brain anatomy over the course of disease progression. As
clinically relevant proof-of-concept applications, we constructed a cognitive
lifespan tree of brain anatomy for the differential diagnosis of six causes of
neurodegenerative dementia and a motor lifespan tree of brain anatomy for the
differential diagnosis of four causes of parkinsonism using 37,594 MRIs as a
training dataset. This original approach enhanced significantly the efficiency
of differential diagnosis in the external validation cohort of 1754 cases,
outperforming existing state-of-the-art machine learning techniques. The lifespan
tree holds promise as a valuable tool for differential diagnosis in relevant
clinical conditions, especially for diseases still lacking effective biological
markers.
|
2502.09683
|
Channel Dependence, Limited Lookback Windows, and the Simplicity of
Datasets: How Biased is Time Series Forecasting?
|
cs.LG
|
Time-series forecasting research has converged to a small set of datasets and
a standardized collection of evaluation scenarios. Such standardization is to
a certain extent needed for comparable research. However, the underlying
assumption is that the considered setting is representative of the problem
as a whole. In this paper, we challenge this assumption and show that the
current scenario gives a strongly biased perspective on the state of
time-series forecasting research. More specifically, we show that the current
evaluation scenario is heavily biased by the simplicity of the current
datasets. We furthermore emphasize that when the lookback window is properly
tuned, current models usually do not need any information flow across channels.
However, when using more complex benchmark data, the situation changes: Here,
modeling channel interactions in a sophisticated manner indeed enhances
performance. Furthermore, in this complex evaluation scenario, Crossformer, an
often-neglected but important baseline, is the SOTA method for
time series forecasting. Based on this, we present the Fast Channel-dependent
Transformer (FaCT), a simplified version of Crossformer which closes the
runtime gap between Crossformer and TimeMixer, leading to an efficient model
for complex forecasting datasets.
|
2502.09685
|
A Novel Hybrid Approach to Contraceptive Demand Forecasting: Integrating
Point Predictions with Probabilistic Distributions
|
cs.LG stat.AP stat.ME
|
Accurate demand forecasting is vital for ensuring reliable access to
contraceptive products, supporting key processes like procurement, inventory,
and distribution. However, forecasting contraceptive demand in developing
countries presents challenges, including incomplete data, poor data quality,
and the need to account for multiple geographical and product factors. Current
methods often rely on simple forecasting techniques, which fail to capture
demand uncertainties arising from these factors, warranting expert involvement.
Our study aims to improve contraceptive demand forecasting by combining
probabilistic forecasting methods with expert knowledge. We developed a hybrid
model that combines point forecasts from a domain-specific model with
probabilistic distributions from statistical and machine learning approaches,
enabling human input to fine-tune and enhance the system-generated forecasts.
This approach helps address the uncertainties in demand and is particularly
useful in resource-limited settings. We evaluate different forecasting methods,
including time series, Bayesian, machine learning, and foundational time series
methods alongside our new hybrid approach. By comparing these methods, we
provide insights into their strengths, weaknesses, and computational
requirements. Our research fills a gap in forecasting contraceptive demand and
offers a practical framework that combines algorithmic and human expertise. Our
proposed model can also be generalized to other humanitarian contexts with
similar data patterns.
|
2502.09686
|
Leveraging Machine Learning and Deep Learning Techniques for Improved
Pathological Staging of Prostate Cancer
|
cs.LG
|
Prostate cancer (PCa) continues to be a leading cause of cancer-related
mortality in men, and the limitations in precision of traditional diagnostic
methods such as the Digital Rectal Exam (DRE), Prostate-Specific Antigen (PSA)
testing, and biopsies underscore the critical importance of accurate staging
detection in enhancing treatment outcomes and improving patient prognosis. This
study leverages machine learning and deep learning approaches, along with
feature selection and extraction methods, to enhance PCa pathological staging
predictions using RNA sequencing data from The Cancer Genome Atlas (TCGA). Gene
expression profiles from 486 tumors were analyzed using advanced algorithms,
including Random Forest (RF), Logistic Regression (LR), Extreme Gradient
Boosting (XGB), and Support Vector Machine (SVM). The performance of the study
is measured with respect to the F1-score, as well as precision and recall, all
of which are calculated as weighted averages. The results reveal that the
highest test F1-score, approximately 83%, was achieved by the Random Forest
algorithm, followed by Logistic Regression at 80%, while both Extreme Gradient
Boosting (XGB) and Support Vector Machine (SVM) scored around 79%. Furthermore,
deep learning models with data augmentation achieved an accuracy of 71.23%,
while PCA-based dimensionality reduction reached an accuracy of 69.86%. This
research highlights the potential of AI-driven approaches in clinical oncology,
paving the way for more reliable diagnostic tools that can ultimately improve
patient outcomes.
|
2502.09687
|
Mind What You Ask For: Emotional and Rational Faces of Persuasion by
Large Language Models
|
cs.CL cs.AI cs.HC
|
Be careful what you ask for, you just might get it. This saying fits with the
way large language models (LLMs) are trained, which, instead of being rewarded
for correctness, are increasingly rewarded for pleasing the recipient. So, they
are increasingly effective at persuading us that their answers are valuable.
But what tricks do they use in this persuasion? In this study, we examine the
psycholinguistic features of the responses produced by twelve different
language models. By grouping response content according to rational or
emotional prompts and exploring social influence principles employed by LLMs,
we ask whether and how we can mitigate the risks of LLM-driven mass
misinformation. We position this study within the broader discourse on
human-centred AI, emphasizing the need for interdisciplinary approaches to
mitigate cognitive and societal risks posed by persuasive AI responses.
|
2502.09688
|
Towards Virtual Clinical Trials of Radiology AI with Conditional
Generative Modeling
|
cs.CV cs.AI cs.LG
|
Artificial intelligence (AI) is poised to transform healthcare by enabling
personalized and efficient care through data-driven insights. Although
radiology is at the forefront of AI adoption, in practice, the potential of AI
models is often overshadowed by severe failures to generalize: AI models can
have performance degradation of up to 20% when transitioning from controlled
test environments to clinical use by radiologists. This mismatch raises
concerns that radiologists will be misled by incorrect AI predictions in
practice and/or grow to distrust AI, rendering these promising technologies
practically ineffectual. Exhaustive clinical trials of AI models on abundant
and diverse data are thus critical to anticipate AI model degradation when
encountering varied data samples. Achieving these goals, however, is
challenging due to the high costs of collecting diverse data samples and
corresponding annotations. To overcome these limitations, we introduce a novel
conditional generative AI model designed for virtual clinical trials (VCTs) of
radiology AI, capable of realistically synthesizing full-body CT images of
patients with specified attributes. By learning the joint distribution of
images and anatomical structures, our model enables precise replication of
real-world patient populations with unprecedented detail at this scale. We
demonstrate meaningful evaluation of radiology AI models through VCTs powered
by our synthetic CT study populations, revealing model degradation and
facilitating algorithmic auditing for bias-inducing data attributes. Our
generative AI approach to VCTs is a promising avenue towards a scalable
solution to assess model robustness, mitigate biases, and safeguard patient
care by enabling simpler testing and evaluation of AI models in any desired
range of diverse patient populations.
|
2502.09689
|
Large Language Models and Provenance Metadata for Determining the
Relevance of Images and Videos in News Stories
|
cs.CL cs.CV cs.CY
|
The most effective misinformation campaigns are multimodal, often combining
text with images and videos taken out of context -- or fabricating them
entirely -- to support a given narrative. Contemporary methods for detecting
misinformation, whether in deepfakes or text articles, often miss the interplay
between multiple modalities. Built around a large language model, the system
proposed in this paper addresses these challenges. It analyzes both the
article's text and the provenance metadata of included images and videos to
determine whether they are relevant. We open-source the system prototype and
interactive web interface.
|
2502.09690
|
Trust at Your Own Peril: A Mixed Methods Exploration of the Ability of
Large Language Models to Generate Expert-Like Systems Engineering Artifacts
and a Characterization of Failure Modes
|
cs.CL cs.AI
|
Multi-purpose Large Language Models (LLMs), a subset of generative Artificial
Intelligence (AI), have recently made significant progress. While expectations
for LLMs to assist with systems engineering (SE) tasks are high, the
interdisciplinary and complex nature of systems, along with the need to
synthesize deep-domain knowledge and operational context, raise questions
regarding the efficacy of LLMs to generate SE artifacts, particularly given
that they are trained using data that is broadly available on the internet. To
that end, we present results from an empirical exploration, where a human
expert-generated SE artifact was taken as a benchmark, parsed, and fed into
various LLMs through prompt engineering to generate segments of typical SE
artifacts. This procedure was applied without any fine-tuning or calibration to
document baseline LLM performance. We then adopted a two-fold mixed-methods
approach to compare AI-generated artifacts against the benchmark. First, we
quantitatively compare the artifacts using natural language processing
algorithms and find that when prompted carefully, the state-of-the-art
algorithms cannot differentiate AI-generated artifacts from the human-expert
benchmark. Second, we conduct a qualitative deep dive to investigate how they
differ in terms of quality. We document that while the two materials appear very
similar, AI-generated artifacts exhibit serious failure modes that could be
difficult to detect. We characterize these as: premature requirements
definition, unsubstantiated numerical estimates, and propensity to overspecify.
We contend that this study tells a cautionary tale about why the SE community
must be more cautious in adopting AI-suggested feedback, at least when generated
by multi-purpose LLMs.
|
2502.09692
|
NeuralCFD: Deep Learning on High-Fidelity Automotive Aerodynamics
Simulations
|
cs.LG cs.AI
|
Recent advancements in neural operator learning are paving the way for
transformative innovations in fields such as automotive aerodynamics. However,
key challenges must be overcome before neural network-based simulation
surrogates can be implemented at an industry scale. First, surrogates must
become scalable to large surface and volume meshes, especially when using raw
geometry inputs only, i.e., without relying on the simulation mesh. Second,
surrogates must be trainable with a limited number of high-fidelity numerical
simulation samples while still reaching the required performance levels. To
this end, we introduce Geometry-preserving Universal Physics Transformer
(GP-UPT), which separates geometry encoding and physics predictions, ensuring
flexibility with respect to geometry representations and surface sampling
strategies. GP-UPT enables independent scaling of the respective parts of the
model according to practical requirements, offering scalable solutions to open
challenges. GP-UPT circumvents the creation of high-quality simulation meshes,
enables accurate 3D velocity field predictions at 20 million mesh cells, and
excels in transfer learning from low-fidelity to high-fidelity simulation
datasets, requiring less than half of the high-fidelity data to match the
performance of models trained from scratch.
|
2502.09695
|
Power System Electromagnetic Transient Stability: an Analysis Based on
Convergent Hamiltonian
|
eess.SY cs.SY
|
Transient stability is crucial to the reliable operation of power systems.
Existing theories rely on the simplified electromechanical models, substituting
the detailed electromagnetic dynamics of inductor and capacitor with their
impedance representations. However, this simplification is inadequate for the
growing penetration of fast-switching power electronic devices. Attempts to
extend the existing theories to include electromagnetic dynamics lead to overly
conservative stability conditions. To tackle this problem more directly, we
study the condition under which the power source and dissipation in the
electromagnetic dynamics tend to balance each other asymptotically. This is
equivalent to the convergence of the Hamiltonian (total stored energy) and can
be shown to imply transient stability. Using contraction analysis, we prove
that this property holds for a large class of time-varying port-Hamiltonian
systems with (i) constant damping matrix and (ii) strictly convex Hamiltonian.
Then through port-Hamiltonian modeling of the electromagnetic dynamics, we
obtain that the synchronized steady state of the power system is globally
stable if it exists. This result provides new insights into the reliable
operation of power systems. The proposed theory is illustrated in the
simulation results of a two-machine system.
|
2502.09696
|
ZeroBench: An Impossible Visual Benchmark for Contemporary Large
Multimodal Models
|
cs.CV
|
Large Multimodal Models (LMMs) exhibit major shortfalls when interpreting
images and, by some measures, have poorer spatial cognition than small children
or animals. Despite this, they attain high scores on many popular visual
benchmarks, with headroom rapidly eroded by an ongoing surge of model progress.
To address this, there is a pressing need for difficult benchmarks that remain
relevant for longer. We take this idea to its limit by introducing ZeroBench, a
lightweight visual reasoning benchmark that is entirely impossible for
contemporary frontier LMMs. Our benchmark consists of 100 manually curated
questions and 334 less difficult subquestions. We evaluate 20 LMMs on
ZeroBench, all of which score 0.0%, and rigorously analyse the errors. To
encourage progress in visual understanding, we publicly release ZeroBench.
|
2502.09704
|
Iterative quantum optimisation with a warm-started quantum state
|
quant-ph cond-mat.dis-nn cs.LG math.OC physics.comp-ph
|
We provide a method to prepare a warm-started quantum state from measurements
with an iterative framework to enhance the quantum approximate optimisation
algorithm (QAOA). The numerical simulations show that the method can effectively
address the "stuck issue" of the standard QAOA using a single-string
warm-started initial state described in [Cain et al., 2023]. When applied to
the $3$-regular MaxCut problem, our approach achieves an improved approximation
ratio, with a lower bound that iteratively converges toward the best classical
algorithms for $p=1$ standard QAOA. Additionally, in the context of the
discrete global minimal variance portfolio (DGMVP) model, simulations reveal a
more favourable scaling in identifying the global minimum compared to standalone
QAOA, the single-string warm-started QAOA, and a classical constrained sampling
approach.
|
2502.09715
|
Evaluating GPT's Capability in Identifying Stages of Cognitive
Impairment from Electronic Health Data
|
cs.LG cs.AI cs.CL
|
Identifying cognitive impairment within electronic health records (EHRs) is
crucial not only for timely diagnoses but also for facilitating research.
Information about cognitive impairment often exists within unstructured
clinician notes in EHRs, but manual chart reviews are both time-consuming and
error-prone. To address this issue, our study evaluates an automated approach
using zero-shot GPT-4o to determine the stage of cognitive impairment in two
different tasks. First, we evaluated the ability of GPT-4o to determine the
global Clinical Dementia Rating (CDR) on specialist notes from 769 patients who
visited the memory clinic at Massachusetts General Hospital (MGH), and achieved
a weighted kappa score of 0.83. Second, we assessed GPT-4o's ability to
differentiate between normal cognition, mild cognitive impairment (MCI), and
dementia on all notes in a 3-year window from 860 Medicare patients. GPT-4o
attained a weighted kappa score of 0.91 in comparison to specialist chart
reviews and 0.96 on cases that the clinical adjudicators rated with high
confidence. Our findings demonstrate GPT-4o's potential as a scalable chart
review tool for creating research datasets and assisting diagnosis in clinical
settings in the future.
|
2502.09717
|
Carbon- and Precedence-Aware Scheduling for Data Processing Clusters
|
cs.DC cs.CY cs.SY eess.SY
|
As large-scale data processing workloads continue to grow, their carbon
footprint raises concerns. Prior research on carbon-aware schedulers has
focused on shifting computation to align with the availability of low-carbon
energy, but these approaches assume that each task can be executed
independently. In contrast, data processing jobs have precedence constraints
(i.e., outputs of one task are inputs for another) that complicate decisions,
since delaying an upstream ``bottleneck'' task to a low-carbon period will also
block downstream tasks, impacting the entire job's completion time. In this
paper, we show that carbon-aware scheduling for data processing benefits from
knowledge of both time-varying carbon and precedence constraints. Our main
contribution is $\texttt{PCAPS}$, a carbon-aware scheduler that interfaces with
modern ML scheduling policies to explicitly consider the precedence-driven
importance of each task in addition to carbon. To illustrate the gains due to
fine-grained task information, we also study $\texttt{CAP}$, a wrapper for any
carbon-agnostic scheduler that adapts the key provisioning ideas of
$\texttt{PCAPS}$. Our schedulers enable a configurable priority between carbon
reduction and job completion time, and we give analytical results
characterizing the trade-off between the two. Furthermore, our Spark prototype
on a 100-node Kubernetes cluster shows that a moderate configuration of
$\texttt{PCAPS}$ reduces carbon footprint by up to 32.9% without significantly
impacting the cluster's total efficiency.
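The precedence effect described above can be made concrete with a toy two-task example (not the paper's $\texttt{PCAPS}$ algorithm; the carbon-intensity numbers and one-slot task durations are hypothetical): shifting an upstream task to a low-carbon slot drags every downstream task with it.

```python
# Toy illustration of precedence-constrained carbon-aware scheduling.
# Hypothetical per-slot carbon intensities; each task takes one slot.
carbon = [10, 9, 2, 2]

def schedule(start_of_a):
    """Task B depends on Task A, so B can only start after A finishes."""
    a_slot = start_of_a
    b_slot = a_slot + 1                      # precedence constraint
    footprint = carbon[a_slot] + carbon[b_slot]
    completion_time = b_slot + 1
    return footprint, completion_time

print(schedule(0))  # (19, 2): run immediately -> high carbon, fast finish
print(schedule(2))  # (4, 4):  shift A to low-carbon slots -> delays B too
```

The trade-off a carbon-aware scheduler must weigh is exactly this: the second schedule cuts carbon sharply but doubles the job's completion time.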
|
2502.09720
|
NestQuant: Nested Lattice Quantization for Matrix Products and LLMs
|
cs.LG cs.AI cs.IT math.IT
|
Post-training quantization (PTQ) has emerged as a critical technique for
efficient deployment of large language models (LLMs). This work proposes
NestQuant, a novel PTQ scheme for weights and activations that is based on
self-similar nested lattices. Recent work has mathematically shown such
quantizers to be information-theoretically optimal for low-precision matrix
multiplication. We implement a practical low-complexity version of NestQuant
based on the Gosset lattice, making it a drop-in quantizer for any matrix
multiplication step (e.g., in self-attention, MLPs, etc.). For example,
NestQuant quantizes weights, KV-cache, and activations of Llama-3-8B to 4 bits,
achieving a perplexity of 6.6 on wikitext2. This represents a more than 55%
reduction in the perplexity gap relative to the unquantized model (perplexity
of 6.14) compared to the state-of-the-art Meta's SpinQuant (perplexity 7.3).
Comparisons on various
LLM evaluation benchmarks also show a reduction in performance degradation
induced by quantization.
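The quoted "more than 55%" follows directly from the three perplexity numbers in the abstract; a quick check:

```python
# Perplexity gap = method's perplexity minus the unquantized baseline's.
baseline, nestquant, spinquant = 6.14, 6.6, 7.3
gap_reduction = 1 - (nestquant - baseline) / (spinquant - baseline)
print(f"gap reduction: {gap_reduction:.1%}")  # ~60.3%, i.e. "more than 55%"
```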
|
2502.09723
|
Making Them a Malicious Database: Exploiting Query Code to Jailbreak
Aligned Large Language Models
|
cs.CR cs.AI cs.CL
|
Recent advances in large language models (LLMs) have demonstrated remarkable
potential in the field of natural language processing. Unfortunately, LLMs face
significant security and ethical risks. Although techniques such as safety
alignment have been developed for defense, prior research reveals the possibility of
bypassing such defenses through well-designed jailbreak attacks. In this paper,
we propose QueryAttack, a novel framework to examine the generalizability of
safety alignment. By treating LLMs as knowledge databases, we translate
malicious queries in natural language into structured non-natural query
language to bypass the safety alignment mechanisms of LLMs. We conduct
extensive experiments on mainstream LLMs, and the results show that QueryAttack
not only can achieve high attack success rates (ASRs), but also can jailbreak
various defense methods. Furthermore, we tailor a defense method against
QueryAttack, which can reduce ASR by up to 64% on GPT-4-1106. Our code is
available at https://github.com/horizonsinzqs/QueryAttack.
|
2502.09724
|
Navigating the Social Welfare Frontier: Portfolios for Multi-objective
Reinforcement Learning
|
cs.LG
|
In many real-world applications of reinforcement learning (RL), deployed
policies have varied impacts on different stakeholders, creating challenges in
reaching consensus on how to effectively aggregate their preferences.
Generalized $p$-means form a widely used class of social welfare functions for
this purpose, with broad applications in fair resource allocation, AI
alignment, and decision-making. This class includes well-known welfare
functions such as Egalitarian, Nash, and Utilitarian welfare. However,
selecting the appropriate social welfare function is challenging for
decision-makers, as the structure and outcomes of optimal policies can be
highly sensitive to the choice of $p$. To address this challenge, we study the
concept of an $\alpha$-approximate portfolio in RL, a set of policies that are
approximately optimal across the family of generalized $p$-means for all $p \in
[-\infty, 1]$. We propose algorithms to compute such portfolios and provide
theoretical guarantees on the trade-offs among approximation factor, portfolio
size, and computational efficiency. Experimental results on synthetic and
real-world datasets demonstrate the effectiveness of our approach in
summarizing the policy space induced by varying $p$ values, empowering
decision-makers to navigate this landscape more effectively.
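The generalized $p$-means named above have a standard closed form, with the Egalitarian, Nash, and Utilitarian cases arising at $p = -\infty$, $p \to 0$, and $p = 1$ respectively; a minimal sketch over per-stakeholder utilities (assumed positive):

```python
import math

def p_mean(values, p):
    """Generalized p-mean welfare of positive per-stakeholder utilities."""
    if p == float("-inf"):       # Egalitarian: utility of the worst-off
        return min(values)
    if p == 0:                   # Nash: geometric mean (limit as p -> 0)
        return math.exp(sum(math.log(v) for v in values) / len(values))
    return (sum(v ** p for v in values) / len(values)) ** (1.0 / p)

utils = [1.0, 4.0]
print(p_mean(utils, float("-inf")))  # 1.0 (Egalitarian)
print(p_mean(utils, 0))              # ~2.0 (Nash)
print(p_mean(utils, 1))              # 2.5 (Utilitarian)
```

The sensitivity to $p$ is visible even here: the same utility profile is scored 1.0, 2.0, or 2.5 depending on the welfare function, which is the selection difficulty the portfolio approach addresses.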
|
2502.09728
|
Perch like a bird: bio-inspired optimal maneuvers and nonlinear control
for Flapping-Wing Unmanned Aerial Vehicles
|
eess.SY cs.RO cs.SY math.OC
|
This research endeavors to design the perching maneuver and control in
ornithopter robots. By analyzing the dynamic interplay between the robot's
flight dynamics, feedback loops, and the environmental constraints, we aim to
advance our understanding of the perching maneuver, drawing parallels to
biological systems. Inspired by the elegant control strategies observed in
avian flight, we develop an optimal maneuver and a corresponding controller to
achieve stable perching. The maneuver consists of a deceleration and a rapid
pitch-up (vertical turn), which arises from analytically solving the
optimization problem of minimal velocity at perch, subject to kinematic and
dynamic constraints. The controller for the flapping frequency and tail
symmetric deflection is nonlinear and adaptive, ensuring robustly stable
perching. Indeed, such adaptive behavior in a sense incorporates homeostatic
principles of cybernetics into the control system, enhancing the robot's
ability to adapt to unexpected disturbances and maintain a stable posture
during the perching maneuver. The resulting autonomous perching maneuvers --
closed-loop descent and turn -- have been verified and validated,
demonstrating excellent agreement with real bird perching trajectories reported
in the literature. These findings lay the theoretical groundwork for the
development of future prototypes that better imitate the skillful perching
maneuvers of birds.
|
2502.09731
|
A CNN Approach to Automated Detection and Classification of Brain Tumors
|
cs.CV cs.AI
|
Brain tumors require careful assessment to ensure timely diagnosis and effective
patient treatment. Morphological factors such as size, location, texture, and
variable appearance complicate tumor inspection. Medical imaging presents
challenges, including noise and incomplete images. This research article
presents a methodology for processing Magnetic Resonance Imaging (MRI) data,
encompassing techniques for image classification and denoising. The effective
use of MRI images allows medical professionals to detect brain disorders,
including tumors. This research aims to categorize healthy brain tissue and
brain tumors by analyzing the provided MRI data. Unlike alternative methods
like Computed Tomography (CT), MRI technology offers a more detailed
representation of internal anatomical components, making it a suitable option
for studying data related to brain tumors. The MRI picture is first subjected
to a denoising technique utilizing an Anisotropic diffusion filter. The dataset
utilized for the model's creation is a publicly accessible and validated Brain
Tumour Classification (MRI) database, comprising 3,264 brain MRI scans. SMOTE
was employed for data augmentation and dataset balancing. Convolutional Neural
Networks (CNN) such as ResNet152V2, VGG, ViT, and EfficientNet were employed for
the classification procedure. EfficientNet attained an accuracy of 98%, the
highest recorded.
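The anisotropic diffusion filter mentioned above is the classic Perona-Malik scheme: diffusion is suppressed across large intensity jumps (edges) but allowed over small ones (noise). A 1-D sketch for intuition only; the paper applies a 2-D filter to MRI slices, and the parameters here are illustrative:

```python
import math
import random

def anisotropic_diffusion_1d(signal, n_iter=20, kappa=20.0, gamma=0.2):
    """Perona-Malik diffusion on a 1-D signal: smooth noise, keep edges."""
    s = list(signal)
    for _ in range(n_iter):
        out = s[:]
        for i in range(1, len(s) - 1):
            d_left = s[i - 1] - s[i]
            d_right = s[i + 1] - s[i]
            # Edge-stopping function: conduction ~0 across large jumps.
            c_left = math.exp(-(d_left / kappa) ** 2)
            c_right = math.exp(-(d_right / kappa) ** 2)
            out[i] = s[i] + gamma * (c_left * d_left + c_right * d_right)
        s = out
    return s

random.seed(0)
step = [0.0] * 32 + [100.0] * 32                 # a sharp "edge"
noisy = [v + random.gauss(0, 3) for v in step]   # additive noise
smoothed = anisotropic_diffusion_1d(noisy)       # noise gone, edge kept
```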
|
2502.09741
|
FoNE: Precise Single-Token Number Embeddings via Fourier Features
|
cs.CL cs.LG
|
Large Language Models (LLMs) typically represent numbers using multiple
tokens, which requires the model to aggregate these tokens to interpret
numerical values. This fragmentation makes both training and inference less
efficient and adversely affects the model's performance on number-related
tasks. Inspired by the observation that pre-trained LLMs internally learn
Fourier-like features for number tokens, we propose Fourier Number Embedding
(FoNE), a novel method that directly maps numbers into the embedding space with
their Fourier features. FoNE encodes each number as a single token with only
two embedding dimensions per digit, effectively capturing numerical values
without fragmentation. This compact representation accelerates both training
and inference. Compared to traditional subword and digit-wise embeddings, FoNE
not only reduces computational overhead but also achieves higher accuracy
across various numerical tasks including addition, subtraction and
multiplication. On 6-digit decimal addition, FoNE requires 64$\times$ less data
to achieve 99% accuracy than subword and digit-wise embeddings while using
3$\times$ and 6$\times$ fewer tokens per number, respectively. Furthermore,
FoNE is the only method that yields 100% accuracy on over 100,000 test examples
for addition, subtraction, and multiplication. The codes and visualization are
available at https://fouriernumber.github.io/.
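The "two embedding dimensions per digit" idea can be sketched directly: assign digit place $k$ the pair $(\cos 2\pi x/10^k, \sin 2\pi x/10^k)$, so each pair encodes $x \bmod 10^k$ as a phase. This is a minimal reconstruction from the abstract, not the paper's exact parameterization:

```python
import math

def fone_embed(x, num_digits=6):
    """Two Fourier features per digit place (sketch of the FoNE idea)."""
    feats = []
    for k in range(1, num_digits + 1):
        period = 10 ** k
        feats.append(math.cos(2 * math.pi * x / period))
        feats.append(math.sin(2 * math.pi * x / period))
    return feats

def fone_decode(feats):
    """Read x back from the highest-period pair (valid for x < 10^num_digits)."""
    c, s = feats[-2], feats[-1]
    phase = math.atan2(s, c) % (2 * math.pi)
    period = 10 ** (len(feats) // 2)
    return round(phase / (2 * math.pi) * period) % period

print(fone_decode(fone_embed(123456)))  # 123456
```

The round trip shows why the representation is lossless for integers in range: the phase at period $10^k$ determines $x \bmod 10^k$ exactly, so no multi-token fragmentation is needed.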
|
2502.09743
|
Partial Colexifications Improve Concept Embeddings
|
cs.CL
|
While the embedding of words has revolutionized the field of Natural Language
Processing, the embedding of concepts has received much less attention so far.
A dense and meaningful representation of concepts, however, could prove useful
for several tasks in computational linguistics, especially those involving
cross-linguistic data or sparse data from low-resource languages. The first
methods proposed so far embed concepts from automatically constructed
colexification networks. While these approaches build on automatically
inferred polysemies attested across a large number of languages, they are
restricted to the word level, ignoring lexical relations that would only hold
for parts of the words in a given language. Building on recently introduced
methods for the inference of partial colexifications, we show how they can be
used to improve concept embeddings in meaningful ways. The learned embeddings
are evaluated against lexical similarity ratings, recorded instances of
semantic shift, and word association data. We show that in all evaluation
tasks, the inclusion of partial colexifications leads to improved concept
representations and better results. Our results further show that the learned
embeddings are able to capture and represent different semantic relationships
between concepts.
|
2502.09744
|
Fine-Tuning Foundation Models with Federated Learning for Privacy
Preserving Medical Time Series Forecasting
|
cs.LG cs.CR
|
Federated Learning (FL) provides a decentralized machine learning approach,
where multiple devices or servers collaboratively train a model without sharing
their raw data, thus enabling data privacy. This approach has gained
significant interest in academia and industry due to its privacy-preserving
properties, which are particularly valuable in the medical domain where data
availability is often protected under strict regulations. A relatively
unexplored area is the use of FL to fine-tune Foundation Models (FMs) for time
series forecasting, potentially enhancing model efficacy by overcoming data
limitations while maintaining privacy. In this paper, we fine-tuned time series
FMs with Electrocardiogram (ECG) and Impedance Cardiography (ICG) data using
different FL techniques. We then examined various scenarios and discussed the
challenges FL faces under different data heterogeneity configurations. Our
empirical results demonstrated that while FL can be effective for fine-tuning
FMs on time series forecasting tasks, its benefits depend on the data
distribution across clients. We highlighted the trade-offs in applying FL to FM
fine-tuning.
|
2502.09747
|
The Widespread Adoption of Large Language Model-Assisted Writing Across
Society
|
cs.CL
|
The recent advances in large language models (LLMs) have attracted significant
public and policymaker interest in their adoption patterns. In this paper, we
systematically analyze LLM-assisted writing across four domains-consumer
complaints, corporate communications, job postings, and international
organization press releases-from January 2022 to September 2024. Our dataset
includes 687,241 consumer complaints, 537,413 corporate press releases, 304.3
million job postings, and 15,919 United Nations (UN) press releases. Using a
robust population-level statistical framework, we find that LLM usage surged
following the release of ChatGPT in November 2022. By late 2024, roughly 18% of
financial consumer complaint text appears to be LLM-assisted, with adoption
patterns spread broadly across regions and slightly higher in urban areas. For
corporate press releases, up to 24% of the text is attributable to LLMs. In job
postings, LLM-assisted writing accounts for just below 10% in small firms, and
is even more common among younger firms. UN press releases also reflect this
trend, with nearly 14% of content being generated or modified by LLMs. Although
adoption climbed rapidly post-ChatGPT, growth appears to have stabilized by
2024, reflecting either saturation in LLM adoption or increasing subtlety of
more advanced models. Our study shows the emergence of a new reality in which
firms, consumers and even international organizations substantially rely on
generative AI for communications.
|
2502.09748
|
Contracting Strategies for Electrolyzers to Secure Grid Connection: The
Dutch Case
|
math.OC cs.SY eess.SY
|
In response to increasing grid congestion in the Netherlands, non-firm
connection and transport agreements (CTAs) and capacity restriction contracts
(CRCs) have been introduced, allowing consumer curtailment in exchange for grid
tariff discounts or per-MW compensations. This study examines the interaction
between an electrolyzer project, facing sizing and contracting decisions, and a
network operator, responsible for contract activations and determining grid
connection capacity, under the new Dutch regulations. The interaction is
modeled using two bilevel optimization problems with alternating
leader-follower roles. Results highlight a trade-off between CRC income and
non-firm CTA tariff discounts, showing that voluntary congestion management by
the network operator increases electrolyzer profitability at CRC prices below
10 euro per MW but reduces it at higher prices. Furthermore, the network
operator benefits more from reacting to the electrolyzer owner's CTA decisions
than from leading the interaction at CRC prices above 10 euro per MW. Ignoring
the other party's optimization problem overestimates profits for both the
network operator and the electrolyzer owner, emphasizing the importance of
coordinated decision-making.
|
2502.09749
|
Vote-Tree-Planner: Optimizing Execution Order in LLM-based Task Planning
Pipeline via Voting
|
cs.RO cs.AI
|
Integrating large language models (LLMs) into closed-loop robotic task
planning has become increasingly popular within embodied artificial
intelligence. Previous efforts mainly focused on leveraging the strong
reasoning abilities of LLMs to enhance task planning performance while often
overlooking task planning efficiency and executability due to repetitive
queries to LLMs. This paper addresses the synergy between LLMs and task
planning systems, aiming to minimize redundancy while enhancing planning
effectiveness. Specifically, building upon Prog-Prompt and the high-level
concept of Tree-Planner, we propose Vote-Tree-Planner. This sampling strategy
utilizes votes to guide plan traversal during the decision-making process. Our
approach is motivated by a straightforward observation: assigning weights to
agents during decision-making enables the evaluation of critical paths before
execution. With this simple vote-tree construction, our method further improves
the success rate and reduces the number of queries to LLMs. The experimental
results highlight that our Vote-Tree-Planner demonstrates greater stability and
shows a higher average success rate and goal condition recall on the unseen
dataset compared with previous baseline methods. These findings underscore the
potential of the Vote-Tree-Planner to enhance planning accuracy, reliability,
and efficiency in LLM-based planning systems.
|
2502.09755
|
Enhancing Jailbreak Attacks via Compliance-Refusal-Based Initialization
|
cs.CR cs.LG
|
Jailbreak attacks aim to exploit large language models (LLMs) and pose a
significant threat to their proper conduct; they seek to bypass models'
safeguards and often provoke transgressive behaviors. However, existing
automatic jailbreak attacks require extensive computational resources and are
prone to converge on suboptimal solutions. In this work, we propose
\textbf{C}ompliance \textbf{R}efusal \textbf{I}nitialization (CRI), a novel,
attack-agnostic framework that efficiently initializes the optimization in the
proximity of the compliance subspace of harmful prompts. By narrowing the
initial gap to the adversarial objective, CRI substantially improves
adversarial success rates (ASR) and drastically reduces computational overhead
-- often requiring just a single optimization step. We evaluate CRI on the
widely-used AdvBench dataset over the standard jailbreak attacks of GCG and
AutoDAN. Results show that CRI boosts ASR and decreases the median steps to
success by up to \textbf{\(\times 60\)}. The project page, along with the
reference implementation, is publicly available at
\texttt{https://amit1221levi.github.io/CRI-Jailbreak-Init-LLMs-evaluation/}.
|
2502.09757
|
The AI-Therapist Duo: Exploring the Potential of Human-AI Collaboration
in Personalized Art Therapy for PICS Intervention
|
cs.HC cs.AI
|
Post-intensive care syndrome (PICS) is a multifaceted condition that arises
from prolonged stays in an intensive care unit (ICU). While preventing PICS
among ICU patients is becoming increasingly important, interventions remain
limited. Building on evidence supporting the effectiveness of art exposure in
addressing the psychological aspects of PICS, we propose a novel art therapy
solution through a collaborative Human-AI approach that enhances personalized
therapeutic interventions using state-of-the-art Visual Art Recommendation
Systems. We developed two Human-in-the-Loop (HITL) personalization methods and
assessed their impact through a large-scale user study (N=150). Our findings
demonstrate that this Human-AI collaboration not only enhances the
personalization and effectiveness of art therapy but also supports therapists
by streamlining their workload. While our study centres on PICS intervention,
the results suggest that human-AI collaborative art therapy could potentially
benefit other areas where emotional support is critical, such as cases of
anxiety and depression.
|
2502.09762
|
Adaptive Teaming in Multi-Drone Pursuit: Simulation, Training, and
Deployment
|
cs.RO cs.AI
|
Adaptive teaming, the ability to collaborate with unseen teammates without
prior coordination, remains an underexplored challenge in multi-robot
collaboration. This paper focuses on adaptive teaming in multi-drone
cooperative pursuit, a critical task with real-world applications such as
border surveillance, search-and-rescue, and counter-terrorism. We first define
and formalize the \textbf{A}daptive Teaming in \textbf{M}ulti-\textbf{D}rone
\textbf{P}ursuit (AT-MDP) problem and introduce the AT-MDP framework, which
comprehensively integrates simulation, algorithm training, and real-world
deployment. The AT-MDP framework provides a flexible experiment
configurator and interface for simulation, a distributed training framework
with an extensive algorithm zoo (including two newly proposed baseline methods)
and an unseen drone zoo for evaluating adaptive teaming, as well as a
real-world deployment system that utilizes edge computing and Crazyflie drones.
To the best of our knowledge, the AT-MDP framework is the first adaptive framework
for continuous-action decision-making in complex real-world drone tasks,
enabling multiple drones to coordinate effectively with unseen teammates.
Extensive experiments in four multi-drone pursuit environments of increasing
difficulty confirm the effectiveness of the AT-MDP framework, while real-world
deployments further validate its feasibility in physical systems. Videos and
code are available at https://sites.google.com/view/at-mdp.
|
2502.09765
|
Differential Adjusted Parity for Learning Fair Representations
|
cs.LG cs.AI
|
The development of fair and unbiased machine learning models remains an
ongoing objective for researchers in the field of artificial intelligence. We
introduce the Differential Adjusted Parity (DAP) loss to produce unbiased
informative representations. It utilises a differentiable variant of the
adjusted parity metric to create a unified objective function. By combining
downstream task classification accuracy and its inconsistency across sensitive
feature domains, it provides a single tool to increase performance and mitigate
bias. A key element in this approach is the use of soft balanced accuracies. In
contrast to previous non-adversarial approaches, DAP does not suffer a
degeneracy where the metric is satisfied by performing equally poorly across
all sensitive domains. It outperforms several adversarial models on downstream
task accuracy and fairness in our analysis. Specifically, it improves the
demographic parity, equalized odds and sensitive feature accuracy by as much as
22.5\%, 44.1\% and 40.1\%, respectively, when compared to the best performing
adversarial approaches on these metrics. Overall, the DAP loss and its
associated metric can play a significant role in creating more fair machine
learning models.
|
2502.09767
|
Non-Markovian Discrete Diffusion with Causal Language Models
|
cs.LG cs.AI cs.CL
|
Discrete diffusion models have emerged as a flexible and controllable
paradigm for structured sequence modeling, yet they still lag behind causal
language models in expressiveness. To bridge the gap between the two paradigms, we
introduce CaDDi, a causal discrete diffusion model that unifies sequential and
temporal modeling within a non-Markovian diffusion framework. Unlike
conventional diffusion models that operate step by step with no access to prior
states, CaDDi integrates the temporal trajectory, enabling more expressive and
controllable generation. Our approach also treats causal language models as a
special case, allowing seamless adoption of pretrained large language models
(LLMs) for discrete diffusion without the need for architectural modifications.
Empirically, we demonstrate that CaDDi outperforms state-of-the-art discrete
diffusion models on both natural language and biological sequence tasks,
narrowing the gap between diffusion-based methods and large-scale
autoregressive transformers.
|
2502.09768
|
Complex Network Modelling with Power-law Activating Patterns and Its
Evolutionary Dynamics
|
cs.SI physics.soc-ph
|
Complex network theory provides a unifying framework for the study of
structured dynamic systems. The current literature emphasizes a widely reported
phenomenon of intermittent interaction among network vertices. In this paper,
we introduce a complex network model that considers the stochastic switching of
individuals between activated and quiescent states at power-law rates and the
corresponding evolutionary dynamics. By using the Markov chain and renewal
theory, we discover a homogeneous stationary distribution of activated sizes in
the network with power-law activating patterns and infer some statistical
characteristics. To better understand the effect of power-law activating
patterns, we study the two-person-two-strategy evolutionary game dynamics,
demonstrate the absorbability of strategies, and obtain the critical
cooperation conditions for prisoner's dilemmas in homogeneous networks without
mutation. The evolutionary dynamics in real networks are also discussed. Our
results provide a new perspective to analyze and understand social physics in
time-evolving network systems.
|
2502.09775
|
CellFlow: Simulating Cellular Morphology Changes via Flow Matching
|
q-bio.QM cs.CV cs.LG q-bio.BM q-bio.CB
|
Building a virtual cell capable of accurately simulating cellular behaviors
in silico has long been a dream in computational biology. We introduce
CellFlow, an image-generative model that simulates cellular morphology changes
induced by chemical and genetic perturbations using flow matching. Unlike prior
methods, CellFlow models distribution-wise transformations from unperturbed to
perturbed cell states, effectively distinguishing actual perturbation effects
from experimental artifacts such as batch effects -- a major challenge in
biological data. Evaluated on chemical (BBBC021), genetic (RxRx1), and combined
perturbation (JUMP) datasets, CellFlow generates biologically meaningful cell
images that faithfully capture perturbation-specific morphological changes,
achieving a 35% improvement in FID scores and a 12% increase in mode-of-action
prediction accuracy over existing methods. Additionally, CellFlow enables
continuous interpolation between cellular states, providing a potential tool
for studying perturbation dynamics. These capabilities mark a significant step
toward realizing virtual cell modeling for biomedical research.
|
2502.09777
|
On the existence of EFX allocations in multigraphs
|
cs.GT cs.AI
|
We study the problem of "fairly" dividing indivisible goods among several agents
that have valuation set functions over the sets of goods. As fair we consider
the allocations that are envy-free up to any good (EFX), i.e., no agent envies
any proper subset of the goods given to any other agent. The existence or not
of EFX allocations is a major open problem in Fair Division, and there are only
positive results for special cases.
[George Christodoulou, Amos Fiat, Elias Koutsoupias, Alkmini Sgouritsa 2023]
introduced a restriction on the agents' valuations according to a graph
structure: the vertices correspond to agents and the edges to goods, and each
vertex/agent has zero marginal value (or in other words, they are indifferent)
for the edges/goods that are not adjacent to them. The existence of EFX
allocations has been shown for simple graphs with general monotone valuations
[George Christodoulou, Amos Fiat, Elias Koutsoupias, Alkmini Sgouritsa 2023],
and for multigraphs for restricted additive valuations [Alireza Kaviani, Masoud
Seddighin, Amir Mohammad Shahrezaei 2024].
In this work, we push the state of the art further and show that EFX
allocations always exist in multigraphs with general monotone valuations if any
of the following three conditions holds: either (a) the multigraph is bipartite,
or (b) each agent has at most $\lceil \frac{n}{4} \rceil -1$ neighbors, where
$n$ is the total number of agents, or (c) the shortest cycle with non-parallel
edges has length at least 6.
|
2502.09778
|
Prompt and circumstance: A word-by-word LLM prompting approach to
interlinear glossing for low-resource languages
|
cs.CL
|
Partly automated creation of interlinear glossed text (IGT) has the potential
to assist in linguistic documentation. We argue that LLMs can make this process
more accessible to linguists because of their capacity to follow
natural-language instructions. We investigate the effectiveness of a
retrieval-based LLM prompting approach to glossing, applied to the seven
languages from the SIGMORPHON 2023 shared task. Our system beats the BERT-based
shared task baseline for every language in the morpheme-level score category,
and we show that a simple 3-best oracle has higher word-level scores than the
challenge winner (a tuned sequence model) in five languages. In a case study on
Tsez, we ask the LLM to automatically create and follow linguistic
instructions, reducing errors on a confusing grammatical feature. Our results
thus demonstrate the potential contributions which LLMs can make in interactive
systems for glossing, both in making suggestions to human annotators and
following directions.
|
2502.09779
|
Automated Muscle and Fat Segmentation in Computed Tomography for
Comprehensive Body Composition Analysis
|
eess.IV cs.CV
|
Body composition assessment using CT images can potentially be used for a
number of clinical applications, including the prognostication of
cardiovascular outcomes, evaluation of metabolic health, monitoring of disease
progression, assessment of nutritional status, prediction of treatment response
in oncology, and risk stratification for surgical and critical care outcomes.
While multiple groups have developed in-house segmentation tools for this
analysis, there are very limited publicly available tools that could be
consistently used across different applications. To mitigate this gap, we
present a publicly accessible, end-to-end segmentation and feature calculation
model specifically for CT body composition analysis. Our model performs
segmentation of skeletal muscle, subcutaneous adipose tissue (SAT), and
visceral adipose tissue (VAT) across the chest, abdomen, and pelvis area in
axial CT images. It also provides various body composition metrics, including
muscle density, visceral-to-subcutaneous fat (VAT/SAT) ratio, muscle
area/volume, and skeletal muscle index (SMI), supporting both 2D and 3D
assessments. The model is shared for public use. To evaluate the model, the
segmentation was applied to both internal and external datasets, with body
composition metrics analyzed across different age, sex, and race groups. The
model achieved high dice coefficients on both internal and external datasets,
exceeding 89% for skeletal muscle, SAT, and VAT segmentation. The model
outperforms the benchmark by 2.40% on skeletal muscle and 10.26% on SAT
compared to the manual annotations provided with the publicly available dataset.
Body composition metrics show mean relative absolute errors (MRAEs) under 10%
for all measures. Furthermore, the model provided muscular fat segmentation
with a Dice coefficient of 56.27%, which can be utilized for additional
analyses as needed.
|
2502.09780
|
Incentivize without Bonus: Provably Efficient Model-based Online
Multi-agent RL for Markov Games
|
cs.LG cs.AI cs.GT math.OC
|
Multi-agent reinforcement learning (MARL) lies at the heart of a plethora of
applications involving the interaction of a group of agents in a shared unknown
environment. A prominent framework for studying MARL is Markov games, with the
goal of finding various notions of equilibria in a sample-efficient manner,
such as the Nash equilibrium (NE) and the coarse correlated equilibrium (CCE).
However, existing sample-efficient approaches either require tailored
uncertainty estimation under function approximation, or careful coordination of
the players. In this paper, we propose a novel model-based algorithm, called
VMG, that incentivizes exploration by biasing the empirical estimate of the
model parameters towards those with a higher collective best-response value of
all the players when fixing the other players' policies, thus encouraging the
policy to deviate from its current equilibrium for more exploration. VMG is
oblivious to different forms of function approximation, and permits
simultaneous and uncoupled policy updates of all players. Theoretically, we
also establish that VMG achieves a near-optimal regret for finding both the NEs
of two-player zero-sum Markov games and CCEs of multi-player general-sum Markov
games under linear function approximation in an online environment, which
nearly match their counterparts with sophisticated uncertainty quantification.
|
2502.09781
|
Medical Applications of Graph Convolutional Networks Using Electronic
Health Records: A Survey
|
cs.LG
|
Graph Convolutional Networks (GCNs) have emerged as a promising approach to
machine learning on Electronic Health Records (EHRs). By constructing a graph
representation of patient data and performing convolutions on neighborhoods of
nodes, GCNs can capture complex relationships and extract meaningful insights
to support medical decision making. This survey provides an overview of the
current research in applying GCNs to EHR data. We identify the key medical
domains and prediction tasks where these models are being utilized, common
benchmark datasets, and architectural patterns to provide a comprehensive
survey of this field. While this is a nascent area of research, GCNs
demonstrate strong potential to leverage the complex information hidden in
EHRs. Challenges and opportunities for future work are also discussed.
|
2502.09782
|
Improving Acoustic Side-Channel Attacks on Keyboards Using Transformers
and Large Language Models
|
cs.LG cs.AI cs.CL eess.AS
|
The increasing prevalence of microphones in everyday devices and the growing
reliance on online services have amplified the risk of acoustic side-channel
attacks (ASCAs) targeting keyboards. This study explores deep learning
techniques, specifically vision transformers (VTs) and large language models
(LLMs), to enhance the effectiveness and applicability of such attacks. We
present substantial improvements over prior research, with the CoAtNet model
achieving state-of-the-art performance. Our CoAtNet shows a 5.0% improvement
for keystrokes recorded via smartphone (Phone) and 5.9% for those recorded via
Zoom compared to previous benchmarks. We also evaluate transformer
architectures and language models, with the best VT model matching CoAtNet's
performance. A key advancement is the introduction of a noise mitigation method
for real-world scenarios. By using LLMs for contextual understanding, we detect
and correct erroneous keystrokes in noisy environments, enhancing ASCA
performance. Additionally, fine-tuned lightweight language models with Low-Rank
Adaptation (LoRA) deliver comparable performance to heavyweight models with 67X
more parameters. This integration of VTs and LLMs improves the practical
applicability of ASCA mitigation, marking the first use of these technologies
to address ASCAs and error correction in real-world scenarios.
|
2502.09787
|
TableTalk: Scaffolding Spreadsheet Development with a Language Agent
|
cs.SE cs.AI cs.HC
|
Despite its ubiquity in the workforce, spreadsheet programming remains
challenging as programmers need both spreadsheet-specific knowledge (e.g., APIs
to write formulas) and problem-solving skills to create complex spreadsheets.
Large language models (LLMs) can help automate aspects of this process, and
recent advances in planning and reasoning have enabled language agents, which
dynamically plan, use tools, and take iterative actions to complete complex
tasks. These agents observe, plan, and act, making them well-suited to scaffold
spreadsheet programming by following expert processes.
We present TableTalk, a language agent that helps programmers build
spreadsheets conversationally. Its design reifies three design principles --
scaffolding, flexibility, and incrementality -- which we derived from two
studies of seven programmers and 62 Excel templates. TableTalk structures
spreadsheet development by generating step-by-step plans and suggesting three
next steps users can choose from. It also integrates tools that enable
incremental spreadsheet construction. A user study with 20 programmers shows
that TableTalk produces spreadsheets 2.3 times more likely to be preferred over
a baseline agent, while reducing cognitive load and time spent reasoning about
spreadsheet actions by 12.6%. TableTalk's approach has implications for
human-agent collaboration. This includes providing persistent direct
manipulation interfaces for stopping or undoing agent actions, while ensuring
that such interfaces for accepting actions can be deactivated.
|
2502.09790
|
ExoMiner++ on TESS with Transfer Learning from Kepler: Transit
Classification and Vetting Catalog for 2-min Data
|
astro-ph.EP astro-ph.IM cs.LG
|
We present ExoMiner++, an enhanced deep learning model that builds on the
success of ExoMiner to improve transit signal classification in 2-minute TESS
data. ExoMiner++ incorporates additional diagnostic inputs, including
periodogram, flux trend, difference image, unfolded flux, and spacecraft
attitude control data, all of which are crucial for effectively distinguishing
transit signals from more challenging sources of false positives. To further
enhance performance, we leverage transfer learning from high-quality labeled
data from the Kepler space telescope, mitigating the impact of TESS's noisier
and more ambiguous labels. ExoMiner++ achieves high accuracy across various
classification and ranking metrics, significantly narrowing the search space
for follow-up investigations to confirm new planets. To serve the exoplanet
community, we introduce new TESS catalogs containing ExoMiner++ classifications
and confidence scores for each transit signal. Among the 147,568 unlabeled
TCEs, ExoMiner++ identifies 7,330 as planet candidates, with the remainder
classified as false positives. These 7,330 planet candidates correspond to
1,868 existing TESS Objects of Interest (TOIs), 69 Community TESS Objects of
Interest (CTOIs), and 50 newly introduced CTOIs. 1,797 out of the 2,506 TOIs
previously labeled as planet candidates in ExoFOP are classified as planet
candidates by ExoMiner++. This reduction in plausible candidates combined with
the excellent ranking quality of ExoMiner++ allows the follow-up efforts to be
focused on the most likely candidates, increasing the overall planet yield.
|
2502.09791
|
Atom identification in bilayer moire materials with Gomb-Net
|
cond-mat.mtrl-sci cs.CV
|
Moire patterns in van der Waals bilayer materials complicate the analysis of
atomic-resolution images, hindering the atomic-scale insight typically
attainable with scanning transmission electron microscopy. Here, we report a
method to detect the positions and identity of atoms in each of the individual
layers that compose bilayer heterostructures. We developed a deep learning
model, Gomb-Net, which can distinguish atomic species in each individual layer,
effectively deconvoluting the moire pattern to enable layer-specific mapping of
strain and dopant distributions, unlike other methods which struggle with
moire-induced complexity. Using this approach, we explored Se atom
substitutional sites in a twisted fractional Janus WS2-WS2(1-x)Se2x
heterostructure and found that layer specific implantation sites are unaffected
by the moire pattern's local energetic or electronic modulation. This
advancement enables atom-identification within material regimes where it was
not possible before, opening new insights into previously inaccessible material
physics.
|