| id | title | categories | abstract |
|---|---|---|---|
2502.13968
|
Betsu-Betsu: Multi-View Separable 3D Reconstruction of Two Interacting
Objects
|
cs.CV
|
Separable 3D reconstruction of multiple objects from multi-view RGB images --
resulting in two different 3D shapes for the two objects with a clear
separation between them -- remains a sparsely researched problem. It is
challenging due to severe mutual occlusions and ambiguities along the objects'
interaction boundaries. This paper investigates the setting and introduces a
new neuro-implicit method that can reconstruct the geometry and appearance of
two objects undergoing close interactions while disjoining both in 3D, avoiding
surface inter-penetrations and enabling novel-view synthesis of the observed
scene. The framework is end-to-end trainable and supervised using a novel
alpha-blending regularisation that ensures that the two geometries are well
separated even under extreme occlusions. Our reconstruction method is
markerless and can be applied to rigid as well as articulated objects. We
introduce a new dataset consisting of close interactions between a human and an
object and also evaluate on two scenes of humans performing martial arts. The
experiments confirm the effectiveness of our framework and substantial
improvements using 3D and novel view synthesis metrics compared to several
existing approaches applicable in our setting.
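The abstract does not define the alpha-blending regularisation precisely; one illustrative reading (function names and weights are hypothetical, not the authors' formulation) composites the two objects' per-ray opacities and penalises rays where both objects are opaque at once:

```python
def composite_alpha(a1, a2):
    """Standard alpha compositing of two per-ray opacities in [0, 1]."""
    return a1 + a2 - a1 * a2

def separation_penalty(alphas1, alphas2):
    """Penalise rays where both objects claim opacity simultaneously;
    driving this toward zero keeps the reconstructed surfaces disjoint."""
    return sum(a1 * a2 for a1, a2 in zip(alphas1, alphas2)) / len(alphas1)

# Toy per-ray opacities for the two interacting objects
rays_obj1 = [0.9, 0.8, 0.0, 0.1]
rays_obj2 = [0.0, 0.1, 0.95, 0.9]
penalty = separation_penalty(rays_obj1, rays_obj2)
```

Minimising such a penalty during training would discourage surface inter-penetration along the interaction boundary.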
|
2502.13969
|
Bridging Simulation and Reality: A 3D Clustering-Based Deep Learning
Model for UAV-Based RF Source Localization
|
eess.SP cs.AI
|
Localization of radio frequency (RF) sources has critical applications,
including search and rescue, jammer detection, and monitoring of hostile
activities. Unmanned aerial vehicles (UAVs) offer significant advantages for RF
source localization (RFSL) over terrestrial methods, leveraging autonomous 3D
navigation and improved signal capture at higher altitudes. Recent advancements
in deep learning (DL) have further enhanced localization accuracy, particularly
for outdoor scenarios. However, DL models often face challenges in real-world
performance, as they are typically trained on simulated datasets that fail to
fully replicate real-world conditions. To address this gap, we first propose the
Enhanced Two-Ray propagation model, reducing the simulation-to-reality gap by
improving the accuracy of propagation environment modeling. For RFSL, we
propose the 3D Cluster-Based RealAdaptRNet, a DL-based method leveraging 3D
clustering-based feature extraction for robust localization. Experimental
results demonstrate that the proposed Enhanced Two-Ray model provides superior
accuracy in simulating real-world propagation scenarios compared to
conventional free-space and two-ray models. Notably, the 3D Cluster-Based
RealAdaptRNet, trained entirely on simulated datasets, achieves exceptional
performance when validated in real-world environments using the AERPAW physical
testbed, with an average localization error of 18.2 m. The proposed approach is
computationally efficient, utilizing 33.5 times fewer parameters, and
demonstrates strong generalization capabilities across diverse trajectories,
making it highly suitable for real-world applications.
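For context, the conventional two-ray baseline that the Enhanced Two-Ray model improves upon can be written in a few lines (the enhanced model's corrections are not specified in the abstract; the frequency and geometry below are arbitrary):

```python
import cmath
import math

def two_ray_rx_power(p_tx_w, freq_hz, d_m, h_tx_m, h_rx_m, refl_coeff=-1.0):
    """Classic two-ray ground-reflection model: coherently sum the
    line-of-sight ray and a ground-reflected ray (reflection coefficient
    near -1 at grazing incidence). Antenna gains are taken as 1."""
    lam = 3e8 / freq_hz                       # wavelength in metres
    d_los = math.hypot(d_m, h_tx_m - h_rx_m)  # direct path length
    d_ref = math.hypot(d_m, h_tx_m + h_rx_m)  # ground-bounce path length
    e_los = cmath.exp(-2j * math.pi * d_los / lam) / d_los
    e_ref = refl_coeff * cmath.exp(-2j * math.pi * d_ref / lam) / d_ref
    return p_tx_w * (lam / (4 * math.pi)) ** 2 * abs(e_los + e_ref) ** 2

# Far from the transmitter the model collapses to the (h_t*h_r/d^2)^2 law
p_far = two_ray_rx_power(1.0, 3.3e9, 10_000.0, 30.0, 1.5)
asymptote = (30.0 * 1.5) ** 2 / 10_000.0 ** 4
```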
|
2502.13972
|
IncepFormerNet: A multi-scale multi-head attention network for SSVEP
classification
|
eess.SP cs.AI cs.LG
|
In recent years, deep learning (DL) models have shown outstanding performance
in EEG classification tasks, particularly in Steady-State Visually Evoked
Potential (SSVEP)-based Brain-Computer Interface (BCI) systems. This study
proposes a new model called IncepFormerNet, which is a hybrid of the Inception
and Transformer architectures. IncepFormerNet adeptly extracts multi-scale
temporal information from time series data using parallel convolution kernels
of varying sizes, accurately capturing the subtle variations and critical
features within SSVEP signals. Furthermore, the model integrates the multi-head
attention mechanism from the Transformer architecture, which not only provides
insights into global dependencies but also significantly enhances the
understanding and representation of complex patterns. Additionally, it takes
advantage of filter bank techniques to extract features based on the spectral
characteristics of SSVEP data. To validate the effectiveness of the proposed
model, we conducted experiments on two public datasets. The experimental
results show that IncepFormerNet achieves an accuracy of 87.41% on Dataset 1
and 71.97% on Dataset 2 using a 1.0-second time window. To further verify the
superiority of the proposed model, we compared it with other deep learning
models; the results indicate that our method achieves significantly higher
accuracy than the others. The source code for this work is available at:
https://github.com/CECNL/SSVEP-DAN.
|
2502.13974
|
Segmentation-free integration of nuclei morphology and spatial
transcriptomics for retinal images
|
eess.IV cs.CV
|
This study introduces SEFI (SEgmentation-Free Integration), a novel method
for integrating morphological features of cell nuclei with spatial
transcriptomics data. Cell segmentation poses a significant challenge in the
analysis of spatial transcriptomics data, as tissue-specific structural
complexities and densely packed cells in certain regions make it difficult to
develop a universal approach. SEFI addresses this by utilizing self-supervised
learning to extract morphological features from fluorescent nuclear staining
images, enhancing the clustering of gene expression data without requiring
segmentation. We demonstrate SEFI on spatially resolved gene expression
profiles of the developing retina, acquired using multiplexed single molecule
Fluorescence In Situ Hybridization (smFISH). SEFI is publicly available at
https://github.com/eduardchelebian/sefi.
|
2502.13976
|
Regularização, aprendizagem profunda e interdisciplinaridade em problemas
inversos mal-postos (Regularization, Deep Learning, and Interdisciplinarity in
Ill-Posed Inverse Problems)
|
eess.IV cs.LG
|
In this book, written in Portuguese, we discuss what ill-posed problems are
and how the regularization method is used to solve them. In the form of
questions and answers, we reflect on the origins and future of regularization,
relating the similarities and differences of its meaning in different areas,
including inverse problems, statistics, machine learning, and deep learning.
|
2502.13979
|
Utilizing Effective Dynamic Graph Learning to Shield Financial Stability
from Risk Propagation
|
q-fin.RM cs.AI cs.LG
|
Financial risks can propagate across both tightly coupled temporal and
spatial dimensions, posing significant threats to financial stability.
Moreover, risks embedded in unlabeled data are often difficult to detect. To
address these challenges, we introduce GraphShield, a novel approach with three
key innovations: (1) Enhanced Cross-Domain Information Learning: we propose a
dynamic graph learning module to improve information learning across temporal
and spatial domains; (2) Advanced Risk Recognition: by leveraging the clustering
characteristics of risks, we construct a risk recognizing module to enhance the
identification of hidden threats; and (3) Risk Propagation Visualization: we
provide a visualization tool for quantifying and validating nodes that trigger
widespread cascading risks. Extensive experiments on two real-world and two
open-source
datasets demonstrate the robust performance of our framework. Our approach
represents a significant advancement in leveraging artificial intelligence to
enhance financial stability, offering a powerful solution to mitigate the
spread of risks within financial networks.
|
2502.13982
|
Benchmarking Automatic Speech Recognition coupled LLM Modules for
Medical Diagnostics
|
eess.AS cs.LG
|
Natural Language Processing (NLP) and voice recognition agents are rapidly
transforming healthcare by enabling efficient, accessible, and professional
patient support while automating routine work. This report documents my
self-directed project in which models finetuned on medical call recordings are
analysed through a two-stage system: Automatic Speech Recognition (ASR) for
speech transcription and a Large Language Model (LLM) for context-aware,
professional responses. The ASR model, finetuned on phone call recordings,
provides generalised transcription of diverse patient speech over calls, while
the LLM matches the transcribed text to a medical diagnosis. A novel audio
preprocessing strategy is deployed to provide invariance to incoming
recording/call data: noise and clipping augmentation makes the pipeline robust
to the type of microphone and ambient conditions the patient might have while
calling or recording.
|
2502.13983
|
Gesture-Aware Zero-Shot Speech Recognition for Patients with Language
Disorders
|
eess.AS cs.AI
|
Individuals with language disorders often face significant communication
challenges due to their limited language processing and comprehension
abilities, which also affect their interactions with voice-assisted systems
that mostly rely on Automatic Speech Recognition (ASR). Despite advancements in
ASR that address disfluencies, there has been little attention on integrating
non-verbal communication methods, such as gestures, which individuals with
language disorders substantially rely on to supplement their communication.
Recognizing the need to interpret the latent meanings of visual information not
captured by speech alone, we propose a gesture-aware ASR system utilizing a
multimodal large language model with zero-shot learning for individuals with
speech impairments. Our experimental results and analyses show that including
gesture information significantly enhances semantic understanding. This study
can help develop effective communication technologies, specifically designed to
meet the unique needs of individuals with language impairments.
|
2502.13990
|
Remote Sensing Semantic Segmentation Quality Assessment based on Vision
Language Model
|
eess.IV cs.LG
|
The complexity of scenes and variations in image quality result in
significant variability in the performance of semantic segmentation methods for
remote sensing imagery (RSI) in supervised real-world scenarios. This makes the
evaluation of semantic segmentation quality in such scenarios an issue to be
resolved. However, most of the existing evaluation metrics are developed based
on expert-labeled object-level annotations, which are not applicable in such
scenarios. To address this issue, we propose RS-SQA, an unsupervised quality
assessment model for RSI semantic segmentation based on a vision language model
(VLM). This framework leverages a pre-trained RS VLM for semantic understanding
and utilizes intermediate features from segmentation methods to extract
implicit information about segmentation quality. Specifically, we introduce
CLIP-RS, a large-scale pre-trained VLM trained with purified text to reduce
textual noise and capture robust semantic information in the RS domain. Feature
visualizations confirm that CLIP-RS can effectively differentiate between
various levels of segmentation quality. Semantic features and low-level
segmentation features are effectively integrated through a semantic-guided
approach to enhance evaluation accuracy. To further support the development of
RS semantic segmentation quality assessment, we present RS-SQED, a dedicated
dataset sampled from four major RS semantic segmentation datasets and annotated
with segmentation accuracy derived from the inference results of 8
representative segmentation methods. Experimental results on the established
dataset demonstrate that RS-SQA significantly outperforms state-of-the-art
quality assessment models. This provides essential support for predicting
segmentation accuracy and high-quality semantic segmentation interpretation,
offering substantial practical value.
|
2502.13991
|
Learning to Discover Regulatory Elements for Gene Expression Prediction
|
q-bio.GN cs.AI
|
We consider the problem of predicting gene expressions from DNA sequences. A
key challenge of this task is to find the regulatory elements that control gene
expressions. Here, we introduce Seq2Exp, a Sequence to Expression network
explicitly designed to discover and extract regulatory elements that drive
target gene expression, enhancing the accuracy of the gene expression
prediction. Our approach captures the causal relationship between epigenomic
signals, DNA sequences and their associated regulatory elements. Specifically,
we propose to decompose the epigenomic signals and the DNA sequence conditioned
on the causal active regulatory elements, and apply an information bottleneck
with the Beta distribution to combine their effects while filtering out
non-causal components. Our experiments demonstrate that Seq2Exp outperforms
existing baselines in gene expression prediction tasks and discovers
influential regions compared to commonly used statistical methods for peak
detection such as MACS3. The source code is released as part of the AIRS
library (https://github.com/divelab/AIRS/).
|
2502.13994
|
Generative Detail Enhancement for Physically Based Materials
|
cs.GR cs.AI
|
We present a tool for enhancing the detail of physically based materials
using an off-the-shelf diffusion model and inverse rendering. Our goal is to
enhance the visual fidelity of materials with detail that is often tedious to
author, by adding signs of wear, aging, weathering, etc. As these appearance
details are often rooted in real-world processes, we leverage a generative
image model trained on a large dataset of natural images with corresponding
visuals in context. Starting with a given geometry, UV mapping, and basic
appearance, we render multiple views of the object. We use these views,
together with an appearance-defining text prompt, to condition a diffusion
model. The details it generates are then backpropagated from the enhanced
images to the material parameters via inverse differentiable rendering. For
inverse rendering to be successful, the generated appearance has to be
consistent across all the images. We propose two priors to address the
multi-view consistency of the diffusion model. First, we ensure that the
initial noise that seeds the diffusion process is itself consistent across
views by integrating it from a view-independent UV space. Second, we enforce
geometric consistency by biasing the attention mechanism via a projective
constraint so that pixels attend strongly to their corresponding pixel
locations in other views. Our approach does not require any training or
finetuning of the diffusion model, is agnostic of the material model used, and
the enhanced material properties, i.e., 2D PBR textures, can be further edited
by artists.
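The first prior (view-consistent seed noise) can be sketched: draw the initial noise once in UV texture space, then let every rendered view look up its per-pixel noise through the UV mapping, so corresponding surface points receive identical seeds (toy resolution and mapping below, not the authors' implementation):

```python
import random

def make_uv_noise(size, seed=0):
    """Per-texel Gaussian noise drawn once in UV texture space."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0) for _ in range(size)] for _ in range(size)]

def sample_view_noise(uv_noise, uv_coords):
    """Integrate the UV noise into a view: each pixel's (u, v) in [0, 1)
    indexes the shared UV grid, so two views agree wherever their pixels
    map to the same surface point."""
    size = len(uv_noise)
    return [uv_noise[int(v * size)][int(u * size)] for (u, v) in uv_coords]

tex = make_uv_noise(8)
# Two views whose first pixels see the same surface point (same UV)
view_a = sample_view_noise(tex, [(0.10, 0.20), (0.50, 0.50)])
view_b = sample_view_noise(tex, [(0.10, 0.20), (0.90, 0.10)])
```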
|
2502.13996
|
Beyond Single-Value Metrics: Evaluating and Enhancing LLM Unlearning
with Cognitive Diagnosis
|
cs.LG
|
Due to the widespread use of LLMs and rising critical ethical and safety
concerns, LLM unlearning methods have been developed to remove harmful
knowledge and undesirable capabilities. In this context, evaluations are mostly
based on single-value metrics such as QA accuracy. However, these metrics often
fail to capture the nuanced retention of harmful knowledge components, making
it difficult to assess the true effectiveness of unlearning. To address this
issue, we propose UNCD (UNlearning evaluation via Cognitive Diagnosis), a novel
framework that leverages Cognitive Diagnosis Modeling for fine-grained
evaluation of LLM unlearning. Our dedicated benchmark, UNCD-Cyber, provides a
detailed assessment of the removal of dangerous capabilities. Moreover, we
introduce UNCD-Agent, which refines unlearning by diagnosing knowledge remnants
and generating targeted unlearning data. Extensive experiments across eight
unlearning methods and two base models demonstrate that UNCD not only enhances
evaluation but also effectively facilitates the removal of harmful LLM
abilities.
|
2502.13998
|
A Baseline Method for Removing Invisible Image Watermarks using Deep
Image Prior
|
eess.IV cs.AI
|
Image watermarks have been considered a promising technique to help detect
AI-generated content, which can be used to protect copyright or prevent fake
image abuse. In this work, we present a black-box method for removing invisible
image watermarks, without the need of any dataset of watermarked images or any
knowledge about the watermark system. Our approach is simple to implement:
given a single watermarked image, we regress it by deep image prior (DIP). We
show that from the intermediate steps of DIP one can reliably find an evasion
image that can remove invisible watermarks while preserving high image quality.
Due to its unique working mechanism and practical effectiveness, we advocate
including DIP as a baseline evasion method for benchmarking the robustness of
watermarking systems. Finally, by showing the limited ability of DIP and other
existing black-box methods in evading training-based visible watermarks, we
discuss the positive implications on the practical use of training-based
visible watermarks to prevent misinformation abuse.
|
2502.14000
|
Human-Artificial Interaction in the Age of Agentic AI: A
System-Theoretical Approach
|
cs.MA cs.AI cs.HC
|
This paper presents a novel perspective on human-computer interaction (HCI),
framing it as a dynamic interplay between human and computational agents within
a networked system. Going beyond traditional interface-based approaches, we
emphasize the importance of coordination and communication among heterogeneous
agents with different capabilities, roles, and goals. A key distinction is made
between multi-agent systems (MAS) and Centaurian systems, which represent two
different paradigms of human-AI collaboration. MAS maintain agent autonomy,
with structured protocols enabling cooperation, while Centaurian systems deeply
integrate human and AI capabilities, creating unified decision-making entities.
To formalize these interactions, we introduce a framework for communication
spaces, structured into surface, observation, and computation layers, that
ensures seamless integration between MAS and Centaurian architectures: colored
Petri nets effectively represent structured Centaurian systems, while
high-level reconfigurable networks address the dynamic nature of MAS.
Our research has practical applications in autonomous robotics,
human-in-the-loop decision making, and AI-driven cognitive architectures, and
provides a foundation for next-generation hybrid intelligence systems that
balance structured coordination with emergent behavior.
|
2502.14001
|
Towards a perturbation-based explanation for medical AI as
differentiable programs
|
stat.ML cs.AI cs.LG
|
Recent advancements in machine learning algorithms have reached a point where
medical devices can be equipped with artificial intelligence (AI) models for
diagnostic support and routine automation in clinical settings. In medicine and
healthcare, there is a particular demand for sufficient and objective
explainability of the outcome generated by AI models. However, AI models are
generally considered as black boxes due to their complexity, and the
computational process leading to their response is often opaque. Although
several methods have been proposed to explain the behavior of models by
evaluating the importance of each feature in discrimination and prediction,
they may suffer from biases and opacities arising from the scale and sampling
protocol of the dataset used for training or testing. To overcome the
shortcomings of existing methods, we explore an alternative approach to provide
an objective explanation of AI models that can be defined independently of the
learning process and does not require additional data. As a preliminary study
for this direction of research, this work examines the numerical availability of
the Jacobian matrix of deep learning models, which measures how stably a model
responds to small perturbations added to the input. The indicator, if
available, is calculated from a trained AI model for a given target input.
This is a first step towards a perturbation-based explanation, which will
assist medical practitioners in understanding and interpreting the response of
the AI model in its clinical application.
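The Jacobian-based stability check described above can be illustrated with a finite-difference sketch (the toy linear "model" and the norm used are illustrative assumptions; the paper's exact estimator is not specified in the abstract):

```python
def jacobian_fd(f, x, eps=1e-6):
    """Finite-difference Jacobian J[i][j] = d f_i / d x_j of a vector
    function f at input x. A large operator norm of J means the model's
    response can change sharply under small input perturbations."""
    fx = f(x)
    jac = [[0.0] * len(x) for _ in range(len(fx))]
    for j in range(len(x)):
        xp = list(x)
        xp[j] += eps
        fxp = f(xp)
        for i in range(len(fx)):
            jac[i][j] = (fxp[i] - fx[i]) / eps
    return jac

def max_abs_row_sum(jac):
    # Infinity norm of J: a cheap upper bound on response amplification
    return max(sum(abs(v) for v in row) for row in jac)

# Toy "model": a fixed linear map standing in for a trained network
model = lambda x: [2.0 * x[0] + x[1], -x[0] + 3.0 * x[1]]
J = jacobian_fd(model, [0.5, -0.2])
```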
|
2502.14003
|
Rectified Lagrangian for Out-of-Distribution Detection in Modern
Hopfield Networks
|
cs.LG cs.AI
|
Modern Hopfield networks (MHNs) have recently gained significant attention in
the field of artificial intelligence because they can store and retrieve a
large set of patterns with an exponentially large memory capacity. A MHN is
generally a dynamical system defined with Lagrangians of memory and feature
neurons, where memories associated with in-distribution (ID) samples are
represented by attractors in the feature space. One major problem in existing
MHNs lies in managing out-of-distribution (OOD) samples because it was
originally assumed that all samples are ID samples. To address this, we propose
the rectified Lagrangian (RecLag), a new Lagrangian for memory neurons that
explicitly incorporates an attractor for OOD samples in the dynamical system of
MHNs. RecLag creates a trivial point attractor for any interaction matrix,
enabling OOD detection by identifying samples that fall into this attractor as
OOD. The interaction matrix is optimized so that the probability densities can
be estimated to identify ID/OOD. We demonstrate the effectiveness of
RecLag-based MHNs compared to energy-based OOD detection methods, including
those using state-of-the-art Hopfield energies, across nine image datasets.
|
2502.14004
|
Inter3D: A Benchmark and Strong Baseline for Human-Interactive 3D Object
Reconstruction
|
cs.GR cs.LG
|
Recent advancements in implicit 3D reconstruction methods, e.g., neural
rendering fields and Gaussian splatting, have primarily focused on novel view
synthesis of static or dynamic objects with continuous motion states. However,
these approaches struggle to efficiently model a human-interactive object with
n movable parts, requiring 2^n separate models to represent all discrete
states. To overcome this limitation, we propose Inter3D, a new benchmark and
approach for novel state synthesis of human-interactive objects. We introduce a
self-collected dataset featuring commonly encountered interactive objects and a
new evaluation pipeline, where only individual part states are observed during
training, while part combination states remain unseen. We also propose a strong
baseline approach that leverages Space Discrepancy Tensors to efficiently
model all states of an object. To alleviate the impractical constraints on
camera trajectories across training states, we propose a Mutual State
Regularization mechanism to enhance the spatial density consistency of movable
parts. In addition, we explore two occupancy grid sampling strategies to
facilitate training efficiency. We conduct extensive experiments on the
proposed benchmark, showcasing the challenges of the task and the superiority
of our approach.
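The combinatorial blow-up motivating Inter3D can be made concrete: with n binary movable parts there are 2^n discrete states, while training observes only the individual part states (assuming here, as an illustration, that the all-closed base state is also seen):

```python
from itertools import product

def all_states(n_parts):
    """Every discrete state of an object with n binary movable parts
    (0 = at rest, 1 = moved): 2**n combinations in total."""
    return [tuple(bits) for bits in product((0, 1), repeat=n_parts)]

def training_states(n_parts):
    """Individual part states only: the base state plus the n states
    with exactly one part moved; combination states stay unseen."""
    base = tuple([0] * n_parts)
    singles = [tuple(1 if j == i else 0 for j in range(n_parts))
               for i in range(n_parts)]
    return [base] + singles

n = 3
seen = set(training_states(n))
unseen = [s for s in all_states(n) if s not in seen]
```

Even at n = 3, half of the states are never observed during training, which is exactly the novel-state-synthesis gap the benchmark evaluates.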
|
2502.14005
|
Smaller But Better: Unifying Layout Generation with Smaller Large
Language Models
|
cs.LG
|
We propose LGGPT, an LLM-based model tailored for unified layout generation.
First, we propose Arbitrary Layout Instruction (ALI) and Universal Layout
Response (ULR) as the uniform I/O template. ALI accommodates arbitrary layout
generation task inputs across multiple layout domains, enabling LGGPT to unify
both task-generic and domain-generic layout generation hitherto unexplored.
Collectively, ALI and ULR boast a succinct structure that forgoes superfluous
tokens typically found in existing HTML-based formats, facilitating efficient
instruction tuning and boosting unified generation performance. In addition, we
propose an Interval Quantization Encoding (IQE) strategy that compresses ALI
into a more condensed structure. IQE precisely preserves valid layout clues
while eliminating the less informative placeholders, facilitating LGGPT to
capture complex and variable layout generation conditions during the unified
training process. Experimental results demonstrate that LGGPT achieves superior
or on-par performance compared to existing methods. Notably, LGGPT strikes a
prominent balance between proficiency and efficiency with a compact 1.5B
parameter LLM, which beats prior 7B or 175B models even in the most extensive
and challenging unified scenario. Furthermore, we underscore the necessity of
employing LLMs for unified layout generation and suggest that 1.5B could be an
optimal parameter size by comparing LLMs of varying scales. Code is available
at https://github.com/NiceRingNode/LGGPT.
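The abstract does not spell out how Interval Quantization Encoding condenses layout clues; as a generic illustration only (function names and the bin count are hypothetical), continuous layout coordinates can be bucketed into a small set of interval tokens:

```python
def quantize(coord, n_bins=128):
    """Map a normalized coordinate in [0, 1] to a discrete interval index,
    shrinking the token vocabulary needed to describe element positions."""
    return min(int(coord * n_bins), n_bins - 1)

def dequantize(index, n_bins=128):
    # Reconstruct the interval midpoint as the decoded coordinate
    return (index + 0.5) / n_bins

box = [0.1037, 0.2518, 0.4906, 0.7733]  # x, y, w, h in [0, 1]
tokens = [quantize(c) for c in box]
decoded = [dequantize(t) for t in tokens]
```

The round-trip error is bounded by half a bin width, so valid layout clues survive while the representation stays compact.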
|
2502.14008
|
MaskPrune: Mask-based LLM Pruning for Layer-wise Uniform Structures
|
cs.CL cs.AI cs.LG
|
The remarkable performance of large language models (LLMs) in various
language tasks has attracted considerable attention. However, the
ever-increasing size of these models presents growing challenges for deployment
and inference. Structured pruning, an effective model compression technique, is
gaining increasing attention due to its ability to enhance inference
efficiency. Nevertheless, most previous optimization-based structured pruning
methods sacrifice the uniform structure across layers for greater flexibility
to maintain performance. The heterogeneous structure hinders the effective
utilization of off-the-shelf inference acceleration techniques and impedes
efficient configuration for continued training. To address this issue, we
propose a novel masking learning paradigm based on minimax optimization to
obtain the uniform pruned structure by optimizing the masks under sparsity
regularization. Extensive experimental results demonstrate that our method can
maintain high performance while ensuring the uniformity of the pruned model
structure, thereby outperforming existing SOTA methods.
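The uniform-structure goal can be illustrated with a magnitude-based caricature (not the paper's minimax mask learning): keep the same number of channels in every layer, so off-the-shelf acceleration kernels see one width throughout:

```python
def uniform_prune(layer_scores, keep):
    """Keep exactly `keep` channels in every layer (uniform width),
    choosing each layer's highest-scoring channels. A magnitude-based
    stand-in for the learned masks described in the abstract."""
    masks = []
    for scores in layer_scores:
        ranked = sorted(range(len(scores)), key=lambda i: -scores[i])
        kept = set(ranked[:keep])
        masks.append([1 if i in kept else 0 for i in range(len(scores))])
    return masks

# Hypothetical per-channel importance scores for two layers
scores = [[0.9, 0.1, 0.5, 0.3], [0.2, 0.8, 0.7, 0.1]]
masks = uniform_prune(scores, keep=2)
```

Every layer ends up with the same pruned width, unlike flexibility-first methods that leave a heterogeneous structure.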
|
2502.14009
|
Benchmarking Self-Supervised Methods for Accelerated MRI Reconstruction
|
eess.IV cs.LG
|
Reconstructing MRI from highly undersampled measurements is crucial for
accelerating medical imaging, but is challenging due to the ill-posedness of
the inverse problem. While supervised deep learning approaches have shown
remarkable success, they rely on fully-sampled ground truth data, which is
often impractical or impossible to obtain. Recently, numerous self-supervised
methods have emerged that do not require ground truth, however, the lack of
systematic comparison and standard experimental setups have hindered research.
We present the first comprehensive review of loss functions from all
feedforward self-supervised methods and the first benchmark on accelerated MRI
reconstruction without ground truth, showing that there is a wide range in
performance across methods. In addition, we propose Multi-Operator Equivariant
Imaging (MO-EI), a novel framework that builds on the imaging model considered
in existing methods to outperform all state-of-the-art and approaches
supervised performance. Finally, to facilitate reproducible benchmarking, we
provide implementations of all methods in the DeepInverse library
(https://deepinv.github.io) and easy-to-use demo code at
https://andrewwango.github.io/deepinv-selfsup-fastmri.
|
2502.14010
|
Which Attention Heads Matter for In-Context Learning?
|
cs.LG cs.AI cs.CL
|
Large language models (LLMs) exhibit impressive in-context learning (ICL)
capability, enabling them to perform new tasks using only a few demonstrations
in the prompt. Two different mechanisms have been proposed to explain ICL:
induction heads that find and copy relevant tokens, and function vector (FV)
heads whose activations compute a latent encoding of the ICL task. To better
understand which of the two distinct mechanisms drives ICL, we study and
compare induction heads and FV heads in 12 language models.
Through detailed ablations, we discover that few-shot ICL performance depends
primarily on FV heads, especially in larger models. In addition, we uncover
that FV and induction heads are connected: many FV heads start as induction
heads during training before transitioning to the FV mechanism. This leads us
to speculate that induction facilitates learning the more complex FV mechanism
that ultimately drives ICL.
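The induction-head mechanism mentioned above has a simple algorithmic caricature (a toy sketch, not the authors' code): find the previous occurrence of the current token and copy the token that followed it.

```python
def induction_predict(tokens):
    """Toy induction head: locate the most recent earlier occurrence of
    the final token and predict the token that followed it there."""
    last = tokens[-1]
    for i in range(len(tokens) - 2, -1, -1):
        if tokens[i] == last:
            return tokens[i + 1]
    return None  # no earlier occurrence to copy from

seq = ["A", "B", "C", "D", "A"]
pred = induction_predict(seq)  # earlier "A" was followed by "B"
```

FV heads, by contrast, are characterized by activations that encode the task itself rather than this find-and-copy pattern.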
|
2502.14011
|
DFDT: Dynamic Fast Decision Tree for IoT Data Stream Mining on Edge
Devices
|
cs.LG cs.AI cs.NI
|
The Internet of Things generates massive data streams, with edge computing
emerging as a key enabler for online IoT applications and 5G networks. Edge
solutions facilitate real-time machine learning inference, but also require
continuous adaptation to concept drifts. Ensemble-based solutions improve
predictive performance, but incur higher resource consumption, latency, and
memory demands. This paper presents DFDT: Dynamic Fast Decision Tree, a novel
algorithm designed for energy-efficient memory-constrained data stream mining.
DFDT improves Hoeffding tree growth efficiency by dynamically adjusting grace
periods, tie thresholds, and split evaluations based on incoming data. It
incorporates stricter evaluation rules (based on entropy, information gain, and
leaf instance count), adaptive expansion modes, and a leaf deactivation
mechanism to manage memory, allowing more computation on frequently visited
nodes while conserving energy on others. Experiments show that the proposed
framework can achieve increased predictive performance (0.43 vs 0.29 ranking)
with constrained memory and a fraction of the runtime of VFDT or SVFDT.
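The grace-period and tie-threshold machinery above builds on the standard Hoeffding bound used by VFDT-style trees: split on the best attribute once its observed gain advantage exceeds epsilon = sqrt(R^2 ln(1/delta) / 2n), or once epsilon falls below a tie threshold. A minimal sketch of that decision rule (DFDT's dynamic adjustments are not detailed in the abstract; the numeric parameters are conventional defaults):

```python
import math

def hoeffding_bound(value_range, delta, n):
    """Epsilon such that the true mean is within epsilon of the observed
    mean with probability 1 - delta, after n independent observations."""
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

def should_split(gain_best, gain_second, n, value_range=1.0,
                 delta=1e-7, tie_threshold=0.05):
    """VFDT-style leaf split test: split if the best attribute's
    information-gain lead is statistically significant, or if the top
    two attributes are so close that waiting longer cannot help."""
    eps = hoeffding_bound(value_range, delta, n)
    return (gain_best - gain_second > eps) or (eps < tie_threshold)

few = should_split(0.30, 0.25, n=100)     # lead 0.05 vs a large epsilon
many = should_split(0.30, 0.25, n=50000)  # same lead, tiny epsilon
```

With few samples the bound forbids splitting; with many samples the same gain lead becomes decisive, which is the leverage point DFDT tunes adaptively.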
|
2502.14013
|
Appeal prediction for AI up-scaled Images
|
cs.GR cs.AI eess.IV
|
DNN- or AI-based up-scaling algorithms are gaining in popularity due to the
improvements in machine learning. Various up-scaling models using CNNs, GANs or
mixed approaches have been published. The majority of models are evaluated
using PSNR and SSIM or only a few example images. However, a performance
evaluation with a wide range of real-world images and subjective evaluation is
missing, which we tackle in the following paper. For this reason, we describe
our developed dataset, which uses 136 base images and five different up-scaling
methods, namely Real-ESRGAN, BSRGAN, waifu2x, KXNet, and Lanczos. Overall, the
dataset consists of 1496 annotated images. The labeling of our dataset focused
on image appeal and was performed via crowd-sourcing using our open-source tool
AVRate Voyager. We evaluate the appeal of the different methods, and the
results indicate that Real-ESRGAN and BSRGAN perform best. Furthermore, we
train a DNN to detect which up-scaling method has been used; the trained models
show good overall performance in our evaluation. In addition, we evaluate
state-of-the-art image appeal and quality models; as none of them showed high
prediction performance, we also trained two approaches of our own. The first
uses transfer learning and has the best performance; the second uses
signal-based features and a random forest model with good overall performance.
We share the data and implementation to allow further research in the context
of open science.
|
2502.14018
|
I Want 'Em All (At Once) -- Ultrametric Cluster Hierarchies
|
cs.LG
|
Hierarchical clustering is a powerful tool for exploratory data analysis,
organizing data into a tree of clusterings from which a partition can be
chosen. This paper generalizes these ideas by proving that, for any reasonable
hierarchy, one can optimally solve any center-based clustering objective over
it (such as $k$-means). Moreover, these solutions can be found exceedingly
quickly and are themselves necessarily hierarchical. Thus, given a cluster
tree, we show that one can quickly access a plethora of new, equally meaningful
hierarchies. Just as in standard hierarchical clustering, one can then choose
any desired partition from these new hierarchies. We conclude by verifying the
utility of our proposed techniques across datasets, hierarchies, and
partitioning schemes.
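The claim that a center-based objective can be solved optimally over a cluster tree admits a compact dynamic program: at each node, either keep its points as one cluster or split the cluster budget among its children. A sketch for the k-means (sum of squared deviations) objective on a binary cluster tree (the node layout and field names are illustrative, not the paper's):

```python
def sse(points):
    """Sum of squared distances of 1-D points to their mean."""
    mu = sum(points) / len(points)
    return sum((p - mu) ** 2 for p in points)

def best_cost(node, k, memo=None):
    """Minimum k-means cost of partitioning node's points into exactly k
    clusters, each cluster a node of the subtree. node = (points, children)."""
    memo = {} if memo is None else memo
    key = (id(node), k)
    if key in memo:
        return memo[key]
    points, children = node
    if k == 1:
        cost = sse(points)              # keep the whole node together
    elif not children:
        cost = float("inf")             # a leaf cannot split further
    else:
        left, right = children
        cost = min(best_cost(left, i, memo) + best_cost(right, k - i, memo)
                   for i in range(1, k))  # split the budget k among children
    memo[key] = cost
    return cost

# Tiny hierarchy over 1-D points: {1, 2} vs {10, 11}
leaf_a = ([1.0, 2.0], [])
leaf_b = ([10.0, 11.0], [])
root = ([1.0, 2.0, 10.0, 11.0], [leaf_a, leaf_b])
```

The memoized recursion visits each (node, budget) pair once, which is why such solutions can be found quickly and are themselves hierarchical.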
|
2502.14019
|
Dehumanizing Machines: Mitigating Anthropomorphic Behaviors in Text
Generation Systems
|
cs.CL cs.AI cs.HC
|
As text generation systems' outputs are increasingly anthropomorphic --
perceived as human-like -- scholars have also raised increasing concerns about
how such outputs can lead to harmful outcomes, such as users over-relying on
or developing emotional dependence on these systems. How to intervene on such
system outputs to mitigate anthropomorphic behaviors and their attendant
harmful outcomes, however, remains understudied. With this work, we aim to
provide empirical and theoretical grounding for developing such interventions.
To do so, we compile an inventory of interventions grounded both in prior
literature and a crowdsourced study where participants edited system outputs to
make them less human-like. Drawing on this inventory, we also develop a
conceptual framework to help characterize the landscape of possible
interventions, articulate distinctions between different types of
interventions, and provide a theoretical basis for evaluating the effectiveness
of different interventions.
|
2502.14022
|
A General Framework for Augmenting Lossy Compressors with Topological
Guarantees
|
cs.DC cs.IT math.IT
|
Topological descriptors such as contour trees are widely utilized in
scientific data analysis and visualization, with applications from materials
science to climate simulations. It is desirable to preserve topological
descriptors when data compression is part of the scientific workflow for these
applications. However, classic error-bounded lossy compressors for volumetric
data do not guarantee the preservation of topological descriptors, despite
imposing strict pointwise error bounds. In this work, we introduce a general
framework for augmenting any lossy compressor to preserve the topology of the
data during compression. Specifically, our framework quantifies the adjustments
(to the decompressed data) needed to preserve the contour tree and then employs
a custom variable-precision encoding scheme to store these adjustments. We
demonstrate the utility of our framework in augmenting classic compressors
(such as SZ3, TTHRESH, and ZFP) and deep learning-based compressors (such as
Neurcomp) with topological guarantees.
|
2502.14023
|
Dynamic Activation with Knowledge Distillation for Energy-Efficient
Spiking NN Ensembles
|
cs.LG cs.AI cs.CV cs.NE
|
While foundation AI models excel at tasks like classification and
decision-making, their high energy consumption makes them unsuitable for
energy-constrained applications. Inspired by the brain's efficiency, spiking
neural networks (SNNs) have emerged as a viable alternative due to their
event-driven nature and compatibility with neuromorphic chips. This work
introduces a novel system that combines knowledge distillation and ensemble
learning to bridge the performance gap between artificial neural networks
(ANNs) and SNNs. A foundation AI model acts as a teacher network, guiding
smaller student SNNs organized into an ensemble, called Spiking Neural Ensemble
(SNE). SNE enables the disentanglement of the teacher's knowledge, allowing
each student to specialize in predicting a distinct aspect of it, while
processing the same input. The core innovation of SNE is the adaptive
activation of a subset of SNN models of an ensemble, leveraging
knowledge-distillation, enhanced with an informed-partitioning
(disentanglement) of the teacher's feature space. By dynamically activating
only a subset of these student SNNs, the system balances accuracy and energy
efficiency, achieving substantial energy savings with minimal accuracy loss.
Moreover, SNE is significantly more efficient than the teacher network,
reducing computational requirements by up to 20x with only a 2% drop in
accuracy on the CIFAR-10 dataset. This disentanglement procedure achieves an
accuracy improvement of up to 2.4% on the CIFAR-10 dataset compared to other
partitioning schemes. Finally, we comparatively analyze SNE performance under
noisy conditions, demonstrating enhanced robustness compared to its ANN
teacher. In summary, SNE offers a promising new direction for
energy-constrained applications.
|
2502.14037
|
DiffSampling: Enhancing Diversity and Accuracy in Neural Text Generation
|
cs.CL cs.AI cs.LG
|
Despite their increasing performance, large language models still tend to
reproduce training data, generate several repetitions, and focus on the most
common grammatical structures and words. A possible cause is the decoding
strategy adopted: the most common ones either consider only the most probable
tokens, reducing output diversity, or increase the likelihood of unlikely
tokens at the cost of output accuracy and correctness. In this paper, we
propose a family of three new decoding methods by leveraging a mathematical
analysis of the token probability distribution. In particular, the difference
between consecutive, sorted probabilities can be used to avoid incorrect tokens
and increase the chance of low-probability but accurate words. Experiments
concerning math problem solving, extreme summarization, and the divergent
association task show that our approach consistently performs at least as well
as current alternatives in terms of quality and diversity.
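The gap-based truncation idea can be sketched as follows; the cut rule used here (truncate after the single largest drop between consecutive sorted probabilities) is a simplified placeholder for the paper's family of three methods:

```python
import numpy as np

def diff_truncate(probs):
    """Truncate a token distribution after the largest drop between
    consecutive sorted probabilities, then renormalize.

    Simplified placeholder for gap-based decoding; the paper proposes a
    family of three methods, not necessarily this exact cut rule.
    """
    order = np.argsort(probs)[::-1]      # token ids, most probable first
    sorted_p = probs[order]
    gaps = sorted_p[:-1] - sorted_p[1:]  # drops between neighbours
    cut = int(np.argmax(gaps)) + 1       # keep everything before the cliff
    kept = order[:cut]
    out = np.zeros_like(probs)
    out[kept] = probs[kept] / probs[kept].sum()
    return out
```

Sampling then proceeds from the truncated distribution, which excludes the low-probability tail while keeping plausible near-ties with the top token.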
|
2502.14043
|
Asking for Help Enables Safety Guarantees Without Sacrificing
Effectiveness
|
cs.LG cs.AI
|
Most reinforcement learning algorithms with regret guarantees rely on a
critical assumption: that all errors are recoverable. Recent work by Plaut et
al. discarded this assumption and presented algorithms that avoid "catastrophe"
(i.e., irreparable errors) by asking for help. However, they provided only
safety guarantees and did not consider reward maximization. We prove that any
algorithm that avoids catastrophe in their setting also guarantees high reward
(i.e., sublinear regret) in any Markov Decision Process (MDP), including MDPs
with irreversible costs. This constitutes the first no-regret guarantee for
general MDPs. More broadly, our result may be the first formal proof that it is
possible for an agent to obtain high reward while becoming self-sufficient in
an unknown, unbounded, and high-stakes environment without causing catastrophe
or requiring resets.
|
2502.14044
|
Enhancing Cognition and Explainability of Multimodal Foundation Models
with Self-Synthesized Data
|
cs.CV cs.LG
|
Large multimodal models (LMMs) have shown impressive capabilities in a wide
range of visual tasks. However, they often struggle with fine-grained visual
reasoning, failing to identify domain-specific objectives and provide
justifiable explanations for their predictions. To address this, we propose a
novel visual rejection sampling framework to improve the cognition and
explainability of LMMs using self-synthesized data. Specifically, visual
fine-tuning requires images, queries, and target answers. Our approach begins
by synthesizing interpretable answers that include human-verifiable visual
features. These features are based on expert-defined concepts, carefully
selected based on their alignment with the image content. After each round of
fine-tuning, we apply a reward model-free filtering mechanism to select the
highest-quality interpretable answers for the next round of tuning. This
iterative process of data synthesis and fine-tuning progressively improves the
model's ability to generate accurate and reasonable explanations. Experimental
results demonstrate the effectiveness of our method in improving both the
accuracy and explainability of specialized visual classification tasks.
|
2502.14045
|
Position: There are no Champions in Long-Term Time Series Forecasting
|
cs.LG cs.AI
|
Recent advances in long-term time series forecasting have introduced numerous
complex prediction models that consistently outperform previously published
architectures. However, this rapid progression raises concerns regarding
inconsistent benchmarking and reporting practices, which may undermine the
reliability of these comparisons. Our position emphasizes the need to shift
focus away from pursuing ever-more complex models and towards enhancing
benchmarking practices through rigorous and standardized evaluation methods. To
support our claim, we first perform a broad, thorough, and reproducible
evaluation of the top-performing models on the most popular benchmark by
training 3,500+ networks over 14 datasets. Then, through a comprehensive
analysis, we find that slight changes to experimental setups or current
evaluation metrics drastically shift the common belief that newly published
results are advancing the state of the art. Our findings suggest the need for
rigorous and standardized evaluation methods that enable more substantiated
claims, including reproducible hyperparameter setups and statistical testing.
|
2502.14047
|
Towards a Learning Theory of Representation Alignment
|
cs.LG cs.AI stat.ML
|
It has recently been argued that AI models' representations are becoming
aligned as their scale and performance increase. Empirical analyses have been
designed to support this idea and conjecture the possible alignment of
different representations toward a shared statistical model of reality. In this
paper, we propose a learning-theoretic perspective to representation alignment.
First, we review and connect different notions of alignment based on metric,
probabilistic, and spectral ideas. Then, we focus on stitching, a particular
approach to understanding the interplay between different representations in
the context of a task. Our main contribution here is relating properties of
stitching to the kernel alignment of the underlying representation. Our results
can be seen as a first step toward casting representation alignment as a
learning-theoretic problem.
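One common notion of kernel alignment between representations is linear centered kernel alignment (CKA); a minimal implementation, included here only to make the quantity concrete, is:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two representation
    matrices of shape (n_samples, features). Shown only to make the
    notion of kernel alignment concrete; the paper's results concern
    its relation to stitching, which this snippet does not implement.
    """
    X = X - X.mean(axis=0)   # center each feature
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den
```

The score equals 1 for representations identical up to isotropic scaling and orthogonal rotation, and falls toward 0 as the induced kernels decorrelate.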
|
2502.14048
|
Semantic Decomposition and Selective Context Filtering -- Text
Processing Techniques for Context-Aware NLP-Based Systems
|
cs.CL cs.AI cs.HC
|
In this paper, we present two techniques for use in context-aware systems:
Semantic Decomposition, which sequentially decomposes input prompts into a
structured and hierarchal information schema in which systems can parse and
process easily, and Selective Context Filtering, which enables systems to
systematically filter out specific irrelevant sections of contextual
information that is fed through a system's NLP-based pipeline. We will explore
how context-aware systems and applications can utilize these two techniques in
order to implement dynamic LLM-to-system interfaces, improve an LLM's ability
to generate more contextually cohesive user-facing responses, and optimize
complex automated workflows and pipelines.
|
2502.14050
|
Diversity-driven Data Selection for Language Model Tuning through Sparse
Autoencoder
|
cs.CL cs.AI cs.LG
|
Current pre-trained large language models typically need instruction tuning
to align with human preferences. However, instruction tuning data is often
quantity-saturated due to the large volume of data collection and fast model
iteration, leaving coreset data selection important but underexplored. On the
other hand, existing quality-driven data selection methods such as LIMA (Zhou
et al., NeurIPS 2023) and AlpaGasus (Chen et al., ICLR 2024) generally ignore
the equal importance of data diversity and complexity. In this work, we aim to
design a diversity-aware data selection strategy and propose using sparse
autoencoders to tackle the challenge of measuring data diversity. In addition,
sparse autoencoders can provide more interpretability of model behavior and
explain, e.g., the surprising effectiveness of selecting the longest response
(Zhao et al., ICML 2024). Using effective data selection, we experimentally
demonstrate that models trained on our selected data outperform other methods
in terms of model capabilities, reduce training cost, and potentially gain more
control over model behaviors.
|
2502.14051
|
RocketKV: Accelerating Long-Context LLM Inference via Two-Stage KV Cache
Compression
|
cs.CL cs.LG
|
Transformer-based Large Language Models rely critically on KV cache to
efficiently handle extended contexts during the decode phase. Yet, the size of
the KV cache grows proportionally with the input length, burdening both memory
bandwidth and capacity as decoding progresses. To address this challenge, we
present RocketKV, a training-free KV cache compression strategy designed
specifically to reduce both memory bandwidth and capacity demand of KV cache
during the decode phase. RocketKV contains two consecutive stages. In the first
stage, it performs coarse-grain KV cache eviction on the input sequence tokens
with SnapKV++, a method improved upon SnapKV by introducing adaptive pooling
size and full compatibility with grouped-query attention. In the second stage,
it adopts a hybrid attention method to conduct fine-grain top-k sparse
attention, approximating the attention scores by leveraging both head and
sequence dimensional reductions. Combining these two stages, RocketKV achieves
significant KV cache fetching bandwidth and storage savings while maintaining
comparable accuracy to full KV cache attention. We show that RocketKV provides
end-to-end speedup by up to 3$\times$ as well as peak memory reduction by up to
31% in the decode phase on an NVIDIA H100 GPU compared to the full KV cache
baseline, while achieving negligible accuracy loss on a variety of long-context
tasks.
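The second stage's fine-grain top-k sparse attention can be sketched generically (exact scores, softmax restricted to the k highest-scoring keys; RocketKV additionally approximates the scores themselves via head and sequence dimensional reductions, which this sketch omits):

```python
import numpy as np

def topk_sparse_attention(q, K, V, k):
    """Single-query attention with the softmax restricted to the k
    highest-scoring keys. Generic sketch of fine-grain top-k sparse
    attention; RocketKV approximates the scores via head and sequence
    dimensional reductions, which is omitted here.
    """
    scores = K @ q / np.sqrt(q.size)          # scaled dot-product scores
    top = np.argpartition(scores, -k)[-k:]    # indices of the k best keys
    w = np.exp(scores[top] - scores[top].max())
    w /= w.sum()                              # softmax over the kept keys
    return w @ V[top]
```

With k equal to the sequence length this reduces to full attention; smaller k trades accuracy for fetching only k rows of K and V, which is the bandwidth saving the decode phase benefits from.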
|
2502.14053
|
Goggin's corrected Kalman Filter: Guarantees and Filtering Regimes
|
cs.IT math.IT
|
In this paper we revisit a non-linear filter for {\em non-Gaussian} noises
that was introduced in [1]. Goggin proved that transforming the observations by
the score function and then applying the Kalman Filter (KF) to the transformed
observations results in an asymptotically optimal filter. In the current paper,
we study the convergence rate of Goggin's filter in a pre-limit setting that
allows us to study a range of signal-to-noise regimes which includes, as a
special case, Goggin's setting. Our guarantees are explicit in the level of
observation noise, and unlike most other works in filtering, we do not assume
Gaussianity of the noises.
Our proofs build on combining simple tools from two separate literature
streams. One is a general posterior Cram\'er-Rao lower bound for filtering. The
other is convergence-rate bounds in the Fisher information central limit
theorem.
Along the way, we also study filtering regimes for linear state-space models,
characterizing clearly degenerate regimes -- where trivial filters are nearly
optimal -- and a {\em balanced} regime, which is where Goggin's filter has the
most value. \footnote{This work has been submitted to the IEEE for possible
publication. Copyright may be transferred without notice, after which this
version may no longer be accessible.}
|
2502.14054
|
A Low-Complexity Scheme for Multi-Message Private Information Retrieval
|
cs.IT math.IT
|
Private Information Retrieval (PIR) is a fundamental problem in the broader
fields of security and privacy. In recent years, the problem has garnered
significant attention from the research community, leading to achievability
schemes and converse results for many important PIR settings.
This paper focuses on the Multi-message Private Information Retrieval (MPIR)
setting, where a user aims to retrieve \(D\) messages from a database of \(K\)
messages, with identical copies of the database available on \(N\) remote
servers. The user's goal is to maximize the download rate while keeping the
identities of the retrieved messages private. Existing approaches to the MPIR
problem primarily focus on either scalar-linear solutions or vector-linear
solutions, the latter requiring a high degree of subpacketization. Furthermore,
prior scalar-linear solutions are restricted to the special case of \(N =
D+1\). This limitation hinders the practical adoption of these schemes, as
real-world applications demand simple, easily implementable solutions that
support a broad range of scenarios.
In this work, we present a solution for the MPIR problem, which applies to a
broader range of system parameters and requires a limited degree of
subpacketization. In particular, the proposed scheme applies to all values of
\(N=DL+1\) for any integer \(L\geq 1\), and requires a degree of
subpacketization \(L\). Our scheme achieves capacity when \(D\) divides \(K\),
and in all other cases, its performance matches or comes within a small
additive margin of the best-known scheme that requires a high degree of
subpacketization.
|
2502.14060
|
New Lower Bounds for Stochastic Non-Convex Optimization through
Divergence Composition
|
stat.ML cs.LG math.OC
|
We study fundamental limits of first-order stochastic optimization in a range
of nonconvex settings, including L-smooth functions satisfying Quasar-Convexity
(QC), Quadratic Growth (QG), and Restricted Secant Inequalities (RSI). While
the convergence properties of standard algorithms are well-understood in
deterministic regimes, significantly fewer results address the stochastic case,
where only unbiased and noisy gradients are available. We establish new lower
bounds on the number of noisy gradient queries to minimize these classes of
functions, also showing that they are tight (up to a logarithmic factor) in all
the relevant quantities characterizing each class. Our approach reformulates
the optimization task as a function identification problem, leveraging
divergence composition arguments to construct a challenging subclass that leads
to sharp lower bounds. Furthermore, we present a specialized algorithm in the
one-dimensional setting that achieves faster rates, suggesting that certain
dimensional thresholds are intrinsic to the complexity of non-convex stochastic
optimization.
|
2502.14061
|
EfficientPose 6D: Scalable and Efficient 6D Object Pose Estimation
|
cs.CV cs.AI cs.LG
|
In industrial applications requiring real-time feedback, such as quality
control and robotic manipulation, the demand for high-speed and accurate pose
estimation remains critical. Despite advances improving speed and accuracy in
pose estimation, finding a balance between computational efficiency and
accuracy poses significant challenges in dynamic environments. Most current
algorithms lack scalability in estimation time, especially for diverse
datasets, and the state-of-the-art (SOTA) methods are often too slow. This
study focuses on developing a fast and scalable set of pose estimators based on
GDRNPP to meet or exceed current benchmarks in accuracy and robustness,
particularly addressing the efficiency-accuracy trade-off essential in
real-time scenarios. We propose the AMIS algorithm to tailor the utilized model
according to an application-specific trade-off between inference time and
accuracy. We further show the effectiveness of the AMIS-based model choice on
four prominent benchmark datasets (LM-O, YCB-V, T-LESS, and ITODD).
|
2502.14063
|
PedDet: Adaptive Spectral Optimization for Multimodal Pedestrian
Detection
|
cs.CV
|
Pedestrian detection in intelligent transportation systems has made
significant progress but faces two critical challenges: (1) insufficient fusion
of complementary information between visible and infrared spectra, particularly
in complex scenarios, and (2) sensitivity to illumination changes, such as
low-light or overexposed conditions, leading to degraded performance. To
address these issues, we propose PedDet, an adaptive spectral optimization
complementarity framework specifically enhanced and optimized for multispectral
pedestrian detection. PedDet introduces the Multi-scale Spectral Feature
Perception Module (MSFPM) to adaptively fuse visible and infrared features,
enhancing robustness and flexibility in feature extraction. Additionally, the
Illumination Robustness Feature Decoupling Module (IRFDM) improves detection
stability under varying lighting by decoupling pedestrian and background
features. We further design a contrastive alignment to enhance intermodal
feature discrimination. Experiments on LLVIP and MSDS datasets demonstrate that
PedDet achieves state-of-the-art performance, improving the mAP by 6.6% with
superior detection accuracy even in low-light conditions, marking a significant
step forward for road safety. Code will be available at
https://github.com/AIGeeksGroup/PedDet.
|
2502.14064
|
Triad: Vision Foundation Model for 3D Magnetic Resonance Imaging
|
cs.CV cs.AI
|
Vision foundation models (VFMs) are pre-trained on extensive image datasets
to learn general representations for diverse types of data. These models can
subsequently be fine-tuned for specific downstream tasks, significantly
boosting performance across a broad range of applications. However, existing
vision foundation models that claim to be applicable to various radiology tasks
are mostly pre-trained on 3D computed tomography (CT), which benefits from the
availability of extensive 3D CT databases. Significant differences between CT
and magnetic resonance imaging (MRI) in imaging principles, signal
characteristics, and data distribution may hinder their practical performance
and versatility in MRI-specific applications. Here, we propose Triad, a vision
foundation model for 3D MRI. Triad adopts a widely used autoencoder
architecture to learn robust representations from 131,170 3D MRI volumes and
uses organ-independent imaging descriptions to constrain the semantic
distribution of the visual modality. The above pre-training dataset is called
Triad-131K, which is currently the largest 3D MRI pre-training dataset. We
evaluate Triad across three tasks, namely organ/tumor segmentation,
organ/cancer classification, and medical image registration, in both
within-domain and out-of-domain settings using 25 downstream
datasets. By initializing models with Triad's pre-trained weights, nnUNet-Triad
improves segmentation performance by 6.88% compared to nnUNet-Scratch across 17
datasets. Swin-B-Triad achieves a 3.97% improvement over Swin-B-Scratch in
classification tasks across five datasets. SwinUNETR-Triad improves by 4.00%
compared to SwinUNETR-Scratch in registration tasks across two datasets. Our
study demonstrates that pre-training can maximize performance when the data
modalities and organs of upstream and downstream tasks are consistent.
|
2502.14066
|
Experiment Design with Gaussian Process Regression with Applications to
Chance-Constrained Control
|
eess.SY cs.SY
|
Learning for control in repeated tasks allows for well-designed experiments
to gather the most useful data. We consider the setting in which we use a
data-driven controller that does not have access to the true system dynamics.
Rather, the controller uses inferred dynamics based on the available
information. In order to acquire data that is beneficial for this controller,
we present an experimental design approach that leverages the current data to
improve expected control performance. We focus on the setting in which
inference on the unknown dynamics is performed using Gaussian processes.
Gaussian processes not only provide uncertainty quantification but also allow
us to leverage structures inherent to Gaussian random variables. Through this
structure, we design experiments via gradient descent on the expected control
performance with respect to the experiment input. In particular, we focus on a
chance-constrained minimum expected time control problem. Numerical
demonstrations of our approach indicate our experimental design outperforms
relevant benchmarks.
|
2502.14068
|
A Racing Dataset and Baseline Model for Track Detection in Autonomous
Racing
|
cs.CV cs.AI eess.IV
|
A significant challenge in racing-related research is the lack of publicly
available datasets containing raw images with corresponding annotations for the
downstream task. In this paper, we introduce RoRaTrack, a novel dataset that
contains annotated multi-camera image data from racing scenarios for track
detection. The data is collected on a Dallara AV-21 at a racing circuit in
Indiana, in collaboration with the Indy Autonomous Challenge (IAC). RoRaTrack
addresses common problems such as blurriness due to high speed, color inversion
from the camera, and absence of lane markings on the track. Consequently, we
propose RaceGAN, a baseline model based on a Generative Adversarial Network
(GAN) that effectively addresses these challenges. The proposed model
demonstrates superior performance compared to current state-of-the-art machine
learning models in track detection. The dataset and code for this work are
available at github.com/RaceGAN.
|
2502.14070
|
DiffExp: Efficient Exploration in Reward Fine-tuning for Text-to-Image
Diffusion Models
|
cs.CV cs.AI
|
Fine-tuning text-to-image diffusion models to maximize rewards has proven
effective for enhancing model performance. However, reward fine-tuning methods
often suffer from slow convergence due to online sample generation. Therefore,
obtaining diverse samples with strong reward signals is crucial for improving
sample efficiency and overall performance. In this work, we introduce DiffExp,
a simple yet effective exploration strategy for reward fine-tuning of
text-to-image models. Our approach employs two key strategies: (a) dynamically
adjusting the scale of classifier-free guidance to enhance sample diversity,
and (b) randomly weighting phrases of the text prompt to exploit high-quality
reward signals. We demonstrate that these strategies significantly enhance
exploration during online sample generation, improving the sample efficiency of
recent reward fine-tuning methods, such as DDPO and AlignProp.
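Strategy (a), drawing the classifier-free guidance scale at random per sample, can be sketched as below; the scale range is an arbitrary placeholder, not a value taken from the paper:

```python
import numpy as np

def guided_noise(eps_uncond, eps_cond, w_range=(5.0, 12.0), rng=None):
    """Classifier-free guidance with a guidance scale drawn uniformly at
    random per sample, sketching strategy (a). The range (5, 12) is an
    arbitrary placeholder, not a value from the paper.
    """
    rng = rng if rng is not None else np.random.default_rng()
    w = rng.uniform(*w_range)                        # per-sample scale
    return eps_uncond + w * (eps_cond - eps_uncond), w
```

Varying the scale across online samples spreads generations between strongly prompt-faithful (high w) and more diverse (low w) regions, which is the intended exploration effect.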
|
2502.14074
|
Investigating Non-Transitivity in LLM-as-a-Judge
|
cs.AI cs.CL cs.LG
|
Automatic evaluation methods based on large language models (LLMs) are
emerging as the standard tool for assessing the instruction-following abilities
of LLM-based agents. The most common method in this paradigm, pairwise
comparisons with a baseline model, critically depends on the assumption of
transitive preferences. However, the validity of this assumption remains
largely unexplored. In this study, we investigate the presence of
non-transitivity within the AlpacaEval framework and analyze its effects on
model rankings. We find that LLM judges exhibit non-transitive preferences,
leading to rankings that are sensitive to the choice of the baseline model. To
mitigate this issue, we show that round-robin tournaments combined with
Bradley-Terry models of preference can produce more reliable rankings. Notably,
our method increases both the Spearman correlation and the Kendall correlation
with Chatbot Arena (95.0% -> 96.4% and 82.1% -> 86.3% respectively). To address
the computational cost of round-robin tournaments, we propose Swiss-Wise
Iterative Matchmaking (Swim) tournaments, using a dynamic matching strategy to
capture the benefits of round-robin tournaments while maintaining computational
efficiency.
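A Bradley-Terry model can be fit from a round-robin win matrix with the classic minorization-maximization update; this generic routine is a sketch, not the AlpacaEval-specific ranking pipeline:

```python
import numpy as np

def bradley_terry(wins, iters=500):
    """Fit Bradley-Terry strengths from a pairwise win-count matrix
    (wins[i, j] = number of times model i beat model j) via the classic
    minorization-maximization update. Generic routine, not the
    AlpacaEval-specific pipeline.
    """
    n = wins.shape[0]
    p = np.ones(n) / n
    for _ in range(iters):
        games = wins + wins.T                     # games played per pair
        denom = games / (p[:, None] + p[None, :])
        np.fill_diagonal(denom, 0.0)
        p = wins.sum(axis=1) / denom.sum(axis=1)  # MM update
        p /= p.sum()                              # fix the overall scale
    return p
```

Sorting models by the fitted strengths yields a ranking that aggregates every matchup and is therefore insensitive to the choice of a single baseline model.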
|
2502.14075
|
Towards Vector Optimization on Low-Dimensional Vector Symbolic
Architecture
|
cs.LG
|
Vector Symbolic Architecture (VSA) is emerging in machine learning due to its
efficiency, but it is hindered by issues of hyperdimensionality and
accuracy. As a promising mitigation, the Low-Dimensional Computing (LDC) method
significantly reduces the vector dimension by ~100 times while maintaining
accuracy, by employing a gradient-based optimization. Despite its potential,
LDC optimization for VSA is still underexplored. Our investigation into vector
updates underscores the importance of stable, adaptive dynamics in LDC
training. We also reveal the overlooked yet critical roles of batch
normalization (BN) and knowledge distillation (KD) in standard approaches.
Besides the accuracy boost, BN does not add computational overhead during
inference, and KD significantly enhances inference confidence. Through
extensive experiments and ablation studies across multiple benchmarks, we
provide a thorough evaluation of our approach and extend the interpretability
of binary neural network optimization similar to LDC, previously unaddressed in
BNN literature.
|
2502.14079
|
Population Dynamics Control with Partial Observations
|
math.OC cs.LG
|
We study the problem of controlling population dynamics, a class of linear
dynamical systems evolving on the probability simplex, from the perspective of
online non-stochastic control. While Golowich et al. (2024) analyzed the fully
observable setting, we focus on the more realistic, partially observable case,
where only a low-dimensional representation of the state is accessible.
In classical non-stochastic control, inputs are set as linear combinations of
past disturbances. However, under partial observations, disturbances cannot be
directly computed. To address this, Simchowitz et al. (2020) proposed to
construct oblivious signals, which are counterfactual observations with zero
control, as a substitute. This raises several challenges in our setting: (1)
how to construct oblivious signals under simplex constraints, where zero
control is infeasible; (2) how to design a sufficiently expressive convex
controller parameterization tailored to these signals; and (3) how to enforce
the simplex constraint on control when projections may break the convexity of
cost functions.
Our main contribution is a new controller that achieves the optimal
$\tilde{O}(\sqrt{T})$ regret with respect to a natural class of mixing linear
dynamic controllers. To tackle these challenges, we construct signals based on
hypothetical observations under a constant control adapted to the simplex
domain, and introduce a new controller parameterization that approximates
general control policies linear in non-oblivious observations. Furthermore, we
employ a novel convex extension surrogate loss, inspired by Lattimore (2024), to
bypass the projection-induced convexity issue.
|
2502.14080
|
Personalized Education with Generative AI and Digital Twins: VR, RAG,
and Zero-Shot Sentiment Analysis for Industry 4.0 Workforce Development
|
cs.CY cs.AI
|
The Fourth Industrial Revolution (4IR) technologies, such as cloud computing,
machine learning, and AI, have improved productivity but introduced challenges
in workforce training and reskilling. This is critical given existing workforce
shortages, especially in marginalized communities like Underrepresented
Minorities (URM), who often lack access to quality education. Addressing these
challenges, this research presents gAI-PT4I4, a Generative AI-based
Personalized Tutor for Industry 4.0, designed to personalize 4IR experiential
learning. gAI-PT4I4 employs sentiment analysis to assess student comprehension,
leveraging generative AI and a finite automaton to tailor learning experiences.
The framework integrates low-fidelity Digital Twins for VR-based training,
featuring an Interactive Tutor - a generative AI assistant providing real-time
guidance via audio and text. It uses zero-shot sentiment analysis with LLMs and
prompt engineering, achieving 86\% accuracy in classifying student-teacher
interactions as positive or negative. Additionally, retrieval-augmented
generation (RAG) enables personalized learning content grounded in
domain-specific knowledge. To adapt training dynamically, a finite automaton
structures exercises into states of increasing difficulty, requiring 80\%
task-performance accuracy for progression. Experimental evaluation with 22
volunteers showed improved accuracy exceeding 80\%, reducing training time.
Finally, this paper introduces a Multi-Fidelity Digital Twin model, aligning
Digital Twin complexity with Bloom's Taxonomy and Kirkpatrick's model,
providing a scalable educational framework.
|
2502.14083
|
Are Rules Meant to be Broken? Understanding Multilingual Moral Reasoning
as a Computational Pipeline with UniMoral
|
cs.CL
|
Moral reasoning is a complex cognitive process shaped by individual
experiences and cultural contexts and presents unique challenges for
computational analysis. While natural language processing (NLP) offers
promising tools for studying this phenomenon, current research lacks cohesion,
employing discordant datasets and tasks that examine isolated aspects of moral
reasoning. We bridge this gap with UniMoral, a unified dataset integrating
psychologically grounded and social-media-derived moral dilemmas annotated with
labels for action choices, ethical principles, contributing factors, and
consequences, alongside annotators' moral and cultural profiles. Recognizing
the cultural relativity of moral reasoning, UniMoral spans six languages,
Arabic, Chinese, English, Hindi, Russian, and Spanish, capturing diverse
socio-cultural contexts. We demonstrate UniMoral's utility through benchmark
evaluations of three large language models (LLMs) across four tasks: action
prediction, moral typology classification, factor attribution analysis, and
consequence generation. Key findings reveal that while implicitly embedded
moral contexts enhance the moral reasoning capability of LLMs, there remains a
critical need for increasingly specialized approaches to further advance moral
reasoning in these models.
|
2502.14086
|
Navigating Semantic Relations: Challenges for Language Models in
Abstract Common-Sense Reasoning
|
cs.CL cs.AI
|
Large language models (LLMs) have achieved remarkable performance in
generating human-like text and solving reasoning tasks of moderate complexity,
such as question-answering and mathematical problem-solving. However, their
capabilities in tasks requiring deeper cognitive skills, such as common-sense
understanding and abstract reasoning, remain under-explored. In this paper, we
systematically evaluate abstract common-sense reasoning in LLMs using the
ConceptNet knowledge graph. We propose two prompting approaches: instruct
prompting, where models predict plausible semantic relationships based on
provided definitions, and few-shot prompting, where models identify relations
using examples as guidance. Our experiments with the gpt-4o-mini model show
that in instruct prompting, consistent performance is obtained when ranking
multiple relations, but performance declines substantially when the model is
restricted to predicting only one relation. In few-shot prompting, the model's
accuracy improves significantly when selecting from five relations rather than
the full set, although with notable bias toward certain relations. These
results suggest that significant gaps remain between the abstract common-sense
reasoning abilities of even commercially deployed LLMs and human-level
understanding. However, the
findings also highlight the promise of careful prompt engineering, based on
selective retrieval, for obtaining better performance.
|
2502.14087
|
Learning from End User Data with Shuffled Differential Privacy over
Kernel Densities
|
cs.LG cs.CR cs.DS
|
We study a setting of collecting and learning from private data distributed
across end users. In the shuffled model of differential privacy, the end users
partially protect their data locally before sharing it, and their data is also
anonymized during its collection to enhance privacy. This model has recently
become a prominent alternative to central DP, which requires full trust in a
central data curator, and local DP, where fully local data protection takes a
steep toll on downstream accuracy.
Our main technical result is a shuffled DP protocol for privately estimating
the kernel density function of a distributed dataset, with accuracy essentially
matching central DP. We use it to privately learn a classifier from the end
user data, by learning a private density function per class. Moreover, we show
that the density function itself can recover the semantic content of its class,
despite having been learned in the absence of any unprotected data. Our
experiments show the favorable downstream performance of our approach, and
highlight key downstream considerations and trade-offs in a practical ML
deployment of shuffled DP.
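As a rough illustration of the learning step, per-class kernel density scoring with noise added to the aggregate can be sketched as follows. This is a toy stand-in: the paper's shuffled-DP mechanism, noise calibration, and anonymization pipeline are more involved, and all names here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def private_kde_score(data, query, bandwidth=1.0, noise_scale=0.1):
    """Gaussian-kernel density estimate at `query`, with noise added to the
    aggregate. A toy stand-in for the shuffled-DP protocol: the real
    mechanism, noise calibration, and shuffling/anonymization differ."""
    sq_dists = np.sum((data - query) ** 2, axis=1)
    density = np.mean(np.exp(-sq_dists / (2 * bandwidth ** 2)))
    return density + rng.normal(0.0, noise_scale / len(data))

def classify(query, class_data):
    """Predict the class whose (privately estimated) density is highest."""
    scores = {c: private_kde_score(X, query) for c, X in class_data.items()}
    return max(scores, key=scores.get)

# Two well-separated synthetic classes in 2-D
class_data = {
    "a": rng.normal(loc=0.0, scale=0.5, size=(200, 2)),
    "b": rng.normal(loc=5.0, scale=0.5, size=(200, 2)),
}
print(classify(np.array([0.2, -0.1]), class_data))  # -> "a"
```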
|
2502.14088
|
Regression in EO: Are VLMs Up to the Challenge?
|
cs.CV
|
Earth Observation (EO) data encompass a vast range of remotely sensed
information, featuring multi-sensor and multi-temporal observations, and play
an indispensable role in understanding our planet's dynamics. Recently, Vision
Language Models (VLMs) have achieved remarkable success in perception and
reasoning tasks, bringing new insights and opportunities to the EO field.
However, the potential for EO applications, especially for scientific
regression-related applications, remains largely unexplored. This paper bridges
that gap by systematically examining the challenges and opportunities of
adapting VLMs for EO regression tasks. The discussion first contrasts the
distinctive properties of EO data with conventional computer vision datasets,
then identifies four core obstacles in applying VLMs to EO regression: 1) the
absence of dedicated benchmarks, 2) the discrete-versus-continuous
representation mismatch, 3) cumulative error propagation, and 4) the
suboptimal nature of text-centric training objectives for numerical tasks.
Next, a series of methodological insights and potential subtle pitfalls are
explored. Lastly, we offer some promising future directions for designing
robust, domain-aware solutions. Our findings highlight the promise of VLMs for
scientific regression in EO, setting the stage for more precise and
interpretable modeling of critical environmental processes.
|
2502.14090
|
MambaLiteSR: Image Super-Resolution with Low-Rank Mamba using Knowledge
Distillation
|
eess.IV cs.CV
|
Generative Artificial Intelligence (AI) has gained significant attention in
recent years, revolutionizing various applications across industries. Among
these, advanced vision models for image super-resolution are in high demand,
particularly for deployment on edge devices where real-time processing is
crucial. However, deploying such models on edge devices is challenging due to
limited computing power and memory. In this paper, we present MambaLiteSR, a
novel lightweight image Super-Resolution (SR) model that utilizes the
architecture of Vision Mamba. It integrates State Space Blocks and a
reconstruction module for efficient feature extraction. To optimize efficiency
without affecting performance, MambaLiteSR employs knowledge distillation to
transfer key insights from a larger Mamba-based teacher model to a smaller
student model via hyperparameter tuning. Through mathematical analysis of model
parameters and their impact on PSNR, we identify key factors and adjust them
accordingly. Our comprehensive evaluation shows that MambaLiteSR outperforms
state-of-the-art edge SR methods by reducing power consumption while
maintaining competitive PSNR and SSIM scores across benchmark datasets. It also
reduces power usage during training via low-rank approximation. Moreover,
MambaLiteSR reduces parameters with minimal performance loss, enabling
efficient deployment of generative AI models on resource-constrained devices.
Deployment on the embedded NVIDIA Jetson Orin Nano confirms MambaLiteSR's
superior balance of size, latency, and efficiency. Experiments show that
MambaLiteSR achieves performance comparable to both the baseline and other edge
models while using 15% fewer parameters. It also reduces power consumption by
up to 58% compared to state-of-the-art SR edge models, all while maintaining
low energy use during training.
|
2502.14092
|
Hybrid Visual Servoing of Tendon-driven Continuum Robots
|
cs.RO cs.CV cs.SY eess.SY
|
This paper introduces a novel Hybrid Visual Servoing (HVS) approach for
controlling tendon-driven continuum robots (TDCRs). The HVS system combines
Image-Based Visual Servoing (IBVS) with Deep Learning-Based Visual Servoing
(DLBVS) to overcome the limitations of each method and improve overall
performance. IBVS offers higher accuracy and faster convergence in feature-rich
environments, while DLBVS enhances robustness against disturbances and offers a
larger workspace. By enabling smooth transitions between IBVS and DLBVS, the
proposed HVS ensures effective control in dynamic, unstructured environments.
The effectiveness of this approach is validated through simulations and
real-world experiments, demonstrating that HVS achieves reduced iteration time,
faster convergence, lower final error, and smoother performance compared to
DLBVS alone, while maintaining DLBVS's robustness in challenging conditions
such as occlusions, lighting changes, actuator noise, and physical impacts.
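A smooth transition between the two controllers can be sketched as a confidence-weighted blend of their velocity commands. The switching signal and thresholds below are assumptions for illustration, not the paper's actual transition rule:

```python
import numpy as np

def blend_control(v_ibvs, v_dlbvs, feature_quality, lo=0.2, hi=0.8):
    """Smoothly blend IBVS and DLBVS velocity commands based on a
    feature-quality score in [0, 1] (an illustrative switching signal).
    High quality -> trust the precise IBVS command; low -> fall back to
    the more robust learned DLBVS command."""
    lam = np.clip((feature_quality - lo) / (hi - lo), 0.0, 1.0)
    return lam * v_ibvs + (1.0 - lam) * v_dlbvs

v_ibvs = np.array([0.10, 0.00, 0.05])   # precise but feature-dependent
v_dlbvs = np.array([0.08, 0.02, 0.04])  # robust learned command
print(blend_control(v_ibvs, v_dlbvs, feature_quality=0.9))  # -> pure IBVS
```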
|
2502.14094
|
CND-IDS: Continual Novelty Detection for Intrusion Detection Systems
|
cs.CR cs.LG
|
Intrusion detection systems (IDS) play a crucial role in IoT and network
security by monitoring system data and alerting to suspicious activities.
Machine learning (ML) has emerged as a promising solution for IDS, offering
highly accurate intrusion detection. However, ML-IDS solutions often overlook
two critical aspects needed to build reliable systems: continually changing
data streams and a lack of attack labels. Streaming network traffic and
associated cyber attacks are continually changing, which can degrade the
performance of deployed ML models. Labeling attack data, such as zero-day
attacks, in real-world intrusion scenarios may not be feasible, making the use
of ML solutions that do not rely on attack labels necessary. To address both
these challenges, we propose CND-IDS, a continual novelty detection IDS
framework which consists of (i) a learning-based feature extractor that
continuously updates new feature representations of the system data, and (ii) a
novelty detector that identifies new cyber attacks by leveraging principal
component analysis (PCA) reconstruction. Our results on realistic intrusion
datasets show that CND-IDS achieves up to 6.1x F-score improvement, and up to
6.5x improved forward transfer over the SOTA unsupervised continual learning
algorithm. Our code will be released upon acceptance.
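The PCA-reconstruction novelty detector at the core of component (ii) can be sketched in a few lines: fit a low-rank subspace to known-benign features and flag samples whose reconstruction error exceeds a calibrated threshold. The synthetic data and the 99th-percentile threshold are illustrative choices:

```python
import numpy as np

def fit_pca(X, k):
    """Fit a rank-k PCA: return the mean and top-k principal directions."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def reconstruction_error(X, mu, components):
    """Project onto the principal subspace and measure what is lost.
    Samples unlike the training data reconstruct poorly -> high error."""
    Z = (X - mu) @ components.T
    X_hat = Z @ components + mu
    return np.linalg.norm(X - X_hat, axis=1)

rng = np.random.default_rng(1)
# "Benign" traffic lives near a 2-D subspace of a 10-D feature space
W = rng.normal(size=(2, 10))
benign = rng.normal(size=(500, 2)) @ W
mu, comps = fit_pca(benign, k=2)

threshold = np.percentile(reconstruction_error(benign, mu, comps), 99)
novel = rng.normal(size=(5, 10)) * 5.0           # off-subspace samples
flags = reconstruction_error(novel, mu, comps) > threshold
print(flags)  # off-subspace samples are flagged as novel
```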
|
2502.14095
|
Retrieving Versus Understanding Extractive Evidence in Few-Shot Learning
|
cs.CL
|
A key aspect of alignment is the proper use of within-document evidence to
construct document-level decisions. We analyze the relationship between the
retrieval and interpretation of within-document evidence for large language
models in a few-shot setting. Specifically, we measure the extent to which model
prediction errors are associated with evidence retrieval errors with respect to
gold-standard human-annotated extractive evidence for five datasets, using two
popular closed proprietary models. We perform two ablation studies to
investigate when both label prediction and evidence retrieval errors can be
attributed to qualities of the relevant evidence. We find that there is a
strong empirical relationship between model prediction and evidence retrieval
error, but that evidence retrieval error is mostly not associated with evidence
interpretation error--a hopeful sign for downstream applications built on this
mechanism.
|
2502.14096
|
Aligned Multi Objective Optimization
|
cs.LG math.OC
|
To date, the multi-objective optimization literature has mainly focused on
conflicting objectives, studying the Pareto front, or requiring users to
balance tradeoffs. Yet, in machine learning practice, there are many scenarios
where such conflict does not take place. Recent findings from multi-task
learning, reinforcement learning, and LLMs training show that diverse related
tasks can enhance performance across objectives simultaneously. Despite this
evidence, this phenomenon has not been examined from an optimization
perspective. This leads to a lack of generic gradient-based methods that can
scale to scenarios with a large number of related objectives. To address this
gap, we introduce the Aligned Multi-Objective Optimization framework, propose
new algorithms for this setting, and provide theoretical guarantees of their
superior performance compared to naive approaches.
|
2502.14099
|
Point Cloud Geometry Scalable Coding Using a Resolution and
Quality-conditioned Latents Probability Estimator
|
cs.CV
|
In the current age, users consume multimedia content in very heterogeneous
scenarios in terms of network, hardware, and display capabilities. A naive
solution to this problem is to encode multiple independent streams, each
covering a different possible requirement for the clients, with an obvious
negative impact in both storage and computational requirements. These drawbacks
can be avoided by using codecs that enable scalability, i.e., the ability to
generate a progressive bitstream, containing a base layer followed by multiple
enhancement layers, that allows decoding the same bitstream to serve multiple
reconstructions and visualization specifications. While scalable coding is a
well-known and addressed feature in conventional image and video codecs, this
paper focuses on a new and very different problem, notably the development of
scalable coding solutions for deep learning-based Point Cloud (PC) coding. The
peculiarities of this 3D representation make it hard to implement flexible
solutions that do not compromise the other functionalities of the codec. This
paper proposes a joint quality and resolution scalability scheme, named
Scalable Resolution and Quality Hyperprior (SRQH), that, contrary to previous
solutions, can model the relationship between latents obtained with models
trained for different RD tradeoffs and/or at different resolutions.
Experimental results obtained by integrating SRQH in the emerging JPEG Pleno
learning-based PC coding standard show that SRQH allows decoding the PC at
different qualities and resolutions with a single bitstream while incurring
only a limited RD penalty and a modest increase in complexity w.r.t.
non-scalable JPEG PCC, which would require one bitstream per coding configuration.
|
2502.14100
|
Towards Context-Robust LLMs: A Gated Representation Fine-tuning Approach
|
cs.CL cs.IR
|
Large Language Models (LLMs) enhanced with external contexts, such as through
retrieval-augmented generation (RAG), often face challenges in handling
imperfect evidence. They tend to over-rely on external knowledge, making them
vulnerable to misleading and unhelpful contexts. To address this, we propose
the concept of context-robust LLMs, which can effectively balance internal
knowledge with external context, similar to human cognitive processes.
Specifically, context-robust LLMs should rely on external context only when
lacking internal knowledge, identify contradictions between internal and
external knowledge, and disregard unhelpful contexts. To achieve this goal, we
introduce Grft, a lightweight and plug-and-play gated representation
fine-tuning approach. Grft consists of two key components: a gating mechanism
to detect and filter problematic inputs, and low-rank representation adapters
to adjust hidden representations. By training a lightweight intervention
function with only 0.0004\% of the model size on fewer than 200 examples, Grft
can effectively adapt LLMs towards context-robust behaviors.
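The two components (a gate that decides whether to intervene, and a low-rank adapter that adjusts hidden representations) can be sketched as follows. Shapes, the sigmoid gate, and the additive update are assumptions for illustration, not Grft's exact parameterization:

```python
import numpy as np

rng = np.random.default_rng(5)
d, r = 64, 4                             # hidden size, adapter rank (illustrative)

A = rng.normal(scale=0.1, size=(d, r))   # low-rank adapter: delta = (h @ A) @ B
B = rng.normal(scale=0.1, size=(r, d))
w_gate = rng.normal(scale=0.1, size=d)   # gate parameters

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_adapter(h):
    """Gated representation intervention (sketch): a scalar gate in [0, 1]
    decides how strongly to apply a low-rank update to each hidden state."""
    gate = sigmoid(h @ w_gate)           # one gate value per batch element
    delta = (h @ A) @ B                  # rank-r adjustment, r << d
    return h + gate[:, None] * delta

h = rng.normal(size=(8, d))              # a batch of hidden states
out = gated_adapter(h)
print(out.shape)  # (8, 64)
```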
|
2502.14102
|
Explainable Distributed Constraint Optimization Problems
|
cs.AI
|
The Distributed Constraint Optimization Problem (DCOP) formulation is a
powerful tool to model cooperative multi-agent problems that need to be solved
distributively. A core assumption of existing approaches is that DCOP solutions
can be easily understood, accepted, and adopted, which may not hold, as
evidenced by the large body of literature on Explainable AI. In this paper, we
propose the Explainable DCOP (X-DCOP) model, which extends a DCOP to include
its solution and a contrastive query for that solution. We formally define
key properties that contrastive explanations must satisfy to be considered
valid solutions to X-DCOPs, and present theoretical results on the existence
of such valid explanations. To solve X-DCOPs, we propose a
distributed framework as well as several optimizations and suboptimal variants
to find valid explanations. We also include a human user study that showed that
users, not surprisingly, prefer shorter explanations over longer ones. Our
empirical evaluations showed that our approach can scale to large problems, and
the different variants provide different options for trading off explanation
lengths for smaller runtimes. Thus, our model and algorithmic contributions
extend the state of the art by reducing the barrier for users to understand
DCOP solutions, facilitating their adoption in more real-world applications.
|
2502.14105
|
Conformal Prediction under L\'evy-Prokhorov Distribution Shifts:
Robustness to Local and Global Perturbations
|
stat.ML cs.LG math.ST stat.ME stat.TH
|
Conformal prediction provides a powerful framework for constructing
prediction intervals with finite-sample guarantees, yet its robustness under
distribution shifts remains a significant challenge. This paper addresses this
limitation by modeling distribution shifts using L\'evy-Prokhorov (LP)
ambiguity sets, which capture both local and global perturbations. We provide a
self-contained overview of LP ambiguity sets and their connections to popular
metrics such as Wasserstein and Total Variation. We show that the link between
conformal prediction and LP ambiguity sets is a natural one: by propagating the
LP ambiguity set through the scoring function, we reduce complex
high-dimensional distribution shifts to manageable one-dimensional distribution
shifts, enabling exact quantification of worst-case quantiles and coverage.
Building on this analysis, we construct robust conformal prediction intervals
that remain valid under distribution shifts, explicitly linking LP parameters
to interval width and confidence levels. Experimental results on real-world
datasets demonstrate the effectiveness of the proposed approach.
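The reduction to a one-dimensional shift over scores suggests a simple sketch: compute the usual split-conformal quantile of calibration scores, then inflate the quantile level for the global (mass) perturbation and shift the threshold for the local (metric) perturbation. The parameter names and exact adjustment below are illustrative, not the paper's worst-case formulas:

```python
import numpy as np

def robust_conformal_quantile(scores, alpha, eps=0.0, rho=0.0):
    """Split-conformal threshold with an LP-style robustness adjustment:
    rho inflates the quantile level (global mass perturbation) and eps shifts
    the threshold (local metric perturbation). Illustrative only; the paper's
    exact worst-case quantile formulas differ."""
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha + rho)) / n)
    return np.quantile(scores, level, method="higher") + eps

rng = np.random.default_rng(2)
cal_scores = np.abs(rng.normal(size=1000))   # calibration nonconformity scores
q_plain = robust_conformal_quantile(cal_scores, alpha=0.1)
q_robust = robust_conformal_quantile(cal_scores, alpha=0.1, eps=0.2, rho=0.02)
print(q_plain <= q_robust)  # robust threshold is wider: True
```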
|
2502.14111
|
Comprehensive Review on the Control of Heat Pumps for Energy Flexibility
in Distribution Networks
|
eess.SY cs.SY
|
Decarbonization plans promote the transition to heat pumps (HPs), creating
new opportunities for their energy flexibility in demand response programs,
solar photovoltaic integration and optimization of distribution networks. This
paper reviews scheduling-based and real-time optimization methods for
controlling HPs with a focus on energy flexibility in distribution networks.
Scheduling-based methods fall into two categories: rule-based controllers
(RBCs), which rely on predefined control rules without explicitly seeking
optimal solutions, and optimization models, which are designed to determine the
optimal scheduling of operations. Real-time optimization is achieved through
model predictive control (MPC), which relies on a predictive model to optimize
decisions over a time horizon, and reinforcement learning (RL), which takes a
model-free approach by learning optimal strategies through direct interaction
with the environment. The paper also examines studies on the impact of HPs on
distribution networks, particularly those leveraging energy flexibility
strategies. Key takeaways suggest the need to validate control strategies for
extreme cold-weather regions that require backup heaters, as well as develop
approaches designed for demand charge schemes that integrate HPs with other
controllable loads. From a grid impact assessment perspective, studies have
focused primarily on RBCs for providing energy flexibility through HP
operation, without addressing more advanced methods such as real-time
optimization using MPC or RL-based algorithms. Incorporating these advanced
control strategies could help identify key limitations, including the impact of
varying user participation levels and the cost-benefit trade-offs associated
with their implementation.
|
2502.14112
|
To Stand on the Shoulders of Giants: Should We Protect Initial
Discoveries in Multi-Agent Exploration?
|
cs.MA
|
Exploring new ideas is a fundamental aspect of research and development
(R\&D), which often occurs in competitive environments. Most ideas are
subsequent, i.e. one idea today leads to more ideas tomorrow. According to one
approach, the best way to encourage exploration is by granting protection on
discoveries to the first innovator. Correspondingly, only the one who made the
first discovery can use the new knowledge and benefit from subsequent
discoveries, which in turn should increase the initial motivation to explore.
An alternative approach to promote exploration favors the \emph{sharing of
knowledge} from discoveries among researchers allowing explorers to use each
others' discoveries to develop further knowledge, as in the open-source
community. With no protection, all explorers have access to all existing
discoveries and new directions are explored faster.
We present a game theoretic analysis of an abstract research-and-application
game which clarifies the expected advantages and disadvantages of the two
approaches under full information. We then compare the theoretical predictions
with the observed behavior of actual players in the lab who operate under
partial information conditions in both worlds.
Our main experimental finding is that the no protection approach leads to
\emph{more} investment efforts overall, in contrast to theoretical prediction
and common economic wisdom, but in line with a familiar cognitive bias known as
`underweighting of rare events'.
|
2502.14113
|
Object-centric Binding in Contrastive Language-Image Pretraining
|
cs.CV cs.AI
|
Recent advances in vision language models (VLM) have been driven by
contrastive models such as CLIP, which learn to associate visual information
with their corresponding text descriptions. However, these models have
limitations in understanding complex compositional scenes involving multiple
objects and their spatial relationships. To address these challenges, we
propose a novel approach that diverges from commonly used strategies, which
rely on the design of hard-negative augmentations. Instead, our work focuses on
integrating inductive biases into pre-trained CLIP-like models to improve their
compositional understanding without using any additional hard-negatives. To
that end, we introduce a binding module that connects a scene graph, derived
from a text description, with a slot-structured image representation,
facilitating a structured similarity assessment between the two modalities. We
also leverage relationships as text-conditioned visual constraints, thereby
capturing the intricate interactions between objects and their contextual
relationships more effectively. Our resulting model not only enhances the
performance of CLIP-based models in multi-object compositional understanding
but also paves the way towards more accurate and sample-efficient image-text
matching of complex scenes.
|
2502.14114
|
Zero loss guarantees and explicit minimizers for generic
overparametrized Deep Learning networks
|
cs.LG cs.AI math.AP math.OC stat.ML
|
We determine sufficient conditions for overparametrized deep learning (DL)
networks to guarantee the attainability of zero loss in the context of
supervised learning, for the $\mathcal{L}^2$ cost and {\em generic} training
data. We present an explicit construction of the zero loss minimizers without
invoking gradient descent. On the other hand, we point out that increase of
depth can deteriorate the efficiency of cost minimization using a gradient
descent algorithm by analyzing the conditions for rank loss of the training
Jacobian. Our results clarify key aspects of the dichotomy between zero loss
reachability in underparametrized versus overparametrized DL.
|
2502.14115
|
Chasing the Timber Trail: Machine Learning to Reveal Harvest Location
Misrepresentation
|
cs.LG cs.CE cs.CY
|
Illegal logging poses a significant threat to global biodiversity and climate
stability, and depresses international prices for legally harvested wood and
responsible forest products trade, affecting livelihoods and communities across
the globe. Stable isotope ratio analysis (SIRA) is rapidly becoming an
important tool for determining the harvest location of traded organic
products. The spatial pattern in stable isotope ratio values depends on factors
such as atmospheric and environmental conditions and can thus be used for
geographical identification. We present here the results of a deployed machine
learning pipeline where we leverage both isotope values and atmospheric
variables to determine timber harvest location. Additionally, the pipeline
incorporates uncertainty estimation to facilitate the interpretation of harvest
location determination for analysts. We present our experiments on a collection
of oak (Quercus spp.) tree samples from across its global range. Our pipeline
outperforms comparable state-of-the-art models in determining the geographic
harvest origin of commercially traded wood products, and has been used by European
enforcement agencies to identify illicit Russian and Belarusian timber entering
the EU market. We also identify opportunities for further advancement of our
framework and how it can be generalized to help identify the origin of falsely
labeled organic products throughout the supply chain.
|
2502.14119
|
Meaning Beyond Truth Conditions: Evaluating Discourse Level
Understanding via Anaphora Accessibility
|
cs.CL
|
We present a hierarchy of natural language understanding abilities and argue
for the importance of moving beyond assessments of understanding at the lexical
and sentence levels to the discourse level. We propose the task of anaphora
accessibility as a diagnostic for assessing discourse understanding, and to
this end, present an evaluation dataset inspired by theoretical research in
dynamic semantics. We evaluate human and LLM performance on our dataset and
find that LLMs and humans align on some tasks and diverge on others. Such
divergence can be explained by LLMs' reliance on specific lexical items during
language comprehension, in contrast to human sensitivity to structural
abstractions.
|
2502.14120
|
A Supervised Machine-Learning Approach For Turboshaft Engine Dynamic
Modeling Under Real Flight Conditions
|
cs.LG cs.SY eess.SY
|
Rotorcraft engines are highly complex, nonlinear thermodynamic systems that
operate under varying environmental and flight conditions. Simulating their
dynamics is crucial for design, fault diagnostics, and deterioration control
phases, and requires robust and reliable control systems to estimate engine
performance throughout the flight envelope. However, the development of detailed
physical models of the engine based on numerical simulations is a very
challenging task due to the complex and entangled physics driving the engine.
In this scenario, data-driven machine-learning techniques are of great interest
to the aircraft engine community, due to their ability to describe nonlinear
systems' dynamic behavior and enable online performance estimation, achieving
excellent results with accuracy competitive with the state of the art. In this
work, we explore different Neural Network architectures to model the turboshaft
engine of Leonardo's AW189P4 prototype, aiming to predict the engine torque.
The models are trained on an extensive database of real flight tests featuring
a variety of operational maneuvers performed under different flight conditions,
providing a comprehensive representation of the engine's performance. To
complement the neural network approach, we apply Sparse Identification of
Nonlinear Dynamics (SINDy) to derive a low-dimensional dynamical model from the
available data, describing the relationship between fuel flow and engine
torque. The resulting model showcases SINDy's capability to recover the actual
physics underlying the engine dynamics and demonstrates its potential for
investigating more complex aspects of the engine. The results prove that
data-driven engine models can exploit a wider range of parameters than standard
transfer function-based approaches, enabling the use of trained schemes to
simulate nonlinear effects in different engines and helicopters.
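The core of SINDy, sequentially thresholded least squares over a candidate function library, can be sketched on a toy first-order fuel-flow-to-torque model. The dynamics, library, and variable names are invented for illustration and unrelated to the AW189P4 data:

```python
import numpy as np

def stlsq(Z, dz, library, names, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares, the core regression in SINDy:
    fit derivative data on a candidate library, then repeatedly zero out
    small coefficients and refit over the surviving terms."""
    Theta = np.column_stack([f(Z) for f in library])
    Xi = np.linalg.lstsq(Theta, dz, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        if (~small).any():
            Xi[~small] = np.linalg.lstsq(Theta[:, ~small], dz, rcond=None)[0]
    return dict(zip(names, Xi))

# Toy first-order model: dQ/dt = -0.5*Q + 2.0*wf (Q: torque, wf: fuel flow).
rng = np.random.default_rng(3)
Q, wf = rng.normal(size=2000), rng.normal(size=2000)
dQ = -0.5 * Q + 2.0 * wf
Z = np.column_stack([Q, wf])
library = [lambda z: np.ones(len(z)), lambda z: z[:, 0], lambda z: z[:, 1],
           lambda z: z[:, 0] ** 2, lambda z: z[:, 0] * z[:, 1], lambda z: z[:, 1] ** 2]
names = ["1", "Q", "wf", "Q^2", "Q*wf", "wf^2"]
coeffs = stlsq(Z, dQ, library, names)
print(coeffs["Q"], coeffs["wf"])  # recovers approximately -0.5 and 2.0
```

The thresholding step is what enforces sparsity: spurious library terms (constant, quadratic, cross terms) are driven to exactly zero, leaving only the governing terms.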
|
2502.14121
|
Multi-Objective Bayesian Optimization for Networked Black-Box Systems: A
Path to Greener Profits and Smarter Designs
|
stat.ML cs.AI cs.LG
|
Designing modern industrial systems requires balancing several competing
objectives, such as profitability, resilience, and sustainability, while
accounting for complex interactions between technological, economic, and
environmental factors. Multi-objective optimization (MOO) methods are commonly
used to navigate these tradeoffs, but selecting the appropriate algorithm to
tackle these problems is often unclear, particularly when system
representations vary from fully equation-based (white-box) to entirely
data-driven (black-box) models. While grey-box MOO methods attempt to bridge
this gap, they typically impose rigid assumptions on system structure,
requiring models to conform to the underlying structural assumptions of the
solver rather than the solver adapting to the natural representation of the
system of interest. In this chapter, we introduce a unifying approach to
grey-box MOO by leveraging network representations, which provide a general and
flexible framework for modeling interconnected systems as a series of function
nodes that share various inputs and outputs. Specifically, we propose MOBONS, a
novel Bayesian optimization-inspired algorithm that can efficiently optimize
general function networks, including those with cyclic dependencies, enabling
the modeling of feedback loops, recycle streams, and multi-scale simulations -
features that existing methods fail to capture. Furthermore, MOBONS
incorporates constraints, supports parallel evaluations, and preserves the
sample efficiency of Bayesian optimization while leveraging network structure
for improved scalability. We demonstrate the effectiveness of MOBONS through
two case studies, including one related to sustainable process design. By
enabling efficient MOO under general graph representations, MOBONS has the
potential to significantly enhance the design of more profitable, resilient,
and sustainable engineering systems.
|
2502.14122
|
Benchmarking LLMs for Political Science: A United Nations Perspective
|
cs.CL cs.CY cs.ET
|
Large Language Models (LLMs) have achieved significant advances in natural
language processing, yet their potential for high-stake political
decision-making remains largely unexplored. This paper addresses the gap by
focusing on the application of LLMs to the United Nations (UN) decision-making
process, where the stakes are particularly high and political decisions can
have far-reaching consequences. We introduce a novel dataset comprising
publicly available UN Security Council (UNSC) records from 1994 to 2024,
including draft resolutions, voting records, and diplomatic speeches. Using
this dataset, we propose the United Nations Benchmark (UNBench), the first
comprehensive benchmark designed to evaluate LLMs across four interconnected
political science tasks: co-penholder judgment, representative voting
simulation, draft adoption prediction, and representative statement generation.
These tasks span the three stages of the UN decision-making process--drafting,
voting, and discussing--and aim to assess LLMs' ability to understand and
simulate political dynamics. Our experimental analysis demonstrates the
potential and challenges of applying LLMs in this domain, providing insights
into their strengths and limitations in political science. This work
contributes to the growing intersection of AI and political science, opening
new avenues for research and practical applications in global governance. The
UNBench Repository can be accessed at:
https://github.com/yueqingliang1/UNBench.
|
2502.14123
|
Understanding SGD with Exponential Moving Average: A Case Study in
Linear Regression
|
cs.LG math.OC stat.ML
|
Exponential moving average (EMA) has recently gained significant popularity
in training modern deep learning models, especially diffusion-based generative
models. However, there have been few theoretical results explaining the
effectiveness of EMA. In this paper, to better understand EMA, we establish the
risk bound of online SGD with EMA for high-dimensional linear regression, one
of the simplest overparameterized learning tasks that shares similarities with
neural networks. Our results indicate that (i) the variance error of SGD with
EMA is always smaller than that of SGD without averaging, and (ii) unlike SGD
with iterate averaging from the beginning, the bias error of SGD with EMA
decays exponentially in every eigen-subspace of the data covariance matrix.
Additionally, we develop proof techniques applicable to the analysis of a broad
class of averaging schemes.
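The EMA-of-iterates update studied here can be sketched on a toy one-dimensional regression. This is an illustrative simulation only: the learning rate, decay `beta`, and noise level are assumed values, and the paper's actual analysis concerns high-dimensional overparameterized regression.

```python
import random

def sgd_with_ema(xs, ys, lr=0.05, beta=0.9):
    """Online SGD for 1-D linear regression y ~ w * x, tracking an
    exponential moving average (EMA) of the iterates.
    lr and beta are assumed toy values."""
    w, w_ema = 0.0, 0.0
    for x, y in zip(xs, ys):
        grad = (w * x - y) * x                  # gradient of (w*x - y)^2 / 2
        w -= lr * grad                          # plain SGD step
        w_ema = beta * w_ema + (1 - beta) * w   # EMA of the iterates
    return w, w_ema

random.seed(0)
true_w = 2.0
xs = [random.uniform(-1, 1) for _ in range(5000)]
ys = [true_w * x + 0.5 * random.gauss(0, 1) for x in xs]
w_last, w_avg = sgd_with_ema(xs, ys)
# Averaging damps gradient noise, so the EMA iterate is typically
# at least as close to true_w as the last iterate.
print(abs(w_last - true_w), abs(w_avg - true_w))
```

The print compares the error of the last iterate with the error of the EMA iterate, illustrating the variance-reduction effect the bound formalizes.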
|
2502.14125
|
Modular Prompt Learning Improves Vision-Language Models
|
cs.CV
|
Pre-trained vision-language models are able to interpret visual concepts and
language semantics. Prompt learning, a method of constructing prompts for text
encoders or image encoders, elicits the potential of pre-trained models and
readily adapts them to new scenarios. Compared to fine-tuning, prompt learning
enables the model to achieve comparable or better performance using fewer
trainable parameters. Moreover, prompt learning freezes the pre-trained model
and avoids the catastrophic forgetting issue seen in fine-tuning. Continuous
prompts inserted into the input of every transformer layer (i.e., deep prompts)
can improve the performance of pre-trained models on downstream tasks. At the
$i$-th transformer layer, the inserted prompts replace those inserted at the
$(i-1)$-th layer. Although the self-attention mechanism
contextualizes newly inserted prompts for the current layer and embeddings from
the previous layer's output, removing all inserted prompts from the previous
layer inevitably loses information contained in the continuous prompts. In this
work, we propose Modular Prompt Learning (MPL) that is designed to promote the
preservation of information contained in the inserted prompts. We evaluate the
proposed method on base-to-new generalization and cross-dataset tasks. On
average across 11 datasets, our method achieves a 0.7% performance gain on the
base-to-new generalization task compared to the state-of-the-art method. The
largest improvement on the individual dataset is 10.7% (EuroSAT dataset).
|
2502.14127
|
Which of These Best Describes Multiple Choice Evaluation with LLMs? A)
Forced B) Flawed C) Fixable D) All of the Above
|
cs.CL
|
Multiple choice question answering (MCQA) is popular for LLM evaluation due
to its simplicity and human-like testing, but we argue for its reform. We first
reveal flaws in MCQA's format, as it struggles to: 1) test
generation/subjectivity; 2) match LLM use cases; and 3) fully test knowledge.
We instead advocate for generative formats based on human testing -- where LLMs
construct and explain answers -- better capturing user needs and knowledge while
remaining easy to score. We then show even when MCQA is a useful format, its
datasets suffer from: leakage; unanswerability; shortcuts; and saturation. In
each issue, we give fixes from education, like rubrics to guide MCQ writing;
scoring methods to bridle guessing; and Item Response Theory to build harder
MCQs. Lastly, we discuss LLM errors in MCQA -- robustness, biases, and unfaithful
explanations -- showing how our prior solutions better measure or address these
issues. While we do not need to desert MCQA, we encourage more efforts in
refining the task based on educational testing, advancing evaluations.
|
2502.14129
|
GlossGau: Efficient Inverse Rendering for Glossy Surface with
Anisotropic Spherical Gaussian
|
cs.CV
|
The reconstruction of 3D objects from calibrated photographs represents a
fundamental yet intricate challenge in the domains of computer graphics and
vision. Although neural reconstruction approaches based on Neural Radiance
Fields (NeRF) have shown remarkable capabilities, their processing costs remain
substantial. Recently, the advent of 3D Gaussian Splatting (3D-GS) has
substantially improved training efficiency and facilitated realistic
real-time rendering. However, due to the limited ability of Spherical
Harmonics (SH) to represent high-frequency information, 3D-GS falls short in
reconstructing glossy objects. Researchers have sought to enhance the specular
expressiveness of 3D-GS through inverse rendering. Yet these methods often
struggle to maintain the training and rendering efficiency, undermining the
benefits of Gaussian Splatting techniques. In this paper, we introduce
GlossGau, an efficient inverse rendering framework that reconstructs scenes
with glossy surfaces while maintaining training and rendering speeds comparable
to vanilla 3D-GS. Specifically, we explicitly model the surface normals,
Bidirectional Reflectance Distribution Function (BRDF) parameters, as well as
incident lights and use Anisotropic Spherical Gaussian (ASG) to approximate the
per-Gaussian Normal Distribution Function under the microfacet model. We
utilize 2D Gaussian Splatting (2D-GS) as foundational primitives and apply
regularization to significantly alleviate the normal estimation challenge
encountered in related works. Experiments demonstrate that GlossGau achieves
competitive or superior reconstruction on datasets with glossy surfaces.
Compared with previous GS-based works that address specular surfaces, our
optimization time is considerably less.
|
2502.14131
|
Gradients can train reward models: An Empirical Risk Minimization
Approach for Offline Inverse RL and Dynamic Discrete Choice Model
|
cs.LG cs.AI econ.EM
|
We study the problem of estimating Dynamic Discrete Choice (DDC) models, also
known as offline Maximum Entropy-Regularized Inverse Reinforcement Learning
(offline MaxEnt-IRL) in machine learning. The objective is to recover reward or
$Q^*$ functions that govern agent behavior from offline behavior data. In this
paper, we propose a globally convergent gradient-based method for solving these
problems without the restrictive assumption of linearly parameterized rewards.
The novelty of our approach lies in introducing the Empirical Risk Minimization
(ERM) based IRL/DDC framework, which circumvents the need for explicit state
transition probability estimation in the Bellman equation. Furthermore, our
method is compatible with non-parametric estimation techniques such as neural
networks. Therefore, the proposed method has the potential to be scaled to
high-dimensional, infinite state spaces. A key theoretical insight underlying
our approach is that the Bellman residual satisfies the Polyak-Lojasiewicz (PL)
condition -- a property that, while weaker than strong convexity, is sufficient
to ensure fast global convergence guarantees. Through a series of synthetic
experiments, we demonstrate that our approach consistently outperforms
benchmark methods and state-of-the-art alternatives.
|
2502.14132
|
Can Community Notes Replace Professional Fact-Checkers?
|
cs.CL cs.AI
|
Two commonly-employed strategies to combat the rise of misinformation on
social media are (i) fact-checking by professional organisations and (ii)
community moderation by platform users. Policy changes by Twitter/X and, more
recently, Meta, signal a shift away from partnerships with fact-checking
organisations and towards an increased reliance on crowdsourced community
notes. However, the extent and nature of dependencies between fact-checking and
helpful community notes remain unclear. To address this question, we use
language models to annotate a large corpus of Twitter/X community notes with
attributes such as topic, cited sources, and whether they refute claims tied to
broader misinformation narratives. Our analysis reveals that community notes
cite fact-checking sources up to five times more than previously reported.
Fact-checking is especially crucial for notes on posts linked to broader
narratives, which are twice as likely to reference fact-checking sources
compared to other sources. In conclusion, our results show that successful
community moderation heavily relies on professional fact-checking.
|
2502.14133
|
Self-Regularization with Latent Space Explanations for Controllable
LLM-based Classification
|
cs.CL
|
Modern text classification methods heavily rely on contextual embeddings from
large language models (LLMs). Compared to human-engineered features, these
embeddings provide automatic and effective representations for classification
model training. However, they also introduce a challenge: we lose the ability
to manually remove unintended features, such as sensitive or task-irrelevant
features, to guarantee regulatory compliance or improve the generalizability of
classification models. This limitation arises because LLM embeddings are opaque
and difficult to interpret. In this paper, we propose a novel framework to
identify and regularize unintended features in the LLM latent space.
Specifically, we first pre-train a sparse autoencoder (SAE) to extract
interpretable features from LLM latent spaces. To ensure the SAE can capture
task-specific features, we further fine-tune it on task-specific datasets. In
training the classification model, we propose a simple and effective
regularizer that minimizes the similarity between the classifier weights and
the identified unintended features, removing their impact on classification.
We evaluate the proposed framework on three
real-world tasks, including toxic chat detection, reward modeling, and disease
diagnosis. Results show that the proposed framework can significantly improve
the classifier's generalizability by regularizing features that are not
semantically correlated with each task. This work pioneers controllable text
classification on LLM latent spaces by leveraging interpreted features to
address generalizability, fairness, and privacy challenges. We will release our
code and data once accepted.
|
2502.14135
|
Cluster Analysis and Concept Drift Detection in Malware
|
cs.LG cs.CR
|
Concept drift refers to gradual or sudden changes in the properties of data
that affect the accuracy of machine learning models. In this paper, we address
the problem of concept drift detection in the malware domain. Specifically, we
propose and analyze a clustering-based approach to detecting concept drift.
Using a subset of the KronoDroid dataset, malware samples are partitioned into
temporal batches and analyzed using MiniBatch $K$-Means clustering. The
silhouette coefficient is used as a metric to identify points in time where
concept drift has likely occurred. To verify our drift detection results, we
train learning models under three realistic scenarios, which we refer to as
static training, periodic retraining, and drift-aware retraining. In each
scenario, we consider four supervised classifiers, namely, Multilayer
Perceptron (MLP), Support Vector Machine (SVM), Random Forest, and XGBoost.
Experimental results demonstrate that drift-aware retraining guided by
silhouette coefficient thresholding achieves classification accuracy far
superior to static models, and generally within 1% of periodic retraining,
while also being far more efficient than periodic retraining. These results
provide strong evidence that our clustering-based approach is effective at
detecting concept drift, while also illustrating a highly practical and
efficient fully automated approach to improved malware classification via
concept drift detection.
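The silhouette-based drift signal can be illustrated with a small pure-Python toy. The `silhouette_mean` function is a stand-in for sklearn's `silhouette_score` applied to MiniBatch K-Means output, and the 0.5 threshold is an assumed value, not one taken from the paper.

```python
def silhouette_mean(points, labels):
    """Mean silhouette coefficient for 1-D points (pure Python, O(n^2)).
    Illustrative stand-in for sklearn.metrics.silhouette_score."""
    n = len(points)
    clusters = set(labels)
    scores = []
    for i in range(n):
        # a: mean intra-cluster distance; b: mean distance to nearest other cluster
        same = [abs(points[i] - points[j]) for j in range(n)
                if labels[j] == labels[i] and j != i]
        a = sum(same) / len(same) if same else 0.0
        b = min(
            sum(abs(points[i] - points[j]) for j in range(n) if labels[j] == c)
            / labels.count(c)
            for c in clusters if c != labels[i]
        )
        scores.append((b - a) / max(a, b) if max(a, b) > 0 else 0.0)
    return sum(scores) / n

def detect_drift(batch_scores, threshold=0.5):
    """Flag temporal-batch indices whose silhouette falls below a
    threshold -- a toy version of the drift signal (threshold assumed)."""
    return [t for t, s in enumerate(batch_scores) if s < threshold]

# Well-separated clusters score high; overlapping clusters score low.
tight = silhouette_mean([0.0, 0.1, 0.2, 5.0, 5.1, 5.2], [0, 0, 0, 1, 1, 1])
loose = silhouette_mean([0.0, 1.0, 2.0, 1.5, 2.5, 3.5], [0, 0, 0, 1, 1, 1])
print(tight, loose)
print(detect_drift([0.9, 0.85, 0.3, 0.8]))  # -> [2]
```

A drop in the per-batch silhouette (here, batch 2) is what would trigger drift-aware retraining.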
|
2502.14137
|
Collaborative Retrieval for Large Language Model-based Conversational
Recommender Systems
|
cs.IR
|
Conversational recommender systems (CRS) aim to provide personalized
recommendations via interactive dialogues with users. While large language
models (LLMs) enhance CRS with their superior understanding of context-aware
user preferences, they typically struggle to leverage behavioral data, which
have proven to be important for classical collaborative filtering (CF)-based
approaches. For this reason, we propose CRAG, Collaborative Retrieval Augmented
Generation for LLM-based CRS. To the best of our knowledge, CRAG is the first
approach that combines state-of-the-art LLMs with CF for conversational
recommendations. Our experiments on two publicly available movie conversational
recommendation datasets, i.e., a refined Reddit dataset (which we name
Reddit-v2) as well as the Redial dataset, demonstrate the superior item
coverage and recommendation performance of CRAG, compared to several CRS
baselines. Moreover, we observe that the improvements are mainly due to better
recommendation accuracy on recently released movies. The code and data are
available at https://github.com/yaochenzhu/CRAG.
|
2502.14140
|
ModSkill: Physical Character Skill Modularization
|
cs.CV cs.GR cs.RO
|
Human motion is highly diverse and dynamic, posing challenges for imitation
learning algorithms that aim to generalize motor skills for controlling
simulated characters. Previous methods typically rely on a universal full-body
controller for tracking reference motion (tracking-based model) or a unified
full-body skill embedding space (skill embedding). However, these approaches
often struggle to generalize and scale to larger motion datasets. In this work,
we introduce a novel skill learning framework, ModSkill, that decouples complex
full-body skills into compositional, modular skills for independent body parts.
Our framework features a skill modularization attention layer that processes
policy observations into modular skill embeddings that guide low-level
controllers for each body part. We also propose an Active Skill Learning
approach with Generative Adaptive Sampling, using large motion generation
models to adaptively enhance policy learning in challenging tracking scenarios.
Our results show that this modularized skill learning framework, enhanced by
generative sampling, outperforms existing methods in precise full-body motion
tracking and enables reusable skill embeddings for diverse goal-driven tasks.
|
2502.14142
|
Token Adaptation via Side Graph Convolution for Temporally and Spatially
Efficient Fine-tuning of 3D Point Cloud Transformers
|
cs.CV
|
Parameter-efficient fine-tuning (PEFT) of pre-trained 3D point cloud
Transformers has emerged as a promising technique for 3D point cloud analysis.
While existing PEFT methods attempt to minimize the number of tunable
parameters, they still suffer from high temporal and spatial computational
costs during fine-tuning. This paper proposes a novel PEFT algorithm for 3D
point cloud Transformers, called Side Token Adaptation on a neighborhood Graph
(STAG), to achieve superior temporal and spatial efficiency. STAG employs a
graph convolutional side network that operates in parallel with a frozen
backbone Transformer to adapt tokens to downstream tasks. STAG's side network
realizes high efficiency through three key components: a connection with the
backbone that enables reduced gradient computation, a parameter-sharing
framework, and efficient graph convolution. Furthermore, we present Point Cloud
Classification 13 (PCC13), a new benchmark comprising diverse publicly
available 3D point cloud datasets, enabling comprehensive evaluation of PEFT
methods. Extensive experiments using multiple pre-trained models and PCC13
demonstrate the effectiveness of STAG. Specifically, STAG maintains
classification accuracy comparable to existing methods while reducing tunable
parameters to only 0.43M and achieving significant reductions in both
computational time and memory consumption for fine-tuning. Code and benchmark
will be available at: https://github.com/takahikof/STAG
|
2502.14143
|
Multi-Agent Risks from Advanced AI
|
cs.MA cs.AI cs.CY cs.ET cs.LG
|
The rapid development of advanced AI agents and the imminent deployment of
many instances of these agents will give rise to multi-agent systems of
unprecedented complexity. These systems pose novel and under-explored risks. In
this report, we provide a structured taxonomy of these risks by identifying
three key failure modes (miscoordination, conflict, and collusion) based on
agents' incentives, as well as seven key risk factors (information asymmetries,
network effects, selection pressures, destabilising dynamics, commitment
problems, emergent agency, and multi-agent security) that can underpin them. We
highlight several important instances of each risk, as well as promising
directions to help mitigate them. By anchoring our analysis in a range of
real-world examples and experimental evidence, we illustrate the distinct
challenges posed by multi-agent systems and their implications for the safety,
governance, and ethics of advanced AI.
|
2502.14144
|
UM_FHS at TREC 2024 PLABA: Exploration of Fine-tuning and AI agent
approach for plain language adaptations of biomedical text
|
cs.CL
|
This paper describes our submissions to the TREC 2024 PLABA track with the
aim to simplify biomedical abstracts for a K8-level audience (13-14-year-old
students). We tested three approaches using OpenAI's gpt-4o and gpt-4o-mini
models: baseline prompt engineering, a two-AI agent approach, and fine-tuning.
Adaptations were evaluated using qualitative metrics (5-point Likert scales for
simplicity, accuracy, completeness, and brevity) and quantitative readability
scores (Flesch-Kincaid grade level, SMOG Index). Results indicated that the
two-agent approach and baseline prompt engineering with gpt-4o-mini models
showed superior qualitative performance, while fine-tuned models excelled in accuracy
and completeness but were less simple. The evaluation results demonstrated that
prompt engineering with gpt-4o-mini outperforms iterative improvement
strategies via the two-agent approach as well as fine-tuning with gpt-4o. We intend
to expand our investigation of the results and explore advanced evaluations.
|
2502.14145
|
LLM-Enhanced Dialogue Management for Full-Duplex Spoken Dialogue Systems
|
cs.CL eess.AS
|
Achieving full-duplex communication in spoken dialogue systems (SDS) requires
real-time coordination between listening, speaking, and thinking. This paper
proposes a semantic voice activity detection (VAD) module as a dialogue manager
(DM) to efficiently manage turn-taking in full-duplex SDS. Implemented as a
lightweight (0.5B) LLM fine-tuned on full-duplex conversation data, the
semantic VAD predicts four control tokens to regulate turn-switching and
turn-keeping, distinguishing between intentional and unintentional barge-ins
while detecting query completion for handling user pauses and hesitations. By
processing input speech in short intervals, the semantic VAD enables real-time
decision-making, while the core dialogue engine (CDE) is only activated for
response generation, reducing computational overhead. This design allows
independent DM optimization without retraining the CDE, balancing interaction
accuracy and inference efficiency for scalable, next-generation full-duplex
SDS.
|
2502.14146
|
Efficient and Optimal Policy Gradient Algorithm for Corrupted
Multi-armed Bandits
|
cs.LG
|
In this paper, we consider the stochastic multi-armed bandits problem with
adversarial corruptions, where the random rewards of the arms are partially
modified by an adversary to fool the algorithm. We apply the policy gradient
algorithm SAMBA to this setting, and show that it is computationally efficient,
and achieves a state-of-the-art $O(K\log T/\Delta) + O(C/\Delta)$ regret upper
bound, where $K$ is the number of arms, $C$ is the unknown corruption level,
$\Delta$ is the minimum expected reward gap between the best arm and other
ones, and $T$ is the time horizon. Compared with the best existing efficient
algorithm (e.g., CBARBAR), whose regret upper bound is $O(K\log^2 T/\Delta) +
O(C)$, we show that SAMBA reduces one $\log T$ factor in the regret bound,
while keeping the corruption-dependent term linear in $C$. This is
indeed asymptotically optimal. We also conduct simulations to demonstrate the
effectiveness of SAMBA, and the results show that SAMBA outperforms existing
baselines.
|
2502.14147
|
Learning the P2D Model for Lithium-Ion Batteries with SOH Detection
|
cs.LG physics.chem-ph
|
Lithium-ion batteries are widely used in many applications. Battery
management systems control their optimal use and charging and predict when the
battery will cease to deliver the required output on a planned duty or driving
cycle. Such systems use a simulation of a mathematical model of battery
performance. These models can be electrochemical or data-driven.
Electrochemical models for batteries running at high currents are
mathematically and computationally complex. In this work, we show that a
well-regarded electrochemical model, the Pseudo Two Dimensional (P2D) model,
can be replaced by a computationally efficient Convolutional Neural Network
(CNN) surrogate model fit to accurately simulated data from a class of random
driving cycles. We demonstrate that a CNN is an ideal choice for accurately
capturing lithium-ion concentration profiles. Additionally, we show how the
neural network model can be adjusted to correspond to battery changes in State
of Health (SOH).
|
2502.14149
|
PitVQA++: Vector Matrix-Low-Rank Adaptation for Open-Ended Visual
Question Answering in Pituitary Surgery
|
cs.CV cs.AI
|
Vision-Language Models (VLMs) in visual question answering (VQA) offer a
unique opportunity to enhance intra-operative decision-making, promote
intuitive interactions, and significantly advance surgical education.
However, the development of VLMs for surgical VQA is challenging due to limited
datasets and the risk of overfitting and catastrophic forgetting during full
fine-tuning of pretrained weights. While parameter-efficient techniques like
Low-Rank Adaptation (LoRA) and Matrix of Rank Adaptation (MoRA) address
adaptation challenges, their uniform parameter distribution overlooks the
feature hierarchy in deep networks, where earlier layers, which learn general
features, require more parameters than later ones. This work introduces
PitVQA++ with an open-ended PitVQA dataset and vector matrix-low-rank
adaptation (Vector-MoLoRA), an innovative VLM fine-tuning approach for adapting
GPT-2 to pituitary surgery. Open-Ended PitVQA comprises 101,803 frames
from 25 procedural videos with 745,972 question-answer sentence pairs, covering
key surgical elements such as phase and step recognition, context
understanding, tool detection, localization, and interaction recognition.
Vector-MoLoRA incorporates the principles of LoRA and MoRA to develop a
matrix-low-rank adaptation strategy that employs vector ranking to allocate
more parameters to earlier layers, gradually reducing them in the later layers.
Our approach, validated on the Open-Ended PitVQA and EndoVis18-VQA datasets,
effectively mitigates catastrophic forgetting while significantly enhancing
performance over recent baselines. Furthermore, our risk-coverage analysis
highlights its enhanced reliability and trustworthiness in handling uncertain
predictions. Our source code and dataset are available
at~\url{https://github.com/HRL-Mike/PitVQA-Plus}.
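The idea of allocating more adaptation parameters to earlier layers can be sketched with a hypothetical linearly decreasing rank schedule. The function name and the linear rule are illustrative assumptions; the paper's vector-ranking strategy may differ.

```python
def allocate_ranks(num_layers, max_rank, min_rank):
    """Hypothetical per-layer low-rank-adaptation schedule: ranks
    decrease linearly from the first layer (general features) to
    the last (task-specific features)."""
    if num_layers == 1:
        return [max_rank]
    step = (max_rank - min_rank) / (num_layers - 1)
    return [round(max_rank - i * step) for i in range(num_layers)]

ranks = allocate_ranks(12, 16, 4)   # e.g. a 12-layer GPT-2 backbone
print(ranks)
```

Earlier layers receive rank close to 16 while the final layers taper to 4, mirroring the feature-hierarchy argument above.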
|
2502.14150
|
Risk-Sensitive Security-Constrained Economic Dispatch: Pricing and
Algorithm Design
|
eess.SY cs.SY econ.TH
|
We propose a risk-sensitive security-constrained economic dispatch (R-SCED)
formulation capturing the tradeoff between dispatch cost and resilience against
potential line failures, where risk is modeled via the conditional value at
risk (CVaR). In the context of our formulation, we analyze revenue adequacy and
side payments of two pricing models, one based on nominal generation costs, and
another based on total marginal cost including contingencies. In particular, we
prove that the system operator's (SO) merchandising surplus (MS) and total
revenue are nonnegative under the latter, while under the former the same does
not hold in general. We demonstrate that the proposed R-SCED formulation is
amenable to decomposition and describe a Benders' decomposition algorithm to
solve it. In numerical examples, we illustrate the differences in MS and total
revenue under the considered pricing schemes, and the computational efficiency
of our decomposition approach.
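As background, the CVaR used in the R-SCED objective can be estimated empirically as the mean of the worst (1 - alpha) fraction of scenario losses. A minimal sketch follows; the loss values and alpha are made up for illustration.

```python
def cvar(losses, alpha=0.95):
    """Empirical conditional value at risk: the mean of the worst
    (1 - alpha) fraction of scenario losses."""
    s = sorted(losses)
    k = max(1, int(round((1 - alpha) * len(s))))
    tail = s[-k:]                      # the k largest losses
    return sum(tail) / len(tail)

# Hypothetical dispatch costs; two rare line-failure scenarios are costly.
losses = [10, 12, 11, 9, 50, 13, 60, 12, 11, 10]
print(cvar(losses, alpha=0.8))  # mean of the worst 20%: (50 + 60) / 2 = 55.0
```

Penalizing this tail mean, rather than the average cost, is what makes the dispatch resilient to rare but expensive contingencies.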
|
2502.14155
|
Giving AI Personalities Leads to More Human-Like Reasoning
|
cs.AI cs.CL cs.CY
|
In computational cognitive modeling, capturing the full spectrum of human
judgment and decision-making processes, beyond just optimal behaviors, is a
significant challenge. This study explores whether Large Language Models (LLMs)
can emulate the breadth of human reasoning by predicting both intuitive, fast
System 1 and deliberate, slow System 2 processes. We investigate the potential
of AI to mimic diverse reasoning behaviors across a human population,
addressing what we call the {\em full reasoning spectrum problem}. We designed
reasoning tasks using a novel generalization of the Natural Language Inference
(NLI) format to evaluate LLMs' ability to replicate human reasoning. The
questions were crafted to elicit both System 1 and System 2 responses. Human
responses were collected through crowd-sourcing and the entire distribution was
modeled, rather than just the majority of the answers. We used
personality-based prompting inspired by the Big Five personality model to
elicit AI responses reflecting specific personality traits, capturing the
diversity of human reasoning, and exploring how personality traits influence
LLM outputs. Combined with genetic algorithms to optimize the weighting of
these prompts, this method was tested alongside traditional machine learning
models. The results show that LLMs can mimic human response distributions, with
open-source models like Llama and Mistral outperforming proprietary GPT models.
Personality-based prompting, especially when optimized with genetic algorithms,
significantly enhanced LLMs' ability to predict human response distributions,
suggesting that capturing suboptimal, naturalistic reasoning may require
modeling techniques incorporating diverse reasoning styles and psychological
profiles. The study concludes that personality-based prompting combined with
genetic algorithms is promising for enhancing AI's \textit{human-ness} in
reasoning.
|
2502.14156
|
Mixed Signals: A Diverse Point Cloud Dataset for Heterogeneous LiDAR V2X
Collaboration
|
cs.CV
|
Vehicle-to-everything (V2X) collaborative perception has emerged as a
promising solution to address the limitations of single-vehicle perception
systems. However, existing V2X datasets are limited in scope, diversity, and
quality. To address these gaps, we present Mixed Signals, a comprehensive V2X
dataset featuring 45.1k point clouds and 240.6k bounding boxes collected from
three connected autonomous vehicles (CAVs) equipped with two different types of
LiDAR sensors, plus a roadside unit with dual LiDARs. Our dataset provides
precisely aligned point clouds and bounding box annotations across 10 classes,
ensuring reliable data for perception training. We provide detailed statistical
analysis on the quality of our dataset and extensively benchmark existing V2X
methods on it. Mixed Signals V2X Dataset is one of the highest quality,
large-scale datasets publicly available for V2X perception research. Details
are available at https://mixedsignalsdataset.cs.cornell.edu/.
|
2502.14158
|
Dual-level Mixup for Graph Few-shot Learning with Fewer Tasks
|
cs.LG cs.SI
|
Graph neural networks have been demonstrated as a powerful paradigm for
effectively learning graph-structured data on the web and mining content from
it. Current leading graph models require a large number of labeled samples for
training, which unavoidably leads to overfitting in few-shot scenarios. Recent
research has sought to alleviate this issue by simultaneously leveraging graph
learning and meta-learning paradigms. However, these graph meta-learning models
assume the availability of numerous meta-training tasks to learn transferable
meta-knowledge. Such an assumption may not be feasible in the real world due to
the difficulty of constructing tasks and the substantial costs involved.
Therefore, we propose a SiMple yet effectIve approach for graph few-shot
Learning with fEwer tasks, named SMILE. We introduce a dual-level mixup
strategy, encompassing both within-task and across-task mixup, to
simultaneously enrich the available nodes and tasks in meta-learning. Moreover,
we explicitly leverage the prior information provided by the node degrees in
the graph to encode expressive node representations. Theoretically, we
demonstrate that SMILE can enhance the model generalization ability.
Empirically, SMILE consistently outperforms other competitive models by a large
margin across all evaluated datasets under in-domain and cross-domain settings.
Our anonymous code can be found here.
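As background, the basic interpolation underlying both mixup levels can be sketched as follows. This is a generic toy, not SMILE's exact within-task/across-task scheme, and the Beta parameter is assumed.

```python
import random

def mixup(x1, x2, y1, y2, alpha=0.5):
    """Convex interpolation of two feature/label pairs -- the core
    operation behind mixup-style augmentation."""
    lam = random.betavariate(alpha, alpha)          # mixing coefficient in [0, 1]
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y

random.seed(1)
x, y = mixup([0.0, 0.0], [1.0, 1.0], [1.0, 0.0], [0.0, 1.0])
print(x, y)  # mixed features and soft labels that sum to 1
```

Applying this within a task enriches nodes, and applying it across tasks synthesizes new meta-training tasks from existing ones.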
|
2502.14160
|
Efficient Inverse Multiagent Learning
|
cs.GT cs.AI cs.LG econ.TH
|
In this paper, we study inverse game theory (resp. inverse multiagent
learning) in which the goal is to find parameters of a game's payoff functions
for which the expected (resp. sampled) behavior is an equilibrium. We formulate
these problems as generative-adversarial (i.e., min-max) optimization problems,
which we solve with polynomial-time algorithms: the former relies on an exact
first-order oracle, and the latter on a stochastic one. We
extend our approach to solve inverse multiagent simulacral learning in
polynomial time and number of samples. In these problems, we seek a simulacrum,
meaning parameters and an associated equilibrium that replicate the given
observations in expectation. We find that our approach outperforms the
widely-used ARIMA method in predicting prices in Spanish electricity markets
based on time-series data.
|
2502.14166
|
Prediction-Powered Adaptive Shrinkage Estimation
|
stat.ML cs.LG stat.ME
|
Prediction-Powered Inference (PPI) is a powerful framework for enhancing
statistical estimates by combining limited gold-standard data with machine
learning (ML) predictions. While prior work has demonstrated PPI's benefits for
individual statistical tasks, modern applications require answering numerous
parallel statistical questions. We introduce Prediction-Powered Adaptive
Shrinkage (PAS), a method that bridges PPI with empirical Bayes shrinkage to
improve the estimation of multiple means. PAS debiases noisy ML predictions
within each task and then borrows strength across tasks by using those same
predictions as a reference point for shrinkage. The amount of shrinkage is
determined by minimizing an unbiased estimate of risk, and we prove that this
tuning strategy is asymptotically optimal. Experiments on both synthetic and
real-world datasets show that PAS adapts to the reliability of the ML
predictions and outperforms traditional and modern baselines in large-scale
applications.
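The "shrink toward ML predictions" idea can be illustrated with a positive-part James-Stein rule. This is a simplified stand-in, not the PAS estimator itself: PAS additionally debiases the predictions and tunes the shrinkage by minimizing an unbiased risk estimate, and `sigma2` here is an assumed known noise variance.

```python
def shrink_toward_predictions(ybar, preds, sigma2):
    """Positive-part James-Stein shrinkage of noisy per-task means
    toward ML predictions (simplified stand-in for PAS)."""
    k = len(ybar)
    ss = sum((y - p) ** 2 for y, p in zip(ybar, preds))
    # lam -> 0: trust the predictions; lam -> 1: trust the raw means
    lam = max(0.0, 1.0 - (k - 2) * sigma2 / ss) if ss > 0 else 0.0
    return [p + lam * (y - p) for y, p in zip(ybar, preds)]

ybar = [1.2, 2.1, 2.9, 4.2]        # noisy gold-standard means
good = [1.0, 2.0, 3.0, 4.0]        # accurate ML predictions
bad = [10.0, 10.0, 10.0, 10.0]     # uninformative ML predictions
print(shrink_toward_predictions(ybar, good, sigma2=0.25))  # collapses to preds
print(shrink_toward_predictions(ybar, bad, sigma2=0.25))   # stays near ybar
```

The data-driven weight adapts to prediction quality: accurate predictions absorb the noisy means, while poor ones are effectively ignored, which is the adaptivity PAS formalizes.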
|
2502.14168
|
Deep learning based infrared small object segmentation: Challenges and
future directions
|
cs.CV
|
Infrared sensing is a core method for supporting unmanned systems, such as
autonomous vehicles and drones. Recently, infrared sensors have been widely
deployed on mobile and stationary platforms for the detection and classification
of objects from long distances and across wide fields of view. Given its success in
the vision image analysis domain, deep learning has also been applied for
object recognition in infrared images. However, techniques that have proven
successful in visible light perception face new challenges in the infrared
domain. These challenges include extremely low signal-to-noise ratios in
infrared images, very small and blurred objects of interest, and limited
availability of labeled/unlabeled training data due to the specialized nature
of infrared sensors. Numerous methods have been proposed in the literature for
the detection and classification of small objects in infrared images achieving
varied levels of success. There is a need for a survey paper that critically
analyzes existing techniques in this domain, identifies unsolved challenges and
provides future research directions. This paper fills this gap and offers a
concise and insightful review of deep learning-based methods. It identifies the
challenges faced by existing infrared object segmentation methods, provides a
structured review of existing infrared perception methods from the perspective
of these challenges, and highlights the motivations behind the various
approaches. Finally, this review suggests promising future
directions based on recent advancements within this domain.
|
2502.14170
|
Blockchain-based Framework for Scalable and Incentivized Federated
Learning
|
cs.LG cs.DC
|
Federated Learning (FL) enables collaborative model training without sharing
raw data, preserving privacy while harnessing distributed datasets. However,
traditional FL systems often rely on centralized aggregation mechanisms,
introducing trust issues, single points of failure, and limited mechanisms for
incentivizing meaningful client contributions. These challenges are exacerbated
as FL scales to train resource-intensive models, such as large language models
(LLMs), requiring scalable, decentralized solutions. This paper presents a
blockchain-based FL framework that addresses these limitations by integrating
smart contracts and a novel hybrid incentive mechanism. The framework automates
critical FL tasks, including client registration, update validation, reward
distribution, and maintaining a transparent global state. The hybrid incentive
mechanism combines on-chain alignment-based rewards, off-chain fairness checks,
and consistency multipliers to ensure fairness, transparency, and sustained
engagement. We evaluate the framework through gas cost analysis, demonstrating
its feasibility for different scales of federated learning scenarios.
|
2502.14171
|
Enhancing Conversational Agents with Theory of Mind: Aligning Beliefs,
Desires, and Intentions for Human-Like Interaction
|
cs.CL
|
Natural language interaction with agentic Artificial Intelligence (AI),
driven by Large Language Models (LLMs), is expected to remain a dominant
paradigm in the near future. While humans instinctively align their
communication with mental states -- an ability known as Theory of Mind (ToM)
-- current LLM-powered systems exhibit significant limitations in this regard.
This study examines the extent to which open-source language models (LLaMA)
can capture and preserve ToM-related information, and how effectively that
information contributes to consistent ToM reasoning in generated responses. We
further investigate whether explicit manipulation of ToM-related components,
such as beliefs, desires, and intentions, can enhance response alignment.
Experiments on two LLaMA 3 variants demonstrate that incorporating
ToM-informed alignment improves response quality, achieving win rates of 67
and 63 percent for the 3B and 8B models, respectively. These findings
highlight the potential of ToM-driven strategies to improve alignment in
LLM-based conversational agents.
|
2502.14172
|
Finite Sample Analysis of Distributional TD Learning with Linear
Function Approximation
|
stat.ML cs.LG
|
In this paper, we investigate the finite-sample statistical rates of
distributional temporal difference (TD) learning with linear function
approximation. The aim of distributional TD learning is to estimate the return
distribution of a discounted Markov decision process for a given policy $\pi$.
Prior works on statistical analysis of distributional TD learning mainly focus
on the tabular case. In contrast, we first consider the linear function
approximation setting and derive sharp finite-sample rates. Our theoretical
results demonstrate that the sample complexity of linear distributional TD
learning matches that of the classic linear TD learning. This implies that,
with linear function approximation, learning the full distribution of the
return using streaming data is no more difficult than learning its expectation
(i.e. the value function). To derive tight sample complexity bounds, we conduct
a fine-grained analysis of the linear-categorical Bellman equation, and employ
the exponential stability arguments for products of random matrices. Our
findings provide new insights into the statistical efficiency of distributional
reinforcement learning algorithms.
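The abstract's claim, that distributional TD matches the sample complexity of classic linear TD, is easiest to appreciate against the baseline it references. The sketch below implements plain linear TD(0), not the linear-categorical distributional algorithm; the deterministic two-state chain, one-hot features, and step size are illustrative choices.

```python
import numpy as np

def linear_td0(phi, transitions, rewards, gamma=0.9, alpha=0.1, steps=5000):
    """Linear TD(0): w <- w + alpha * (r + gamma*phi(s')@w - phi(s)@w) * phi(s).

    phi: (num_states, d) feature matrix; transitions: next state per state
    (a deterministic chain, for simplicity); rewards: reward per state.
    """
    w = np.zeros(phi.shape[1])
    s = 0
    for _ in range(steps):
        s_next = transitions[s]
        td_error = rewards[s] + gamma * phi[s_next] @ w - phi[s] @ w
        w += alpha * td_error * phi[s]
        s = s_next
    return w
```

With one-hot features this recovers tabular TD; for the two-state cycle with rewards (1, 0) and gamma = 0.9, the value estimates converge to the fixed point 1/(1 - 0.81) and 0.9/(1 - 0.81).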
|
2502.14174
|
Weighted Low-rank Approximation via Stochastic Gradient Descent on
Manifolds
|
math.OC cs.AI cs.LG stat.ML
|
We solve a regularized weighted low-rank approximation problem using
stochastic gradient descent on a manifold. To guarantee the convergence of our
stochastic gradient descent, we establish a convergence theorem on manifolds
for retraction-based stochastic gradient descents admitting confinements. On
sample data from the Netflix Prize training dataset, our algorithm outperforms
the existing stochastic gradient descent on Euclidean spaces. We also compare
the accelerated line search on this manifold to the existing accelerated line
search on Euclidean spaces.
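A minimal sketch of retraction-based descent on the fixed-rank manifold: take a Euclidean gradient step on the regularized weighted objective in the ambient space, then retract back to rank r via truncated SVD (the metric projection). The authors' method is a stochastic variant with convergence guarantees; the full-gradient loop and the SVD retraction here are simplifying assumptions for illustration, not their algorithm.

```python
import numpy as np

def weighted_lowrank(A, W, rank, lr=0.3, epochs=300, reg=0.0, seed=0):
    """Minimize 0.5*||W * (X - A)||_F^2 + 0.5*reg*||X||_F^2 over rank-r X.

    Each iteration: Euclidean gradient step in the ambient space, then a
    retraction onto the rank-r manifold by truncated SVD.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Small random rank-r starting point on the manifold.
    X = 0.01 * rng.standard_normal((m, rank)) @ rng.standard_normal((rank, n))
    for _ in range(epochs):
        G = W * (X - A) + reg * X                    # Euclidean gradient
        Y = X - lr * G                               # ambient gradient step
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]     # SVD retraction to rank r
    return X
```

Zero-weight entries in `W` drop out of the gradient, so the same loop covers the matrix-completion setting where only observed ratings contribute.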
|
2502.14176
|
A modal logic translation of the AGM axioms for belief revision
|
cs.LO cs.AI
|
Building on the analysis of Bonanno (Artificial Intelligence, 2025) we
introduce a simple modal logic containing three modal operators: a unimodal
belief operator, a bimodal conditional operator and the unimodal global
operator. For each AGM axiom for belief revision, we provide a corresponding
modal axiom. The correspondence is as follows: each AGM axiom is characterized
by a property of the Kripke-Lewis frames considered in Bonanno (Artificial
Intelligence, 2025) and, in turn, that property characterizes the proposed
modal axiom.
|
2502.14177
|
InstaSHAP: Interpretable Additive Models Explain Shapley Values
Instantly
|
cs.LG stat.ML
|
In recent years, the Shapley value and SHAP explanations have emerged as a
dominant paradigm for providing post-hoc explanations of black-box
models. Despite their well-founded theoretical properties, many recent works
have focused on the limitations in both their computational efficiency and
their representation power. The underlying connection with additive models,
however, is left critically under-emphasized in the current literature. In this
work, we find that a variational perspective linking GAM models and SHAP
explanations is able to provide deep insights into nearly all recent
developments. In light of this connection, we borrow in the other direction to
develop a new method to train interpretable GAM models which are automatically
purified to compute the Shapley value in a single forward pass. Finally, we
provide theoretical results showing that the limited representation power of
GAM models is the same Achilles' heel present in SHAP, and discuss the implications
for SHAP's modern usage in CV and NLP.
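The GAM-SHAP connection referenced above has a concrete, easily checked instance: for a purified additive model f(x) = sum_i f_i(x_i), the Shapley value of feature i under the marginal value function is simply f_i(x_i) minus its background average, so one forward pass suffices. The sketch below verifies this closed form against the brute-force coalition formula; the shape functions and background data are made up for illustration, and this is not the authors' InstaSHAP training procedure.

```python
import itertools
import math

import numpy as np

def shap_additive(shape_fns, x, background):
    """Shapley values of an additive model f(x) = sum_i f_i(x_i) in one pass:
    phi_i = f_i(x_i) - mean over background rows b of f_i(b_i)."""
    return np.array([fn(x[i]) - np.mean([fn(b[i]) for b in background])
                     for i, fn in enumerate(shape_fns)])

def shap_bruteforce(shape_fns, x, background):
    """Reference implementation via the Shapley coalition formula, using the
    marginal value function: absent features are averaged over the background."""
    d = len(x)

    def v(S):
        return sum(fn(x[i]) if i in S else np.mean([fn(b[i]) for b in background])
                   for i, fn in enumerate(shape_fns))

    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):
            for S in itertools.combinations(others, k):
                wgt = (math.factorial(k) * math.factorial(d - k - 1)
                       / math.factorial(d))
                phi[i] += wgt * (v(set(S) | {i}) - v(set(S)))
    return phi
```

Because the marginal contribution of feature i is the same constant for every coalition, the two functions agree exactly; any interaction terms would break this equality and reintroduce SHAP's exponential cost.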
|