| id | title | categories | abstract |
|---|---|---|---|
| 2502.09793 | Noise Controlled CT Super-Resolution with Conditional Diffusion Model | cs.CV | Improving the spatial resolution of CT images is a meaningful yet challenging task, often accompanied by the issue of noise amplification. This article introduces an innovative framework for noise-controlled CT super-resolution utilizing the conditional diffusion model. The model is trained on hybrid datasets, combining noise-matched simulation data with segmented details from real data. Experimental results with real CT images validate the effectiveness of our proposed framework, showing its potential for practical applications in CT imaging. |
| 2502.09794 | Reconstruction of frequency-localized functions from pointwise samples via least squares and deep learning | math.CA cs.LG | Recovering frequency-localized functions from pointwise data is a fundamental task in signal processing. We examine this problem from an approximation-theoretic perspective, focusing on least squares and deep learning-based methods. First, we establish a novel recovery theorem for least squares approximations using the Slepian basis from uniform random samples in low dimensions, explicitly tracking the dependence of the bandwidth on the sampling complexity. Building on these results, we then present a recovery guarantee for approximating bandlimited functions via deep learning from pointwise data. This result, framed as a practical existence theorem, provides conditions on the network architecture, training procedure, and data acquisition sufficient for accurate approximation. To complement our theoretical findings, we perform numerical comparisons between least squares and deep learning for approximating one- and two-dimensional functions. We conclude with a discussion of the theoretical limitations and the practical gaps between theory and implementation. |
| 2502.09795 | Vision-based Geo-Localization of Future Mars Rotorcraft in Challenging Illumination Conditions | cs.CV cs.RO | Planetary exploration using aerial assets has the potential for unprecedented scientific discoveries on Mars. While NASA's Mars helicopter Ingenuity proved that flight in the Martian atmosphere is possible, future Mars rotorcraft will require advanced navigation capabilities for long-range flights. One such critical capability is Map-based Localization (MbL), which registers an onboard image to a reference map during flight in order to mitigate cumulative drift from visual odometry. However, significant illumination differences between rotorcraft observations and a reference map prove challenging for traditional MbL systems, restricting the operational window of the vehicle. In this work, we investigate a new MbL system and propose Geo-LoFTR, a geometry-aided deep learning model for image registration that is more robust under large illumination differences than prior models. The system is supported by a custom simulation framework that uses real orbital maps to produce large amounts of realistic images of the Martian terrain. Comprehensive evaluations show that our proposed system outperforms prior MbL efforts in terms of localization accuracy under significant lighting and scale variations. Furthermore, we demonstrate the validity of our approach across a simulated Martian day. |
| 2502.09797 | A Survey on LLM-based News Recommender Systems | cs.IR cs.AI | News recommender systems play a critical role in mitigating the information overload problem. In recent years, due to the successful applications of large language model technologies, researchers have utilized Discriminative Large Language Models (DLLMs) or Generative Large Language Models (GLLMs) to improve the performance of news recommender systems. Although several recent surveys review significant challenges for deep learning-based news recommender systems, such as fairness, privacy preservation, and responsibility, there is a lack of a systematic survey on Large Language Model (LLM)-based news recommender systems. In order to review different core methodologies and explore potential issues systematically, we categorize DLLM-based and GLLM-based news recommender systems under the umbrella of LLM-based news recommender systems. In this survey, we first review the development of deep learning-based news recommender systems. Then, we review LLM-based news recommender systems based on three aspects: news-oriented modeling, user-oriented modeling, and prediction-oriented modeling. Next, we examine the challenges from various perspectives, including datasets, benchmarking tools, and methodologies. Furthermore, we conduct extensive experiments to analyze how large language model technologies affect the performance of different news recommender systems. Finally, we comprehensively explore the future directions for LLM-based news recommendations in the era of LLMs. |
| 2502.09799 | Co-designing Large Language Model Tools for Project-Based Learning with K12 Educators | cs.HC cs.AI cs.CY | The emergence of generative AI, particularly large language models (LLMs), has opened the door for student-centered and active learning methods like project-based learning (PBL). However, PBL poses practical implementation challenges for educators around project design and management, assessment, and balancing student guidance with student autonomy. The following research documents a co-design process with interdisciplinary K-12 teachers to explore and address the current PBL challenges they face. Through teacher-driven interviews, collaborative workshops, and iterative design of wireframes, we gathered evidence for ways LLMs can support teachers in implementing high-quality PBL pedagogy by automating routine tasks and enhancing personalized learning. Teachers in the study advocated for supporting their professional growth and augmenting their current roles without replacing them. They also identified affordances and challenges around classroom integration, including resource requirements and constraints, ethical concerns, and potential immediate and long-term impacts. Drawing on these, we propose design guidelines for future deployment of LLM tools in PBL. |
| 2502.09804 | Acute Lymphoblastic Leukemia Diagnosis Employing YOLOv11, YOLOv8, ResNet50, and Inception-ResNet-v2 Deep Learning Models | eess.IV cs.AI cs.CV cs.LG | Thousands of individuals succumb annually to leukemia alone. As artificial intelligence-driven technologies continue to evolve and advance, the question of their applicability and reliability remains unresolved. This study aims to utilize image processing and deep learning methodologies to achieve state-of-the-art results for the detection of Acute Lymphoblastic Leukemia (ALL) using data that best represents real-world scenarios. ALL is one of several types of blood cancer, and it is an aggressive form of leukemia. In this investigation, we examine the most recent advancements in ALL detection, as well as the latest iteration of the YOLO series and its performance. We address the question of whether white blood cells are malignant or benign. Additionally, the proposed models can identify different ALL stages, including early stages. Furthermore, these models can detect hematogones despite their frequent misclassification as ALL. By utilizing advanced deep learning models, namely YOLOv8, YOLOv11, ResNet50, and Inception-ResNet-v2, the study achieves accuracy rates as high as 99.7%, demonstrating the effectiveness of these algorithms across multiple datasets and various real-world situations. |
| 2502.09805 | Towards Patient-Specific Surgical Planning for Bicuspid Aortic Valve Repair: Fully Automated Segmentation of the Aortic Valve in 4D CT | eess.IV cs.CV | The bicuspid aortic valve (BAV) is the most prevalent congenital heart defect and may require surgery for complications such as stenosis, regurgitation, and aortopathy. BAV repair surgery is effective but challenging due to the heterogeneity of BAV morphology. Multiple imaging modalities can be employed to assist the quantitative assessment of BAVs for surgical planning. Contrast-enhanced 4D computed tomography (CT) produces volumetric temporal sequences with excellent contrast and spatial resolution. Segmentation of the aortic cusps and root in these images is an essential step in creating patient-specific models for visualization and quantification. While deep learning-based methods are capable of fully automated segmentation, no BAV-specific model exists. Among valve segmentation studies, there has been limited quantitative assessment of the clinical usability of the segmentation results. In this work, we developed a fully automated multi-label BAV segmentation pipeline based on nnU-Net. The predicted segmentations were used to carry out surgically relevant morphological measurements including geometric cusp height, commissural angle and annulus diameter, and the results were compared against manual segmentation. Automated segmentation achieved average Dice scores of over 0.7 and symmetric mean distance below 0.7 mm for all three aortic cusps and the root wall. Clinically relevant benchmarks showed good consistency between manual and predicted segmentations. Overall, fully automated BAV segmentation of 3D frames in 4D CT can produce clinically usable measurements for surgical risk stratification, but the temporal consistency of segmentations needs to be improved. |
| 2502.09806 | Prioritized Ranking Experimental Design Using Recommender Systems in Two-Sided Platforms | econ.EM cs.IR cs.SI stat.ME | Interdependencies between units in online two-sided marketplaces complicate estimating causal effects in experimental settings. We propose a novel experimental design to mitigate the interference bias in estimating the total average treatment effect (TATE) of item-side interventions in online two-sided marketplaces. Our Two-Sided Prioritized Ranking (TSPR) design uses the recommender system as an instrument for experimentation. TSPR strategically prioritizes items based on their treatment status in the listings displayed to users. We designed TSPR to provide users with a coherent platform experience by ensuring access to all items and a consistent realization of their treatment by all users. We evaluate our experimental design through simulations using a search impression dataset from an online travel agency. Our methodology closely estimates the true simulated TATE, while a baseline item-side estimator significantly overestimates TATE. |
| 2502.09809 | AgentGuard: Repurposing Agentic Orchestrator for Safety Evaluation of Tool Orchestration | cs.CR cs.AI | The integration of tool use into large language models (LLMs) enables agentic systems with real-world impact. At the same time, unlike standalone LLMs, compromised agents can execute malicious workflows with more consequential impact, owing to their tool-use capability. We propose AgentGuard, a framework to autonomously discover and validate unsafe tool-use workflows, followed by generating safety constraints to confine the behaviors of agents, establishing a baseline safety guarantee at deployment. AgentGuard leverages the LLM orchestrator's innate capabilities - knowledge of tool functionalities, scalable and realistic workflow generation, and tool execution privileges - to act as its own safety evaluator. The framework operates through four phases: identifying unsafe workflows, validating them in real-world execution, generating safety constraints, and validating constraint efficacy. The output, an evaluation report with unsafe workflows, test cases, and validated constraints, enables multiple security applications. We empirically demonstrate AgentGuard's feasibility with experiments. With this exploratory work, we hope to inspire the establishment of standardized testing and hardening procedures for LLM agents to enhance their trustworthiness in real-world applications. |
| 2502.09810 | $\Lambda$CDM and early dark energy in latent space: a data-driven parametrization of the CMB temperature power spectrum | astro-ph.CO astro-ph.IM cs.LG | Finding the best parametrization for cosmological models in the absence of first-principle theories is an open question. We propose a data-driven parametrization of cosmological models given by the disentangled 'latent' representation of a variational autoencoder (VAE) trained to compress cosmic microwave background (CMB) temperature power spectra. We consider a broad range of $\Lambda$CDM and beyond-$\Lambda$CDM cosmologies with an additional early dark energy (EDE) component. We show that these spectra can be compressed into 5 ($\Lambda$CDM) or 8 (EDE) independent latent parameters, as expected when using temperature power spectra alone, and which reconstruct spectra at an accuracy well within the Planck errors. These latent parameters have a physical interpretation in terms of well-known features of the CMB temperature spectrum: these include the position, height and even-odd modulation of the acoustic peaks, as well as the gravitational lensing effect. The VAE also discovers one latent parameter which entirely isolates the EDE effects from those related to $\Lambda$CDM parameters, thus revealing a previously unknown degree of freedom in the CMB temperature power spectrum. We further showcase how to place constraints on the latent parameters using Planck data as typically done for cosmological parameters, obtaining latent values consistent with previous $\Lambda$CDM and EDE cosmological constraints. Our work demonstrates the potential of a data-driven reformulation of current beyond-$\Lambda$CDM phenomenological models into the independent degrees of freedom to which the data observables are sensitive. |
| 2502.09812 | Face Deepfakes -- A Comprehensive Review | cs.CV cs.LG | In recent years, remarkable advancements in deepfake generation technology have led to unprecedented leaps in its realism and capabilities. Despite these advances, we observe a notable lack of structured and deep analysis of deepfake technology. The principal aim of this survey is to contribute a thorough theoretical analysis of state-of-the-art face deepfake generation and detection methods. Furthermore, we provide a coherent and systematic evaluation of the implications of deepfakes on face biometric recognition approaches. In addition, we outline key applications of face deepfake technology, elucidating both positive and negative applications of the technology, provide a detailed discussion regarding the gaps in existing research, and propose key research directions for further investigation. |
| 2502.09813 | Suture Thread Modeling Using Control Barrier Functions for Autonomous Surgery | cs.RO cs.SY eess.SY | Automating surgical systems enhances precision and safety while reducing human involvement in high-risk environments. A major challenge in automating surgical procedures like suturing is accurately modeling the suture thread, a highly flexible and compliant component. Existing models either lack the accuracy needed for safety-critical procedures or are too computationally intensive for real-time execution. In this work, we introduce a novel approach for modeling suture thread dynamics using control barrier functions (CBFs), achieving both realism and computational efficiency. Thread-like behavior, collision avoidance, stiffness, and damping are all modeled within a unified CBF and control Lyapunov function (CLF) framework. Our approach eliminates the need to calculate complex forces or solve differential equations, significantly reducing computational overhead while maintaining a realistic model suitable for both automation and virtual reality surgical training systems. The framework also allows visual cues to be provided based on the thread's interaction with the environment, enhancing user experience when performing suture or ligation tasks. The proposed model is tested on the MagnetoSuture system, a minimally invasive robotic surgical platform that uses magnetic fields to manipulate suture needles, offering a less invasive solution for surgical procedures. |
| 2502.09814 | INJONGO: A Multicultural Intent Detection and Slot-filling Dataset for 16 African Languages | cs.CL | Slot-filling and intent detection are well-established tasks in Conversational AI. However, current large-scale benchmarks for these tasks often exclude evaluations of low-resource languages and rely on translations from English benchmarks, thereby predominantly reflecting Western-centric concepts. In this paper, we introduce Injongo -- a multicultural, open-source benchmark dataset for 16 African languages with utterances generated by native speakers across diverse domains, including banking, travel, home, and dining. Through extensive experiments, we benchmark fine-tuned multilingual transformer models and prompted large language models (LLMs), and show the advantage of leveraging African-cultural utterances over Western-centric utterances for improving cross-lingual transfer from the English language. Experimental results reveal that current LLMs struggle with the slot-filling task, with GPT-4o achieving an average F1-score of 26. In contrast, intent detection performance is notably better, with an average accuracy of 70.6%, though it still falls behind the fine-tuning baselines. Compared to the English language, GPT-4o and fine-tuning baselines perform similarly on intent detection, achieving an accuracy of approximately 81%. Our findings suggest that the performance of LLMs is still behind for many low-resource African languages, and more work is needed to further improve their downstream performance. |
| 2502.09815 | Statistical Coherence Alignment for Large Language Model Representation Learning Through Tensor Field Convergence | cs.CL | Representation learning plays a central role in structuring internal embeddings to capture the statistical properties of language, influencing the coherence and contextual consistency of generated text. Statistical Coherence Alignment is introduced as a method to enforce structured token representations through tensor field convergence, guiding embeddings to reflect statistical dependencies inherent in linguistic data. A mathematical framework is established to quantify coherence alignment, integrating a loss function that optimizes representational consistency across training iterations. Empirical evaluations demonstrate that applying coherence constraints improves perplexity, enhances classification accuracy, and refines rare word embeddings, contributing to a more stable representation space. Comparative analyses with baseline models reveal that the proposed method fosters a more interpretable internal structure, ensuring that embeddings retain contextual dependencies while mitigating representation collapse. The impact on coherence score distributions suggests that the alignment mechanism strengthens semantic integrity across diverse linguistic constructs, leading to a more balanced organization of learned embeddings. Computational assessments indicate that while the method introduces additional memory and training costs, the structured optimization process justifies the trade-offs in applications requiring heightened contextual fidelity. Experimental results validate the effectiveness of coherence alignment in optimizing token representations, providing insights into how statistical dependencies can be leveraged to improve language model training. |
| 2502.09817 | Vector Linear Secure Aggregation | cs.IT math.IT | The secure summation problem, where $K$ users wish to compute the sum of their inputs at a server while revealing nothing about all $K$ inputs beyond the desired sum, is generalized in two aspects: first, the desired function is an arbitrary linear function (multiple linear combinations) of the $K$ inputs instead of just the sum; second, rather than protecting all $K$ inputs, we wish to guarantee that no information is leaked about an arbitrary linear function of the $K$ inputs. For this vector linear generalization of the secure summation problem, we characterize the optimal randomness cost, i.e., to compute one instance of the desired vector linear function, the minimum number of random key variables held by the users is equal to the dimension of the vector space that is in the span of the vectors formed by the coefficients of the linear function to protect but not in the span of the vectors formed by the coefficients of the linear function to compute. |
| 2502.09818 | On the robustness of multimodal language model towards distractions | cs.CV | Although vision-language models (VLMs) have achieved significant success in various applications such as visual question answering, their resilience to prompt variations remains an under-explored area. Understanding how distractions affect VLMs is crucial for improving their real-world applicability, as inputs could have noisy and irrelevant information in many practical scenarios. This paper aims to assess the robustness of VLMs against both visual and textual distractions in the context of science question answering. Building on the ScienceQA dataset, we developed a new benchmark that introduces distractions in both the visual and textual contexts to evaluate the reasoning capacity of VLMs amid these distractions. Our findings reveal that most state-of-the-art VLMs, including GPT-4, are vulnerable to various types of distractions, experiencing noticeable degradation in reasoning capabilities when confronted with distractions. Notably, models such as InternVL2 demonstrate a higher degree of robustness to these distractions. We also found that models exhibit greater sensitivity to textual distractions than visual ones. Additionally, we explored various mitigation strategies, such as prompt engineering, to counteract the impact of distractions. While these strategies improved solution accuracy, our analysis shows that there remain significant opportunities for improvement. |
| 2502.09819 | A Solver-Aided Hierarchical Language for LLM-Driven CAD Design | cs.CV cs.AI cs.GR cs.LG cs.PL | Large language models (LLMs) have been enormously successful in solving a wide variety of structured and unstructured generative tasks, but they struggle to generate procedural geometry in Computer Aided Design (CAD). These difficulties arise from an inability to do spatial reasoning and the necessity to guide a model through complex, long-range planning to generate complex geometry. We enable generative CAD design with LLMs through the introduction of a solver-aided, hierarchical domain-specific language (DSL) called AIDL, which offloads the spatial reasoning requirements to a geometric constraint solver. Additionally, we show that in the few-shot regime, AIDL outperforms even a language with in-training data (OpenSCAD), both in terms of generating visual results closer to the prompt and creating objects that are easier to post-process and reason about. |
| 2502.09822 | ATM-Net: Adaptive Termination and Multi-Precision Neural Networks for Energy-Harvested Edge Intelligence | cs.LG | ATM-Net is a novel neural network architecture tailored for energy-harvested IoT devices, integrating adaptive termination points with multi-precision computing. It dynamically adjusts computational precision (32/8/4-bit) and network depth based on energy availability via early exit points. An energy-aware task scheduler optimizes the energy-accuracy trade-off. Experiments on CIFAR-10, PlantVillage, and TissueMNIST show ATM-Net achieves up to 96.93% accuracy while reducing power consumption by 87.5% with Q4 quantization compared to 32-bit operations. The power-delay product improves from 13.6 J to 0.141 J for DenseNet-121 and from 10.3 J to 0.106 J for ResNet-18, demonstrating its suitability for energy-harvesting systems. |
| 2502.09824 | PUGS: Perceptual Uncertainty for Grasp Selection in Underwater Environments | cs.RO cs.CV | When navigating and interacting in challenging environments where sensory information is imperfect and incomplete, robots must make decisions that account for these shortcomings. We propose a novel method for quantifying and representing such perceptual uncertainty in 3D reconstruction through occupancy uncertainty estimation. We develop a framework to incorporate it into grasp selection for autonomous manipulation in underwater environments. Instead of treating each measurement equally when deciding which location to grasp from, we present a framework that propagates uncertainty inherent in the multi-view reconstruction process into the grasp selection. We evaluate our method with both simulated and real-world data, showing that by accounting for uncertainty, the grasp selection becomes robust against partial and noisy measurements. Code will be made available at https://onurbagoren.github.io/PUGS/ |
| 2502.09826 | Safe Reinforcement Learning-based Control for Hydrogen Diesel Dual-Fuel Engines | eess.SY cs.SY | The urgent energy transition requirements towards a sustainable future stretch across various industries and are a significant challenge facing humanity. Hydrogen promises a clean, carbon-free future, with the opportunity to integrate with existing solutions in the transportation sector. However, adding hydrogen to existing technologies such as diesel engines requires additional modeling effort. Reinforcement Learning (RL) enables interactive data-driven learning that eliminates the need for mathematical modeling. The algorithms, however, may not be real-time capable and need large amounts of data to work in practice. This paper presents a novel approach which uses offline model learning with RL to demonstrate safe control of a 4.5 L Hydrogen Diesel Dual-Fuel (H2DF) engine. The controllers are demonstrated to be constraint compliant and can leverage a novel state-augmentation approach for sample-efficient learning. The offline policy is subsequently experimentally validated on the real engine where the control algorithm is executed on a Raspberry Pi controller and requires 6 times less computation time compared to online Model Predictive Control (MPC) optimization. |
| 2502.09827 | Data and Decision Traceability for SDA TAP Lab's Prototype Battle Management System | cs.IR cs.CR | Space Protocol is applying the principles derived from MITRE and NIST's Supply Chain Traceability: Manufacturing Meta-Framework (NIST IR 8536) to a complex multi-party system to achieve introspection, auditing, and replay of data and decisions that ultimately lead to an end decision. The core goal of decision traceability is to ensure transparency, accountability, and integrity within the WA system. This is accomplished by providing a clear, auditable path from the system's inputs all the way to the final decision. This traceability enables the system to track the various algorithms and data flows that have influenced a particular outcome. |
| 2502.09829 | Efficient Evaluation of Multi-Task Robot Policies With Active Experiment Selection | cs.RO cs.AI cs.LG | Evaluating learned robot control policies to determine their physical task-level capabilities costs experimenter time and effort. The growing number of policies and tasks exacerbates this issue. It is impractical to test every policy on every task multiple times; each trial requires a manual environment reset, and each task change involves re-arranging objects or even changing robots. Naively selecting a random subset of tasks and policies to evaluate is a high-cost solution with unreliable, incomplete results. In this work, we formulate robot evaluation as an active testing problem. We propose to model the distribution of robot performance across all tasks and policies as we sequentially execute experiments. Tasks often share similarities that can reveal potential relationships in policy behavior, and we show that natural language is a useful prior in modeling these relationships between tasks. We then leverage this formulation to reduce the experimenter effort by using a cost-aware expected information gain heuristic to efficiently select informative trials. Our framework accommodates both continuous and discrete performance outcomes. We conduct experiments on existing evaluation data from real robots and simulations. By prioritizing informative trials, our framework reduces the cost of calculating evaluation metrics for robot policies across many tasks. |
| 2502.09831 | Learning Fair Policies for Infectious Diseases Mitigation using Path Integral Control | cs.LG math.OC | Infectious diseases pose major public health challenges to society, highlighting the importance of designing effective policies to reduce economic loss and mortality. In this paper, we propose a framework for sequential decision-making under uncertainty to design fairness-aware disease mitigation policies that incorporate various measures of unfairness. Specifically, our approach learns equitable vaccination and lockdown strategies based on a stochastic multi-group SIR model. To address the challenges of solving the resulting sequential decision-making problem, we adopt the path integral control algorithm as an efficient solution scheme. Through a case study, we demonstrate that our approach effectively improves fairness compared to conventional methods and provides valuable insights for policymakers. |
| 2502.09832 | Algorithmic contiguity from low-degree conjecture and applications in correlated random graphs | stat.ML cs.DS cs.LG math.PR math.ST stat.TH | In this paper, assuming a natural strengthening of the low-degree conjecture, we provide evidence of computational hardness for two problems: (1) the (partial) matching recovery problem in the sparse correlated Erd\H{o}s-R\'enyi graphs $\mathcal G(n,q;\rho)$ when the edge-density $q=n^{-1+o(1)}$ and the correlation $\rho<\sqrt{\alpha}$ lies below Otter's threshold, solving a remaining problem in \cite{DDL23+}; (2) the detection problem between the correlated sparse stochastic block model $\mathcal S(n,\tfrac{\lambda}{n};k,\epsilon;s)$ and a pair of independent stochastic block models $\mathcal S(n,\tfrac{\lambda s}{n};k,\epsilon)$ when $\epsilon^2 \lambda s<1$ lies below the Kesten-Stigum (KS) threshold and $s<\sqrt{\alpha}$ lies below Otter's threshold, solving a remaining problem in \cite{CDGL24+}. One of the main ingredients in our proof is to derive certain forms of \emph{algorithmic contiguity} between two probability measures based on bounds on their low-degree advantage. To be more precise, consider the high-dimensional hypothesis testing problem between two probability measures $\mathbb{P}$ and $\mathbb{Q}$ based on the sample $\mathsf Y$. We show that if the low-degree advantage $\mathsf{Adv}_{\leq D} \big( \frac{\mathrm{d}\mathbb{P}}{\mathrm{d}\mathbb{Q}} \big)=O(1)$, then (assuming the low-degree conjecture) there is no efficient algorithm $\mathcal A$ such that $\mathbb{Q}(\mathcal A(\mathsf Y)=0)=1-o(1)$ and $\mathbb{P}(\mathcal A(\mathsf Y)=1)=\Omega(1)$. This framework provides a useful tool for performing reductions between different inference tasks. |
| 2502.09838 | HealthGPT: A Medical Large Vision-Language Model for Unifying Comprehension and Generation via Heterogeneous Knowledge Adaptation | cs.CV cs.AI | We present HealthGPT, a powerful Medical Large Vision-Language Model (Med-LVLM) that integrates medical visual comprehension and generation capabilities within a unified autoregressive paradigm. Our bootstrapping philosophy is to progressively adapt heterogeneous comprehension and generation knowledge to pre-trained large language models (LLMs). This is achieved through a novel heterogeneous low-rank adaptation (H-LoRA) technique, which is complemented by a tailored hierarchical visual perception approach and a three-stage learning strategy. To train HealthGPT effectively, we devise a comprehensive medical domain-specific comprehension and generation dataset called VL-Health. Experimental results demonstrate the exceptional performance and scalability of HealthGPT in medical visual unified tasks. Our project can be accessed at https://github.com/DCDmllm/HealthGPT. |
2502.09843
|
MuDoC: An Interactive Multimodal Document-grounded Conversational AI
System
|
cs.AI cs.HC cs.MM
|
Multimodal AI is an important step towards building effective tools to
leverage multiple modalities in human-AI communication. Building a multimodal
document-grounded AI system to interact with long documents remains a
challenge. Our work aims to fill the research gap of directly leveraging
grounded visuals from documents alongside their textual content for
response generation. We present an interactive conversational AI agent 'MuDoC'
based on GPT-4o to generate document-grounded responses with interleaved text
and figures. MuDoC's intelligent textbook interface promotes trustworthiness
and enables verification of system responses by allowing instant navigation to
source text and figures in the documents. We also discuss qualitative
observations based on MuDoC responses highlighting its strengths and
limitations.
|
2502.09844
|
Solving Empirical Bayes via Transformers
|
cs.LG stat.ML
|
This work applies modern AI tools (transformers) to solving one of the oldest
statistical problems: Poisson means under empirical Bayes (Poisson-EB) setting.
In Poisson-EB a high-dimensional mean vector $\theta$ (with iid coordinates
sampled from an unknown prior $\pi$) is estimated on the basis of
$X=\mathrm{Poisson}(\theta)$. A transformer model is pre-trained on a set of
synthetically generated pairs $(X,\theta)$ and learns to do in-context learning
(ICL) by adapting to unknown $\pi$. Theoretically, we show that a sufficiently
wide transformer can achieve vanishing regret with respect to an oracle
estimator that knows $\pi$ as the dimension grows to infinity. Practically, we
discover that already very small models (100k parameters) are able to
outperform the best classical algorithm (non-parametric maximum likelihood, or
NPMLE) both in runtime and validation loss, which we compute on
out-of-distribution synthetic data as well as real-world datasets (NHL hockey,
MLB baseball, BookCorpusOpen). Finally, by using linear probes, we confirm that
the transformer's EB estimator appears to internally work differently from
either NPMLE or Robbins' estimators.
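For context, the classical Robbins estimator referenced above admits a one-line form; a minimal NumPy sketch (our illustration, not code from the paper):

```python
import numpy as np

def robbins_estimator(X):
    """Robbins' nonparametric empirical-Bayes estimate of the Poisson means:
    E[theta | X = x] is approximated by (x + 1) * N(x + 1) / N(x), where
    N(k) is the number of coordinates of X equal to k."""
    X = np.asarray(X, dtype=int)
    counts = np.bincount(X, minlength=X.max() + 2)
    return (X + 1) * counts[X + 1] / counts[X]  # counts[X] >= 1 for observed x

# theta_i drawn i.i.d. from a prior that is unknown to the estimator
rng = np.random.default_rng(0)
theta = rng.exponential(1.0, size=10_000)
X = rng.poisson(theta)
est = robbins_estimator(X)
```

The estimator uses only empirical counts, so it adapts to the unknown prior without ever modeling it explicitly, which is the property the transformer must learn in context.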
|
2502.09846
|
Robust Event-Triggered Integrated Communication and Control with Graph
Information Bottleneck Optimization
|
cs.MA
|
Integrated communication and control serves as a critical ingredient in
Multi-Agent Reinforcement Learning. However, partial observability limitations
will impair collaboration effectiveness, and a potential solution is to
establish consensus through well-calibrated latent variables obtained from
neighboring agents. Nevertheless, the rigid transmission of less informative
content can still result in redundant information exchanges. Therefore, we
propose a Consensus-Driven Event-Based Graph Information Bottleneck (CDE-GIB)
method, which integrates the communication graph and information flow through a
GIB regularizer to extract more concise message representations while avoiding
the high computational complexity of inner-loop operations. To further minimize
the communication volume required for establishing consensus during
interactions, we also develop a variable-threshold event-triggering mechanism.
By simultaneously considering historical data and current observations, this
mechanism capably evaluates the importance of information to determine whether
an event should be triggered. Experimental results demonstrate that our
proposed method outperforms existing state-of-the-art methods in terms of both
efficiency and adaptability.
|
2502.09849
|
A Survey on Human-Centered Evaluation of Explainable AI Methods in
Clinical Decision Support Systems
|
cs.LG cs.HC
|
Explainable AI (XAI) has become a crucial component of Clinical Decision
Support Systems (CDSS) to enhance transparency, trust, and clinical adoption.
However, while many XAI methods have been proposed, their effectiveness in
real-world medical settings remains underexplored. This paper provides a survey
of human-centered evaluations of Explainable AI methods in Clinical Decision
Support Systems. By categorizing existing works based on XAI methodologies,
evaluation frameworks, and clinical adoption challenges, we offer a structured
understanding of the landscape. Our findings reveal key challenges in the
integration of XAI into healthcare workflows and propose a structured framework
to align the evaluation methods of XAI with the clinical needs of stakeholders.
|
2502.09850
|
Elastic Representation: Mitigating Spurious Correlations for Group
Robustness
|
cs.LG
|
Deep learning models can suffer from severe performance degradation when
relying on spurious correlations between input features and labels, making the
models perform well on training data but have poor prediction accuracy for
minority groups. This problem arises especially when training data are limited
or imbalanced. While most prior work focuses on learning invariant features
(with consistent correlations to the label $y$), it overlooks the potential harm of
spurious correlations between features. We hereby propose Elastic
Representation (ElRep) to learn features by imposing Nuclear- and
Frobenius-norm penalties on the representation from the last layer of a neural
network. Similar to the elastic net, ElRep enjoys the benefits of learning
important features without losing feature diversity. The proposed method is
simple yet effective. It can be integrated into many deep learning approaches
to mitigate spurious correlations and improve group robustness. Moreover, we
theoretically show that ElRep has minimum negative impacts on in-distribution
predictions. This is a remarkable advantage over approaches that prioritize
minority groups at the cost of overall performance.
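A minimal NumPy sketch of the penalty described above (our illustration; coefficient names and values are assumptions, not from the paper):

```python
import numpy as np

def elastic_representation_penalty(R, lam_nuc=0.01, lam_fro=0.01):
    """ElRep-style penalty on a batch of last-layer representations R (n x d):
    a nuclear-norm term (sum of singular values) plus a Frobenius-norm term.
    Like the elastic net's L1 + L2 mix, the first term encourages a low-rank
    representation while the second preserves feature diversity."""
    nuclear = np.linalg.svd(R, compute_uv=False).sum()
    frobenius = np.linalg.norm(R, "fro")
    return lam_nuc * nuclear + lam_fro * frobenius

R = np.random.default_rng(1).normal(size=(32, 16))
penalty = elastic_representation_penalty(R)
```

In training, this scalar would simply be added to the task loss, which is why the method plugs into many existing pipelines.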
|
2502.09854
|
Efficient Multitask Learning in Small Language Models Through
Upside-Down Reinforcement Learning
|
cs.CL cs.AI cs.LG
|
In this work, we demonstrate that small language models (SLMs), specifically
a 100M parameter GPT-2 model, can achieve competitive performance in multitask
prompt generation tasks while requiring only a fraction of the computational
resources needed by large language models (LLMs). Through a novel combination
of upside-down reinforcement learning and synthetic data distillation from a
powerful LLM, Llama-3, we train an SLM that achieves relevance scores within 5%
of state-of-the-art models, including Llama-3, Qwen2, and Mistral, despite
being up to 80 times smaller, making it highly suitable for
resource-constrained and real-time applications. This study highlights the
potential of SLMs as efficient multitask learners in multimodal settings,
providing a promising alternative to LLMs for scalable, low-latency
deployments.
|
2502.09858
|
Automated Hypothesis Validation with Agentic Sequential Falsifications
|
cs.LG cs.AI cs.CL q-bio.QM
|
Hypotheses are central to information acquisition, decision-making, and
discovery. However, many real-world hypotheses are abstract, high-level
statements that are difficult to validate directly. This challenge is further
intensified by the rise of hypothesis generation from Large Language Models
(LLMs), which are prone to hallucination and produce hypotheses in volumes that
make manual validation impractical. Here we propose Popper, an agentic
framework for rigorous automated validation of free-form hypotheses. Guided by
Karl Popper's principle of falsification, Popper validates a hypothesis using
LLM agents that design and execute falsification experiments targeting its
measurable implications. A novel sequential testing framework ensures strict
Type-I error control while actively gathering evidence from diverse
observations, whether drawn from existing data or newly conducted procedures.
We demonstrate Popper on six domains including biology, economics, and
sociology. Popper delivers robust error control, high power, and scalability.
Furthermore, compared to human scientists, Popper achieved comparable
performance in validating complex biological hypotheses while reducing time by
tenfold, providing a scalable, rigorous solution for hypothesis validation.
|
2502.09860
|
Gradient GA: Gradient Genetic Algorithm for Drug Molecular Design
|
q-bio.BM cs.CE cs.LG stat.ML
|
Molecular discovery has brought great benefits to the chemical industry.
Various molecule design techniques are developed to identify molecules with
desirable properties. Traditional optimization methods, such as genetic
algorithms, continue to achieve state-of-the-art results across multiple
molecular design benchmarks. However, these techniques rely solely on random
walk exploration, which hinders both the quality of the final solution and the
convergence speed. To address this limitation, we propose a novel approach
called Gradient Genetic Algorithm (Gradient GA), which incorporates gradient
information from the objective function into genetic algorithms. Instead of
random exploration, each proposed sample iteratively progresses toward an
optimal solution by following the gradient direction. We achieve this by
designing a differentiable objective function parameterized by a neural network
and utilizing the Discrete Langevin Proposal to enable gradient guidance in
discrete molecular spaces. Experimental results demonstrate that our method
significantly improves both convergence speed and solution quality,
outperforming cutting-edge techniques. For example, it achieves up to a 25%
improvement in the top-10 score over the vanilla genetic algorithm. The code is
publicly available at https://github.com/debadyuti23/GradientGA.
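As a rough sketch of the idea (a simplified single-flip variant of a discrete Langevin-style proposal; all names and the toy objective are ours, not the paper's implementation):

```python
import numpy as np

def discrete_langevin_proposal(x, grad, alpha=0.5, rng=None):
    """Single-flip sketch of a gradient-guided proposal on {0,1}^d: the
    probability of flipping coordinate i is weighted by a first-order
    (Taylor) estimate of the objective change, so proposals preferentially
    move along the gradient instead of exploring by pure random walk."""
    if rng is None:
        rng = np.random.default_rng()
    delta = (1.0 - 2.0 * x) * grad           # estimated change in f if bit i flips
    logits = delta / 2.0 - 1.0 / (2.0 * alpha)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    i = rng.choice(len(x), p=probs)
    x_new = x.copy()
    x_new[i] = 1.0 - x_new[i]
    return x_new

x = np.zeros(8)
g = np.ones(8)  # gradient of the toy objective f(x) = sum(x)
x_new = discrete_langevin_proposal(x, g, rng=np.random.default_rng(0))
```

Replacing a genetic algorithm's uniform mutation with such a gradient-weighted proposal is the core idea: offspring are biased toward ascent directions of the learned objective.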
|
2502.09861
|
A Scoresheet for Explainable AI
|
cs.AI cs.MA cs.SE
|
Explainability is important for the transparency of autonomous and
intelligent systems and for helping to support the development of appropriate
levels of trust. There has been considerable work on developing approaches for
explaining systems and there are standards that specify requirements for
transparency. However, there is a gap: the standards are too high-level and do
not adequately specify requirements for explainability. This paper develops a
scoresheet that can be used to specify explainability requirements or to assess
the explainability aspects provided for particular applications. The scoresheet
is developed by considering the requirements of a range of stakeholders and is
applicable to Multiagent Systems as well as other AI technologies. We also
provide guidance for how to use the scoresheet and illustrate its generality
and usefulness by applying it to a range of applications.
|
2502.09863
|
Solvable Dynamics of Self-Supervised Word Embeddings and the Emergence
of Analogical Reasoning
|
cs.LG cs.CL stat.ML
|
The remarkable success of large language models relies on their ability to
implicitly learn structured latent representations from the pretraining corpus.
As a simpler surrogate for representation learning in language modeling, we
study a class of solvable contrastive self-supervised algorithms which we term
quadratic word embedding models. These models resemble the word2vec algorithm
and perform similarly on downstream tasks. Our main contributions are
analytical solutions for both the training dynamics (under certain
hyperparameter choices) and the final word embeddings, given in terms of only
the corpus statistics. Our solutions reveal that these models learn orthogonal
linear subspaces one at a time, each one incrementing the effective rank of the
embeddings until model capacity is saturated. Training on WikiText, we find
that the top subspaces represent interpretable concepts. Finally, we use our
dynamical theory to predict how and when models acquire the ability to complete
analogies.
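A toy illustration of the "one orthogonal subspace at a time" picture (our sketch, using a random symmetric matrix as a stand-in for corpus statistics):

```python
import numpy as np

# For quadratic word-embedding objectives, the optimal rank-k embeddings are
# (up to rotation) the top-k eigendirections of a symmetric corpus-statistics
# matrix M, so growing the capacity k adds one new orthogonal direction and
# increments the effective rank. (M here is a random stand-in, not real data.)
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
M = A @ A.T                            # symmetric positive semidefinite
eigvals, eigvecs = np.linalg.eigh(M)   # ascending eigenvalues
order = np.argsort(eigvals)[::-1]

def embed(k):
    """Rank-k embedding: top-k eigendirections scaled by sqrt-eigenvalues."""
    return eigvecs[:, order[:k]] * np.sqrt(eigvals[order[:k]])

E2, E3 = embed(2), embed(3)
```

Note that the rank-3 solution contains the rank-2 solution as its first two columns: increasing capacity only appends a new orthogonal subspace.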
|
2502.09866
|
How Users Who are Blind or Low Vision Play Mobile Games: Perceptions,
Challenges, and Strategies
|
cs.HC cs.AI cs.CY cs.LG
|
As blind and low-vision (BLV) players engage more deeply with games,
accessibility features have become essential. While some research has explored
tools and strategies to enhance game accessibility, the specific experiences of
these players with mobile games remain underexamined. This study addresses this
gap by investigating how BLV users experience mobile games with varying
accessibility levels. Through interviews with 32 experienced BLV mobile
players, we explore their perceptions, challenges, and strategies for engaging
with mobile games. Our findings reveal that BLV players turn to mobile games to
alleviate boredom, achieve a sense of accomplishment, and build social
connections, but face barriers depending on the game's accessibility level. We
also compare mobile games to other forms of gaming, highlighting the relative
advantages of mobile games, such as the inherent accessibility of smartphones.
This study contributes to understanding BLV mobile gaming experiences and
provides insights for enhancing accessible mobile game design.
|
2502.09870
|
A Taxonomy of Linguistic Expressions That Contribute To Anthropomorphism
of Language Technologies
|
cs.HC cs.AI cs.CL
|
Recent attention to anthropomorphism -- the attribution of human-like
qualities to non-human objects or entities -- of language technologies like
LLMs has sparked renewed discussions about potential negative impacts of
anthropomorphism. To productively discuss the impacts of this anthropomorphism
and in what contexts it is appropriate, we need a shared vocabulary for the
vast variety of ways that language can be anthropomorphic. In this work, we
draw on existing literature and analyze empirical cases of user interactions
with language technologies to develop a taxonomy of textual expressions that
can contribute to anthropomorphism. We highlight challenges and tensions
involved in understanding linguistic anthropomorphism, such as how all language
is fundamentally human and how efforts to characterize and shift perceptions of
humanness in machines can also dehumanize certain humans. We discuss ways that
our taxonomy supports more precise and effective discussions of and decisions
about anthropomorphism of language technologies.
|
2502.09872
|
Learning to Calibrate for Reliable Visual Fire Detection
|
cs.CV cs.LG
|
Fire is characterized by its sudden onset and destructive power, making early
fire detection crucial for ensuring human safety and protecting property. With
the advancement of deep learning, the application of computer vision in fire
detection has significantly improved. However, deep learning models often
exhibit a tendency toward overconfidence, and most existing works focus
primarily on enhancing classification performance, with limited attention given
to uncertainty modeling. To address this issue, we propose transforming the
Expected Calibration Error (ECE), a metric for measuring uncertainty, into a
differentiable ECE loss function. This loss is then combined with the
cross-entropy loss to guide the training process of multi-class fire detection
models. Additionally, to achieve a good balance between classification accuracy
and reliable decision-making, we introduce a curriculum learning-based approach that
dynamically adjusts the weight of the ECE loss during training. Extensive
experiments are conducted on two widely used multi-class fire detection
datasets, DFAN and EdgeFireSmoke, validating the effectiveness of our
uncertainty modeling method.
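For context, a minimal sketch of the standard binned ECE that the authors start from (our own illustration; the paper's contribution is a differentiable surrogate, whereas this is the plain non-differentiable version):

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Standard binned ECE: over confidence bins, the weighted average of
    |empirical accuracy - mean confidence| of the samples in each bin."""
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    correct = (pred == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

# Two confident, correct predictions with confidences 0.85 and 0.75
probs = np.array([[0.85, 0.15], [0.75, 0.25]])
labels = np.array([0, 0])
ece = expected_calibration_error(probs, labels)
```

The hard bin assignment is what makes this quantity non-differentiable; replacing it with a smooth surrogate is what allows ECE to be used directly as a training loss.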
|
2502.09873
|
Compression-Aware One-Step Diffusion Model for JPEG Artifact Removal
|
cs.CV
|
Diffusion models have demonstrated remarkable success in image restoration
tasks. However, their multi-step denoising process introduces significant
computational overhead, limiting their practical deployment. Furthermore,
existing methods struggle to effectively remove severe JPEG artifacts,
especially in highly compressed images. To address these challenges, we propose
CODiff, a compression-aware one-step diffusion model for JPEG artifact removal.
The core of CODiff is the compression-aware visual embedder (CaVE), which
extracts and leverages JPEG compression priors to guide the diffusion model. We
propose a dual learning strategy that combines explicit and implicit learning.
Specifically, explicit learning enforces a quality prediction objective to
differentiate low-quality images with different compression levels. Implicit
learning employs a reconstruction objective that enhances the model's
generalization. This dual learning allows for a deeper and more comprehensive
understanding of JPEG compression. Experimental results demonstrate that CODiff
surpasses recent leading methods in both quantitative and visual quality
metrics. The code and models will be released at
https://github.com/jp-guo/CODiff.
|
2502.09874
|
FrGNet: A fourier-guided weakly-supervised framework for nuclear
instance segmentation
|
cs.CV cs.AI
|
Nuclear instance segmentation has played a critical role in pathology image
analysis. The main challenges arise from the difficulty of accurately
segmenting instances and the high cost of precise mask-level annotations for
fully-supervised training. In this work, we propose a Fourier guidance
framework for solving the weakly-supervised nuclear instance segmentation
problem. In this framework, we construct a Fourier guidance module that fuses
prior information into the training process of the model, which helps the
model capture the relevant features of nuclei. Meanwhile, to further improve
the model's ability to represent nuclear features, we propose a guide-based
instance-level contrastive module. This module makes full use of the
framework's own properties and the guide information to effectively enhance
the representation of nuclear features. We show on two public datasets that
our model can outperform current SOTA methods under a fully-supervised design,
and that in weakly-supervised experiments, with only a small amount of
labeling, it still maintains performance close to that under full supervision.
In addition, we perform generalization experiments on a private dataset:
without any labeling, our model effectively segments nuclear images that were
not seen during training. As open science, all code and pre-trained
models are available at https://github.com/LQY404/FrGNet.
|
2502.09877
|
Stretching Rubber, Not Budgets: Accurate Parking Utilization on a
Shoestring
|
eess.SY cs.SY
|
Effective parking management is essential for ensuring accessibility, safety,
and convenience in master-planned communities, particularly in active adult
neighborhoods experiencing rapid growth. Accurately assessing parking
utilization is a crucial first step in planning for future demand, but data
collection methods can be costly and labor-intensive. This paper presents a
low-cost yet highly accurate methodology for measuring parking utilization
using road tubes connected to portable traffic counters from JAMAR
Technologies, Inc. By integrating results from JAMAR's analysis tool with
custom Python scripting, the methodology enables precise parking lot counts
through parameter optimization and automated error correction. The system's
efficiency allows for scalable deployment without significant manual
observation, reducing both costs and disruptions to daily operations. Using
Tellico Village as a case study, this research demonstrates that community
planners can obtain actionable parking insights on a limited budget, empowering
them to make informed decisions about capacity expansion, traffic flow
improvements, and facility scheduling. The findings underscore the feasibility
of leveraging cost-effective technology to optimize infrastructure planning and
ensure long-term resident satisfaction as communities grow.
|
2502.09880
|
Interpretable Early Warnings using Machine Learning in an Online
Game-experiment
|
physics.soc-ph cs.LG cs.SI nlin.AO stat.ML
|
Stemming from physics and later applied to other fields such as ecology, the
theory of critical transitions suggests that some regime shifts are preceded by
statistical early warning signals. Reddit's r/place experiment, a large-scale
social game, provides a unique opportunity to test these signals consistently
across thousands of subsystems undergoing critical transitions. In r/place,
millions of users collaboratively created compositions, or pixel-art drawings,
in which transitions occur when one composition rapidly replaces another. We
develop a machine-learning-based early warning system that combines the
predictive power of multiple system-specific time series via gradient-boosted
decision trees with memory-retaining features. Our method significantly
outperforms standard early warning indicators. Trained on the 2022 r/place
data, our algorithm detects half of the transitions occurring within 20 minutes
at a false positive rate of just 3.7%. Its performance remains robust when
tested on the 2023 r/place event, demonstrating generalizability across
different contexts. Using SHapley Additive exPlanations (SHAP) for interpreting
the predictions, we investigate the underlying drivers of warnings, which could
be relevant to other complex systems, especially online social systems. We
reveal an interplay of patterns preceding transitions, such as critical slowing
down or speeding up, a lack of innovation or coordination, turbulent histories,
and a lack of image complexity. These findings show the potential of machine
learning indicators in socio-ecological systems for predicting regime shifts
and understanding their dynamics.
|
2502.09884
|
Nonasymptotic CLT and Error Bounds for Two-Time-Scale Stochastic
Approximation
|
cs.LG cs.AI
|
We consider linear two-time-scale stochastic approximation algorithms driven
by martingale noise. Recent applications in machine learning motivate the need
to understand finite-time error rates, but conventional stochastic
approximation analyses focus on either asymptotic convergence in distribution
or finite-time bounds that are far from optimal. Prior work on asymptotic
central limit theorems (CLTs) suggests that two-time-scale algorithms may be
able to achieve $1/\sqrt{n}$ error in expectation, with a constant given by the
expected norm of the limiting Gaussian vector. However, the best known
finite-time rates are much slower. We derive the first non-asymptotic central
limit theorem with respect to the Wasserstein-1 distance for two-time-scale
stochastic approximation with Polyak-Ruppert averaging. As a corollary, we show
that expected error achieved by Polyak-Ruppert averaging decays at rate
$1/\sqrt{n}$, which significantly improves on the rates of convergence in prior
works.
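To illustrate the averaging at the heart of the result (a simplified single-time-scale analogue of our own, not the paper's two-time-scale setting):

```python
import numpy as np

# The raw stochastic-approximation iterate uses a slowly decaying step size,
# yet the Polyak-Ruppert running average of the iterates estimates the target
# theta* = 0 at the 1/sqrt(n) rate.
rng = np.random.default_rng(0)
n = 20_000
theta, avg = 0.0, 0.0
for k in range(1, n + 1):
    y = rng.normal(0.0, 1.0)            # martingale-noise observation, mean 0
    theta -= k ** -0.7 * (theta - y)    # stochastic-approximation iterate
    avg += (theta - avg) / k            # Polyak-Ruppert running average
```

The averaged iterate concentrates around the target far more tightly than the raw iterate, which is the behavior the paper quantifies nonasymptotically in Wasserstein-1 distance.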
|
2502.09885
|
Comprehensive Review of Neural Differential Equations for Time Series
Analysis
|
cs.LG cs.AI
|
Time series modeling and analysis has become critical in various domains.
Conventional methods such as RNNs and Transformers, while effective for
discrete-time and regularly sampled data, face significant challenges in
capturing the continuous dynamics and irregular sampling patterns inherent in
real-world scenarios. Neural Differential Equations (NDEs) represent a paradigm
shift by combining the flexibility of neural networks with the mathematical
rigor of differential equations. This paper presents a comprehensive review of
NDE-based methods for time series analysis, including neural ordinary
differential equations, neural controlled differential equations, and neural
stochastic differential equations. We provide a detailed discussion of their
mathematical formulations, numerical methods, and applications, highlighting
their ability to model continuous-time dynamics. Furthermore, we address key
challenges and future research directions. This survey serves as a foundation
for researchers and practitioners seeking to leverage NDEs for advanced time
series analysis.
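For reference, the three families reviewed are conventionally written as follows (standard formulations from the NDE literature, not notation specific to this survey):
\[
\mathrm{d}z(t) = f_\theta(z(t), t)\,\mathrm{d}t \quad \text{(neural ODE)},
\]
\[
\mathrm{d}z(t) = f_\theta(z(t))\,\mathrm{d}X(t) \quad \text{(neural CDE, driven by an observation path } X\text{)},
\]
\[
\mathrm{d}z(t) = f_\theta(z(t), t)\,\mathrm{d}t + g_\theta(z(t), t)\,\mathrm{d}W(t) \quad \text{(neural SDE, with Brownian motion } W\text{)}.
\]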
|
2502.09886
|
Video2Policy: Scaling up Manipulation Tasks in Simulation through
Internet Videos
|
cs.RO cs.AI cs.LG
|
Simulation offers a promising approach for cheaply scaling training data for
generalist policies. To scalably generate data from diverse and realistic
tasks, existing algorithms either rely on large language models (LLMs) that may
hallucinate tasks that are not relevant to robotics; or on digital twins, which require
careful real-to-sim alignment and are hard to scale. To address these
challenges, we introduce Video2Policy, a novel framework that leverages
internet RGB videos to reconstruct tasks based on everyday human behavior. Our
approach comprises two phases: (1) task generation in simulation from videos;
and (2) reinforcement learning utilizing in-context LLM-generated reward
functions iteratively. We demonstrate the efficacy of Video2Policy by
reconstructing over 100 videos from the Something-Something-v2 (SSv2) dataset,
which depicts diverse and complex human behaviors on 9 different tasks. Our
method can successfully train RL policies on such tasks, including complex and
challenging tasks such as throwing. Finally, we show that the generated
simulation data can be scaled up for training a general policy, and it can be
transferred back to the real robot in a Real2Sim2Real way.
|
2502.09888
|
An Efficient Large Recommendation Model: Towards a Resource-Optimal
Scaling Law
|
cs.IR
|
The pursuit of scaling up recommendation models confronts intrinsic tensions
between expanding model capacity and preserving computational tractability.
While prior studies have explored scaling laws for recommendation systems,
their resource-intensive paradigms -- often requiring tens of thousands of A100
GPU hours -- remain impractical for most industrial applications. This work
addresses a critical gap: achieving sustainable model scaling under strict
computational budgets. We propose Climber, a resource-efficient recommendation
framework comprising two synergistic components: the ASTRO model architecture
for algorithmic innovation and the TURBO acceleration framework for engineering
optimization. ASTRO (Adaptive Scalable Transformer for RecOmmendation) adopts
two core innovations: (1) multi-scale sequence partitioning that reduces
attention complexity from $O(n^2 d)$ to $O(n^2 d / N_b)$ via $N_b$ hierarchical blocks,
enabling more efficient scaling with sequence length; (2) dynamic temperature
modulation that adaptively adjusts attention scores for multimodal
distributions arising from inherent multi-scenario and multi-behavior
interactions. Complemented by TURBO (Two-stage Unified Ranking with Batched
Output), a co-designed acceleration framework integrating gradient-aware
feature compression and memory-efficient Key-Value caching, Climber achieves
5.15x throughput gains without performance degradation. Comprehensive offline
experiments on multiple datasets validate that Climber exhibits a more ideal
scaling curve. To our knowledge, this is the first publicly documented
framework where controlled model scaling drives continuous online metric growth
(12.19% overall lift) without prohibitive resource costs. Climber has been
successfully deployed on Netease Cloud Music, one of China's largest music
streaming platforms, serving tens of millions of users daily.
|
2502.09889
|
Evaluating and Improving Graph-based Explanation Methods for Multi-Agent
Coordination
|
cs.MA cs.AI cs.LG cs.RO
|
Graph Neural Networks (GNNs), developed by the graph learning community, have
been adopted and shown to be highly effective in multi-robot and multi-agent
learning. Inspired by this successful cross-pollination, we investigate and
characterize the suitability of existing GNN explanation methods for explaining
multi-agent coordination. We find that these methods have the potential to
identify the most-influential communication channels that impact the team's
behavior. Informed by our initial analyses, we propose an attention entropy
regularization term that renders GAT-based policies more amenable to existing
graph-based explainers. Intuitively, minimizing attention entropy incentivizes
agents to limit their attention to the most influential or impactful agents,
thereby easing the challenge faced by the explainer. We theoretically ground
this intuition by showing that minimizing attention entropy increases the
disparity between the explainer-generated subgraph and its complement.
Evaluations across three tasks and three team sizes i) provide insights into
the effectiveness of existing explainers, and ii) demonstrate that our
proposed regularization consistently improves explanation quality without
sacrificing task performance.
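A minimal sketch of the regularizer's key quantity (our illustration; the function name and array shapes are assumptions):

```python
import numpy as np

def attention_entropy(attn, eps=1e-12):
    """Mean Shannon entropy of per-agent attention rows (each row is a
    distribution over neighbors). Adding this as a regularizer to a
    GAT-based policy loss pushes each agent to concentrate attention on
    its most influential neighbors, easing the explainer's job."""
    return float(-(attn * np.log(attn + eps)).sum(axis=1).mean())

uniform = np.full((3, 4), 0.25)                     # spread-out attention
peaked = np.tile([0.97, 0.01, 0.01, 0.01], (3, 1))  # concentrated attention
```

Lower entropy means sparser effective communication graphs, which is exactly the structure that subgraph-based explainers can recover cleanly.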
|
2502.09890
|
Symmetry-Preserving Diffusion Models via Target Symmetrization
|
cs.LG
|
Diffusion models are powerful tools for capturing complex distributions, but
modeling data with inherent symmetries, such as molecular structures, remains
challenging. Equivariant denoisers are commonly used to address this, but they
introduce architectural complexity and optimization challenges, including noisy
gradients and convergence issues. We propose a novel approach that enforces
equivariance through a symmetrized loss function, which applies a
time-dependent weighted averaging operation over group actions to the model's
prediction target. This ensures equivariance without explicit architectural
constraints and reduces gradient variance, leading to more stable and efficient
optimization. Our method uses Monte Carlo sampling to estimate the average,
incurring minimal computational overhead. We provide theoretical guarantees of
equivariance for the minimizer of our loss function and demonstrate its
effectiveness on synthetic datasets and the molecular conformation generation
task using the GEOM-QM9 dataset. Experiments show improved sample quality
compared to existing methods, highlighting the potential of our approach to
enhance the scalability and practicality of equivariant diffusion models in
generative tasks.
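An illustrative sketch of the target-symmetrization idea, using a 2D rotation group as a stand-in for molecular symmetries (names, shapes, and the group choice are ours):

```python
import numpy as np

def rotation(angle):
    """2D rotation matrix by the given angle (radians)."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s], [s, c]])

def symmetrized_target(target, group_elems, weights=None):
    """Weighted average of a prediction target over a sample of group
    actions -- a Monte Carlo estimate of the symmetrized target used in
    the loss. Here the group elements are rotation matrices acting on
    point coordinates."""
    if weights is None:
        weights = np.full(len(group_elems), 1.0 / len(group_elems))
    return sum(w * target @ g.T for w, g in zip(weights, group_elems))

points = np.array([[1.0, 0.0], [0.0, 2.0]])
# Averaging over the two-element subgroup {R(0), R(pi)} cancels coordinates.
sym = symmetrized_target(points, [rotation(0.0), rotation(np.pi)])
```

Because the averaging happens in the loss target rather than the network, an ordinary (non-equivariant) denoiser can be trained while the minimizer still inherits the symmetry.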
|
2502.09891
|
ArchRAG: Attributed Community-based Hierarchical Retrieval-Augmented
Generation
|
cs.IR cs.AI
|
Retrieval-Augmented Generation (RAG) has proven effective in integrating
external knowledge into large language models (LLMs) for question-answer (QA)
tasks. State-of-the-art RAG approaches often use graph data as the
external data source, since graphs capture rich semantic information and link
relationships between entities. However, existing graph-based RAG approaches
cannot accurately identify the relevant information from the graph and also
consume large numbers of tokens in the online retrieval process. To address
these issues, we introduce a novel graph-based RAG approach, called Attributed
Community-based Hierarchical RAG (ArchRAG), by augmenting the question using
attributed communities, and also introducing a novel LLM-based hierarchical
clustering method. To retrieve the most relevant information from the graph for
the question, we build a novel hierarchical index structure for the attributed
communities and develop an effective online retrieval method. Experimental
results demonstrate that ArchRAG outperforms existing methods in terms of both
accuracy and token cost.
|
2502.09893
|
Dynamic-Computed Tomography Angiography for Cerebral Vessel Templates
and Segmentation
|
physics.med-ph cs.CV
|
Background: Computed Tomography Angiography (CTA) is crucial for
cerebrovascular disease diagnosis. Dynamic CTA is a type of imaging that
captures temporal information about the cerebral vasculature. We aim to develop and evaluate two
segmentation techniques to segment vessels directly on CTA images: (1) creating
and registering population-averaged vessel atlases and (2) using deep learning
(DL). Methods: We retrieved 4D-CT of the head from our institutional research
database, with bone and soft tissue subtracted from post-contrast images. An
Advanced Normalization Tools pipeline was used to create angiographic atlases
from 25 patients. Then, atlas-driven ROIs were identified by a CT attenuation
threshold to generate segmentation of the arteries and veins using non-linear
registration. To create DL vessel segmentations, arterial and venous structures
were segmented using the MRA vessel segmentation tool, iCafe, in 29 patients.
These were then used to train a DL model, with bone-in CT images as input.
Multiple phase images in the 4D-CT were used to increase the training and
validation dataset. Both segmentation approaches were evaluated on a test
4D-CT dataset of 11 patients, which was also processed by iCafe and validated
by a neuroradiologist. Specifically, branch-wise segmentation accuracy was
quantified with 20 labels for arteries and one for veins. DL outperformed the
atlas-based segmentation models for arteries (average modified dice coefficient
(amDC) 0.856 vs. 0.324) and veins (amDC 0.743 vs. 0.495) overall. For ICAs,
vertebral and basilar arteries, DL and atlas-based segmentation had an amDC of
0.913 and 0.402, respectively. The amDC for MCA-M1, PCA-P1, and ACA-A1 segments
were 0.932 and 0.474, respectively. Conclusion: Angiographic CT templates are
developed for the first time in the literature. Using 4D-CTA enables the use of
tools like iCafe, lessening the burden of manual annotation.
|
2502.09897
|
Artificial Intelligence in Spectroscopy: Advancing Chemistry from
Prediction to Generation and Beyond
|
cs.AI cs.LG
|
The rapid advent of machine learning (ML) and artificial intelligence (AI)
has catalyzed major transformations in chemistry, yet the application of these
methods to spectroscopic and spectrometric data, referred to as Spectroscopy
Machine Learning (SpectraML), remains relatively underexplored. Modern
spectroscopic techniques (MS, NMR, IR, Raman, UV-Vis) generate an ever-growing
volume of high-dimensional data, creating a pressing need for automated and
intelligent analysis beyond traditional expert-based workflows. In this survey,
we provide a unified review of SpectraML, systematically examining
state-of-the-art approaches for both forward tasks (molecule-to-spectrum
prediction) and inverse tasks (spectrum-to-molecule inference). We trace the
historical evolution of ML in spectroscopy, from early pattern recognition to
the latest foundation models capable of advanced reasoning, and offer a
taxonomy of representative neural architectures, including graph-based and
transformer-based methods. Addressing key challenges such as data quality,
multimodal integration, and computational scalability, we highlight emerging
directions such as synthetic data generation, large-scale pretraining, and few-
or zero-shot learning. To foster reproducible research, we also release an
open-source repository containing recent papers and their corresponding curated
datasets (https://github.com/MINE-Lab-ND/SpectrumML_Survey_Papers). Our survey
serves as a roadmap for researchers, guiding progress at the intersection of
spectroscopy and AI.
|
2502.09898
|
Optimal lower Lipschitz bounds for ReLU layers, saturation, and phase
retrieval
|
cs.LG cs.NA math.FA math.NA
|
The injectivity of ReLU layers in neural networks, the recovery of vectors
from clipped or saturated measurements, and (real) phase retrieval in
$\mathbb{R}^n$ allow for a similar problem formulation and characterization
using frame theory. In this paper, we revisit all three problems with a unified
perspective and derive lower Lipschitz bounds for ReLU layers and clipping
which are analogous to the previously known result for phase retrieval and are
optimal up to a constant factor.
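The lower Lipschitz bound for a ReLU layer can be probed numerically. The sketch below is an illustration, not the paper's frame-theoretic result: it estimates inf ||ReLU(Wx) - ReLU(Wy)|| / ||x - y|| over random pairs for a redundant frame in R^2; note that random sampling only upper-bounds the true lower Lipschitz constant:

```python
import math
import random

def relu_layer(W, x):
    # y_i = max(0, <w_i, x>): a ReLU layer without bias.
    return [max(0.0, sum(wi * xi for wi, xi in zip(row, x))) for row in W]

def norm(v):
    return math.sqrt(sum(t * t for t in v))

def empirical_lower_lipschitz(W, dim, trials=2000, seed=0):
    """Estimate inf ||ReLU(Wx) - ReLU(Wy)|| / ||x - y|| over random pairs.

    This only upper-bounds the true lower Lipschitz constant, since
    random sampling may miss the worst-case pair.
    """
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(trials):
        x = [rng.gauss(0, 1) for _ in range(dim)]
        y = [rng.gauss(0, 1) for _ in range(dim)]
        d = norm([a - b for a, b in zip(x, y)])
        if d < 1e-9:
            continue
        fx, fy = relu_layer(W, x), relu_layer(W, y)
        best = min(best, norm([a - b for a, b in zip(fx, fy)]) / d)
    return best

# A redundant frame in R^2 (m=4 rows > n=2 columns): for this W one can
# check per coordinate that the squared ratio lies in [1/2, 1], so the
# true lower Lipschitz constant is 1/sqrt(2).
W = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]]
c = empirical_lower_lipschitz(W, dim=2)
print(round(c, 4))
```

For this particular W the map splits each coordinate into its positive and negative parts, which is why the ratio never drops below 1/sqrt(2), matching the kind of optimal-up-to-a-constant behavior the abstract describes.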
|
2502.09900
|
Thompson Sampling for Repeated Newsvendor
|
cs.LG
|
In this paper, we investigate the performance of Thompson Sampling (TS) for
online learning with censored feedback, focusing primarily on the classic
repeated newsvendor model--a foundational framework in inventory
management--and demonstrating how our techniques can be naturally extended to a
broader class of problems. We model demand using a Weibull distribution and
initialize TS with a Gamma prior to dynamically adjust order quantities. Our
analysis establishes optimal (up to logarithmic factors) frequentist regret
bounds for TS without imposing restrictive prior assumptions. More importantly,
it yields novel and highly interpretable insights on how TS addresses the
exploration-exploitation trade-off in the repeated newsvendor setting.
Specifically, our results show that when past order quantities are sufficiently
large to overcome censoring, TS accurately estimates the unknown demand
parameters, leading to near-optimal ordering decisions. Conversely, when past
orders are relatively small, TS automatically increases future order quantities
to gather additional demand information. Extensive numerical simulations
further demonstrate that TS outperforms more conservative and widely-used
approaches such as online convex optimization, upper confidence bounds, and
myopic Bayesian dynamic programming. This study also lays the foundation for
exploring general online learning problems with censored feedback.
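A minimal sketch of the setup described above, under simplifying assumptions that may differ from the paper's exact algorithm: demand is Weibull with a known shape parameter and an unknown rate, for which a Gamma prior stays conjugate even when sales are censored at the order quantity; each period orders at the critical-fractile quantile of the sampled demand distribution:

```python
import math
import random

def ts_newsvendor(demands, shape_k=2.0, price=5.0, cost=1.0,
                  a0=1.0, b0=1.0, seed=0):
    """Thompson Sampling for the repeated newsvendor with censored sales.

    Hedged sketch under simplifying assumptions: demand ~ Weibull with
    KNOWN shape k and unknown rate lam (density k*lam*x**(k-1)*exp(-lam*x**k)).
    A Gamma(a, b) prior on lam is conjugate even under censoring: an
    uncensored period adds 1 to a, and every period adds
    min(demand, order)**k to b.
    """
    rng = random.Random(seed)
    a, b = a0, b0
    crit = (price - cost) / price              # critical fractile
    orders = []
    for d in demands:
        lam = rng.gammavariate(a, 1.0 / b)     # posterior sample of the rate
        # Critical-fractile quantile of Weibull(shape_k, lam):
        q = (-math.log(1.0 - crit) / lam) ** (1.0 / shape_k)
        orders.append(q)
        sale = min(d, q)                       # only censored sales observed
        if d < q:
            a += 1.0                           # demand fully observed
        b += sale ** shape_k
    return orders, (a, b)

demands = [1.2, 0.8, 1.5, 2.0, 1.1]
orders, (a, b) = ts_newsvendor(demands)
```

The conjugate update makes the exploration-exploitation tradeoff visible: small past orders censor the demand signal, which keeps the posterior diffuse and makes sampled rates occasionally small, pushing future order quantities up exactly as the abstract describes.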
|
2502.09903
|
The Ann Arbor Architecture for Agent-Oriented Programming
|
cs.AI cs.HC cs.SE
|
In this paper, we reexamine prompt engineering for large language models
through the lens of automata theory. We argue that language models function as
automata and, like all automata, should be programmed in the languages they
accept, a unified collection of all natural and formal languages. Therefore,
traditional software engineering practices--conditioned on the clear separation
of programming languages and natural languages--must be rethought. We introduce
the Ann Arbor Architecture, a conceptual framework for agent-oriented
programming of language models, as a higher-level abstraction over raw token
generation, and provide a new perspective on in-context learning. Based on this
framework, we present the design of our agent platform Postline, and report on
our initial experiments in agent training.
|
2502.09905
|
Towards personalised assessment of abdominal aortic aneurysm structural
integrity
|
cs.CE
|
Abdominal aortic aneurysm (AAA) is a life-threatening condition involving the
permanent dilation of the aorta, often detected incidentally through imaging
for some other condition. The standard clinical approach to managing AAA
follows a one-size-fits-all model based on aneurysm size and growth rate,
leading to underestimation or overestimation of rupture risk in individual
patients. The widely studied stress-based rupture risk estimation using
computational biomechanics requires wall strength information. However,
non-invasive methods for local patient-specific wall strength measurement have
not yet been developed. Recently, we introduced an image-based approach for
patient-specific, in vivo, non-invasive AAA kinematic analysis using
time-resolved 3D computed tomography angiography (4D-CTA) images to measure
wall strain throughout the cardiac cycle. In the present study, we integrated
wall tension computation and strain measurement to develop a novel measure of
local structural integrity of the AAA wall, the Relative Structural Integrity
Index (RSII), which is independent of the wall's material properties and
thickness and of blood pressure measurement conditions. Our methods provide a
visual map of
AAA wall structural integrity for individual patients using only their medical
images and blood pressure data. We applied our methods to twelve patients.
Additionally, we compared our measure of structural integrity of aneurysmal and
non-aneurysmal aortas. Our results show similar values of the wall structural
integrity measure across the patients, indicating the reliability of our
methods. In line with experimental observations reported in the literature, our
analysis revealed that localized low stiffness areas are primarily found in the
most dilated AAA regions. Our results clearly demonstrate that the AAA wall is
stiffer than the non-aneurysmal aorta.
|
2502.09906
|
Insect-Foundation: A Foundation Model and Large Multimodal Dataset for
Vision-Language Insect Understanding
|
cs.CV
|
Multimodal conversational generative AI has shown impressive capabilities in
various vision and language understanding through learning massive text-image
data. However, current conversational models still lack knowledge about visual
insects, since they are typically trained on general-domain vision-language
data. Meanwhile, understanding insects is a fundamental problem
in precision agriculture, helping to promote sustainable development in
agriculture. Therefore, this paper proposes a novel multimodal conversational
model, Insect-LLaVA, to promote visual understanding in insect-domain
knowledge. In particular, we first introduce a new large-scale Multimodal
Insect Dataset with Visual Insect Instruction Data that enables the capability
of learning the multimodal foundation models. Our proposed dataset enables
conversational models to comprehend the visual and semantic features of the
insects. Second, we propose the Insect-LLaVA model, a general Large Language
and Vision Assistant for Visual Insect Understanding. Then, to enhance
the capability of learning insect features, we develop an Insect Foundation
Model by introducing a new micro-feature self-supervised learning with a
Patch-wise Relevant Attention mechanism to capture the subtle differences among
insect images. We also present Description Consistency loss to improve
micro-feature learning via text descriptions. The experimental results
evaluated on our new Visual Insect Question Answering benchmarks illustrate the
effective performance of our proposed approach in visual insect understanding
and achieve State-of-the-Art performance on standard benchmarks of
insect-related tasks.
|
2502.09913
|
AutoS$^2$earch: Unlocking the Reasoning Potential of Large Models for
Web-based Source Search
|
cs.AI cs.HC
|
Web-based management systems have been widely used in risk control and
industrial safety. However, effectively integrating source search capabilities
into these systems, to enable decision-makers to locate and address the hazard
(e.g., gas leak detection) remains a challenge. While prior efforts have
explored using web crowdsourcing and AI algorithms for source search decision
support, these approaches suffer from overheads in recruiting human
participants and slow response times in time-sensitive situations. To address
this, we introduce AutoS$^2$earch, a novel framework leveraging large models
for zero-shot source search in web applications. AutoS$^2$earch operates on a
simplified visual environment projected through a web-based display, utilizing
a chain-of-thought prompt designed to emulate human reasoning. A multimodal
large language model (MLLM) dynamically converts visual observations into
language descriptions, enabling the LLM to perform linguistic reasoning on four
directional choices. Extensive experiments demonstrate that AutoS$^2$earch
achieves performance nearly equivalent to human-AI collaborative source search
while eliminating dependency on crowdsourced labor. Our work offers valuable
insights in using web engineering to design such autonomous systems in other
industrial applications.
|
2502.09918
|
Dual Control for Interactive Autonomous Merging with Model Predictive
Diffusion
|
cs.RO cs.SY eess.SY math.OC
|
Interactive decision-making is essential in applications such as autonomous
driving, where the agent must infer the behavior of nearby human drivers while
planning in real-time. Traditional predict-then-act frameworks are often
insufficient or inefficient because accurate inference of human behavior
requires continuous interaction rather than isolated prediction. To address
this, we propose an active learning framework in which we rigorously derive
predicted belief distributions. Additionally, we introduce a novel model-based
diffusion solver tailored for online receding horizon control problems,
demonstrated through a complex, non-convex highway merging scenario. Our
approach extends previous high-fidelity dual control simulations to hardware
experiments, which may be viewed at https://youtu.be/Q_JdZuopGL4, and verifies
behavior inference in human-driven traffic scenarios, moving beyond idealized
models. The results show improvements in adaptive planning under uncertainty,
advancing the field of interactive decision-making for real-world applications.
|
2502.09919
|
AttenGluco: Multimodal Transformer-Based Blood Glucose Forecasting on
AI-READI Dataset
|
cs.LG cs.AI
|
Diabetes is a chronic metabolic disorder characterized by persistently high
blood glucose levels (BGLs), leading to severe complications such as
cardiovascular disease, neuropathy, and retinopathy. Predicting BGLs enables
patients to maintain glucose levels within a safe range and allows caregivers
to take proactive measures through lifestyle modifications. Continuous Glucose
Monitoring (CGM) systems provide real-time tracking, offering a valuable tool
for monitoring BGLs. However, accurately forecasting BGLs remains challenging
due to fluctuations caused by physical activity, diet, and other factors. Recent
deep learning models show promise in improving BGL prediction. Nonetheless,
forecasting BGLs accurately from multimodal, irregularly sampled data over long
prediction horizons remains a challenging research problem. In this paper, we
propose AttenGluco, a multimodal Transformer-based framework for long-term
blood glucose prediction. AttenGluco employs cross-attention to effectively
integrate CGM and activity data, addressing challenges in fusing data with
different sampling rates. Moreover, it employs multi-scale attention to capture
long-term dependencies in temporal data, enhancing forecasting accuracy. To
evaluate the performance of AttenGluco, we conduct forecasting experiments on
the recently released AI-READI dataset, analyzing its predictive accuracy across
different subject cohorts including healthy individuals, people with
prediabetes, and those with type 2 diabetes. Furthermore, we investigate its
performance improvements and forgetting behavior as new cohorts are introduced.
Our evaluations show that AttenGluco improves all error metrics, such as root
mean square error (RMSE), mean absolute error (MAE), and correlation, compared
to the multimodal LSTM model. AttenGluco outperforms this baseline model by
about 10% and 15% in terms of RMSE and MAE, respectively.
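The cross-attention fusion idea, with queries drawn from one stream and keys/values from the other so that mismatched sampling rates are handled naturally, can be sketched in plain Python. This is an illustration of the mechanism, not the AttenGluco architecture:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attention(cgm, activity, scale=None):
    """Fuse two differently sampled streams with cross-attention.

    Illustrative sketch: CGM embeddings act as queries, activity
    embeddings as keys/values, so each CGM time step pools activity
    information regardless of the two streams' sampling rates.
    """
    d = len(activity[0])
    scale = scale or math.sqrt(d)
    fused = []
    for q in cgm:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / scale
                  for k in activity]
        w = softmax(scores)
        fused.append([sum(wj * v[i] for wj, v in zip(w, activity))
                      for i in range(d)])
    return fused

# 4 CGM steps vs. 7 activity steps -- the lengths need not match.
cgm = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [1.0, 1.0]]
act = [[t / 7.0, 1.0 - t / 7.0] for t in range(7)]
out = cross_attention(cgm, act)
```

Because each fused vector is a convex combination of the activity embeddings, the output length always matches the query (CGM) stream, which is what makes the mechanism a natural fit for irregularly sampled multimodal data.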
|
2502.09920
|
Machine Learning for Phase Estimation in Satellite-to-Earth Quantum
Communication
|
quant-ph cs.AI eess.SP
|
A global continuous-variable quantum key distribution (CV-QKD) network can be
established using a series of satellite-to-Earth channels. Increased
performance in such a network is provided by performing coherent measurement of
the optical quantum signals using a real local oscillator, calibrated locally
by encoding known information on transmitted reference pulses and using signal
phase error estimation algorithms. The speed and accuracy of the signal phase
error estimation algorithm are vital to practical CV-QKD implementation. Our
work provides a framework to analyze long short-term memory neural network (NN)
architecture parameterization, with respect to the quantum Cram\'er-Rao
uncertainty bound of the signal phase error estimation, with a focus on
reducing the model complexity. More specifically, we demonstrate that signal
phase error estimation can be achieved using a low-complexity NN architecture,
without significantly sacrificing accuracy. Our results significantly improve
the real-time performance of practical CV-QKD systems deployed over
satellite-to-Earth channels, thereby contributing to the ongoing development of
the Quantum Internet.
|
2502.09923
|
Self-Consistent Model-based Adaptation for Visual Reinforcement Learning
|
cs.CV cs.LG
|
Visual reinforcement learning agents typically face serious performance
declines in real-world applications caused by visual distractions. Existing
methods rely on fine-tuning the policy's representations with hand-crafted
augmentations. In this work, we propose Self-Consistent Model-based Adaptation
(SCMA), a novel method that fosters robust adaptation without modifying the
policy. By transferring cluttered observations to clean ones with a denoising
model, SCMA can mitigate distractions for various policies as a plug-and-play
enhancement. To optimize the denoising model in an unsupervised manner, we
derive an unsupervised distribution matching objective with a theoretical
analysis of its optimality. We further present a practical algorithm to
optimize the objective by estimating the distribution of clean observations
with a pre-trained world model. Extensive experiments on multiple visual
generalization benchmarks and real robot data demonstrate that SCMA effectively
boosts performance across various distractions and exhibits better sample
efficiency.
|
2502.09925
|
TaskGalaxy: Scaling Multi-modal Instruction Fine-tuning with Tens of
Thousands Vision Task Types
|
cs.CV cs.AI
|
Multimodal visual language models are gaining prominence in open-world
applications, driven by advancements in model architectures, training
techniques, and high-quality data. However, their performance is often limited
by insufficient task-specific data, leading to poor generalization and biased
outputs. Existing efforts to increase task diversity in fine-tuning datasets
are hindered by the labor-intensive process of manual task labeling, which
typically produces only a few hundred task types. To address this, we propose
TaskGalaxy, a large-scale multimodal instruction fine-tuning dataset comprising
19,227 hierarchical task types and 413,648 samples. TaskGalaxy utilizes GPT-4o
to enrich task diversity by expanding from a small set of manually defined
tasks, with CLIP and GPT-4o filtering those that best match open-source images,
and generating relevant question-answer pairs. Multiple models are employed to
ensure sample quality. This automated process enhances both task diversity and
data quality, reducing manual intervention. Incorporating TaskGalaxy into
LLaVA-v1.5 and InternVL-Chat-v1.0 models shows substantial performance
improvements across 16 benchmarks, demonstrating the critical importance of
task diversity. TaskGalaxy is publicly released at
https://github.com/Kwai-YuanQi/TaskGalaxy.
|
2502.09926
|
Robust Anomaly Detection via Tensor Chidori Pseudoskeleton Decomposition
|
cs.LG
|
Anomaly detection plays a critical role in modern data-driven applications,
from identifying fraudulent transactions and safeguarding network
infrastructure to monitoring sensor systems for irregular patterns. Traditional
approaches, such as distance, density, or cluster-based methods, face
significant challenges when applied to high dimensional tensor data, where
complex interdependencies across dimensions amplify noise and computational
complexity. To address these limitations, this paper leverages Tensor Chidori
pseudoskeleton decomposition within a tensor-robust principal component
analysis framework to extract low Tucker rank structure while isolating sparse
anomalies, ensuring robust anomaly detection. We establish theoretical
results regarding convergence and estimation error, demonstrating the
stability and accuracy of the proposed approach. Numerical experiments on
real-world spatiotemporal data from New York City taxi trip records validate
the superiority of the proposed method in detecting anomalous urban events
compared to existing benchmark methods. The results underscore the potential of
Tensor Chidori pseudoskeleton decomposition to enhance anomaly detection for
large-scale, high-dimensional data.
|
2502.09927
|
Granite Vision: a lightweight, open-source multimodal model for
enterprise Intelligence
|
cs.CV cs.AI
|
We introduce Granite Vision, a lightweight large language model with vision
capabilities, specifically designed to excel in enterprise use cases,
particularly in visual document understanding. Our model is trained on a
comprehensive instruction-following dataset, including document-related tasks,
such as content extraction from tables, charts, diagrams, sketches, and
infographics, as well as general image tasks. The architecture of Granite
Vision is centered around visual modality alignment with a decoder-only, 2
billion parameter Granite large language model. Additionally, we introduce a
dedicated test-time safety classification approach that leverages a sparse
set of attention vectors to identify potentially harmful inputs. Despite its
lightweight architecture, Granite Vision achieves strong results in standard
benchmarks related to visual document understanding, as well as on the LiveXiv
benchmark, which is designed to avoid test set contamination by using a
constantly updated corpus of recently published arXiv papers. We are releasing
the model under the Apache-2 license, allowing for both research and commercial
use, while offering complete visibility into the training data and other
relevant details. See https://huggingface.co/ibm-granite/ for model weights.
|
2502.09928
|
Deep Tree Tensor Networks for Image Recognition
|
cs.CV cs.AI
|
Originating in quantum physics, tensor networks (TNs) have been widely
adopted as exponential machines and parameter decomposers for recognition
tasks. Typical TN models, such as Matrix Product States (MPS), have not yet
achieved successful application in natural image processing. When employed,
they primarily serve to compress parameters within off-the-shelf networks, thus
losing their distinctive capability to enhance exponential-order feature
interactions. This paper introduces a novel architecture named
\textit{\textbf{D}eep \textbf{T}ree \textbf{T}ensor \textbf{N}etwork} (DTTN),
which captures $2^L$-order multiplicative interactions across features through
multilinear operations, while essentially unfolding into a \emph{tree}-like TN
topology with the parameter-sharing property. DTTN is stacked with multiple
antisymmetric interacting modules (AIMs), and this design facilitates efficient
implementation. Moreover, we theoretically reveal the equivalency among
quantum-inspired TN models and polynomial and multilinear networks under
certain conditions, and we believe that DTTN can inspire more interpretable
studies in this field. We evaluate the proposed model against a series of
benchmarks and achieve excellent performance compared to its peers and
cutting-edge architectures. Our code will soon be publicly available.
|
2502.09931
|
TransGUNet: Transformer Meets Graph-based Skip Connection for Medical
Image Segmentation
|
cs.CV cs.AI
|
Skip connection engineering is primarily employed to address the semantic gap
between the encoder and decoder, while also integrating global dependencies to
understand the relationships among complex anatomical structures in medical
image segmentation. Although several models have proposed transformer-based
approaches to incorporate global dependencies within skip connections, they
often face limitations in capturing detailed local features with high
computational complexity. In contrast, graph neural networks (GNNs) exploit
graph structures to effectively capture local and global features. Leveraging
these properties, we introduce an attentional cross-scale graph neural network
(ACS-GNN), which enhances the skip connection framework by converting
cross-scale feature maps into a graph structure and capturing complex
anatomical structures through node attention. Additionally, we observed that
deep learning models often produce uninformative feature maps, which degrades
the quality of spatial attention maps. To address this problem, we integrated
entropy-driven feature selection (EFS) with spatial attention, calculating an
entropy score for each channel and filtering out high-entropy feature maps. Our
innovative framework, TransGUNet, comprises ACS-GNN and EFS-based spatial
attention to effectively enhance domain generalizability across various
modalities by leveraging GNNs alongside a reliable spatial attention map,
ensuring more robust features within the skip connection. Through comprehensive
experiments and analysis, TransGUNet achieved superior segmentation performance
on six seen and eight unseen datasets, demonstrating significantly higher
efficiency compared to previous methods.
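The entropy-driven feature selection step can be illustrated with a small sketch: score each channel's feature map by Shannon entropy and keep the low-entropy (more structured) channels. The exact scoring and thresholding used in TransGUNet may differ:

```python
import math

def channel_entropy(fmap):
    """Shannon entropy of one non-negative (e.g. post-ReLU) feature map,
    treated as a probability distribution over its pixels."""
    vals = [v for row in fmap for v in row]
    total = sum(vals)
    if total == 0:
        return 0.0                              # empty map: zero entropy
    probs = [v / total for v in vals if v > 0]
    return -sum(p * math.log(p) for p in probs)

def entropy_filter(feature_maps, keep_ratio=0.5):
    """Keep the lowest-entropy channels; drop near-uniform, uninformative
    ones, which would otherwise degrade the spatial attention map."""
    scored = sorted(range(len(feature_maps)),
                    key=lambda i: channel_entropy(feature_maps[i]))
    keep = max(1, int(len(feature_maps) * keep_ratio))
    return sorted(scored[:keep])

# A peaked (informative) map vs. a flat (uninformative) one.
peaked = [[0.0, 0.0, 0.0], [0.0, 9.0, 0.0], [0.0, 0.0, 0.0]]
flat = [[1.0] * 3 for _ in range(3)]
kept = entropy_filter([flat, peaked], keep_ratio=0.5)
print(kept)  # -> [1]  (the peaked, low-entropy channel is retained)
```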
|
2502.09932
|
AffectSRNet : Facial Emotion-Aware Super-Resolution Network
|
cs.CV
|
Facial expression recognition (FER) systems in low-resolution settings face
significant challenges in accurately identifying expressions due to the loss of
fine-grained facial details. This limitation is especially problematic for
applications like surveillance and mobile communications, where low image
resolution is common and can compromise recognition accuracy. Traditional
single-image face super-resolution (FSR) techniques, however, often fail to
preserve the emotional intent of expressions, introducing distortions that
obscure the original affective content. Given the inherently ill-posed nature
of single-image super-resolution, a targeted approach is required to balance
image quality enhancement with emotion retention. In this paper, we propose
AffectSRNet, a novel emotion-aware super-resolution framework that reconstructs
high-quality facial images from low-resolution inputs while maintaining the
intensity and fidelity of facial expressions. Our method effectively bridges
the gap between image resolution and expression accuracy by employing an
expression-preserving loss function, specifically tailored for FER
applications. Additionally, we introduce a new metric to assess emotion
preservation in super-resolved images, providing a more nuanced evaluation of
FER system performance in low-resolution scenarios. Experimental results on
standard datasets, including CelebA, FFHQ, and Helen, demonstrate that
AffectSRNet outperforms existing FSR approaches in both visual quality and
emotion fidelity, highlighting its potential for integration into practical FER
applications. This work not only improves image clarity but also ensures that
emotion-driven applications retain their core functionality in suboptimal
resolution environments, paving the way for broader adoption in FER systems.
|
2502.09933
|
MIR-Bench: Benchmarking LLM's Long-Context Intelligence via Many-Shot
In-Context Inductive Reasoning
|
cs.AI cs.CL cs.LG
|
Inductive Reasoning (IR), the ability to summarize rules from examples and
apply them to new ones, has long been viewed as a primal ability for general
intelligence and widely studied by cognitive science and AI researchers. Many
benchmarks have been proposed to measure such ability for Large Language Models
(LLMs); however, they focus on few-shot (usually $<$10) setting and lack
evaluation for aggregating many pieces of information from long contexts. On
the other hand, the ever-growing context length of LLMs has brought forth the
novel paradigm of many-shot In-Context Learning (ICL), which addresses new
tasks with hundreds to thousands of examples without expensive and inefficient
fine-tuning. However, many-shot evaluations are mostly focused on
classification (a very limited aspect of IR), and popular long-context LLM
tasks such as Needle-In-A-Haystack (NIAH) seldom require complicated
intelligence for integrating many pieces of information. To fix the issues from
both worlds, we propose MIR-Bench, the first many-shot in-context inductive
reasoning benchmark that asks LLMs to induce outputs via input-output examples
from underlying functions with diverse data formats. Based on MIR-Bench, we
study many novel problems for inductive reasoning and many-shot ICL, including
robustness against erroneous shots and the effect of Chain-of-Thought (CoT),
and report insightful findings.
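The many-shot inductive-reasoning setup, in which the model must induce a rule from input-output pairs and apply it to a new input, can be sketched as a simple prompt builder. The format below is hypothetical; MIR-Bench's actual data formats are far more diverse:

```python
def build_many_shot_prompt(pairs, query,
                           instruction="Induce the underlying rule "
                                       "and apply it to the last input."):
    """Format many input-output shots for in-context inductive reasoning."""
    lines = [instruction]
    for x, y in pairs:
        lines.append(f"Input: {x}\nOutput: {y}")
    lines.append(f"Input: {query}\nOutput:")   # model completes this line
    return "\n\n".join(lines)

# Toy underlying function: reverse the string. With hundreds to
# thousands of such shots this becomes the many-shot ICL regime.
shots = [(s, s[::-1]) for s in ["abc", "hello", "12345"]]
prompt = build_many_shot_prompt(shots, "world")
```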
|
2502.09934
|
Fused Partial Gromov-Wasserstein for Structured Objects
|
cs.LG
|
Structured data, such as graphs, are vital in machine learning due to their
capacity to capture complex relationships and interactions. In recent years,
the Fused Gromov-Wasserstein (FGW) distance has attracted growing interest
because it enables the comparison of structured data by jointly accounting for
feature similarity and geometric structure. However, as a variant of optimal
transport (OT), classical FGW assumes an equal mass constraint on the compared
data. In this work, we relax this mass constraint and propose the Fused Partial
Gromov-Wasserstein (FPGW) framework, which extends FGW to accommodate
unbalanced data. Theoretically, we establish the relationship between FPGW and
FGW and prove the metric properties of FPGW. Numerically, we introduce
Frank-Wolfe solvers for the proposed FPGW framework and provide a convergence
analysis. Finally, we evaluate the FPGW distance through graph classification
and clustering experiments, demonstrating its robust performance, especially
when data is corrupted by outlier noise.
|
2502.09935
|
Precise Parameter Localization for Textual Generation in Diffusion
Models
|
cs.CV
|
Novel diffusion models can synthesize photo-realistic images with integrated
high-quality text. Surprisingly, we demonstrate through attention activation
patching that less than 1% of diffusion models' parameters, all contained
in attention layers, influence the generation of textual content within the
images. Building on this observation, we improve textual generation efficiency
and performance by targeting cross and joint attention layers of diffusion
models. We introduce several applications that benefit from localizing the
layers responsible for textual content generation. We first show that
LoRA-based fine-tuning of only the localized layers further enhances the
general text-generation capabilities of large diffusion models while preserving
the quality and diversity of the diffusion models' generations. Then, we
demonstrate how we can use the localized layers to edit textual content in
generated images. Finally, we extend this idea to the practical use case of
preventing the generation of toxic text in a cost-free manner. In contrast to
prior work, our localization approach is broadly applicable across various
diffusion model architectures, including U-Net (e.g., LDM and SDXL) and
transformer-based (e.g., DeepFloyd IF and Stable Diffusion 3), utilizing
diverse text encoders (e.g., from CLIP to the large language models like T5).
Project page available at https://t2i-text-loc.github.io/.
|
2502.09937
|
Tradeoffs in Processing Queries and Supporting Updates over an
ML-Enhanced R-tree
|
cs.DB cs.LG
|
Machine Learning (ML) techniques have been successfully applied to design
various learned database index structures for both the one- and
multi-dimensional spaces. Particularly, a class of traditional
multi-dimensional indexes has been augmented with ML models to design
ML-enhanced variants of their traditional counterparts. This paper focuses on
the R-tree multi-dimensional index structure as it is widely used for indexing
multi-dimensional data. The R-tree has been augmented with machine learning
models to enhance the R-tree performance. The AI+R-tree is an ML-enhanced
R-tree index structure that augments a traditional disk-based R-tree with an ML
model to enhance the R-tree's query processing performance, mainly to avoid
navigating the overlapping branches of the R-tree that do not yield query
results, e.g., in the presence of high-overlap among the rectangles of the
R-tree nodes. We investigate the empirical tradeoffs in processing dynamic
query workloads and in supporting updates over the AI+R-tree. Particularly, we
investigate the impact of the choice of ML model on the AI+R-tree's query
processing performance. Moreover, we present a case study of designing a custom
loss function for a neural network model tailored to the query processing
requirements of the AI+R-tree. Furthermore, we present the design tradeoffs for
adopting various strategies for supporting dynamic inserts, updates, and
deletes with the vision of realizing a mutable AI+R-tree. Experiments on real
datasets demonstrate that the AI+R-tree can enhance the query processing
performance of a traditional R-tree for high-overlap range queries by up to
5.4X while achieving up to 99% average query recall.
|
2502.09939
|
Temporal Scale and Shift Invariant Automatic Event Recognition using the
Mellin Transform
|
cs.CV
|
The spatio-temporal holographic correlator combines traditional 2D optical
image correlation techniques with inhomogeneously broadened arrays of cold
atoms to achieve 3D time-space correlation, realizing automatic event
recognition at ultra-high speed. Here we propose a method to realize such
event recognition for videos running at different speeds. With this method, we
can greatly improve recognition accuracy and filter out almost all unwanted
events in the video database.
|
2502.09940
|
A Preliminary Exploration with GPT-4o Voice Mode
|
cs.CL cs.SD eess.AS
|
With the rise of multimodal large language models, GPT-4o stands out as a
pioneering model, driving us to evaluate its capabilities. This report assesses
GPT-4o across various tasks to analyze its audio processing and reasoning
abilities. We find that GPT-4o exhibits strong knowledge in audio, speech, and
music understanding, performing well in tasks like intent classification,
spoken command classification, semantic and grammatical reasoning,
multilingual speech recognition, and singing analysis. It also shows greater
robustness against hallucinations than other large audio-language models
(LALMs). However, it struggles with tasks such as audio duration prediction and
instrument classification. Additionally, GPT-4o's safety mechanisms cause it to
decline tasks like speaker identification, age classification, MOS prediction,
and audio deepfake detection. Notably, the model exhibits a significantly
different refusal rate when responding to speaker verification tasks on
different datasets. This is likely due to variations in the accompanying
instructions or the quality of the input audio, suggesting the sensitivity of
its built-in safeguards. Finally, we acknowledge that model performance varies
with evaluation protocols. This report only serves as a preliminary exploration
of the current state of LALMs.
|
2502.09941
|
A Lightweight and Effective Image Tampering Localization Network with
Vision Mamba
|
cs.CV cs.CR
|
Current image tampering localization methods primarily rely on Convolutional
Neural Networks (CNNs) and Transformers. While CNNs suffer from limited local
receptive fields, Transformers offer global context modeling at the expense of
quadratic computational complexity. Recently, the state space model Mamba has
emerged as a competitive alternative, enabling linear-complexity global
dependency modeling. Inspired by it, we propose a lightweight and effective
FORensic network based on vision MAmba (ForMa) for blind image tampering
localization. Firstly, ForMa captures multi-scale global features, achieving
efficient global dependency modeling with linear complexity. Then the
pixel-wise localization map is generated by a lightweight decoder, which
employs a parameter-free pixel shuffle layer for upsampling. Additionally, a
noise-assisted decoding strategy is proposed to integrate complementary
manipulation traces from tampered images, boosting decoder sensitivity to
forgery cues. Experimental results on 10 standard datasets demonstrate that
ForMa achieves state-of-the-art generalization ability and robustness, while
maintaining the lowest computational complexity. Code is available at
https://github.com/multimediaFor/ForMa.
|
2502.09944
|
Self-Supervised Learning for Neural Topic Models with
Variance-Invariance-Covariance Regularization
|
cs.LG cs.CL
|
In our study, we propose a self-supervised neural topic model (NTM) that
combines the power of NTMs and regularized self-supervised learning methods to
improve performance. NTMs use neural networks to learn latent topics hidden
behind the words in documents, enabling greater flexibility and the ability to
estimate more coherent topics compared to traditional topic models. On the
other hand, some self-supervised learning methods use a joint embedding
architecture with two identical networks that produce similar representations
for two augmented versions of the same input. Regularizations are applied to
these representations to prevent collapse, which would otherwise result in the
networks outputting constant or redundant representations for all inputs. Our
model enhances topic quality by explicitly regularizing latent topic
representations of anchor and positive samples. We also introduce an
adversarial data augmentation method to replace the heuristic sampling method.
We further develop several model variants, including one based on an NTM that
incorporates contrastive learning with both positive and negative samples.
Experimental results on three datasets show that our models
outperformed baselines and state-of-the-art models both quantitatively and
qualitatively.
|
2502.09947
|
Analyzing Patient Daily Movement Behavior Dynamics Using Two-Stage
Encoding Model
|
cs.AI cs.LG
|
In the analysis of remote healthcare monitoring data, time series
representation learning offers substantial value in uncovering deeper patterns
of patient behavior, especially given the fine temporal granularity of the
data. In this study, we focus on a dataset of home activity records from people
living with Dementia. We propose a two-stage self-supervised learning approach.
The first stage involves converting time-series activities into text strings,
which are then encoded by a fine-tuned language model. In the second stage,
these time-series vectors are projected into two dimensions so that the
PageRank method can be applied to analyze latent state transitions,
quantitatively assessing participants' behavioral patterns and identifying
activity biases. These insights, combined with
diagnostic data, aim to support personalized care interventions.
|
2502.09952
|
Using MRNet to Predict Lunar Rock Categories Detected by Chang'e 5 Probe
|
cs.CV cs.AI
|
China's Chang'e 5 mission has been a remarkable success, with the Chang'e 5
lander traveling across the Oceanus Procellarum to collect images of the lunar
surface. Over the past half century, some lunar rock samples have been brought
back to Earth, but their quantity does not meet the needs of research. Under
current circumstances, researchers still mainly rely on analyzing rocks on the
lunar surface through detection by lunar rovers. The Oceanus Procellarum, the
landing site of the Chang'e 5 mission, contains various kinds of rock species.
Therefore, we first applied to the National Astronomical Observatories of the
Chinese Academy of Sciences for lunar surface images from the Navigation and
Terrain Camera (NaTeCam), and established a lunar surface rock image data set,
CE5ROCK. The data set contains 100 images, which are randomly divided into
training, validation, and test sets. Experimental results show that the
identification accuracy of convolutional neural network (CNN) models like
AlexNet and MobileNet is about 40.0%. To make full use of the global
information in lunar images, this paper proposes the MRNet (MoonRockNet)
network architecture. The encoder of the network uses VGG16 for feature
extraction, and the decoder adds dilated convolution and the commonly used
U-Net structure to the original VGG16 decoding structure, which is more
conducive to identifying finer but more sparsely distributed types of lunar
rocks. We have conducted extensive experiments on the established CE5ROCK data
set, and the results show that MRNet achieves more accurate rock type
identification, outperforming other existing mainstream algorithms in
identification performance.
|
2502.09954
|
On Space Folds of ReLU Neural Networks
|
cs.LG cs.NE
|
Recent findings suggest that the consecutive layers of ReLU neural networks
can be understood geometrically as space folding transformations of the input
space, revealing patterns of self-similarity. In this paper, we present the
first quantitative analysis of this space folding phenomenon in ReLU neural
networks. Our approach focuses on examining how straight paths in the Euclidean
input space are mapped to their counterparts in the Hamming activation space.
In this process, the convexity of straight lines is generally lost, giving rise
to non-convex folding behavior. To quantify this effect, we introduce a novel
measure based on range metrics, similar to those used in the study of random
walks, and prove the equivalence of convexity notions between
the input and activation spaces. Furthermore, we provide empirical analysis on
a geometrical analysis benchmark (CantorNet) as well as an image classification
benchmark (MNIST). Our work advances the understanding of the activation space
in ReLU neural networks by leveraging the phenomena of geometric folding,
providing valuable insights on how these models process input information.
|
2502.09955
|
Diverse Inference and Verification for Advanced Reasoning
|
cs.AI
|
Reasoning LLMs such as OpenAI o1, o3 and DeepSeek R1 have made significant
progress in mathematics and coding, yet still find advanced tasks challenging,
such as
International Mathematical Olympiad (IMO) combinatorics problems, Abstraction
and Reasoning Corpus (ARC) puzzles, and Humanity's Last Exam (HLE) questions.
We use a diverse inference approach that combines multiple models and methods
at test time. We find that verifying mathematics and code problems, and
rejection sampling on other problems is simple and effective. We automatically
verify the correctness of solutions to IMO problems using Lean and of ARC
puzzles using code, and find that best-of-N effectively answers HLE questions.
Our approach
increases answer accuracy on IMO combinatorics problems from 33.3% to 77.8%,
accuracy on HLE questions from 8% to 37%, and solves 80% of ARC puzzles that
948 humans could not and 26.5% of ARC puzzles that o3 high compute does not.
Test-time simulations, reinforcement learning, and meta-learning with inference
feedback improve generalization by adapting agent graph representations and
varying prompts, code, and datasets. Our approach is reliable, robust, and
scalable, and in the spirit of reproducible research, we will make it publicly
available upon publication.
|
2502.09956
|
KGGen: Extracting Knowledge Graphs from Plain Text with Language Models
|
cs.CL cs.AI cs.IR cs.LG
|
Recent interest in building foundation models for KGs has highlighted a
fundamental challenge: knowledge-graph data is relatively scarce. The
best-known KGs are primarily human-labeled, created by pattern-matching, or
extracted using early NLP techniques. While human-generated KGs are in short
supply, automatically extracted KGs are of questionable quality. We present a
solution to this data scarcity problem in the form of a text-to-KG generator
(KGGen), a package that uses language models to create high-quality graphs from
plaintext. Unlike other KG extractors, KGGen clusters related entities to
reduce sparsity in extracted KGs. KGGen is available as a Python library
(\texttt{pip install kg-gen}), making it accessible to everyone. Along with
KGGen, we release the first benchmark, Measure of Information in Nodes and
Edges (MINE), that tests an extractor's ability to produce a useful KG from
plain text. We benchmark our new tool against existing extractors and
demonstrate far superior performance.
|
2502.09960
|
Global-Local Interface for On-Demand Teleoperation
|
cs.RO
|
Teleoperation is a critical method of human-robot interaction and holds
significant potential for enabling robotic applications in industrial and
unstructured environments. Existing teleoperation methods have distinct
strengths and limitations in flexibility, range of workspace and precision. To
fuse these advantages, we introduce the Global-Local (G-L) Teleoperation
Interface. This interface decouples robotic teleoperation into global behavior,
which ensures the robot motion range and intuitiveness, and local behavior,
which enhances human operator's dexterity and capability for performing fine
tasks. The G-L interface enables efficient teleoperation not only for
conventional tasks like pick-and-place, but also for challenging fine
manipulation and large-scale movements. Based on the G-L interface, we
constructed a single-arm and a dual-arm teleoperation system with different
remote control devices, then demonstrated tasks requiring large motion range,
precise manipulation or dexterous end-effector control. Extensive experiments
validated the user-friendliness, accuracy, and generalizability of the proposed
interface.
|
2502.09963
|
Generating on Generated: An Approach Towards Self-Evolving Diffusion
Models
|
cs.CV
|
Recursive Self-Improvement (RSI) enables intelligent systems to autonomously
refine their capabilities. This paper explores the application of RSI in
text-to-image diffusion models, addressing the challenge of training collapse
caused by synthetic data. We identify two key factors contributing to this
collapse: the lack of perceptual alignment and the accumulation of generative
hallucinations. To mitigate these issues, we propose three strategies: (1) a
prompt construction and filtering pipeline designed to facilitate the
generation of perceptually aligned data, (2) a preference sampling method to
identify human-preferred samples and filter out generative hallucinations, and
(3) a distribution-based weighting scheme to penalize selected samples with
hallucinatory errors. Our extensive experiments validate the effectiveness of
these approaches.
|
2502.09967
|
VicKAM: Visual Conceptual Knowledge Guided Action Map for Weakly
Supervised Group Activity Recognition
|
cs.CV
|
Existing weakly supervised group activity recognition methods rely on object
detectors or attention mechanisms to capture key areas automatically. However,
they overlook the semantic information associated with captured areas, which
may adversely affect the recognition performance. In this paper, we propose a
novel framework named Visual Conceptual Knowledge Guided Action Map (VicKAM)
which effectively captures the locations of individual actions and integrates
them with action semantics for weakly supervised group activity recognition. It
generates individual action prototypes from the training set as visual
conceptual knowledge to bridge action semantics and visual representations.
Guided by this knowledge, VicKAM produces action maps that indicate the
likelihood of each action occurring at various locations, based on the image
correlation theorem. It
further augments individual action maps using group activity related
statistical information, representing individual action distribution under
different group activities, to establish connections between action maps and
specific group activities. The augmented action map is incorporated with action
semantic representations for group activity recognition. Extensive experiments
on two public benchmarks, the Volleyball and the NBA datasets, demonstrate the
effectiveness of our proposed method, even in cases of limited training data.
The code will be released later.
|
2502.09969
|
Data Valuation using Neural Networks for Efficient Instruction
Fine-Tuning
|
cs.LG cs.AI cs.CL
|
Influence functions provide crucial insights into model training, but
existing methods suffer from large computational costs and limited
generalization. Particularly, recent works have proposed various metrics and
algorithms to calculate the influence of data using language models, which do
not scale well with large models and datasets. This is because of the expensive
forward and backward passes required for computation, substantial memory
requirements to store large models, and poor generalization of influence
estimates to new data. In this paper, we explore the use of small neural
networks -- which we refer to as the InfluenceNetwork -- to estimate influence
values, achieving up to 99% cost reduction. Our evaluation demonstrates that
influence values can be estimated with models just 0.0027% the size of full
language models (we use 7B and 8B versions). We apply our algorithm of
estimating influence values (called NN-CIFT: Neural Networks for effiCient
Instruction Fine-Tuning) to the downstream task of subset selection for general
instruction fine-tuning. In our study, we include four state-of-the-art
influence functions and show no compromise in performance, despite large
speedups, between NN-CIFT and the original influence functions. We provide an
in-depth hyperparameter analysis of NN-CIFT. The code for our method can be
found here: https://github.com/agarwalishika/NN-CIFT.
|
2502.09970
|
Universal Machine Learning Interatomic Potentials are Ready for Solid
Ion Conductors
|
cond-mat.mtrl-sci cs.LG
|
With the rapid development of energy storage technology, high-performance
solid-state electrolytes (SSEs) have become critical for next-generation
lithium-ion batteries. These materials require high ionic conductivity,
excellent electrochemical stability, and good mechanical properties to meet the
demands of electric vehicles and portable electronics. However, traditional
methods like density functional theory (DFT) and empirical force fields face
challenges such as high computational costs, poor scalability, and limited
accuracy across material systems. Universal machine learning interatomic
potentials (uMLIPs) offer a promising solution with their efficiency and
near-DFT-level accuracy. This study systematically evaluates six advanced uMLIP
models (MatterSim, MACE, SevenNet, CHGNet, M3GNet, and ORBFF) in terms of
energy, forces, thermodynamic properties, elastic moduli, and lithium-ion
diffusion behavior. The results show that MatterSim outperforms others in
nearly all metrics, particularly in complex material systems, demonstrating
superior accuracy and physical consistency. Other models exhibit significant
deviations due to issues like energy inconsistency or insufficient training
data coverage. Further analysis reveals that MatterSim achieves excellent
agreement with reference values in lithium-ion diffusivity calculations,
especially at room temperature. Studies on Li3YCl6 and Li6PS5Cl uncover how
crystal structure, anion disorder levels, and Na/Li arrangements influence
ionic conductivity. Appropriate S/Cl disorder levels and optimized Na/Li
arrangements enhance diffusion pathway connectivity, improving overall ionic
transport performance.
|
2502.09971
|
Conditional Latent Coding with Learnable Synthesized Reference for Deep
Image Compression
|
cs.CV cs.AI
|
In this paper, we study how to synthesize a dynamic reference from an
external dictionary to perform conditional coding of the input image in the
latent domain and how to learn the conditional latent synthesis and coding
modules in an end-to-end manner. Our approach begins by constructing a
universal image feature dictionary using a multi-stage approach involving
modified spatial pyramid pooling, dimension reduction, and multi-scale feature
clustering. For each input image, we learn to synthesize a conditioning latent
by selecting and synthesizing relevant features from the dictionary, which
significantly enhances the model's capability in capturing and exploring image
source correlation. This conditional latent synthesis involves a
correlation-based feature matching and alignment strategy, comprising a
Conditional Latent Matching (CLM) module and a Conditional Latent Synthesis
(CLS) module. The synthesized latent is then used to guide the encoding
process, allowing for more efficient compression by exploiting the correlation
between the input image and the reference dictionary. According to our
theoretical analysis, the proposed conditional latent coding (CLC) method is
robust to perturbations in the external dictionary samples and the selected
conditioning latent, with an error bound that scales logarithmically with the
dictionary size, ensuring stability even with large and diverse dictionaries.
Experimental results on benchmark datasets show that our new method improves
the coding performance by a large margin (up to 1.2 dB) with a very small
overhead of approximately 0.5\% bits per pixel. Our code is publicly available
at https://github.com/ydchen0806/CLC.
|
2502.09974
|
Has My System Prompt Been Used? Large Language Model Prompt Membership
Inference
|
cs.AI cs.CR
|
Prompt engineering has emerged as a powerful technique for optimizing large
language models (LLMs) for specific applications, enabling faster prototyping
and improved performance, sparking community interest in
protecting proprietary system prompts. In this work, we explore a novel
perspective on prompt privacy through the lens of membership inference. We
develop Prompt Detective, a statistical method to reliably determine whether a
given system prompt was used by a third-party language model. Our approach
relies on a statistical test comparing the distributions of two groups of model
outputs corresponding to different system prompts. Through extensive
experiments with a variety of language models, we demonstrate the effectiveness
of Prompt Detective for prompt membership inference. Our work reveals that even
minor changes in system prompts manifest in distinct response distributions,
enabling us to verify prompt usage with statistical significance.
|
2502.09977
|
LaRA: Benchmarking Retrieval-Augmented Generation and Long-Context LLMs
-- No Silver Bullet for LC or RAG Routing
|
cs.CL cs.AI
|
Effectively incorporating external knowledge into Large Language Models
(LLMs) is crucial for enhancing their capabilities and addressing real-world
needs. Retrieval-Augmented Generation (RAG) offers an effective method for
achieving this by retrieving the most relevant fragments into LLMs. However,
the advancements in context window size for LLMs offer an alternative approach,
raising the question of whether RAG remains necessary for effectively handling
external knowledge. Several existing studies provide inconclusive comparisons
between RAG and long-context (LC) LLMs, largely due to limitations in the
benchmark designs. In this paper, we present LaRA, a novel benchmark
specifically designed to rigorously compare RAG and LC LLMs. LaRA encompasses
2,326 test cases across four practical QA task categories and three types of
naturally occurring long texts. Through systematic evaluation of seven
open-source and four proprietary LLMs, we find that the optimal choice between
RAG and LC depends on a complex interplay of factors, including the model's
parameter size, long-text capabilities, context length, task type, and the
characteristics of the retrieved chunks. Our findings provide actionable
guidelines for practitioners to effectively leverage both RAG and LC approaches
in developing and deploying LLM applications. Our code and dataset are provided
at:
\href{https://github.com/likuanppd/LaRA}{\textbf{https://github.com/likuanppd/LaRA}}.
|
2502.09978
|
RoadFed: A Multimodal Federated Learning System for Improving Road
Safety
|
cs.CE
|
The Internet of Things (IoT) has been widely applied in Collaborative
Intelligent Transportation Systems (C-ITS) for the prevention of road
accidents. Since road hazards are among the primary causes of road accidents in
C-ITS, their efficient detection and early warning are of paramount importance.
Given this importance, extensive research has explored this topic
and obtained favorable results. However, most existing solutions only explore
single-modality data, struggle with high computation and communication
overhead, or suffer from the curse of high dimensionality in their
privacy-preserving methodologies. To overcome these obstacles, in this paper,
we introduce RoadFed, an innovative and private multimodal Federated
learning-based system tailored for intelligent Road hazard detection and alarm.
This framework encompasses an innovative Multimodal Road Hazard Detector, a
communication-efficient federated learning approach, and a customized
low-error-rate local differential privacy method crafted for high dimensional
multimodal data. Experimental results reveal that the proposed RoadFed
surpasses most existing systems in the self-gathered real-world and CrisisMMD
public datasets. In particular, RoadFed achieves an accuracy of 96.42% with a
mere 0.0351 seconds of latency and its communication cost is up to 1,000 times
lower than existing systems in this field. It facilitates collaborative
training with non-iid high dimensional multimodal real-world data across
various data modalities on multiple edges while ensuring privacy preservation
for road users.
|
2502.09980
|
V2V-LLM: Vehicle-to-Vehicle Cooperative Autonomous Driving with
Multi-Modal Large Language Models
|
cs.CV cs.RO
|
Current autonomous driving vehicles rely mainly on their individual sensors
to understand surrounding scenes and plan for future trajectories, which can be
unreliable when the sensors are malfunctioning or occluded. To address this
problem, cooperative perception methods via vehicle-to-vehicle (V2V)
communication have been proposed, but they have tended to focus on detection
and tracking. How those approaches contribute to overall cooperative planning
performance is still under-explored. Inspired by recent progress using Large
Language Models (LLMs) to build autonomous driving systems, we propose a novel
problem setting that integrates an LLM into cooperative autonomous driving,
with the proposed Vehicle-to-Vehicle Question-Answering (V2V-QA) dataset and
benchmark. We also propose our baseline method Vehicle-to-Vehicle Large
Language Model (V2V-LLM), which uses an LLM to fuse perception information from
multiple connected autonomous vehicles (CAVs) and answer driving-related
questions: grounding, notable object identification, and planning. Experimental
results show that our proposed V2V-LLM can be a promising unified model
architecture for performing various tasks in cooperative autonomous driving,
and outperforms other baseline methods that use different fusion approaches.
Our work also creates a new research direction that can improve the safety of
future autonomous driving systems. Our project website:
https://eddyhkchiu.github.io/v2vllm.github.io/ .
|
2502.09981
|
Exploring Neural Granger Causality with xLSTMs: Unveiling Temporal
Dependencies in Complex Data
|
cs.LG
|
Causality in time series can be difficult to determine, especially in the
presence of non-linear dependencies. The concept of Granger causality helps
analyze potential relationships between variables, thereby offering a method to
determine whether one time series can predict (i.e., Granger-cause) future
values of
another. Although successful, Granger causal methods still struggle with
capturing long-range relations between variables. To this end, we leverage the
recently successful Extended Long Short-Term Memory (xLSTM) architecture and
propose Granger causal xLSTMs (GC-xLSTM). It first enforces sparsity between
the time series components by using a novel dynamic lasso penalty on the
initial
projection. Specifically, we adaptively improve the model and identify sparsity
candidates. Our joint optimization procedure then ensures that the Granger
causal relations are recovered in a robust fashion. Our experimental
evaluations on three datasets demonstrate the overall efficacy of our proposed
GC-xLSTM model.
|
2502.09985
|
On Volume Minimization in Conformal Regression
|
stat.ML cs.LG
|
We study the question of volume optimality in split conformal regression, a
topic still poorly understood in comparison to coverage control. Using the fact
that the calibration step can be seen as an empirical volume minimization
problem, we first derive a finite-sample upper-bound on the excess volume loss
of the interval returned by the classical split method. This important quantity
measures the difference in length between the interval obtained with the split
method and the shortest oracle prediction interval. Then, we introduce EffOrt,
a methodology that modifies the learning step so that the base prediction
function is selected in order to minimize the length of the returned intervals.
In particular, our theoretical analysis of the excess volume loss of the
prediction sets produced by EffOrt reveals the links between the learning and
calibration steps, and notably the impact of the choice of the function class
of the base predictor. We also introduce Ad-EffOrt, an extension of the
previous method, which produces intervals whose size adapts to the value of the
covariate. Finally, we evaluate the empirical performance and the robustness of
our methodologies.
|
2502.09990
|
X-Boundary: Establishing Exact Safety Boundary to Shield LLMs from
Multi-Turn Jailbreaks without Compromising Usability
|
cs.CR cs.AI cs.CL cs.CV cs.LG
|
Despite the rapid development of safety alignment techniques for LLMs,
defending against multi-turn jailbreaks is still a challenging task. In this
paper, we conduct a comprehensive comparison, revealing that some existing
defense methods can improve the robustness of LLMs against multi-turn
jailbreaks but compromise usability, i.e., reducing general capabilities or
causing the over-refusal problem. From the perspective of mechanistic
interpretability of LLMs, we discover that these methods fail to establish a
boundary that exactly distinguishes safe and harmful feature representations.
Therefore, boundary-safe representations close to harmful representations are
inevitably disrupted, leading to a decline in usability. To address this issue,
we propose X-Boundary to push harmful representations away from boundary-safe
representations and obtain an exact distinction boundary. In this way, harmful
representations can be precisely erased without disrupting safe ones.
Experimental results show that X-Boundary achieves state-of-the-art defense
performance against multi-turn jailbreaks, while reducing the over-refusal rate
by about 20% and maintaining nearly complete general capability. Furthermore,
we theoretically prove and empirically verify that X-Boundary can accelerate
the convergence process during training. Please see our code at:
https://github.com/AI45Lab/X-Boundary.
|
2502.09992
|
Large Language Diffusion Models
|
cs.CL cs.LG
|
Autoregressive models (ARMs) are widely regarded as the cornerstone of large
language models (LLMs). We challenge this notion by introducing LLaDA, a
diffusion model trained from scratch under the pre-training and supervised
fine-tuning (SFT) paradigm. LLaDA models distributions through a forward data
masking process and a reverse process, parameterized by a vanilla Transformer
to predict masked tokens. By optimizing a likelihood bound, it provides a
principled generative approach for probabilistic inference. Across extensive
benchmarks, LLaDA demonstrates strong scalability, outperforming our
self-constructed ARM baselines. Remarkably, LLaDA 8B is competitive with strong
LLMs like LLaMA3 8B in in-context learning and, after SFT, exhibits impressive
instruction-following abilities in case studies such as multi-turn dialogue.
Moreover, LLaDA addresses the reversal curse, surpassing GPT-4o in a reversal
poem completion task. Our findings establish diffusion models as a viable and
promising alternative to ARMs, challenging the assumption that key LLM
capabilities discussed above are inherently tied to ARMs. Project page and
codes: https://ml-gsai.github.io/LLaDA-demo/.
|
2502.09993
|
Navigating Label Ambiguity for Facial Expression Recognition in the Wild
|
cs.CV
|
Facial expression recognition (FER) remains a challenging task due to label
ambiguity caused by the subjective nature of facial expressions and noisy
samples. Additionally, class imbalance, which is common in real-world datasets,
further complicates FER. Although many studies have shown impressive
improvements, they typically address only one of these issues, leading to
suboptimal results. To tackle both challenges simultaneously, we propose a
novel framework called Navigating Label Ambiguity (NLA), which is robust under
real-world conditions. The motivation behind NLA is that dynamically estimating
and emphasizing ambiguous samples at each iteration helps mitigate noise and
class imbalance by reducing the model's bias toward majority classes. To
achieve this, NLA consists of two main components: Noise-aware Adaptive
Weighting (NAW) and consistency regularization. Specifically, NAW adaptively
assigns higher importance to ambiguous samples and lower importance to noisy
ones, based on the correlation between the intermediate prediction scores for
the ground truth and the nearest negative. Moreover, we incorporate a
regularization term to ensure consistent latent distributions. Consequently,
NLA enables the model to progressively focus on more challenging ambiguous
samples, which primarily belong to the minority class, in the later stages of
training. Extensive experiments demonstrate that NLA outperforms existing
methods in both overall and mean accuracy, confirming its robustness against
noise and class imbalance. To the best of our knowledge, this is the first
framework to address both problems simultaneously.
|
2502.09994
|
Decision Information Meets Large Language Models: The Future of
Explainable Operations Research
|
cs.AI
|
Operations Research (OR) is vital for decision-making in many industries.
While recent OR methods have seen significant improvements in automation and
efficiency through integrating Large Language Models (LLMs), they still
struggle to produce meaningful explanations. This lack of clarity raises
concerns about transparency and trustworthiness in OR applications. To address
these challenges, we propose a comprehensive framework, Explainable Operations
Research (EOR), emphasizing actionable and understandable explanations
accompanying optimization. The core of EOR is the concept of Decision
Information, which emerges from what-if analysis and focuses on evaluating the
impact of changes in complex constraints (or parameters) on decision-making.
Specifically, we utilize bipartite graphs to quantify the changes in the OR
model and adopt LLMs to improve the explanation capabilities. Additionally, we
introduce the first industrial benchmark to rigorously evaluate the
effectiveness of explanations and analyses in OR, establishing a new standard
for transparency and clarity in the field.
|
2502.09998
|
Estimation of the Learning Coefficient Using Empirical Loss
|
stat.ML cs.LG
|
The learning coefficient plays a crucial role in analyzing the performance of
information criteria, such as the Widely Applicable Information Criterion
(WAIC) and the Widely Applicable Bayesian Information Criterion (WBIC), which
Sumio Watanabe developed to assess model generalization ability. In regular
statistical models, the learning coefficient is given by d/2, where d is the
dimension of the parameter space. More generally, it is defined as the absolute
value of the pole order of a zeta function derived from the Kullback-Leibler
divergence and the prior distribution. However, except for specific cases such
as reduced-rank regression, the learning coefficient cannot be derived in a
closed form. Watanabe proposed a numerical method to estimate the learning
coefficient, which Imai further refined to enhance its convergence properties.
These methods utilize the asymptotic behavior of WBIC and have been shown to be
statistically consistent as the sample size grows. In this paper, we propose a
novel numerical estimation method that fundamentally differs from previous
approaches and leverages a new quantity, "Empirical Loss," which was introduced
by Watanabe. Through numerical experiments, we demonstrate that our proposed
method exhibits both lower bias and lower variance compared to those of
Watanabe and Imai. Additionally, we provide a theoretical analysis that
elucidates why our method outperforms existing techniques and present empirical
evidence that supports our findings.
|
2502.10001
|
EmbBERT-Q: Breaking Memory Barriers in Embedded NLP
|
cs.CL cs.AR cs.DC cs.LG
|
Large Language Models (LLMs) have revolutionized natural language processing,
setting new standards across a wide range of applications. However, their
significant memory and computational demands make them impractical for deployment
on technologically-constrained tiny devices such as wearable devices and
Internet-of-Things units. To address this limitation, we introduce EmbBERT-Q, a
novel tiny language model specifically designed for tiny devices with stringent
memory constraints. EmbBERT-Q achieves state-of-the-art (SotA) accuracy in
Natural Language Processing tasks in this scenario, with a total memory
footprint (weights and activations) of just 781 kB, representing a 25x
reduction in size with respect to SotA models. By combining architectural
innovations with hardware-compatible 8-bit quantization, EmbBERT-Q consistently
outperforms several baseline models scaled down to a 2 MB memory budget (i.e.,
the maximum memory typically available in tiny devices), including heavily
compressed versions of BERT and MAMBA. Extensive experimental evaluations on
both TinyNLP, a benchmark dataset specifically curated to evaluate tiny
language models on NLP tasks and real-world scenarios, and the GLUE benchmark
demonstrate EmbBERT-Q's ability to deliver competitive accuracy with respect
to existing approaches, achieving an unmatched balance between memory
and performance. To ensure the complete and immediate reproducibility of all
our results, we release all code, scripts, and model checkpoints at
https://github.com/RiccardoBravin/tiny-LLM.
|
2502.10003
|
SciClaimHunt: A Large Dataset for Evidence-based Scientific Claim
Verification
|
cs.CL
|
Verifying scientific claims presents a significantly greater challenge than
verifying political or news-related claims. Unlike the relatively broad
audience for political claims, the users of scientific claim verification
systems can vary widely, ranging from researchers testing specific hypotheses
to everyday users seeking information on a medication. Additionally, the
evidence for scientific claims is often highly complex, involving technical
terminology and intricate domain-specific concepts that require specialized
models for accurate verification. Despite considerable interest from the
research community, there is a noticeable lack of large-scale scientific claim
verification datasets to benchmark and train effective models. To bridge this
gap, we introduce two large-scale datasets, SciClaimHunt and SciClaimHunt_Num,
derived from scientific research papers. We propose several baseline models
tailored for scientific claim verification to assess the effectiveness of these
datasets. Additionally, we evaluate models trained on SciClaimHunt and
SciClaimHunt_Num against existing scientific claim verification datasets to
gauge their quality and reliability. Furthermore, we conduct human evaluations
of the claims in the proposed datasets and perform error analysis to assess the
effectiveness of the proposed baseline models. Our findings indicate that
SciClaimHunt and SciClaimHunt_Num serve as highly reliable resources for
training models in scientific claim verification.
|
2502.10011
|
InterGridNet: An Electric Network Frequency Approach for Audio Source
Location Classification Using Convolutional Neural Networks
|
cs.SD cs.LG eess.AS
|
A novel framework, called InterGridNet, is introduced, leveraging a shallow
RawNet model for geolocation classification of Electric Network Frequency (ENF)
signatures in the SP Cup 2016 dataset. During data preparation, recordings are
sorted into audio and power groups based on inherent characteristics, further
divided into 50 Hz and 60 Hz groups via spectrogram analysis. Residual blocks
within the classification model extract frame-level embeddings, aiding
decision-making through softmax activation. The topology and the
hyperparameters of the shallow RawNet are optimized using a Neural Architecture
Search. The overall accuracy of InterGridNet in the test recordings is 92%,
indicating its effectiveness against the state-of-the-art methods tested in the
SP Cup 2016. These findings underscore InterGridNet's effectiveness in
accurately classifying audio recordings from diverse power grids, advancing
state-of-the-art geolocation estimation methods.
|