| id | title | categories | abstract |
|---|---|---|---|
2502.10268
|
Optimized Strategies for Peak Shaving and BESS Efficiency Enhancement
through Cycle-Based Control and Cluster-Level Power Allocation
|
eess.SY cs.SY
|
Battery Energy Storage Systems (BESS) are essential for peak shaving,
balancing power supply and demand while enhancing grid efficiency. This study
proposes a cycle-based control strategy for charging and discharging, which
optimizes capture rate (CR), release rate (RR), and capacity utilization rate
(CUR), improving BESS performance. Compared to traditional day-ahead methods,
the cycle-based approach enhances operational accuracy and reduces capacity
waste, achieving a CUR increase from 75.1% to 79.9%. An innovative
cluster-level power allocation method, leveraging an improved Particle Swarm
Optimization (PSO) algorithm, is introduced. This strategy reduces daily energy
loss by 174.21 kWh (3.7%) and increases BESS efficiency by 0.4%. Transient and
steady-state energy loss components are analyzed, revealing that transient loss
proportion decreases significantly as power depth increases, from 27.2% at 1 MW
to 1.3% at 10 MW. Simulations based on a detailed Simulink/Simscape model
validate these methods, demonstrating enhanced peak shaving effectiveness and
prolonged BESS lifespan by reducing equivalent cycles. The study provides a
robust framework for optimizing BESS performance and efficiency in real-world
applications.
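The cluster-level power allocation can be illustrated with a plain PSO loop. This is a sketch only: the paper uses an improved PSO variant, and the loss function, inertia/acceleration constants, and budget-renormalization step below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_allocate(loss_fn, n_clusters, total_power, particles=30, iters=200):
    """Plain PSO for cluster-level power allocation under a total-power budget.

    Sketch with assumed constants; not the paper's improved PSO.
    """
    w, c1, c2 = 0.7, 1.5, 1.5                 # inertia / cognitive / social
    x = rng.random((particles, n_clusters))
    x = total_power * x / x.sum(axis=1, keepdims=True)   # meet the demand
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([loss_fn(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 1e-9, None)
        x = total_power * x / x.sum(axis=1, keepdims=True)  # re-enforce budget
        f = np.array([loss_fn(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest

# toy quadratic loss: clusters with different (hypothetical) loss coefficients
R = np.array([1.0, 2.0, 4.0])
best = pso_allocate(lambda p: float(np.sum(R * p**2)), 3, total_power=9.0)
print(best.round(2))  # larger share goes to the lowest-loss cluster
```

The renormalization keeps every candidate feasible (powers sum to the demanded total), so the swarm searches only over valid allocations.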
|
2502.10273
|
Probing Perceptual Constancy in Large Vision Language Models
|
cs.CV cs.AI
|
Perceptual constancy is the ability to maintain stable perceptions of objects
despite changes in sensory input, such as variations in distance, angle, or
lighting. This ability is crucial for recognizing visual information in a
dynamic world, making it essential for Vision-Language Models (VLMs). However,
whether VLMs are currently and theoretically capable of mastering this ability
remains underexplored. In this study, we evaluated 33 VLMs using 253
experiments across three domains: color, size, and shape constancy. The
experiments included single-image and video adaptations of classic cognitive
tasks, along with novel tasks in in-the-wild conditions, to evaluate the
models' recognition of object properties under varying conditions. We found
significant variability in VLM performance, with performance on shape
constancy clearly dissociated from that on color and size constancy.
|
2502.10277
|
Artificial Intelligence to Assess Dental Findings from Panoramic
Radiographs -- A Multinational Study
|
cs.CV
|
Dental panoramic radiographs (DPRs) are widely used in clinical practice for
comprehensive oral assessment but present challenges due to overlapping
structures and time constraints in interpretation.
This study aimed to establish a solid baseline for the AI-automated
assessment of findings in DPRs by developing and evaluating an AI system and
comparing its performance with that of human readers across multinational data
sets.
We analyzed 6,669 DPRs from three data sets (the Netherlands, Brazil, and
Taiwan), focusing on 8 types of dental findings. The AI system combined object
detection and semantic segmentation techniques for per-tooth finding
identification. Performance metrics included sensitivity, specificity, and area
under the receiver operating characteristic curve (AUC-ROC). AI
generalizability was tested across data sets, and performance was compared with
human dental practitioners.
The AI system demonstrated performance comparable or superior to that of human
readers, notably a +67.9% (95% CI: 54.0%-81.9%; p < .001) higher sensitivity
for identifying periapical radiolucencies and a +4.7% (95% CI: 1.4%-8.0%; p =
.008) higher sensitivity for identifying missing teeth. The AI achieved a macro-averaged
AUC-ROC of 96.2% (95% CI: 94.6%-97.8%) across 8 findings. AI agreements with
the reference were comparable to inter-human agreements in 7 of 8 findings
except for caries (p = .024). The AI system demonstrated robust generalization
across diverse imaging and demographic settings and processed images 79 times
faster (95% CI: 75-82) than human readers.
The AI system effectively assessed findings in DPRs, achieving performance on
par with or better than human experts while significantly reducing
interpretation time. These results highlight the potential for integrating AI
into clinical workflows to improve diagnostic efficiency and accuracy, and
patient management.
|
2502.10280
|
Probabilistic Super-Resolution for High-Fidelity Physical System
Simulations with Uncertainty Quantification
|
cs.LG stat.ML
|
Super-resolution (SR) is a promising tool for generating high-fidelity
simulations of physical systems from low-resolution data, enabling fast and
accurate predictions in engineering applications. However, existing
deep-learning-based SR methods require large labeled datasets and lack
reliable uncertainty quantification (UQ), limiting their applicability in
real-world scenarios. To overcome these challenges, we propose a probabilistic
SR framework that leverages the Statistical Finite Element Method and
energy-based generative modeling. Our method enables efficient high-resolution
predictions with inherent UQ, while eliminating the need for extensive labeled
datasets. The method is validated on a 2D Poisson example and compared with
bicubic interpolation upscaling. Results demonstrate a computational speed-up
over high-resolution numerical solvers while providing reliable uncertainty
estimates.
|
2502.10283
|
Anomaly Detection with LWE Encrypted Control
|
cs.CR cs.SY eess.SY
|
Detecting attacks using encrypted signals is challenging since encryption
hides its information content. We present a novel mechanism for anomaly
detection over Learning with Errors (LWE) encrypted signals without
decryption, secure channels, or complex communication schemes. Instead, the
detector exploits the homomorphic property of LWE encryption to perform
hypothesis tests on transformations of the encrypted samples. The specific
transformations are determined by solutions to a hard lattice-based
minimization problem. While the test's sensitivity deteriorates with suboptimal
solutions, similar to the exponential deterioration of the (related) test that
breaks the cryptosystem, we show that the deterioration is polynomial for our
test. This rate gap can be exploited to pick parameters that lead to somewhat
weaker encryption but large gains in detection capability. Finally, we conclude
the paper by presenting a numerical example that simulates anomaly detection,
demonstrating the effectiveness of our method in identifying attacks.
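The homomorphic property the detector exploits can be sketched with a toy symmetric LWE scheme. All parameters below (modulus q, dimension n, scaling factor Delta, noise range) are illustrative assumptions, not the paper's; the point is only that sums of ciphertexts decrypt to sums of messages.

```python
import numpy as np

rng = np.random.default_rng(0)
q, n = 2**15, 8            # toy modulus and dimension (illustrative only)
Delta = q // 16            # scaling for messages in [0, 16)
s = rng.integers(0, q, n)  # secret key

def encrypt(m):
    a = rng.integers(0, q, n)
    e = rng.integers(-2, 3)               # small noise term
    b = (a @ s + Delta * m + e) % q
    return a, b

def decrypt(a, b):
    return round(((b - a @ s) % q) / Delta) % 16

a1, b1 = encrypt(3)
a2, b2 = encrypt(5)
# additive homomorphism: the sum of ciphertexts decrypts to the message sum
a_sum, b_sum = (a1 + a2) % q, (b1 + b2) % q
print(decrypt(a_sum, b_sum))  # 8 (= 3 + 5)
```

Because linear transformations of ciphertexts correspond to linear transformations of plaintexts (plus accumulated noise), hypothesis tests can be run on transformed ciphertexts without ever decrypting.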
|
2502.10284
|
A Hybrid Cross-Stage Coordination Pre-ranking Model for Online
Recommendation Systems
|
cs.IR cs.AI
|
Large-scale recommendation systems often adopt a cascading architecture
consisting of retrieval, pre-ranking, ranking, and re-ranking stages. With
strict latency requirements, pre-ranking utilizes lightweight models to perform
a preliminary selection from massive retrieved candidates. However, recent
works focus solely on improving consistency with ranking, relying exclusively
on downstream stages. Since downstream input is derived from the pre-ranking
output, they will exacerbate the sample selection bias (SSB) issue and Matthew
effect, leading to sub-optimal results. To address the limitation, we propose a
novel Hybrid Cross-Stage Coordination Pre-ranking model (HCCP) to integrate
information from upstream (retrieval) and downstream (ranking, re-ranking)
stages. Specifically, cross-stage coordination refers to the pre-ranking's
adaptability to the entire stream and the role of serving as a more effective
bridge between upstream and downstream. HCCP consists of Hybrid Sample
Construction and Hybrid Objective Optimization. Hybrid sample construction
captures multi-level unexposed data from the entire stream and rearranges them
to become the optimal guiding "ground truth" for pre-ranking learning. Hybrid
objective optimization contains the joint optimization of consistency and
long-tail precision through our proposed Margin InfoNCE loss. It is
specifically designed to learn from such hybrid unexposed samples, improving
the overall performance and mitigating the SSB issue. The appendix provides a
proof of the efficacy of the proposed loss in selecting potential positives.
Extensive offline and online experiments indicate that HCCP outperforms SOTA
methods by improving cross-stage coordination. It contributes up to 14.9% UCVR
and 1.3% UCTR in the JD E-commerce recommendation system. Because the code is
private, we provide pseudocode for reference.
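As a rough illustration of a margin-augmented InfoNCE objective: the paper's exact Margin InfoNCE formulation is not reproduced here, so the additive margin on the positive logit and the temperature below are assumptions in the common style of such losses.

```python
import numpy as np

def margin_infonce(sim_pos, sim_neg, margin=0.2, tau=0.1):
    """InfoNCE with an additive margin subtracted from the positive logit.

    sim_pos: (B,) positive similarities; sim_neg: (B, K) negative similarities.
    Illustrative form only; the paper's Margin InfoNCE may differ.
    """
    pos = (sim_pos - margin) / tau            # margin makes positives "harder"
    neg = sim_neg / tau
    logits = np.concatenate([pos[:, None], neg], axis=1)
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits[:, 0] - np.log(np.exp(logits).sum(axis=1))
    return -log_prob.mean()

rng = np.random.default_rng(0)
loss = margin_infonce(rng.random(8), rng.random((8, 16)))
print(loss > 0)  # True: a cross-entropy-style loss is strictly positive
```

Increasing the margin strictly increases the loss for fixed similarities, which is what pushes the model to separate long-tail positives from negatives by a gap rather than just ranking them above.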
|
2502.10288
|
Adversarial Mixup Unlearning
|
cs.LG
|
Machine unlearning is a critical area of research aimed at safeguarding data
privacy by enabling the removal of sensitive information from machine learning
models. One unique challenge in this field is catastrophic unlearning, where
erasing specific data from a well-trained model unintentionally removes
essential knowledge, causing the model to deviate significantly from a
retrained one. To address this, we introduce a novel approach that regularizes
the unlearning process by utilizing synthesized mixup samples, which simulate
the data susceptible to catastrophic effects. At the core of our approach is a
generator-unlearner framework, MixUnlearn, where a generator adversarially
produces challenging mixup examples, and the unlearner effectively forgets
target information based on these synthesized data. Specifically, we first
introduce a novel contrastive objective to train the generator in an
adversarial direction: generating examples that prompt the unlearner to reveal
information that should be forgotten, while losing essential knowledge. Then
the unlearner, guided by two other contrastive loss terms, processes the
synthesized and real data jointly to ensure accurate unlearning without losing
critical knowledge, overcoming catastrophic effects. Extensive evaluations
across benchmark datasets demonstrate that our method significantly outperforms
state-of-the-art approaches, offering a robust solution to machine unlearning.
This work not only deepens understanding of unlearning mechanisms but also lays
the foundation for effective machine unlearning with mixup augmentation.
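The mixup synthesis at the core of the framework can be sketched as convex combinations of forget and retain samples. This is the generic Beta-weighted mixup; in MixUnlearn the mixing is produced adversarially by a learned generator, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x_forget, x_retain, alpha=0.4):
    """Synthesize mixup samples as convex combinations of forget/retain data.

    Generic sketch with a Beta(alpha, alpha) mixing weight; the paper's
    generator learns the mixing adversarially instead.
    """
    lam = rng.beta(alpha, alpha, size=(len(x_forget), 1))
    return lam * x_forget + (1 - lam) * x_retain, lam

x_f = rng.normal(size=(4, 3))   # samples targeted for unlearning
x_r = rng.normal(size=(4, 3))   # samples whose knowledge must be kept
x_mix, lam = mixup(x_f, x_r)
print(x_mix.shape)  # (4, 3)
```

Such interpolated samples sit between forgotten and retained data, which is precisely where catastrophic unlearning tends to erase knowledge, so they make useful regularization targets.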
|
2502.10292
|
Small Loss Bounds for Online Learning Separated Function Classes: A
Gaussian Process Perspective
|
cs.LG stat.ML
|
In order to develop practical and efficient algorithms while circumventing
overly pessimistic computational lower bounds, recent work has been interested
in developing oracle-efficient algorithms in a variety of learning settings.
Two such settings of particular interest are online and differentially private
learning. While seemingly different, these two fields are fundamentally
connected by the requirement that successful algorithms in each case satisfy
stability guarantees; in particular, recent work has demonstrated that
algorithms for online learning whose performance adapts to beneficial problem
instances, attaining the so-called small-loss bounds, require a form of
stability similar to that of differential privacy. In this work, we identify
the crucial role that separation plays in allowing oracle-efficient algorithms
to achieve this strong stability. Our notion, which we term $\rho$-separation,
generalizes and unifies several previous approaches to enforcing this strong
stability, including the existence of small-separator sets and the recent
notion of $\gamma$-approximability. We present an oracle-efficient algorithm
that is capable of achieving small-loss bounds with improved rates in greater
generality than previous work, as well as a variant for differentially private
learning that attains optimal rates, again under our separation condition. In
so doing, we prove a new stability result for minimizers of a Gaussian process
that strengthens and generalizes previous work.
|
2502.10294
|
QMaxViT-Unet+: A Query-Based MaxViT-Unet with Edge Enhancement for
Scribble-Supervised Segmentation of Medical Images
|
cs.CV
|
The deployment of advanced deep learning models for medical image
segmentation is often constrained by the requirement for extensively annotated
datasets. Weakly-supervised learning, which allows less precise labels, has
become a promising solution to this challenge. Building on this approach, we
propose QMaxViT-Unet+, a novel framework for scribble-supervised medical image
segmentation. This framework is built on the U-Net architecture, with the
encoder and decoder replaced by Multi-Axis Vision Transformer (MaxViT) blocks.
These blocks enhance the model's ability to learn local and global features
efficiently. Additionally, our approach integrates a query-based Transformer
decoder to refine features and an edge enhancement module to compensate for the
limited boundary information in the scribble label. We evaluate the proposed
QMaxViT-Unet+ on four public datasets focused on cardiac structures, colorectal
polyps, and breast cancer: ACDC, MS-CMRSeg, SUN-SEG, and BUSI. Evaluation
metrics include the Dice similarity coefficient (DSC) and the 95th percentile
of Hausdorff distance (HD95). Experimental results show that QMaxViT-Unet+
achieves 89.1\% DSC and 1.316 mm HD95 on ACDC, 88.4\% DSC and 2.226 mm HD95 on
MS-CMRSeg, 71.4\% DSC and 4.996 mm HD95 on SUN-SEG, and 69.4\% DSC and 50.122 mm
HD95 on BUSI. These results demonstrate that our method outperforms existing
approaches in terms of accuracy, robustness, and efficiency while remaining
competitive with fully-supervised learning approaches. This makes it ideal for
medical image analysis, where high-quality annotations are often scarce and
require significant effort and expense. The code is available at:
https://github.com/anpc849/QMaxViT-Unet
|
2502.10295
|
Fenchel-Young Variational Learning
|
cs.LG
|
From a variational perspective, many statistical learning criteria involve
seeking a distribution that balances empirical risk and regularization. In this
paper, we broaden this perspective by introducing a new general class of
variational methods based on Fenchel-Young (FY) losses, treated as divergences
that generalize (and encompass) the familiar Kullback-Leibler divergence at the
core of classical variational learning. Our proposed formulation -- FY
variational learning -- includes as key ingredients new notions of FY free
energy, FY evidence, FY evidence lower bound, and FY posterior. We derive
alternating minimization and gradient backpropagation algorithms to compute (or
lower bound) the FY evidence, which enables learning a wider class of models
than previous variational formulations. This leads to generalized FY variants
of classical algorithms, such as an FY expectation-maximization (FYEM)
algorithm, and latent-variable models, such as an FY variational autoencoder
(FYVAE). Our new methods are shown to be empirically competitive, often
outperforming their classical counterparts, and most importantly, to have
qualitatively novel features. For example, FYEM has an adaptively sparse
E-step, while the FYVAE can support models with sparse observations and sparse
posteriors.
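For reference, the FY loss generated by a regularizer $\Omega$, which plays the role of the divergence above, has the standard Fenchel-conjugate form (notation assumed here, not taken from this abstract):

```latex
L_{\Omega}(\theta; y) \;=\; \Omega^{*}(\theta) + \Omega(y) - \langle \theta, y \rangle \;\ge\; 0,
```

with equality iff $y \in \partial \Omega^{*}(\theta)$; choosing $\Omega$ as the negative Shannon entropy restricted to the simplex recovers the familiar KL/cross-entropy case that classical variational learning is built on.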
|
2502.10297
|
DeltaProduct: Increasing the Expressivity of DeltaNet Through Products
of Householders
|
cs.LG cs.CL cs.FL
|
Linear Recurrent Neural Networks (linear RNNs) have emerged as competitive
alternatives to Transformers for sequence modeling, offering efficient training
and linear-time inference. However, existing architectures face a fundamental
trade-off between expressivity and efficiency, dictated by the structure of
their state-transition matrices. While diagonal matrices used in architectures
like Mamba, GLA, or mLSTM yield fast runtime, they suffer from severely limited
expressivity. To address this, recent architectures such as (Gated) DeltaNet
and RWKVv7 adopted a diagonal plus rank-1 structure, allowing simultaneous
token-channel mixing, which overcomes some expressivity limitations with only a
slight decrease in training efficiency. Building on the interpretation of
DeltaNet's recurrence as performing one step of online gradient descent per
token on an associative recall loss, we introduce DeltaProduct, which instead
takes multiple ($n_h$) steps per token. This naturally leads to diagonal plus
rank-$n_h$ state-transition matrices, formed as products of $n_h$ generalized
Householder transformations, providing a tunable mechanism to balance
expressivity and efficiency and a stable recurrence. Through extensive
experiments, we demonstrate that DeltaProduct achieves superior state-tracking
and language modeling capabilities while exhibiting significantly improved
length extrapolation compared to DeltaNet. Additionally, we strengthen the
theoretical foundation of DeltaNet's expressivity by proving that it can solve
dihedral group word problems in just two layers.
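The rank claim can be checked numerically: a product of $n_h$ generalized Householder transformations $I - \beta k k^\top$ differs from the identity by a matrix of rank at most $n_h$. This sketch omits the gated/diagonal components of the full architecture; sizes and the $\beta$ range are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_h = 6, 2   # head dimension and number of Householder steps (toy sizes)

def generalized_householder(k, beta):
    """I - beta * k k^T for unit k; beta in [0, 2] keeps the spectral norm <= 1."""
    return np.eye(len(k)) - beta * np.outer(k, k)

# product of n_h generalized Householder transformations
A = np.eye(d)
for _ in range(n_h):
    k = rng.normal(size=d)
    k /= np.linalg.norm(k)
    A = generalized_householder(k, beta=rng.uniform(0, 2)) @ A

# A - I has rank at most n_h: A is identity plus a rank-n_h update
print(np.linalg.matrix_rank(A - np.eye(d)))  # at most 2 by construction
```

Each extra factor adds one rank to the state-transition update, which is the tunable expressivity/efficiency knob the abstract describes.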
|
2502.10303
|
Reinforcement Learning in Strategy-Based and Atari Games: A Review of
Google DeepMind's Innovations
|
cs.AI cs.GT
|
Reinforcement Learning (RL) has been widely used in many applications,
particularly in gaming, which serves as an excellent training ground for AI
models. Google DeepMind has pioneered innovations in this field, employing
reinforcement learning algorithms, including model-based, model-free, and deep
Q-network approaches, to create advanced AI models such as AlphaGo, AlphaGo
Zero, and MuZero. AlphaGo, the initial model, integrates supervised learning
and reinforcement learning to master the game of Go, surpassing professional
human players. AlphaGo Zero refines this approach by eliminating reliance on
human gameplay data, instead utilizing self-play for enhanced learning
efficiency. MuZero further extends these advancements by learning the
underlying dynamics of game environments without explicit knowledge of the
rules, achieving adaptability across various games, including complex Atari
games. This paper reviews the significance of reinforcement learning
applications in Atari and strategy-based games, analyzing these three models,
their key innovations, training processes, challenges encountered, and
improvements made. Additionally, we discuss advancements in the field of
gaming, including MiniZero and multi-agent models, highlighting future
directions and emerging AI models from Google DeepMind.
|
2502.10307
|
SPIRIT: Short-term Prediction of solar IRradIance for zero-shot Transfer
learning using Foundation Models
|
cs.LG cs.CV
|
Traditional solar forecasting models are based on several years of
site-specific historical irradiance data, often spanning five or more years,
which are unavailable for newer photovoltaic farms. As renewable energy is
highly intermittent, building accurate solar irradiance forecasting systems is
essential for efficient grid management and enabling the ongoing proliferation
of solar energy, which is crucial to achieve the United Nations' net zero
goals. In this work, we propose SPIRIT, a novel approach leveraging foundation
models for solar irradiance forecasting, making it applicable to newer solar
installations. Our approach outperforms state-of-the-art models in zero-shot
transfer learning by about 70%, enabling effective performance at new locations
without relying on any historical data. Further improvements in performance are
achieved through fine-tuning, as more location-specific data becomes available.
These findings are supported by statistical significance, further validating
our approach. SPIRIT represents a pivotal step towards rapid, scalable, and
adaptable solar forecasting solutions, advancing the integration of renewable
energy into global power systems.
|
2502.10308
|
LLM-Powered Preference Elicitation in Combinatorial Assignment
|
cs.AI cs.GT cs.LG
|
We study the potential of large language models (LLMs) as proxies for humans
to simplify preference elicitation (PE) in combinatorial assignment. While
traditional PE methods rely on iterative queries to capture preferences, LLMs
offer a one-shot alternative with reduced human effort. We propose a framework
for LLM proxies that can work in tandem with SOTA ML-powered preference
elicitation schemes. Our framework handles the novel challenges introduced by
LLMs, such as response variability and increased computational costs. We
experimentally evaluate the efficiency of LLM proxies against human queries in
the well-studied course allocation domain, and we investigate the model
capabilities required for success. We find that our approach improves
allocative efficiency by up to 20%, and these results are robust across
different LLMs and to differences in quality and accuracy of reporting.
|
2502.10310
|
Object Detection and Tracking
|
cs.CV cs.CY
|
Efficient and accurate object detection is an important topic in the
development of computer vision systems. With the advent of deep learning
techniques, the accuracy of object detection has increased significantly. The
project integrates a modern object detection technique with the goal of
achieving high accuracy with real-time performance. A significant obstacle in
many object identification systems is their reliance on other computer vision
algorithms, which results in poor and ineffective performance. In this
research, we solve the end-to-end object detection problem entirely using deep
learning techniques. The network is trained on a highly challenging publicly
available dataset used for an annual object detection challenge. Applications
that need object detection can benefit from the system's fast and precise
detection.
|
2502.10311
|
ExplainReduce: Summarising local explanations via proxies
|
cs.LG cs.AI cs.HC
|
Most commonly used non-linear machine learning methods are closed-box models,
uninterpretable to humans. The field of explainable artificial intelligence
(XAI) aims to develop tools to examine the inner workings of these closed
boxes. An often-used model-agnostic approach to XAI involves using simple
models as local approximations to produce so-called local explanations;
examples of this approach include LIME, SHAP, and SLISEMAP. This paper shows
how a large set of local explanations can be reduced to a small "proxy set" of
simple models, which can act as a generative global explanation. This reduction
procedure, ExplainReduce, can be formulated as an optimisation problem and
approximated efficiently using greedy heuristics.
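One such greedy heuristic can be sketched as max-coverage selection: pick, one at a time, the local model that explains the most not-yet-covered instances. The loss matrix, coverage threshold `eps`, and proxy budget `k` below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def greedy_proxy_set(losses, k, eps=0.1):
    """Greedy max-coverage selection of a proxy set of local models.

    losses[i, j]: loss of local model i applied to instance j. An instance
    counts as explained by a model when that loss falls below eps.
    Generic greedy sketch in the spirit of ExplainReduce.
    """
    covered = np.zeros(losses.shape[1], dtype=bool)
    chosen = []
    for _ in range(k):
        gains = ((losses <= eps) & ~covered).sum(axis=1)  # new instances covered
        best = int(np.argmax(gains))
        chosen.append(best)
        covered |= losses[best] <= eps
    return chosen, covered.mean()

rng = np.random.default_rng(0)
losses = rng.random((20, 100))  # 20 candidate local models, 100 instances
proxies, coverage = greedy_proxy_set(losses, k=5)
print(len(proxies), round(coverage, 2))
```

Greedy max-coverage carries the usual (1 - 1/e) approximation guarantee for submodular coverage objectives, which is why a small proxy set can stand in for the full explanation set.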
|
2502.10324
|
Analysis and Prediction of Coverage and Channel Rank for UAV Networks in
Rural Scenarios with Foliage
|
eess.SY cs.SY
|
Unmanned aerial vehicles (UAVs) are expected to play a key role in 6G-enabled
vehicular-to-everything (V2X) communications requiring high data rates, low
latency, and reliable connectivity for mission-critical applications.
Multi-input multi-output (MIMO) technology is essential for meeting these
demands. However, UAV link performance is significantly affected by
environmental factors such as signal attenuation, multipath propagation, and
blockage from obstacles, particularly dense foliage in rural areas. In this
paper, we investigate RF coverage and channel rank over UAV channels in
foliage-dominated rural environments using ray tracing (RT) simulations. We
conduct RT-based channel rank and RF coverage analysis over Lake Wheeler Field
Labs at NC State University to examine the impact on UAV links. Custom-modeled
trees are integrated into the RT simulations using NVIDIA Sionna, Blender, and
Open Street Map (OSM) database to capture realistic blockage effects. Results
indicate that tree-induced blockage impacts RF coverage and channel rank at
lower UAV altitudes. We also propose a Kriging interpolation-based 3D channel
rank interpolation scheme, leveraging the observed spatial correlation of
channel rank in the given environments. The accuracy of the proposed scheme is
evaluated using the mean absolute error (MAE) metric and compared against
baseline interpolation methods. Finally, we compare the RT-based received
signal strength (RSS) and channel rank results with real-world measurements
from the NSF AERPAW testbed, demonstrating reasonable consistency between
simulation results and the measurements.
|
2502.10325
|
Process Reward Models for LLM Agents: Practical Framework and Directions
|
cs.LG cs.AI
|
We introduce Agent Process Reward Models (AgentPRM), a simple and scalable
framework for training LLM agents to continually improve through interactions.
AgentPRM follows a lightweight actor-critic paradigm, using Monte Carlo
rollouts to compute reward targets and optimize policies. It requires minimal
modifications to existing RLHF pipelines, making it easy to integrate at scale.
Beyond AgentPRM, we propose InversePRM, which learns process rewards directly
from demonstrations without explicit outcome supervision. We also explore key
challenges and opportunities, including exploration, process reward shaping,
and model-predictive reasoning. We evaluate on the ALFWorld benchmark, show that
small 3B models trained with AgentPRM and InversePRM outperform strong GPT-4o
baselines, and analyze test-time scaling, reward hacking, and more. Our code is
available at: https://github.com/sanjibanc/agent_prm.
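The Monte Carlo reward targets in such an actor-critic loop can be sketched as discounted returns-to-go over a rollout: each step's target is the reward from that point onward. This is a generic sketch; AgentPRM's exact target construction may differ, and the rewards and discount below are illustrative.

```python
import numpy as np

def mc_value_targets(rewards, gamma=0.99):
    """Monte Carlo return-to-go targets for each step of one rollout.

    The discounted return from each state serves as the process-reward-model
    (critic) regression target. Generic sketch, not AgentPRM's exact recipe.
    """
    targets = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        targets[t] = running
    return targets

# sparse success reward at the final step of a 4-step episode
print(mc_value_targets([0.0, 0.0, 0.0, 1.0], gamma=0.5))
# returns 0.125, 0.25, 0.5, 1.0
```

With sparse terminal rewards, the discounted return spreads credit backward through the trajectory, giving the process reward model a dense per-step training signal.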
|
2502.10328
|
Generalised Parallel Tempering: Flexible Replica Exchange via Flows and
Diffusions
|
stat.ML cs.LG
|
Parallel Tempering (PT) is a classical MCMC algorithm designed for leveraging
parallel computation to sample efficiently from high-dimensional, multimodal or
otherwise complex distributions via annealing. One limitation of the standard
formulation of PT is the growth of computational resources required to generate
high-quality samples, as measured by effective sample size or round trip rate,
for increasingly challenging distributions. To address this issue, we propose
Generalised Parallel Tempering (GePT), a framework that allows for the
incorporation of recent advances in modern generative modelling, such as
normalising flows and diffusion models, within Parallel Tempering, while
maintaining the same theoretical guarantees as MCMC-based methods. For
instance, we show that this allows us to utilise diffusion models in a
parallelised manner, bypassing the usual computational cost of a large number
of steps to generate quality samples. Further, we empirically demonstrate that
GePT can improve sample quality and reduce the growth of computational
resources required to handle complex distributions over the classical
algorithm.
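The classical PT baseline that GePT generalises can be sketched in a few lines: run Metropolis chains at several inverse temperatures and propose replica swaps between adjacent temperatures. The bimodal target, step size, and temperature ladder below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
betas = np.array([1.0, 0.5, 0.2])                  # inverse temperatures
log_p = lambda x: np.logaddexp(-(x - 3)**2, -(x + 3)**2)  # bimodal target

x = np.zeros(len(betas))
samples = []
for step in range(5000):
    # within-chain random-walk Metropolis at each temperature
    prop = x + rng.normal(scale=1.0, size=len(betas))
    accept = np.log(rng.random(len(betas))) < betas * (log_p(prop) - log_p(x))
    x = np.where(accept, prop, x)
    # classical replica-exchange swap between adjacent temperatures
    i = rng.integers(len(betas) - 1)
    log_a = (betas[i] - betas[i + 1]) * (log_p(x[i + 1]) - log_p(x[i]))
    if np.log(rng.random()) < log_a:
        x[i], x[i + 1] = x[i + 1], x[i]
    samples.append(x[0])                           # record the cold chain

samples = np.array(samples[1000:])
print((samples > 0).mean())  # fraction in the right mode; near 0.5 when mixing
```

The hot chain crosses between the modes at $\pm 3$ easily, and swaps propagate those crossings down to the cold chain; GePT replaces these simple swap moves with flow- or diffusion-based transports while keeping the same acceptance-based correctness guarantee.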
|
2502.10330
|
DiOpt: Self-supervised Diffusion for Constrained Optimization
|
cs.LG
|
Recent advances in diffusion models show promising potential for
learning-based optimization by leveraging their multimodal sampling capability
to escape local optima. However, existing diffusion-based optimization
approaches, often reliant on supervised training, lack a mechanism to ensure
strict constraint satisfaction, which is often required in real-world
applications. One resulting observation is distributional misalignment,
i.e., the generated solution distribution often exhibits little overlap with the
feasible domain. In this paper, we propose DiOpt, a novel diffusion paradigm
that systematically learns near-optimal feasible solution distributions through
iterative self-training. Our framework introduces several key innovations: a
target distribution specifically designed to maximize overlap with the
constrained solution manifold; a bootstrapped self-training mechanism that
adaptively weights candidate solutions based on the severity of constraint
violations and optimality gaps; and a dynamic memory buffer that accelerates
convergence by retaining high-quality solutions over training iterations. To
our knowledge, DiOpt represents the first successful integration of
self-supervised diffusion with hard constraint satisfaction. Evaluations on
diverse tasks, including power grid control, motion retargeting, and wireless
allocation, demonstrate its superiority in terms of both optimality and
constraint satisfaction.
|
2502.10331
|
InfoPos: A ML-Assisted Solution Design Support Framework for Industrial
Cyber-Physical Systems
|
cs.LG
|
The variety of building blocks and algorithms incorporated in data-centric
and ML-assisted solutions is high, contributing to two challenges: selecting
the most effective set and order of building blocks, and achieving such a
selection at minimum cost. Considering that ML-assisted solution design is
influenced by the extent of available data, as well as available knowledge of
the target system, it is advantageous to be able to select matching building
blocks. We introduce the first iteration of our InfoPos framework, allowing the
placement of use-cases considering the available positions (levels), i.e., from
poor to rich, of knowledge and data dimensions. With that input, designers and
developers can reveal the most effective corresponding choice(s), streamlining
the solution design process. The results from our demonstrator, an anomaly
identification use-case for industrial Cyber-Physical Systems, reflect the
effects of using different building blocks across knowledge and data
positions. The achieved ML model performance serves as the
indicator. Our data processing code and the composed data sets are publicly
available.
|
2502.10334
|
Ocular Disease Classification Using CNN with Deep Convolutional
Generative Adversarial Network
|
cs.CV
|
The Convolutional Neural Network (CNN) has shown impressive performance in
image classification because of its strong learning capabilities. However, it
demands a substantial and balanced dataset for effective training. Otherwise,
networks frequently exhibit overfitting and struggle to generalize to new
examples. Publicly available datasets of fundus images of ocular disease are
insufficient to train a classification model to satisfactory accuracy. We
therefore propose a Generative Adversarial Network (GAN)-based data generation
technique to synthesize a dataset for training a CNN-based classification
model, and we test the model on the original ocular images containing disease.
On these original ocular images, the model achieves an accuracy of 78.6% for myopia,
88.6% for glaucoma, and 84.6% for cataract, with an overall classification
accuracy of 84.6%.
|
2502.10335
|
Studying number theory with deep learning: a case study with the
M\"obius and squarefree indicator functions
|
math.NT cs.LG
|
Building on work of Charton, we train small transformer models to calculate
the M\"obius function $\mu(n)$ and the squarefree indicator function
$\mu^2(n)$. The models attain nontrivial predictive power. We then iteratively
train additional models to understand how the model functions, ultimately
finding a theoretical explanation.
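The two target functions are cheap to compute exactly, which is how training labels for such models can be generated. A minimal sketch via trial-division factorization (the transformer training itself is not shown):

```python
def mobius(n):
    """Compute the Möbius function mu(n) by trial-division factorization.

    mu(n) = 0 if n has a squared prime factor, else (-1)^(number of prime
    factors). The squarefree indicator is mu(n)**2.
    """
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:      # squared prime factor => mu(n) = 0
                return 0
            result = -result
        p += 1
    if n > 1:                    # one remaining prime factor
        result = -result
    return result

# training pairs (n, mu(n)); squarefree labels are mu(n) ** 2
pairs = [(n, mobius(n)) for n in range(1, 13)]
print(pairs)
# mu(1..12) = 1, -1, -1, 0, -1, 1, -1, 0, 0, 1, -1, 0
```

Since exact labels are available for every $n$, model errors here measure genuine predictive structure rather than label noise.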
|
2502.10338
|
Evaluating the Meta- and Object-Level Reasoning of Large Language Models
for Question Answering
|
cs.CL cs.AI
|
Large Language Models (LLMs) excel in natural language tasks but still face
challenges in Question Answering (QA) tasks requiring complex, multi-step
reasoning. We outline the types of reasoning required in some of these tasks,
and reframe them in terms of meta-level reasoning (akin to high-level strategic
reasoning or planning) and object-level reasoning (embodied in lower-level
tasks such as mathematical reasoning). Franklin, a novel dataset with
requirements of meta- and object-level reasoning, is introduced and used along
with three other datasets to evaluate four LLMs at question answering tasks
requiring multiple steps of reasoning. Results from human annotation studies
suggest LLMs demonstrate meta-level reasoning with high frequency, but struggle
with object-level reasoning tasks in some of the datasets used. Additionally,
evidence suggests that LLMs find the object-level reasoning required for the
questions in the Franklin dataset challenging, yet they do exhibit strong
performance with respect to the meta-level reasoning requirements.
|
2502.10339
|
STAR: Spectral Truncation and Rescale for Model Merging
|
cs.CL cs.AI cs.LG
|
Model merging is an efficient way of obtaining a multi-task model from
several pretrained models without further fine-tuning, and it has gained
attention in various domains, including natural language processing (NLP).
Despite the efficiency, a key challenge in model merging is the seemingly
inevitable decrease in task performance as the number of models increases. In
this paper, we propose $\mathbf{S}$pectral $\mathbf{T}$runcation $\mathbf{A}$nd
$\mathbf{R}$escale (STAR) that aims at mitigating ``merging conflicts'' by
truncating small components in the respective spectral spaces, which is
followed by an automatic parameter rescaling scheme to retain the nuclear norm
of the original matrix. STAR requires no additional inference on original
training data and is robust to hyperparameter choice. We demonstrate the
effectiveness of STAR through extensive model merging cases on diverse NLP
tasks. Specifically, STAR works robustly across varying model sizes, and can
outperform baselines by 4.2$\%$ when merging 12 models on Flan-T5. Our code is
publicly available at https://github.com/IBM/STAR.
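The core operation can be sketched with NumPy: truncate small singular values of a matrix, then rescale the survivors so the nuclear norm (the sum of singular values) matches the original. The truncation rule shown here, keeping a fixed fraction of components, is an illustrative stand-in for the paper's criterion:

```python
import numpy as np

def star_transform(W: np.ndarray, keep_ratio: float = 0.5) -> np.ndarray:
    """Spectral truncation and rescale, sketched: keep the largest
    singular components, then rescale them to preserve the nuclear norm."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    k = max(1, int(len(s) * keep_ratio))
    s_trunc = np.zeros_like(s)
    s_trunc[:k] = s[:k]                  # drop small spectral components
    scale = s.sum() / s_trunc.sum()      # restore the nuclear norm
    return (U * (s_trunc * scale)) @ Vt

W = np.random.randn(8, 8)
W_star = star_transform(W, keep_ratio=0.25)
print(np.isclose(np.linalg.svd(W_star, compute_uv=False).sum(),
                 np.linalg.svd(W, compute_uv=False).sum()))  # True
```

Because the surviving singular values are multiplied by a common factor, the rescaling preserves the nuclear norm exactly while reducing the rank of the matrix.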
|
2502.10341
|
Organize the Web: Constructing Domains Enhances Pre-Training Data
Curation
|
cs.CL
|
Modern language models are trained on large, unstructured datasets consisting
of trillions of tokens and obtained by crawling the web. The unstructured
nature makes it difficult to reason about their contents and develop systematic
approaches to data curation. In this paper, we unpack monolithic web corpora by
developing taxonomies of their contents and organizing them into domains. We
introduce WebOrganizer, a framework for organizing web pages in terms of both
their topic and format. Using these two complementary notions of domains, we
automatically annotate pre-training data by distilling annotations from a large
language model into efficient classifiers. This allows us to study how data
from different domains should be mixed to improve models on downstream tasks,
and we show that we can combine insights about effective topics and formats to
further boost performance. We demonstrate that our domain mixing also improves
existing methods that select data based on quality. Furthermore, we study and
compare how quality-based methods will implicitly change the domain mixture.
Overall, our work demonstrates that constructing and mixing domains provides a
valuable complement to quality-based data curation methods, opening new avenues
for effective and insightful pre-training data curation.
|
2502.10352
|
Agentic Verification for Ambiguous Query Disambiguation
|
cs.CL
|
In this work, we tackle the challenge of disambiguating queries in
retrieval-augmented generation (RAG) to diverse yet answerable interpretations.
State-of-the-art methods follow a Diversify-then-Verify (DtV) pipeline, where
diverse interpretations are generated by an LLM and later used as search queries
to retrieve supporting passages. Such a process may introduce noise in either
the interpretations or the retrieval, particularly in enterprise settings, where
LLMs -- trained on static data -- may struggle with domain-specific
disambiguations. Thus, a post-hoc verification phase is introduced to prune this noise. Our
distinction is to unify diversification with verification by incorporating
feedback from retriever and generator early on. This joint approach improves
both efficiency and robustness by reducing reliance on multiple retrieval and
inference steps, which are susceptible to cascading errors. We validate the
efficiency and effectiveness of our method, Verified-Diversification with
Consolidation (VERDICT), on the widely adopted ASQA benchmark to achieve
diverse yet verifiable interpretations. Empirical results show that VERDICT
improves grounding-aware F1 score by an average of 23% over the strongest
baseline across different backbone LLMs.
|
2502.10353
|
Assortment Optimization for Patient-Provider Matching
|
cs.CY cs.LG math.OC
|
Rising provider turnover forces healthcare administrators to frequently
rematch patients to available providers, which can be cumbersome and
labor-intensive. To reduce the burden of rematching, we study algorithms for
matching patients and providers through assortment optimization. We develop a
patient-provider matching model in which we simultaneously offer each patient a
menu of providers, and patients subsequently respond and select providers. By
offering assortments upfront, administrators can balance logistical ease and
patient autonomy. We study policies for assortment optimization and
characterize their performance under different problem settings. We demonstrate
that the selection of assortment policy is highly dependent on problem
specifics and, in particular, on a patient's willingness to match and the ratio
between patients and providers. On real-world data, we show that our best
policy can improve match quality by 13% over a greedy solution by tailoring
assortment sizes based on patient characteristics. We conclude with
recommendations for running a real-world patient-provider matching system
inspired by our results.
|
2502.10354
|
Dimension-free Score Matching and Time Bootstrapping for Diffusion
Models
|
cs.LG math.ST stat.ML stat.TH
|
Diffusion models generate samples by estimating the score function of the
target distribution at various noise levels. The model is trained using samples
drawn from the target distribution, progressively adding noise. In this work,
we establish the first (nearly) dimension-free sample complexity bounds for
learning these score functions, achieving a double exponential improvement in
dimension over prior results. A key aspect of our analysis is the use of a
single function approximator to jointly estimate scores across noise levels, a
critical feature of diffusion models in practice which enables generalization
across timesteps. Our analysis introduces a novel martingale-based error
decomposition and sharp variance bounds, enabling efficient learning from
dependent data generated by Markov processes, which may be of independent
interest. Building on these insights, we propose Bootstrapped Score Matching
(BSM), a variance reduction technique that utilizes previously learned scores
to improve accuracy at higher noise levels. These results provide crucial
insights into the efficiency and effectiveness of diffusion models for
generative modeling.
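The score estimation at the heart of this setup can be illustrated with a minimal denoising score matching loss at one noise level; the single function `score_fn(x, sigma)` plays the role of the shared approximator queried across noise levels. This is a generic sketch, not the paper's estimator or the BSM procedure:

```python
import numpy as np

def dsm_loss(score_fn, x0, sigma, rng):
    """Denoising score matching at noise level sigma.
    For x_t = x0 + sigma * eps with eps ~ N(0, I), the regression
    target for the score of the perturbed sample is -eps / sigma."""
    eps = rng.standard_normal(x0.shape)
    xt = x0 + sigma * eps
    target = -eps / sigma
    pred = score_fn(xt, sigma)   # one approximator, conditioned on sigma
    return float(np.mean((pred - target) ** 2))

rng = np.random.default_rng(0)
x0 = rng.standard_normal((128, 2))
# an oracle that recovers the exact conditional score drives the loss to zero
oracle = lambda xt, s: -(xt - x0) / s**2
print(dsm_loss(oracle, x0, 0.5, rng) < 1e-12)  # True
```

In practice `score_fn` is a neural network and the loss is averaged over both samples and randomly drawn noise levels, which is what makes the joint, sigma-conditioned estimation discussed above possible.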
|
2502.10357
|
Learning Euler Factors of Elliptic Curves
|
math.NT cs.LG
|
We apply transformer models and feedforward neural networks to predict
Frobenius traces $a_p$ of elliptic curves given other traces $a_q$. We train
further models to predict $a_p \bmod 2$ from $a_q \bmod 2$, and conduct
cross-analyses such as predicting $a_p \bmod 2$ from $a_q$. Our experiments
reveal that these models
achieve high accuracy, even in the absence of explicit number-theoretic tools
like functional equations of $L$-functions. We also present partial
interpretability findings.
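The ground-truth labels for such experiments can be computed directly: for $E: y^2 = x^3 + ax + b$ over $\mathbb{F}_p$, the Frobenius trace is $a_p = p + 1 - \#E(\mathbb{F}_p)$. A brute-force point count, fine for small $p$ (real pipelines use faster algorithms), looks like:

```python
def frobenius_trace(a: int, b: int, p: int) -> int:
    """a_p = p + 1 - #E(F_p) for E: y^2 = x^3 + a*x + b over F_p, p an odd prime."""
    squares = {(y * y) % p for y in range(p)}  # quadratic residues mod p
    count = 1                                  # the point at infinity
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        if rhs == 0:
            count += 1          # single point (x, 0)
        elif rhs in squares:
            count += 2          # two points (x, y) and (x, -y)
    return p + 1 - count

# CM curve y^2 = x^3 - x: a_p = 0 whenever p = 3 (mod 4)
print(frobenius_trace(-1, 0, 7))  # 0
print(frobenius_trace(-1, 0, 5))  # -2
```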
|
2502.10359
|
Proper Learnability and the Role of Unlabeled Data
|
cs.LG stat.ML
|
Proper learning refers to the setting in which learners must emit predictors
in the underlying hypothesis class $H$, and often leads to learners with simple
algorithmic forms (e.g. empirical risk minimization (ERM), structural risk
minimization (SRM)). The limitation of proper learning, however, is that there
exist problems which can only be learned improperly, e.g. in multiclass
classification. Thus, we ask: Under what assumptions on the hypothesis class or
the information provided to the learner is a problem properly learnable? We
first demonstrate that when the unlabeled data distribution is given, there
always exists an optimal proper learner governed by distributional
regularization, a randomized generalization of regularization. We refer to this
setting as the distribution-fixed PAC model, and continue to evaluate the
learner on its worst-case performance over all distributions. Our result holds
for all metric loss functions and any finite learning problem (with no
dependence on its size). Further, we demonstrate that sample complexities in
the distribution-fixed PAC model can shrink by only a logarithmic factor from
the classic PAC model, strongly refuting the role of unlabeled data in PAC
learning (from a worst-case perspective).
We complement this with impossibility results which obstruct any
characterization of proper learnability in the realizable PAC model. First, we
observe that there are problems whose proper learnability is logically
undecidable, i.e., independent of the ZFC axioms. We then show that proper
learnability is not a monotone property of the underlying hypothesis class, and
that it is not a local property (in a precise sense). Our impossibility results
all hold even for the fundamental setting of multiclass classification, and go
through a reduction of EMX learning (Ben-David et al., 2019) to proper
classification which may be of independent interest.
|
2502.10361
|
Enhancing Multilingual LLM Pretraining with Model-Based Data Selection
|
cs.CL cs.LG
|
Dataset curation has become a basis for strong large language model (LLM)
performance. While various rule-based filtering heuristics exist for English
and multilingual datasets, model-based filtering techniques have primarily
focused on English. To address the disparity stemming from limited research on
non-English languages, we propose a model-based filtering framework for
multilingual datasets that aims to identify a diverse set of structured and
knowledge-rich samples. Our approach emphasizes transparency, simplicity, and
efficiency, leveraging Transformer- and FastText-based classifiers to ensure
the broad accessibility of our technique and data. We conduct comprehensive
ablation studies on the FineWeb-2 web crawl dataset across diverse language
families, scripts, and resource availability to demonstrate the effectiveness
of our method. Training a 1B-parameter Llama model for 70B and 119B tokens, our
approach can match the baseline MMLU score with as little as 15% of the
training tokens, while also improving across other benchmarks. These findings
provide strong evidence for the generalizability of our approach to other
languages. As a result, we extend our framework to 20 languages for which we
release the refined pretraining datasets.
|
2502.10363
|
BeamDojo: Learning Agile Humanoid Locomotion on Sparse Footholds
|
cs.RO cs.AI cs.LG
|
Traversing risky terrains with sparse footholds poses a significant challenge
for humanoid robots, requiring precise foot placements and stable locomotion.
Existing approaches designed for quadrupedal robots often fail to generalize to
humanoid robots due to differences in foot geometry and unstable morphology,
while learning-based approaches for humanoid locomotion still face great
challenges on complex terrains due to sparse foothold reward signals and
inefficient learning processes. To address these challenges, we introduce
BeamDojo, a reinforcement learning (RL) framework designed for enabling agile
humanoid locomotion on sparse footholds. BeamDojo begins by introducing a
sampling-based foothold reward tailored for polygonal feet, along with a double
critic to balance the learning process between dense locomotion rewards and
sparse foothold rewards. To encourage sufficient trial-and-error exploration,
BeamDojo incorporates a two-stage RL approach: the first stage relaxes the
terrain dynamics by training the humanoid on flat terrain while providing it
with task-terrain perceptive observations, and the second stage fine-tunes the
policy on the actual task terrain. Moreover, we implement an onboard LiDAR-based
elevation map to enable real-world deployment. Extensive simulation and
real-world experiments demonstrate that BeamDojo achieves efficient learning in
simulation and enables agile locomotion with precise foot placement on sparse
footholds in the real world, maintaining a high success rate even under
significant external disturbances.
|
2502.10365
|
AffinityFlow: Guided Flows for Antibody Affinity Maturation
|
cs.LG
|
Antibodies are widely used as therapeutics, but their development requires
costly affinity maturation, involving iterative mutations to enhance binding
affinity. This paper explores a sequence-only scenario for affinity maturation,
using solely antibody and antigen sequences. Recently, AlphaFlow wrapped
AlphaFold within flow matching to generate diverse protein structures, enabling
a sequence-conditioned generative model of structure. Building on this, we
propose an alternating optimization framework that (1) fixes the sequence to
guide structure generation toward high binding affinity using a structure-based
affinity predictor, then (2) applies inverse folding to create sequence
mutations, refined by a sequence-based affinity predictor for post-selection. A
key challenge is the lack of labeled data for training both predictors. To
address this, we develop a co-teaching module that incorporates valuable
information from noisy biophysical energies into predictor refinement. The
sequence-based predictor selects consensus samples to teach the structure-based
predictor, and vice versa. Our method, AffinityFlow, achieves state-of-the-art
performance in affinity maturation experiments. We plan to open-source our code
after acceptance.
|
2502.10367
|
Decentralized State Estimation and Opacity Verification Based on
Partially Ordered Observation Sequences
|
eess.SY cs.SY
|
In this paper, we investigate state estimation and opacity verification
problems within a decentralized observation architecture. Specifically, we
consider a discrete event system whose behavior is recorded by a set of
observation sites. These sites transmit the partially ordered sequences of
observations that they record to a coordinator whenever a
\textit{synchronization} occurs. To properly analyze the system behavior from
the coordinator's viewpoint, we first introduce the notion of an \textit{All
Sequence Structure} (ASS), which concisely captures the evolution of each
system state under the different information provided by the observation sites.
Based on the ASS, we then construct corresponding current-state and
initial-state estimators for offline state estimation at the coordinator. When
used to verify state-isolation properties under this decentralized
architecture, the use of ASS demonstrates a significant reduction in complexity
compared with existing approaches in the literature. In particular, we discuss
how to verify initial-state opacity at the coordinator, as well as a novel
opacity notion, namely current-state-at-synchronization opacity.
|
2502.10373
|
OWLS: Scaling Laws for Multilingual Speech Recognition and Translation
Models
|
cs.CL cs.AI cs.LG eess.AS
|
Neural scaling laws offer valuable insights for designing robust sequence
processing architectures. While these laws have been extensively characterized
in other modalities, their behavior in speech remains comparatively
underexplored. In this work, we introduce OWLS, an open-access, reproducible
suite of multilingual speech recognition and translation models spanning 0.25B
to 18B parameters, with the 18B version being the largest speech model, to the
best of our knowledge. OWLS leverages up to 360K hours of public speech data
across 150 languages, enabling a systematic investigation into how data, model,
and compute scaling each influence performance in multilingual speech tasks. We
use OWLS to derive neural scaling laws, showing how final performance can be
reliably predicted when scaling. One of our key findings is that scaling
enhances performance on low-resource languages/dialects, helping to mitigate
bias and improve the accessibility of speech technologies. Finally, we show how
OWLS can be used to power new research directions by discovering emergent
abilities in large-scale speech models. Model checkpoints will be released on
https://huggingface.co/collections/espnet/owls-scaling-laws-for-speech-recognition-and-translation-67ab7f991c194065f057ce8d
for future studies.
|
2502.10377
|
ReStyle3D: Scene-Level Appearance Transfer with Semantic Correspondences
|
cs.CV cs.GR
|
We introduce ReStyle3D, a novel framework for scene-level appearance transfer
from a single style image to a real-world scene represented by multiple views.
The method combines explicit semantic correspondences with multi-view
consistency to achieve precise and coherent stylization. Unlike conventional
stylization methods that apply a reference style globally, ReStyle3D uses
open-vocabulary segmentation to establish dense, instance-level correspondences
between the style and real-world images. This ensures that each object is
stylized with semantically matched textures. It first transfers the style to a
single view using a training-free semantic-attention mechanism in a diffusion
model. It then lifts the stylization to additional views via a learned
warp-and-refine network guided by monocular depth and pixel-wise
correspondences. Experiments show that ReStyle3D consistently outperforms prior
methods in structure preservation, perceptual style similarity, and multi-view
coherence. User studies further validate its ability to produce
photo-realistic, semantically faithful results. Our code, pretrained models,
and dataset will be publicly released, to support new applications in interior
design, virtual staging, and 3D-consistent stylization.
|
2502.10378
|
Unknown Word Detection for English as a Second Language (ESL) Learners
Using Gaze and Pre-trained Language Models
|
cs.HC cs.CL
|
English as a Second Language (ESL) learners often encounter unknown words
that hinder their text comprehension. Automatically detecting these words as
users read can enable computing systems to provide just-in-time definitions,
synonyms, or contextual explanations, thereby helping users learn vocabulary in
a natural and seamless manner. This paper presents EyeLingo, a
transformer-based machine learning method that predicts the probability of
unknown words based on text content and eye gaze trajectory in real time with
high accuracy. A 20-participant user study revealed that our method can achieve
an accuracy of 97.6%, and an F1-score of 71.1%. We implemented a real-time
reading assistance prototype to show the effectiveness of EyeLingo. The user
study shows improvement in willingness to use and usefulness compared to
baseline methods.
|
2502.10381
|
Balancing the Scales: A Theoretical and Algorithmic Framework for
Learning from Imbalanced Data
|
cs.LG stat.ML
|
Class imbalance remains a major challenge in machine learning, especially in
multi-class problems with long-tailed distributions. Existing methods, such as
data resampling, cost-sensitive techniques, and logistic loss modifications,
though popular and often effective, lack solid theoretical foundations. As an
example, we demonstrate that cost-sensitive methods are not Bayes consistent.
This paper introduces a novel theoretical framework for analyzing
generalization in imbalanced classification. We propose a new class-imbalanced
margin loss function for both binary and multi-class settings, prove its strong
$H$-consistency, and derive corresponding learning guarantees based on
empirical loss and a new notion of class-sensitive Rademacher complexity.
Leveraging these theoretical results, we devise novel and general learning
algorithms, IMMAX (Imbalanced Margin Maximization), which incorporate
confidence margins and are applicable to various hypothesis sets. While our
focus is theoretical, we also present extensive empirical results demonstrating
the effectiveness of our algorithms compared to existing baselines.
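In the spirit of the class-imbalanced margin loss described above, a hinge-style surrogate with per-class confidence margins can be sketched as follows; the exact functional form and margin assignment in the paper may differ, with larger margins typically given to rarer classes:

```python
import numpy as np

def class_margin_loss(scores, labels, margins):
    """Multi-class hinge loss with per-class confidence margins:
    loss_i = max(0, margins[y_i] - (s_{y_i} - max_{j != y_i} s_j)),
    so class y_i must win by at least its own margin."""
    n = scores.shape[0]
    correct = scores[np.arange(n), labels]
    masked = scores.copy()
    masked[np.arange(n), labels] = -np.inf   # exclude the true class
    runner_up = masked.max(axis=1)
    return np.maximum(0.0, margins[labels] - (correct - runner_up)).mean()

scores = np.array([[0.2, 0.0]])
labels = np.array([0])
margins = np.array([1.0, 1.0])
print(class_margin_loss(scores, labels, margins))  # 0.8
```

Setting a larger margin for a minority class forces the classifier to separate it more confidently, which is the intuition behind using confidence margins to counter imbalance.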
|
2502.10383
|
Representation and Interpretation in Artificial and Natural Computing
|
cs.AI
|
Artificial computing machinery transforms representations through an
objective process, to be interpreted subjectively by humans, so the machine and
the interpreter are different entities, but in the putative natural computing
both processes are performed by the same agent. The method or process that
transforms a representation is called here \emph{the mode of computing}. The
mode used by digital computers is the algorithmic one, but there are others,
such as quantum computers and diverse forms of non-conventional computing, and
there is an open-ended set of representational formats and modes that could be
used in artificial and natural computing. A mode based on a notion of computing
different from Turing's may perform feats beyond what the Turing Machine does
but the modes would not be of the same kind and could not be compared. For a
mode of computing to be more powerful than the algorithmic one, it ought to
compute functions lacking an effective algorithm, and Church's Thesis would not
hold. Here, a thought experiment including a computational demon using a
hypothetical mode for such an effect is presented. If there is natural
computing, there is a mode of natural computing whose properties may be causal
to the phenomenological experience. Discovering it would come with solving the
hard problem of consciousness; but if it turns out that such a mode does not
exist, there is no such thing as natural computing, and the mind is not a
computational process.
|
2502.10385
|
Simplifying DINO via Coding Rate Regularization
|
cs.CV cs.AI
|
DINO and DINOv2 are two model families widely used to learn
representations from unlabeled imagery data at large scales. Their learned
representations often enable state-of-the-art performance for downstream tasks,
such as image classification and segmentation. However, they employ many
empirically motivated design choices and their training pipelines are highly
complex and unstable -- many hyperparameters need to be carefully tuned to
ensure that the representations do not collapse -- which poses considerable
difficulty to improving them or adapting them to new domains. In this work, we
posit that we can remove most of these empirically motivated idiosyncrasies in the pre-training
pipelines, and only need to add an explicit coding rate term in the loss
function to avoid collapse of the representations. As a result, we obtain
highly simplified variants of DINO and DINOv2, which we call SimDINO and
SimDINOv2, respectively. Remarkably, these simplified models are more robust to
different design choices, such as network architecture and hyperparameters, and
they learn even higher-quality representations, measured by performance on
downstream tasks, offering a Pareto improvement over the corresponding DINO and
DINOv2 models. This work highlights the potential of using simplifying design
principles to improve the empirical practice of deep learning.
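The coding rate term referred to here has a standard closed form in the rate-reduction literature; a direct NumPy rendering for a batch of features, with `eps` as the allowed distortion, might look like the following (an illustrative sketch, not the SimDINO training code):

```python
import numpy as np

def coding_rate(Z: np.ndarray, eps: float = 0.5) -> float:
    """R(Z) = 1/2 * logdet(I + d/(n*eps^2) * Z^T Z) for Z of shape (n, d).
    Collapsed (low-rank) feature batches yield a small coding rate, so
    maximizing this term acts as an explicit anti-collapse regularizer."""
    n, d = Z.shape
    gram = np.eye(d) + (d / (n * eps**2)) * (Z.T @ Z)
    _, logdet = np.linalg.slogdet(gram)
    return 0.5 * logdet

rng = np.random.default_rng(0)
spread = rng.standard_normal((256, 16))
collapsed = np.tile(spread[:, :1], (1, 16))  # every dimension identical
print(coding_rate(spread) > coding_rate(collapsed))  # True
```

Adding such a term to the loss penalizes representations whose dimensions collapse onto each other, which is the failure mode the abstract says the many tuned hyperparameters of DINO otherwise guard against.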
|
2502.10388
|
Aspect-Oriented Summarization for Psychiatric Short-Term Readmission
Prediction
|
cs.CL
|
Recent progress in large language models (LLMs) has enabled the automated
processing of lengthy documents even without supervised training on a
task-specific dataset. Yet, their zero-shot performance in complex tasks as
opposed to straightforward information extraction tasks remains suboptimal. One
feasible approach for tasks with lengthy, complex input is to first summarize
the document and then apply supervised fine-tuning to the summary. However, the
summarization process inevitably results in some loss of information. In this
study, we present a method for processing summaries of long documents aimed at
capturing different important aspects of the original document. We hypothesize
that LLM summaries generated with different aspect-oriented prompts contain
different \textit{information signals}, and we propose methods to measure these
differences. We introduce approaches to effectively integrate signals from
these different summaries for supervised training of transformer models. We
validate our hypotheses on a high-impact task -- 30-day readmission prediction
from a psychiatric discharge -- using real-world data from four hospitals, and
show that our proposed method increases the prediction performance for the
complex task of predicting patient outcome.
|
2502.10389
|
Region-Adaptive Sampling for Diffusion Transformers
|
cs.CV cs.AI
|
Diffusion models (DMs) have become the leading choice for generative tasks
across diverse domains. However, their reliance on multiple sequential forward
passes significantly limits real-time performance. Previous acceleration
methods have primarily focused on reducing the number of sampling steps or
reusing intermediate results, failing to leverage variations across spatial
regions within the image due to the constraints of convolutional U-Net
structures. By harnessing the flexibility of Diffusion Transformers (DiTs) in
handling a variable number of tokens, we introduce RAS, a novel, training-free
sampling strategy that dynamically assigns different sampling ratios to regions
within an image based on the focus of the DiT model. Our key observation is
that during each sampling step, the model concentrates on semantically
meaningful regions, and these areas of focus exhibit strong continuity across
consecutive steps. Leveraging this insight, RAS updates only the regions
currently in focus, while other regions are updated using cached noise from the
previous step. The model's focus is determined based on the output from the
preceding step, capitalizing on the temporal consistency we observed. We
evaluate RAS on Stable Diffusion 3 and Lumina-Next-T2I, achieving speedups up
to 2.36x and 2.51x, respectively, with minimal degradation in generation
quality. Additionally, a user study reveals that RAS delivers comparable
qualities under human evaluation while achieving a 1.6x speedup. Our approach
makes a significant step towards more efficient diffusion transformers,
enhancing their potential for real-time applications.
|
2502.10390
|
(How) Can Transformers Predict Pseudo-Random Numbers?
|
cs.LG cond-mat.dis-nn cs.CR stat.ML
|
Transformers excel at discovering patterns in sequential data, yet their
fundamental limitations and learning mechanisms remain crucial topics of
investigation. In this paper, we study the ability of Transformers to learn
pseudo-random number sequences from linear congruential generators (LCGs),
defined by the recurrence relation $x_{t+1} = a x_t + c \;\mathrm{mod}\; m$.
Our analysis reveals that with sufficient architectural capacity and training
data variety, Transformers can perform in-context prediction of LCG sequences
with unseen moduli ($m$) and parameters ($a,c$). Through analysis of embedding
layers and attention patterns, we uncover how Transformers develop algorithmic
structures to learn these sequences in two scenarios of increasing complexity.
First, we analyze how Transformers learn LCG sequences with unseen ($a, c$) but
fixed modulus, and we demonstrate successful learning up to $m = 2^{32}$. Our
analysis reveals that models learn to factorize the modulus and utilize
digit-wise number representations to make sequential predictions. In the
second, more challenging scenario of unseen moduli, we show that Transformers
can generalize to unseen moduli up to $m_{\text{test}} = 2^{16}$. In this case,
the model employs a two-step strategy: first estimating the unknown modulus
from the context, then utilizing prime factorizations to generate predictions.
For this task, we observe a sharp transition in the accuracy at a critical
depth of $3$. We also find that the number of in-context sequence elements needed
to reach high accuracy scales sublinearly with the modulus.
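The sequences in question come from the linear congruential recurrence quoted above, which is a few lines to reproduce; the small parameters in the demo are chosen only for readability:

```python
def lcg(a: int, c: int, m: int, x0: int, n: int) -> list:
    """First n terms of x_{t+1} = (a * x_t + c) mod m, starting from seed x0."""
    seq, x = [], x0
    for _ in range(n):
        x = (a * x + c) % m
        seq.append(x)
    return seq

print(lcg(5, 3, 16, 7, 4))  # [6, 1, 8, 11]
```

In-context training data for the task described above can then be built by sampling parameters $(a, c, m)$ and a seed, emitting a prefix of the sequence, and asking the model for the next element.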
|
2502.10391
|
MM-RLHF: The Next Step Forward in Multimodal LLM Alignment
|
cs.CL cs.CV
|
Despite notable advancements in Multimodal Large Language Models (MLLMs),
most state-of-the-art models have not undergone thorough alignment with human
preferences. This gap exists because current alignment research has primarily
achieved progress in specific areas (e.g., hallucination reduction), while the
broader question of whether aligning models with human preferences can
systematically enhance MLLM capability remains largely unexplored. To this end,
we introduce MM-RLHF, a dataset containing $\mathbf{120k}$ fine-grained,
human-annotated preference comparison pairs. This dataset represents a
substantial advancement over existing resources, offering superior size,
diversity, annotation granularity, and quality. Leveraging this dataset, we
propose several key innovations to improve both the quality of reward models
and the efficiency of alignment algorithms. Notably, we introduce a
Critique-Based Reward Model, which generates critiques of model outputs before
assigning scores, offering enhanced interpretability and more informative
feedback compared to traditional scalar reward mechanisms. Additionally, we
propose Dynamic Reward Scaling, a method that adjusts the loss weight of each
sample according to the reward signal, thereby optimizing the use of
high-quality comparison pairs. Our approach is rigorously evaluated across
$\mathbf{10}$ distinct dimensions and $\mathbf{27}$ benchmarks, with results
demonstrating significant and consistent improvements in model performance.
Specifically, fine-tuning LLaVA-ov-7B with MM-RLHF and our alignment algorithm
leads to a $\mathbf{19.5}$% increase in conversational abilities and a
$\mathbf{60}$% improvement in safety.
We have open-sourced the preference dataset, reward model, training and
evaluation code, as well as reward modeling and safety benchmarks. For more
details, please visit our project page: https://mm-rlhf.github.io.
|
2502.10392
|
Text-guided Sparse Voxel Pruning for Efficient 3D Visual Grounding
|
cs.CV cs.LG
|
In this paper, we propose an efficient multi-level convolution architecture
for 3D visual grounding. Conventional methods are difficult to meet the
requirements of real-time inference due to the two-stage or point-based
architecture. Inspired by the success of multi-level fully sparse convolutional
architecture in 3D object detection, we aim to build a new 3D visual grounding
framework following this technical route. However, as in 3D visual grounding
task the 3D scene representation should be deeply interacted with text
features, sparse convolution-based architecture is inefficient for this
interaction due to the large amount of voxel features. To this end, we propose
text-guided pruning (TGP) and completion-based addition (CBA) to deeply fuse 3D
scene representation and text features in an efficient way by gradual region
pruning and target completion. Specifically, TGP iteratively sparsifies the 3D
scene representation, enabling efficient interaction between voxel features and
text features via cross-attention. To mitigate the effect of pruning on delicate
geometric information, CBA adaptively fixes over-pruned regions by voxel
completion with negligible computational overhead. Compared with previous
single-stage methods, our method achieves the top inference speed, surpassing
the previous fastest method by 100\% in FPS. Our method also achieves state-of-the-art
accuracy even compared with two-stage methods, with $+1.13$ lead of Acc@0.5 on
ScanRefer, and $+2.6$ and $+3.2$ leads on NR3D and SR3D respectively. The code
is available at
\href{https://github.com/GWxuan/TSP3D}{https://github.com/GWxuan/TSP3D}.
|
2502.10394
|
A Coordination-based Approach for Focused Learning in Knowledge-Based
Systems
|
cs.AI cs.CL
|
Recent progress in Learning by Reading and Machine Reading systems has
significantly increased the capacity of knowledge-based systems to learn new
facts. In this work, we discuss the problem of selecting a set of learning
requests for these knowledge-based systems which would lead to maximum Q/A
performance. To understand the dynamics of this problem, we simulate the
properties of a learning strategy, which sends learning requests to an external
knowledge source. We show that choosing an optimal set of facts for these
learning systems is similar to a coordination game, and use reinforcement
learning to solve this problem. Experiments show that such an approach can
significantly improve Q/A performance.
|
2502.10395
|
An Integrated Platform for Studying Learning with Intelligent Tutoring
Systems: CTAT+TutorShop
|
cs.CY cs.AI cs.HC
|
Intelligent tutoring systems (ITSs) are effective in helping students learn;
further research could make them even more effective. Particularly desirable is
research into how students learn with these systems, how these systems best
support student learning, and what learning sciences principles are key in
ITSs. CTAT+Tutorshop provides a full stack integrated platform that facilitates
a complete research lifecycle with ITSs, which includes using ITS data to
discover learner challenges, to identify opportunities for system improvements,
and to conduct experimental studies. The platform includes authoring tools to
support and accelerate the development of ITSs, which provide automatic data
logging in a format compatible with DataShop, an independent site that supports
the analysis of ed tech log data to study student learning. Among the many
technology platforms that exist to support learning sciences research,
CTAT+Tutorshop may be the only one that offers researchers the possibility to
author elements of ITSs, or whole ITSs, as part of designing studies. This
platform has been used to develop and conduct an estimated 147 research studies
which have run in a wide variety of laboratory and real-world educational
settings, including K-12 and higher education, and have addressed a wide range
of research questions. This paper presents five case studies of research
conducted on the CTAT+Tutorshop platform, and summarizes what has been
accomplished and what is possible for future researchers. We reflect on the
distinctive elements of this platform that have made it so effective in
facilitating a wide range of ITS research.
|
2502.10396
|
DASKT: A Dynamic Affect Simulation Method for Knowledge Tracing
|
cs.CY cs.AI cs.LG
|
Knowledge Tracing (KT) predicts future performance by modeling students'
historical interactions, and understanding students' affective states can
enhance the effectiveness of KT, thereby improving the quality of education.
Although traditional KT values students' cognition and learning behaviors,
efficient evaluation of students' affective states and their application in KT
still require further exploration due to the non-affect-oriented nature of the
data and budget constraints. To address this issue, we propose a
computation-driven approach, Dynamic Affect Simulation Knowledge Tracing
(DASKT), to explore the impact of various student affective states (such as
frustration, concentration, boredom, and confusion) on their knowledge states.
In this model, we first extract affective factors from students'
non-affect-oriented behavioral data, then use clustering and spatiotemporal
sequence modeling to accurately simulate students' dynamic affect changes when
dealing with different problems. Subsequently, we incorporate affect with
time-series analysis to improve the model's ability to infer knowledge states
over time and space. Extensive experimental results on two
public real-world educational datasets show that DASKT can achieve more
reasonable knowledge states under the effect of students' affective states.
Moreover, DASKT outperforms the most advanced KT methods in predicting student
performance. Our research highlights a promising avenue for future KT studies,
focusing on achieving high interpretability and accuracy.
|
2502.10398
|
Practical Application and Limitations of AI Certification Catalogues in
the Light of the AI Act
|
cs.CY cs.AI cs.LG
|
In this work-in-progress, we investigate the certification of AI systems,
focusing on the practical application and limitations of existing certification
catalogues in the light of the AI Act by attempting to certify a publicly
available AI system. We aim to evaluate how well current approaches work to
effectively certify an AI system, and how publicly accessible AI systems, that
might not be actively maintained or initially intended for certification, can
be selected and used for a sample certification process. Our methodology
involves leveraging the Fraunhofer AI Assessment Catalogue as a comprehensive
tool to systematically assess an AI model's compliance with certification
standards. We find that while the catalogue effectively structures the
evaluation process, it can also be cumbersome and time-consuming to use. We
observe the limitations of an AI system that no longer has an active
development team and highlight the importance of complete system documentation.
Finally, we identify some limitations of the certification catalogues used and
propose ideas on how to streamline the certification process.
|
2502.10399
|
Data Stewardship Decoded: Mapping Its Diverse Manifestations and
Emerging Relevance at a time of AI
|
cs.CY cs.AI cs.DB
|
Data stewardship has become a critical component of modern data governance,
especially with the growing use of artificial intelligence (AI). Despite its
increasing importance, the concept of data stewardship remains ambiguous and
varies in its application. This paper explores four distinct manifestations of
data stewardship to clarify its emerging position in the data governance
landscape. These manifestations include a) data stewardship as a set of
competencies and skills, b) a function or role within organizations, c) an
intermediary organization facilitating collaborations, and d) a set of guiding
principles. The paper subsequently outlines the core competencies required for
effective data stewardship, explains the distinction between data stewards and
Chief Data Officers (CDOs), and details the intermediary role of stewards in
bridging gaps between data holders and external stakeholders. It also explores
key principles aligned with the FAIR framework (Findable, Accessible,
Interoperable, Reusable) and introduces the emerging principle of AI readiness
to ensure data meets the ethical and technical requirements of AI systems. The
paper emphasizes the importance of data stewardship in enhancing data
collaboration, fostering public value, and managing data reuse responsibly,
particularly in the era of AI. It concludes by identifying challenges and
opportunities for advancing data stewardship, including the need for
standardized definitions, capacity building efforts, and the creation of a
professional association for data stewardship.
|
2502.10401
|
You Can't Get There From Here: Redefining Information Science to address
our sociotechnical futures
|
cs.CY cs.AI cs.HC
|
Current definitions of Information Science are inadequate to comprehensively
describe the nature of its field of study and for addressing the problems that
are arising from intelligent technologies. The ubiquitous rise of artificial
intelligence applications and their impact on society demands the field of
Information Science acknowledge the sociotechnical nature of these
technologies. Previous definitions of Information Science over the last six
decades have inadequately addressed the environmental, human, and social
aspects of these technologies. This perspective piece advocates for an expanded
definition of Information Science that fully includes the sociotechnical
impacts information has on the conduct of research in this field. Proposing an
expanded definition of Information Science that includes the sociotechnical
aspects of this field should stimulate both conversation and widen the
interdisciplinary lens necessary to address how intelligent technologies may be
incorporated into society and our lives more fairly.
|
2502.10403
|
Implementing agile healthcare frameworks in the context of low income
countries: Proposed Framework and Review
|
cs.ET cs.CY cs.IR
|
Agile healthcare frameworks, derived from methodologies in IT and
manufacturing, offer transformative potential for low-income regions. This
study explores Agile integration in resource-constrained environments, focusing
on Ghana. Key benefits include adaptability, iterative planning, and
stakeholder collaboration to address infrastructure gaps, workforce shortages,
and the "know-do gap." Digital tools like mobile health (mHealth) applications
and the District Health Information Management System (DHIMS) demonstrate
Agile's scalability and efficacy in improving outcomes and resource allocation.
Policy alignment, such as through Ghana's National Health Insurance Scheme
(NHIS), is crucial for sustaining these practices. Findings reveal Agile's
ability to enable real-time decision-making, foster community engagement, and drive
interdisciplinary collaboration. This paper provides actionable strategies and
systemic innovations, positioning Agile as a scalable model for equitable,
high-quality care delivery in other low-income regions.
|
2502.10406
|
FishBargain: An LLM-Empowered Bargaining Agent for Online Fleamarket
Platform Sellers
|
cs.CY cs.AI
|
Different from traditional Business-to-Consumer e-commerce platforms~(e.g.,
Amazon), online fleamarket platforms~(e.g., Craigslist) mainly serve
individual sellers who lack the time investment and business proficiency of
professional merchants. Individual sellers often struggle with the bargaining
process, and deals consequently fall through. Recent advancements in Large
Language Models (LLMs) demonstrate huge potential in various dialogue tasks,
but those tasks mainly take the form of passively following the user's
instructions. Bargaining, as a proactive dialogue task, represents a distinct
art of dialogue given the dynamism of the environment and the uncertainty of
adversary strategies. In this paper, we propose an LLM-empowered bargaining
agent designed for online fleamarket platform sellers, named FishBargain.
Specifically, FishBargain understands the chat context and product information,
chooses both action and language skill considering possible adversary actions
and generates utterances. FishBargain has been tested by thousands of
individual sellers on one of the largest online fleamarket platforms~(Xianyu)
in China. Both qualitative and quantitative experiments demonstrate that
FishBargain can effectively help sellers make more deals.
|
2502.10407
|
Addressing Bias in Generative AI: Challenges and Research Opportunities
in Information Management
|
cs.CY cs.AI cs.HC
|
Generative AI technologies, particularly Large Language Models (LLMs), have
transformed information management systems but introduced substantial biases
that can compromise their effectiveness in informing business decision-making.
This challenge presents information management scholars with a unique
opportunity to advance the field by identifying and addressing these biases
across extensive applications of LLMs. Building on the discussion on bias
sources and current methods for detecting and mitigating bias, this paper seeks
to identify gaps and opportunities for future research. By incorporating
ethical considerations, policy implications, and sociotechnical perspectives,
we focus on developing a framework that covers major stakeholders of Generative
AI systems, proposing key research questions, and inspiring discussion. Our
goal is to provide actionable pathways for researchers to address bias in LLM
applications, thereby advancing research in information management that
ultimately informs business practices. Our forward-looking framework and
research agenda advocate interdisciplinary approaches, innovative methods,
dynamic perspectives, and rigorous evaluation to ensure fairness and
transparency in Generative AI-driven information systems. We expect this study
to serve as a call to action for information management scholars to tackle this
critical issue, guiding the improvement of fairness and effectiveness in
LLM-based systems for business practice.
|
2502.10408
|
Knowledge Tracing in Programming Education Integrating Students'
Questions
|
cs.CY cs.AI cs.SE
|
Knowledge tracing (KT) in programming education presents unique challenges
due to the complexity of coding tasks and the diverse methods students use to
solve problems. Although students' questions often contain valuable signals
about their understanding and misconceptions, traditional KT models often
neglect to incorporate these questions as inputs to address these challenges.
This paper introduces SQKT (Students' Question-based Knowledge Tracing), a
knowledge tracing model that leverages students' questions and automatically
extracted skill information to enhance the accuracy of predicting students'
performance on subsequent problems in programming education. Our method creates
semantically rich embeddings that capture not only the surface-level content of
the questions but also the student's mastery level and conceptual
understanding. Experimental results demonstrate SQKT's superior performance in
predicting student completion across various Python programming courses of
differing difficulty levels. In in-domain experiments, SQKT achieved a 33.1\%
absolute improvement in AUC compared to baseline models. The model also
exhibited robust generalization capabilities in cross-domain settings,
effectively addressing data scarcity issues in advanced programming courses.
SQKT can be used to tailor educational content to individual learning needs and
design adaptive learning systems in computer science education.
|
2502.10409
|
Data Science Students Perspectives on Learning Analytics: An Application
of Human-Led and LLM Content Analysis
|
cs.CY cs.AI cs.ET stat.AP
|
Objective This study is part of a series of initiatives at a UK university
designed to cultivate a deep understanding of students' perspectives on
analytics that resonate with their unique learning needs. It explores
collaborative data processing undertaken by postgraduate students who examined
an Open University Learning Analytics Dataset (OULAD).
Methods A qualitative approach was adopted, integrating a Retrieval-Augmented
Generation (RAG) and a Large Language Model (LLM) technique with human-led
content analysis to gather information about students' perspectives based on
their submitted work. The study involved 72 postgraduate students in 12 groups.
Findings The analysis of group work revealed diverse insights into essential
learning analytics from the students' perspectives. All groups adopted a
structured data science methodology. The questions formulated by the groups
were categorised into seven themes, reflecting their specific areas of
interest. While the variables selected to interpret correlations varied, a
consensus was found regarding the general results.
Conclusion A significant outcome of this study is that students specialising
in data science exhibited a deeper understanding of learning analytics,
effectively articulating their interests through inferences drawn from their
analyses. While human-led content analysis provided a general understanding of
students' perspectives, the LLM offered nuanced insights.
|
2502.10410
|
Auto-Evaluation: A Critical Measure in Driving Improvements in Quality
and Safety of AI-Generated Lesson Resources
|
cs.CY cs.AI
|
As a publicly funded body in the UK, Oak National Academy is in a unique
position to innovate within this field as we have a comprehensive curriculum of
approximately 13,000 open education resources (OER) for all National Curriculum
subjects, designed and quality-assured by expert, human teachers. This has
provided the corpus of content needed for building a high-quality AI-powered
lesson planning tool, Aila, that is free to use and, therefore, accessible to
all teachers across the country. Furthermore, using our evidence-informed
curriculum principles, we have codified and exemplified each component of
lesson design. To assess the quality of lessons produced by Aila at scale, we
have developed an AI-powered auto-evaluation agent, facilitating informed
improvements to enhance output quality. Through comparisons between human and
auto-evaluations, we have begun to refine this agent further to increase its
accuracy, measured by its alignment with an expert human evaluator. In this
paper, we present this iterative evaluation process through an illustrative case
study focused on one quality benchmark - the level of challenge within
multiple-choice quizzes. We also explore the contribution that this may make to
similar projects and the wider sector.
|
2502.10411
|
TrueReason: An Exemplar Personalised Learning System Integrating
Reasoning with Foundational Models
|
cs.CY cs.AI cs.CL cs.IR cs.MA
|
Personalised education is one of the domains that can greatly benefit from
the most recent advances in Artificial Intelligence (AI) and Large Language
Models (LLM). However, it is also one of the most challenging applications due
to the cognitive complexity of teaching effectively while personalising the
learning experience to suit independent learners. We hypothesise that one
promising approach to excelling in such demanding use cases is using a
\emph{society of minds}. In this chapter, we present TrueReason, an exemplar
personalised learning system that integrates a multitude of specialised AI
models that can mimic micro skills that are composed together by an LLM to
operationalise planning and reasoning. The architecture of the initial
prototype is presented while describing two micro skills that have been
incorporated in the prototype. The proposed system demonstrates a first step
toward building sophisticated AI systems that can take on the highly complex
cognitive tasks demanded by domains such as education.
|
2502.10412
|
Identifying relevant indicators for monitoring a National Artificial
Intelligence Strategy
|
cs.CY cs.AI
|
How can a National Artificial Intelligence Strategy be effectively monitored?
To address this question, we propose a methodology consisting of two key
components. First, it involves identifying relevant indicators within national
AI strategies. Second, it assesses the alignment between these indicators and
the strategic actions of a specific government's AI strategy, allowing for a
critical evaluation of its monitoring measures. Moreover, identifying these
indicators helps assess the overall quality of the strategy's structure. A lack
of alignment between strategic actions and the identified indicators may reveal
gaps or blind spots in the strategy. This methodology is demonstrated using the
Brazilian AI strategy as a case study.
|
2502.10413
|
Machine Learning-Driven Convergence Analysis in Multijurisdictional
Compliance Using BERT and K-Means Clustering
|
cs.CY cs.AI cs.CE cs.CL cs.LG
|
As digital data continues to grow, there has been a shift towards using
effective regulatory mechanisms to safeguard personal information. The
California Consumer Privacy Act (CCPA) and the General Data Protection
Regulation (GDPR) of the European Union are two of the most important privacy
laws. Both regulations are intended to safeguard consumer privacy, but they
vary greatly in scope, definitions, and methods of enforcement. This paper
presents a fresh approach to adaptive
compliance, using machine learning and emphasizing natural language processing
(NLP) as the primary focus of comparison between the GDPR and CCPA. Using NLP,
this study compares various regulations to identify areas where they overlap or
diverge. This includes the "right to be forgotten" provision in the GDPR and
the "opt-out of sale" provision under CCPA. International companies can learn
valuable lessons from this report, as it outlines strategies for better
enforcement of laws across different nations. Additionally, the paper discusses
the challenges of utilizing NLP in legal literature and proposes methods to
enhance the ability of machine learning models to study regulations.
The study's objective is to "bridge the gap between legal knowledge and
technical expertise" by developing regulatory compliance strategies that are
more efficient in operation and more effective in data protection.
|
2502.10414
|
A Neural Network Training Method Based on Neuron Connection Coefficient
Adjustments
|
cs.NE cs.LG
|
In previous studies, we introduced a neural network framework based on
symmetric differential equations, along with one of its training methods. In
this article, we present another training approach for this neural network.
This method leverages backward signal propagation and eliminates reliance on
the traditional chain derivative rule, offering a high degree of biological
interpretability. Unlike the previously introduced method, this approach does
not require adjustments to the fixed points of the differential equations.
Instead, it focuses solely on modifying the connection coefficients between
neurons, closely resembling the training process of traditional multilayer
perceptron (MLP) networks. By adopting a suitable adjustment strategy, this
method effectively avoids certain potential local minima. To validate this
approach, we tested it on the MNIST dataset and achieved promising results.
Through further analysis, we identified certain limitations of the current
neural network architecture and proposed measures for improvement.
|
2502.10417
|
Evolutionary Power-Aware Routing in VANETs using Monte-Carlo Simulation
|
cs.NE cs.AI cs.NI
|
This work addresses the reduction of power consumption of the AODV routing
protocol in vehicular networks as an optimization problem. Nowadays, network
designers focus on energy-aware communication protocols, especially when
deploying wireless networks. Here, we introduce an automatic method to search for
energy-efficient AODV configurations by using an evolutionary algorithm and
parallel Monte-Carlo simulations to improve the accuracy of the evaluation of
tentative solutions. The experimental results demonstrate that significant
power consumption improvements over the standard configuration can be attained,
with no noteworthy loss in the quality of service.
|
2502.10418
|
A Novel Multi-Objective Evolutionary Algorithm for Counterfactual
Generation
|
cs.NE cs.LG
|
Machine learning algorithms that learn black-box predictive models (which
cannot be directly interpreted) are increasingly used to make predictions
affecting the lives of people. It is important that users understand the
predictions of such models, particularly when the model outputs a negative
prediction for the user (e.g. denying a loan). Counterfactual explanations
provide users with guidance on how to change some of their characteristics to
receive a different, positive classification by a predictive model. For
example, if a predictive model rejected a loan application from a user, a
counterfactual explanation might state: If your salary were {\pounds}50,000
(rather than your current {\pounds}35,000), then your loan would be approved.
This paper proposes two novel contributions: (a) a novel multi-objective
Evolutionary Algorithm (EA) for counterfactual generation based on
lexicographic optimisation, rather than the more popular Pareto dominance
approach; and (b) an extension to the definition of the objective of validity
for a counterfactual, based on measuring the resilience of a counterfactual to
violations of monotonicity constraints which are intuitively expected by users;
e.g., intuitively, the probability of a loan application to be approved would
monotonically increase with an increase in the salary of the applicant.
Experiments involving 15 experimental settings (3 types of black box models
times 5 datasets) have shown that the proposed lexicographic optimisation-based
EA is very competitive with an existing Pareto dominance-based EA; and the
proposed extension of the validity objective has led to a substantial increase
in the validity of the counterfactuals generated by the proposed EA.
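The lexicographic optimisation the abstract contrasts with Pareto dominance can
be sketched as follows (an illustrative reconstruction, not the paper's code;
the objective names, ordering, and tolerances are assumptions):

```python
# Illustrative sketch of lexicographic comparison between counterfactual
# candidates. Objectives are ranked by priority (here: invalidity, then
# distance to the original instance, then number of changed features); a
# candidate wins on the first objective where it is better by more than a
# small tolerance, and near-ties fall through to the next objective.

def lex_better(a, b, tolerances):
    """Return True if objective vector `a` lexicographically beats `b`.

    `a` and `b` are tuples of objective values to MINIMISE, ordered by
    priority; `tolerances` gives the tie threshold per objective.
    """
    for x, y, tol in zip(a, b, tolerances):
        if x < y - tol:   # a is clearly better on this objective
            return True
        if x > y + tol:   # a is clearly worse: stop, b wins
            return False
        # within tolerance: treat as a tie, move to the next objective
    return False          # fully tied

# Candidate objective vectors: (invalidity, distance, n_changed_features)
c1 = (0.0, 0.8, 2)
c2 = (0.0, 0.5, 4)
print(lex_better(c2, c1, tolerances=(0.01, 0.05, 0)))
# True: the candidates tie on validity, and c2 wins on distance before
# the lower-priority sparsity objective is ever considered
```

Unlike Pareto dominance, which would leave c1 and c2 incomparable (each is
better on one objective), the lexicographic rule always yields a total order
once the priority of objectives is fixed.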
|
2502.10419
|
A Hybrid Swarm Intelligence Approach for Optimizing Multimodal Large
Language Models Deployment in Edge-Cloud-based Federated Learning
Environments
|
cs.NE cs.AI cs.LG
|
The combination of Federated Learning (FL), Multimodal Large Language Models
(MLLMs), and edge-cloud computing enables distributed and real-time data
processing while preserving privacy across edge devices and cloud
infrastructure. However, the deployment of MLLMs in FL environments with
resource-constrained edge devices presents significant challenges, including
resource management, communication overhead, and non-IID data. To address these
challenges, we propose a novel hybrid framework wherein MLLMs are deployed on
edge devices equipped with sufficient resources and battery life, while the
majority of training occurs in the cloud. To identify suitable edge devices for
deployment, we employ Particle Swarm Optimization (PSO), and Ant Colony
Optimization (ACO) is utilized to optimize the transmission of model updates
between edge and cloud nodes. This proposed swarm intelligence-based framework
aims to enhance the efficiency of MLLM training by conducting extensive
training in the cloud and fine-tuning at the edge, thereby reducing energy
consumption and communication costs. Our experimental results show that the
proposed method significantly improves system performance, achieving an
accuracy of 92%, reducing communication cost by 30%, and enhancing client
participation compared to traditional FL methods. These results make the
proposed approach highly suitable for large-scale edge-cloud computing systems.
|
2502.10420
|
Position: Stop Acting Like Language Model Agents Are Normal Agents
|
cs.AI cs.CL
|
Language Model Agents (LMAs) are increasingly treated as capable of
autonomously navigating interactions with humans and tools. Their design and
deployment tends to presume they are normal agents capable of sustaining
coherent goals, adapting across contexts and acting with a measure of
intentionality. These assumptions are critical to prospective use cases in
industrial, social and governmental settings. But LMAs are not normal agents.
They inherit the structural problems of the large language models (LLMs) around
which they are built: hallucinations, jailbreaking, misalignment and
unpredictability. In this Position paper we argue LMAs should not be treated as
normal agents, because doing so leads to problems that undermine their utility
and trustworthiness. We enumerate pathologies of agency intrinsic to LMAs.
Despite scaffolding such as external memory and tools, they remain
ontologically stateless, stochastic, semantically sensitive, and linguistically
intermediated. These pathologies destabilise the ontological properties of LMAs
including identifiability, continuity, persistence and consistency,
problematising their claim to agency. In response, we argue LMA ontological
properties should be measured before, during and after deployment so that the
negative effects of pathologies can be mitigated.
|
2502.10421
|
DRiVE: Dynamic Recognition in VEhicles using snnTorch
|
cs.NE cs.AI cs.CV cs.LG
|
Spiking Neural Networks (SNNs) mimic biological brain activity, processing
data efficiently through an event-driven design, wherein the neurons activate
only when inputs exceed specific thresholds. Their ability to track voltage
changes over time via membrane potential dynamics helps retain temporal
information. This study combines SNNs with PyTorch's adaptable framework,
snnTorch, to test their potential for image-based tasks. We introduce DRiVE, a
vehicle detection model that uses spiking neuron dynamics to classify images,
achieving 94.8% accuracy and a near-perfect 0.99 AUC score. These results
highlight DRiVE's ability to distinguish vehicle classes effectively,
challenging the notion that SNNs are limited to temporal data. As interest
grows in energy-efficient neural models, DRiVE's success emphasizes the need to
refine SNN optimization for visual tasks. This work encourages broader
exploration of SNNs in scenarios where conventional networks struggle,
particularly for real-world applications requiring both precision and
efficiency.
|
2502.10422
|
DA-LIF: Dual Adaptive Leaky Integrate-and-Fire Model for Deep Spiking
Neural Networks
|
cs.NE cs.AI
|
Spiking Neural Networks (SNNs) are valued for their ability to process
spatio-temporal information efficiently, offering biological plausibility, low
energy consumption, and compatibility with neuromorphic hardware. However, the
commonly used Leaky Integrate-and-Fire (LIF) model overlooks neuron
heterogeneity and independently processes spatial and temporal information,
limiting the expressive power of SNNs. In this paper, we propose the Dual
Adaptive Leaky Integrate-and-Fire (DA-LIF) model, which introduces spatial and
temporal tuning with independently learnable decays. Evaluations on both static
(CIFAR10/100, ImageNet) and neuromorphic datasets (CIFAR10-DVS, DVS128 Gesture)
demonstrate superior accuracy with fewer timesteps compared to state-of-the-art
methods. Importantly, DA-LIF achieves these improvements with minimal
additional parameters, maintaining low energy consumption. Extensive ablation
studies further highlight the robustness and effectiveness of the DA-LIF model.
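The dual-decay idea behind DA-LIF can be sketched with a plain LIF update that
applies two independent decay factors (a minimal NumPy sketch under stated
assumptions; the parameter names `tau_t`/`tau_s` and the hard-reset rule are
illustrative, not the paper's implementation, where the decays are learnable):

```python
import numpy as np

# Minimal sketch: a LIF step whose membrane update uses two independent
# decay factors, one applied to the accumulated membrane state ("temporal"
# decay) and one to the incoming spatial input current ("spatial" decay).
# In DA-LIF proper, both decays are learnable per-layer parameters.

def dalif_step(v, x, tau_t=0.9, tau_s=0.5, v_th=1.0):
    """One timestep: decay membrane, add scaled input, fire, hard-reset."""
    v = tau_t * v + tau_s * x            # dual-decay integration
    spikes = (v >= v_th).astype(float)   # Heaviside firing
    v = v * (1.0 - spikes)               # hard reset where a spike fired
    return v, spikes

v = np.zeros(3)
for _ in range(3):                       # constant input current over time
    v, s = dalif_step(v, np.array([1.0, 0.4, 2.5]))
print(s)  # the weakest input never reaches threshold
```

Making `tau_t` and `tau_s` separately learnable is what lets each layer tune
how much history it keeps versus how strongly it weights fresh spatial input.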
|
2502.10423
|
Spiking Neural Network Feature Discrimination Boosts Modality Fusion
|
cs.NE cs.CV cs.LG eess.IV
|
Feature discrimination is a crucial aspect of neural network design, as it
directly impacts the network's ability to distinguish between classes and
generalize across diverse datasets. Achieving high-quality feature
representations that ensure high intra-class separability remains one of the
most challenging research directions. While conventional deep
neural networks (DNNs) rely on complex transformations and very deep networks
to come up with meaningful feature representations, they usually require days
of training and consume significant energy amounts. To this end, spiking neural
networks (SNNs) offer a promising alternative. The ability of SNNs to capture
temporal and spatial dependencies renders them particularly suitable for
complex tasks where multi-modal data are required. In this paper, we propose a
feature discrimination approach for multi-modal learning with SNNs, focusing on
audio-visual data. We employ deep spiking residual learning for visual modality
processing and a simpler yet efficient spiking network for auditory modality
processing. Lastly, we deploy a spiking multilayer perceptron for modality
fusion. We present our findings and evaluate our approach against similar works
in the field of classification challenges. To the best of our knowledge, this
is the first work investigating feature discrimination in SNNs.
|
2502.10424
|
QuantSpec: Self-Speculative Decoding with Hierarchical Quantized KV
Cache
|
cs.LG cs.AI
|
Large Language Models (LLMs) are increasingly being deployed on edge devices
for long-context settings, creating a growing need for fast and efficient
long-context inference. In these scenarios, the Key-Value (KV) cache is the
primary bottleneck in terms of both GPU memory and latency, as the full KV
cache must be loaded for each decoding step. While speculative decoding is a
widely accepted technique to accelerate autoregressive decoding, existing
methods often struggle to achieve significant speedups due to inefficient KV
cache optimization strategies and result in low acceptance rates. To address
these challenges, we propose a novel self-speculative decoding framework,
QuantSpec, where the draft model shares the architecture of the target model
but employs a hierarchical 4-bit quantized KV cache and 4-bit quantized weights
for acceleration. QuantSpec maintains high acceptance rates ($>$90%) and
reliably provides consistent end-to-end speedups up to $\sim2.5\times$,
outperforming other self-speculative decoding methods that use sparse KV cache
for long-context LLM inference. QuantSpec also reduces the memory requirements
by $\sim 1.3\times$ compared to these alternatives.
|
2502.10425
|
Neuron Platonic Intrinsic Representation From Dynamics Using Contrastive
Learning
|
q-bio.NC cs.AI cs.NE
|
The Platonic Representation Hypothesis suggests a universal,
modality-independent reality representation behind different data modalities.
Inspired by this, we view each neuron as a system and detect its multi-segment
activity data under various peripheral conditions. We assume there is a
time-invariant representation for the same neuron, reflecting its intrinsic
properties like molecular profiles, location, and morphology. The goal of
obtaining these intrinsic neuronal representations has two criteria: (I)
segments from the same neuron should have more similar representations than
those from different neurons; (II) the representations must generalize well to
out-of-domain data. To meet these, we propose the NeurPIR (Neuron Platonic
Intrinsic Representation) framework. It uses contrastive learning, with
segments from the same neuron as positive pairs and those from different
neurons as negative pairs. In implementation, we use VICReg, which focuses on
positive pairs and separates dissimilar samples via regularization. We tested
our method on Izhikevich model-simulated neuronal population dynamics data. The
results accurately identified neuron types based on preset hyperparameters. We
also applied it to two real-world neuron dynamics datasets with neuron type
annotations from spatial transcriptomics and neuron locations. Our model's
learned representations accurately predicted neuron types and locations and
were robust on out-of-domain data (from unseen animals). This shows the
potential of our approach for understanding neuronal systems and future
neuroscience research.
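The VICReg objective the framework relies on combines three terms: invariance between positive pairs, a variance hinge, and a covariance penalty. A minimal NumPy sketch follows; the default weights are the standard ones from the VICReg literature, not values taken from this paper.

```python
import numpy as np

def vicreg_loss(za, zb, sim_w=25.0, var_w=25.0, cov_w=1.0, eps=1e-4):
    """VICReg loss: invariance + variance + covariance terms (NumPy sketch)."""
    n, d = za.shape
    # invariance: mean squared distance between the two embeddings of a pair
    sim = np.mean((za - zb) ** 2)
    # variance: hinge keeping each embedding dimension's std above 1
    def var_term(z):
        std = np.sqrt(z.var(axis=0) + eps)
        return np.mean(np.maximum(0.0, 1.0 - std))
    var = var_term(za) + var_term(zb)
    # covariance: penalize off-diagonal covariance entries (decorrelation)
    def cov_term(z):
        zc = z - z.mean(axis=0)
        c = (zc.T @ zc) / (n - 1)
        off = c - np.diag(np.diag(c))
        return np.sum(off ** 2) / d
    cov = cov_term(za) + cov_term(zb)
    return sim_w * sim + var_w * var + cov_w * cov
```

Here `za` and `zb` would be embeddings of two segments from the same neuron (a positive pair); the variance and covariance terms play the regularization role the abstract mentions, separating dissimilar samples without explicit negatives.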
|
2502.10428
|
Dynamic Chain-of-Thought: Towards Adaptive Deep Reasoning
|
cs.AI cs.LG
|
To reduce the cost and consumption of computing resources caused by
computational redundancy and delayed reward assignment in long CoT, this
research proposes the dynamic chain-of-thought (D-CoT) with adaptive reasoning
time and steps. A simulation experiment was conducted in Python 3.13 IDLE,
combined with a GPT-based Python simulator, to model the integration of D-CoT,
with DeepSeek R1 serving as a control group for comparing the D-CoT simulator's
performance on MIT OpenCourseWare's linear algebra exam questions.
Experimental results show
that D-CoT is better than DeepSeek R1 based on long CoT in three indicators:
reasoning time, CoT length (reasoning steps) and token count, which achieves a
significant reduction in computing resource consumption. In addition, this
research has potential value for deep reasoning optimization and can serve as a
reference for future dynamic deep reasoning frameworks.
|
2502.10429
|
Real Time Control of Tandem-Wing Experimental Platform Using Concerto
Reinforcement Learning
|
cs.LG cs.AI cs.RO cs.SY eess.SY
|
This paper introduces the CRL2RT algorithm, an advanced reinforcement
learning method aimed at improving the real-time control performance of the
Direct-Drive Tandem-Wing Experimental Platform (DDTWEP). Inspired by dragonfly
flight, DDTWEP's tandem wing structure causes nonlinear and unsteady
aerodynamic interactions, leading to complex load behaviors during pitch, roll,
and yaw maneuvers. These complexities challenge stable motion control at high
frequencies (2000 Hz). To overcome these issues, we developed the CRL2RT
algorithm, which combines classical control elements with reinforcement
learning-based controllers using a time-interleaved architecture and a
rule-based policy composer. This integration ensures finite-time convergence
and single-life adaptability. Experimental results under various conditions,
including different flapping frequencies and yaw disturbances, show that CRL2RT
achieves a control frequency surpassing 2500 Hz on standard CPUs. Additionally,
when integrated with classical controllers like PID, Adaptive PID, and Model
Reference Adaptive Control (MRAC), CRL2RT enhances tracking performance by
18.3% to 60.7%. These findings demonstrate CRL2RT's broad applicability and
superior performance in complex real-time control scenarios, validating its
effectiveness in overcoming existing control strategy limitations and advancing
robust, efficient real-time control for biomimetic aerial vehicles.
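A rule-based composition of a classical controller with a bounded RL correction, in the spirit of the time-interleaved architecture described above, might look like the following hypothetical sketch. The paper's actual composer rules are not given in the abstract, so the blending scheme and all constants here are assumptions.

```python
class PID:
    """Textbook PID controller used as the classical baseline."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def composed_action(pid, err, rl_correction, weight=0.3):
    """Rule-based composition (hypothetical): the PID command is the safe
    baseline, and a clipped RL correction is blended on top so a bad RL
    output can never dominate the control signal."""
    base = pid.step(err)
    return base + weight * max(-1.0, min(1.0, rl_correction))
```

At a 2000 Hz loop rate, `dt` would be 0.0005 s; bounding the learned correction is one simple way to retain the finite-time safety of the classical controller while the RL component adapts.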
|
2502.10431
|
Leveraging Constraint Violation Signals For Action-Constrained
Reinforcement Learning
|
cs.LG cs.AI
|
In many RL applications, ensuring an agent's actions adhere to constraints is
crucial for safety. Most previous methods in Action-Constrained Reinforcement
Learning (ACRL) employ a projection layer after the policy network to correct
the action. However, projection-based methods suffer from issues like the zero
gradient problem and higher runtime due to the usage of optimization solvers.
Recently, methods were proposed to train generative models to learn a
differentiable mapping between latent variables and feasible actions to address
this issue. However, generative models require training using samples from the
constrained action space, which itself is challenging. To address such
limitations, first, we define a target distribution for feasible actions based
on constraint violation signals, and train normalizing flows by minimizing the
KL divergence between an approximated distribution over feasible actions and
the target. This eliminates the need to generate feasible action samples,
greatly simplifying the flow model learning. Second, we integrate the learned
flow model with existing deep RL methods, restricting exploration to the
feasible action space. Third, we extend our approach beyond ACRL to handle
state-wise constraints by learning the constraint violation signal from the
environment. Empirically, our approach has significantly fewer constraint
violations while achieving similar or better quality in several control tasks
than previous best methods.
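The first step above, fitting a distribution to a violation-based target by KL minimization, can be illustrated on a toy 1-D action space. Here a grid-searched Gaussian stands in for the normalizing flow, and the constraint is an assumed box |a| <= 1; everything is a sketch, not the paper's construction.

```python
import numpy as np

def violation(a):
    # illustrative constraint: actions must satisfy |a| <= 1
    return np.maximum(0.0, np.abs(a) - 1.0)

actions = np.linspace(-2.0, 2.0, 401)
beta = 10.0
# target density for feasible actions, built from the violation signal
target = np.exp(-beta * violation(actions))
target /= target.sum()

def approx_dist(mu, sigma):
    """A Gaussian over the action grid, standing in for the flow's output."""
    q = np.exp(-0.5 * ((actions - mu) / sigma) ** 2)
    return q / q.sum()

def kl(q, p, eps=1e-12):
    return np.sum(q * np.log((q + eps) / (p + eps)))

# grid search stands in for gradient-based flow training
best = min((kl(approx_dist(m, s), target), m, s)
           for m in np.linspace(-0.5, 0.5, 11)
           for s in np.linspace(0.1, 1.0, 10))
```

Because the target is defined directly from violation signals, no feasible-action samples are ever drawn, which is the simplification the abstract highlights.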
|
2502.10432
|
A Case Study on Virtual and Physical I/O Throughputs
|
cs.DC cs.DB
|
Input/Output (I/O) performance is one of the key areas that need to be
carefully examined to better support IT services. With the rapid development
and deployment of virtualization technology, many essential business
applications have been migrated to the virtualized platform due to reduced cost
and improved agility. However, the impact of such transition on the I/O
performance is not very well studied. In this research project, the authors
investigated the disk write request performance on a virtual storage interface
and on a physical storage interface. Specifically, the study aimed to identify
whether a virtual SCSI disk controller can process 4KB and 32KB I/O write
requests faster than a standard physical IDE controller. The experiments of
this study were constructed in a way to best emulate real world IT
configurations. The results reveal that a virtual SCSI controller can process
smaller write requests (4KB) faster than the physical IDE controller, but it is
outperformed by its physical counterpart when the write requests are larger
(32KB). This manuscript presents the
details of this research along with recommendations for improving virtual I/O
performance.
|
2502.10433
|
Neural Genetic Search in Discrete Spaces
|
cs.NE cs.LG
|
Effective search methods are crucial for improving the performance of deep
generative models at test time. In this paper, we introduce a novel test-time
search method, Neural Genetic Search (NGS), which incorporates the evolutionary
mechanism of genetic algorithms into the generation procedure of deep models.
The core idea behind NGS is its crossover, which is defined as
parent-conditioned generation using trained generative models. This approach
offers a versatile and easy-to-implement search algorithm for deep generative
models. We demonstrate the effectiveness and flexibility of NGS through
experiments across three distinct domains: routing problems, adversarial prompt
generation for language models, and molecular design.
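The parent-conditioned crossover at the heart of NGS can be sketched with a stand-in "generative model" over bitstrings. The real method conditions a trained deep generative model on parents; the model, objective, and hyperparameters below are purely illustrative.

```python
import random

def generate(parents=None, length=8):
    """Stand-in generative model: sample a bitstring; given parents, copy
    each position from a random parent (parent-conditioned generation)."""
    if parents is None:
        return [random.randint(0, 1) for _ in range(length)]
    return [random.choice(parents)[i] for i in range(length)]

def fitness(x):
    # toy objective: maximize the number of ones
    return sum(x)

def neural_genetic_search(pop_size=20, generations=30, seed=0):
    random.seed(seed)
    pop = [generate() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # selection
        children = [generate(parents=random.sample(elite, 2))  # crossover
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)
```

Swapping `generate` for a trained model that samples conditioned on parent solutions is what turns this ordinary genetic loop into the test-time search the abstract describes.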
|
2502.10434
|
Agency in Artificial Intelligence Systems
|
cs.AI cs.CY
|
There is a general concern that present developments in artificial
intelligence (AI) research will lead to sentient AI systems, and these may pose
an existential threat to humanity. But why cannot sentient AI systems benefit
humanity instead? This paper endeavours to pose this question in a tractable
manner. I ask whether a putative AI system will develop an altruistic or a
malicious disposition towards our society, or what the nature of its agency
would be. Given that AI systems are being developed into formidable problem
solvers, we can reasonably expect these systems to preferentially take on
conscious aspects of human problem solving. I identify the relevant phenomenal
aspects of agency in human problem solving. The functional aspects of conscious
agency can be monitored using tools provided by functionalist theories of
consciousness. A recent expert report (Butlin et al. 2023) has identified
functionalist indicators of agency based on these theories. I show how to use
the Integrated Information Theory (IIT) of consciousness, to monitor the
phenomenal nature of this agency. If we are able to monitor the agency of AI
systems as they develop, then we can dissuade them from becoming a menace to
society while encouraging them to be an aid.
|
2502.10435
|
RAMer: Reconstruction-based Adversarial Model for Multi-party
Multi-modal Multi-label Emotion Recognition
|
cs.CV cs.AI
|
Conventional multi-modal multi-label emotion recognition (MMER) from videos
typically assumes full availability of visual, textual, and acoustic
modalities. However, real-world multi-party settings often violate this
assumption, as non-speakers frequently lack acoustic and textual inputs,
leading to a significant degradation in model performance. Existing approaches
also tend to unify heterogeneous modalities into a single representation,
overlooking each modality's unique characteristics. To address these
challenges, we propose RAMer (Reconstruction-based Adversarial Model for
Emotion Recognition), which leverages adversarial learning to refine
multi-modal representations by exploring both modality commonality and
specificity through reconstructed features enhanced by contrastive learning.
RAMer also introduces a personality auxiliary task to complement missing
modalities using modality-level attention, improving emotion reasoning. To
further strengthen the model's ability to capture label and modality
interdependency, we propose a stack shuffle strategy to enrich correlations
between labels and modality-specific features. Experiments on three benchmarks,
i.e., MEmoR, CMU-MOSEI, and $M^3$ED, demonstrate that RAMer achieves
state-of-the-art performance in dyadic and multi-party MMER scenarios.
|
2502.10436
|
MERGE$^3$: Efficient Evolutionary Merging on Consumer-grade GPUs
|
cs.NE cs.AI cs.LG
|
Evolutionary model merging enables the creation of high-performing multi-task
models but remains computationally prohibitive for consumer hardware. We
introduce MERGE$^3$, an efficient framework that makes evolutionary merging
feasible on a single GPU by reducing fitness computation costs 50$\times$ while
preserving performance. MERGE$^3$ achieves this by Extracting a reduced dataset
for evaluation, Estimating model abilities using Item Response Theory (IRT),
and Evolving optimal merges via IRT-based performance estimators. Our method
enables state-of-the-art multilingual and cross-lingual merging, transferring
knowledge across languages with significantly lower computational overhead. We
provide theoretical guarantees and an open-source library, democratizing
high-quality model merging.
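The IRT-based ability estimation can be illustrated with the standard two-parameter logistic (2PL) model. Whether MERGE$^3$ uses exactly 2PL is not stated in the abstract, so treat this as a hedged sketch; item parameters below are invented.

```python
import math

def irt_2pl(theta, a, b):
    """Two-parameter logistic IRT model: probability that a model with
    ability theta answers an item of difficulty b and discrimination a."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def expected_score(theta, items):
    """Estimate a benchmark score from item parameters instead of running
    the full evaluation -- the kind of IRT-based estimator the abstract
    describes, sketched here under 2PL assumptions."""
    return sum(irt_2pl(theta, a, b) for a, b in items) / len(items)
```

Estimating `theta` on a small extracted item set and then predicting full-benchmark scores via `expected_score` is what makes fitness evaluation cheap enough for evolutionary merging on one GPU.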
|
2502.10438
|
Injecting Universal Jailbreak Backdoors into LLMs in Minutes
|
cs.CR cs.AI cs.LG
|
Jailbreak backdoor attacks on LLMs have garnered attention for their
effectiveness and stealth. However, existing methods rely on the crafting of
poisoned datasets and the time-consuming process of fine-tuning. In this work,
we propose JailbreakEdit, a novel jailbreak backdoor injection method that
exploits model editing techniques to inject a universal jailbreak backdoor into
safety-aligned LLMs with minimal intervention in minutes. JailbreakEdit
integrates a multi-node target estimation to estimate the jailbreak space, thus
creating shortcuts from the backdoor to this estimated jailbreak space that
induce jailbreak actions. Our attack effectively shifts the models' attention
by attaching strong semantics to the backdoor, enabling it to bypass internal
safety mechanisms. Experimental results show that JailbreakEdit achieves a high
jailbreak success rate on jailbreak prompts while preserving generation
quality, and safe performance on normal queries. Our findings underscore the
effectiveness, stealthiness, and explainability of JailbreakEdit, emphasizing
the need for more advanced defense mechanisms in LLMs.
|
2502.10439
|
Crypto Miner Attack: GPU Remote Code Execution Attacks
|
cs.CR cs.AI cs.LG
|
Remote Code Execution (RCE) exploits pose a significant threat to AI and ML
systems, particularly in GPU-accelerated environments where the computational
power of GPUs can be misused for malicious purposes. This paper focuses on RCE
attacks leveraging deserialization vulnerabilities and custom layers, such as
TensorFlow Lambda layers, which are often overlooked due to the complexity of
monitoring GPU workloads. These vulnerabilities enable attackers to execute
arbitrary code, blending malicious activity seamlessly into expected model
behavior and exploiting GPUs for unauthorized tasks such as cryptocurrency
mining. Unlike traditional CPU-based attacks, the parallel processing nature of
GPUs and their high resource utilization make runtime detection exceptionally
challenging. In this work, we provide a comprehensive examination of RCE
exploits targeting GPUs, demonstrating an attack that utilizes these
vulnerabilities to deploy a crypto miner on a GPU. We highlight the technical
intricacies of such attacks, emphasize their potential for significant
financial and computational costs, and propose strategies for mitigation. By
shedding light on this underexplored attack vector, we aim to raise awareness
and encourage the adoption of robust security measures in GPU-driven AI and ML
systems, with an emphasis on static and model scanning as an easier way to
detect exploits.
|
2502.10440
|
Towards Copyright Protection for Knowledge Bases of Retrieval-augmented
Language Models via Ownership Verification with Reasoning
|
cs.CR cs.AI cs.CL cs.IR cs.LG
|
Large language models (LLMs) are increasingly integrated into real-world
applications through retrieval-augmented generation (RAG) mechanisms to
supplement their responses with up-to-date and domain-specific knowledge.
However, the valuable and often proprietary nature of the knowledge bases used
in RAG introduces the risk of unauthorized usage by adversaries. Existing
methods that can be generalized as watermarking techniques to protect these
knowledge bases typically involve poisoning attacks. However, these methods
require altering the results of verification samples (\eg, generating incorrect
outputs), inevitably making them susceptible to anomaly detection and even
introducing new security risks. To address these challenges, we propose \name{}
for `harmless' copyright protection of knowledge bases. Instead of manipulating
LLM's final output, \name{} implants distinct verification behaviors in the
space of chain-of-thought (CoT) reasoning, maintaining the correctness of the
final answer. Our method has three main stages: (1) \textbf{Generating CoTs}:
For each verification question, we generate two CoTs, including a target CoT
for building watermark behaviors; (2) \textbf{Optimizing Watermark Phrases and
Target CoTs}: We optimize them to minimize retrieval errors under the black-box
setting of suspicious LLM, ensuring that the watermarked verification queries
activate the target CoTs without being activated in non-watermarked ones; (3)
\textbf{Ownership Verification}: We exploit a pairwise Wilcoxon test to
statistically verify whether a suspicious LLM is augmented with the protected
knowledge base by comparing its responses to watermarked and benign
verification queries. Our experiments on diverse benchmarks demonstrate that
\name{} effectively protects knowledge bases against unauthorized usage while
preserving the integrity and performance of the RAG.
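The verification stage's pairwise Wilcoxon test reduces to comparing signed ranks of paired response scores (watermarked vs. benign queries). A pure-Python sketch of the statistic follows; in practice a library routine such as scipy.stats.wilcoxon would also supply the p-value.

```python
def wilcoxon_signed_rank(x, y):
    """Pairwise Wilcoxon signed-rank statistics (pure-Python sketch).
    Returns (W+, W-): rank sums of positive and negative differences;
    a lopsided split suggests a systematic difference between conditions."""
    diffs = [a - b for a, b in zip(x, y) if a != b]  # drop zero differences
    ranked = sorted((abs(d), d) for d in diffs)
    w_plus = w_minus = 0.0
    i = 0
    while i < len(ranked):
        j = i
        while j < len(ranked) and ranked[j][0] == ranked[i][0]:
            j += 1
        avg_rank = (i + 1 + j) / 2.0  # average rank for ties on |d|
        for _, d in ranked[i:j]:
            if d > 0:
                w_plus += avg_rank
            else:
                w_minus += avg_rank
        i = j
    return w_plus, w_minus
```

Applied here, `x` and `y` would be per-query scores of the suspicious LLM's responses to watermarked and benign verification queries, respectively.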
|
2502.10441
|
AI Alignment at Your Discretion
|
cs.AI cs.CY cs.LG
|
In AI alignment, extensive latitude must be granted to annotators, either
human or algorithmic, to judge which model outputs are `better' or `safer.' We
refer to this latitude as alignment discretion. Such discretion remains largely
unexamined, posing two risks: (i) annotators may use their power of discretion
arbitrarily, and (ii) models may fail to mimic this discretion. To study this
phenomenon, we draw on legal concepts of discretion that structure how
decision-making authority is conferred and exercised, particularly in cases
where principles conflict or their application is unclear or irrelevant.
Extended to AI alignment, discretion is required when alignment principles and
rules are (inevitably) conflicting or indecisive. We present a set of metrics
to systematically analyze when and how discretion in AI alignment is exercised,
such that both risks (i) and (ii) can be observed. Moreover, we distinguish
between human and algorithmic discretion and analyze the discrepancy between
them. By measuring both human and algorithmic discretion over safety alignment
datasets, we reveal layers of discretion in the alignment process that were
previously unaccounted for. Furthermore, we demonstrate how algorithms trained
on these datasets develop their own forms of discretion in interpreting and
applying these principles, which challenges the purpose of having any
principles at all. Our paper presents the first step towards formalizing this
core gap in current alignment processes, and we call on the community to
further scrutinize and control alignment discretion.
|
2502.10442
|
Analysis of Overparameterization in Continual Learning under a Linear
Model
|
cs.LG cs.AI stat.ML
|
Autonomous machine learning systems that learn many tasks in sequence are
prone to the catastrophic forgetting problem. Mathematical theory is needed in
order to understand the extent of forgetting during continual learning. As a
foundational step towards this goal, we study continual learning and
catastrophic forgetting from a theoretical perspective in the simple setting of
gradient descent with no explicit algorithmic mechanism to prevent forgetting.
In this setting, we analytically demonstrate that overparameterization alone
can mitigate forgetting in the context of a linear regression model. We
consider a two-task setting motivated by permutation tasks, and show that as
the overparameterization ratio becomes sufficiently high, a model trained on
both tasks in sequence results in a low-risk estimator for the first task. As
part of this work, we establish a non-asymptotic bound of the risk of a single
linear regression task, which may be of independent interest to the field of
double descent theory.
|
2502.10443
|
One Class Restricted Kernel Machines
|
cs.LG
|
Restricted kernel machines (RKMs) have demonstrated a significant impact in
enhancing generalization ability in the field of machine learning. Recent
studies have introduced various methods within the RKM framework, combining
kernel functions with the least squares support vector machine (LSSVM) in a
manner similar to the energy function of restricted Boltzmann machines (RBMs),
such that a better performance can be achieved. However, RKM's efficacy can be
compromised by the presence of outliers and other forms of contamination within
the dataset. These anomalies can skew the learning process, leading to less
accurate and reliable outcomes. To address this critical issue and to ensure
the robustness of the model, we propose the novel one-class RKM (OCRKM). In the
framework of OCRKM, we employ an energy function akin to that of the RBM, which
integrates both visible and hidden variables in a nonprobabilistic setting. The
formulation of the proposed OCRKM facilitates the seamless integration of a
one-class classification method with the RKM, enhancing its capability to
detect outliers and anomalies effectively. The proposed OCRKM model is
evaluated over UCI benchmark datasets. Experimental findings and statistical
analyses consistently emphasize the superior generalization capabilities of the
proposed OCRKM model over baseline models across all scenarios.
|
2502.10444
|
A Survey of Representation Learning, Optimization Strategies, and
Applications for Omnidirectional Vision
|
cs.CV
|
Omnidirectional image (ODI) data is captured with a field-of-view of 360x180,
which is much wider than that of pinhole cameras and captures richer
surrounding environment details than conventional perspective images. In recent years,
the availability of customer-level 360 cameras has made omnidirectional vision
more popular, and the advance of deep learning (DL) has significantly sparked
its research and applications. This paper presents a systematic and
comprehensive review and analysis of the recent progress of DL for
omnidirectional vision. It delineates the distinct challenges and complexities
encountered in applying DL to omnidirectional images as opposed to traditional
perspective imagery. Our work covers five main topics: (i) A thorough
introduction to the principles of omnidirectional imaging and commonly explored
projections of ODI; (ii) A methodical review of varied representation learning
approaches tailored for ODI; (iii) An in-depth investigation of optimization
strategies specific to omnidirectional vision; (iv) A structural and
hierarchical taxonomy of the DL methods for the representative omnidirectional
vision tasks, from visual enhancement (e.g., image generation and
super-resolution) to 3D geometry and motion estimation (e.g., depth and optical
flow estimation), alongside the discussions on emergent research directions;
(v) An overview of cutting-edge applications (e.g., autonomous driving and
virtual reality), coupled with a critical discussion on prevailing challenges
and open questions, to trigger more research in the community.
|
2502.10446
|
Evaluating and Explaining Earthquake-Induced Liquefaction Potential
through Multi-Modal Transformers
|
cs.LG physics.geo-ph
|
This study presents an explainable parallel transformer architecture for soil
liquefaction prediction that integrates three distinct data streams: spectral
seismic encoding, soil stratigraphy tokenization, and site-specific features.
The architecture processes data from 165 case histories across 11 major
earthquakes, employing Fast Fourier Transform for seismic waveform encoding and
principles from large language models for soil layer tokenization.
Interpretability is achieved through SHapley Additive exPlanations (SHAP),
which decompose predictions into individual contributions from seismic
characteristics, soil properties, and site conditions. The model achieves
93.75% prediction accuracy on cross-regional validation sets and demonstrates
robust performance through sensitivity analysis of ground motion intensity and
soil resistance parameters. Notably, validation against previously unseen
ground motion data from the 2024 Noto Peninsula earthquake confirms the model's
generalization capabilities and practical utility. Implementation as a publicly
accessible web application enables rapid assessment of multiple sites
simultaneously. This approach establishes a new framework in geotechnical deep
learning where sophisticated multi-modal analysis meets practical engineering
requirements through quantitative interpretation and accessible deployment.
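SHAP approximates Shapley values; for a toy additive "liquefaction score" over a handful of hypothetical features, they can be computed exactly by enumerating feature orderings. The feature names and weights below are invented for illustration, not drawn from the paper.

```python
from itertools import permutations

def shapley_values(features, value_fn):
    """Exact Shapley attribution by averaging marginal contributions over
    all feature orderings (what SHAP approximates for larger models)."""
    phi = {f: 0.0 for f in features}
    perms = list(permutations(features))
    for order in perms:
        present = set()
        prev = value_fn(present)
        for f in order:
            present.add(f)
            cur = value_fn(present)
            phi[f] += cur - prev  # marginal contribution of f in this order
            prev = cur
    return {f: v / len(perms) for f, v in phi.items()}

# toy additive "liquefaction score" over three hypothetical inputs
weights = {"pga": 0.5, "spt_resistance": 0.3, "depth": 0.2}
score = lambda s: sum(weights[f] for f in s)
phi = shapley_values(list(weights), score)
```

For an additive model the Shapley values recover the weights exactly, and by the efficiency property they always sum to the full-model score minus the empty-set baseline.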
|
2502.10447
|
MoHAVE: Mixture of Hierarchical Audio-Visual Experts for Robust Speech
Recognition
|
eess.AS cs.CL cs.LG
|
Audio-visual speech recognition (AVSR) has become critical for enhancing
speech recognition in noisy environments by integrating both auditory and
visual modalities. However, existing AVSR systems struggle to scale up without
compromising computational efficiency. In this study, we introduce MoHAVE
(Mixture of Hierarchical Audio-Visual Experts), a novel robust AVSR framework
designed to address these scalability constraints. By leveraging a
Mixture-of-Experts (MoE) architecture, MoHAVE activates modality-specific
expert groups, ensuring dynamic adaptation to various audio-visual inputs with
minimal computational overhead. Key contributions of MoHAVE include: (1) a
sparse MoE framework that efficiently scales AVSR model capacity, (2) a
hierarchical gating mechanism that dynamically utilizes the expert groups based
on input context, enhancing adaptability and robustness, and (3) remarkable
performance across robust AVSR benchmarks, including LRS3 and MuAViC
transcription and translation tasks, setting a new standard for scalable speech
recognition systems.
|
2502.10450
|
Trustworthy AI on Safety, Bias, and Privacy: A Survey
|
cs.CR cs.AI cs.CL cs.LG
|
The capabilities of artificial intelligence systems have been advancing to a
great extent, but these systems still struggle with failure modes,
vulnerabilities, and biases. In this paper, we study the current state of the
field, and present promising insights and perspectives regarding concerns that
challenge the trustworthiness of AI models. In particular, this paper
investigates the issues regarding three thrusts: safety, privacy, and bias,
which hurt models' trustworthiness. For safety, we discuss safety alignment in
the context of large language models, preventing them from generating toxic or
harmful content. For bias, we focus on spurious biases that can mislead a
network. Lastly, for privacy, we cover membership inference attacks in deep
neural networks. The discussions addressed in this paper reflect our own
experiments and observations.
|
2502.10451
|
FlexControl: Computation-Aware ControlNet with Differentiable Router for
Text-to-Image Generation
|
cs.LG cs.GR
|
ControlNet offers a powerful way to guide diffusion-based generative models,
yet most implementations rely on ad-hoc heuristics to choose which network
blocks to control, an approach that varies unpredictably with different tasks.
To address this gap, we propose FlexControl, a novel framework that copies all
diffusion blocks during training and employs a trainable gating mechanism to
dynamically select which blocks to activate at each denoising step. By
introducing a computation-aware loss, we encourage control blocks to activate
only when they benefit generation quality. By eliminating manual block
selection, FlexControl enhances adaptability across diverse tasks and
streamlines the design pipeline, applying the computation-aware loss in an
end-to-end training manner. Through comprehensive experiments on both UNet
(e.g., SD1.5) and DiT (e.g., SD3.0), we show that our method outperforms
existing ControlNet variants in certain key aspects of interest. As evidenced
by both quantitative and qualitative evaluations, FlexControl preserves or
enhances image fidelity while also reducing computational overhead by
selectively activating the most relevant blocks. These results underscore the
potential of a flexible, data-driven approach for controlled diffusion and open
new avenues for efficient generative model design. The code will soon be
available at https://github.com/Anonymousuuser/FlexControl.
|
2502.10452
|
Quaternion-Hadamard Network: A Novel Defense Against Adversarial Attacks
with a New Dataset
|
cs.LG eess.IV
|
This paper addresses the vulnerability of deep-learning models designed for
rain, snow, and haze removal. Despite enhancing image quality in adverse
weather, these models are susceptible to adversarial attacks that compromise
their effectiveness. Traditional defenses such as adversarial training and
model distillation often require extensive retraining, making them costly and
impractical for real-world deployment. While denoising and super-resolution
techniques can aid image classification models, they impose high computational
demands and introduce visual artifacts that hinder image processing tasks. We
propose a model-agnostic defense against first-order white-box adversarial
attacks using the Quaternion-Hadamard Network (QHNet) to tackle these
challenges. White-box attacks are particularly difficult to defend against
since attackers have full access to the model's architecture, weights, and
training procedures. Our defense introduces the Quaternion Hadamard Denoising
Convolutional Block (QHDCB) and the Quaternion Denoising Residual Block (QDRB),
leveraging polynomial thresholding. QHNet incorporates these blocks within an
encoder-decoder architecture, enhanced by feature refinement, to effectively
neutralize adversarial noise. Additionally, we introduce the Adversarial
Weather Conditions Vision Dataset (AWCVD), created by applying first-order
gradient attacks on state-of-the-art weather removal techniques in scenarios
involving haze, rain streaks, and snow. Using PSNR and SSIM metrics, we
demonstrate that QHNet significantly enhances the robustness of low-level
computer vision models against adversarial attacks compared with
state-of-the-art denoising and super-resolution techniques. The source code and
dataset will be released alongside the final version of this paper.
|
2502.10453
|
Linking Cryptoasset Attribution Tags to Knowledge Graph Entities: An
LLM-based Approach
|
cs.CR cs.AI cs.CL cs.DB cs.LG
|
Attribution tags form the foundation of modern cryptoasset forensics.
However, inconsistent or incorrect tags can mislead investigations and even
result in false accusations. To address this issue, we propose a novel
computational method based on Large Language Models (LLMs) to link attribution
tags with well-defined knowledge graph concepts. We implemented this method in
an end-to-end pipeline and conducted experiments showing that our approach
outperforms baseline methods by up to 37.4% in F1-score across three publicly
available attribution tag datasets. By integrating concept filtering and
blocking procedures, we generate candidate sets containing five knowledge graph
entities, achieving a recall of 93% without the need for labeled data.
Additionally, we demonstrate that local LLMs can achieve F1-scores of
90%, comparable to remote models which achieve 94%. We also analyze the
cost-performance trade-offs of various LLMs and prompt templates, showing that
selecting the most cost-effective configuration can reduce costs by 90%, with
only a 1% decrease in performance. Our method not only enhances attribution tag
quality but also serves as a blueprint for fostering more reliable forensic
evidence.
|
2502.10454
|
One Example Shown, Many Concepts Known! Counterexample-Driven Conceptual
Reasoning in Mathematical LLMs
|
cs.LG cs.AI cs.CL
|
Leveraging mathematical Large Language Models (LLMs) for proof generation is
a fundamental topic in LLM research. We argue that the ability of current LLMs
to prove statements largely depends on whether they have encountered the
relevant proof process during training. This reliance limits their deeper
understanding of mathematical theorems and related concepts. Inspired by the
pedagogical method of "proof by counterexamples" commonly used in human
mathematics education, our work aims to enhance LLMs' ability to conduct
mathematical reasoning and proof through counterexamples. Specifically, we
manually create a high-quality, university-level mathematical benchmark,
CounterMATH, which requires LLMs to prove mathematical statements by providing
counterexamples, thereby assessing their grasp of mathematical concepts.
Additionally, we develop a data engineering framework to automatically obtain
training data for further model improvement. Extensive experiments and detailed
analyses demonstrate that CounterMATH is challenging, indicating that LLMs,
such as OpenAI o1, have insufficient counterexample-driven proof capabilities.
Moreover, our exploration into model training reveals that strengthening LLMs'
counterexample-driven conceptual reasoning abilities is crucial for improving
their overall mathematical capabilities. We believe that our work offers new
perspectives on the community of mathematical LLMs.
|
2502.10455
|
E2LVLM: Evidence-Enhanced Large Vision-Language Model for Multimodal
Out-of-Context Misinformation Detection
|
cs.LG cs.MM
|
Recent studies in Large Vision-Language Models (LVLMs) have demonstrated
impressive advancements in multimodal Out-of-Context (OOC) misinformation
detection, discerning whether an authentic image is wrongly used in a claim.
Despite their success, the textual evidence of authentic images retrieved from
the inverse search is directly transmitted to LVLMs, leading to inaccurate or
false information in the decision-making phase. To this end, we present E2LVLM,
a novel evidence-enhanced large vision-language model by adapting textual
evidence in two levels. First, motivated by the fact that textual evidence
provided by external tools struggles to align with LVLM inputs, we devise a
reranking and rewriting strategy for generating coherent and contextually
attuned content, thereby driving the aligned and effective behavior of LVLMs
pertinent to authentic images. Second, to address the scarcity of news domain
datasets with both judgment and explanation, we generate a novel OOC multimodal
instruction-following dataset by prompting LVLMs with informative content to
acquire plausible explanations. Further, we develop a multimodal
instruction-tuning strategy with convincing explanations for beyond detection.
This scheme contributes to E2LVLM for multimodal OOC misinformation detection
and explanation. Extensive experiments demonstrate that E2LVLM outperforms
state-of-the-art methods and provides compelling rationales for its judgments.
|
2502.10456
|
Deep Reinforcement Learning-Based User Scheduling for Collaborative
Perception
|
cs.LG cs.RO
|
Stand-alone perception systems in autonomous driving suffer from limited
sensing ranges and occlusions at extended distances, potentially resulting in
catastrophic outcomes. To address this issue, collaborative perception is
envisioned to improve perceptual accuracy by using vehicle-to-everything (V2X)
communication to enable collaboration among connected and autonomous vehicles
and roadside units. However, due to limited communication resources, it is
impractical for all units to transmit sensing data such as point clouds or
high-definition video. As a result, it is essential to optimize the scheduling
of communication links to ensure efficient spectrum utilization for the
exchange of perceptual data. In this work, we propose a deep reinforcement
learning-based V2X user scheduling algorithm for collaborative perception.
Given the challenges in acquiring perceptual labels, we reformulate the
conventional label-dependent objective into a label-free goal, based on
characteristics of 3D object detection. Incorporating both channel state
information (CSI) and semantic information, we develop a double deep Q-Network
(DDQN)-based user scheduling framework for collaborative perception, named
SchedCP. Simulation results verify the effectiveness and robustness of SchedCP
compared with traditional V2X scheduling methods. Finally, we present a case
study to illustrate how our proposed algorithm adaptively modifies the
scheduling decisions by taking both instantaneous CSI and perceptual semantics
into account.
|
2502.10458
|
I Think, Therefore I Diffuse: Enabling Multimodal In-Context Reasoning
in Diffusion Models
|
cs.LG cs.AI
|
This paper presents ThinkDiff, a novel alignment paradigm that empowers
text-to-image diffusion models with multimodal in-context understanding and
reasoning capabilities by integrating the strengths of vision-language models
(VLMs). Existing multimodal diffusion finetuning methods largely focus on
pixel-level reconstruction rather than in-context reasoning, and are
constrained by the complexity and limited availability of reasoning-based
datasets. ThinkDiff addresses these challenges by leveraging vision-language
training as a proxy task, aligning VLMs with the decoder of an encoder-decoder
large language model (LLM) instead of a diffusion decoder. This proxy task
builds on the observation that the $\textbf{LLM decoder}$ shares the same input
feature space with $\textbf{diffusion decoders}$ that use the corresponding
$\textbf{LLM encoder}$ for prompt embedding. As a result, aligning VLMs with
diffusion decoders can be simplified through alignment with the LLM decoder.
Without complex training and datasets, ThinkDiff effectively unleashes
understanding, reasoning, and composing capabilities in diffusion models.
Experiments demonstrate that ThinkDiff significantly improves accuracy from
19.2% to 46.3% on the challenging CoBSAT benchmark for multimodal in-context
reasoning generation, with only 5 hours of training on 4 A100 GPUs.
Additionally, ThinkDiff demonstrates exceptional performance in composing
multiple images and texts into logically coherent images. Project page:
https://mizhenxing.github.io/ThinkDiff.
|
2502.10459
|
LLM4GNAS: A Large Language Model Based Toolkit for Graph Neural
Architecture Search
|
cs.LG cs.AI
|
Graph Neural Architecture Search (GNAS) facilitates the automatic design of
Graph Neural Networks (GNNs) tailored to specific downstream graph learning
tasks. However, existing GNAS approaches often require manual adaptation to new
graph search spaces, necessitating substantial code optimization and
domain-specific knowledge. To address this challenge, we present LLM4GNAS, a
toolkit for GNAS that leverages the generative capabilities of Large Language
Models (LLMs). LLM4GNAS includes a library of LLM-based graph neural
architecture search algorithms, enabling the adaptation of GNAS
methods to new search spaces through the modification of LLM prompts. This
approach reduces the need for manual intervention in algorithm adaptation and
code modification. The LLM4GNAS toolkit is extensible and robust, incorporating
LLM-enhanced graph feature engineering, LLM-enhanced graph neural architecture
search, and LLM-enhanced hyperparameter optimization. Experimental results
indicate that LLM4GNAS outperforms existing GNAS methods on tasks involving
both homogeneous and heterogeneous graphs.
|
2502.10460
|
SenDaL: An Effective and Efficient Calibration Framework of Low-Cost
Sensors for Daily Life
|
cs.LG
|
The collection of accurate and noise-free data is a crucial part of Internet
of Things (IoT)-controlled environments. However, the data collected from
various sensors in daily life often suffer from inaccuracies. Additionally,
IoT-controlled devices with low-cost sensors lack sufficient hardware resources
to employ conventional deep-learning models. To overcome this limitation, we
propose sensors for daily life (SenDaL), the first framework that utilizes
neural networks for calibrating low-cost sensors. SenDaL introduces novel
training and inference processes that enable it to achieve accuracy comparable
to deep learning models while simultaneously preserving latency and energy
consumption similar to linear models. SenDaL is first trained in a bottom-up
manner, making decisions based on calibration results from both linear and deep
learning models. Once both models are trained, SenDaL makes independent
decisions through a top-down inference process, ensuring accuracy and inference
speed. Furthermore, SenDaL can select the optimal deep learning model according
to the resources of the IoT devices because it is compatible with various deep
learning models, such as long short-term memory-based and Transformer-based
models. We have verified that SenDaL outperforms existing deep learning models
in terms of accuracy, latency, and energy efficiency through experiments
conducted in different IoT environments and real-life scenarios.
|
2502.10461
|
Performance of energy harvesters with parameter mismatch
|
eess.SY cs.SY
|
This study explores the impact of parameter mismatch on the stability of
cross-well motion in energy harvesters, using a basin stability metric. Energy
harvesters, essential for converting ambient energy into electricity,
increasingly incorporate multi-well systems to enhance efficiency. However,
these systems are sensitive to initial conditions and parameter variations,
which can affect their ability to sustain optimal cross-well motion -- a state
associated with maximum power output. Our analysis compared four harvester
types under varying levels of parameter mismatch, assessing the devices'
resilience to parameter variations. By identifying safe operating ranges within
the excitation parameter space, this study provides practical guidance for
designing robust, stable harvesters capable of maintaining cross-well motion
despite parameter uncertainties. These insights contribute to advancing the
reliability of energy harvesting devices in real-world applications where
parameter mismatches are inevitable.
|
2502.10463
|
From Layers to States: A State Space Model Perspective to Deep Neural
Network Layer Dynamics
|
cs.LG cs.AI cs.NI
|
The depth of neural networks is a critical factor for their capability, with
deeper models often demonstrating superior performance. Motivated by this,
significant efforts have been made to enhance layer aggregation (reusing
information from previous layers to better extract features at the current
layer) to improve the representational power of deep neural networks. However,
previous works have primarily addressed this problem from a discrete-state
perspective, which is not suitable as the number of network layers grows. This
paper instead treats the outputs of layers as states of a continuous process
and considers leveraging the state space model (SSM) to design the aggregation
of layers in very deep neural networks. Moreover, inspired by their success
in modeling long sequences, Selective State Space Models (S6) are employed
to design a new module called Selective State Space Model Layer Aggregation
(S6LA). This module aims to combine traditional CNN or transformer
architectures within a sequential framework, enhancing the representational
capabilities of state-of-the-art vision networks. Extensive experiments show
that S6LA delivers substantial improvements in both image classification and
detection tasks, highlighting the potential of integrating SSMs with
contemporary deep learning techniques.
|