| id | title | categories | abstract |
|---|---|---|---|
2501.12957
|
Fixed-Budget Change Point Identification in Piecewise Constant Bandits
|
stat.ML cs.LG
|
We study the piecewise constant bandit problem where the expected reward is a
piecewise constant function with one change point (discontinuity) across the
action space $[0,1]$ and the learner's aim is to locate the change point. Under
the assumption of a fixed exploration budget, we provide the first
non-asymptotic analysis of policies designed to locate abrupt changes in the
mean reward function under bandit feedback. We study the problem under a large
and small budget regime, and for both settings establish lower bounds on the
error probability and provide algorithms with near matching upper bounds.
Interestingly, our results show a separation in the complexity of the two
regimes. We then propose a regime adaptive algorithm which is near optimal for
both small and large budgets simultaneously. We complement our theoretical
analysis with experimental results in simulated environments to support our
findings.
|
2501.12958
|
A Novel Tracking Framework for Devices in X-ray Leveraging Supplementary
Cue-Driven Self-Supervised Features
|
cs.CV cs.AI
|
To restore proper blood flow in blocked coronary arteries via angioplasty
procedure, accurate placement of devices such as catheters, balloons, and
stents under live fluoroscopy or diagnostic angiography is crucial. Identified
balloon markers help in enhancing stent visibility in X-ray sequences, while
the catheter tip aids in precise navigation and co-registering vessel
structures, reducing the need for contrast in angiography. However, accurate
detection of these devices in interventional X-ray sequences faces significant
challenges, particularly due to occlusions from contrasted vessels and other
devices and distractions from the surroundings, resulting in failures to track
such small objects. While most tracking methods rely on spatial correlation of
past and current appearance, they often lack strong motion comprehension
essential for navigating through these challenging conditions, and fail to
effectively detect multiple instances in the scene. To overcome these
limitations, we propose a self-supervised learning approach that enhances its
spatio-temporal understanding by incorporating supplementary cues and learning
across multiple representation spaces on a large dataset. Following this, we
introduce a generic real-time tracking framework that effectively leverages the
pretrained spatio-temporal network and also takes the historical appearance and
trajectory data into account. This results in enhanced localization of multiple
instances of device landmarks. Our method outperforms state-of-the-art methods
in interventional X-ray device tracking, especially in stability and robustness,
achieving an 87% reduction in max error for balloon marker detection and a 61%
reduction in max error for catheter tip detection.
|
2501.12959
|
Efficient Prompt Compression with Evaluator Heads for Long-Context
Transformer Inference
|
cs.CL
|
Although applications involving long-context inputs are crucial for the
effective utilization of large language models (LLMs), they also result in
increased computational costs and reduced performance. To address this
challenge, we propose an efficient, training-free prompt compression method
that retains key information within compressed prompts. We identify specific
attention heads in transformer-based LLMs, which we designate as evaluator
heads, that are capable of selecting tokens in long inputs that are most
significant for inference. Building on this discovery, we develop EHPC, an
Evaluator Head-based Prompt Compression method, which enables LLMs to rapidly
"skim through" input prompts by leveraging only the first few layers with
evaluator heads during the pre-filling stage, subsequently passing only the
important tokens to the model for inference. EHPC achieves state-of-the-art
results across two mainstream benchmarks: prompt compression and long-context
inference acceleration. Consequently, it effectively reduces the complexity and
costs associated with commercial API calls. We further demonstrate that EHPC
attains competitive results compared to key-value cache-based acceleration
methods, thereby highlighting its potential to enhance the efficiency of LLMs
for long-context tasks.
|
2501.12962
|
It's complicated. The relationship of algorithmic fairness and
non-discrimination regulations in the EU AI Act
|
cs.LG cs.AI cs.CY
|
What constitutes a fair decision? This question is not only difficult for
humans but becomes more challenging when Artificial Intelligence (AI) models
are used. In light of discriminatory algorithmic behaviors, the EU has recently
passed the AI Act, which mandates specific rules for AI models, incorporating
both traditional legal non-discrimination regulations and machine learning
based algorithmic fairness concepts. This paper aims to bridge these two
different concepts in the AI Act through, first, a high-level introduction to
both concepts targeting legal and computer science-oriented scholars and,
second, an in-depth analysis of the relationship between legal
non-discrimination regulations and algorithmic fairness in the AI Act. Our analysis reveals
three key findings: (1) most non-discrimination regulations target only
high-risk AI systems; (2) the regulation of high-risk systems encompasses
both data input requirements and output monitoring, though these regulations
are often inconsistent and raise questions of computational feasibility; (3)
regulations for General Purpose AI Models, such as Large Language Models that
are not simultaneously classified as high-risk systems, currently lack
specificity compared to other regulations. Based on these findings, we
recommend developing more specific auditing and testing methodologies for AI
systems. This paper aims to serve as a foundation for future interdisciplinary
collaboration between legal scholars and computer science-oriented machine
learning researchers studying discrimination in AI systems.
|
2501.12969
|
Lipschitz Safe Bayesian Optimization for Automotive Control
|
eess.SY cs.SY
|
Controller tuning is a labor-intensive process that requires human
intervention and expert knowledge. Bayesian optimization has been applied
successfully in different fields to automate this process. However, when tuning
on hardware, such as in automotive applications, strict safety requirements
often arise. To obtain safety guarantees, many existing safe Bayesian
optimization methods rely on assumptions that are hard to verify in practice.
This leads to the use of unjustified heuristics in many applications, which
invalidates the theoretical safety guarantees. Furthermore, applications often
require multiple safety constraints to be satisfied simultaneously. Building on
recently proposed Lipschitz-only safe Bayesian optimization, we develop an
algorithm that relies on readily interpretable assumptions and satisfies
multiple safety constraints at the same time. We apply this algorithm to the
problem of automatically tuning a trajectory-tracking controller of a
self-driving car. Results both from simulations and an actual test vehicle
underline the algorithm's ability to learn tracking controllers without leaving
the track or violating any other safety constraints.
|
2501.12971
|
On Universal Decoding over Discrete Additive Channels by Noise Guessing
|
cs.IT math.IT
|
We study universal decoding over parametric discrete additive channels. Our
decoders are variants of noise guessing decoders that use estimators for the
probability of a noise sequence, when the actual channel law is unknown. A
deterministic version produces noise sequences in a fixed order, and a
randomised one draws them at random; noise sequences are then queried whether
they result in a valid codeword when subtracted from the received sequence. In
all cases, we give sufficient conditions on the family of parametric channels
for the decoding strategies to be random-coding strongly universal, and we
derive non-asymptotic upper bounds for the complexity of such strategies. We
give examples of families in which our results hold, and provide a numerical
example illustrating the performance of these strategies.
|
2501.12972
|
Accessible Smart Contracts Verification: Synthesizing Formal Models with
Tamed LLMs
|
cs.SE cs.AI
|
When blockchain systems are said to be trustless, what this really means is
that all the trust is put into software. Thus, there are strong incentives to
ensure blockchain software is correct -- vulnerabilities here cost millions and
break businesses. One of the most powerful ways of establishing software
correctness is by using formal methods. Approaches based on formal methods,
however, induce a significant overhead in terms of time and expertise required
to successfully employ them. Our work addresses this critical disadvantage by
automating the creation of a formal model -- a mathematical abstraction of the
software system -- which is often a core task when employing formal methods. We
perform model synthesis in three phases: we first transpile the code into model
stubs; then we "fill in the blanks" using a large language model (LLM);
finally, we iteratively repair the generated model at both the syntactic and
semantic levels. In this way, we significantly reduce the amount of time
necessary to create formal models and increase accessibility of valuable
software verification methods that rely on them. The practical context of our
work was reducing the time-to-value of using formal models for correctness
audits of smart contracts.
|
2501.12974
|
MorphoSkel3D: Morphological Skeletonization of 3D Point Clouds for
Informed Sampling in Object Classification and Retrieval
|
cs.CV
|
Point clouds are a set of data points in space to represent the 3D geometry
of objects. A fundamental step in the processing is to identify a subset of
points to represent the shape. While traditional sampling methods often fail
to incorporate geometrical information, recent developments in learning-based
sampling models have achieved significant levels of performance. With the
integration of geometrical priors, the ability to learn and preserve the
underlying structure can be enhanced when sampling. To shed light on the
shape, a qualitative skeleton serves as an effective descriptor to guide
sampling for both local and global geometries. In this paper, we introduce
MorphoSkel3D as a new technique based on morphology to facilitate an efficient
skeletonization of shapes. With its low computational cost, MorphoSkel3D is a
unique rule-based algorithm, whose quality and performance we benchmark on two
large datasets, ModelNet and ShapeNet, under different sampling ratios. The
results show that training with MorphoSkel3D leads to an informed and more
accurate sampling in the practical application of object classification and
point cloud retrieval.
|
2501.12975
|
OnionEval: A Unified Evaluation of Fact-conflicting Hallucination for
Small-Large Language Models
|
cs.CL
|
Large Language Models (LLMs) are highly capable but require significant
computational resources for both training and inference. Within the LLM family,
smaller models (those with fewer than 10 billion parameters) also perform well
across various tasks. However, these smaller models share limitations with
their larger counterparts, including the tendency to hallucinate. Despite
the existence of many benchmarks to evaluate hallucination in LLMs, few have
specifically focused on small LLMs (SLLMs). Additionally, SLLMs show widely
varying performance across different benchmarks. In this paper, we introduce
OnionEval, a multi-layer structured framework with a specific metric called the
context-influence score (CI), designed to effectively assess the
fact-conflicting hallucination tendencies of small LLMs across different
contextual levels. Our experimental results reveal a key feature of SLLMs: they
excel in factual analysis but face challenges with context reasoning. Further
investigation shows that a simple Chain-of-Thought strategy can significantly
reduce these limitations, improving the practical usefulness of SLLMs in
real-world applications.
|
2501.12976
|
LiT: Delving into a Simplified Linear Diffusion Transformer for Image
Generation
|
cs.CV
|
In commonly used sub-quadratic complexity modules, linear attention benefits
from simplicity and high parallelism, making it promising for image synthesis
tasks. However, the architectural design and learning strategy for linear
attention remain underexplored in this field. In this paper, we offer a suite
of ready-to-use solutions for efficient linear diffusion Transformers. Our core
contributions include: (1) Simplified Linear Attention using few heads,
observing the free-lunch effect of performance without latency increase. (2)
Weight inheritance from a fully pre-trained diffusion Transformer: initializing
linear Transformer using pre-trained diffusion Transformer and loading all
parameters except for those related to linear attention. (3) Hybrid knowledge
distillation objective: using a pre-trained diffusion Transformer to help the
training of the student linear Transformer, supervising not only the predicted
noise but also the variance of the reverse diffusion process. These guidelines
lead to our proposed Linear Diffusion Transformer (LiT), an efficient
text-to-image Transformer that can be deployed offline on a laptop. Experiments
show that on class-conditional 256*256 and 512*512 ImageNet benchmarks, LiT
achieves highly competitive FID while reducing training steps by 80% and 77%
compared to DiT. LiT also rivals methods based on Mamba or Gated Linear
Attention. Besides, for text-to-image generation, LiT allows for the rapid
synthesis of up to 1K resolution photorealistic images. Project page:
https://techmonsterwang.github.io/LiT/.
|
2501.12978
|
Galois groups of polynomials and neurosymbolic networks
|
cs.LG cs.AI math.HO
|
This paper introduces a novel approach to understanding Galois theory, one of
the foundational areas of algebra, through the lens of machine learning. By
analyzing polynomial equations with machine learning techniques, we aim to
streamline the process of determining solvability by radicals and explore
broader applications within Galois theory. This summary encapsulates the
background, methodology, potential applications, and challenges of using data
science in Galois theory.
More specifically, we design a neurosymbolic network to classify Galois
groups and show how this is more efficient than usual neural networks. We
discover some very interesting distributions of polynomials for groups not
isomorphic to the symmetric and alternating groups.
|
2501.12979
|
FlanEC: Exploring Flan-T5 for Post-ASR Error Correction
|
cs.CL cs.AI cs.SD eess.AS
|
In this paper, we present an encoder-decoder model leveraging Flan-T5 for
post-Automatic Speech Recognition (ASR) Generative Speech Error Correction
(GenSEC), and we refer to it as FlanEC. We explore its application within the
GenSEC framework to enhance ASR outputs by mapping n-best hypotheses into a
single output sentence. By utilizing n-best lists from ASR models, we aim to
improve the linguistic correctness, accuracy, and grammaticality of final ASR
transcriptions. Specifically, we investigate whether scaling the training data
and incorporating diverse datasets can lead to significant improvements in
post-ASR error correction. We evaluate FlanEC using the HyPoradise dataset,
providing a comprehensive analysis of the model's effectiveness in this domain.
Furthermore, we assess the proposed approach under different settings to
evaluate model scalability and efficiency, offering valuable insights into the
potential of instruction-tuned encoder-decoder models for this task.
|
2501.12980
|
Implicit Causality-biases in humans and LLMs as a tool for benchmarking
LLM discourse capabilities
|
cs.CL
|
In this paper, we compare data generated with mono- and multilingual LLMs
spanning a range of model sizes with data provided by human participants in an
experimental setting investigating well-established discourse biases. Beyond
the comparison as such, we aim to develop a benchmark to assess the
capabilities of LLMs with discourse biases as a robust proxy for more general
discourse understanding capabilities. More specifically, we investigated
Implicit Causality verbs, for which psycholinguistic research has found
participants to display biases with regard to three phenomena: the
establishment of (i) coreference relations (Experiment 1), (ii) coherence
relations (Experiment 2), and (iii) the use of particular referring expressions
(Experiments 3 and 4). With regard to coreference biases, we found that only the
largest monolingual LLM (German Bloom 6.4B) displayed more human-like biases.
For coherence relations, no LLM displayed the explanation bias usually found for
humans. For referring expressions, all LLMs displayed a preference for
referring to subject arguments with simpler forms than to objects. However, no
bias effect on referring expressions was found, in contrast to recent studies
investigating human biases.
|
2501.12981
|
UniUIR: Considering Underwater Image Restoration as An All-in-One
Learner
|
cs.CV
|
Existing underwater image restoration (UIR) methods generally only handle
color distortion or jointly address color and haze issues, but they often
overlook the more complex degradations that can occur in underwater scenes. To
address this limitation, we propose a Universal Underwater Image Restoration
method, termed UniUIR, which treats the complex scenario of real-world
underwater mixed distortions in an all-in-one manner. To decouple
degradation-specific issues and explore the inter-correlations among various
degradations in the UIR task, we design a Mamba Mixture-of-Experts module. This
module enables each expert to identify distinct types of degradation and
collaboratively extract task-specific priors while maintaining global feature
representation based on linear complexity. Building upon this foundation, to
enhance degradation representation and address the task conflicts that arise
when handling multiple types of degradation, we introduce the spatial-frequency
prior generator. This module extracts degradation prior information in both
spatial and frequency domains, and adaptively selects the most appropriate
task-specific prompts based on image content, thereby improving the accuracy of
image restoration. Finally, to more effectively address complex,
region-dependent distortions in the UIR task, we incorporate depth information
derived from a large-scale pre-trained depth prediction model, thereby enabling
the network to perceive and leverage depth variations across different image
regions to handle localized degradation. Extensive experiments demonstrate that
UniUIR can produce more attractive results in both qualitative and quantitative
comparisons, and shows stronger generalization than state-of-the-art methods.
|
2501.12982
|
Low-dimensional adaptation of diffusion models: Convergence in total
variation
|
stat.ML cs.LG
|
This paper investigates how diffusion generative models leverage (unknown)
low-dimensional structure to accelerate sampling. Focusing on two mainstream
samplers -- the denoising diffusion implicit model (DDIM) and the denoising
diffusion probabilistic model (DDPM) -- and assuming accurate score estimates,
we prove that their iteration complexities are no greater than the order of
$k/\varepsilon$ (up to some log factor), where $\varepsilon$ is the precision
in total variation distance and $k$ is some intrinsic dimension of the target
distribution. Our results are applicable to a broad family of target
distributions without requiring smoothness or log-concavity assumptions.
Further, we develop a lower bound that suggests the (near) necessity of the
coefficients introduced by Ho et al. (2020) and Song et al. (2020) in
facilitating low-dimensional adaptation. Our findings provide the first
rigorous evidence for the adaptivity of the DDIM-type samplers to unknown
low-dimensional structure, and improve over the state-of-the-art DDPM theory
regarding total variation convergence.
|
2501.12984
|
Lower Bounds on the Sub-Packetization of Optimal-Access MSR Codes for
Multiple-Node Repair
|
cs.IT math.IT
|
We establish lower bounds on the sub-packetization of optimal-access MSR
codes in the context of multiple-node failures. These bounds generalize the
tight bounds for single-node failure presented by Balaji et al. (IEEE
Transactions on Information Theory, vol. 68, no. 10, 2022). Moreover, we
utilize generating functions to provide a more refined analysis, further
strengthening these bounds.
|
2501.12989
|
Learning-based Distributed Model Predictive Control using Multi-Agent
Bayesian Optimization
|
eess.SY cs.SY
|
This paper presents a fusion of Multi-agent Bayesian Optimization (MABO) and
Distributed Model Predictive Control (DMPC) aiming at learning the DMPC schemes
with imperfect local models in a distributed manner. In the proposed method, we
use a dual-decomposition method for a DMPC and leverage an Alternating
Direction Method of Multipliers (ADMM)-based MABO for coordinated learning of
the parameterized DMPC scheme to improve the closed-loop performance of the
local MPC schemes even if their models cannot capture the real multi-agent
system perfectly.
|
2501.12991
|
An Offline Multi-Agent Reinforcement Learning Framework for Radio
Resource Management
|
cs.MA cs.LG
|
Offline multi-agent reinforcement learning (MARL) addresses key limitations
of online MARL, such as safety concerns, expensive data collection, extended
training intervals, and high signaling overhead caused by online interactions
with the environment. In this work, we propose an offline MARL algorithm for
radio resource management (RRM), focusing on optimizing scheduling policies for
multiple access points (APs) to jointly maximize the sum and tail rates of user
equipment (UEs). We evaluate three training paradigms: centralized,
independent, and centralized training with decentralized execution (CTDE). Our
simulation results demonstrate that the proposed offline MARL framework
outperforms conventional baseline approaches, achieving over a 15\% improvement
in a weighted combination of sum and tail rates. Additionally, the CTDE
framework strikes an effective balance, reducing the computational complexity
of centralized methods while addressing the inefficiencies of independent
training. These results underscore the potential of offline MARL to deliver
scalable, robust, and efficient solutions for resource management in dynamic
wireless networks.
|
2501.12993
|
European Energy Vision 2060: Charting Diverse Pathways for Europe's
Energy Transition
|
eess.SY cs.SY
|
Europe is warming at the fastest rate of all continents, experiencing a
temperature increase about 1°C higher than the corresponding global
increase. Aiming to be the first climate-neutral continent by 2050 under the
European Green Deal, Europe requires an in-depth understanding of the potential
energy transition pathways. In this paper, we develop four qualitative
long-term scenarios covering the European energy landscape, considering key
uncertainty pillars -- categorized under social, technological, economic,
political, and geopolitical aspects. First, we place the scenarios in a
three-dimensional space defined by Social dynamics, Innovation, and
Geopolitical instabilities. These scenarios are brought to life by defining
their narratives and focus areas according to their location in this
three-dimensional space. The scenarios envision diverse futures and include
distinct features. The EU Trinity scenario pictures how internal divisions
among EU member states, in the context of global geopolitical instability,
affect the EU climate targets. The REPowerEU++ scenario outlines the steps
needed for a self-sufficient, independent European energy system by 2050. The
Go RES scenario examines the feasibility of achieving carbon neutrality earlier
than 2050 given favourable uncertain factors. The NECP Essentials scenario
extends current national energy and climate plans until 2060 to assess their
role in realizing climate neutrality. The scenarios are extended by
incorporating policies and economic factors and detailed in a Qualitative to
Quantitative (Q2Q) matrix, linking narratives to quantification. Finally, two
scenarios are quantified to illustrate the quantification process. All the
scenarios are in the process of being quantified and will be openly available
and reusable.
|
2501.12997
|
Ehrenfeucht-Haussler Rank and Chain of Thought
|
cs.LG cs.AI
|
The notion of rank of a Boolean function has been a cornerstone in the theory
of PAC learning, enabling quasipolynomial-time learning algorithms for
polynomial-size decision trees. We present a novel characterization of rank,
grounded in the well-known Transformer architecture. We show that the rank of a
function $f$ corresponds to the minimum number of Chain of Thought (CoT) steps
required by a single-layer transformer decoder with hard attention to compute
$f$. Based on this characterization we establish tight bounds on the number of
CoT steps required for specific problems, showing that $\ell$-fold function
composition necessitates exactly $\ell$ CoT steps. Furthermore, we analyze the
problem of identifying the position of the $k$-th occurrence of 1 in a Boolean
sequence, proving that it requires $k$ CoT steps.
|
2501.13003
|
Communication-Efficient Distributed Kalman Filtering using ADMM
|
eess.SY cs.SY eess.SP
|
This paper addresses the problem of optimal linear filtering in a network of
local estimators, commonly referred to as distributed Kalman filtering (DKF).
The DKF problem is formulated within a distributed optimization framework,
where coupling constraints require the exchange of local state and covariance
updates between neighboring nodes to achieve consensus. To address these
constraints, the problem is transformed into an unconstrained optimization form
using the augmented Lagrangian method. The distributed alternating direction
method of multipliers (ADMM) is then applied to derive update steps that
achieve the desired performance while exchanging only the primal variables.
Notably, the proposed method enhances communication efficiency by eliminating
the need for dual variable exchange. We show that the design parameters depend
on the maximum eigenvalue of the network's Laplacian matrix, yielding a
significantly tighter bound compared to existing results. A rigorous
convergence analysis is provided, proving that the state estimates converge to
the true state and that the covariance matrices across all local estimators
converge to a globally optimal solution. Numerical results are presented to
validate the efficacy of the proposed approach.
|
2501.13007
|
PairJudge RM: Perform Best-of-N Sampling with Knockout Tournament
|
cs.CL
|
Best-of-N (BoN) sampling, a common strategy for test-time scaling of Large
Language Models (LLMs), relies on reward models to select the best candidate
solution from multiple generations. However, traditional reward models often
assign arbitrary and inconsistent scores, limiting their effectiveness. To
address this, we propose a Pairwise Judge Reward Model (PairJudge RM) combined
with a knockout tournament for BoN sampling. Instead of assigning absolute
scores, given one math problem, PairJudge RM judges two candidate solutions'
correctness with chain-of-thought reasoning simultaneously. This approach
eliminates the need for scoring and enables cross-validation of solutions
through parallel judgment. In the knockout tournament, PairJudge RM conducts
pairwise judgment between candidate solutions and eliminates the incorrect ones
iteratively. We construct PairJudge-432K, a large-scale dataset of 432K
pairwise judgments derived from NuminaMath and annotated using
\texttt{gemini-1.5-flash}, and train PairJudge RM via supervised
fine-tuning. Experiments on MATH-500 and the Olympiad Bench demonstrate
significant improvements over baseline reward models; a 40\% to 60\%
relative improvement is achieved on the top 50\% most challenging problems.
|
2501.13009
|
Deep Learning-Based Image Recovery and Pose Estimation for Resident
Space Objects
|
cs.CV cs.LG eess.IV
|
As the density of spacecraft in Earth's orbit increases, their recognition,
pose, and trajectory identification become crucial for averting potential
collisions and executing debris removal operations. However, training models
able to identify a spacecraft and its pose presents a significant challenge due
to a lack of available image data for model training. This paper puts forth an
innovative framework for generating realistic synthetic datasets of Resident
Space Object (RSO) imagery. Using the International Space Station (ISS) as a
test case, it goes on to combine image regression with image restoration
methodologies to estimate pose from blurred images. An analysis of the proposed
image recovery and regression techniques was undertaken, providing insights
into the performance, potential enhancements and limitations when applied to
real imagery of RSOs. The image recovery approach investigated involves first
applying image deconvolution using an effective point spread function, followed
by detailed object extraction with a U-Net. Interestingly, the best pose
performance was attained using only the U-Net for image reconstruction, reducing the
average Mean Squared Error in image recovery by 97.28% and the average angular
error by 71.9%. The successful application of U-Net image restoration combined
with the ResNet50 regression network for pose estimation of the International
Space Station demonstrates the value of a diverse set of evaluation tools for
effective solutions to real-world problems such as the analysis of distant
objects in Earth's orbit.
|
2501.13010
|
Learning accurate rigid registration for longitudinal brain MRI from
synthetic data
|
eess.IV cs.CV
|
Rigid registration aims to determine the translations and rotations necessary
to align features in a pair of images. While recent machine learning methods
have become state-of-the-art for linear and deformable registration across
subjects, they have demonstrated limitations when applied to longitudinal
(within-subject) registration, where achieving precise alignment is critical.
Building on an existing framework for anatomy-aware, acquisition-agnostic
affine registration, we propose a model optimized for longitudinal, rigid brain
registration. By training the model with synthetic within-subject pairs
augmented with rigid and subtle nonlinear transforms, the model estimates more
accurate rigid transforms than previous cross-subject networks and performs
robustly on longitudinal registration pairs within and across magnetic
resonance imaging (MRI) contrasts.
|
2501.13011
|
MONA: Myopic Optimization with Non-myopic Approval Can Mitigate
Multi-step Reward Hacking
|
cs.LG cs.AI
|
Future advanced AI systems may learn sophisticated strategies through
reinforcement learning (RL) that humans cannot understand well enough to safely
evaluate. We propose a training method which avoids agents learning undesired
multi-step plans that receive high reward (multi-step "reward hacks") even if
humans are not able to detect that the behaviour is undesired. The method,
Myopic Optimization with Non-myopic Approval (MONA), works by combining
short-sighted optimization with far-sighted reward. We demonstrate that MONA
can prevent multi-step reward hacking that ordinary RL causes, even without
being able to detect the reward hacking and without any extra information that
ordinary RL does not get access to. We study MONA empirically in three settings
which model different misalignment failure modes including 2-step environments
with LLMs representing delegated oversight and encoded reasoning and
longer-horizon gridworld environments representing sensor tampering.
|
2501.13013
|
The regret lower bound for communicating Markov Decision Processes
|
cs.LG stat.ML
|
This paper is devoted to the extension of the regret lower bound beyond
ergodic Markov decision processes (MDPs) in the problem dependent setting.
While the regret lower bound for ergodic MDPs is well-known and reached by
tractable algorithms, we prove that the regret lower bound becomes
significantly more complex in communicating MDPs. Our lower bound revisits
the necessary explorative behavior of consistent learning agents and further
explains that all optimal regions of the environment must be overvisited
compared to sub-optimal ones, a phenomenon that we refer to as co-exploration.
In tandem, we show that these two explorative and co-explorative behaviors are
intertwined with navigation constraints obtained by scrutinizing the navigation
structure at logarithmic scale. The resulting lower bound is expressed as the
solution of an optimization problem that, in many standard classes of MDPs, can
be specialized to recover existing results. From a computational perspective,
it is provably $\Sigma_2^\textrm{P}$-hard in general and, as a matter of fact,
even testing membership in the feasible region is coNP-hard. We further
provide an algorithm to approximate the lower bound in a constructive way.
|
2501.13014
|
Paper Quality Assessment based on Individual Wisdom Metrics from Open
Peer Review
|
cs.SI cs.AI cs.GT
|
This study proposes a data-driven framework for enhancing the accuracy and
efficiency of scientific peer review through an open, bottom-up process that
estimates reviewer quality. Traditional closed peer review systems, while
essential for quality control, are often slow, costly, and subject to biases
that can impede scientific progress. Here, we introduce a method that evaluates
individual reviewer reliability by quantifying agreement with community
consensus scores and applying Bayesian weighting to refine paper quality
assessments. We analyze open peer review data from two major scientific
conferences, and demonstrate that reviewer-specific quality scores
significantly improve the reliability of paper quality estimation. Perhaps
surprisingly, we find that reviewer quality scores are unrelated to authorship
quality. Our model incorporates incentive structures to recognize high-quality
reviewers and encourage broader coverage of submitted papers, thereby
mitigating the common "rich-get-richer" pitfall of social media. These findings
suggest that open peer review, with mechanisms for estimating and incentivizing
reviewer quality, offers a scalable and equitable alternative for scientific
publishing, with potential to enhance the speed, fairness, and transparency of
the peer review process.
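The reviewer-weighting idea above can be sketched in a few lines: reliability is estimated from agreement with a per-paper consensus and then used to re-score papers. This is an illustrative toy, not the paper's full Bayesian model; the array layout and the inverse-MSE weighting are assumptions.

```python
import numpy as np

def weighted_quality(scores):
    """scores: (n_reviewers, n_papers), np.nan for missing reviews.
    Reviewer reliability = inverse mean squared deviation from the naive
    per-paper consensus; papers are then re-scored by the weighted mean."""
    consensus = np.nanmean(scores, axis=0)                 # per-paper consensus
    dev = np.nanmean((scores - consensus) ** 2, axis=1)    # per-reviewer disagreement
    w = 1.0 / (dev + 1e-6)
    w = w / w.sum()
    mask = ~np.isnan(scores)
    filled = np.where(mask, scores, 0.0)
    return (w[:, None] * filled).sum(axis=0) / (w[:, None] * mask).sum(axis=0)
```

With two consistent reviewers and one noisy one, the weighted estimate sits closer to the consistent pair than the plain mean does.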
|
2501.13018
|
Multi-Objective Hyperparameter Selection via Hypothesis Testing on
Reliability Graphs
|
cs.LG cs.IT math.IT
|
In sensitive application domains, multi-objective hyperparameter selection
can ensure the reliability of AI models prior to deployment, while optimizing
auxiliary performance metrics. The state-of-the-art Pareto Testing (PT) method
guarantees statistical reliability constraints by adopting a multiple
hypothesis testing framework. In PT, hyperparameters are validated one at a
time, following a data-driven order determined by expected reliability levels.
This paper introduces a novel framework for multi-objective hyperparameter
selection that captures the interdependencies among the reliability levels of
different hyperparameter configurations using a directed acyclic graph (DAG),
which is termed the reliability graph (RG). The RG is constructed based on
prior information and data by using the Bradley-Terry model. The proposed
approach, RG-based PT (RG-PT), leverages the RG to enable the efficient,
parallel testing of multiple hyperparameters at the same reliability level. By
integrating False Discovery Rate (FDR) control, RG-PT ensures robust
statistical reliability guarantees and is shown via experiments across diverse
domains to consistently yield superior solutions for multi-objective
calibration problems.
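The Bradley-Terry fit used to build the reliability graph can be sketched with the standard minorization-maximization updates. This is a generic BT solver, not the paper's RG construction; the `wins` matrix layout and iteration count are assumptions.

```python
import numpy as np

def bradley_terry(wins, iters=200):
    """Minorization-maximization fit of Bradley-Terry strengths.
    wins[i, j] = number of comparisons item i won against item j;
    the model sets p(i beats j) = s_i / (s_i + s_j)."""
    n = wins.shape[0]
    games = wins + wins.T                      # comparisons per pair
    s = np.ones(n)
    for _ in range(iters):
        denom = games / (s[:, None] + s[None, :])
        np.fill_diagonal(denom, 0.0)
        s = wins.sum(axis=1) / denom.sum(axis=1)
        s = s / s.sum()                        # fix the arbitrary scale
    return s
```

For two items with an 8-to-2 head-to-head record, the fitted strengths give p(0 beats 1) = 0.8, matching the empirical win rate.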
|
2501.13021
|
Extension of the Poltyrev Bound to Binary Memoryless Symmetric Channels
|
cs.IT math.IT
|
The Poltyrev bound provides a very tight upper bound on the decoding error
probability when using binary linear codes for transmission over the binary
symmetric channel and the additive white Gaussian noise channel, making use of
the code's weight spectrum. In the present work, the bound is extended to
memoryless symmetric channels with a discrete output alphabet. The derived
bound is demonstrated on a hybrid BSC-BEC channel. Additionally, a
reduced-complexity bound is introduced at the cost of some loss in tightness.
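A simpler cousin of the bound illustrates how a weight spectrum yields an error estimate: the union-Bhattacharyya bound $P_e \le \sum_{w \ge 1} A_w \gamma^w$, with $\gamma = 2\sqrt{p(1-p)}$ for the BSC. The sketch below computes that looser bound for the (7,4) Hamming code; it is not the Poltyrev bound itself, which tightens this union-style estimate.

```python
import math

def union_bhattacharyya_bsc(spectrum, p):
    """Union-Bhattacharyya bound P_e <= sum_{w>=1} A_w * gamma^w on the ML
    decoding error probability over a BSC with crossover probability p,
    computed from the weight spectrum {weight: multiplicity}."""
    gamma = 2.0 * math.sqrt(p * (1.0 - p))     # Bhattacharyya parameter of the BSC
    return sum(a * gamma ** w for w, a in spectrum.items() if w > 0)

hamming74 = {0: 1, 3: 7, 4: 7, 7: 1}           # weight spectrum of the (7,4) Hamming code
bound = union_bhattacharyya_bsc(hamming74, 0.01)  # ≈ 0.066
```

The bound shrinks as the crossover probability drops, as expected.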
|
2501.13023
|
Provably-Safe Neural Network Training Using Hybrid Zonotope Reachability
Analysis
|
cs.LG cs.AI
|
Even though neural networks are being increasingly deployed in
safety-critical applications, it remains difficult to enforce constraints on
their output, meaning that it is hard to guarantee safety in such settings.
Towards addressing this, many existing methods seek to verify a neural
network's satisfaction of safety constraints, but do not address how to correct
an "unsafe" network. On the other hand, the few works that extract a training
signal from verification cannot handle non-convex sets, and are either
conservative or slow. To address these challenges, this work proposes a neural
network training method that can encourage the exact reachable set of a
non-convex input set through a neural network with rectified linear unit (ReLU)
nonlinearities to avoid a non-convex unsafe region, using recent results in
non-convex set representation with hybrid zonotopes and extracting gradient
information from mixed-integer linear programs (MILPs). The proposed method is
fast, with the computational complexity of each training iteration comparable
to that of solving a linear program (LP) whose numbers of dimensions and
constraints are linear in the number of neurons and in the complexity of the
input and unsafe sets. For a neural network with three hidden layers of width 30, the method was
able to drive the reachable set of a non-convex input set with 55 generators
and 26 constraints out of a non-convex unsafe region with 21 generators and 11
constraints in 490 seconds.
|
2501.13025
|
A MIMO ISAC System for Ultra-Reliable and Low-Latency Communications
|
cs.IT math.IT
|
In this paper, we propose a bi-static multiple-input multiple-output (MIMO)
integrated sensing and communication (ISAC) system to detect the arrival of
ultra-reliable and low-latency communication (URLLC) messages and prioritize
their delivery. In this system, a dual-function base station (BS) communicates
with a user equipment (UE) and a sensing receiver (SR) is deployed to collect
echo signals reflected from a target of interest. The BS regularly transmits
messages of enhanced mobile broadband (eMBB) services to the UE. During each
eMBB transmission, if the SR senses the presence of a target of interest, it
immediately triggers the transmission of an additional URLLC message. To
reinforce URLLC transmissions, we propose a dirty-paper coding (DPC)-based
technique that mitigates the interference of both eMBB and sensing signals. For
this system, we formulate the rate-reliability-detection trade-off in the
finite blocklength regime by evaluating the communication rate of the eMBB
transmissions, the reliability of the URLLC transmissions and the probability
of target detection. Our numerical analysis shows that our proposed
DPC-based ISAC scheme significantly outperforms power-sharing based ISAC and
traditional time-sharing schemes. In particular, it achieves a higher eMBB
transmission rate while satisfying both URLLC and sensing constraints.
|
2501.13028
|
Optimizing Return Distributions with Distributional Dynamic Programming
|
cs.LG cs.AI cs.SY eess.SY
|
We introduce distributional dynamic programming (DP) methods for optimizing
statistical functionals of the return distribution, with standard reinforcement
learning as a special case. Previous distributional DP methods could optimize
the same class of expected utilities as classic DP. To go beyond expected
utilities, we combine distributional DP with stock augmentation, a technique
previously introduced for classic DP in the context of risk-sensitive RL, where
the MDP state is augmented with a statistic of the rewards obtained so far
(since the first time step). We find that a number of recently studied problems
can be formulated as stock-augmented return distribution optimization, and we
show that we can use distributional DP to solve them. We analyze distributional
value and policy iteration, with bounds and a study of what objectives these
distributional DP methods can or cannot optimize. We describe a number of
applications outlining how to use distributional DP to solve different
stock-augmented return distribution optimization problems, such as
maximizing conditional value-at-risk and homeostatic regulation. To highlight
the practical potential of stock-augmented return distribution optimization and
distributional DP, we combine the core ideas of distributional value iteration
with the deep RL agent DQN, and empirically evaluate it for solving instances
of the applications discussed.
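To see why objectives like conditional value-at-risk require distributional information (and hence stock augmentation to remain Markov), a toy Monte Carlo comparison suffices. The two reward distributions below are invented for illustration, not drawn from the paper's experiments.

```python
import numpy as np

def cvar(returns, alpha):
    """Conditional value-at-risk at level alpha: the mean of the worst
    alpha-fraction of returns (lower is worse)."""
    r = np.sort(np.asarray(returns))
    k = max(1, int(np.ceil(alpha * len(r))))
    return r[:k].mean()

rng = np.random.default_rng(0)
safe = rng.normal(1.0, 0.1, 10_000)   # modest mean, thin tails
risky = rng.normal(1.2, 2.0, 10_000)  # higher mean, heavy downside
# The mean-optimal choice (risky) and the CVaR-optimal choice (safe) differ,
# which is why a return *distribution*, not just an expectation, is needed.
```

At alpha = 1 the CVaR recovers the ordinary mean, so expected-return RL is the limiting special case.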
|
2501.13031
|
A Probabilistic Model for Self-Supervised Learning
|
cs.LG
|
Self-supervised learning (SSL) aims to find meaningful representations from
unlabeled data by encoding semantic similarities through data augmentations.
Despite its current popularity, theoretical insights about SSL are still
scarce. For example, it is not yet known whether commonly used SSL loss
functions can be related to a statistical model, much in the same way as OLS,
generalized linear models or PCA naturally emerge as maximum likelihood
estimates of an underlying generative process. In this short paper, we consider
a latent variable statistical model for SSL that exhibits an interesting
property: Depending on the informativeness of the data augmentations, the MLE
of the model either reduces to PCA, or approaches a simple non-contrastive
loss. We analyze the model and also empirically illustrate our findings.
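A minimal instance of the non-contrastive loss family discussed above, with a linear encoder; the encoder shape and the data are assumptions for illustration, not the paper's latent variable model.

```python
import numpy as np

def noncontrastive_loss(W, x1, x2):
    """Mean negative cosine similarity between linear encodings of two
    augmented views -- the simplest member of the non-contrastive loss family."""
    z1, z2 = x1 @ W, x2 @ W
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    return -np.mean(np.sum(z1 * z2, axis=1))
```

Identical views yield the minimum value of -1, and the loss grows as the two views' encodings diverge.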
|
2501.13034
|
OLS4: A new Ontology Lookup Service for a growing interdisciplinary
knowledge ecosystem
|
cs.IR
|
The Ontology Lookup Service (OLS) is an open source search engine for
ontologies which is used extensively in the bioinformatics and chemistry
communities to annotate biological and biomedical data with ontology terms.
Recently there has been a significant increase in the size and complexity of
ontologies due to new scales of biological knowledge, such as spatial
transcriptomics, new ontology development methodologies, and curation on an
increased scale. Existing Web-based tools for ontology browsing such as
BioPortal and OntoBee do not support the full range of definitions used by
today's ontologies. In order to support the community going forward, we have
developed OLS4, implementing the complete OWL2 specification,
internationalization support for multiple languages, and a new user interface
with UX enhancements such as links out to external databases. OLS4 has replaced
OLS3 in production at EMBL-EBI and has a backwards compatible API supporting
users of OLS3 to transition.
|
2501.13041
|
TimeFilter: Patch-Specific Spatial-Temporal Graph Filtration for Time
Series Forecasting
|
cs.LG
|
Current time series forecasting methods can be broadly classified into two
categories: Channel Independent (CI) and Channel Dependent (CD) strategies,
both aiming to capture the complex dependencies within time series data.
However, the CI strategy fails to exploit highly correlated covariate
information, while the CD strategy integrates all dependencies, including
irrelevant or noisy ones, thus compromising generalization. To mitigate these
issues, recent works have introduced the Channel Clustering (CC) strategy by
grouping channels with similar characteristics and applying different modeling
techniques to each cluster. However, coarse-grained clustering cannot flexibly
capture complex, time-varying interactions. Addressing the above challenges, we
propose TimeFilter, a graph-based framework for adaptive and fine-grained
dependency modeling. Specifically, after constructing the graph with the input
sequence, TimeFilter filters out irrelevant correlations and preserves the most
critical ones through patch-specific filtering. Extensive experiments on 13
real-world datasets from various application domains demonstrate the
state-of-the-art performance of TimeFilter. The code is available at
https://github.com/TROUBADOUR000/TimeFilter.
|
2501.13042
|
Does Table Source Matter? Benchmarking and Improving Multimodal
Scientific Table Understanding and Reasoning
|
cs.CL
|
Recent large language models (LLMs) have advanced table understanding
capabilities but rely on converting tables into text sequences. While
multimodal large language models (MLLMs) enable direct visual processing, they
face limitations in handling scientific tables due to fixed input image
resolutions and insufficient numerical reasoning capabilities. We present a
comprehensive framework for multimodal scientific table understanding and
reasoning with dynamic input image resolutions. Our framework consists of three
key components: (1) MMSci-Pre, a domain-specific table structure learning
dataset of 52K scientific table structure recognition samples, (2) MMSci-Ins,
an instruction tuning dataset with 12K samples across three table-based tasks,
and (3) MMSci-Eval, a benchmark with 3,114 testing samples specifically
designed to evaluate numerical reasoning capabilities. Extensive experiments
demonstrate that our domain-specific approach with 52K scientific table images
achieves superior performance compared to 150K general-domain tables,
highlighting the importance of data quality over quantity. Our proposed
table-based MLLMs with dynamic input resolutions show significant improvements
in both general table understanding and numerical reasoning capabilities, with
strong generalisation to held-out datasets. Our code and data are publicly
available at https://github.com/Bernard-Yang/MMSci_Table.
|
2501.13045
|
Sketch and Patch: Efficient 3D Gaussian Representation for Man-Made
Scenes
|
cs.CV cs.MM
|
3D Gaussian Splatting (3DGS) has emerged as a promising representation for
photorealistic rendering of 3D scenes. However, its high storage requirements
pose significant challenges for practical applications. We observe that
Gaussians exhibit distinct roles and characteristics analogous to
traditional artistic techniques: just as artists first sketch outlines
before filling in broader areas with color, some Gaussians capture
high-frequency features like edges and contours, while others represent
broader, smoother regions, akin to the wide brush strokes that add volume
and depth to a painting. Based on this observation, we
propose a novel hybrid representation that categorizes Gaussians into (i)
Sketch Gaussians, which define scene boundaries, and (ii) Patch Gaussians,
which cover smooth regions. Sketch Gaussians are efficiently encoded using
parametric models, leveraging their geometric coherence, while Patch Gaussians
undergo optimized pruning, retraining, and vector quantization to maintain
volumetric consistency and storage efficiency. Our comprehensive evaluation
across diverse indoor and outdoor scenes demonstrates that this structure-aware
approach achieves up to 32.62% improvement in PSNR, 19.12% in SSIM, and 45.41%
in LPIPS at equivalent model sizes; conversely, for an indoor scene, our model
maintains the visual quality with only 2.3% of the original model size.
|
2501.13051
|
Column-Oriented Datalog on the GPU
|
cs.DB
|
Datalog is a logic programming language widely used in knowledge
representation and reasoning (KRR), program analysis, and social media mining
due to its expressiveness and high performance. Traditionally, Datalog engines
use either row-oriented or column-oriented storage. Engines like VLog and Nemo
favor column-oriented storage for efficiency on limited-resource machines,
while row-oriented engines like Souffle use advanced data structures with
locking to perform better on multi-core CPUs. The advent of modern datacenter
GPUs, such as the NVIDIA H100 with its ability to run over 16k threads
simultaneously and high memory bandwidth, has reopened the debate on which
storage layout is more effective. This paper presents the first column-oriented
Datalog engines tailored to the strengths of modern GPUs. We present VFLog, a
CUDA-based Datalog runtime library with a column-oriented GPU data structure
that supports all necessary relational algebra operations. Our results
demonstrate over 200x performance gains over SOTA CPU-based column-oriented
Datalog engines and a 2.5x speedup over GPU Datalog engines in various
workloads, including KRR.
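The columnar evaluation strategy can be illustrated in miniature: a semi-naive transitive-closure computation that stores each relation as parallel columns and joins via binary search on a column kept sorted by the join key. This is a CPU/NumPy sketch of the idea, not VFLog's CUDA implementation.

```python
import numpy as np

def tc_columnar(src, dst):
    """Semi-naive evaluation of path(x,y) :- edge(x,y);
    path(x,y) :- path(x,z), edge(z,y), with each relation stored as
    parallel columns (structure-of-arrays)."""
    order = np.argsort(src, kind="stable")
    e_src = np.asarray(src)[order]          # edge relation, sorted by source column
    e_dst = np.asarray(dst)[order]
    path = np.stack([e_src, e_dst])         # two columns: path(x, y)
    delta = path                            # newly derived facts
    while delta.size:
        # join delta(x, z) with edge(z, y): locate each z in the sorted src column
        lo = np.searchsorted(e_src, delta[1], side="left")
        hi = np.searchsorted(e_src, delta[1], side="right")
        xs, ys = [], []
        for x, l, h in zip(delta[0], lo, hi):
            xs.extend([x] * (h - l))
            ys.extend(e_dst[l:h])
        known = set(zip(path[0].astype(int).tolist(), path[1].astype(int).tolist()))
        fresh = {(int(a), int(b)) for a, b in zip(xs, ys)} - known
        delta = np.array(sorted(fresh)).T if fresh else np.empty((2, 0))
        path = np.concatenate([path, delta], axis=1)
    return set(zip(path[0].astype(int).tolist(), path[1].astype(int).tolist()))
```

On a GPU, the per-key gather in the inner loop is exactly the step that parallelizes across thousands of threads.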
|
2501.13052
|
One-Class Domain Adaptation via Meta-Learning
|
cs.LG
|
The deployment of IoT (Internet of Things) sensor-based machine learning
models in industrial systems for anomaly classification tasks poses significant
challenges due to distribution shifts, as the training data acquired in
controlled laboratory settings may significantly differ from real-time data in
production environments. Furthermore, many real-world applications cannot
provide a substantial number of labeled examples for each anomalous class in
every new environment. It is therefore crucial to develop adaptable machine
learning models that can be effectively transferred from one environment to
another, enabling rapid adaptation using normal operational data. We extended
this problem setting to an arbitrary classification task and formulated the
one-class domain adaptation (OC-DA) problem setting. We took a meta-learning
approach to tackle the challenge of OC-DA, and proposed a task sampling
strategy to adapt any bi-level meta-learning algorithm to OC-DA. We modified
the well-established model-agnostic meta-learning (MAML) algorithm and
introduced the OC-DA MAML algorithm. We provided a theoretical analysis showing
that OC-DA MAML optimizes for meta-parameters that enable rapid one-class
adaptation across domains. The OC-DA MAML algorithm is evaluated on the
Rainbow-MNIST meta-learning benchmark and on a real-world dataset of
vibration-based sensor readings. The results show that OC-DA MAML significantly
improves the performance on the target domains and outperforms MAML using the
standard task sampling strategy.
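The bi-level structure that OC-DA MAML builds on can be sketched with a first-order MAML step on toy linear-regression tasks; under an OC-DA-style sampling strategy the support set would contain only normal-class data. The task format, learning rates, and the first-order approximation are assumptions of this sketch, not the paper's algorithm.

```python
import numpy as np

def maml_step(theta, tasks, inner_lr=0.1, outer_lr=0.01):
    """One first-order bi-level meta-update for linear regression tasks.
    Each task is (X_support, y_support, X_query, y_query); an OC-DA-style
    sampler would fill the support set with normal-class data only."""
    grad_sum = np.zeros_like(theta)
    for Xs, ys, Xq, yq in tasks:
        # inner loop: one gradient step of squared error on the support set
        adapted = theta - inner_lr * 2.0 * Xs.T @ (Xs @ theta - ys) / len(ys)
        # outer gradient: evaluated at the adapted parameters on the query set
        grad_sum += 2.0 * Xq.T @ (Xq @ adapted - yq) / len(yq)
    return theta - outer_lr * grad_sum / len(tasks)
```

Repeated meta-updates drive the meta-parameters toward values from which a single inner step fits each task well.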
|
2501.13054
|
STMDNet: A Lightweight Directional Framework for Motion Pattern
Recognition of Tiny Targets
|
cs.CV
|
Recognizing motions of tiny targets - only few dozen pixels - in cluttered
backgrounds remains a fundamental challenge when standard feature-based or deep
learning methods fail under scarce visual cues. We propose STMDNet, a
model-based computational framework to recognize motions of tiny targets at
variable velocities under low-sampling frequency scenarios. STMDNet designs a
novel dual-dynamics-and-correlation mechanism, harnessing ipsilateral
excitation to integrate target cues and leakage-enhancing-type contralateral
inhibition to suppress large-object and background motion interference.
Moreover, we develop the first collaborative directional encoding-decoding
strategy that determines the motion direction from only one correlation per
spatial location, cutting computational costs to one-eighth of those of prior methods.
Further, simply substituting the backbone of a strong STMD model with STMDNet
raises AUC by 24%, yielding an enhanced STMDNet-F. Evaluations on real-world
low sampling frequency datasets show state-of-the-art results, surpassing the
deep learning baseline. Across diverse speeds, STMDNet-F improves mF1 by 19%,
16%, and 8% at 240Hz, 120Hz, and 60Hz, respectively, while STMDNet achieves 87
FPS on a single CPU thread. These advances highlight STMDNet as a
next-generation backbone for tiny target motion pattern recognition and
underscore its broader potential to revitalize model-based visual approaches in
motion detection.
|
2501.13058
|
A polynomial formula for the perspective four points problem
|
math.AG cs.CV
|
We present a fast and accurate solution to the perspective n-points problem,
by way of a new approach to the n=4 case. Our solution hinges on a novel
separation of variables: given four 3D points and four corresponding 2D points
on the camera canvas, we start by finding another set of 3D points, sitting on
the rays connecting the camera to the 2D canvas points, so that the six
pair-wise distances between these 3D points are as close as possible to the six
distances between the original 3D points. This step reduces the perspective
problem to an absolute orientation problem (which has a solution via explicit
formula). To solve the first problem we set coordinates which are as
orientation-free as possible: on the 3D points side our coordinates are the
squared distances between the points. On the 2D canvas-points side our
coordinates are the dot products of the points after rotating one of them to
sit on the optical axis. We then derive the solution with the help of a
computer algebra system.
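The absolute orientation subproblem mentioned above does indeed admit an explicit formula; one standard construction is the Kabsch/Horn SVD solution, sketched below as a generic implementation (the paper's own formula may differ in detail).

```python
import numpy as np

def absolute_orientation(P, Q):
    """Kabsch/Horn closed form: the rotation R and translation t minimizing
    sum_i ||R @ p_i + t - q_i||^2 for paired (n, 3) point sets P and Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

Given a point set and its image under a known rigid motion, the formula recovers that motion exactly.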
|
2501.13061
|
Systematic comparison of gender inequality in scientific rankings across
disciplines
|
cs.SI
|
The participation of women in academia has increased in the last few decades
across many fields (e.g., Computer Science, History, Medicine). However, this
increase in the participation of women has not been the same at all career
stages. Here, we study how gender participation within different fields is
related to gender representation in top-ranking positions in productivity
(number of papers), research impact (number of citations), and co-authorship
networks (degree of connectivity). We analyzed over 80 million papers published
from 1975 to 2020 in 19 academic fields. Our findings reveal that women remain
a minority in all 19 fields, with physics, geology, and mathematics having the
lowest percentage of papers authored by women at 14% and psychology having the
largest percentage at 39%. Women are significantly underrepresented in
top-ranking positions (top 10% or higher) across all fields and metrics
(productivity, citations, and degree), indicating that it remains challenging
for early researchers (especially women) to reach top-ranking positions, as our
results reveal the rankings to be rigid over time. Finally, we show that in
most fields, women and men with comparable productivity levels and career age
tend to attain different levels of citations, where women tend to benefit more
from co-authorships, while men tend to benefit more from productivity,
especially in pSTEMs. Our findings highlight that while the participation of
women has risen in some fields, they remain under-represented in top-ranking
positions. Greater gender participation at entry levels often helps
representation, but stronger interventions are still needed to achieve
long-lasting careers for women and their participation in top-ranking
positions.
|
2501.13066
|
SMART-Vision: Survey of Modern Action Recognition Techniques in Vision
|
cs.CV
|
Human Action Recognition (HAR) is a challenging domain in computer vision,
involving recognizing complex patterns by analyzing the spatiotemporal dynamics
of individuals' movements in videos. These patterns arise in sequential data,
such as video frames, which are often essential to accurately distinguish
actions that would be ambiguous in a single image. HAR has garnered
considerable interest due to its broad applicability, ranging from robotics and
surveillance systems to sports motion analysis, healthcare, and the burgeoning
field of autonomous vehicles. While several taxonomies have been proposed to
categorize HAR approaches in surveys, they often overlook hybrid methodologies
and fail to demonstrate how different models incorporate various architectures
and modalities. In this comprehensive survey, we present the novel SMART-Vision
taxonomy, which illustrates how innovations in deep learning for HAR complement
one another, leading to hybrid approaches beyond traditional categories. Our
survey provides a clear roadmap from foundational HAR works to current
state-of-the-art systems, highlighting emerging research directions and
addressing unresolved challenges in discussion sections for architectures
within the HAR domain. We provide details of the research datasets that various
approaches use to measure and compare the goodness of HAR approaches. We also explore
the rapidly emerging field of Open-HAR systems, which challenges HAR systems by
presenting samples from unknown, novel classes during test time.
|
2501.13068
|
Beyond the Lungs: Extending the Field of View in Chest CT with Latent
Diffusion Models
|
cs.CV eess.IV
|
The interconnection between the human lungs and other organs, such as the
liver and kidneys, is crucial for understanding the underlying risks and
effects of lung diseases and improving patient care. However, most research
chest CT imaging is focused solely on the lungs due to considerations of cost
and radiation dose. This restricted field of view (FOV) in the acquired images
poses challenges to comprehensive analysis and hinders the ability to gain
insights into the impact of lung diseases on other organs. To address this, we
propose SCOPE (Spatial Coverage Optimization with Prior Encoding), a novel
approach to capture the inter-organ relationships from CT images and extend the
FOV of chest CT images. Our approach first trains a variational autoencoder
(VAE) to encode 2D axial CT slices individually, then stacks the latent
representations of the VAE to form a 3D context for training a latent diffusion
model. Once trained, our approach extends the FOV of CT images in the
z-direction by generating new axial slices in a zero-shot manner. We evaluated
our approach on the National Lung Screening Trial (NLST) dataset, and results
suggest that it effectively extends the FOV to include the liver and kidneys,
which are not completely covered in the original NLST data acquisition.
Quantitative results on a held-out whole-body dataset demonstrate that the
generated slices exhibit high fidelity with acquired data, achieving an SSIM of
0.81.
|
2501.13071
|
Robust Body Composition Analysis by Generating 3D CT Volumes from
Limited 2D Slices
|
cs.CV eess.IV
|
Body composition analysis provides valuable insights into aging, disease
progression, and overall health conditions. Due to concerns of radiation
exposure, two-dimensional (2D) single-slice computed tomography (CT) imaging
has been used repeatedly for body composition analysis. However, this approach
introduces significant spatial variability that can impact the accuracy and
robustness of the analysis. To mitigate this issue and facilitate body
composition analysis, this paper presents a novel method to generate 3D CT
volumes from a limited number of 2D slices using a latent diffusion model (LDM).
Our approach first maps 2D slices into a latent representation space using a
variational autoencoder. An LDM is then trained to capture the 3D context of a
stack of these latent representations. To accurately interpolate
intermediate slices and construct a full 3D volume, we utilize body part
regression to determine the spatial location and distance between the acquired
slices. Experiments on both in-house and public 3D abdominal CT datasets
demonstrate that the proposed method significantly enhances body composition
analysis compared to traditional 2D-based analysis, with a reduced error rate
from 23.3% to 15.2%.
|
2501.13072
|
AdaWM: Adaptive World Model based Planning for Autonomous Driving
|
cs.RO cs.AI
|
World model based reinforcement learning (RL) has emerged as a promising
approach for autonomous driving, which learns a latent dynamics model and uses
it to train a planning policy. To speed up the learning process, the
pretrain-finetune paradigm is often used, where online RL is initialized by a
pretrained model and a policy learned offline. However, naively performing such
initialization in RL may result in dramatic performance degradation during the
online interactions in the new task. To tackle this challenge, we first analyze
the performance degradation and identify two primary root causes therein: the
mismatch of the planning policy and the mismatch of the dynamics model, due to
distribution shift. We further analyze the effects of these factors on
performance degradation during finetuning, and our findings reveal that the
choice of finetuning strategies plays a pivotal role in mitigating these
effects. We then introduce AdaWM, an Adaptive World Model based planning
method, featuring two key steps: (a) mismatch identification, which quantifies
the mismatches and informs the finetuning strategy, and (b) alignment-driven
finetuning, which selectively updates either the policy or the model as needed
using efficient low-rank updates. Extensive experiments on the challenging
CARLA driving tasks demonstrate that AdaWM significantly improves the
finetuning process, resulting in more robust and efficient performance in
autonomous driving systems.
|
2501.13073
|
CHaRNet: Conditioned Heatmap Regression for Robust Dental Landmark
Localization
|
cs.CV
|
Identifying anatomical landmarks in 3D dental models is vital for orthodontic
treatment, yet manual placement is complex and time-consuming. Although some
machine learning approaches have been proposed for automatic tooth landmark
detection in 3D Intraoral Scans (IOS), none provide a fully end-to-end solution
that bypasses teeth segmentation, limiting practical applicability. We
introduce CHaRNet (Conditioned Heatmap Regression Network), the first fully
end-to-end deep learning framework for tooth landmark detection in 3D IOS.
Unlike traditional two-stage workflows that segment teeth before detecting
landmarks, CHaRNet directly operates on the input point cloud, thus reducing
complexity and computational overhead. Our method integrates four modules: (1)
a point cloud encoder, (2) a point cloud decoder with a heatmap regression
head, (3) a teeth presence classification head, and (4) the novel Conditioned
Heatmap Regression (CHaR) module. By leveraging teeth presence classification,
the CHaR module dynamically adapts to missing teeth and enhances detection
accuracy in complex dental models. We evaluate CHaRNet using five point cloud
learning algorithms on a clinical dataset of 1,214 annotated 3D models. Both
the dataset and code will be publicly released to address the lack of open
datasets in orthodontics and inspire further research. CHaRNet achieves a Mean
Euclidean Distance Error (MEDE) of 0.51 mm on typical dental models and 1.28 mm
across all dentition types, with corresponding Mean Success Rates (MSR) of
87.06% and 82.40%, respectively. Notably, it exhibits robust performance on
irregular geometries, including models with missing teeth. This end-to-end
approach streamlines orthodontic workflows, enhances 3D IOS analysis precision,
and supports efficient computer-assisted treatment planning.
|
2501.13074
|
Autonomy-of-Experts Models
|
cs.CL cs.AI cs.LG
|
Mixture-of-Experts (MoE) models mostly use a router to assign tokens to
specific expert modules, activating only partial parameters and often
outperforming dense models. We argue that the separation between the router's
decision-making and the experts' execution is a critical yet overlooked issue,
leading to suboptimal expert selection and ineffective learning. To address
this, we propose Autonomy-of-Experts (AoE), a novel MoE paradigm in which
experts autonomously select themselves to process inputs. AoE is based on the
insight that an expert is aware of its own capacity to effectively process a
token, an awareness reflected in the scale of its internal activations. In AoE,
routers are removed; instead, experts pre-compute internal activations for
inputs and are ranked based on their activation norms. Only the top-ranking
experts proceed with the forward pass, while the others abort. The overhead of
pre-computing activations is reduced through a low-rank weight factorization.
This self-evaluating-then-partner-comparing approach ensures improved expert
selection and effective learning. We pre-train language models with 700M to 4B
parameters, demonstrating that AoE outperforms traditional MoE models
with comparable efficiency.
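The router-free selection described above can be illustrated with a minimal NumPy sketch. All shapes, the rank of the factorization, and the use of the low-rank factor's output norm as the cheap ranking signal are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff, n_experts, top_k, rank = 16, 64, 8, 2, 4

# Hypothetical per-expert FFN weights. The first layer is factorized as
# A @ B so a cheap internal activation (x @ A) can be pre-computed.
experts = []
for _ in range(n_experts):
    A = rng.normal(size=(d_model, rank))   # low-rank factor
    B = rng.normal(size=(rank, d_ff))      # completes W1 = A @ B
    W2 = rng.normal(size=(d_ff, d_model))
    experts.append((A, B, W2))

def aoe_forward(x):
    """Router-free MoE step: experts rank themselves by the norm of a
    pre-computed internal activation; only the top-k run fully."""
    norms = np.array([np.linalg.norm(x @ A) for A, _, _ in experts])
    chosen = np.argsort(norms)[-top_k:]    # top-ranking experts proceed
    out = np.zeros_like(x)
    for i in chosen:
        A, B, W2 = experts[i]
        h = np.maximum(x @ A @ B, 0.0)     # full expert forward (ReLU FFN)
        out += h @ W2
    return out, chosen

x = rng.normal(size=d_model)
y, chosen = aoe_forward(x)
```

The key design point mirrored here is that the ranking signal is a by-product of each expert's own forward computation, so no separate router parameters are trained.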
|
2501.13075
|
Evolution and The Knightian Blindspot of Machine Learning
|
cs.AI
|
This paper claims that machine learning (ML) largely overlooks an important
facet of general intelligence: robustness to a qualitatively unknown future in
an open world. Such robustness relates to Knightian uncertainty (KU) in
economics, i.e. uncertainty that cannot be quantified, which is excluded from
consideration in ML's key formalisms. This paper aims to identify this blind
spot, argue its importance, and catalyze research into addressing it, which we
believe is necessary to create truly robust open-world AI. To help illuminate
the blind spot, we contrast one area of ML, reinforcement learning (RL), with
the process of biological evolution. Despite staggering ongoing progress, RL
still struggles in open-world situations, often failing under unforeseen
situations. For example, the idea of zero-shot transferring a self-driving car
policy trained only in the US to the UK currently seems exceedingly ambitious.
In dramatic contrast, biological evolution routinely produces agents that
thrive within an open world, sometimes even in situations that are remarkably
out-of-distribution (e.g. invasive species; or humans, who do undertake such
zero-shot international driving). Interestingly, evolution achieves such
robustness without explicit theory, formalisms, or mathematical gradients. We
explore the assumptions underlying RL's typical formalisms, showing how they
limit RL's engagement with the unknown unknowns characteristic of an
ever-changing complex world. Further, we identify mechanisms through which
evolutionary processes foster robustness to novel and unpredictable challenges,
and discuss potential pathways to algorithmically embody them. The conclusion
is that the intriguing remaining fragility of ML may result from blind spots in
its formalisms, and that significant gains may result from direct confrontation
with the challenge of KU.
|
2501.13080
|
Refining Input Guardrails: Enhancing LLM-as-a-Judge Efficiency Through
Chain-of-Thought Fine-Tuning and Alignment
|
cs.CL cs.CR cs.LG
|
Large Language Models (LLMs) have demonstrated powerful capabilities that
render them valuable in different applications, including conversational AI
products. It is paramount to ensure the security and reliability of these
products by mitigating their vulnerabilities to malicious user interactions,
which can expose them to significant risks and reputational harm. In this
work, we present a comprehensive study on the efficacy
of fine-tuning and aligning Chain-of-Thought (CoT) responses of different LLMs
that serve as input moderation guardrails. We systematically explore various
tuning methods by leveraging a small set of training data to adapt these models
as proxy defense mechanisms to detect malicious inputs and provide a reasoning
for their verdicts, thereby preventing the exploitation of conversational
agents. We rigorously evaluate the efficacy and robustness of different tuning
strategies to generalize across diverse adversarial and malicious query types.
Our experimental results outline the potential of alignment processes tailored
to a varied range of harmful input queries, even with constrained data
resources. These techniques significantly enhance the safety of conversational
AI systems and provide a feasible framework for deploying more secure and
trustworthy AI-driven interactions.
|
2501.13083
|
Boosting MCTS with Free Energy Minimization
|
cs.AI
|
Active Inference, grounded in the Free Energy Principle, provides a powerful
lens for understanding how agents balance exploration and goal-directed
behavior in uncertain environments. Here, we propose a new planning framework
that integrates Monte Carlo Tree Search (MCTS) with active inference objectives
to systematically reduce epistemic uncertainty while pursuing extrinsic
rewards. Our key insight is that MCTS, already renowned for its search
efficiency, can be naturally extended to incorporate free energy minimization
by blending expected rewards with information gain. Concretely, the Cross-Entropy
Method (CEM) is used to optimize action proposals at the root node, while tree
expansions leverage reward modeling alongside intrinsic exploration bonuses.
This synergy allows our planner to maintain coherent estimates of value and
uncertainty throughout planning, without sacrificing computational
tractability. Empirically, we benchmark our planner on a diverse set of
continuous control tasks, where it demonstrates performance gains over both
standalone CEM and MCTS with random rollouts.
|
2501.13084
|
Attention-Driven Hierarchical Reinforcement Learning with Particle
Filtering for Source Localization in Dynamic Fields
|
cs.LG cs.AI
|
In many real-world scenarios, such as gas leak detection or environmental
pollutant tracking, solving the Inverse Source Localization and
Characterization problem involves navigating complex, dynamic fields with
sparse and noisy observations. Traditional methods face significant challenges,
including partial observability, temporal and spatial dynamics,
out-of-distribution generalization, and reward sparsity. To address these
issues, we propose a hierarchical framework that integrates Bayesian inference
and reinforcement learning. The framework leverages an attention-enhanced
particle filtering mechanism for efficient and accurate belief updates, and
incorporates two complementary execution strategies: Attention Particle
Filtering Planning and Attention Particle Filtering Reinforcement Learning.
These approaches optimize exploration and adaptation under uncertainty.
Theoretical analysis proves the convergence of the attention-enhanced particle
filter, while extensive experiments across diverse scenarios validate the
framework's superior accuracy, adaptability, and computational efficiency. Our
results highlight the framework's potential for broad applications in dynamic
field estimation tasks.
|
2501.13086
|
Information Degradation and Misinformation in Gossip Networks
|
cs.IT cs.NI math.IT
|
We study networks of gossiping users where a source observing a process sends
updates to an underlying graph. Nodes in the graph update their neighbors
randomly and nodes always accept packets that have newer information, thus
attempting to minimize their age of information (AoI). We show that while
gossiping reduces AoI, information can rapidly degrade in such a network. We
model degradation by arbitrary discrete-time Markov chains on k states. As a
packet is transmitted through the network it modifies its state according to
the Markov chain. In the last section, we specialize the Markov chain to
represent misinformation spread, and show that the rate of misinformation
spread is proportional to the age of information in both the fully-connected
graph and ring graph.
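The per-hop degradation model above (a packet's state evolving under a discrete-time Markov chain as it traverses the network) can be simulated directly. The 3-state chain and its transition probabilities are hypothetical stand-ins for the paper's arbitrary k-state chain.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3-state degradation chain: state 0 = fresh/accurate,
# higher states = progressively degraded; state 2 is absorbing.
P = np.array([[0.8, 0.15, 0.05],
              [0.0, 0.7,  0.3 ],
              [0.0, 0.0,  1.0 ]])

def propagate(state, hops):
    """Apply one Markov transition per hop, as in the degradation model."""
    for _ in range(hops):
        state = rng.choice(3, p=P[state])
    return state

# Average degradation after a few hops from a fresh packet.
samples = [propagate(0, 4) for _ in range(200)]
mean_state = sum(samples) / len(samples)
```

Running many sample paths like this gives the empirical state distribution after a given number of hops, the quantity the Markov-chain model lets one compute analytically.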
|
2501.13087
|
Orchid: Image Latent Diffusion for Joint Appearance and Geometry
Generation
|
cs.CV cs.LG
|
Diffusion models are state-of-the-art for image generation. Trained on large
datasets, they capture expressive image priors that have been used for tasks
like inpainting, depth, and (surface) normal prediction. However, these models
are typically trained for one specific task, e.g., a separate model for each of
color, depth, and normal prediction. Such models do not leverage the intrinsic
correlation between appearance and geometry, often leading to inconsistent
predictions.
In this paper, we propose using a novel image diffusion prior that jointly
encodes appearance and geometry. We introduce a diffusion model Orchid,
comprising a Variational Autoencoder (VAE) to encode color, depth, and surface
normals to a latent space, and a Latent Diffusion Model (LDM) for generating
these joint latents. Orchid directly generates photo-realistic color images,
relative depth, and surface normals from user-provided text, and can be used to
create image-aligned partial 3D scenes seamlessly. It can also perform
image-conditioned tasks like joint monocular depth and normal prediction and is
competitive in accuracy with state-of-the-art methods designed for those tasks
alone. Lastly, our model learns a joint prior that can be used zero-shot as a
regularizer for many inverse problems that entangle appearance and geometry.
For example, we demonstrate its effectiveness in color-depth-normal inpainting,
showcasing its applicability to problems in 3D generation from sparse views.
|
2501.13092
|
An Analytical Study of the Min-Sum Approximation for Polar Codes
|
cs.IT math.IT
|
The min-sum approximation is widely used in the decoding of polar codes.
Although it is a numerical approximation, hardly any penalties are incurred in
practice. We give a theoretical justification for this. We consider the common
case of a binary-input, memoryless, and symmetric channel, decoded using
successive cancellation and the min-sum approximation. Under mild assumptions,
we show the following. For the finite length case, we show how to exactly
calculate the error probabilities of all synthetic (bit) channels in time
$O(N^{1.585})$, where $N$ is the codeword length. This implies a code
construction algorithm with the above complexity. For the asymptotic case, we
develop two rate thresholds, denoted $R_{\mathrm{L}} = R_{\mathrm{L}}(\lambda)$
and $R_{\mathrm{U}} =R_{\mathrm{U}}(\lambda)$, where $\lambda(\cdot)$ is the
labeler of the channel outputs (essentially, a quantizer). For any $0 < \beta <
\frac{1}{2}$ and any code rate $R < R_{\mathrm{L}}$, there exists a family of
polar codes with growing lengths such that their rates are at least $R$ and
their error probabilities are at most $2^{-N^\beta}$. That is, strong
polarization continues to hold under the min-sum approximation. Conversely, for
code rates exceeding $R_{\mathrm{U}}$, the error probability approaches $1$ as
the code-length increases, irrespective of which bits are frozen. We show that
$0 < R_{\mathrm{L}} \leq R_{\mathrm{U}} \leq C$, where $C$ is the channel
capacity. The last inequality is often strict, in which case the ramification
of using the min-sum approximation is that we can no longer achieve capacity.
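For concreteness, the approximation in question replaces the exact log-likelihood-ratio combination used in successive cancellation decoding with a sign-and-minimum rule. The sketch below shows the two standard formulas; it is an illustration of the approximation being analyzed, not of the paper's analysis itself.

```python
import math

def boxplus_exact(a, b):
    """Exact LLR combination (the 'f' function in SC decoding)."""
    return 2.0 * math.atanh(math.tanh(a / 2.0) * math.tanh(b / 2.0))

def boxplus_minsum(a, b):
    """Min-sum approximation: product of the signs times the smaller
    magnitude -- cheap in hardware, no transcendental functions."""
    return math.copysign(min(abs(a), abs(b)), a * b)

# The approximation never flips the sign, and only overestimates the
# magnitude: |boxplus_exact(a, b)| <= min(|a|, |b|) = |boxplus_minsum(a, b)|.
exact, approx = boxplus_exact(2.0, 3.0), boxplus_minsum(2.0, 3.0)
```

Since hard decisions in SC decoding depend only on LLR signs, the sign-preserving nature of the rule is what makes the penalty small in practice.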
|
2501.13093
|
Guaranteed Recovery of Unambiguous Clusters
|
cs.IT cs.AI cs.DS cs.LG math.IT math.ST stat.TH
|
Clustering is often a challenging problem because of the inherent ambiguity
in what the "correct" clustering should be. Even when the number of clusters
$K$ is known, this ambiguity often still exists, particularly when there is
variation in density among different clusters, and clusters have multiple
relatively separated regions of high density. In this paper we propose an
information-theoretic characterization of when a $K$-clustering is ambiguous,
and design an algorithm that recovers the clustering whenever it is
unambiguous. This characterization formalizes the situation when two high
density regions within a cluster are separable enough that they look more like
two distinct clusters than do two truly distinct clusters in the clustering. The
algorithm first identifies $K$ partial clusters (or "seeds") using a
density-based approach, and then adds unclustered points to the initial $K$
partial clusters in a greedy manner to form a complete clustering. We implement
and test a version of the algorithm that is modified to effectively handle
overlapping clusters, and observe that it requires little parameter selection
and displays improved performance on many datasets compared to widely used
algorithms for non-convex cluster recovery.
|
2501.13094
|
Robust Representation Consistency Model via Contrastive Denoising
|
cs.CV cs.AI cs.LG
|
Robustness is essential for deep neural networks, especially in
security-sensitive applications. To this end, randomized smoothing provides
theoretical guarantees for certifying robustness against adversarial
perturbations. Recently, diffusion models have been successfully employed for
randomized smoothing to purify noise-perturbed samples before making
predictions with a standard classifier. While these methods excel at small
perturbation radii, they struggle with larger perturbations and incur a
significant computational overhead during inference compared to classical
methods. To address this, we reformulate the generative modeling task along the
diffusion trajectories in pixel space as a discriminative task in the latent
space. Specifically, we use instance discrimination to achieve consistent
representations along the trajectories by aligning temporally adjacent points.
After fine-tuning based on the learned representations, our model enables
implicit denoising-then-classification via a single prediction, substantially
reducing inference costs. We conduct extensive experiments on various datasets
and achieve state-of-the-art performance with minimal computation budget during
inference. For example, our method outperforms the certified accuracy of
diffusion-based methods on ImageNet across all perturbation radii by 5.3% on
average, with up to 11.6% at larger radii, while reducing inference costs by
85$\times$ on average. Codes are available at:
https://github.com/jiachenlei/rRCM.
|
2501.13099
|
Which Sensor to Observe? Timely Tracking of a Joint Markov Source with
Model Predictive Control
|
cs.IT cs.NI cs.SY eess.SP eess.SY math.IT
|
In this paper, we investigate the problem of remote estimation of a
discrete-time joint Markov process using multiple sensors. Each sensor observes
a different component of the joint Markov process, and in each time slot, the
monitor obtains a partial state value by sending a pull request to one of the
sensors. The monitor chooses the sequence of sensors to observe with the goal
of minimizing the mean of age of incorrect information (MAoII) by using the
partial state observations obtained, which have different freshness levels. For
instance, a monitor may be interested in tracking the location of an object by
obtaining observations from two sensors, which observe the $x$ and $y$
coordinates of the object separately, in different time slots. The monitor,
then, needs to decide which coordinate to observe in the next time slot given
the history. In addition to this partial observability of the state of Markov
process, there is an erasure channel with a fixed one-slot delay between each
sensor and the monitor. First, we obtain a sufficient statistic, namely the
\emph{belief}, representing the joint distribution of the age of incorrect
information (AoII) and the current state of the observed process by using the
history of all pull requests and observations. Then, we formulate the problem
as a continuous state-space Markov decision process (MDP), namely a belief MDP.
To solve the problem, we propose two model predictive control (MPC) methods,
namely MPC without terminal costs (MPC-WTC) and reinforcement learning MPC
(RL-MPC), that have different advantages in implementation.
|
2501.13100
|
A Rate-Distortion Framework for Summarization
|
cs.IT cs.CL cs.LG math.IT
|
This paper introduces an information-theoretic framework for text
summarization. We define the summarizer rate-distortion function and show that
it provides a fundamental lower bound on summarizer performance. We describe an
iterative procedure, similar to Blahut-Arimoto algorithm, for computing this
function. To handle real-world text datasets, we also propose a practical
method that can calculate the summarizer rate-distortion function with limited
data. Finally, we empirically confirm our theoretical results by comparing the
summarizer rate-distortion function with the performances of different
summarizers used in practice.
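The iterative procedure mentioned above is in the spirit of the classical Blahut-Arimoto algorithm for computing rate-distortion functions; a generic version of that algorithm is sketched below on a binary source with Hamming distortion. This is the standard algorithm, not the paper's summarizer-specific variant.

```python
import numpy as np

def rate_distortion(p_x, d, beta, iters=200):
    """Blahut-Arimoto: compute one point on the rate-distortion curve.

    p_x: source distribution over n symbols; d: n x m distortion matrix;
    beta: Lagrange multiplier trading rate against distortion."""
    n, m = d.shape
    q = np.full(m, 1.0 / m)                      # output marginal, uniform init
    for _ in range(iters):
        w = q * np.exp(-beta * d)                # unnormalized conditionals
        cond = w / w.sum(axis=1, keepdims=True)  # p(y|x)
        q = p_x @ cond                           # re-estimate output marginal
    R = np.sum(p_x[:, None] * cond * np.log2(cond / q[None, :] + 1e-300))
    D = np.sum(p_x[:, None] * cond * d)
    return R, D

# Binary symmetric source, Hamming distortion: known curve R(D) = 1 - h(D).
p_x = np.array([0.5, 0.5])
d = 1.0 - np.eye(2)
R, D = rate_distortion(p_x, d, beta=3.0)
```

Sweeping beta traces out the whole curve, which is how a fundamental lower bound on any lossy encoder (here, a summarizer) is computed numerically.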
|
2501.13104
|
Neural Radiance Fields for the Real World: A Survey
|
cs.CV cs.GR
|
Neural Radiance Fields (NeRFs) have remodeled 3D scene representation since
their release. NeRFs can effectively reconstruct complex 3D scenes from 2D images,
advancing different fields and applications such as scene understanding, 3D
content generation, and robotics. Despite significant research progress, a
thorough review of recent innovations, applications, and challenges is lacking.
This survey compiles key theoretical advancements and alternative
representations and investigates emerging challenges. It further explores
applications on reconstruction, highlights NeRFs' impact on computer vision and
robotics, and reviews essential datasets and toolkits. By identifying gaps in
the literature, this survey discusses open challenges and offers directions for
future research.
|
2501.13105
|
On the Service Rate Region of Reed-Muller Codes
|
cs.IT math.IT
|
We study the Service Rate Region (SRR) of Reed-Muller (RM) codes in the
context of distributed storage systems. The SRR is a convex polytope comprising
all achievable data access request rates under a given coding scheme. It
represents a critical metric for evaluating system efficiency and scalability.
Using the geometric properties of RM codes, we characterize recovery sets for
data objects, including their existence, uniqueness, and enumeration. This
analysis reveals a connection between recovery sets and minimum-weight
codewords in the dual RM code, providing a framework for identifying small
recovery sets. Using these results, we derive explicit and tight bounds for the
maximal achievable demand for individual data objects, which define the maximal
simplex within the service rate region.
|
2501.13106
|
VideoLLaMA 3: Frontier Multimodal Foundation Models for Image and Video
Understanding
|
cs.CV
|
In this paper, we propose VideoLLaMA3, a more advanced multimodal foundation
model for image and video understanding. The core design philosophy of
VideoLLaMA3 is vision-centric. The meaning of "vision-centric" is two-fold: the
vision-centric training paradigm and vision-centric framework design. The key
insight of our vision-centric training paradigm is that high-quality image-text
data is crucial for both image and video understanding. Instead of preparing
massive video-text datasets, we focus on constructing large-scale and
high-quality image-text datasets. VideoLLaMA3 has four training stages: 1)
Vision Encoder Adaptation, which enables the vision encoder to accept images of
variable resolutions as input; 2) Vision-Language Alignment, which jointly
tunes the vision encoder, projector, and LLM with large-scale image-text data
covering multiple types (including scene images, documents, charts) as well as
text-only data; 3) Multi-task Fine-tuning, which incorporates image-text SFT
data for downstream tasks and video-text data to establish a foundation for
video understanding; and 4) Video-centric Fine-tuning, which further improves the
model's capability in video understanding. As for the framework design, to
better capture fine-grained details in images, the pretrained vision encoder is
adapted to encode images of varying sizes into vision tokens with corresponding
numbers, rather than a fixed number of tokens. For video inputs, we reduce the
number of vision tokens according to their similarity so that the
representation of videos will be more precise and compact. Benefiting from
vision-centric designs, VideoLLaMA3 achieves compelling performances in both
image and video understanding benchmarks.
|
2501.13107
|
Accelerate High-Quality Diffusion Models with Inner Loop Feedback
|
cs.CV
|
We propose Inner Loop Feedback (ILF), a novel approach to accelerate
diffusion models' inference. ILF trains a lightweight module to predict future
features in the denoising process by leveraging the outputs from a chosen
diffusion backbone block at a given time step. This approach exploits two key
intuitions: (1) the outputs of a given block at adjacent time steps are
similar, and (2) performing partial computations for a step imposes a lower
burden on the model than skipping the step entirely. Our method is highly
flexible, since we find that the feedback module itself can simply be a block
from the diffusion backbone, with all settings copied. Its influence on the
diffusion forward can be tempered with a learnable scaling factor from zero
initialization. We train this module using distillation losses; however, unlike
some prior work where a full diffusion backbone serves as the student, our
model freezes the backbone, training only the feedback module. While many
efforts to optimize diffusion models focus on achieving acceptable image
quality in extremely few steps (1-4 steps), our emphasis is on matching best
case results (typically achieved in 20 steps) while significantly reducing
runtime. ILF achieves this balance effectively, demonstrating strong
performance for both class-to-image generation with diffusion transformer (DiT)
and text-to-image generation with DiT-based PixArt-alpha and PixArt-sigma. The
quality of ILF's 1.7x-1.8x speedups is confirmed by FID, CLIP score, CLIP
Image Quality Assessment, ImageReward, and qualitative comparisons. Project
information is available at https://mgwillia.github.io/ilf.
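The zero-initialized scaling factor described above has a simple, checkable consequence: at initialization the feedback path is inert and the forward pass matches the frozen backbone exactly. The sketch below illustrates this with stand-in linear blocks; all weights and shapes are hypothetical, not ILF's actual modules.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32
W_block = rng.normal(size=(d, d)) / np.sqrt(d)  # stand-in backbone block
W_fb = rng.normal(size=(d, d)) / np.sqrt(d)     # lightweight feedback module

def block_with_feedback(x, prev_features, alpha):
    """One denoising step: the feedback module predicts a correction from
    the previous step's features; alpha (zero-initialized) gates it."""
    features = np.tanh(x @ W_block)
    if prev_features is not None:
        features = features + alpha * np.tanh(prev_features @ W_fb)
    return features

x = rng.normal(size=d)
f_plain = block_with_feedback(x, None, alpha=0.0)
f_gated = block_with_feedback(x, f_plain, alpha=0.0)   # identical: alpha = 0
f_active = block_with_feedback(x, f_plain, alpha=0.5)  # feedback now engaged
```

Training only the feedback module (and alpha) while the backbone stays frozen means the model can never be worse than the backbone at the start of training.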
|
2501.13111
|
iServe: An Intent-based Serving System for LLMs
|
cs.SE cs.LG
|
Large Language Models (LLMs) are becoming ubiquitous across industries, where
applications demand they fulfill diverse user intents. However, developers
currently face the challenge of manually exploring numerous deployment
configurations - combinations of parallelism and compression techniques that
impact resource usage, latency, cost, and accuracy - to meet these intents.
Assessing the impact of these configurations on user metrics requires
extensive, costly profiling for each model. Existing approaches avoid this
expense by using fixed, static configurations, but this often leads to
sub-optimal performance and higher costs. Moreover, none of these solutions
dynamically adapt to changing user intents to balance latency and cost
effectively. We present iServe, an automated, intent-based system for
distributed LLM inference. Instead of manually selecting deployment
configurations, developers simply specify their intent - such as minimizing
latency, reducing cost, or meeting specific targets for either. iServe
introduces fingerprints, lightweight representations of LLMs, to efficiently
estimate how different configurations impact latency and memory usage. Based on
these insights and GPU availability, iServe dynamically selects the optimal
configuration to align with the user's intent. For various LLMs and query
arrival rates, iServe best meets user intents compared to state-of-the-art
systems by reducing latency by 77.62% and SLO violations by 7.09x while
improving GPU throughput by 4.72x. Moreover, iServe's fingerprint-based
profiling reduces profiling cost by 6.05x (GPU-hours) compared to baselines.
|
2501.13115
|
Dagger Behind Smile: Fool LLMs with a Happy Ending Story
|
cs.CL cs.AI cs.CR
|
The wide adoption of Large Language Models (LLMs) has attracted significant
attention from $\textit{jailbreak}$ attacks, where adversarial prompts crafted
through optimization or manual design exploit LLMs to generate malicious
contents. However, optimization-based attacks have limited efficiency and
transferability, while existing manual designs are either easily detectable or
demand intricate interactions with LLMs. In this paper, we first point out a
novel perspective for jailbreak attacks: LLMs are more responsive to
$\textit{positive}$ prompts. Based on this, we deploy Happy Ending Attack (HEA)
to wrap up a malicious request in a scenario template involving a positive
prompt formed mainly via a $\textit{happy ending}$; it thus fools LLMs into
jailbreaking either immediately or at a follow-up malicious request. This makes
HEA both efficient and effective, as it requires only up to two turns to
fully jailbreak LLMs. Extensive experiments show that our HEA can successfully
jailbreak state-of-the-art LLMs, including GPT-4o, Llama3-70b, Gemini-pro,
and achieves 88.79\% attack success rate on average. We also provide
quantitative explanations for the success of HEA.
|
2501.13117
|
MyGO Multiplex CoT: A Method for Self-Reflection in Large Language
Models via Double Chain of Thought Thinking
|
cs.CL cs.AI
|
Recent advancements in large language models (LLMs) have demonstrated their
impressive abilities in various reasoning and decision-making tasks. However,
the quality and coherence of the reasoning process can still benefit from
enhanced introspection and self-reflection. In this paper, we introduce
Multiplex CoT (Chain of Thought), a method that enables LLMs to simulate a form
of self-review while reasoning, by initiating double Chain of Thought (CoT)
thinking. Multiplex CoT leverages the power of iterative reasoning, where the
model generates an initial chain of thought and subsequently critiques and
refines this reasoning with a second round of thought generation. This
recursive approach allows for more coherent, logical, and robust answers,
improving the overall decision-making process. We demonstrate how this method
can be effectively implemented using simple prompt engineering in existing LLM
architectures, achieving an effect similar to that of the Learning-Refinement
Model (LRM) without the need for additional training. Additionally, we present
a practical guide for implementing the method in Google Colab, enabling easy
integration into real-world applications.
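Since the abstract notes the method is implementable with simple prompt engineering, a double-CoT template can be sketched as below. The wording and step labels are illustrative assumptions, not the paper's exact prompt.

```python
def multiplex_cot_prompt(question):
    """Wrap a question in a double chain-of-thought template: an initial
    reasoning pass, then a critique-and-refine second pass."""
    return (
        f"Question: {question}\n\n"
        "Step 1 - Initial reasoning: think step by step and write out a "
        "first chain of thought.\n"
        "Step 2 - Self-review: critique the reasoning above, point out any "
        "errors or gaps, and write a refined chain of thought.\n"
        "Step 3 - Final answer: state the answer on a line starting with "
        "'Answer:'.\n"
    )

prompt = multiplex_cot_prompt("What is 17 * 24?")
```

Because both passes live in one prompt, the approach needs no extra training or tooling; any chat-style LLM API can consume the string as-is.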
|
2501.13120
|
Multilinguality in LLM-Designed Reward Functions for Restless Bandits:
Effects on Task Performance and Fairness
|
cs.CL cs.AI cs.LG cs.MA
|
Restless Multi-Armed Bandits (RMABs) have been successfully applied to
resource allocation problems in a variety of settings, including public health.
With the rapid development of powerful large language models (LLMs), they are
increasingly used to design reward functions to better match human preferences.
Recent work has shown that LLMs can be used to tailor automated allocation
decisions to community needs using language prompts. However, this has been
studied primarily for English prompts and with a focus on task performance
only. This can be an issue since grassroots workers, especially in developing
countries like India, prefer to work in local languages, some of which are
low-resource. Further, given the nature of the problem, biases along population
groups unintended by the user are also undesirable. In this work, we study the
effects on both task performance and fairness when the DLM algorithm, a recent
work on using LLMs to design reward functions for RMABs, is prompted with
non-English language commands. Specifically, we run the model on a synthetic
environment for various prompts translated into multiple languages. The prompts
themselves vary in complexity. Our results show that the LLM-proposed reward
functions are significantly better when prompted in English compared to other
languages. We also find that the exact phrasing of the prompt impacts task
performance. Further, as prompt complexity increases, performance worsens for
all languages; however, it is more robust with English prompts than with
lower-resource languages. On the fairness side, we find that low-resource
languages and more complex prompts are both highly likely to create unfairness
along unintended dimensions.
|
2501.13121
|
Episodic Memories Generation and Evaluation Benchmark for Large Language
Models
|
cs.CL cs.AI cs.LG
|
Episodic memory -- the ability to recall specific events grounded in time and
space -- is a cornerstone of human cognition, enabling not only coherent
storytelling, but also planning and decision-making. Despite their remarkable
capabilities, Large Language Models (LLMs) lack a robust mechanism for episodic
memory: we argue that integrating episodic memory capabilities into LLMs is
essential for advancing AI towards human-like cognition, increasing their
potential to reason consistently and ground their output in real-world episodic
events, hence avoiding confabulations. To address this challenge, we introduce
a comprehensive framework to model and evaluate LLM episodic memory
capabilities. Drawing inspiration from cognitive science, we develop a
structured approach to represent episodic events, encapsulating temporal and
spatial contexts, involved entities, and detailed descriptions. We synthesize a
unique episodic memory benchmark, free from contamination, and release open
source code and datasets to assess LLM performance across various recall and
episodic reasoning tasks. Our evaluation of state-of-the-art models, including
GPT-4 and Claude variants, Llama 3.1, and o1-mini, reveals that even the most
advanced LLMs struggle with episodic memory tasks, particularly when dealing
with multiple related events or complex spatio-temporal relationships -- even
in contexts as short as 10k-100k tokens.
|
2501.13122
|
Zero-Shot Verification-guided Chain of Thoughts
|
cs.CL cs.AI
|
Previous works have demonstrated the effectiveness of Chain-of-Thought (COT)
prompts and verifiers in guiding Large Language Models (LLMs) through the space
of reasoning. However, most such studies either use a fine-tuned verifier or
rely on manually handcrafted few-shot examples. In contrast, in this paper, we
focus on LLM-based self-verification of self-generated reasoning steps via COT
prompts in a completely zero-shot regime. To explore this setting, we design a
new zero-shot prompt, which we call COT STEP, to aid zero-shot decomposition of
reasoning steps and design two new zero-shot prompts for LLM-based verifiers.
We evaluate the verifiers' ability to classify the correctness of reasoning
chains and explore different ways to use verifier scores in guiding reasoning
for various mathematical and commonsense reasoning tasks with different LLMs.
|
2501.13124
|
Debate Helps Weak-to-Strong Generalization
|
cs.CL cs.AI
|
Common methods for aligning already-capable models with desired behavior rely
on the ability of humans to provide supervision. However, future superhuman
models will surpass the capability of humans. Therefore, humans will only be
able to weakly supervise superhuman models. This expected deficiency of human
evaluation would weaken the safety of future AI systems. Scalable oversight and
weak-to-strong generalization are two complementary approaches to tackle this
issue. In this paper, we attempt to combine the strengths of these two
approaches to further improve alignment. Specifically, we investigate ways of
improving human supervision with a strong pretrained model and then supervise
the strong model with enhanced weak human supervision. To make iterative
empirical progress, we consider an analogy: can we use a strong model to
improve weak model supervision and then use it to supervise the strong model?
We empirically test it by finetuning a small weak model on ground truth labels
with the additional help from a large strong model, and then finetuning the
strong model on labels generated by the weak model. We find that debate can
assist a weak model in extracting trustworthy information from an untrustworthy
strong model, which provides leverage as context on samples when training a
weak model. We also show that an ensemble of weak models helps exploit long
arguments generated by strong model debaters and obtain a more robust
supervision estimate. Extensive experiments on the OpenAI weak-to-strong NLP
benchmarks show that the combination approach leads to better alignment, which
indicates that debate has the potential to help weak-to-strong generalization.
|
2501.13125
|
Generating Plausible Distractors for Multiple-Choice Questions via
Student Choice Prediction
|
cs.CL cs.AI cs.LG
|
In designing multiple-choice questions (MCQs) in education, creating
plausible distractors is crucial for identifying students' misconceptions and
gaps in knowledge and accurately assessing their understanding. However, prior
studies on distractor generation have not paid sufficient attention to
enhancing the difficulty of distractors, resulting in reduced effectiveness of
MCQs. This study presents a pipeline for training a model to generate
distractors that are more likely to be selected by students. First, we train a
pairwise ranker to reason about students' misconceptions and assess the
relative plausibility of two distractors. Using this model, we create a dataset
of pairwise distractor ranks and then train a distractor generator via Direct
Preference Optimization (DPO) to generate more plausible distractors.
Experiments on computer science subjects (Python, DB, MLDL) demonstrate that
our pairwise ranker effectively identifies students' potential
misunderstandings and achieves ranking accuracy comparable to human experts.
Furthermore, our distractor generator outperforms several baselines in
generating plausible distractors and produces questions with a higher item
discrimination index (DI).
|
2501.13126
|
Preference Curriculum: LLMs Should Always Be Pretrained on Their
Preferred Data
|
cs.CL cs.AI
|
Large language models (LLMs) generally utilize a consistent data distribution
throughout the pretraining process. However, as the model's capability
improves, it is intuitive that its data preferences dynamically change,
indicating the need for pretraining with different data at various training
stages. To achieve this, we propose the Perplexity Difference (PD) based
Preference Curriculum learning (PDPC) framework, which always perceives and
uses the data preferred by LLMs to train and boost them. First, we introduce
the PD metric to quantify the difference in how challenging a sample is for
weak versus strong models. Samples with high PD are more challenging for weak
models to learn and are better suited to the later stages of
pretraining. Second, we propose the preference function to approximate and
predict the data preference of the LLM at any training step, so as to complete
the arrangement of the dataset offline and ensure continuous training without
interruption. Experimental results on 1.3B and 3B models demonstrate that PDPC
significantly surpasses baselines. Notably, the 3B model trained on 1T tokens
achieves an average accuracy improvement of over 8.1% across MMLU and CMMLU.
|
2501.13128
|
A Learnt Half-Quadratic Splitting-Based Algorithm for Fast and
High-Quality Industrial Cone-beam CT Reconstruction
|
eess.IV cs.LG
|
Industrial X-ray cone-beam CT (CBCT) scanners are widely used for scientific
imaging and non-destructive characterization. These scanners use
large detectors containing millions of pixels, and the subsequent 3D
reconstructions can be of the order of billions of voxels. In order to obtain
high-quality reconstruction when using typical analytic algorithms, the scan
involves collecting a large number of projections/views, which results in long
measurement times, limiting the utility of the technique. Model-based
iterative reconstruction (MBIR) algorithms can produce high-quality
reconstructions from fast sparse-view CT scans, but are computationally
expensive and hence are avoided in practice. Single-step deep-learning (DL)
based methods have demonstrated that it is possible to obtain fast and
high-quality reconstructions from sparse-view data but they do not generalize
well to out-of-distribution scenarios. In this work, we propose a
half-quadratic splitting-based algorithm that uses convolutional neural
networks (CNNs) to obtain high-quality reconstructions from large
sparse-view CBCT measurements while overcoming the challenges
of typical approaches. The algorithm alternates between the application of a
CNN and a conjugate gradient (CG) step enforcing data-consistency (DC). The
proposed method outperforms other methods on the publicly available Walnuts
data-set.
|
2501.13132
|
A Hierarchical Reinforcement Learning Framework for Multi-UAV Combat
Using Leader-Follower Strategy
|
cs.MA cs.AI cs.RO cs.SY eess.SY
|
Multi-UAV air combat is a complex task involving multiple autonomous UAVs, an
evolving field in both aerospace and artificial intelligence. This paper aims
to enhance adversarial performance through collaborative strategies. Previous
approaches predominantly discretize the action space into predefined actions,
limiting UAV maneuverability and complex strategy implementation. Others
simplify the problem to 1v1 combat, neglecting the cooperative dynamics among
multiple UAVs. To address the high-dimensional challenges inherent in
six-degree-of-freedom space and improve cooperation, we propose a hierarchical
framework utilizing the Leader-Follower Multi-Agent Proximal Policy
Optimization (LFMAPPO) strategy. Specifically, the framework is structured into
three levels. The top level conducts a macro-level assessment of the
environment and guides execution policy. The middle level determines the angle
of the desired action. The bottom level generates precise action commands for
the high-dimensional action space. Moreover, we optimize the state-value
functions by assigning distinct roles within the leader-follower strategy to
train the top-level policy: followers estimate the leader's utility, promoting
effective cooperation among agents. Additionally, we incorporate a target
selector, aligned with the UAVs' posture, to assess the threat level of targets.
Finally, simulation experiments validate the effectiveness of our proposed
method.
|
2501.13133
|
Graph Representation Learning with Diffusion Generative Models
|
cs.LG cs.AI
|
Diffusion models have established themselves as state-of-the-art generative
models across various data modalities, including images and videos, due to
their ability to accurately approximate complex data distributions. Unlike
traditional generative approaches such as VAEs and GANs, diffusion models
employ a progressive denoising process that transforms noise into meaningful
data over multiple iterative steps. This gradual approach enhances their
expressiveness and generation quality. Moreover, diffusion models have also
been shown to extract meaningful representations from data while learning
to generate samples. Despite their success, the application of diffusion models
to graph-structured data remains relatively unexplored, primarily due to the
discrete nature of graphs, which necessitates discrete diffusion processes
distinct from the continuous methods used in other domains. In this work, we
leverage the representational capabilities of diffusion models to learn
meaningful embeddings for graph data. By training a discrete diffusion model
within an autoencoder framework, we enable both effective autoencoding and
representation learning tailored to the unique characteristics of
graph-structured data. At inference time, only the encoder is needed to extract
representations. Our approach demonstrates the potential of discrete diffusion
models to be used for graph representation learning.
|
2501.13134
|
UniRestore: Unified Perceptual and Task-Oriented Image Restoration Model
Using Diffusion Prior
|
eess.IV cs.LG
|
Image restoration aims to recover content from inputs degraded by various
factors, such as adverse weather, blur, and noise. Perceptual Image Restoration
(PIR) methods improve visual quality but often do not support downstream tasks
effectively. On the other hand, Task-oriented Image Restoration (TIR) methods
focus on enhancing image utility for high-level vision tasks, sometimes
compromising visual quality. This paper introduces UniRestore, a unified image
restoration model that bridges the gap between PIR and TIR by using a diffusion
prior. The diffusion prior is designed to generate images that align with human
visual quality preferences, but these images are often unsuitable for TIR
scenarios. To address this limitation, UniRestore utilizes encoder features from
an autoencoder to adapt the diffusion prior to specific tasks. We propose a
Complementary Feature Restoration Module (CFRM) to reconstruct degraded encoder
features and a Task Feature Adapter (TFA) module to facilitate adaptive feature
fusion in the decoder. This design allows UniRestore to optimize images for
both human perception and downstream task requirements, addressing
discrepancies between visual quality and functional needs. Integrating these
modules also enhances UniRestore's adaptability and efficiency across diverse
tasks. Extensive experiments demonstrate the superior performance of UniRestore
in both PIR and TIR scenarios.
|
2501.13135
|
Applications and Challenges of AI and Microscopy in Life Science
Research: A Review
|
q-bio.OT cs.AI physics.med-ph q-bio.SC
|
The complexity of human biology and its intricate systems holds immense
potential for advancing human health, disease treatment, and scientific
discovery. However, traditional manual methods for studying biological
interactions are often constrained by the sheer volume and complexity of
biological data. Artificial Intelligence (AI), with its proven ability to
analyze vast datasets, offers a transformative approach to addressing these
challenges. This paper explores the intersection of AI and microscopy in life
sciences, emphasizing their potential applications and associated challenges.
We provide a detailed review of how various biological systems can benefit from
AI, highlighting the types of data and labeling requirements unique to this
domain. Particular attention is given to microscopy data, exploring the
specific AI techniques required to process and interpret this information. By
addressing challenges such as data heterogeneity and annotation scarcity, we
outline potential solutions and emerging trends in the field. Written primarily
from an AI perspective, this paper aims to serve as a valuable resource for
researchers working at the intersection of AI, microscopy, and biology. It
summarizes current advancements, key insights, and open problems, fostering an
understanding that encourages interdisciplinary collaborations. By offering a
comprehensive yet concise synthesis of the field, this paper aspires to
catalyze innovation, promote cross-disciplinary engagement, and accelerate the
adoption of AI in life science research.
|
2501.13136
|
Forecasting of Bitcoin Prices Using Hashrate Features: Wavelet and Deep
Stacking Approach
|
q-fin.ST cs.AI cs.LG
|
Digital currencies have become popular in the last decade due to their
independence and decentralized nature. The prices of these currencies have seen
substantial fluctuations at times, which has increased the need for prediction. As
the most popular of them, Bitcoin (BTC) has become a research hotspot. The main
challenge posed by digital currencies, especially BTC, is price fluctuation,
which motivates the study of fundamental price prediction models. This
research presents a classification and regression model based on stacked deep
learning that uses wavelet denoising to predict the movements and prices of
BTC at different time intervals. The proposed model based on the stacking
technique uses models based on deep learning, especially neural networks and
transformers, for one, seven, thirty and ninety-day forecasting. Three feature
selection models, Chi2, RFE and Embedded, were also applied to the data in the
pre-processing stage. The classification model achieved 63\% accuracy for
predicting the next day, and 64\%, 67\% and 82\% for the seven-, thirty- and
ninety-day horizons, respectively. For daily price forecasting, the
percentage error was reduced to 0.58, while the error ranged from 2.72\% to
2.85\% for seven- to ninety-day horizons. These results show that the proposed
model performed better than other models in the literature.
|
2501.13137
|
On the reproducibility of discrete-event simulation studies in health
research: an empirical study using open models
|
q-bio.OT cs.SY eess.SY
|
Reproducibility of computational research is critical for ensuring
transparency, reliability and reusability. Challenges with computational
reproducibility have been documented in several fields, but healthcare
discrete-event simulation (DES) models have not been thoroughly examined in
this context. This study assessed the computational reproducibility of eight
published healthcare DES models (Python or R), selected to represent diverse
contexts, complexities, and years of publication. Repositories and articles
were also assessed against guidelines and reporting standards, offering
insights into their relationship with reproducibility success. Reproducing
results required up to 28 hours of troubleshooting per model, with 50% fully
reproduced and 50% partially reproduced (12.5% to 94.1% of reported outcomes).
Key barriers included the absence of open licences, discrepancies between
reported and coded parameters, and missing code to produce model outputs, run
scenarios, and generate tables and figures. Addressing these issues would often
require relatively little effort from authors: adding an open licence and
sharing all materials used to produce the article. Actionable recommendations
are proposed to enhance reproducibility practices for simulation modellers and
reviewers.
|
2501.13139
|
Efficient Implementation of LinearUCB through Algorithmic Improvements
and Vector Computing Acceleration for Embedded Learning Systems
|
cs.LG cs.AR
|
As the Internet of Things expands, embedding Artificial Intelligence
algorithms in resource-constrained devices has become increasingly important to
enable real-time, autonomous decision-making without relying on centralized
cloud servers. However, implementing and executing complex algorithms in
embedded devices poses significant challenges due to limited computational
power, memory, and energy resources. This paper presents algorithmic and
hardware techniques to efficiently implement two LinearUCB Contextual Bandits
algorithms on resource-constrained embedded devices. Algorithmic modifications
based on the Sherman-Morrison-Woodbury formula streamline model complexity,
while vector acceleration is harnessed to speed up matrix operations. We
analyze the impact of each optimization individually and then combine them in a
two-pronged strategy. The results show notable improvements in execution time
and energy consumption, demonstrating the effectiveness of combining
algorithmic and hardware optimizations to enhance learning models for edge
computing environments with low-power and real-time requirements.
|
2501.13141
|
AirRadar: Inferring Nationwide Air Quality in China with Deep Neural
Networks
|
cs.LG cs.AI
|
Monitoring real-time air quality is essential for safeguarding public health
and fostering social progress. However, the widespread deployment of air
quality monitoring stations is constrained by their significant costs. To
address this limitation, we introduce \emph{AirRadar}, a deep neural network
designed to accurately infer real-time air quality in locations lacking
monitoring stations by utilizing data from existing ones. By leveraging
learnable mask tokens, AirRadar reconstructs air quality features in
unmonitored regions. Specifically, it operates in two stages: first capturing
spatial correlations and then adjusting for distribution shifts. We validate
AirRadar's efficacy using a year-long dataset from 1,085 monitoring stations
across China, demonstrating its superiority over multiple baselines, even with
varying degrees of unobserved data. The source code can be accessed at
https://github.com/CityMind-Lab/AirRadar.
|
2501.13165
|
QuFeX: Quantum feature extraction module for hybrid quantum-classical
deep neural networks
|
quant-ph cs.AI cs.LG
|
We introduce Quantum Feature Extraction (QuFeX), a novel quantum machine
learning module. The proposed module enables feature extraction in a
reduced-dimensional space, significantly decreasing the number of parallel
evaluations required in typical quantum convolutional neural network
architectures. Its design allows seamless integration into deep classical
neural networks, making it particularly suitable for hybrid quantum-classical
models. As an application of QuFeX, we propose Qu-Net -- a hybrid architecture
which integrates QuFeX at the bottleneck of a U-Net architecture. The latter is
widely used for image segmentation tasks such as medical imaging and autonomous
driving. Our numerical analysis indicates that the Qu-Net can achieve superior
segmentation performance compared to a U-Net baseline. These results highlight
the potential of QuFeX to enhance deep neural networks by leveraging hybrid
computational paradigms, providing a path towards a robust framework for
real-world applications requiring precise feature extraction.
|
2501.13181
|
Learning in Log-Domain: Subthreshold Analog AI Accelerator Based on
Stochastic Gradient Descent
|
cs.AR cs.AI
|
The rapid proliferation of AI models, coupled with growing demand for edge
deployment, necessitates the development of AI hardware that is both
high-performance and energy-efficient. In this paper, we propose a novel analog
accelerator architecture designed for AI/ML training workloads using stochastic
gradient descent with L2 regularization (SGDr). The architecture leverages
log-domain circuits in subthreshold MOS and incorporates volatile memory. We
establish a mathematical framework for solving SGDr in the continuous-time
domain and detail the mapping of the SGDr learning equations to log-domain
circuits. By operating in the analog domain and utilizing weak inversion, the
proposed design achieves significant reductions in transistor area and power
consumption compared to digital implementations. Experimental results
demonstrate that the architecture closely approximates ideal behavior, with a
mean square error below 0.87% and precision as low as 8 bits. Furthermore, the
architecture supports a wide range of hyperparameters. This work paves the way
for energy-efficient analog AI hardware with on-chip training capabilities.
|
2501.13183
|
MONA: Moving Object Detection from Videos Shot by Dynamic Camera
|
cs.CV
|
Dynamic urban environments, characterized by moving cameras and objects, pose
significant challenges for camera trajectory estimation by complicating the
distinction between camera-induced and object motion. We introduce MONA, a
novel framework designed for robust moving object detection and segmentation
from videos shot by dynamic cameras. MONA comprises two key modules: Dynamic
Points Extraction, which leverages optical flow and tracking-any-point methods to
identify dynamic points, and Moving Object Segmentation, which employs adaptive
bounding box filtering and the Segment Anything Model for precise moving object
segmentation. We validate MONA by integrating it with the camera trajectory
estimation method LEAP-VO, and it achieves state-of-the-art results on the MPI
Sintel dataset compared to existing methods. These results demonstrate MONA's
effectiveness for moving object detection and its potential in many other
applications in the urban planning field.
|
2501.13188
|
Topological constraints on self-organisation in locally interacting
systems
|
cond-mat.stat-mech cs.LG nlin.AO q-bio.CB
|
All intelligence is collective intelligence, in the sense that it is made of
parts which must align with respect to system-level goals. Understanding the
dynamics which facilitate or limit navigation of problem spaces by aligned
parts thus impacts many fields ranging across life sciences and engineering. To
that end, consider a system on the vertices of a planar graph, with pairwise
interactions prescribed by the edges of the graph. Such systems can sometimes
exhibit long-range order, distinguishing one phase of macroscopic behaviour
from another. In networks of interacting systems we may view spontaneous
ordering as a form of self-organisation, modelling neural and basal forms of
cognition. Here, we discuss necessary conditions on the topology of the graph
for an ordered phase to exist, with an eye towards finding constraints on the
ability of a system with local interactions to maintain an ordered target
state. By studying the scaling of free energy under the formation of domain
walls in three model systems -- the Potts model, autoregressive models, and
hierarchical networks -- we show how the combinatorics of interactions on a
graph prevent or allow spontaneous ordering. As an application we are able to
analyse why multiscale systems like those prevalent in biology are capable of
organising into complex patterns, whereas rudimentary language models are
challenged by long sequences of outputs.
|
2501.13189
|
Map Prediction and Generative Entropy for Multi-Agent Exploration
|
cs.RO cs.CV cs.LG
|
Traditionally, autonomous reconnaissance applications have acted on explicit
sets of historical observations. Aided by recent breakthroughs in generative
technologies, this work enables robot teams to act beyond what is currently
known about the environment by inferring a distribution of reasonable
interpretations of the scene. We developed a map predictor that inpaints the
unknown space in a multi-agent 2D occupancy map during an exploration mission.
From a comparison of several inpainting methods, we found that a fine-tuned
latent diffusion inpainting model could provide rich and coherent
interpretations of simulated urban environments with relatively little
computation time. By iteratively inferring interpretations of the scene
throughout an exploration run, we are able to identify areas that exhibit high
uncertainty in the prediction, which we formalize with the concept of
generative entropy. We prioritize tasks in regions of high generative entropy,
hypothesizing that this will expedite convergence on an accurate predicted map
of the scene. In our study we juxtapose this new paradigm of task ranking with
the state of the art, which ranks the regions to explore by those that maximize
expected information recovery. We compare both of these methods in a simulated
urban environment with three vehicles. Our results demonstrate that by using
our new task ranking method, we can predict a correct scene significantly
faster than with a traditional information-guided method.
|
2501.13192
|
Remote State Estimation over Unreliable Channels with Unreliable
Feedback: Fundamental Limits
|
cs.IT math.IT math.OC
|
This article is concerned with networked estimation in a system composed of a
source that is observed by a sensor, a remote monitor that needs to estimate
the state of the source in real time, and a communication channel that connects
the source to the monitor. The source is a partially observable dynamical
process, and the communication channel is a packet-erasure channel with
feedback. Our main objective is to obtain the fundamental performance limits of
the underlying networked system in the sense of a causal tradeoff between the
packet rate and the mean square error when both forward and backward channels
are unreliable. We characterize an optimal coding policy profile consisting of
a scheduling policy for the encoder and an estimation policy for the decoder.
We complement our theoretical results with a numerical analysis, and compare
the performance limits of the networked system in different communication
regimes.
|
2501.13193
|
Revisiting Data Augmentation for Ultrasound Images
|
eess.IV cs.CV
|
Data augmentation is a widely used and effective technique to improve the
generalization performance of deep neural networks. Yet, despite often facing
limited data availability when working with medical images, it is frequently
underutilized. This appears to come from a gap in our collective understanding
of the efficacy of different augmentation techniques across different tasks and
modalities. One modality where this is especially true is ultrasound imaging.
This work addresses this gap by analyzing the effectiveness of different
augmentation techniques at improving model performance across a wide range of
ultrasound image analysis tasks. To achieve this, we introduce a new
standardized benchmark of 14 ultrasound image classification and semantic
segmentation tasks from 10 different sources and covering 11 body regions. Our
results demonstrate that many of the augmentations commonly used for tasks on
natural images are also effective on ultrasound images, even more so than
augmentations developed specifically for ultrasound images in some cases. We
also show that diverse augmentation using TrivialAugment, which is widely used
for natural images, is also effective for ultrasound images. Moreover, our
proposed methodology represents a structured approach for assessing various
data augmentations that can be applied to other contexts and modalities.
|
2501.13198
|
S-LoRA: Scalable Low-Rank Adaptation for Class Incremental Learning
|
cs.LG
|
Continual Learning with foundation models has recently emerged as a promising
approach to harnessing the power of pre-trained models for sequential tasks.
Existing prompt-based methods generally use a gating mechanism to select
relevant prompts aligned with the test query for further processing. However,
the success of these methods largely depends on the precision of the gating
mechanism, which becomes less scalable and incurs additional computational
overhead as the number of tasks increases. To overcome these issues, we propose a Scalable Low-Rank
Adaptation (S-LoRA) method for CL (in particular class incremental learning),
which incrementally decouples the learning of the direction and magnitude of
LoRA parameters. S-LoRA supports efficient inference by employing the
last-stage trained model for direct testing without a gating process. Our
theoretical and empirical analysis demonstrates that S-LoRA tends to follow a
low-loss trajectory that converges to an overlapped low-loss region, resulting
in an excellent stability-plasticity trade-off in CL. Furthermore, based on our
findings, we develop variants of S-LoRA with further improved scalability.
Extensive experiments across multiple CL benchmarks and various foundation
models consistently validate the effectiveness of S-LoRA.
|
2501.13199
|
Symbolic Control for Autonomous Docking of Marine Surface Vessels
|
eess.SY cs.SY
|
Docking marine surface vessels remains a largely manual task due to its
safety-critical nature. In this paper, we develop a hierarchical symbolic
control architecture for autonomous docking maneuvers of a dynamic positioning
vessel, to provide formal safety guarantees. At the upper-level, we treat the
vessel's desired surge, sway, and yaw velocities as control inputs and
synthesize a symbolic controller in real-time. The desired velocities are then
transmitted to and executed by the vessel's low-level velocity feedback control
loop. Given a synthesized symbolic controller, we investigate methods to
optimize the performance of the proposed control scheme for the docking task.
The efficacy of this methodology is evaluated on a low-fidelity simulation
model of a marine surface vessel in the presence of static and dynamic
obstacles and, for the first time, through physical experiments on a scaled
model vessel.
|
2501.13200
|
SRMT: Shared Memory for Multi-agent Lifelong Pathfinding
|
cs.LG cs.AI cs.MA
|
Multi-agent reinforcement learning (MARL) demonstrates significant progress
in solving cooperative and competitive multi-agent problems in various
environments. One of the principal challenges in MARL is the need for explicit
prediction of the agents' behavior to achieve cooperation. To resolve this
issue, we propose the Shared Recurrent Memory Transformer (SRMT) which extends
memory transformers to multi-agent settings by pooling and globally
broadcasting individual working memories, enabling agents to exchange
information implicitly and coordinate their actions. We evaluate SRMT on the
Partially Observable Multi-Agent Pathfinding problem in a toy Bottleneck
navigation task that requires agents to pass through a narrow corridor and on a
POGEMA benchmark set of tasks. In the Bottleneck task, SRMT consistently
outperforms a variety of reinforcement learning baselines, especially under
sparse rewards, and generalizes effectively to longer corridors than those seen
during training. On POGEMA maps, including Mazes, Random, and MovingAI, SRMT is
competitive with recent MARL, hybrid, and planning-based algorithms. These
results suggest that incorporating shared recurrent memory into the
transformer-based architectures can enhance coordination in decentralized
multi-agent systems. The source code for training and evaluation is available
on GitHub: https://github.com/Aloriosa/srmt.
|
2501.13201
|
Polyhedral Collision Detection via Vertex Enumeration
|
cs.CG cs.RO
|
Collision detection is a critical functionality for robotics. The degree to
which objects collide cannot be represented as a continuously differentiable
function for any shapes other than spheres. This paper proposes a framework for
handling collision detection between polyhedral shapes. We frame the signed
distance between two polyhedral bodies as the optimal value of a convex
optimization, and consider constraining the signed distance in a bilevel
optimization problem. To avoid relying on specialized bilevel solvers, our
method exploits the fact that the signed distance is the minimal point of a
convex region related to the two bodies. Our method enumerates the values
obtained at all extreme points of this region and lists them as constraints in
the higher-level problem. We compare our formulation to existing methods in
terms of reliability and speed when solved using the same mixed complementarity
problem solver. We demonstrate that our approach more reliably solves difficult
collision detection problems with multiple obstacles than other methods, and is
faster than existing methods in some cases.
|
2501.13203
|
Safe and Efficient Robot Action Planning in the Presence of Unconcerned
Humans
|
cs.RO math.OC
|
This paper proposes a robot action planning scheme that provides an efficient
and probabilistically safe plan for a robot interacting with an unconcerned
human -- someone who is either unaware of the robot's presence or unwilling to
engage in ensuring safety. The proposed scheme is predictive, meaning that the
robot is required to predict human actions over a finite future horizon; such
predictions are often inaccurate in real-world scenarios. One possible approach
to reduce the uncertainties is to provide the robot with the capability of
reasoning about the human's awareness of potential dangers. This paper
shows that, by using a binary variable called the danger awareness
coefficient, it is possible to differentiate between concerned and unconcerned
humans, and provides a learning algorithm to determine this coefficient by
observing human actions. Moreover, this paper argues that humans rely on
predictions of other agents' future actions (including those of robots in
human-robot interaction) in their decision-making. It also shows that ignoring
this aspect when predicting the human's future actions can significantly degrade the
efficiency of the interaction, causing agents to deviate from their optimal
paths. The proposed robot action planning scheme is verified and validated via
extensive simulation and experimental studies on a LoCoBot WidowX-250.
|
2501.13212
|
Covert Communication via Action-Dependent States
|
cs.IT math.IT
|
This paper studies covert communication over channels with action-dependent
state information (ADSI) when the state is available either non-causally or
causally at the transmitter. Covert
communication refers to reliable communication between a transmitter and a
receiver while ensuring a low probability of detection by an adversary, which
we refer to as a `warden'. It is well known that in a point-to-point discrete
memoryless channel (DMC), it is
possible to communicate on the order of $\sqrt{N}$ bits reliably and covertly
over $N$ channel uses while the transmitter and the receiver are required to
share a secret key on the order of $\sqrt{N}$ bits. This paper studies
achieving reliable and covert communication at a positive rate, i.e., reliable
and covert communication on the order of $N$ bits in $N$ channel uses, over a
channel with ADSI while the transmitter has non-causal or causal access to the
ADSI, and the transmitter and the receiver share a secret key of negligible
rate. We derive achievable rates for both the non-causal and causal scenarios
by using block-Markov encoding and secret key generation from the ADSI, which
subsumes the best achievable rates for channels with random states. We also
derive upper bounds, for both non-causal and causal scenarios, that meet our
achievable rates for some special cases. As an application of our problem
setup, we study covert communication over channels with rewrite options, which
are closely related to recording covert information on memory, and show that a
positive covert rate can be achieved in such channels. As a special case of our
problem, we study the AWGN channels and provide lower and upper bounds on the
covert capacity that meet when the transmitter and the receiver share a secret
key of sufficient rate and when the warden's channel is noisier than the
legitimate receiver channel. As another application of our problem setup, we
show that cooperation can lead to a positive covert rate in Gaussian channels.
|
2501.13215
|
A model for French voters
|
cs.SI physics.soc-ph
|
Models of opinion dynamics describe how opinions are shaped in various
environments. While these models are able to replicate macroscopic opinion
distributions observed in real-world scenarios, their capacity to align with
data at the microscopic level remains mostly untested. We evaluate the
capacity of the multi-state voter model with zealots to capture individual
opinions in a fine-grained Twitter dataset collected during the 2017 French
Presidential elections. Our findings reveal a strong correspondence between
individual opinion distributions in the equilibrium state of the model and
ground-truth political leanings of the users. Additionally, we demonstrate that
discord probabilities accurately identify pairs of like-minded users. These
results emphasize the validity of the voter model in complex settings, and
advocate for further empirical evaluations of opinion dynamics models at the
microscopic level.
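As a minimal illustration of the dynamics being fitted, a multi-state voter model with zealots can be simulated as follows (a toy sketch on a complete graph with three states; the graph, state set, and update rule are simplifying assumptions, not the paper's exact setup):

```python
import random

def voter_model_step(opinions, zealots, rng):
    """One update of the multi-state voter model with zealots.

    opinions: dict node -> state; zealots: dict node -> fixed state.
    A random non-zealot node copies the opinion of a random other node;
    zealots never change state.
    """
    free_nodes = [n for n in opinions if n not in zealots]
    node = rng.choice(free_nodes)
    neighbor = rng.choice([n for n in opinions if n != node])
    opinions[node] = opinions[neighbor]

rng = random.Random(0)
opinions = {i: rng.choice(["A", "B", "C"]) for i in range(50)}
zealots = {0: "A", 1: "B"}  # committed agents pinned to their states
for n, s in zealots.items():
    opinions[n] = s
for _ in range(2000):
    voter_model_step(opinions, zealots, rng)
```

In equilibrium, the distribution of each free node's opinion over many runs reflects the zealot composition, which is the individual-level quantity the paper compares against ground-truth political leanings.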
|
2501.13219
|
Enhancing Multi-Attribute Fairness in Healthcare Predictive Modeling
|
cs.LG cs.CY
|
Artificial intelligence (AI) systems in healthcare have demonstrated
remarkable potential to improve patient outcomes. However, if not designed with
fairness in mind, they also carry the risks of perpetuating or exacerbating
existing health disparities. Although numerous fairness-enhancing techniques
have been proposed, most focus on a single sensitive attribute and neglect the
broader impact that optimizing fairness for one attribute may have on the
fairness of other sensitive attributes. In this work, we introduce a novel
approach to multi-attribute fairness optimization in healthcare AI, tackling
fairness concerns across multiple demographic attributes concurrently. Our
method follows a two-phase approach: initially optimizing for predictive
performance, followed by fine-tuning to achieve fairness across multiple
sensitive attributes. We develop our proposed method using two strategies,
sequential and simultaneous. Our results show a significant reduction in
Equalized Odds Disparity (EOD) for multiple attributes, while maintaining high
predictive accuracy. Notably, we demonstrate that single-attribute fairness
methods can inadvertently increase disparities in non-targeted attributes,
whereas simultaneous multi-attribute optimization achieves more balanced
fairness improvements across all attributes. These findings highlight the
importance of comprehensive fairness strategies in healthcare AI and offer
promising directions for future research in this critical area.
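For concreteness, one common way to compute an Equalized Odds Disparity for a single sensitive attribute is the sum of the largest gaps in true-positive and false-positive rates across groups (a sketch; exact definitions vary across papers, and the multi-attribute setting aggregates such a quantity over several attributes):

```python
def equalized_odds_disparity(y_true, y_pred, group):
    """EOD for one sensitive attribute: max TPR gap plus max FPR gap
    across groups (one common definition)."""
    def rates(g):
        tp = sum(1 for t, p, a in zip(y_true, y_pred, group) if a == g and t == 1 and p == 1)
        pos = sum(1 for t, a in zip(y_true, group) if a == g and t == 1)
        fp = sum(1 for t, p, a in zip(y_true, y_pred, group) if a == g and t == 0 and p == 1)
        neg = sum(1 for t, a in zip(y_true, group) if a == g and t == 0)
        return tp / pos, fp / neg

    tprs, fprs = zip(*(rates(g) for g in sorted(set(group))))
    return (max(tprs) - min(tprs)) + (max(fprs) - min(fprs))

# Toy data: group "b" gets perfect predictions, group "a" does not -> EOD 1.0.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]
group = ["a"] * 4 + ["b"] * 4
eod = equalized_odds_disparity(y_true, y_pred, group)  # -> 1.0
```

A perfect classifier yields an EOD of zero for every attribute; fine-tuning in the paper's second phase drives such disparities down jointly across attributes.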
|
2501.13223
|
Scaling for Fairness? Analyzing Model Size, Data Composition, and
Multilinguality in Vision-Language Bias
|
cs.LG
|
As large scale vision language models become increasingly central to modern
AI applications, understanding and mitigating social biases in these systems
has never been more critical. We investigate how dataset composition, model
size, and multilingual training affect gender and racial bias in a popular VLM,
CLIP, and its open source variants. In particular, we systematically evaluate
models trained on varying dataset scales and architectures, as well as
multilingual versions encompassing English along with Persian, Turkish, and
Finnish, languages with minimal gender marking. To assess social perception
bias, we measure the zero-shot performance on face images featuring socially
charged terms rooted in the psychological constructs of communion and agency,
and demographic labeling bias using both the FairFace and PATA datasets.
Our findings reveal three key insights. First, while larger training datasets
can mitigate some biases, they may also introduce or amplify others when the
data composition is imbalanced. Second, although increasing model size
generally improves performance, it does not consistently reduce bias and can,
in certain cases, exacerbate it. Finally, while multilingual training broadens
linguistic coverage, it does not inherently neutralize bias and can transfer or
intensify inequities across languages. Taken together, these results highlight
the necessity of inclusive, carefully curated training data to foster fairness
rather than relying solely on model scaling or language expansion. We provide a
systematic evaluation for vision language bias across diverse demographics,
underscoring the urgent need for intentional bias mitigation strategies in
next-generation AI systems.
|
2501.13225
|
MLPs at the EOC: Spectrum of the NTK
|
cs.LG
|
We study the properties of the Neural Tangent Kernel (NTK)
$\overset{\scriptscriptstyle\infty}{K} : \mathbb{R}^{m_0} \times
\mathbb{R}^{m_0} \to \mathbb{R}^{m_l \times m_l}$ corresponding to infinitely
wide $l$-layer Multilayer Perceptrons (MLPs) taking inputs from
$\mathbb{R}^{m_0}$ to outputs in $\mathbb{R}^{m_l}$ equipped with activation
functions $\phi(s) = a s + b \vert s \vert$ for some $a,b \in \mathbb{R}$ and
initialized at the Edge Of Chaos (EOC). We find that the entries
$\overset{\scriptscriptstyle\infty}{K}(x_1,x_2)$ are approximated increasingly
well, as the depth $l$ increases, by the inverses of the cosine distances of
the activations corresponding to $x_1$ and $x_2$. By quantifying these
inverse cosine distances and the spectrum of the matrix containing them, we
obtain tight spectral bounds for the NTK matrix
$\overset{\scriptscriptstyle\infty}{K} = [\frac{1}{n}
\overset{\scriptscriptstyle\infty}{K}(x_{i_1},x_{i_2}) : i_1, i_2 \in [1:n]]$
over a dataset $\{x_1,\cdots,x_n\} \subset \mathbb{R}^{m_0}$, transferred from
the inverse cosine distance matrix via our approximation result. Our results
show that $\Delta_\phi = \frac{b^2}{a^2+b^2}$ determines the rate at which the
condition number of the NTK matrix converges to its limit as depth increases,
implying in particular that the absolute value ($\Delta_\phi=1$) is better than
the ReLU ($\Delta_\phi=\frac{1}{2}$) in this regard.
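The role of $\Delta_\phi$ is easy to check numerically; a small sketch using the abstract's parameterization, in which the absolute value is $a=0, b=1$ and the ReLU is the case $a=b=\tfrac{1}{2}$:

```python
def phi(s, a, b):
    """Activation phi(s) = a*s + b*|s| from the abstract."""
    return a * s + b * abs(s)

def delta_phi(a, b):
    """Delta_phi = b^2 / (a^2 + b^2), which per the abstract governs how fast
    the NTK matrix condition number converges to its limit with depth."""
    return b * b / (a * a + b * b)

# ReLU is the case a = b = 1/2, since 0.5*s + 0.5*|s| = max(s, 0):
abs_rate = delta_phi(0.0, 1.0)   # absolute value -> Delta_phi = 1.0
relu_rate = delta_phi(0.5, 0.5)  # ReLU           -> Delta_phi = 0.5
```

Larger $\Delta_\phi$ means faster convergence of the condition number, which is the sense in which the absolute value beats the ReLU here.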
|
2501.13230
|
Let SSMs be ConvNets: State-space Modeling with Optimal Tensor
Contractions
|
cs.LG cs.AI
|
We introduce Centaurus, a class of networks composed of generalized
state-space model (SSM) blocks, where the SSM operations can be treated as
tensor contractions during training. The optimal order of tensor contractions
can then be systematically determined for every SSM block to maximize training
efficiency. This allows more flexibility in designing SSM blocks beyond the
depthwise-separable configuration commonly implemented. The new design choices
take inspiration from classical convolutional blocks, including group
convolutions, full convolutions, and bottleneck blocks. We architect the
Centaurus network with a mixture of these blocks to balance network
size and performance, as well as memory and computational efficiency, during
both training and inference. We show that this heterogeneous network design
outperforms its homogeneous counterparts in raw audio processing tasks
including keyword spotting, speech denoising, and automatic speech recognition
(ASR). For ASR, Centaurus is the first network with competitive performance
that can be made fully state-space based, without using any nonlinear
recurrence (LSTMs), explicit convolutions (CNNs), or (surrogate) attention
mechanisms. Source code is available at github.com/Brainchip-Inc/Centaurus.
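The core idea of treating SSM operations as tensor contractions with a systematically chosen contraction order can be illustrated with `numpy.einsum` (a toy sketch with hypothetical mixing tensors; this is not the actual Centaurus SSM recurrence):

```python
import numpy as np

B, L, D, N = 4, 128, 32, 16   # batch, sequence length, channels, state size
rng = np.random.default_rng(0)
u = rng.standard_normal((B, L, D))  # input sequence
K = rng.standard_normal((D, N))     # hypothetical input-to-state mixing
C = rng.standard_normal((N, D))     # hypothetical state-to-output mixing

# optimize="optimal" makes einsum search for the cheapest contraction order
# instead of contracting strictly left to right, which is the efficiency
# lever the abstract describes for each SSM block.
y = np.einsum("bld,dn,ne->ble", u, K, C, optimize="optimal")
```

Here contracting `K` with `C` first (a small `D x N` by `N x D` product) before touching the long sequence axis is much cheaper than the left-to-right order, and the planner finds this automatically.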
|
2501.13233
|
"See You Later, Alligator": Impacts of Robot Small Talk on Task,
Rapport, and Interaction Dynamics in Human-Robot Collaboration
|
cs.RO cs.HC
|
Small talk can foster rapport building in human-human teamwork; yet how
non-anthropomorphic robots, such as collaborative manipulators commonly used in
industry, may capitalize on these social communications remains unclear. This
work investigates how robot-initiated small talk influences task performance,
rapport, and interaction dynamics in human-robot collaboration. We developed an
autonomous robot system that assists a human in an assembly task while
initiating and engaging in small talk. A user study ($N = 58$) was conducted in
which participants worked with either a functional robot, which engaged in only
task-oriented speech, or a social robot, which also initiated small talk. Our
study found that participants in the social condition reported significantly
higher levels of rapport with the robot. Moreover, all participants in the
social condition responded to the robot's small talk attempts; 59% initiated
questions to the robot, and 73% engaged in lingering conversations after
requesting the final task item. Although active working times were similar
across conditions, participants in the social condition recorded longer task
durations than those in the functional condition. We discuss the design and
implications of robot small talk in shaping human-robot collaboration.
|