| id | title | categories | abstract |
|---|---|---|---|
2502.04530
|
Robust Probabilistic Model Checking with Continuous Reward Domains
|
cs.AI cs.FL cs.LG
|
Probabilistic model checking traditionally verifies properties on the
expected value of a measure of interest. This restriction may fail to capture
the quality of service of a significant proportion of a system's runs,
especially when the probability distribution of the measure of interest is
poorly represented by its expected value due to heavy-tail behaviors or
multiple modalities. Recent works inspired by distributional reinforcement
learning use discrete histograms to approximate integer reward distributions,
but they struggle with continuous reward spaces and present challenges in
balancing accuracy and scalability. We propose a novel method for handling both
continuous and discrete reward distributions in Discrete Time Markov Chains
using moment matching with Erlang mixtures. By analytically deriving
higher-order moments through Moment Generating Functions, our method
approximates the reward distribution with theoretically bounded error while
preserving the statistical properties of the true distribution. This detailed
distributional insight enables the formulation and robust model checking of
quality properties based on the entire reward distribution function, rather
than restricting to its expected value. We include a theoretical foundation
ensuring bounded approximation errors, along with an experimental evaluation
demonstrating our method's accuracy and scalability in practical model-checking
problems.
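The moment-matching idea above can be illustrated with a minimal sketch. This is not the paper's method — it matches only the first two moments with a single Erlang component, whereas the paper fits Erlang mixtures to higher-order moments derived from Moment Generating Functions — but it shows the basic mechanics:

```python
def erlang_moment_match(mean, var):
    """Pick an Erlang(k, lam) whose first two moments match the targets.

    An Erlang has mean k/lam and variance k/lam**2, so k ~ mean**2/var
    (rounded to an integer shape >= 1) and lam = k/mean.
    """
    k = max(1, round(mean ** 2 / var))
    lam = k / mean
    return k, lam

def erlang_moments(k, lam):
    """First two moments (mean, variance) of Erlang(k, lam)."""
    return k / lam, k / lam ** 2
```

For a target mean of 2.0 and variance of 1.0 this yields shape 4 and rate 2.0, whose Erlang moments recover the targets exactly; mixing several such components is what lets the paper also match higher-order moments.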
|
2502.04531
|
AnyPlace: Learning Generalized Object Placement for Robot Manipulation
|
cs.RO cs.AI cs.CV
|
Object placement in robotic tasks is inherently challenging due to the
diversity of object geometries and placement configurations. To address this,
we propose AnyPlace, a two-stage method trained entirely on synthetic data,
capable of predicting a wide range of feasible placement poses for real-world
tasks. Our key insight is that by leveraging a Vision-Language Model (VLM) to
identify rough placement locations, we focus only on the relevant regions for
local placement, which enables us to train the low-level
placement-pose-prediction model to capture diverse placements efficiently. For
training, we generate a fully synthetic dataset of randomly generated objects
in different placement configurations (insertion, stacking, hanging) and train
local placement-prediction models. We conduct extensive evaluations in
simulation, demonstrating that our method outperforms baselines in terms of
success rate, coverage of possible placement modes, and precision. In
real-world experiments, we show how our approach directly transfers models
trained purely on synthetic data to the real world, where it successfully
performs placements in scenarios where other models struggle -- such as with
varying object geometries, diverse placement modes, and achieving high
precision for fine placement. More at: https://any-place.github.io.
|
2502.04533
|
Polarization-Dependent Loss Mitigation with Orthogonal-Design-Based
Precoding and Interference Cancellation
|
cs.IT math.IT
|
Recent work by Shehadeh and Kschischang provides a simple capacity-achieving
scheme for channels with polarization-dependent loss (PDL) under common
modeling assumptions via a careful choice of orthogonal-design-based precoding
and interference cancellation. This letter extends that work with a
simulation-based demonstration showing that this scheme remains highly
effective at mitigating PDL in the practical setting of 4-PAM with
Chase-decoded extended Hamming inner codes rather than the near-capacity inner
codes considered in the original work. An alternative near-optimal variation of
this scheme is provided that requires only one inner code rather than two and
suffers no penalty in the absence of PDL, making it much more practical.
|
2502.04535
|
A Decoding Algorithm for Length-Control Summarization Based on Directed
Acyclic Transformers
|
cs.CL
|
Length-control summarization aims to condense long texts into a short one
within a certain length limit. Previous approaches often use autoregressive
(AR) models and treat the length requirement as a soft constraint, which may
not always be satisfied. In this study, we propose a novel length-control
decoding algorithm based on the Directed Acyclic Transformer (DAT). Our
approach allows for multiple plausible sequence fragments and predicts a
\emph{path} to connect them. In addition, we propose a Sequence Maximum a
Posteriori (SeqMAP) decoding algorithm that marginalizes different possible
paths and finds the most probable summary satisfying the length budget. Our
algorithm is based on beam search, which further facilitates a reranker for
performance improvement. Experimental results on the Gigaword and DUC2004
datasets demonstrate our state-of-the-art performance for length-control
summarization.
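The length-budgeted search described above can be sketched generically. This is a toy illustration, not the SeqMAP algorithm: `next_probs` is a hypothetical model interface, and the DAT path marginalization and reranker are omitted; the sketch only shows how a beam search can enforce a hard length budget:

```python
import heapq
import math

def length_control_beam(next_probs, eos, budget, beam=4):
    """Toy beam search with a hard length budget.

    next_probs(seq) -> {token: prob} is an assumed model interface;
    hypotheses longer than `budget` tokens are pruned outright, so any
    returned sequence is guaranteed to satisfy the length constraint.
    """
    beams = [(0.0, [])]            # (cumulative negative log-prob, tokens)
    finished = []
    while beams:
        candidates = []
        for nll, seq in beams:
            if seq and seq[-1] == eos:
                finished.append((nll, seq))   # complete hypothesis
                continue
            for tok, p in next_probs(tuple(seq)).items():
                new = seq + [tok]
                if len(new) > budget:         # hard length constraint
                    continue
                candidates.append((nll - math.log(p), new))
        beams = heapq.nsmallest(beam, candidates)
    return min(finished)[1] if finished else None
```

With a toy model that prefers content tokens early and end-of-sequence late, the search returns the most probable hypothesis that still fits the budget, rather than treating length as a soft constraint.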
|
2502.04537
|
Multilingual Non-Autoregressive Machine Translation without Knowledge
Distillation
|
cs.CL
|
Multilingual neural machine translation (MNMT) aims to use a single model
for multiple translation directions. Recent work applies non-autoregressive
Transformers to improve the efficiency of MNMT, but requires expensive
knowledge distillation (KD) processes. To address this, we propose an M-DAT
approach to non-autoregressive multilingual machine translation. Our system
leverages the recent advance of the directed acyclic Transformer (DAT), which
does not require KD. We further propose a pivot back-translation (PivotBT)
approach to improve the generalization to unseen translation directions.
Experiments show that our M-DAT achieves state-of-the-art performance in
non-autoregressive MNMT.
|
2502.04541
|
The Phantom of the Elytra -- Phylogenetic Trait Extraction from Images
of Rove Beetles Using Deep Learning -- Is the Mask Enough?
|
cs.CV
|
Phylogenetic analysis traditionally relies on labor-intensive manual
extraction of morphological traits, limiting its scalability for large
datasets. Recent advances in deep learning offer the potential to automate this
process, but the effectiveness of different morphological representations for
phylogenetic trait extraction remains poorly understood. In this study, we
compare the performance of deep learning models using three distinct
morphological representations - full segmentations, binary masks, and Fourier
descriptors of beetle outlines. We test this on the Rove-Tree-11 dataset, a
curated collection of images from 215 rove beetle species. Our results
demonstrate that the mask-based model outperformed the others, achieving a
normalized Align Score of 0.33 ± 0.02 on the test set, compared to
0.45 ± 0.01 for the Fourier-based model and 0.39 ± 0.07 for
the segmentation-based model. The performance of the mask-based model likely
reflects its ability to capture shape features while taking advantage of the
depth and capacity of the ResNet50 architecture. These results also indicate
that dorsal textural features, at least in this group of beetles, may be of
lower phylogenetic relevance, though further investigation is necessary to
confirm this. In contrast, the Fourier-based model suffered from reduced
capacity and occasional inaccuracies in outline approximations, particularly in
fine structures like legs. These findings highlight the importance of selecting
appropriate morphological representations for automated phylogenetic studies
and the need for further research into explainability in automatic
morphological trait extraction.
|
2502.04543
|
Sparsity-Based Interpolation of External, Internal and Swap Regret
|
stat.ML cs.LG
|
Focusing on the expert problem in online learning, this paper studies the
interpolation of several performance metrics via $\phi$-regret minimization,
which measures the performance of an algorithm by its regret with respect to an
arbitrary action modification rule $\phi$. With $d$ experts and $T\gg d$ rounds
in total, we present a single algorithm achieving the instance-adaptive
$\phi$-regret bound \begin{equation*} \tilde
O\left(\min\left\{\sqrt{d-d^{\mathrm{unif}}_\phi+1},\sqrt{d-d^{\mathrm{self}}_\phi}\right\}\cdot\sqrt{T}\right),
\end{equation*} where $d^{\mathrm{unif}}_\phi$ is the maximum number of experts
modified identically by $\phi$, and $d^{\mathrm{self}}_\phi$ is the number of
experts that $\phi$ trivially modifies to themselves. By recovering the optimal
$O(\sqrt{T\log d})$ external regret bound when $d^{\mathrm{unif}}_\phi=d$, the
standard $\tilde O(\sqrt{T})$ internal regret bound when
$d^{\mathrm{self}}_\phi=d-1$ and the optimal $\tilde O(\sqrt{dT})$ swap regret
bound in the worst case, we improve existing results in the intermediate
regimes. In addition, the same algorithm achieves the optimal quantile regret
bound, which corresponds to even easier settings of $\phi$ than the external
regret.
Building on the classical reduction from $\phi$-regret minimization to
external regret minimization on stochastic matrices, our main idea is to
further convert the latter to online linear regression using
Haar-wavelet-inspired matrix features. Then, we apply a particular
$L_1$-version of comparator-adaptive online learning algorithms to exploit the
sparsity in this regression subroutine.
|
2502.04544
|
Solvability of Approximate Reach-Avoid Games
|
eess.SY cs.LO cs.SY
|
Objective: In a companion paper, we propose a parametric hybrid automaton
model and an algorithm for the online synthesis of robustly correct and
near-optimal controllers for cyber-physical systems with reach-avoid guarantees.
A key part of this synthesis problem is based on a weighted discretised game
and solved via scope-adaptive discrete dynamic programming. Approach: This work
examines proofs of key properties of the discussed algorithm. Evaluation: The
main proof is by induction over the stages of a discrete
Hamilton-Jacobi-Bellman system of equations. Contribution: The results include
a game solvability theorem and identify necessary and sufficient conditions for
its applicability.
|
2502.04548
|
Contextual Gradient Flow Modeling for Large Language Model
Generalization in Multi-Scale Feature Spaces
|
cs.CL
|
Optimization methodologies for training large-scale neural architectures
often rely on uniform gradient propagation mechanisms that fail to align with
hierarchical linguistic structures, limiting their capacity to generalize
across diverse language distributions. A structured gradient refinement
framework was introduced to incorporate multi-scale contextual adjustments,
improving parameter adaptation through dynamic weighting strategies that
enhanced representation coherence. Empirical evaluations demonstrated that
structured propagation mechanisms contributed to reductions in gradient
oscillations, resulting in more stable training dynamics and improved
optimization efficiency. The comparative performance assessment indicated that
models incorporating hierarchical propagation strategies exhibited greater
robustness in long-range dependency retention and cross-domain adaptation. The
hierarchical adjustment of weight updates provided an alternative to
conventional backpropagation, reducing sensitivity to initialization conditions
while improving overall convergence efficiency. The experimental results
confirmed that structured gradient propagation influenced representation
learning trajectories, aligning parameter updates with broader linguistic
dependencies rather than isolated token-level relationships. Statistical
evaluations indicated that structured optimization strategies mitigated
overfitting while preserving adaptability across heterogeneous text
distributions. The findings established that structured gradient propagation
provided an empirically validated framework for refining hierarchical
representation learning, supporting more effective integration of linguistic
dependencies into optimization dynamics.
|
2502.04549
|
Mechanisms of Projective Composition of Diffusion Models
|
cs.LG
|
We study the theoretical foundations of composition in diffusion models, with
a particular focus on out-of-distribution extrapolation and
length-generalization. Prior work has shown that composing distributions via
linear score combination can achieve promising results, including
length-generalization in some cases (Du et al., 2023; Liu et al., 2022).
However, our theoretical understanding of how and why such compositions work
remains incomplete. In fact, it is not even entirely clear what it means for
composition to "work". This paper starts to address these fundamental gaps. We
begin by precisely defining one possible desired result of composition, which
we call projective composition. Then, we investigate: (1) when linear score
combinations provably achieve projective composition, (2) whether
reverse-diffusion sampling can generate the desired composition, and (3) the
conditions under which composition fails. Finally, we connect our theoretical
analysis to prior empirical observations where composition has either worked or
failed, for reasons that were unclear at the time.
|
2502.04552
|
Reinforcement Learning Based Prediction of PID Controller Gains for
Quadrotor UAVs
|
eess.SY cs.RO cs.SY
|
A reinforcement learning (RL) based methodology is proposed and implemented
for online fine-tuning of PID controller gains, thereby improving the
effectiveness and accuracy of quadrotor trajectory tracking. The RL agent is first trained
offline on a quadrotor PID attitude controller and then validated through
simulations and experimental flights. RL exploits a Deep Deterministic Policy
Gradient (DDPG) algorithm, which is an off-policy actor-critic method. Training
and simulation studies are performed using Matlab/Simulink and the UAV Toolbox
Support Package for PX4 Autopilots. Performance evaluation and comparison
studies are performed between the hand-tuned and RL-based tuned approaches. The
results show that the controller parameters based on RL are adjusted during
flights, achieving the smallest attitude errors, thus significantly improving
attitude tracking performance compared to the hand-tuned approach.
|
2502.04554
|
Unifying and Optimizing Data Values for Selection via
Sequential-Decision-Making
|
cs.AI
|
Data selection has emerged as a crucial downstream application of data
valuation. While existing data valuation methods have shown promise in
selection tasks, the theoretical foundations and full potential of using data
values for selection remain largely unexplored. In this work, we first
demonstrate that data values applied for selection can be naturally
reformulated as a sequential-decision-making problem, where the optimal data
value can be derived through dynamic programming. We show this framework
unifies and reinterprets existing methods like Data Shapley through the lens of
approximate dynamic programming, specifically as myopic reward function
approximations to this sequential problem. Furthermore, we analyze how
sequential data selection optimality is affected when the ground-truth utility
function exhibits monotonic submodularity with curvature. To address the
computational challenges in obtaining optimal data values, we propose an
efficient approximation scheme using learned bipartite graphs as surrogate
utility models, ensuring greedy selection is still optimal when the surrogate
utility is correctly specified and learned. Extensive experiments demonstrate
the effectiveness of our approach across diverse datasets.
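The claim above about greedy selection remaining optimal under well-behaved utilities can be illustrated with a generic sketch. This is an assumption-laden toy, not the paper's learned bipartite-graph surrogate: `utility` stands in for any monotone submodular set function, for which greedy selection carries the classic (1 - 1/e) guarantee:

```python
def greedy_select(items, utility, budget):
    """Greedy marginal-gain selection of up to `budget` items.

    `utility` maps a frozenset of items to a score; for monotone
    submodular utilities, picking the item with the largest marginal
    gain at each step is the standard near-optimal strategy.
    """
    chosen = set()
    for _ in range(budget):
        base = utility(frozenset(chosen))
        best, best_gain = None, 0.0
        for item in items:
            if item in chosen:
                continue
            gain = utility(frozenset(chosen | {item})) - base
            if gain > best_gain:
                best, best_gain = item, gain
        if best is None:        # no item adds value: stop early
            break
        chosen.add(best)
    return chosen
```

For example, with a coverage utility over the sets {1,2}, {2,3}, {3} and a budget of two, greedy picks the first two items, covering all three elements.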
|
2502.04555
|
Decomposing Multivariate Information Rates in Networks of Random
Processes
|
stat.ME cs.IT math.IT
|
The Partial Information Decomposition (PID) framework has emerged as a
powerful tool for analyzing high-order interdependencies in complex network
systems. However, its application to dynamic processes remains challenging due
to the implicit assumption of memorylessness, which often fails in real-world
scenarios. In this work, we introduce the framework of Partial Information Rate
Decomposition (PIRD) that extends PID to random processes with temporal
correlations. By leveraging mutual information rate (MIR) instead of mutual
information (MI), our approach decomposes the dynamic information shared by
multivariate random processes into unique, redundant, and synergistic
contributions obtained by aggregating information rate atoms in a principled
manner. To solve PIRD, we define a pointwise redundancy rate function based on
the minimum MI principle applied locally in the frequency-domain representation
of the processes. The framework is validated in benchmark simulations of
Gaussian systems, demonstrating its advantages over traditional PID in
capturing temporal correlations and showing how the spectral representation may
reveal scale-specific higher-order interactions that are obscured in the time
domain. Furthermore, we apply PIRD to a physiological network comprising
cerebrovascular and cardiovascular variables, revealing frequency-dependent
redundant information exchange during a protocol of postural stress. Our
results highlight the necessity of accounting for the full temporal statistical
structure and spectral content of vector random processes to meaningfully
perform information decomposition in network systems with dynamic behavior such
as those typically encountered in neuroscience and physiology.
|
2502.04556
|
TruthFlow: Truthful LLM Generation via Representation Flow Correction
|
cs.CL cs.AI cs.LG
|
Large language models (LLMs) are known to struggle with consistently
generating truthful responses. While various representation intervention
techniques have been proposed, these methods typically apply a universal
representation correction vector to all input queries, limiting their
effectiveness against diverse queries in practice. In this study, we introduce
TruthFlow, a novel method that leverages the Flow Matching technique for
query-specific truthful representation correction. Specifically, TruthFlow
first uses a flow model to learn query-specific correction vectors that
transition representations from hallucinated to truthful states. Then, during
inference, the trained flow model generates these correction vectors to enhance
the truthfulness of LLM outputs. Experimental results demonstrate that
TruthFlow significantly improves performance on open-ended generation tasks
across various advanced LLMs evaluated on TruthfulQA. Moreover, the trained
TruthFlow model exhibits strong transferability, performing effectively on
other unseen hallucination benchmarks.
|
2502.04557
|
Speeding up Speculative Decoding via Approximate Verification
|
cs.LG cs.IT math.IT
|
Speculative Decoding (SD) is a recently proposed technique for faster
inference using Large Language Models (LLMs). SD operates by using a smaller
draft LLM for autoregressively generating a sequence of tokens and a larger
target LLM for parallel verification to ensure statistical consistency.
However, periodic parallel calls to the target LLM for verification prevent SD
from achieving even lower latencies. We propose SPRINTER, which utilizes a
low-complexity verifier trained to predict if tokens generated from a draft LLM
would be accepted by the target LLM. By performing approximate sequential
verification, SPRINTER avoids routine verification by the target LLM, which is
invoked only when a token is deemed unacceptable. This reduces the
number of calls to the larger LLM and can achieve further speedups. We present
a theoretical analysis of SPRINTER, examining the statistical properties of the
generated tokens, as well as the expected reduction in latency as a function of
the verifier. We evaluate SPRINTER on several datasets and model pairs,
demonstrating that approximate verification can still maintain high quality
generation while further reducing latency. For instance, on the Wiki-Summaries
dataset, SPRINTER achieves a 1.7x latency speedup and requires 8.3x fewer FLOPs
relative to SD, while still generating high-quality responses when using
GPT2-Small and GPT2-XL as draft/target models.
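The control flow described above can be sketched in a few lines. All three callables here are hypothetical interfaces standing in for the draft model, the learned low-complexity verifier, and the target LLM:

```python
def sprinter_decode(draft_next, verifier_accept, target_next, prompt,
                    max_tokens=20):
    """Approximate sequential verification, sketched.

    Tokens proposed by the draft model are screened by a cheap learned
    verifier; the expensive target model is called only when the
    verifier rejects a token.
    """
    out = list(prompt)
    target_calls = 0
    while len(out) - len(prompt) < max_tokens:
        tok = draft_next(out)
        if verifier_accept(out, tok):
            out.append(tok)                # cheap accept: no target call
        else:
            out.append(target_next(out))   # fall back to the target LLM
            target_calls += 1
    return out, target_calls
```

The speedup comes from `target_calls` growing only with the verifier's rejection rate rather than with every verification window, at the cost of the verifier's approximation error in the accepted tokens.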
|
2502.04558
|
Probing a Vision-Language-Action Model for Symbolic States and
Integration into a Cognitive Architecture
|
cs.RO cs.AI
|
Vision-language-action (VLA) models hold promise as generalist robotics
solutions by translating visual and linguistic inputs into robot actions, yet
they lack reliability due to their black-box nature and sensitivity to
environmental changes. In contrast, cognitive architectures (CA) excel in
symbolic reasoning and state monitoring but are constrained by rigid predefined
execution. This work bridges these approaches by probing OpenVLA's hidden
layers to uncover symbolic representations of object properties, relations, and
action states, enabling integration with a CA for enhanced interpretability and
robustness. Through experiments on LIBERO-spatial pick-and-place tasks, we
analyze the encoding of symbolic states across different layers of OpenVLA's
Llama backbone. Our probing results show consistently high accuracies (> 0.90)
for both object and action states across most layers, though contrary to our
hypotheses, we did not observe the expected pattern of object states being
encoded earlier than action states. We demonstrate an integrated DIARC-OpenVLA
system that leverages these symbolic representations for real-time state
monitoring, laying the foundation for more interpretable and reliable robotic
manipulation.
|
2502.04562
|
Mixture of neural operator experts for learning boundary conditions and
model selection
|
cs.LG cs.NA math.NA physics.flu-dyn
|
While Fourier-based neural operators are best suited to learning mappings
between functions on periodic domains, several works have introduced techniques
for incorporating non-trivial boundary conditions. However, all previously
introduced methods have restrictions that limit their applicability. In this
work, we introduce an alternative approach to imposing boundary conditions
inspired by volume penalization from numerical methods and Mixture of Experts
(MoE) from machine learning. By introducing competing experts, the approach
additionally allows for model selection. To demonstrate the method, we combine
a spatially conditioned MoE with the Fourier-based Modal Operator Regression
for Physics (MOR-Physics) neural operator and recover a nonlinear operator on a
disk and quarter disk. Next, we extract a large eddy simulation (LES) model
from direct numerical simulation of channel flow and show the domain
decomposition provided by our approach. Finally, we train our LES model with
Bayesian variational inference and obtain posterior predictive samples of flow
far past the DNS time horizon.
|
2502.04563
|
WaferLLM: A Wafer-Scale LLM Inference System
|
cs.LG cs.AI cs.AR cs.DC cs.ET
|
Emerging AI accelerators increasingly adopt wafer-scale manufacturing
technologies, integrating hundreds of thousands of AI cores in a mesh-based
architecture with large distributed on-chip memory (tens of GB in total) and
ultra-high on-chip memory bandwidth (tens of PB/s). However, current LLM
inference systems, optimized for shared memory architectures like GPUs, fail to
fully exploit these accelerators.
We introduce WaferLLM, the first wafer-scale LLM inference system. WaferLLM
is guided by a novel PLMR model (pronounced as "Plummer") that captures the
unique hardware characteristics of wafer-scale architectures. Leveraging this
model, WaferLLM pioneers wafer-scale LLM parallelism, optimizing the
utilization of hundreds of thousands of on-chip cores. It also introduces
MeshGEMM and MeshGEMV, the first GEMM and GEMV implementations designed to
scale effectively on wafer-scale accelerators.
Evaluations show that WaferLLM achieves 200$\times$ better wafer-scale
accelerator utilization than state-of-the-art systems. On a commodity
wafer-scale accelerator, WaferLLM delivers 606$\times$ faster and 22$\times$
more energy-efficient GEMV compared to an advanced GPU. For LLMs using a
16-bit data type, WaferLLM achieves 2700 toks/sec/req decode speed on Llama3-8B
model and 840 toks/sec/req decode speed on Qwen2-72B model, which enables
39$\times$ faster decoding with 1.7$\times$ better energy efficiency. We
anticipate these numbers will grow significantly as wafer-scale AI models,
software, and hardware continue to mature.
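The mesh-partitioned GEMV idea can be illustrated with a serial toy. This only mimics the blocked data layout — it is not MeshGEMV, and real wafer-scale kernels must also orchestrate inter-core communication and reduction across the mesh:

```python
import numpy as np

def mesh_gemv(A, x, mesh_rows, mesh_cols):
    """Serial toy of a mesh-partitioned y = A @ x.

    A is tiled over a mesh_rows x mesh_cols grid of "cores"; each core
    holds one block, multiplies it by its slice of x, and partial
    results are reduced along each mesh row. Assumes shapes divide
    evenly by the mesh dimensions.
    """
    m, n = A.shape
    rb, cb = m // mesh_rows, n // mesh_cols
    y = np.zeros(m)
    for i in range(mesh_rows):        # one row of cores per output block
        for j in range(mesh_cols):    # reduction across the mesh row
            blk = A[i * rb:(i + 1) * rb, j * cb:(j + 1) * cb]
            y[i * rb:(i + 1) * rb] += blk @ x[j * cb:(j + 1) * cb]
    return y
```

On hardware, each of the mesh_rows × mesh_cols block products runs on its own core and the row-wise accumulation becomes a reduction over on-chip links; the serial loop here just verifies that the tiling reproduces the dense result.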
|
2502.04564
|
My LLM might Mimic AAE -- But When Should it?
|
cs.CL
|
We examine the representation of African American English (AAE) in large
language models (LLMs), exploring (a) the perceptions Black Americans have of
how effective these technologies are at producing authentic AAE, and (b) in
what contexts Black Americans find this desirable. Through both a survey of
Black Americans ($n=$ 104) and annotation of LLM-produced AAE by Black
Americans ($n=$ 228), we find that Black Americans favor choice and autonomy in
determining when AAE is appropriate in LLM output. They tend to prefer that
LLMs default to communicating in Mainstream U.S. English in formal settings,
with greater interest in AAE production in less formal settings. When LLMs were
appropriately prompted and provided with in-context examples, our participants found
their outputs to have a level of AAE authenticity on par with transcripts of
Black American speech. Select code and data for our project can be found here:
https://github.com/smelliecat/AAEMime.git
|
2502.04565
|
Private Federated Learning In Real World Application -- A Case Study
|
cs.LG cs.CR
|
This paper presents an implementation of machine learning model training
using private federated learning (PFL) on edge devices. We introduce a novel
framework that uses PFL to address the challenge of training a model using
users' private data. The framework ensures that user data remain on individual
devices, with only essential model updates transmitted to a central server for
aggregation with privacy guarantees. We detail the architecture of our app
selection model, which incorporates a neural network with attention mechanisms
and ambiguity handling through uncertainty management. Experiments conducted
through offline simulations and on-device training demonstrate the feasibility
of our approach in real-world scenarios. Our results show the potential of PFL
to improve the accuracy of an app selection model by adapting to changes in
user behavior over time, while adhering to privacy standards. The insights
gained from this study are important for industries looking to implement PFL,
offering a robust strategy for training a predictive model directly on edge
devices while ensuring user data privacy.
|
2502.04566
|
An Optimized YOLOv5 Based Approach For Real-time Vehicle Detection At
Road Intersections Using Fisheye Cameras
|
cs.CV
|
Real-time vehicle detection is a challenging task for urban traffic
surveillance. Increasing urbanization leads to more accidents and traffic
congestion in junction areas, resulting in delayed travel times. Solving these
problems calls for an intelligent system with automatic detection and tracking,
but this is difficult at road intersections, which require a wide field of
view. For this reason, fisheye cameras are widely used for real-time vehicle
detection, providing large area coverage and a 360-degree view at junctions.
introduces challenges such as light glare from vehicles and street lights,
shadow, non-linear distortion, scaling issues of vehicles and proper
localization of small vehicles. To overcome each of these challenges, a
modified YOLOv5 object detection scheme is proposed. YOLOv5 is a deep learning
oriented convolutional neural network (CNN) based object detection method. The
proposed scheme for detecting vehicles in fish-eye images consists of a
light-weight day-night CNN classifier so that two different solutions can be
implemented to address the day-night detection issues. Furthermore, challenging
instances are upsampled in the dataset for proper localization of vehicles, and
the detection model is then ensembled and trained on different combinations
of vehicle datasets for better generalization and detection accuracy. For
testing, a real world fisheye dataset provided by the Video and Image
Processing (VIP) Cup organizer ISSD has been used which includes images from
video clips of different fisheye cameras at junction of different cities during
day and night time. Experimental results show that our proposed model has
outperformed the YOLOv5 model on the dataset by 13.7% mAP @ 0.5.
|
2502.04567
|
Preference Optimization via Contrastive Divergence: Your Reward Model is
Secretly an NLL Estimator
|
cs.AI
|
Existing studies on preference optimization (PO) have centered on
constructing pairwise preference data following simple heuristics, such as
maximizing the margin between preferred and dispreferred completions based on
human (or AI) ranked scores. However, none of these heuristics has a full
theoretical justification. In this work, we develop a novel PO framework that
provides theoretical guidance to effectively sample dispreferred completions.
To achieve this, we formulate PO as minimizing the negative log-likelihood
(NLL) of a probability model and propose to estimate its normalization constant
via a sampling strategy. As we will demonstrate, these estimative samples can
act as dispreferred completions in PO. We then select contrastive divergence
(CD) as the sampling strategy, and propose a novel MC-PO algorithm that applies
the Monte Carlo (MC) kernel from CD to sample hard negatives w.r.t. the
parameterized reward model. Finally, we propose the OnMC-PO algorithm, an
extension of MC-PO to the online setting. On popular alignment benchmarks,
MC-PO outperforms existing SOTA baselines, and OnMC-PO leads to further
improvement.
|
2502.04568
|
Learning Semantics-aware Search Operators for Genetic Programming
|
cs.LG cs.NE
|
Fitness landscapes in test-based program synthesis are known to be extremely
rugged, with even minimal modifications of programs often leading to
fundamental changes in their behavior and, consequently, fitness values.
Relying on fitness as the only guidance in iterative search algorithms like
genetic programming is thus unnecessarily limiting, especially when combined
with purely syntactic search operators that are agnostic about their impact on
program behavior. In this study, we propose a semantics-aware search operator
that steers the search towards candidate programs that are valuable not only
actually (high fitness) but also potentially, i.e., likely to be turned
into high-quality solutions even if their current fitness is low. The key
component of the method is a graph neural network that learns to model the
interactions between program instructions and processed data, and produces a
saliency map over graph nodes that represents possible search decisions. When
applied to a suite of symbolic regression benchmarks, the proposed method
outperforms conventional tree-based genetic programming and the ablated variant
of the method.
|
2502.04573
|
Zero-shot Meta-learning for Tabular Prediction Tasks with Adversarially
Pre-trained Transformer
|
cs.LG cs.AI
|
We present an Adversarially Pre-trained Transformer (APT) that is able to
perform zero-shot meta-learning on tabular prediction tasks without
pre-training on any real-world dataset, extending on the recent development of
Prior-Data Fitted Networks (PFNs) and TabPFN. Specifically, APT is pre-trained
with adversarial synthetic data agents, who continue to shift their underlying
data generating distribution and deliberately challenge the model with
different synthetic datasets. In addition, we propose a mixture block
architecture that is able to handle classification tasks with an arbitrary number
of classes, addressing the class size limitation -- a crucial weakness of prior
deep tabular zero-shot learners. In experiments, we show that our framework
matches state-of-the-art performance on small classification tasks without
filtering on dataset characteristics such as number of classes and number of
missing values, while maintaining an average runtime under one second. On
common benchmark dataset suites in both classification and regression, we show
that adversarial pre-training was able to enhance TabPFN's performance. In our
analysis, we demonstrate that the adversarial synthetic data agents were able
to generate a more diverse collection of data compared to the ordinary random
generator in TabPFN. In addition, we demonstrate that our mixture block neural
design has improved generalizability and greatly accelerated pre-training.
|
2502.04574
|
Dark Brain Energy: Toward an Integrative Model of Spontaneous Slow
Oscillations
|
q-bio.NC cs.IT math.IT stat.AP
|
Neural oscillations facilitate the functioning of the human brain in spatial
and temporal dimensions at various frequencies. These oscillations feature a
universal frequency architecture that is governed by brain anatomy, ensuring
frequency specificity remains invariant across different measurement
techniques. Initial magnetic resonance imaging (MRI) methodology constrained
functional MRI (fMRI) investigations to a single frequency range, thereby
neglecting the frequency characteristics inherent in blood oxygen
level-dependent oscillations. With advancements in MRI technology, it has
become feasible to decode intricate brain activities via multi-band frequency
analysis (MBFA). During the past decade, the utilization of MBFA in fMRI
studies has surged, unveiling frequency-dependent characteristics of
spontaneous slow oscillations (SSOs) believed to underlie the brain's dark energy.
There remains a dearth of conclusive insights and hypotheses pertaining to the
properties and functionalities of SSOs in distinct bands. We surveyed the SSO
MBFA studies during the past 15 years to delineate the attributes of SSOs and
illuminate their associated functions. We further proposed a model to elucidate
the hierarchical organization of multi-band SSOs by integrating their function,
aimed at bridging theoretical gaps and guiding future MBFA research endeavors.
|
2502.04575
|
Complexity Analysis of Normalizing Constant Estimation: from Jarzynski
Equality to Annealed Importance Sampling and beyond
|
stat.ML cs.LG cs.NA math.NA physics.comp-ph stat.CO
|
Given an unnormalized probability density $\pi\propto\mathrm{e}^{-V}$,
estimating its normalizing constant
$Z=\int_{\mathbb{R}^d}\mathrm{e}^{-V(x)}\mathrm{d}x$ or free energy $F=-\log Z$
is a crucial problem in Bayesian statistics, statistical mechanics, and machine
learning. It is challenging especially in high dimensions or when $\pi$ is
multimodal. To mitigate the high variance of conventional importance sampling
estimators, annealing-based methods such as the Jarzynski equality and annealed
importance sampling are commonly adopted, yet their quantitative complexity
guarantees remain largely unexplored. We take a first step toward a
non-asymptotic analysis of annealed importance sampling. In particular, we
derive an oracle complexity of
$\widetilde{O}\left(\frac{d\beta^2{\mathcal{A}}^2}{\varepsilon^4}\right)$ for
estimating $Z$ within $\varepsilon$ relative error with high probability, where
$\beta$ is the smoothness of $V$ and $\mathcal{A}$ denotes the action of a
curve of probability measures interpolating $\pi$ and a tractable reference
distribution. Our analysis, leveraging Girsanov theorem and optimal transport,
does not explicitly require isoperimetric assumptions on the target
distribution. Finally, to tackle the large action of the widely used geometric
interpolation of probability distributions, we propose a new normalizing
constant estimation algorithm based on reverse diffusion samplers and establish
a framework for analyzing its complexity.
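The baseline annealed importance sampling (AIS) estimator whose complexity is analyzed above can be sketched in a few lines. The following is a minimal, illustrative NumPy implementation for a one-dimensional Gaussian target along the geometric interpolation path, not the paper's reverse-diffusion algorithm; the potentials `V`, `V0` and all hyperparameters are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: unnormalized density pi(x) ∝ exp(-V(x)); here a shifted Gaussian,
# so the true normalizing constant is Z = sqrt(2*pi).
def V(x):
    return 0.5 * (x - 3.0) ** 2

# Tractable reference: standard normal with known Z0 = sqrt(2*pi).
def V0(x):
    return 0.5 * x ** 2

def ais_log_Z(n_chains=2000, n_steps=200, step=0.5):
    """AIS along the geometric path U_b = (1-b)*V0 + b*V, returning an
    estimate of log Z via importance weights accumulated over the path."""
    betas = np.linspace(0.0, 1.0, n_steps + 1)
    x = rng.standard_normal(n_chains)       # exact samples from the reference
    log_w = np.zeros(n_chains)
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # Weight increment from advancing the interpolating potential.
        log_w += (b_prev - b) * (V(x) - V0(x))
        # One Metropolis step targeting exp(-U_b).
        def U(y):
            return (1 - b) * V0(y) + b * V(y)
        prop = x + step * rng.standard_normal(n_chains)
        accept = np.log(rng.random(n_chains)) < U(x) - U(prop)
        x = np.where(accept, prop, x)
    # Z = Z0 * E[w]; log-sum-exp for numerical stability.
    m = log_w.max()
    return 0.5 * np.log(2 * np.pi) + m + np.log(np.mean(np.exp(log_w - m)))

print(ais_log_Z())  # should be close to log(sqrt(2*pi)) ≈ 0.919
```

The large action of the geometric path, which the abstract's final algorithm is designed to avoid, shows up here as high variance of `log_w` when the reference and target are far apart.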
|
2502.04576
|
Self-Regulation and Requesting Interventions
|
cs.LG cs.CL
|
Human intelligence involves metacognitive abilities like self-regulation,
recognizing limitations, and seeking assistance only when needed. While LLM
Agents excel in many domains, they often lack this awareness. Overconfident
agents risk catastrophic failures, while those that seek help excessively
hinder efficiency. A key challenge is enabling agents with a limited
intervention budget $C$ to decide when to request assistance. In this paper,
we propose an offline framework that trains a "helper" policy to request
interventions, such as more powerful models or test-time compute, by combining
LLM-based process reward models (PRMs) with tabular reinforcement learning.
Using state transitions collected offline, we score optimal intervention timing
with PRMs and train the helper model on these labeled trajectories. This
offline approach significantly reduces costly intervention calls during
training. Furthermore, the integration of PRMs with tabular RL enhances
robustness to off-policy data while avoiding the inefficiencies of deep RL. We
empirically find that our method delivers optimal helper behavior.
|
2502.04577
|
Position-aware Automatic Circuit Discovery
|
cs.LG cs.CL
|
A widely used strategy to discover and understand language model mechanisms
is circuit analysis. A circuit is a minimal subgraph of a model's computation
graph that executes a specific task. We identify a gap in existing circuit
discovery methods: they assume circuits are position-invariant, treating model
components as equally relevant across input positions. This limits their
ability to capture cross-positional interactions or mechanisms that vary across
positions. To address this gap, we propose two improvements to incorporate
positionality into circuits, even on tasks containing variable-length examples.
First, we extend edge attribution patching, a gradient-based method for circuit
discovery, to differentiate between token positions. Second, we introduce the
concept of a dataset schema, which defines token spans with similar semantics
across examples, enabling position-aware circuit discovery in datasets with
variable-length examples. We additionally develop an automated pipeline for
schema generation and application using large language models. Our approach
enables fully automated discovery of position-sensitive circuits, yielding
better trade-offs between circuit size and faithfulness compared to prior work.
|
2502.04580
|
Technical Debt in In-Context Learning: Diminishing Efficiency in Long
Context
|
cs.LG cs.AI
|
Transformers have demonstrated remarkable in-context learning (ICL)
capabilities, adapting to new tasks by simply conditioning on demonstrations
without parameter updates. Compelling empirical and theoretical evidence
suggests that ICL, as a general-purpose learner, could outperform task-specific
models. However, it remains unclear to what extent the transformers optimally
learn in-context compared to principled learning algorithms. To bridge this
gap, we introduce a new framework for quantifying optimality of ICL as a
learning algorithm in stylized settings. Our findings reveal a striking
dichotomy: while ICL initially matches the efficiency of a Bayes optimal
estimator, its efficiency significantly deteriorates in long context. Through
an information-theoretic analysis, we show that the diminishing efficiency is
inherent to ICL. These results clarify the trade-offs in adopting ICL as a
universal problem solver, motivating a new generation of on-the-fly adaptive
methods without the diminishing efficiency.
|
2502.04582
|
The Mini Wheelbot: A Testbed for Learning-based Balancing, Flips, and
Articulated Driving
|
cs.RO cs.SY eess.SY math.OC
|
The Mini Wheelbot is a balancing, reaction wheel unicycle robot designed as a
testbed for learning-based control. It is an unstable system with highly
nonlinear yaw dynamics, non-holonomic driving, and discrete contact switches in
a small, powerful, and rugged form factor. The Mini Wheelbot can use its wheels
to stand up from any initial orientation - enabling automatic environment
resets in repetitive experiments and even challenging half flips. We illustrate
the effectiveness of the Mini Wheelbot as a testbed by implementing two popular
learning-based control algorithms. First, we showcase Bayesian optimization for
tuning the balancing controller. Second, we use imitation learning from an
expert nonlinear MPC that uses gyroscopic effects to reorient the robot and can
track higher-level velocity and orientation commands. The latter allows the
robot to drive around based on user commands - for the first time in this class
of robots. The Mini Wheelbot is not only compelling for testing learning-based
control algorithms, but it is also just fun to work with, as demonstrated in
the video of our experiments.
|
2502.04583
|
Overcoming Fake Solutions in Semi-Dual Neural Optimal Transport: A
Smoothing Approach for Learning the Optimal Transport Plan
|
cs.LG
|
We address the convergence problem in learning the Optimal Transport (OT)
map, where the OT Map refers to a map from one distribution to another while
minimizing the transport cost. Semi-dual Neural OT, a widely used approach for
learning OT Maps with neural networks, often generates fake solutions that fail
to transfer one distribution to another accurately. We identify a sufficient
condition under which the max-min solution of Semi-dual Neural OT recovers the
true OT Map. Moreover, to address cases when this sufficient condition is not
satisfied, we propose a novel method, OTP, which learns both the OT Map and the
Optimal Transport Plan, representing the optimal coupling between two
distributions. Under sharp assumptions on the distributions, we prove that our
model eliminates the fake solution issue and correctly solves the OT problem.
Our experiments show that the OTP model recovers the optimal transport map
where existing methods fail and outperforms current OT-based models in
image-to-image translation tasks. Notably, the OTP model can learn stochastic
transport maps when deterministic OT Maps do not exist, such as one-to-many
tasks like colorization.
|
2502.04584
|
Joint State and Noise Covariance Estimation
|
cs.RO math.OC
|
This paper tackles the problem of jointly estimating the noise covariance
matrix alongside primary parameters (such as poses and points) from
measurements corrupted by Gaussian noise. In such settings, the noise
covariance matrix determines the weights assigned to individual measurements in
the least squares problem. We show that the joint problem exhibits a convex
structure and provide a full characterization of the optimal noise covariance
estimate (with analytical solutions) within joint maximum a posteriori and
likelihood frameworks and several variants. Leveraging this theoretical result,
we propose two novel algorithms that jointly estimate the primary parameters
and the noise covariance matrix. To validate our approach, we conduct extensive
experiments across diverse scenarios and offer practical insights into their
application in robotics and computer vision estimation problems with a
particular focus on SLAM.
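As a rough illustration of the kind of joint estimation described above, the sketch below alternates two closed-form updates on a synthetic linear-Gaussian problem: a weighted least-squares solve for the primary parameters and the maximum-likelihood residual covariance. This is a generic alternating scheme on made-up data, assuming a simple model y_n = A_n x + e_n; it is not the paper's algorithms or its convexity analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic problem: y_n = A_n x + e_n, with e_n ~ N(0, Sigma), Sigma unknown.
n, m, d = 400, 3, 2                          # measurements, meas. dim, state dim
x_true = np.array([1.0, -2.0])
Sigma_true = np.array([[0.5, 0.1, 0.0],
                       [0.1, 0.3, 0.0],
                       [0.0, 0.0, 1.0]])
A = rng.standard_normal((n, m, d))
noise = rng.standard_normal((n, m)) @ np.linalg.cholesky(Sigma_true).T
y = np.einsum('nmd,d->nm', A, x_true) + noise

x, Sigma = np.zeros(d), np.eye(m)
for _ in range(20):                          # alternate two closed-form updates
    W = np.linalg.inv(Sigma)                 # measurement weights from covariance
    H = np.einsum('nmi,mk,nkj->ij', A, W, A) # normal matrix: sum_n A_n^T W A_n
    b = np.einsum('nmi,mk,nk->i', A, W, y)
    x = np.linalg.solve(H, b)                # weighted least squares for x
    r = y - np.einsum('nmd,d->nm', A, x)     # residuals at current estimate
    Sigma = (r.T @ r) / n                    # ML covariance = residual covariance
```

Note how the estimated covariance sets the weights of the least-squares problem on the next pass, which is exactly the coupling the abstract refers to.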
|
2502.04586
|
Automatic Ply Partitioning for Laminar Composite Process Planning
|
math.OC cs.CE
|
This work introduces an automated ply partitioning strategy for large-scale
laminar composite manufacturing. It specifically targets the problem of
fabricating large plies from available spooled materials, while minimizing the
adverse effects on part quality. The proposed method inserts fiber-aligned
seams sequentially until each resulting sub-ply can be manufactured from
available materials, while simultaneously enforcing constraints to avoid
quality issues induced by the stacking of seams across multiple plies.
Leveraging the developable nature of individual plies, the partitioning problem
is cast as a sequence of one-dimensional piecewise linear optimization
problems, thus allowing for efficient local optimization via linear
programming. We experimentally demonstrate that coupling the local search with
a greedy global search produces the same results as an exhaustive search. The
resulting automated method provides an efficient and robust alternative to the
existing trial-and-error approach, and can be readily integrated into
state-of-the-art composite design workflows. In addition, this formulation
enables the inclusion of common constraints regarding laminate thickness
tolerance, sub-ply geometry, stay-out zones, material wastage, etc. The
efficacy of the proposed method is demonstrated through its application to the
surface of an airplane wing and to the body panels of an armored vehicle, each
subject to various performance and manufacturing-related geometric constraints.
|
2502.04591
|
Rethinking Oversmoothing in Graph Neural Networks: A Rank-Based
Perspective
|
cs.LG cs.AI stat.ML
|
Oversmoothing is a fundamental challenge in graph neural networks (GNNs): as
the number of layers increases, node embeddings become increasingly similar,
and model performance drops sharply. Traditionally, oversmoothing has been
quantified using metrics that measure the similarity of neighbouring node
features, such as the Dirichlet energy. While these metrics are related to
oversmoothing, we argue they have critical limitations and fail to reliably
capture oversmoothing in realistic scenarios. For instance, they provide
meaningful insights only for very deep networks and under somewhat strict
conditions on the norm of network weights and feature representations. As an
alternative, we propose measuring oversmoothing by examining the numerical or
effective rank of the feature representations. We provide theoretical support
for this approach, demonstrating that the numerical rank of feature
representations converges to one for a broad family of nonlinear activation
functions under the assumption of nonnegative trained weights. To the best of
our knowledge, this is the first result that proves the occurrence of
oversmoothing without assumptions on the boundedness of the weight matrices.
Along with the theoretical findings, we provide extensive numerical evaluation
across diverse graph architectures. Our results show that rank-based metrics
consistently capture oversmoothing, whereas energy-based metrics often fail.
Notably, we reveal that a significant drop in the rank aligns closely with
performance degradation, even in scenarios where energy metrics remain
unchanged.
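The rank-based diagnostics discussed above are straightforward to compute from a feature matrix. Below is a small NumPy sketch (illustrative only; the fully-averaging propagation matrix and the shapes are made up) of the numerical rank and the entropy-based effective rank, showing how repeated mean-style propagation collapses node features to rank one.

```python
import numpy as np

def numerical_rank(X, tol=None):
    """Numerical rank: number of singular values above a tolerance
    relative to the largest one."""
    s = np.linalg.svd(X, compute_uv=False)
    if tol is None:
        tol = max(X.shape) * np.finfo(X.dtype).eps * s[0]
    return int(np.sum(s > tol))

def effective_rank(X):
    """Effective rank: exponential of the entropy of the normalized
    singular value distribution (Roy & Vetterli, 2007)."""
    s = np.linalg.svd(X, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return float(np.exp(-(p * np.log(p)).sum()))

# Repeated mean aggregation drives node features toward a rank-one matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 16))            # 50 nodes, 16 features
A = np.full((50, 50), 1.0 / 50)              # fully-averaging propagation
for _ in range(10):
    X = A @ X
print(numerical_rank(X))                     # numerical rank collapses to 1
```

In this extreme example every node ends up with the same embedding, which is the oversmoothed regime the rank metrics are designed to flag.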
|
2502.04592
|
CAMEF: Causal-Augmented Multi-Modality Event-Driven Financial
Forecasting by Integrating Time Series Patterns and Salient Macroeconomic
Announcements
|
cs.LG cs.AI cs.CE
|
Accurately forecasting the impact of macroeconomic events is critical for
investors and policymakers. Salient events like monetary policy decisions and
employment reports often trigger market movements by shaping expectations of
economic growth and risk, thereby establishing causal relationships between
events and market behavior. Existing forecasting methods typically focus either
on textual analysis or time-series modeling, but fail to capture the
multi-modal nature of financial markets and the causal relationship between
events and price movements. To address these gaps, we propose CAMEF
(Causal-Augmented Multi-Modality Event-Driven Financial Forecasting), a
multi-modality framework that effectively integrates textual and time-series
data with a causal learning mechanism and an LLM-based counterfactual event
augmentation technique for causal-enhanced financial forecasting. Our
contributions include: (1) a multi-modal framework that captures causal
relationships between policy texts and historical price data; (2) a new
financial dataset with six types of macroeconomic releases from 2008 to April
2024, and high-frequency real trading data for five key U.S. financial assets;
and (3) an LLM-based counterfactual event augmentation strategy. We compare
CAMEF to state-of-the-art transformer-based time-series and multi-modal
baselines, and perform ablation studies to validate the effectiveness of the
causal learning mechanism and event types.
|
2502.04593
|
The $\alpha$-Alternator: Dynamic Adaptation To Varying Noise Levels In
Sequences Using The Vendi Score For Improved Robustness and Performance
|
cs.LG cs.AI cs.NE stat.ML
|
Current state-of-the-art dynamical models, such as Mamba, assume the same
level of noisiness for all elements of a given sequence, which limits their
performance on noisy temporal data. In this paper, we introduce the
$\alpha$-Alternator, a novel generative model for time-dependent data that
dynamically adapts to the complexity introduced by varying noise levels in
sequences. The $\alpha$-Alternator leverages the Vendi Score (VS), a flexible
similarity-based diversity metric, to adjust, at each time step $t$, the
influence of the sequence element at time $t$ and the latent representation of
the dynamics up to that time step on the predicted future dynamics. This
influence is captured by a parameter that is learned and shared across all
sequences in a given dataset. The sign of this parameter determines the
direction of influence. A negative value indicates a noisy dataset, where a
sequence element that increases the VS is considered noisy, and the model
relies more on the latent history when processing that element. Conversely,
when the parameter is positive, a sequence element that increases the VS is
considered informative, and the $\alpha$-Alternator relies more on this new
input than on the latent history when updating its predicted latent dynamics.
The $\alpha$-Alternator is trained using a combination of observation masking
and Alternator loss minimization. Masking simulates varying noise levels in
sequences, enabling the model to be more robust to these fluctuations and
improving its performance in trajectory prediction, imputation, and
forecasting. Our experimental results demonstrate that the $\alpha$-Alternator
outperforms both Alternators and state-of-the-art state-space models across
neural decoding and time-series forecasting benchmarks.
|
2502.04595
|
A Fractional-Order Nonlinear Backstepping Controller Design for
Current-Controlled Maglev System
|
eess.SY cs.SY
|
The magnetic levitation system (Maglev) is a nonlinear system by which an
object is suspended with no support other than magnetic fields. The main
control perspective of the Maglev system is to levitate a steel ball in air by
the electromagnetic force. However, the Maglev system has highly nonlinear
dynamics, which makes sensitive control and regulation of the system
difficult. In this paper, a nonlinear backstepping controller
based on the fractional-order derivative is proposed for the control of the
nonlinear current-controlled Maglev system. After the system dynamics and the
fractional-order backstepping controller design are given, the asymptotic
stability of the closed-loop system is proved by employing Lyapunov theory.
Computer-based numerical experiments are carried out to show the
effectiveness of the proposed controller for the control of the Maglev system.
|
2502.04597
|
Multiscale style transfer based on a Laplacian pyramid for traditional
Chinese painting
|
cs.CV
|
Style transfer is adopted to synthesize appealing stylized images that
preserve the structure of a content image but carry the pattern of a style
image. Many recently proposed style transfer methods use only Western oil
paintings as style images to achieve image stylization. As a result, unnatural
messy artistic effects are produced in stylized images when using these methods
to directly transfer the patterns of traditional Chinese paintings, which are
composed of plain colors and abstract objects. Moreover, most of them work only
at the original image scale and thus ignore multiscale image information during
training. In this paper, we present a novel effective multiscale style transfer
method based on Laplacian pyramid decomposition and reconstruction, which can
transfer unique patterns of Chinese paintings by learning different image
features at different scales. In the first stage, the holistic patterns are
transferred at low resolution by adopting a Style Transfer Base Network. Then,
the details of the content and style are gradually enhanced at higher
resolutions by a Detail Enhancement Network with an edge information selection
(EIS) module in the second stage. The effectiveness of our method is
demonstrated through the generation of appealing high-quality stylization
results and a comparison with some state-of-the-art style transfer methods.
Datasets and codes are available at
https://github.com/toby-katakuri/LP_StyleTransferNet.
|
2502.04600
|
Cooperative Payload Estimation by a Team of Mocobots
|
cs.RO
|
Consider the following scenario: a human guides multiple mobile manipulators
to grasp a common payload. For subsequent high-performance autonomous
manipulation of the payload by the mobile manipulator team, or for
collaborative manipulation with the human, the robots should be able to
discover where the other robots are attached to the payload, as well as the
payload's mass and inertial properties. In this paper, we describe a method for
the robots to autonomously discover this information. The robots cooperatively
manipulate the payload, and the twist, twist derivative, and wrench data at
their grasp frames are used to estimate the transformation matrices between the
grasp frames, the location of the payload's center of mass, and the payload's
inertia matrix. The method is validated experimentally with a team of three
mobile cobots, or mocobots.
|
2502.04601
|
LATTEO: A Framework to Support Learning Asynchronously Tempered with
Trusted Execution and Obfuscation
|
cs.CR cs.LG
|
The privacy vulnerabilities of the federated learning (FL) paradigm,
primarily caused by gradient leakage, have prompted the development of various
defensive measures. Nonetheless, these solutions have predominantly been
crafted for and assessed in the context of synchronous FL systems, with minimal
focus on asynchronous FL. This gap arises in part due to the unique challenges
posed by the asynchronous setting, such as the lack of coordinated updates,
increased variability in client participation, and the potential for more
severe privacy risks. These concerns have stymied the adoption of asynchronous
FL. In this work, we first demonstrate the privacy vulnerabilities of
asynchronous FL through a novel data reconstruction attack that exploits
gradient updates to recover sensitive client data. To address these
vulnerabilities, we propose a privacy-preserving framework that combines a
gradient obfuscation mechanism with Trusted Execution Environments (TEEs) for
secure asynchronous FL aggregation at the network edge. To overcome the
limitations of conventional enclave attestation, we introduce a novel
data-centric attestation mechanism based on Multi-Authority Attribute-Based
Encryption. This mechanism enables clients to implicitly verify TEE-based
aggregation services, effectively handle on-demand client participation, and
scale seamlessly with an increasing number of asynchronous connections. Our
gradient obfuscation mechanism reduces the structural similarity index of data
reconstruction by 85% and increases reconstruction error by 400%, while our
framework improves attestation efficiency by lowering average latency by up to
1500% compared to RA-TLS, without additional overhead.
|
2502.04602
|
Extracting and Understanding the Superficial Knowledge in Alignment
|
cs.CL cs.AI
|
Alignment of large language models (LLMs) with human values and preferences,
often achieved through fine-tuning based on human feedback, is essential for
ensuring safe and responsible AI behaviors. However, the process typically
requires substantial data and computation resources. Recent studies have
revealed that alignment might be attainable at lower costs through simpler
methods, such as in-context learning. This leads to the question: Is alignment
predominantly superficial? In this paper, we delve into this question and
provide a quantitative analysis. We formalize the concept of superficial
knowledge, defining it as knowledge that can be acquired through easy token
restyling, without affecting the model's ability to capture underlying causal
relationships between tokens. We propose a method to extract and isolate
superficial knowledge from aligned models, focusing on the shallow
modifications to the final token selection process. By comparing models
augmented only with superficial knowledge to fully aligned models, we quantify
the superficial portion of alignment. Our findings reveal that while
superficial knowledge constitutes a significant portion of alignment,
particularly in safety and detoxification tasks, it is not the whole story.
Tasks requiring reasoning and contextual understanding still rely on deeper
knowledge. Additionally, we demonstrate two practical advantages of isolated
superficial knowledge: (1) it can be transferred between models, enabling
efficient offsite alignment of larger models using extracted superficial
knowledge from smaller models, and (2) it is recoverable, allowing for the
restoration of alignment in compromised models without sacrificing performance.
|
2502.04609
|
Force interaction, modeling and soft tissue deformation during
reciprocating insertion of multi-part probe
|
cs.RO cs.SY eess.SY
|
Engineering inspired by ovipositing wasps, which employ a reciprocating
motion for soft-tissue insertion, offers potential advantages in
reducing insertion force and minimizing tissue damage. However, the underlying
mechanisms of tissue interaction and sparing are not fully understood. In this
study, we aim to investigate a multi-part probe designed to mimic the
reciprocating motion of ovipositors. A reciprocal insertion model was developed
to study the interaction between the probe and soft tissue, and experimental
testing was conducted using a force sensor and laser optical technique to gain
insights into interacting forces and tissue deformation. The results reveal
that during the cutting phase of reciprocal motion, the peak force and average
displacement of the soft substrate were approximately 19% and 20% lower,
respectively, compared to direct insertion at an overall probe velocity of 1
mm/s. This study presents a novel approach combining mechanical modeling and
experimental analysis to explore the force mechanics of the reciprocating
insertion method, providing a better understanding of the interaction between
the probe and soft tissue.
|
2502.04615
|
Neural Clustering for Prefractured Mesh Generation in Real-time Object
Destruction
|
cs.CV cs.GR
|
The prefracture method is a practical implementation of real-time object
destruction, which is otherwise hardly achievable within performance
constraints, but it can produce unrealistic results due to its heuristic
nature. To mitigate this, we approach the clustering step of prefractured mesh
generation as an unordered segmentation of point cloud data, and propose
leveraging a deep neural network trained on a physics-based dataset. Our novel
paradigm successfully predicts the structural weakness of objects, which
previous heuristics have struggled to capture, exhibiting ready-to-use results
with remarkable quality.
|
2502.04623
|
HetSSNet: Spatial-Spectral Heterogeneous Graph Learning Network for
Panchromatic and Multispectral Images Fusion
|
cs.CV
|
Remote sensing pansharpening aims to reconstruct spatial-spectral properties
during the fusion of panchromatic (PAN) images and low-resolution
multi-spectral (LR-MS) images, finally generating the high-resolution
multi-spectral (HR-MS) images. In the mainstream modeling strategies, i.e., CNN
and Transformer, the input images are treated as the equal-sized grid of pixels
in the Euclidean space. They have limitations in facing remote sensing images
with irregular ground objects. Graphs are a more flexible structure; however,
there are two major challenges when modeling spatial-spectral properties with
graphs: \emph{1) constructing a customized graph structure for
spatial-spectral relationship priors}; \emph{2) learning the unified
spatial-spectral representation through the graph}. To address these
challenges, we propose the spatial-spectral heterogeneous graph learning
network, named \textbf{HetSSNet}. Specifically, HetSSNet initially constructs
the heterogeneous graph structure for pansharpening, which explicitly describes
pansharpening-specific relationships. Subsequently, a basic relationship
pattern generation module is designed to extract multiple relationship
patterns from the heterogeneous graph. Finally, a relationship pattern
aggregation module is exploited to collaboratively learn a unified
spatial-spectral representation across different relationships among nodes with
adaptive importance learning from local and global perspectives. Extensive
experiments demonstrate the significant superiority and generalization of
HetSSNet.
|
2502.04625
|
Phonetic Reconstruction of the Consonant System of Middle Chinese via
Mixed Integer Optimization
|
cs.CL
|
This paper is concerned with phonetic reconstruction of the consonant system
of Middle Chinese. We propose to cast the problem as a Mixed Integer
Programming problem, which is able to automatically explore homophonic
information from ancient rhyme dictionaries and phonetic information from
modern Chinese dialects, the descendants of Middle Chinese. Numerical
evaluation on a wide range of synthetic and real data demonstrates the
effectiveness and robustness of the new method. We apply the method to
information from Guangyun and 20 modern Chinese dialects to obtain a new
phonetic reconstruction result. A linguistically-motivated discussion of this
result is also provided.
|
2502.04628
|
AIQViT: Architecture-Informed Post-Training Quantization for Vision
Transformers
|
cs.CV
|
Post-training quantization (PTQ) has emerged as a promising solution for
reducing the storage and computational cost of vision transformers (ViTs).
Recent advances primarily target crafting quantizers to deal with the peculiar
activations that characterize ViTs. However, most existing methods underestimate
the information loss incurred by weight quantization, resulting in significant
performance deterioration, particularly in low-bit cases. Furthermore, a common
practice in quantizing post-Softmax activations of ViTs is to employ
logarithmic transformations, which unfortunately prioritize less informative
values around zero. This approach introduces additional redundancies,
ultimately leading to suboptimal quantization efficacy. To handle these issues,
this paper proposes an innovative PTQ method tailored for ViTs, termed AIQViT
(Architecture-Informed Post-training Quantization for ViTs). First, we design
an architecture-informed low-rank compensation mechanism, wherein learnable
low-rank weights are introduced to compensate for the degradation caused by
weight quantization. Second, we design a dynamic focusing quantizer to
accommodate the unbalanced distribution of post-Softmax activations, which
dynamically selects the most valuable interval for higher quantization
resolution. Extensive experiments on five vision tasks, including image
classification, object detection, instance segmentation, point cloud
classification, and point cloud part segmentation, demonstrate the superiority
of AIQViT over state-of-the-art PTQ methods.
|
2502.04630
|
High-Speed Dynamic 3D Imaging with Sensor Fusion Splatting
|
cs.CV cs.GR
|
Capturing and reconstructing high-speed dynamic 3D scenes has numerous
applications in computer graphics, vision, and interdisciplinary fields such as
robotics, aerodynamics, and evolutionary biology. However, achieving this using
a single imaging modality remains challenging. For instance, traditional RGB
cameras suffer from low frame rates, limited exposure times, and narrow
baselines. To address this, we propose a novel sensor fusion approach using
Gaussian splatting, which combines RGB, depth, and event cameras to capture and
reconstruct deforming scenes at high speeds. The key insight of our method lies
in leveraging the complementary strengths of these imaging modalities: RGB
cameras capture detailed color information, event cameras record rapid scene
changes with microsecond resolution, and depth cameras provide 3D scene
geometry. To unify the underlying scene representation across these modalities,
we represent the scene using deformable 3D Gaussians. To handle rapid scene
movements, we jointly optimize the 3D Gaussian parameters and their temporal
deformation fields by integrating data from all three sensor modalities. This
fusion enables efficient, high-quality imaging of fast and complex scenes, even
under challenging conditions such as low light, narrow baselines, or rapid
motion. Experiments on synthetic and real datasets captured with our prototype
sensor fusion setup demonstrate that our method significantly outperforms
state-of-the-art techniques, achieving noticeable improvements in both
rendering fidelity and structural accuracy.
|
2502.04632
|
Tight Bounds for Noisy Computation of High-Influence Functions,
Connectivity, and Threshold
|
cs.DS cs.CC cs.IT math.IT
|
In the noisy query model, the (binary) return value of every query (possibly
repeated) is independently flipped with some fixed probability $p \in (0,
1/2)$. In this paper, we obtain tight bounds on the noisy query complexity of
several fundamental problems.
Our first contribution is to show that any Boolean function with total
influence $\Omega(n)$ has noisy query complexity $\Theta(n\log n)$. Previous
works often focus on specific problems, and it is of great interest to have a
characterization of noisy query complexity for general functions. Our result is
the first noisy query complexity lower bound of this generality, beyond what
was known for random Boolean functions [Reischuk and Schmeltz, FOCS 1991].
Our second contribution is to prove that Graph Connectivity has noisy query
complexity $\Theta(n^2 \log n)$. In this problem, the goal is to determine
whether an undirected graph is connected using noisy edge queries. While the
upper bound can be achieved by a simple algorithm, no non-trivial lower bounds
were known prior to this work.
Last but not least, we determine the exact number of noisy queries (up to
lower order terms) needed to solve the $k$-Threshold problem and the Counting
problem. The $k$-Threshold problem asks to decide whether there are at least
$k$ ones among $n$ bits, given noisy query access to the bits. We prove that
$(1\pm o(1)) \frac{n\log (\min\{k,n-k+1\}/\delta)}{(1-2p)\log \frac{1-p}p}$
queries are both sufficient and necessary to achieve error probability $\delta
= o(1)$. Previously, such a result was only known when $\min\{k,n-k+1\}=o(n)$
[Wang, Ghaddar, Zhu and Wang, arXiv 2024]. We also show a similar $(1\pm o(1))
\frac{n\log (\min\{k+1,n-k+1\}/\delta)}{(1-2p)\log \frac{1-p}p}$ bound for the
Counting problem, where one needs to count the number of ones among $n$ bits
given noisy query access and $k$ denotes the answer.
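As a quick numerical companion (my own sketch, not code from the paper), the leading-order query budget stated above can be evaluated directly; the log base cancels between numerator and denominator, so natural logarithms suffice.

```python
import math

def noisy_threshold_budget(n, k, p, delta):
    """Leading-order query count
    (1 +- o(1)) * n * log(min(k, n-k+1) / delta)
    / ((1 - 2p) * log((1-p)/p))
    for the k-Threshold problem with noise rate p and error delta."""
    m = min(k, n - k + 1)
    return n * math.log(m / delta) / ((1 - 2 * p) * math.log((1 - p) / p))
```

For instance, n = 1000, k = 500, p = 0.1, delta = 0.01 gives a budget of roughly 6.2 x 10^3 queries, and the budget grows as p approaches 1/2, matching the intuition that noisier answers require more repetition.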
|
2502.04635
|
Exercise Specialists Evaluation of Robot-led Physical Therapy for People
with Parkinson's Disease
|
cs.RO
|
Robot-led physical therapy (PT) offers a promising avenue to enhance the care
provided by clinical exercise specialists (ES) and physical and occupational
therapists to improve patients' adherence to prescribed exercises outside of a
clinic, such as at home. Collaborative efforts among roboticists, ES, physical
and occupational therapists, and patients are essential for developing
interactive, personalized exercise systems that meet each stakeholder's needs.
We conducted a user study in which 11 ES evaluated a novel robot-led PT system
for people with Parkinson's disease (PD), introduced in [1], focusing on the
system's perceived efficacy and acceptance. Utilizing a mixed-methods approach,
including technology acceptance questionnaires, task load questionnaires, and
semi-structured interviews, we gathered comprehensive insights into ES
perspectives and experiences after interacting with the system. Findings reveal
a broadly positive reception, which highlights the system's capacity to augment
traditional PT for PD, enhance patient engagement, and ensure consistent
exercise support. We also identified two key areas for improvement:
incorporating more human-like feedback systems and increasing the robot's ease
of use. This research emphasizes the value of incorporating robotic aids into
PT for PD, offering insights that can guide the development of more effective
and user-friendly rehabilitation technologies.
|
2502.04636
|
An Empirical Study of Code Obfuscation Practices in the Google Play
Store
|
cs.CR cs.AI cs.SE
|
The Android ecosystem is vulnerable to issues such as app repackaging,
counterfeiting, and piracy, threatening both developers and users. To mitigate
these risks, developers often employ code obfuscation techniques. However,
while effective in protecting legitimate applications, obfuscation also hinders
security investigations as it is often exploited for malicious purposes. As
such, it is important to understand code obfuscation practices in Android apps.
In this paper, we analyze over 500,000 Android APKs from Google Play, spanning
an eight-year period, to investigate the evolution and prevalence of code
obfuscation techniques. First, we propose a set of classifiers to detect
obfuscated code, tools, and techniques and then conduct a longitudinal analysis
to identify trends. Our results show a 13% increase in obfuscation from 2016 to
2023, with ProGuard and Allatori as the most commonly used tools. We also show
that obfuscation is more prevalent in top-ranked apps and gaming genres such as
Casino apps. To our knowledge, this is the first large-scale study of
obfuscation adoption in the Google Play Store, providing insights for
developers and security analysts.
|
2502.04638
|
Learning Street View Representations with Spatiotemporal Contrast
|
cs.CV cs.AI
|
Street view imagery is extensively utilized in representation learning for
urban visual environments, supporting various sustainable development tasks
such as environmental perception and socio-economic assessment. However, it is
challenging for existing image representations to specifically encode the
dynamic urban environment (such as pedestrians, vehicles, and vegetation), the
built environment (including buildings, roads, and urban infrastructure), and
the environmental ambiance (such as the cultural and socioeconomic atmosphere)
depicted in street view imagery to address downstream tasks related to the
city. In this work, we propose an innovative self-supervised learning framework
that leverages temporal and spatial attributes of street view imagery to learn
image representations of the dynamic urban environment for diverse downstream
tasks. By employing street view images captured at the same location over time
and spatially nearby views at the same time, we construct contrastive learning
tasks designed to learn the temporal-invariant characteristics of the built
environment and the spatial-invariant neighborhood ambiance. Our approach
significantly outperforms traditional supervised and unsupervised methods in
tasks such as visual place recognition, socioeconomic estimation, and
human-environment perception. Moreover, we demonstrate the varying behaviors of
image representations learned through different contrastive learning objectives
across various downstream tasks. This study systematically discusses
representation learning strategies for urban studies based on street view
images, providing a benchmark that enhances the applicability of visual data in
urban science. The code is available at https://github.com/yonglleee/UrbanSTCL.
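The spatiotemporal objective described above is, at its core, an InfoNCE-style contrastive loss in which positives are views of the same place across time (or nearby views at the same time). A minimal, dependency-free sketch of such a loss (my own illustration, not the UrbanSTCL code):

```python
import math

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style contrastive loss on embedding vectors: pull the
    anchor toward its positive (e.g., same location, different year)
    and away from negatives (other locations), with temperature tau."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    logits = [dot(anchor, positive) / tau] + [dot(anchor, n) / tau
                                              for n in negatives]
    m = max(logits)  # log-sum-exp with max subtraction for stability
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_z)
```

A matching positive drives the loss toward zero, while a mismatched positive (the anchor closer to a negative) yields a large loss, which is the gradient signal that shapes temporal- and spatial-invariant representations.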
|
2502.04640
|
Building Rome with Convex Optimization
|
cs.RO cs.CV math.OC
|
Global bundle adjustment is made easy by depth prediction and convex
optimization. We (i) propose a scaled bundle adjustment (SBA) formulation that
lifts 2D keypoint measurements to 3D with learned depth, (ii) design an
empirically tight convex semidefinite program (SDP) relaxation that solves SBA
to certifiable global optimality, (iii) solve the SDP relaxations at extreme
scale with Burer-Monteiro factorization and a CUDA-based trust-region
Riemannian optimizer (dubbed XM), (iv) build a structure from motion (SfM)
pipeline with XM as the optimization engine and show that XM-SfM dominates or
compares favorably with existing SfM pipelines in terms of reconstruction
quality while being faster, more scalable, and initialization-free.
|
2502.04642
|
Dynamic Incentive Selection for Hierarchical Convex Model Predictive
Control
|
eess.SY cs.SY math.OC
|
In this paper, we discuss incentive design for hierarchical model predictive
control (MPC) systems viewed as Stackelberg games. We consider a hierarchical
MPC formulation where, given a lower-level convex MPC (LoMPC), the upper-level
system solves a bilevel MPC (BiMPC) subject to the constraint that the
lower-level system inputs are optimal for the LoMPC. Such hierarchical problems
are challenging due to optimality constraints in the BiMPC formulation. We
analyze an incentive Stackelberg game variation of the problem, where the BiMPC
provides additional incentives for the LoMPC cost function, which grants the
BiMPC influence over the LoMPC inputs. We show that for such problems, the
BiMPC can be reformulated as a simpler optimization problem, and the optimal
incentives can be iteratively computed without knowing the LoMPC cost function.
We extend our formulation for the case of multiple LoMPCs and propose an
algorithm that finds bounded suboptimal solutions for the BiMPC. We demonstrate
our algorithm for a dynamic price control example, where an independent system
operator (ISO) sets the electricity prices for electric vehicle (EV) charging
with the goal of minimizing a social cost and satisfying electricity generation
constraints. Notably, our method scales well to large EV population sizes.
|
2502.04643
|
Confidence Elicitation: A New Attack Vector for Large Language Models
|
cs.LG cs.CL cs.CR
|
A fundamental issue in deep learning has been adversarial robustness. As
these systems have scaled, such issues have persisted. Currently, large
language models (LLMs) with billions of parameters suffer from adversarial
attacks just like their earlier, smaller counterparts. However, the threat
models have changed. Previously, having gray-box access, where input embeddings
or output logits/probabilities were visible to the user, might have been
reasonable. However, with the introduction of closed-source models, no
information about the model is available apart from the generated output. This
means that current black-box attacks can only utilize the final prediction to
detect if an attack is successful. In this work, we investigate and demonstrate
the potential of attack guidance, akin to using output probabilities, while
having only black-box access in a classification setting. This is achieved
through the ability to elicit confidence from the model. We empirically show
that the elicited confidence is calibrated and not hallucinated for current
LLMs. By minimizing the elicited confidence, we can therefore increase the
likelihood of misclassification. Our new proposed paradigm demonstrates
promising state-of-the-art results on three datasets across two models
(LLaMA-3-8B-Instruct and Mistral-7B-Instruct-V0.3) when comparing our technique
to existing hard-label black-box attack methods that introduce word-level
substitutions.
|
2502.04644
|
Agentic Reasoning: Reasoning LLMs with Tools for the Deep Research
|
cs.AI cs.CL
|
We introduce Agentic Reasoning, a framework that enhances large language
model (LLM) reasoning by integrating external tool-using agents. Unlike
conventional LLM-based reasoning approaches, which rely solely on internal
inference, Agentic Reasoning dynamically engages web search, code execution,
and structured reasoning-context memory to solve complex problems requiring
deep research and multi-step logical deduction. Our framework introduces the
Mind Map agent, which constructs a structured knowledge graph to track logical
relationships, improving deductive reasoning. Additionally, the integration of
web-search and coding agents enables real-time retrieval and computational
analysis, enhancing reasoning accuracy and decision-making. Evaluations on
PhD-level scientific reasoning (GPQA) and domain-specific deep research tasks
demonstrate that our approach significantly outperforms existing models,
including leading retrieval-augmented generation (RAG) systems and
closed-source LLMs. Moreover, our results indicate that agentic reasoning
improves expert-level knowledge synthesis, test-time scalability, and
structured problem-solving. The code is at:
https://github.com/theworldofagents/Agentic-Reasoning.
|
2502.04645
|
Cross-Encoder Rediscovers a Semantic Variant of BM25
|
cs.IR cs.AI
|
Neural Ranking Models (NRMs) have rapidly advanced state-of-the-art
performance on information retrieval tasks. In this work, we investigate a
Cross-Encoder variant of MiniLM to determine which relevance features it
computes and where they are stored. We find that it employs a semantic variant
of the traditional BM25 in an interpretable manner, featuring localized
components: (1) Transformer attention heads that compute soft term frequency
while controlling for term saturation and document length effects, and (2) a
low-rank component of its embedding matrix that encodes inverse document
frequency information for the vocabulary. This suggests that the Cross-Encoder
uses the same fundamental mechanisms as BM25, but further leverages their
capacity to capture semantics for improved retrieval performance. The granular
understanding lays the groundwork for model editing to enhance model
transparency, addressing safety concerns, and improving scalability in training
and real-world applications.
|
2502.04646
|
Importance Sampling via Score-based Generative Models
|
cs.LG cs.AI
|
Importance sampling, which involves sampling from a probability density
function (PDF) proportional to the product of an importance weight function and
a base PDF, is a powerful technique with applications in variance reduction,
biased or customized sampling, data augmentation, and beyond. Inspired by the
growing availability of score-based generative models (SGMs), we propose an
entirely training-free importance sampling framework that relies solely on an
SGM for the base PDF. Our key innovation is realizing the importance sampling
process as a backward diffusion process, expressed in terms of the score
function of the base PDF and the specified importance weight function--both
readily available--eliminating the need for any additional training. We conduct
a thorough analysis demonstrating the method's scalability and effectiveness
across diverse datasets and tasks, including importance sampling for industrial
and natural images with neural importance weight functions. The training-free
aspect of our method is particularly compelling in real-world scenarios where a
single base distribution underlies multiple biased sampling tasks, each
requiring a different importance weight function. To the best of our knowledge,
our approach is the first importance sampling framework to achieve this.
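The key identity behind the training-free construction is that the score of the tilted density w(x)p(x)/Z is grad log p(x) + grad log w(x). A toy, dependency-free Langevin sketch (my own illustration, not the paper's backward-diffusion sampler) shows the idea: with a standard-normal base (score -x) and exponential weight w(x) = exp(a*x), the tilted target is N(a, 1).

```python
import math
import random

def langevin_tilted_sample(a, steps=300, eps=0.1, rng=random):
    """Unadjusted Langevin sampling from the tilted density
    w(x) * p(x) with base p = N(0, 1) (score -x) and weight
    w(x) = exp(a * x) (grad-log equal to a); the combined score
    -x + a drives samples toward the tilted target N(a, 1)."""
    x = 0.0
    for _ in range(steps):
        score = -x + a  # score of base PDF plus grad-log of weight
        x += 0.5 * eps * score + math.sqrt(eps) * rng.gauss(0.0, 1.0)
    return x
```

Averaging a few thousand independent chains recovers a sample mean close to a, with no training step anywhere: only the base score and the weight's gradient are needed, mirroring the abstract's "both readily available" claim.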
|
2502.04649
|
End-to-End Learning Framework for Solving Non-Markovian Optimal Control
|
cs.SY cs.LG math.OC
|
Integer-order calculus often falls short in capturing the long-range
dependencies and memory effects found in many real-world processes. Fractional
calculus addresses these gaps via fractional-order integrals and derivatives,
but fractional-order dynamical systems pose substantial challenges in system
identification and optimal control due to the lack of standard control
methodologies. In this paper, we theoretically derive the optimal control via
linear quadratic regulator (LQR) for fractional-order linear time-invariant
(FOLTI) systems and develop an end-to-end deep learning framework based on this
theoretical foundation. Our approach establishes a rigorous mathematical model,
derives analytical solutions, and incorporates deep learning to achieve
data-driven optimal control of FOLTI systems. Our key contributions include:
(i) proposing an innovative system identification method and control strategy for
FOLTI systems, (ii) developing the first end-to-end data-driven learning
framework, Fractional-Order Learning for Optimal Control (FOLOC), that learns
control policies from observed trajectories, and (iii) deriving a theoretical
analysis of sample complexity to quantify the number of samples required for
accurate optimal control in complex real-world problems. Experimental results
indicate that our method accurately approximates fractional-order system
behaviors without relying on Gaussian noise assumptions, pointing to promising
avenues for advanced optimal control.
|
2502.04655
|
Before It's Too Late: A State Space Model for the Early Prediction of
Misinformation and Disinformation Engagement
|
cs.CL
|
In today's digital age, conspiracies and information campaigns can emerge
rapidly and erode social and democratic cohesion. While recent deep learning
approaches have made progress in modeling engagement through language and
propagation models, they struggle with irregularly sampled data and early
trajectory assessment. We present IC-Mamba, a novel state space model that
forecasts social media engagement by modeling interval-censored data with
integrated temporal embeddings. Our model excels at predicting engagement
patterns within the crucial first 15-30 minutes of posting (RMSE 0.118-0.143),
enabling rapid assessment of content reach. By incorporating interval-censored
modeling into the state space framework, IC-Mamba captures fine-grained
temporal dynamics of engagement growth, achieving a 4.72% improvement over
state-of-the-art across multiple engagement metrics (likes, shares, comments,
and emojis). Our experiments demonstrate IC-Mamba's effectiveness in
forecasting both post-level dynamics and broader narrative patterns (F1
0.508-0.751 for narrative-level predictions). The model maintains strong
predictive performance across extended time horizons, successfully forecasting
opinion-level engagement up to 28 days ahead using observation windows of 3-10
days. These capabilities enable earlier identification of potentially
problematic content, providing crucial lead time for designing and implementing
countermeasures. Code is available at: https://github.com/ltian678/ic-mamba. An
interactive dashboard demonstrating our results is available at:
https://ic-mamba.behavioral-ds.science.
|
2502.04656
|
MHAF-YOLO: Multi-Branch Heterogeneous Auxiliary Fusion YOLO for accurate
object detection
|
cs.CV
|
Due to the effective multi-scale feature fusion capabilities of the Path
Aggregation FPN (PAFPN), it has become a widely adopted component in YOLO-based
detectors. However, PAFPN struggles to integrate high-level semantic cues with
low-level spatial details, limiting its performance in real-world applications,
especially with significant scale variations. In this paper, we propose
MHAF-YOLO, a novel detection framework featuring a versatile neck design called
the Multi-Branch Auxiliary FPN (MAFPN), which consists of two key modules: the
Superficial Assisted Fusion (SAF) and Advanced Assisted Fusion (AAF). The SAF
bridges the backbone and the neck by fusing shallow features, effectively
transferring crucial low-level spatial information with high fidelity.
Meanwhile, the AAF integrates multi-scale feature information at deeper neck
layers, delivering richer gradient information to the output layer and further
enhancing the model learning capacity. To complement MAFPN, we introduce the
Global Heterogeneous Flexible Kernel Selection (GHFKS) mechanism and the
Reparameterized Heterogeneous Multi-Scale (RepHMS) module to enhance feature
fusion. RepHMS is globally integrated into the network, utilizing GHFKS to
select larger convolutional kernels for various feature layers, expanding the
vertical receptive field and capturing contextual information across spatial
hierarchies. Locally, it optimizes convolution by processing both large and
small kernels within the same layer, broadening the lateral receptive field and
preserving crucial details for detecting smaller targets. The source code of
this work is available at: https://github.com/yang0201/MHAF-YOLO.
|
2502.04658
|
Shifting Attention to You: Personalized Brain-Inspired AI Models
|
q-bio.NC cs.AI
|
The integration of human and artificial intelligence represents a scientific
opportunity to advance our understanding of information processing, as each
system offers unique computational insights that can enhance and inform the
other. The synthesis of human cognitive principles with artificial intelligence
has the potential to produce more interpretable and functionally aligned
computational models, while simultaneously providing a formal framework for
investigating the neural mechanisms underlying perception, learning, and
decision-making through systematic model comparisons and representational
analyses. In this study, we introduce personalized brain-inspired modeling that
integrates human behavioral embeddings and neural data to align with cognitive
processes. We took a stepwise approach, fine-tuning the Contrastive
Language-Image Pre-training (CLIP) model with large-scale behavioral decisions,
group-level neural data, and finally, participant-level neural data within a
broader framework that we have named CLIP-Human-Based Analysis (CLIP-HBA). We
found that fine-tuning on behavioral data enhances its ability to predict human
similarity judgments while indirectly aligning it with dynamic representations
captured via MEG. To further gain mechanistic insights into the temporal
evolution of cognitive processes, we introduced a model specifically fine-tuned
on millisecond-level MEG neural dynamics (CLIP-HBA-MEG). This model resulted in
enhanced temporal alignment with human neural processing while still showing
improvement on behavioral alignment. Finally, we trained individualized models
on participant-specific neural data, effectively capturing individualized
neural dynamics and highlighting the potential for personalized AI systems.
These personalized systems have far-reaching implications for the fields of
medicine, cognitive research, human-computer interfaces, and AI development.
|
2502.04662
|
Adversarially-Robust TD Learning with Markovian Data: Finite-Time Rates
and Fundamental Limits
|
cs.LG cs.SY eess.SY math.OC
|
One of the most basic problems in reinforcement learning (RL) is policy
evaluation: estimating the long-term return, i.e., value function,
corresponding to a given fixed policy. The celebrated Temporal Difference (TD)
learning algorithm addresses this problem, and recent work has investigated
finite-time convergence guarantees for this algorithm and variants thereof.
However, these guarantees hinge on the reward observations being always
generated from a well-behaved (e.g., sub-Gaussian) true reward distribution.
Motivated by harsh, real-world environments where such an idealistic assumption
may no longer hold, we revisit the policy evaluation problem from the
perspective of adversarial robustness. In particular, we consider a
Huber-contaminated reward model where an adversary can arbitrarily corrupt each
reward sample with a small probability $\epsilon$. Under this observation
model, we first show that the adversary can cause the vanilla TD algorithm to
converge to any arbitrary value function. We then develop a novel algorithm
called Robust-TD and prove that its finite-time guarantees match that of
vanilla TD with linear function approximation up to a small $O(\epsilon)$ term
that captures the effect of corruption. We complement this result with a
minimax lower bound, revealing that such an additive corruption-induced term is
unavoidable. To our knowledge, these results are the first of their kind in the
context of adversarial robustness of stochastic approximation schemes driven by
Markov noise. The key new technical tool that enables our results is an
analysis of the Median-of-Means estimator with corrupted, time-correlated data
that might be of independent interest to the literature on robust statistics.
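The Median-of-Means estimator at the heart of the analysis is simple to state; a minimal sketch (mine, not the paper's time-correlated variant) shows why it resists a small fraction of arbitrarily corrupted rewards while the plain sample mean does not.

```python
import statistics

def median_of_means(samples, num_buckets):
    """Median-of-Means: partition samples into equal buckets, average
    each bucket, and return the median of the bucket means. A few
    grossly corrupted samples can ruin only a few bucket means, which
    the median then ignores."""
    size = len(samples) // num_buckets
    bucket_means = [sum(samples[i * size:(i + 1) * size]) / size
                    for i in range(num_buckets)]
    return statistics.median(bucket_means)
```

With 95 clean rewards of 1.0 and 5 adversarial rewards of 1000.0, the sample mean is dragged to about 51, while the median of 20 bucket means is still exactly 1.0: the corruptions can contaminate at most 5 of the 20 buckets, far short of a majority.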
|
2502.04664
|
Implicit Bias of SignGD and Adam on Multiclass Separable Data
|
cs.LG math.OC
|
In the optimization of overparameterized models, different gradient-based
methods can achieve zero training error yet converge to distinctly different
solutions inducing different generalization properties. While a decade of
research on implicit optimization bias has illuminated this phenomenon in
various settings, even the foundational case of linear classification with
separable data still has important open questions. We resolve a fundamental gap
by characterizing the implicit bias of both Adam and Sign Gradient Descent in
multi-class cross-entropy minimization: we prove that their iterates converge
to solutions that maximize the margin with respect to the classifier matrix's
max-norm and characterize the rate of convergence. We extend our results to
general p-norm normalized steepest descent algorithms and to other multi-class
losses.
|
2502.04666
|
Enhancing Health Information Retrieval with RAG by Prioritizing Topical
Relevance and Factual Accuracy
|
cs.IR
|
The exponential surge in online health information, coupled with its
increasing use by non-experts, highlights the pressing need for advanced Health
Information Retrieval models that consider not only topical relevance but also
the factual accuracy of the retrieved information, given the potential risks
associated with health misinformation. To this aim, this paper introduces a
solution driven by Retrieval-Augmented Generation (RAG), which leverages the
capabilities of generative Large Language Models (LLMs) to enhance the
retrieval of health-related documents grounded in scientific evidence. In
particular, we propose a three-stage model: in the first stage, the user's
query is employed to retrieve topically relevant passages with associated
references from a knowledge base constituted by scientific literature. In the
second stage, these passages, alongside the initial query, are processed by
LLMs to generate a contextually relevant rich text (GenText). In the last
stage, the documents to be retrieved are evaluated and ranked both from the
point of view of topical relevance and factual accuracy by means of their
comparison with GenText, either through stance detection or semantic
similarity. In addition to calculating factual accuracy, GenText can offer a
layer of explainability for it, aiding users in understanding the reasoning
behind the retrieval. Experimental evaluation of our model on benchmark
datasets and against baseline models demonstrates its effectiveness in
enhancing the retrieval of both topically relevant and factually accurate
health information, thus presenting a significant step forward in the health
misinformation mitigation problem.
|
2502.04667
|
Unveiling the Mechanisms of Explicit CoT Training: How Chain-of-Thought
Enhances Reasoning Generalization
|
cs.LG cs.AI cs.CL
|
Training large language models (LLMs) with high-quality Chain-of-Thought
(CoT) annotations has become a widely adopted strategy due to its significant
enhancement of reasoning capabilities. To fully comprehend this approach, two
questions naturally arise: (Q1) What advantages does training with CoT offer
compared to training without CoT? (Q2) If there are advantages, what are the
underlying mechanisms of explicit CoT training? Analyzing the advantages and
mechanisms of CoT training is challenging due to the many factors involved. To
address this, we conduct a detailed analysis using clear and controllable data
distributions and, for the first time, reveal that CoT training offers the
following advantages: (1) Training with CoT markedly improves reasoning
generalization, extending it from in-distribution (ID) to both ID and
out-of-distribution (OOD) scenarios, while also speeding up convergence; (2)
Even when training with CoT includes a certain range of erroneous reasoning
steps, it still enables the model to learn reasoning patterns, leading to
systematic generalization. We further explore the underlying mechanisms from a
circuit perspective: (1) The data distribution (e.g., ratio $\lambda$ and
pattern) plays a crucial role in influencing the model's systematic
generalization; (2) CoT training (with two-hop facts) internalizes reasoning
into a two-stage generalizing circuit, where the number of stages corresponds
to the explicit reasoning steps during training. Our findings elucidate the
mechanisms underlying explicit CoT training and offer critical insights into
tuning strategies for LLMs to achieve robust generalization.
|
2502.04668
|
Machine-Learning Interatomic Potentials for Long-Range Systems
|
physics.chem-ph cs.LG
|
Machine-learning interatomic potentials have emerged as a revolutionary class
of force-field models in molecular simulations, delivering quantum-mechanical
accuracy at a fraction of the computational cost and enabling the simulation of
large-scale systems over extended timescales. However, they often focus on
modeling local environments, neglecting crucial long-range interactions. We
propose a Sum-of-Gaussians Neural Network (SOG-Net), a lightweight and
versatile framework for integrating long-range interactions into
machine-learning force fields. The SOG-Net employs a latent-variable learning network
that seamlessly bridges short-range and long-range components, coupled with an
efficient Fourier convolution layer that incorporates long-range effects. By
learning sum-of-Gaussian multipliers across different convolution layers, the
SOG-Net adaptively captures diverse long-range decay behaviors while
maintaining close-to-linear computational complexity during training and
simulation via non-uniform fast Fourier transforms. The method is demonstrated
effective for a broad range of long-range systems.
|
2502.04669
|
A Comprehensive Review on Noise Control of Diffusion Model
|
cs.LG cs.AI
|
Diffusion models have recently emerged as powerful generative frameworks for
producing high-quality images. A pivotal component of these models is the noise
schedule, which governs the rate of noise injection during the diffusion
process. Since the noise schedule substantially influences sampling quality and
training quality, understanding its design and implications is crucial. In this
discussion, various noise schedules are examined, and their distinguishing
features and performance characteristics are highlighted.
|
2502.04670
|
CCS: Controllable and Constrained Sampling with Diffusion Models via
Initial Noise Perturbation
|
cs.LG cs.AI
|
Diffusion models have emerged as powerful tools for generative tasks,
producing high-quality outputs across diverse domains. However, how the
generated data responds to the initial noise perturbation in diffusion models
remains under-explored, which hinders understanding the controllability of the
sampling process. In this work, we first observe an interesting phenomenon: the
relationship between the change of generation outputs and the scale of initial
noise perturbation is highly linear under diffusion ODE sampling. We then
provide both theoretical and empirical analyses to justify this linearity
property of the input-output (noise-to-generated-data) relationship. Inspired by these
new insights, we propose a novel Controllable and Constrained Sampling method
(CCS) together with a new controller algorithm for diffusion models to sample
with desired statistical properties while preserving good sample quality. We
perform extensive experiments to compare our proposed sampling approach with
other methods on both sampling controllability and sampled data quality.
Results show that our CCS method achieves more precisely controlled sampling
while maintaining superior sample quality and diversity.
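The linearity observation can be illustrated with a toy numerical experiment. The sketch below is not the paper's setup: it uses a fixed smooth nonlinear map as a stand-in for a deterministic ODE sampler, and checks that the norm of the output change scales roughly linearly with the scale of the initial-noise perturbation.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8)) / np.sqrt(8)  # fixed weights -> deterministic map

def toy_sampler(z, steps=10):
    """Stand-in for a deterministic ODE sampler: a fixed smooth nonlinear
    map applied iteratively (not a real diffusion model)."""
    x = z
    for _ in range(steps):
        x = np.tanh(W @ x) + 0.5 * x
    return x

z0 = rng.normal(size=8)            # initial noise
delta = rng.normal(size=8)
delta /= np.linalg.norm(delta)     # fixed perturbation direction

base = toy_sampler(z0)
eps_values = [0.01, 0.02, 0.04]
changes = [np.linalg.norm(toy_sampler(z0 + e * delta) - base)
           for e in eps_values]

# If the output change is linear in the perturbation scale,
# doubling eps should roughly double the change.
ratios = [changes[i + 1] / changes[i] for i in range(len(changes) - 1)]
print(ratios)
```

For any smooth deterministic sampler this near-2 ratio holds to first order; the paper's contribution is showing the relationship stays highly linear over much larger perturbation scales for diffusion ODE sampling.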
|
2502.04671
|
ProofWala: Multilingual Proof Data Synthesis and Theorem-Proving
|
cs.AI cs.LG cs.LO cs.PL
|
Neural networks have shown substantial promise at automatic theorem-proving
in interactive proof assistants (ITPs) like Lean and Coq. However, most neural
theorem-proving models are restricted to specific ITPs, leaving out
opportunities for cross-lingual $\textit{transfer}$ between ITPs. We address
this weakness with a multilingual proof framework, ${\rm P{\small ROOF}W{\small
ALA}}$, that allows a standardized form of interaction between neural
theorem-provers and two established ITPs (Coq and Lean). It enables the
collection of multilingual proof step data -- data recording the result of
proof actions on ITP states -- for training neural provers. ${\rm P{\small
ROOF}W{\small ALA}}$ allows the systematic evaluation of a model's performance
across different ITPs and problem domains via efficient parallel proof search
algorithms. We show that multilingual training enabled by ${\rm P{\small
ROOF}W{\small ALA}}$ can lead to successful transfer across ITPs. Specifically,
a model trained on a mix of ${\rm P{\small ROOF}W{\small ALA}}$-generated Coq
and Lean data outperforms Lean-only and Coq-only models on the standard
prove-at-$k$ metric. We open source all code including code for the ${\rm
P{\small ROOF}W{\small ALA}}$ Framework
(https://github.com/trishullab/proof-wala), and the Multilingual ITP
interaction framework (https://github.com/trishullab/itp-interface).
|
2502.04673
|
Optimistic Algorithms for Adaptive Estimation of the Average Treatment
Effect
|
stat.ML cs.LG stat.ME
|
Estimation and inference for the Average Treatment Effect (ATE) is a
cornerstone of causal inference and often serves as the foundation for
developing procedures for more complicated settings. Although traditionally
analyzed in a batch setting, recent advances in martingale theory have paved
the way for adaptive methods that can enhance the power of downstream
inference. Despite these advances, progress in understanding and developing
adaptive algorithms remains in its early stages. Existing work either focuses on
asymptotic analyses that overlook exploration-exploitation tradeoffs relevant
in finite-sample regimes or relies on simpler but suboptimal estimators. In this
work, we address these limitations by studying adaptive sampling procedures
that take advantage of the asymptotically optimal Augmented Inverse Probability
Weighting (AIPW) estimator. Our analysis uncovers challenges obscured by
asymptotic approaches and introduces a novel algorithmic design principle
reminiscent of optimism in multiarmed bandits. This principled approach enables
our algorithm to achieve significant theoretical and empirical gains compared
to prior methods. Our findings mark a step forward in advancing adaptive causal
inference methods in theory and practice.
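For reference, the AIPW estimator of the ATE combines outcome regressions $\hat{\mu}_a$ with inverse propensity weighting via $\hat{e}$; this is its standard form in our notation, not necessarily the paper's:

```latex
\hat{\tau}_{\mathrm{AIPW}}
  = \frac{1}{n}\sum_{i=1}^{n}\Bigg[
      \hat{\mu}_1(X_i) - \hat{\mu}_0(X_i)
      + \frac{A_i\,\big(Y_i - \hat{\mu}_1(X_i)\big)}{\hat{e}(X_i)}
      - \frac{(1-A_i)\,\big(Y_i - \hat{\mu}_0(X_i)\big)}{1-\hat{e}(X_i)}
    \Bigg]
```

Here $A_i$ is the treatment indicator, $Y_i$ the outcome, $\hat{\mu}_a(x)$ the estimated outcome regression under arm $a$, and $\hat{e}(x)$ the estimated propensity score; the correction terms give the estimator its double robustness and asymptotic efficiency.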
|
2502.04674
|
AdParaphrase: Paraphrase Dataset for Analyzing Linguistic Features
toward Generating Attractive Ad Texts
|
cs.CL cs.AI
|
Effective linguistic choices that attract potential customers play crucial
roles in advertising success. This study aims to explore the linguistic
features of ad texts that influence human preferences. Although the creation of
attractive ad texts is an active area of research, progress in understanding
the specific linguistic features that affect attractiveness is hindered by
several obstacles. First, human preferences are complex and influenced by
multiple factors, including their content, such as brand names, and their
linguistic styles, making analysis challenging. Second, publicly available ad
text datasets that include human preferences, such as ad performance metrics
and human feedback reflecting people's interests, are lacking. To
address these problems, we present AdParaphrase, a paraphrase dataset that
contains human preferences for pairs of ad texts that are semantically
equivalent but differ in terms of wording and style. This dataset allows for
preference analysis that focuses on the differences in linguistic features. Our
analysis revealed that ad texts preferred by human judges have higher fluency,
longer length, more nouns, and use of bracket symbols. Furthermore, we
demonstrate that an ad text-generation model that considers these findings
significantly improves the attractiveness of a given text. The dataset is
publicly available at: https://github.com/CyberAgentAILab/AdParaphrase.
|
2502.04675
|
Scalable Oversight for Superhuman AI via Recursive Self-Critiquing
|
cs.AI cs.CL cs.LG
|
As AI capabilities increasingly surpass human proficiency in complex tasks,
current alignment techniques including SFT and RLHF face fundamental challenges
in ensuring reliable oversight. These methods rely on direct human assessment
and become untenable when AI outputs exceed human cognitive thresholds. In
response to this challenge, we explore two hypotheses: (1) critique of critique
can be easier than critique itself, extending the widely-accepted observation
that verification is easier than generation to the critique domain, as critique
itself is a specialized form of generation; (2) this difficulty relationship
holds recursively, suggesting that when direct evaluation is infeasible,
performing high-order critiques (e.g., critique of critique of critique) offers
a more tractable supervision pathway. To examine these hypotheses, we perform
Human-Human, Human-AI, and AI-AI experiments across multiple tasks. Our results
demonstrate encouraging evidence supporting these hypotheses and suggest that
recursive self-critiquing is a promising direction for scalable oversight.
|
2502.04678
|
Nearly Tight Bounds for Cross-Learning Contextual Bandits with Graphical
Feedback
|
cs.LG
|
The cross-learning contextual bandit problem with graphical feedback has
recently attracted significant attention. In this setting, there is a
contextual bandit with a feedback graph over the arms, and pulling an arm
reveals the loss for all neighboring arms in the feedback graph across all
contexts. Initially proposed by Han et al. (2024), this problem has broad
applications in areas such as bidding in first-price auctions, and explores a
novel frontier in the feedback structure of bandit problems. A key theoretical
question is whether an algorithm with $\widetilde{O}(\sqrt{\alpha T})$ regret
exists, where $\alpha$ represents the independence number of the feedback
graph. This question is particularly interesting because it concerns whether an
algorithm can achieve a regret bound entirely independent of the number of
contexts and matching the minimax regret of vanilla graphical bandits. Previous
work has demonstrated that such an algorithm is impossible for adversarial
contexts, but the question remains open for stochastic contexts. In this work,
we affirmatively answer this open question by presenting an algorithm that
achieves the minimax $\widetilde{O}(\sqrt{\alpha T})$ regret for cross-learning
contextual bandits with graphical feedback and stochastic contexts. Notably,
although that question is open even for stochastic bandits, we directly solve
the strictly stronger adversarial bandit version of the problem.
|
2502.04679
|
Mechanistic Understandings of Representation Vulnerabilities and
Engineering Robust Vision Transformers
|
cs.CV cs.LG
|
While transformer-based models dominate NLP and vision applications, the
underlying mechanisms by which they semantically map the input space to the
label space are not well understood. In this paper, we study the sources of known
representation vulnerabilities of vision transformers (ViT), where perceptually
identical images can have very different representations and semantically
unrelated images can have the same representation. Our analysis indicates that
imperceptible changes to the input can result in significant representation
changes, particularly in later layers, suggesting potential instabilities in
the performance of ViTs. Our comprehensive study reveals that adversarial
effects, while subtle in early layers, propagate and amplify through the
network, becoming most pronounced in middle to late layers. This insight
motivates the development of NeuroShield-ViT, a novel defense mechanism that
strategically neutralizes vulnerable neurons in earlier layers to prevent the
cascade of adversarial effects. We demonstrate NeuroShield-ViT's effectiveness
across various attacks, particularly excelling against strong iterative
attacks, and showcase its remarkable zero-shot generalization capabilities.
Without fine-tuning, our method achieves a competitive accuracy of 77.8% on
adversarial examples, surpassing conventional robustness methods. Our results
shed new light on how adversarial effects propagate through ViT layers, while
providing a promising approach to enhance the robustness of vision transformers
against adversarial attacks.
|
2502.04680
|
Performance Evaluation of Image Enhancement Techniques on Transfer
Learning for Touchless Fingerprint Recognition
|
cs.CV cs.LG
|
Fingerprint recognition remains one of the most reliable biometric
technologies due to its high accuracy and uniqueness. Traditional systems rely
on contact-based scanners, which are prone to issues such as image degradation
from surface contamination and inconsistent user interaction. To address these
limitations, contactless fingerprint recognition has emerged as a promising
alternative, providing non-intrusive and hygienic authentication. This study
evaluates the impact of image enhancement techniques on the performance of
pre-trained deep learning models using transfer learning for touchless
fingerprint recognition. The IIT-Bombay Touchless and Touch-Based Fingerprint
Database, containing data from 200 subjects, was employed to test the
performance of deep learning architectures such as VGG-16, VGG-19,
Inception-V3, and ResNet-50. Experimental results reveal that transfer learning
methods with fingerprint image enhancement (indirect method) significantly
outperform those without enhancement (direct method). Specifically, VGG-16
achieved an accuracy of 98% in training and 93% in testing when using the
enhanced images, demonstrating superior performance compared to the direct
method.
This paper provides a detailed comparison of the effectiveness of image
enhancement in improving the accuracy of transfer learning models for touchless
fingerprint recognition, offering key insights for developing more efficient
biometric systems.
|
2502.04682
|
AI-Driven Solutions for Falcon Disease Classification: Concatenated
ConvNeXt cum EfficientNet AI Model Approach
|
cs.CV
|
Falconry, an ancient practice of training and hunting with falcons,
emphasizes the need for vigilant health monitoring to ensure the well-being of
these highly valued birds, especially during hunting activities. This research
paper introduces a cutting-edge approach, which leverages the power of
Concatenated ConvNeXt and EfficientNet AI models for falcon disease
classification. Focused on distinguishing 'Normal,' 'Liver,' and
'Aspergillosis' cases, the study employs a comprehensive dataset for model
training and evaluation, utilizing metrics such as accuracy, precision, recall,
and f1-score. Through rigorous experimentation and evaluation, we demonstrate
the superior performance of the concatenated AI model compared to traditional
methods and standalone architectures. This novel approach contributes to
accurate falcon disease classification, laying the groundwork for further
advancements in avian veterinary AI applications.
|
2502.04684
|
G2PDiffusion: Genotype-to-Phenotype Prediction with Diffusion Models
|
cs.LG cs.AI
|
Discovering the genotype-phenotype relationship is crucial for genetic
engineering, which will facilitate advances in fields such as crop breeding,
conservation biology, and personalized medicine. Current research usually
focuses on single species and small datasets due to limitations in phenotypic
data collection, especially for traits that require visual assessments or
physical measurements. Deciphering complex and composite phenotypes, such as
morphology, from genetic data at scale remains an open question. To break
through traditional generic models that rely on simplified assumptions, this
paper introduces G2PDiffusion, the first-of-its-kind diffusion model designed
for genotype-to-phenotype generation across multiple species. Specifically, we
use images to represent morphological phenotypes across species and redefine
phenotype prediction as conditional image generation. To this end, this paper
introduces an environment-enhanced DNA sequence conditioner and trains a stable
diffusion model with a novel alignment method to improve genotype-to-phenotype
consistency. Extensive experiments demonstrate that our approach enhances
phenotype prediction accuracy across species, capturing subtle genetic
variations that contribute to observable traits.
|
2502.04686
|
Learning Strategic Language Agents in the Werewolf Game with Iterative
Latent Space Policy Optimization
|
cs.AI
|
Large language model (LLM)-based agents have recently shown impressive
progress in a variety of domains, including open-ended conversation and
multi-step decision-making. However, applying these agents to social deduction
games such as Werewolf, which requires both strategic decision-making and
free-form language interaction, remains non-trivial. Traditional methods based
on Counterfactual Regret Minimization (CFR) or reinforcement learning (RL)
typically depend on a predefined action space, making them unsuitable for
language games with unconstrained text action space. Meanwhile, pure LLM-based
agents often suffer from intrinsic biases and require prohibitively large
datasets for fine-tuning. We propose Latent Space Policy Optimization (LSPO),
an iterative framework that addresses these challenges by first mapping
free-form text to a discrete latent space, where methods like CFR and RL can
learn strategic policy more effectively. We then translate the learned policy
back into natural language dialogues, which are used to fine-tune an LLM via
Direct Preference Optimization (DPO). By iteratively alternating between these
stages, our LSPO agent progressively enhances both strategic reasoning and
language communication. Experiment results on the Werewolf game show that our
method improves the agent's performance in each iteration and outperforms
existing Werewolf agents, underscoring its promise for free-form language
decision-making.
|
2502.04688
|
M-IFEval: Multilingual Instruction-Following Evaluation
|
cs.CL cs.AI
|
Instruction following is a core capability of modern Large language models
(LLMs), making evaluating this capability essential to understanding these
models. The Instruction Following Evaluation (IFEval) benchmark from the
literature does this using objective criteria, offering a measure of LLM
performance without subjective AI or human judgement. However, it only includes
English instructions, limiting its ability to assess LLMs in other languages.
We propose the Multilingual Instruction Following Evaluation (M-IFEval)
benchmark, expanding the evaluation to French, Japanese, and Spanish, with both
general and language-specific instructions. Applying this benchmark to 8
state-of-the-art LLMs, we find that benchmark performance across languages and
instruction types can vary widely, underscoring the importance of a
multilingual benchmark for evaluating LLMs in a diverse cultural context.
|
2502.04689
|
ARR: Question Answering with Large Language Models via Analyzing,
Retrieving, and Reasoning
|
cs.CL cs.AI cs.LG
|
Large language models (LLMs) achieve remarkable performance on challenging
benchmarks that are often structured as multiple-choice question-answering (QA)
tasks. Zero-shot Chain-of-Thought (CoT) prompting enhances reasoning in LLMs
but provides only vague and generic guidance ("think step by step"). This paper
introduces ARR, an intuitive and effective zero-shot prompting method that
explicitly incorporates three key steps in QA solving: analyzing the intent of
the question, retrieving relevant information, and reasoning step by step.
Comprehensive experiments across diverse and challenging QA tasks demonstrate
that ARR consistently improves the Baseline (without ARR prompting) and
outperforms CoT. Ablation and case studies further validate the positive
contributions of each component: analyzing, retrieving, and reasoning. Notably,
intent analysis plays a vital role in ARR. Additionally, extensive evaluations
across various model sizes, LLM series, and generation settings solidify the
effectiveness, robustness, and generalizability of ARR.
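A minimal sketch of how such a zero-shot prompt might be assembled for a multiple-choice QA item. The trigger wording below paraphrases the three ARR steps (analyzing, retrieving, reasoning) and is illustrative, not the paper's exact prompt text.

```python
# Hypothetical ARR-style trigger phrase; paraphrases the three steps,
# not the paper's exact wording.
ARR_TRIGGER = (
    "Answer the following question. First, analyze the intent of the "
    "question; then retrieve the relevant information; finally, reason "
    "step by step to reach the answer."
)

def build_arr_prompt(question, options):
    """Assemble a zero-shot ARR-style prompt for a multiple-choice QA item."""
    lines = [ARR_TRIGGER, "", f"Question: {question}"]
    for label, text in zip("ABCD", options):
        lines.append(f"{label}. {text}")
    lines.append("Answer:")
    return "\n".join(lines)

prompt = build_arr_prompt(
    "Which planet is known as the Red Planet?",
    ["Venus", "Mars", "Jupiter", "Saturn"],
)
print(prompt)
```

The contrast with vanilla CoT ("think step by step") is that the trigger names each sub-skill explicitly, which the ablations in the abstract credit for the gains, intent analysis in particular.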
|
2502.04692
|
STRIDE: Automating Reward Design, Deep Reinforcement Learning Training
and Feedback Optimization in Humanoid Robotics Locomotion
|
cs.RO cs.LG
|
Humanoid robotics presents significant challenges in artificial intelligence,
requiring precise coordination and control of high-degree-of-freedom systems.
Designing effective reward functions for deep reinforcement learning (DRL) in
this domain remains a critical bottleneck, demanding extensive manual effort,
domain expertise, and iterative refinement. To overcome these challenges, we
introduce STRIDE, a novel framework built on agentic engineering to automate
reward design, DRL training, and feedback optimization for humanoid robot
locomotion tasks. By combining the structured principles of agentic engineering
with large language models (LLMs) for code-writing, zero-shot generation, and
in-context optimization, STRIDE generates, evaluates, and iteratively refines
reward functions without relying on task-specific prompts or templates. Across
diverse environments featuring humanoid robot morphologies, STRIDE outperforms
the state-of-the-art reward design framework EUREKA, achieving an average
improvement of around 250% in efficiency and task performance. Using
STRIDE-generated rewards, simulated humanoid robots achieve sprint-level
locomotion across complex terrains, highlighting its ability to advance DRL
workflows and humanoid robotics research.
|
2502.04693
|
Be Water, My Antennas: Riding on Radio Wave Fluctuation in Nature for
Spatial Multiplexing using Programmable Meta-Fluid Antenna
|
physics.app-ph cs.SY eess.SY
|
Interference and scattering, often deemed undesirable, are inevitable in
wireless communications, especially when the current mobile networks and
upcoming sixth generation (6G) have turned into ultra-dense networks. Current
approaches relying on multiple-input multiple-output (MIMO) combined with
artificial-intelligence-aided (AI) signal processing have drawbacks of being
power-hungry and requiring wide bandwidth that raise scalability concerns. In
this article, we take a radical approach and utilize the channel fading
phenomenon to our advantage. Specifically, we propose a novel meta-fluid
antenna architecture, referred to as the `fluid' antenna system (FAS), that can
freely surf on radio wave fluctuations, like `fluid' figuratively speaking,
with fine resolution in space to opportunistically avoid interference,
eliminating the need for expensive signal processing. Our experimental results
demonstrate that under rich scattering conditions, the proposed meta-fluidic
architecture is able to exploit the natural ups and downs of radio waves in
space for spatial multiplexing. These breakthrough results show that scattering
can be desirable, not harmful, and interference can be dodged, not suppressed,
fundamentally changing our perception of fading and our understanding of how
interference should be managed in wireless communications networks.
|
2502.04695
|
Bridging the Gap in XAI-Why Reliable Metrics Matter for Explainability
and Compliance
|
cs.AI cs.CE cs.ET cs.LG
|
This position paper emphasizes the critical gap in the evaluation of
Explainable AI (XAI) due to the lack of standardized and reliable metrics,
which diminishes its practical value, trustworthiness, and ability to meet
regulatory requirements. Current evaluation methods are often fragmented,
subjective, and biased, making them prone to manipulation and complicating the
assessment of complex models. A central issue is the absence of a ground truth
for explanations, complicating comparisons across various XAI approaches. To
address these challenges, we advocate for widespread research into developing
robust, context-sensitive evaluation metrics. These metrics should be resistant
to manipulation, relevant to each use case, and based on human judgment and
real-world applicability. We also recommend creating domain-specific evaluation
benchmarks that align with the user and regulatory needs of sectors such as
healthcare and finance. By encouraging collaboration among academia, industry,
and regulators, we can create standards that balance flexibility and
consistency, ensuring XAI explanations are meaningful, trustworthy, and
compliant with evolving regulations.
|
2502.04696
|
Adaptive Learning-based Model Predictive Control Strategy for Drift
Vehicles
|
cs.RO
|
Drift vehicle control offers valuable insights to support safe autonomous
driving in extreme conditions, which hinges on tracking a particular path while
maintaining the vehicle states near the drift equilibrium points (DEP).
However, conventional tracking methods are not adaptable for drift vehicles due
to their opposite steering angle and yaw rate. In this paper, we propose an
adaptive path tracking (APT) control method to dynamically adjust drift states
to follow the reference path, improving on commonly used predictive path
tracking methods with a reduced computation burden. Furthermore, existing
control strategies necessitate a precise system model to calculate the DEP,
which can be intractable due to the highly nonlinear drift dynamics and
sensitive vehicle parameters. To tackle this problem, an adaptive
learning-based model predictive control (ALMPC) strategy is proposed based on
the APT method, where an upper-level Bayesian optimization is employed to learn
the DEP and APT control law to instruct a lower-level MPC drift controller.
This hierarchical system architecture can also resolve the inherent control
conflict between path tracking and drifting by separating these objectives into
different layers. The ALMPC strategy is verified on the Matlab-Carsim platform,
and simulation results demonstrate its effectiveness in controlling the drift
vehicle to follow a clothoid-based reference path even with the misidentified
road friction parameter.
|
2502.04697
|
Multi-Agent Coverage Control in Non-Convex Annulus Region with Conformal
Mapping
|
eess.SY cs.SY
|
Efficiently fulfilling coverage tasks in non-convex regions has long been a
significant challenge for multi-agent systems (MASs). By leveraging conformal
mapping, this paper introduces a novel sectorial coverage formulation to
transform a non-convex annulus region into a topologically equivalent one. This
approach enables the deployment of MASs in a non-star-shaped region while
optimizing coverage performance and achieving load balance among sub-regions.
It provides a unique perspective on the partitioned sub-regions to highlight
the geodesic convex property of the non-star-shaped region. By utilizing the
sectorial partition mechanism and the diffeomorphism property of conformal
mapping, a decentralized control law is designed to drive MASs towards a
desired configuration, which not only optimizes the global coverage cost but
also ensures exponential convergence of equitable workload. Moreover, an
iterative search algorithm is developed to identify the optimal approximation
of multi-agent deployment in the non-star-shaped region. Theoretical analysis
is conducted to confirm the asymptotic stability and global convergence, with
arbitrarily small tolerance, of the closed-loop system. Finally, numerical
simulations demonstrate the practicality of the proposed coverage formulation
with conformal mapping.
|
2502.04699
|
A Meta-learner for Heterogeneous Effects in Difference-in-Differences
|
stat.ML cs.LG
|
We address the problem of estimating heterogeneous treatment effects in panel
data, adopting the popular Difference-in-Differences (DiD) framework under the
conditional parallel trends assumption. We propose a novel doubly robust
meta-learner for the Conditional Average Treatment Effect on the Treated
(CATT), reducing the estimation to a convex risk minimization problem involving
a set of auxiliary models. Our framework allows for the flexible estimation of
the CATT, when conditioning on any subset of variables of interest using
generic machine learning. Leveraging Neyman orthogonality, our proposed
approach is robust to estimation errors in the auxiliary models. As a
generalization to our main result, we develop a meta-learning approach for the
estimation of general conditional functionals under covariate shift. We also
provide an extension to the instrumented DiD setting with non-compliance.
Empirical results demonstrate the superiority of our approach over existing
baselines.
|
2502.04700
|
EigenLoRAx: Recycling Adapters to Find Principal Subspaces for
Resource-Efficient Adaptation and Inference
|
cs.LG cs.AI
|
The rapid growth of large models has raised concerns about their
environmental impact and equity in accessibility due to significant
computational costs. Low-Rank Adapters (LoRA) offer a lightweight solution for
finetuning large models, resulting in an abundance of publicly available
adapters tailored to diverse domains. We ask: Can these pretrained adapters be
leveraged to further streamline adaptation to new tasks while addressing these
challenges? We introduce EigenLoRAx, a parameter-efficient finetuning method
that recycles existing adapters to create a principal subspace aligned with
their shared domain knowledge which can be further augmented with orthogonal
basis vectors in low-resource scenarios. This enables rapid adaptation to new
tasks by learning only lightweight coefficients on the principal components of
the subspace - eliminating the need to finetune entire adapters. EigenLoRAx
requires significantly fewer parameters and memory, improving efficiency for
both training and inference. Our method demonstrates strong performance across
diverse domains and tasks, offering a scalable solution for edge-based applications,
personalization, and equitable deployment of large models in
resource-constrained environments.
|
2502.04703
|
Symbolic Regression of Data-Driven Reduced Order Model Closures for
Under-Resolved, Convection-Dominated Flows
|
math.NA cs.LG cs.NA physics.flu-dyn
|
Data-driven closures correct the standard reduced order models (ROMs) to
increase their accuracy in under-resolved, convection-dominated flows. There
are two types of data-driven ROM closures in current use: (i) structural, with
simple ansatzes (e.g., linear or quadratic); and (ii) machine learning-based,
with neural network ansatzes. We propose a novel symbolic regression (SR)
data-driven ROM closure strategy, which combines the advantages of current
approaches and eliminates their drawbacks. As a result, the new data-driven SR
closures yield ROMs that are interpretable, parsimonious, accurate,
generalizable, and robust. To compare the data-driven SR-ROM closures with the
structural and machine learning-based ROM closures, we consider the data-driven
variational multiscale ROM framework and two under-resolved,
convection-dominated test problems: the flow past a cylinder and the lid-driven
cavity flow at Reynolds numbers Re = 10000, 15000, and 20000. This numerical
investigation shows that the new data-driven SR-ROM closures yield more
accurate and robust ROMs than the structural and machine learning ROM closures.
|
2502.04706
|
Enhancing Impression Change Prediction in Speed Dating Simulations Based
on Speakers' Personalities
|
cs.CL cs.HC
|
This paper focuses on simulating text dialogues in which impressions between
speakers improve during speed dating. This simulation involves selecting an
utterance from multiple candidates generated by a text generation model that
replicates a specific speaker's utterances, aiming to improve the impression of
the speaker. Accurately selecting an utterance that improves the impression is
crucial for the simulation. We believe that whether an utterance improves a
dialogue partner's impression of the speaker may depend on the personalities of
both parties. However, recent methods for utterance selection do not consider
the impression per utterance or the personalities. To address this, we propose
a method that predicts whether an utterance improves a partner's impression of
the speaker, considering the personalities. The evaluation results showed that
personalities are useful in predicting impression changes per utterance.
Furthermore, we conducted a human evaluation of simulated dialogues using our
method. The results showed that it could simulate dialogues more favorably
received than those selected without considering personalities.
|
2502.04713
|
Leveraging band diversity for feature selection in EO data
|
eess.IV cs.CV
|
Hyperspectral imaging (HSI) is a powerful earth observation technology that
captures and processes information across a wide spectrum of wavelengths.
Hyperspectral imaging provides comprehensive and detailed spectral data that is
invaluable for a wide range of reconstruction problems. However, due to the
complexity of its analysis, this data is often difficult to handle. To
address the challenge of handling the large number of bands when reconstructing
high-quality HSI, we propose to form groups of bands. In this position paper we
propose a method of selecting diverse bands using determinantal point processes
in correlated bands. To address the issue of overlapping bands that may arise
from grouping, we use spectral angle mapper analysis. This analysis can be fed
to any machine learning model to enable detailed analysis and monitoring with
high precision and accuracy.
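A rough sketch of the proposed selection step under our own assumptions: a greedy MAP approximation for a k-DPP over a band-correlation kernel picks a diverse band subset, and the spectral angle mapper (SAM) quantifies residual overlap between chosen bands. The synthetic data and kernel construction here are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic hyperspectral cube: 20 correlated spectral bands, 500 pixels each.
n_bands, n_pixels = 20, 500
base = rng.normal(size=n_pixels)
bands = np.stack([base * rng.uniform(0.5, 1.5) + 0.3 * rng.normal(size=n_pixels)
                  for _ in range(n_bands)])

# DPP kernel from band-to-band correlations: similar bands repel each other.
C = np.corrcoef(bands)
L = C @ C.T  # positive semi-definite similarity kernel

def greedy_dpp(L, k):
    """Greedy MAP for a k-DPP: repeatedly add the band that maximizes the
    log-determinant of the selected submatrix, favoring diverse subsets."""
    selected = []
    for _ in range(k):
        best, best_logdet = None, -np.inf
        for j in range(L.shape[0]):
            if j in selected:
                continue
            idx = selected + [j]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_logdet:
                best, best_logdet = j, logdet
        if best is None:
            break
        selected.append(best)
    return selected

def spectral_angle(a, b):
    """Spectral angle mapper (SAM): angle between two band vectors, in radians."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

chosen = greedy_dpp(L, k=5)
angles = [spectral_angle(bands[i], bands[j])
          for i in chosen for j in chosen if i < j]
print(chosen, min(angles))
```

In the pipeline described above, pairs with a near-zero SAM angle would flag overlapping bands for pruning before the selected groups are passed to a downstream machine learning model.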
|
2502.04718
|
Evaluating Text Style Transfer Evaluation: Are There Any Reliable
Metrics?
|
cs.CL
|
Text Style Transfer (TST) is the task of transforming a text to reflect a
particular style while preserving its original content. Evaluating TST outputs
is a multidimensional challenge, requiring the assessment of style transfer
accuracy, content preservation, and naturalness. Human evaluation is ideal but
costly, as in other natural language processing (NLP) tasks; however, automatic
metrics for TST have not received as much attention as metrics for, e.g.,
machine translation or summarization. In this paper, we examine both existing
and novel metrics from broader NLP tasks for TST evaluation, focusing on two
popular subtasks, sentiment transfer and detoxification, in a multilingual
context comprising English, Hindi, and
Bengali. By conducting meta-evaluation through correlation with human
judgments, we demonstrate the effectiveness of these metrics when used
individually and in ensembles. Additionally, we investigate the potential of
Large Language Models (LLMs) as tools for TST evaluation. Our findings
highlight that certain advanced NLP metrics and experimental hybrid techniques
provide better insights than existing TST metrics for delivering more accurate,
consistent, and reproducible TST evaluations.
|
2502.04719
|
Tolerance-Aware Deep Optics
|
cs.CV cs.GR
|
Deep optics has emerged as a promising approach by co-designing optical
elements with deep learning algorithms. However, current research typically
overlooks the analysis and optimization of manufacturing and assembly
tolerances. This oversight creates a significant performance gap between
designed and fabricated optical systems. To address this challenge, we present
the first end-to-end tolerance-aware optimization framework that incorporates
multiple tolerance types into the deep optics design pipeline. Our method
combines physics-informed modelling with data-driven training to enhance
optical design by accounting for and compensating for structural deviations in
manufacturing and assembly. We validate our approach through computational
imaging applications, demonstrating results in both simulations and real-world
experiments. We further examine how our proposed solution improves the
robustness of optical systems and vision algorithms against tolerances through
qualitative and quantitative analyses. Code and additional visual results are
available at openimaginglab.github.io/LensTolerance.
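In spirit, accounting for manufacturing deviations during optimization can be approximated by averaging the design loss over random tolerance perturbations. The sketch below is a generic Monte Carlo form with hypothetical `params` and `loss_fn`, not the paper's framework.

```python
import random

def tolerance_aware_loss(params, loss_fn, sigma=0.01, n_samples=8):
    """Average a design loss over Gaussian tolerance perturbations.

    `params` stands in for optical design variables and `sigma` for a
    manufacturing tolerance; both are illustrative placeholders.
    """
    total = 0.0
    for _ in range(n_samples):
        perturbed = [p + random.gauss(0.0, sigma) for p in params]
        total += loss_fn(perturbed)
    return total / n_samples

# With sigma=0 this reduces to the nominal (tolerance-free) loss.
nominal = tolerance_aware_loss([1.0, 2.0], lambda p: sum(x * x for x in p),
                               sigma=0.0)
print(nominal)  # 5.0
```

Optimizing this averaged loss, rather than the nominal one, favors designs whose performance degrades gracefully under fabrication error.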
|
2502.04722
|
Singing Voice Conversion with Accompaniment Using Self-Supervised
Representation-Based Melody Features
|
cs.SD cs.LG eess.AS
|
Melody preservation is crucial in singing voice conversion (SVC). However, in
many scenarios, audio is often accompanied with background music (BGM), which
can cause audio distortion and interfere with the extraction of melody and
other key features, significantly degrading SVC performance. Previous methods
have attempted to address this by using more robust neural network-based melody
extractors, but their performance drops sharply in the presence of complex
accompaniment. Other approaches involve performing source separation before
conversion, but this often introduces noticeable artifacts, leading to a
significant drop in conversion quality and increasing the user's operational
costs. To address these issues, we introduce a novel SVC method that uses
self-supervised representation-based melody features to improve melody modeling
accuracy in the presence of BGM. In our experiments, we compare the
effectiveness of different self-supervised learning (SSL) models for melody
extraction and explore for the first time how SSL benefits the task of melody
extraction. The experimental results demonstrate that our proposed SVC model
significantly outperforms existing baseline methods in terms of melody accuracy
and shows higher similarity and naturalness in both subjective and objective
evaluations across noisy and clean audio environments.
|
2502.04725
|
Can Diffusion Models Learn Hidden Inter-Feature Rules Behind Images?
|
cs.CV cs.AI
|
Despite the remarkable success of diffusion models (DMs) in data generation,
they exhibit specific failure cases with unsatisfactory outputs. We focus on
one such limitation: the ability of DMs to learn hidden rules between image
features. Specifically, for image data with dependent features ($\mathbf{x}$)
and ($\mathbf{y}$) (e.g., the height of the sun ($\mathbf{x}$) and the length
of the shadow ($\mathbf{y}$)), we investigate whether DMs can accurately
capture the inter-feature rule ($p(\mathbf{y}|\mathbf{x})$). Empirical
evaluations on mainstream DMs (e.g., Stable Diffusion 3.5) reveal consistent
failures, such as inconsistent lighting-shadow relationships and mismatched
object-mirror reflections. Inspired by these findings, we design four synthetic
tasks with strongly correlated features to assess DMs' rule-learning abilities.
Extensive experiments show that while DMs can identify coarse-grained rules,
they struggle with fine-grained ones. Our theoretical analysis demonstrates
that DMs trained via denoising score matching (DSM) exhibit constant errors in
learning hidden rules, as the DSM objective is not compatible with rule
conformity. To mitigate this, we introduce a common technique - incorporating
additional classifier guidance during sampling, which achieves (limited)
improvements. Our analysis reveals that the subtle signals of fine-grained
rules are challenging for the classifier to capture, providing insights for
future exploration.
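The classifier-guidance step mentioned above has a standard generic form: add the gradient of the classifier's log-probability to the diffusion score during sampling. The sketch below shows only this combination rule, with placeholder score and gradient functions.

```python
def guided_score(x, score_fn, classifier_grad_fn, guidance_weight=1.0):
    """Classifier guidance: score(x) + w * grad_x log p(y | x).

    `score_fn` and `classifier_grad_fn` are placeholders for a trained
    diffusion score network and a classifier gradient.
    """
    return score_fn(x) + guidance_weight * classifier_grad_fn(x)

# Toy 1-D example: a standard-normal score (-x) nudged toward a class
# whose log-probability increases with x (constant gradient 2.0).
s = guided_score(1.0, lambda x: -x, lambda x: 2.0, guidance_weight=0.5)
print(s)  # 0.0
```

The abstract's point is that when the rule signal p(y|x) is subtle, this gradient is weak or noisy, which limits the improvement guidance can deliver.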
|
2502.04728
|
Generating Symbolic World Models via Test-time Scaling of Large Language
Models
|
cs.AI
|
Solving complex planning problems requires Large Language Models (LLMs) to
explicitly model the state transition to avoid rule violations, comply with
constraints, and ensure optimality-a task hindered by the inherent ambiguity of
natural language. To overcome such ambiguity, Planning Domain Definition
Language (PDDL) is leveraged as a planning abstraction that enables precise and
formal state descriptions. With PDDL, we can generate a symbolic world model
where classic searching algorithms, such as A*, can be seamlessly applied to
find optimal plans. However, directly generating PDDL domains with current LLMs
remains an open challenge due to the lack of PDDL training data. To address
this challenge, we propose to scale up the test-time computation of LLMs to
enhance their PDDL reasoning capabilities, thereby enabling the generation of
high-quality PDDL domains. Specifically, we introduce a simple yet effective
algorithm, which first employs a Best-of-N sampling approach to improve the
quality of the initial solution and then refines the solution in a fine-grained
manner with verbalized machine learning. Our method outperforms o1-mini by a
considerable margin in PDDL domain generation, achieving over 50%
success rate on two tasks (i.e., generating PDDL domains from natural language
description or PDDL problems). This is done without requiring additional
training. By taking advantage of PDDL as state abstraction, our method is able
to outperform current state-of-the-art methods on almost all competition-level
planning tasks.
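The two-stage test-time procedure (Best-of-N sampling followed by fine-grained refinement) can be sketched generically. Here `generate`, `score`, `critique`, and `revise` are hypothetical callables standing in for the LLM sampler and the verbalized feedback loop, which the abstract does not specify in detail.

```python
def best_of_n(generate, score, n=8):
    """Draw n candidate solutions and keep the highest-scoring one."""
    candidates = [generate(i) for i in range(n)]
    return max(candidates, key=score)

def refine(solution, critique, revise, steps=3):
    """Iteratively critique and revise a draft until no feedback remains."""
    for _ in range(steps):
        feedback = critique(solution)
        if not feedback:
            break
        solution = revise(solution, feedback)
    return solution

# Toy run: candidates are the integers 0..7, scored by closeness to 5,
# then a draft is "refined" by incrementing until a dummy critique passes.
best = best_of_n(generate=lambda i: i, score=lambda c: -abs(c - 5))
final = refine(0, critique=lambda s: "too small" if s < 3 else "",
               revise=lambda s, fb: s + 1)
print(best, final)  # 5 3
```

In the paper's setting the candidates would be PDDL domains and the score would come from a verifier, with refinement driven by verbalized feedback rather than a numeric rule.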
|
2502.04729
|
The "negative end" of change in grammar: terminology, concepts and
causes
|
cs.CL cs.CY
|
The topic of the "negative end" of change is, in contrast to the fields of
innovation and emergence, largely under-researched. Yet, it has lately started
to gain increasing attention from language scholars worldwide. The main
focus of this article is threefold, namely to discuss the i) terminology; ii)
concepts and iii) causes associated with the "negative end" of change in
grammar. The article starts with an overview of research conducted on the
topic. It then moves to situating phenomena referred to as loss, decline or
obsolescence among processes of language change, before elaborating on the
terminology and concepts behind it. The last part looks at possible causes for
constructions to display a (gradual or rapid, but very consistent) decrease in
the frequency of use over time, which continues until the construction
disappears or there are only residual or fossilised forms left. Keywords: loss,
obsolescence, decline, competition, higher
|
2502.04730
|
PhyloVAE: Unsupervised Learning of Phylogenetic Trees via Variational
Autoencoders
|
stat.ML cs.LG q-bio.PE
|
Learning informative representations of phylogenetic tree structures is
essential for analyzing evolutionary relationships. Classical distance-based
methods have been widely used to project phylogenetic trees into Euclidean
space, but they are often sensitive to the choice of distance metric and may
lack sufficient resolution. In this paper, we introduce phylogenetic
variational autoencoders (PhyloVAEs), an unsupervised learning framework
designed for representation learning and generative modeling of tree
topologies. Leveraging an efficient encoding mechanism inspired by
autoregressive tree topology generation, we develop a deep latent-variable
generative model that facilitates fast, parallelized topology generation.
PhyloVAE combines this generative model with a collaborative inference model
based on learnable topological features, allowing for high-resolution
representations of phylogenetic tree samples. Extensive experiments demonstrate
PhyloVAE's robust representation learning capabilities and fast generation of
phylogenetic tree topologies.
|
2502.04734
|
SC-OmniGS: Self-Calibrating Omnidirectional Gaussian Splatting
|
cs.CV cs.GR
|
360-degree cameras streamline data collection for radiance field 3D
reconstruction by capturing comprehensive scene data. However, traditional
radiance field methods do not address the specific challenges inherent to
360-degree images. We present SC-OmniGS, a novel self-calibrating
omnidirectional Gaussian splatting system for fast and accurate omnidirectional
radiance field reconstruction using 360-degree images. Rather than converting
360-degree images to cube maps and performing perspective image calibration, we
treat 360-degree images as a whole sphere and derive a mathematical framework
that enables direct omnidirectional camera pose calibration accompanied by 3D
Gaussians optimization. Furthermore, we introduce a differentiable
omnidirectional camera model in order to rectify the distortion of real-world
data for performance enhancement. Overall, the omnidirectional camera intrinsic
model, extrinsic poses, and 3D Gaussians are jointly optimized by minimizing
weighted spherical photometric loss. Extensive experiments have demonstrated
that our proposed SC-OmniGS is able to recover a high-quality radiance field
from noisy camera poses or even no pose prior in challenging scenarios
characterized by wide baselines and non-object-centric configurations. The
noticeable performance gain in the real-world dataset captured by
consumer-grade omnidirectional cameras verifies the effectiveness of our
general omnidirectional camera model in reducing the distortion of 360-degree
images.
|
2502.04737
|
Learning Universal Multi-level Market Irrationality Factors to Improve
Stock Return Forecasting
|
cs.LG
|
Recent years have witnessed the encounter of deep learning and quantitative
trading, which has achieved great success in stock investment. Numerous
deep learning-based models have been developed for forecasting stock returns,
leveraging the powerful representation capabilities of neural networks to
identify patterns and factors influencing stock prices. These models can
effectively capture general patterns in the market, such as stock price trends,
volume-price relationships, and time variations. However, the impact of special
irrationality factors -- such as market sentiment, speculative behavior, market
manipulation, and psychological biases -- has not been fully considered in
existing deep stock forecasting models due to their relative abstraction as
well as the lack of explicit labels and data descriptions. To fill this gap, we
propose UMI, a Universal multi-level Market Irrationality factor model to
enhance stock return forecasting. The UMI model learns factors that can reflect
irrational behaviors in the market at both the individual-stock and
overall-market levels. At the stock level, UMI constructs an estimated rational
price for each stock, which is cointegrated with the stock's actual price. The discrepancy
between the actual and the rational prices serves as a factor to indicate
stock-level irrational events. Additionally, we define market-level irrational
behaviors as anomalous synchronous fluctuations of stocks within a market.
Using two self-supervised representation learning tasks, i.e., sub-market
comparative learning and market synchronism prediction, the UMI model
incorporates market-level irrationalities into a market representation vector,
which is then used as the market-level irrationality factor.
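The stock-level idea (a rational price cointegrated with the actual price, with the spread as an irrationality signal) can be illustrated with a simple OLS spread. The single-regressor regression below is an illustrative simplification; UMI learns the rational price rather than fitting it this way.

```python
def irrationality_spread(actual, reference):
    """Residual of an OLS fit of actual prices on a reference series.

    Stands in for the gap between a stock's actual price and an
    estimated rational price; a large residual flags a stock-level
    irrational event in this toy setting.
    """
    n = len(actual)
    mx = sum(reference) / n
    my = sum(actual) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(reference, actual))
            / sum((x - mx) ** 2 for x in reference))
    alpha = my - beta * mx
    return [y - (alpha + beta * x) for y, x in zip(actual, reference)]

# A perfectly "rational" toy stock tracks the reference exactly,
# so the spread (the irrationality signal) is zero everywhere.
print(irrationality_spread([2.0, 4.0, 6.0, 8.0], [1.0, 2.0, 3.0, 4.0]))
```

When the two series are cointegrated, this spread is stationary, so persistent deviations from zero are interpretable as mispricing rather than trend.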
|
2502.04740
|
SelaFD: Seamless Adaptation of Vision Transformer Fine-tuning for
Radar-based Human Activity
|
cs.CV cs.LG
|
Human Activity Recognition (HAR) such as fall detection has become
increasingly critical due to the aging population, necessitating effective
monitoring systems to prevent serious injuries and fatalities associated with
falls. This study focuses on fine-tuning the Vision Transformer (ViT) model
specifically for HAR using radar-based Time-Doppler signatures. Unlike
traditional image datasets, these signals present unique challenges due to
their non-visual nature and the high degree of similarity among various
activities. Directly fine-tuning the ViT with all parameters proves suboptimal
for this application. To address this challenge, we propose a novel approach
that employs Low-Rank Adaptation (LoRA) fine-tuning in the weight space to
facilitate knowledge transfer from pre-trained ViT models. Additionally, to
extract fine-grained features, we enhance feature representation through the
integration of a serial-parallel adapter in the feature space. Our innovative
joint fine-tuning method, tailored for radar-based Time-Doppler signatures,
significantly improves HAR accuracy, surpassing existing state-of-the-art
methodologies in this domain. Our code is released at
https://github.com/wangyijunlyy/SelaFD.
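The LoRA fine-tuning used above for weight-space adaptation has a well-known generic form: freeze the pre-trained weight W and learn a low-rank update BA, so the layer computes (W + BA)x. The pure-Python sketch below shows this decomposition only; SelaFD's actual adapters operate inside ViT layers and add a serial-parallel adapter in the feature space.

```python
import random

class LoRALinear:
    """Frozen weight W plus a trainable low-rank update B @ A."""

    def __init__(self, w, rank, seed=0):
        rng = random.Random(seed)
        d_out, d_in = len(w), len(w[0])
        self.w = w  # frozen pre-trained weight
        # A is small-random, B is zero-initialized, so the layer starts
        # from the pre-trained behavior exactly (standard LoRA init).
        self.a = [[0.01 * rng.random() for _ in range(d_in)]
                  for _ in range(rank)]
        self.b = [[0.0] * rank for _ in range(d_out)]

    def forward(self, x):
        base = [sum(wij * xj for wij, xj in zip(row, x)) for row in self.w]
        ax = [sum(aij * xj for aij, xj in zip(row, x)) for row in self.a]
        delta = [sum(bij * aj for bij, aj in zip(row, ax)) for row in self.b]
        return [b + d for b, d in zip(base, delta)]

# With B at zero, the layer reproduces the frozen weight's output.
layer = LoRALinear([[1.0, 0.0], [0.0, 1.0]], rank=1)
print(layer.forward([2.0, 3.0]))  # [2.0, 3.0]
```

Only A and B are trained, which is why LoRA transfers pre-trained knowledge with a small fraction of the full parameter count.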
|