| id | title | categories | abstract |
|---|---|---|---|
2502.07645
|
Beyond Behavior Cloning: Robustness through Interactive Imitation and
Contrastive Learning
|
cs.RO
|
Behavior cloning (BC) traditionally relies on demonstration data, assuming
the demonstrated actions are optimal. This can lead to overfitting under noisy
data, particularly when expressive models are used (e.g., the energy-based
model in Implicit BC). To address this, we extend behavior cloning into an
iterative process of optimal action estimation within the Interactive Imitation
Learning framework. Specifically, we introduce Contrastive policy Learning from
Interactive Corrections (CLIC). CLIC leverages human corrections to estimate a
set of desired actions and optimizes the policy to select actions from this
set. We provide theoretical guarantees for the convergence of the desired
action set to optimal actions in both single and multiple optimal action cases.
Extensive simulation and real-robot experiments validate CLIC's advantages over
existing state-of-the-art methods, including stable training of energy-based
models, robustness to feedback noise, and adaptability to diverse feedback
types beyond demonstrations. Our code will be publicly available soon.
|
2502.07646
|
Causal Additive Models with Unobserved Causal Paths and Backdoor Paths
|
cs.LG stat.ME stat.ML
|
Causal additive models have been employed as tractable yet expressive
frameworks for causal discovery involving hidden variables. State-of-the-art
methodologies suggest that determining the causal relationship between a pair
of variables is infeasible in the presence of an unobserved backdoor or an
unobserved causal path. Contrary to this assumption, we theoretically show that
resolving the causal direction is feasible in certain scenarios by
incorporating two novel components into the theory. The first component
introduces a novel characterization of the regression sets used in tests of
independence between regression residuals. The second component leverages conditional
independence among the observed variables. We also provide a search algorithm
that integrates these innovations and demonstrate its competitive performance
against existing methods.
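As a rough illustration of the residual-independence principle underlying causal additive models (not the paper's algorithm, which uses additive-model regression and formal independence tests), one can compare the two causal directions by how dependent each direction's regression residuals remain on the regressor; the squared-value correlation probe below is a crude stand-in for a proper independence test:

```python
import random
import statistics

def linreg_residuals(x, y):
    """Residuals of ordinary least-squares regression of y on x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

def abs_corr(u, v):
    """|Pearson correlation| as a crude dependence score."""
    mu, mv = statistics.fmean(u), statistics.fmean(v)
    num = sum((ui - mu) * (vi - mv) for ui, vi in zip(u, v))
    den = (sum((ui - mu) ** 2 for ui in u)
           * sum((vi - mv) ** 2 for vi in v)) ** 0.5
    return abs(num / den)

def infer_direction(x, y):
    """Prefer the direction whose regression residuals look more
    independent of the regressor (probed via squared values)."""
    rx = linreg_residuals(x, y)   # candidate X -> Y
    ry = linreg_residuals(y, x)   # candidate Y -> X
    score_xy = abs_corr([xi ** 2 for xi in x], rx)
    score_yx = abs_corr([yi ** 2 for yi in y], ry)
    return "X->Y" if score_xy < score_yx else "Y->X"

random.seed(0)
# Linear mechanism with an asymmetric (exponential) cause: X causes Y.
x = [random.expovariate(1.0) for _ in range(3000)]
y = [xi + random.uniform(-0.5, 0.5) for xi in x]
print(infer_direction(x, y))
```

The asymmetry exploited here vanishes for jointly Gaussian data, which is why such methods rely on nonlinearity or non-Gaussianity.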
|
2502.07650
|
Guiding Time-Varying Generative Models with Natural Gradients on
Exponential Family Manifold
|
stat.ML cs.LG
|
Optimising probabilistic models is a well-studied field in statistics.
However, its connection with the training of generative models remains largely
under-explored. In this paper, we show that the evolution of time-varying
generative models can be projected onto an exponential family manifold,
naturally creating a link between the parameters of a generative model and
those of a probabilistic model. We then train the generative model by moving
its projection on the manifold according to the natural gradient descent
scheme. This approach also allows us to approximate the natural gradient of the
KL divergence efficiently without relying on MCMC for intractable models.
Furthermore, we propose particle versions of the algorithm, which feature
closed-form update rules for any parametric model within the exponential
family. Through toy and real-world experiments, we validate the effectiveness
of the proposed algorithms.
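The projection idea can be illustrated on a toy case. The sketch below (a 1-D Gaussian example with known target moments, not the paper's particle algorithms) uses the standard exponential-family duality: natural-gradient descent on the natural parameters of the forward KL is, to first order, a plain linear flow in the mean parameters:

```python
# Toy sketch: project onto the Gaussian exponential family with sufficient
# statistics (x, x^2), assuming the target's moments are known exactly.
m_star = (2.0, 2.0 ** 2 + 0.5)   # target N(2, 0.5): (E[x], E[x^2])
m = [0.0, 1.0]                   # initialize at N(0, 1)
lr = 0.1
for _ in range(200):
    # Natural-gradient descent on the natural parameters of KL(target || q)
    # corresponds, to first order, to this linear flow in mean coordinates.
    m[0] -= lr * (m[0] - m_star[0])
    m[1] -= lr * (m[1] - m_star[1])

mu = m[0]
sigma2 = m[1] - mu * mu
print(round(mu, 4), round(sigma2, 4))   # converges toward 2.0 and 0.5
```

The geometric convergence of this flow is what makes the mean-parameter view attractive; intractable models require estimating the moments, e.g. with the particle versions the abstract mentions.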
|
2502.07656
|
A Unifying Framework for Causal Imitation Learning with Hidden
Confounders
|
cs.LG cs.AI
|
We propose a general and unifying framework for causal Imitation Learning
(IL) with hidden confounders that subsumes several existing confounded IL
settings from the literature. Our framework accounts for two types of hidden
confounders: (a) those observed by the expert, which thus influence the
expert's policy, and (b) confounding noise hidden to both the expert and the IL
algorithm. For additional flexibility, we also introduce a confounding noise
horizon and time-varying expert-observable hidden variables. We show that
causal IL in our framework can be reduced to a set of Conditional Moment
Restrictions (CMRs) by leveraging trajectory histories as instruments to learn
a history-dependent policy. We propose DML-IL, a novel algorithm that uses
instrumental variable regression to solve these CMRs and learn a policy. We
provide a bound on the imitation gap for DML-IL, which recovers prior results
as special cases. Empirical evaluation on a toy environment with continuous
state-action spaces and multiple MuJoCo tasks demonstrates that DML-IL
outperforms state-of-the-art causal IL algorithms.
|
2502.07657
|
Private Low-Rank Approximation for Covariance Matrices, Dyson Brownian
Motion, and Eigenvalue-Gap Bounds for Gaussian Perturbations
|
cs.DS cs.CR cs.LG cs.NA math.NA math.PR
|
We consider the problem of approximating a $d \times d$ covariance matrix $M$
with a rank-$k$ matrix under $(\varepsilon,\delta)$-differential privacy. We
present and analyze a complex variant of the Gaussian mechanism and obtain
upper bounds on the Frobenius norm of the difference between the matrix output
by this mechanism and the best rank-$k$ approximation to $M$. Our analysis
provides improvements over previous bounds, particularly when the spectrum of
$M$ satisfies natural structural assumptions. The novel insight is to view the
addition of Gaussian noise to a matrix as a continuous-time matrix Brownian
motion. This viewpoint allows us to track the evolution of eigenvalues and
eigenvectors of the matrix, which are governed by stochastic differential
equations discovered by Dyson. These equations enable us to upper bound the
Frobenius distance between the best rank-$k$ approximation of $M$ and that of a
Gaussian perturbation of $M$ as an integral that involves inverse eigenvalue
gaps of the stochastically evolving matrix, as opposed to a sum of perturbation
bounds obtained via Davis-Kahan-type theorems. Subsequently, again using the
Dyson Brownian motion viewpoint, we show that the eigenvalues of the matrix $M$
perturbed by Gaussian noise have large gaps with high probability. These
results also contribute to the analysis of low-rank approximations under
average-case perturbations, and to an understanding of eigenvalue gaps for
random matrices, both of which may be of independent interest.
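A minimal sketch of the overall mechanism, assuming the classical real-valued Gaussian mechanism with its standard noise calibration rather than the paper's complex-valued variant: perturb the covariance matrix with symmetric Gaussian noise, then return the best rank-$k$ approximation of the noisy matrix.

```python
import numpy as np

def private_rank_k(M, k, epsilon, delta, sensitivity=1.0):
    """Real-valued Gaussian-mechanism sketch for (eps, delta)-DP rank-k
    approximation of a symmetric d x d matrix M, assuming Frobenius
    sensitivity `sensitivity` between neighboring datasets."""
    d = M.shape[0]
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    rng = np.random.default_rng(0)
    noise = rng.normal(0.0, sigma, size=(d, d))
    noise = (noise + noise.T) / np.sqrt(2.0)   # keep the output symmetric
    N = M + noise
    # Best rank-k approximation of the noisy matrix via eigendecomposition.
    vals, vecs = np.linalg.eigh(N)
    top = np.argsort(np.abs(vals))[-k:]
    return (vecs[:, top] * vals[top]) @ vecs[:, top].T

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 200))
M = (A @ A.T) / 200.0          # empirical covariance matrix
Mk = private_rank_k(M, k=2, epsilon=5.0, delta=1e-5)
print(Mk.shape, np.linalg.matrix_rank(Mk))
```

The paper's contribution is the analysis of exactly this kind of output: how far the noisy rank-$k$ projection drifts from the non-private one, tracked via Dyson Brownian motion rather than Davis-Kahan perturbation bounds.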
|
2502.07658
|
IU4Rec: Interest Unit-Based Product Organization and Recommendation for
E-Commerce Platform
|
cs.IR
|
Most recommendation systems follow a product-based paradigm, utilizing
user-product interactions to identify the most engaging items for users.
However, this paradigm has notable drawbacks for
Xianyu~\footnote{Xianyu is China's largest online C2C e-commerce platform,
where a large portion of the products are posted by individual sellers}. Most
products on Xianyu are posted by individual sellers with limited stock, and
once a product is sold, it is no longer available for distribution. As a
result, most distributed products on Xianyu accumulate relatively few
interactions, limiting the effectiveness of traditional recommendation methods
that depend on accumulated user-item interactions. To address these issues, we
introduce \textbf{IU4Rec}, an \textbf{I}nterest \textbf{U}nit-based two-stage
\textbf{Rec}ommendation framework. IU4Rec first groups products into clusters,
called Interest Units (IUs), based on attributes such as category, image, and
semantics. These IUs are then integrated into the recommendation system,
delivering both product and technological innovations. We redesign the
recommendation process into two stages: the first stage recommends Interest
Units, capturing broad-level interests; the second stage guides users to the
best option among similar products within the selected Interest Unit. User-IU
interactions are incorporated into our ranking models, offering the advantage
of more persistent IU behaviors compared to item-specific interactions.
Experimental results on the production dataset and online A/B testing
demonstrate the effectiveness and superiority of the proposed IU-centric
recommendation approach.
|
2502.07661
|
Partial-Label Learning with Conformal Candidate Cleaning
|
cs.LG stat.ML
|
Real-world data is often ambiguous; for example, human annotation produces
instances with multiple conflicting class labels. Partial-label learning (PLL)
aims at training a classifier in this challenging setting, where each instance
is associated with a set of candidate labels and one correct, but unknown,
class label. A multitude of algorithms targeting this setting exists and, to
enhance their prediction quality, several extensions that are applicable across
a wide range of PLL methods have been introduced. While many of these
extensions rely on heuristics, this article proposes a novel enhancing method
that incrementally prunes candidate sets using conformal prediction. To work
around the missing labeled validation set, which is typically required for
conformal prediction, we propose a strategy that alternates between training a
PLL classifier to label the validation set, leveraging these predicted class
labels for calibration, and pruning candidate labels that are not part of the
resulting conformal sets. In this sense, our method alternates between
empirical risk minimization and candidate set pruning. We establish that our
pruning method preserves the conformal validity with respect to the unknown
ground truth. Our extensive experiments on artificial and real-world data show
that the proposed approach significantly improves the test set accuracies of
several state-of-the-art PLL classifiers.
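The alternation between training and conformal pruning can be sketched on a toy problem. Everything below (the nearest-centroid stand-in for a PLL classifier, the distance-based nonconformity score, the two-class 1-D data) is an illustrative assumption, not the paper's method:

```python
import random
import statistics

random.seed(0)
alpha = 0.2

# Toy PLL data: 1-D features, true class 0 if x < 0 else 1,
# with a distractor candidate label on some instances.
X, cands, y_true = [], [], []
for _ in range(300):
    x = random.gauss(0, 1)
    y = 0 if x < 0 else 1
    c = {y}
    if random.random() < 0.4:
        c.add(1 - y)          # ambiguous instance
    X.append(x); cands.append(c); y_true.append(y)

def fit_centroids(X, cands):
    """'Train' by averaging features over instances whose candidate set
    currently allows each class (a stand-in for a PLL classifier)."""
    return {cls: statistics.fmean([x for x, c in zip(X, cands) if cls in c])
            for cls in (0, 1)}

def score(x, cls, cent):
    """Nonconformity score: distance to the class centroid."""
    return abs(x - cent[cls])

for _ in range(3):            # alternate training and conformal pruning
    cent = fit_centroids(X, cands)
    # Calibrate on pseudo-labels predicted by the current model.
    preds = [min(c, key=lambda cls: score(x, cls, cent))
             for x, c in zip(X, cands)]
    cal = sorted(score(x, p, cent) for x, p in zip(X, preds))
    q = cal[int((1 - alpha) * len(cal))]       # conformal threshold
    # Prune candidates outside the conformal set, never emptying a set.
    for c, x in zip(cands, X):
        keep = {cls for cls in c if score(x, cls, cent) <= q}
        if keep:
            c.intersection_update(keep)

acc = statistics.fmean(1.0 if y in c else 0.0
                       for y, c in zip(y_true, cands))
print(round(acc, 2))
```

The point of the alternation is visible even here: calibration uses the current model's pseudo-labels in place of the labeled validation set that conformal prediction would normally require.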
|
2502.07663
|
Human Decision-making is Susceptible to AI-driven Manipulation
|
cs.AI cs.CL cs.CY cs.HC
|
Artificial Intelligence (AI) systems are increasingly intertwined with daily
life, assisting users in executing various tasks and providing guidance on
decision-making. This integration introduces risks of AI-driven manipulation,
where such systems may exploit users' cognitive biases and emotional
vulnerabilities to steer them toward harmful outcomes. Through a randomized
controlled trial with 233 participants, we examined human susceptibility to
such manipulation in financial (e.g., purchases) and emotional (e.g., conflict
resolution) decision-making contexts. Participants interacted with one of three
AI agents: a neutral agent (NA) optimizing for user benefit without explicit
influence, a manipulative agent (MA) designed to covertly influence beliefs and
behaviors, or a strategy-enhanced manipulative agent (SEMA) employing explicit
psychological tactics to reach its hidden objectives. By analyzing
participants' decision patterns and shifts in their preference ratings
post-interaction, we found significant susceptibility to AI-driven
manipulation. Particularly, across both decision-making domains, participants
interacting with the manipulative agents shifted toward harmful options at
substantially higher rates (financial, MA: 62.3%, SEMA: 59.6%; emotional, MA:
42.3%, SEMA: 41.5%) compared to the NA group (financial, 35.8%; emotional,
12.8%). Notably, our findings reveal that even subtle manipulative objectives
(MA) can be as effective as employing explicit psychological strategies (SEMA)
in swaying human decision-making. By revealing the potential for covert AI
influence, this study highlights a critical vulnerability in human-AI
interactions, emphasizing the need for ethical safeguards and regulatory
frameworks to ensure responsible deployment of AI technologies and protect
human autonomy.
|
2502.07677
|
Auto-Drafting Police Reports from Noisy ASR Outputs: A Trust-Centered
LLM Approach
|
cs.CL
|
Achieving a delicate balance between fostering trust in law enforcement and
protecting the rights of both officers and civilians remains a pressing
research and product challenge. In the pursuit of
fairness and transparency, this study presents an innovative AI-driven system
designed to generate police report drafts from complex, noisy, and multi-role
dialogue data. Our approach intelligently extracts key elements of law
enforcement interactions and includes them in the draft, producing structured
narratives that are not only high in quality but also reinforce accountability
and procedural clarity. This framework holds the potential to transform the
reporting process, ensuring greater oversight, consistency, and fairness in
future policing practices. A demonstration video of our system can be accessed
at
https://drive.google.com/file/d/1kBrsGGR8e3B5xPSblrchRGj-Y-kpCHNO/view?usp=sharing
|
2502.07680
|
Multiview Point Cloud Registration Based on Minimum Potential Energy for
Free-Form Blade Measurement
|
cs.CV cs.CG
|
Point cloud registration is an essential step for free-form blade
reconstruction in industrial measurement. Nonetheless, measurement defects of
the 3D acquisition system unavoidably result in noisy and incomplete point cloud
data, which renders efficient and accurate registration challenging. In this
paper, we propose a novel global registration method that is based on the
minimum potential energy (MPE) method to address these problems. The basic
strategy is that the objective function is defined as the minimum potential
energy optimization function of the physical registration system. The function
distributes more weight to the majority of inlier points and less weight to the
noise and outliers, which essentially reduces the influence of perturbations in
the mathematical formulation. We decompose the solution into a globally optimal
approximation procedure and a fine registration process with the trimmed
iterative closest point algorithm to boost convergence. The approximation
procedure consists of two main steps. First, according to the construction of
the force traction operator, we can simply compute the position of the
potential energy minimum. Second, to find the MPE point, we propose a new
theory that employs two flags to observe the status of the registration
procedure. We demonstrate the performance of the proposed algorithm on four
types of blades. The proposed method outperforms the other global methods in
terms of both accuracy and noise resistance.
|
2502.07683
|
exHarmony: Authorship and Citations for Benchmarking the Reviewer
Assignment Problem
|
cs.IR cs.CL
|
The peer review process is crucial for ensuring the quality and reliability
of scholarly work, yet assigning suitable reviewers remains a significant
challenge. Traditional manual methods are labor-intensive and often
ineffective, leading to nonconstructive or biased reviews. This paper
introduces the exHarmony (eHarmony but for connecting experts to manuscripts)
benchmark, designed to address these challenges by re-imagining the Reviewer
Assignment Problem (RAP) as a retrieval task. Utilizing the extensive data from
OpenAlex, we propose a novel approach that considers a host of signals from the
authors, most similar experts, and the citation relations as potential
indicators for a suitable reviewer for a manuscript. This approach allows us to
develop a standard benchmark dataset for evaluating the reviewer assignment
problem without needing explicit labels. We benchmark various methods,
including traditional lexical matching, static neural embeddings, and
contextualized neural embeddings, and introduce evaluation metrics that assess
both relevance and diversity in the context of RAP. Our results indicate that
while traditional methods perform reasonably well, contextualized embeddings
trained on scholarly literature show the best performance. The findings
underscore the importance of further research to enhance the diversity and
effectiveness of reviewer assignments.
|
2502.07685
|
Matrix3D: Large Photogrammetry Model All-in-One
|
cs.CV
|
We present Matrix3D, a unified model that performs several photogrammetry
subtasks, including pose estimation, depth prediction, and novel view synthesis
using just the same model. Matrix3D utilizes a multi-modal diffusion
transformer (DiT) to integrate transformations across several modalities, such
as images, camera parameters, and depth maps. The key to Matrix3D's large-scale
multi-modal training lies in the incorporation of a mask learning strategy.
This enables full-modality model training even with partially complete data,
such as bi-modality data of image-pose and image-depth pairs, thus
significantly increasing the pool of available training data. Matrix3D
demonstrates state-of-the-art performance in pose estimation and novel view
synthesis tasks. Additionally, it offers fine-grained control through
multi-round interactions, making it an innovative tool for 3D content creation.
Project page: https://nju-3dv.github.io/projects/matrix3d.
|
2502.07687
|
Large Language Models as Proxies for Theories of Human Linguistic
Cognition
|
cs.CL
|
We consider the possible role of current large language models (LLMs) in the
study of human linguistic cognition. We focus on the use of such models as
proxies for theories of cognition that are relatively linguistically-neutral in
their representations and learning but differ from current LLMs in key ways. We
illustrate this potential use of LLMs as proxies for theories of cognition in
the context of two kinds of questions: (a) whether the target theory accounts
for the acquisition of a given pattern from a given corpus; and (b) whether the
target theory makes a given typologically-attested pattern easier to acquire
than another, typologically-unattested pattern. For each of the two questions
we show, building on recent literature, how current LLMs can potentially be of
help, but we note that at present this help is quite limited.
|
2502.07693
|
SoK: A Classification for AI-driven Personalized Privacy Assistants
|
cs.CY cs.AI
|
To help users make privacy-related decisions, personalized privacy assistants
based on AI technology have been developed in recent years. These AI-driven
Personalized Privacy Assistants (AI-driven PPAs) can reap significant benefits
for users, who may otherwise struggle to make decisions regarding their
personal data in environments saturated with privacy-related decision requests.
However, no study has systematically examined the features of these AI-driven
PPAs, their underlying technologies, or the accuracy of their decisions. To
fill this gap, we present a Systematization of Knowledge (SoK) to map the
existing solutions found in the scientific literature. We screened 1697 unique
research papers over the last decade (2013-2023), constructing a classification
from 39 included papers. As a result, this SoK reviews several aspects of
existing research on AI-driven PPAs in terms of types of publications,
contributions, methodological quality, and other quantitative insights.
Furthermore, we provide a comprehensive classification for AI-driven PPAs,
delving into their architectural choices, system contexts, types of AI used,
data sources, types of decisions, and control over decisions, among other
facets. Based on our SoK, we further underline the research gaps and challenges
and formulate recommendations for the design and development of AI-driven PPAs
as well as avenues for future research.
|
2502.07694
|
Methodology for Identifying Social Groups within a Transactional Graph
|
cs.SI
|
Social network analysis is pivotal for organizations aiming to leverage the
vast amounts of data generated from user interactions on social media and other
digital platforms. These interactions often reveal complex social structures,
such as tightly-knit groups based on common interests, which are crucial for
enhancing service personalization or fraud detection. Traditional methods like
community detection and graph matching, while useful, often fall short of
accurately identifying specific groups of users. This paper introduces a novel
framework specifically designed to identify groups of users within
transactional graphs by focusing on the contextual and structural nuances that
define these groups.
|
2502.07701
|
Magic 1-For-1: Generating One Minute Video Clips within One Minute
|
cs.CV
|
In this technical report, we present Magic 1-For-1 (Magic141), an efficient
video generation model with optimized memory consumption and inference latency.
The key idea is simple: factorize the text-to-video generation task into two
separate easier tasks for diffusion step distillation, namely text-to-image
generation and image-to-video generation. We verify that, with the same
optimization algorithm, the image-to-video task indeed converges more easily
than the text-to-video task. We also explore a bag of optimization tricks to
reduce the computational cost of training image-to-video (I2V) models from
three aspects: 1) model convergence speedup using multi-modal prior
condition injection; 2) inference latency speedup by applying adversarial
step distillation; and 3) inference memory cost optimization with parameter
sparsification. With these techniques, we are able to generate 5-second video
clips within 3 seconds. By applying a test-time sliding window, we can
generate a minute-long video within one minute with significantly improved
visual quality and motion dynamics, spending less than 1 second on average to
generate each 1-second video clip. We conduct a series of preliminary
explorations to find out the optimal tradeoff between computational cost and
video quality during diffusion step distillation and hope this could be a good
foundation model for open-source explorations. The code and the model weights
are available at https://github.com/DA-Group-PKU/Magic-1-For-1.
|
2502.07703
|
GaRLIO: Gravity enhanced Radar-LiDAR-Inertial Odometry
|
cs.RO
|
Recently, gravity has been highlighted as a crucial constraint for state
estimation to alleviate potential vertical drift. Existing online gravity
estimation methods rely on pose estimation combined with IMU measurements,
which is considered best practice when direct velocity measurements are
unavailable. However, with radar sensors providing direct velocity data, a
measurement not yet utilized for gravity estimation, we found a significant
opportunity to substantially improve gravity estimation accuracy. GaRLIO, the
proposed gravity-enhanced Radar-LiDAR-Inertial Odometry, can robustly predict
gravity to reduce vertical drift while simultaneously enhancing state
estimation performance using pointwise velocity measurements. Furthermore,
GaRLIO ensures robustness in dynamic environments by utilizing radar to remove
dynamic objects from LiDAR point clouds. Our method is validated through
experiments in various environments prone to vertical drift, demonstrating
superior performance compared to traditional LiDAR-Inertial Odometry methods.
We make our source code publicly available to encourage further research and
development. https://github.com/ChiyunNoh/GaRLIO
|
2502.07707
|
PRVQL: Progressive Knowledge-guided Refinement for Robust Egocentric
Visual Query Localization
|
cs.CV
|
Egocentric visual query localization (EgoVQL) focuses on localizing the
target of interest in space and time from first-person videos, given a visual
query. Despite recent progress, existing methods often struggle to handle
severe object appearance changes and cluttered backgrounds in the video due to
a lack of sufficient target cues, leading to degraded performance. To address
this, we introduce PRVQL, a novel Progressive knowledge-guided Refinement
framework for EgoVQL. The core idea is to continuously exploit target-relevant
knowledge directly from videos and use it as guidance to refine both query and
video features for improved target localization. PRVQL contains multiple
processing stages. The target knowledge from one stage, comprising appearance
and spatial knowledge extracted via two specially designed knowledge learning
modules, is used as guidance to refine the query and video features for the
next stage, which in turn generates more accurate knowledge for further feature
refinement. With such a progressive process, target knowledge in PRVQL can be
gradually improved, which, in turn, leads to better refined query and video
features for localization in the final stage. Compared to previous methods,
PRVQL exploits crucial target information from the video itself, beyond the
given object cues, as guidance to refine features, and hence improves EgoVQL
in complicated scenes. In our experiments on the challenging Ego4D benchmark,
PRVQL achieves state-of-the-art results and largely surpasses other methods, showing
its efficacy. Our code, model and results will be released at
https://github.com/fb-reps/PRVQL.
|
2502.07708
|
Global linearization without hyperbolicity
|
math.DS cs.SY eess.SY
|
We give a proof of an extension of the Hartman-Grobman theorem to
nonhyperbolic but asymptotically stable equilibria of vector fields. Moreover,
the linearizing topological conjugacy is (i) defined on the entire basin of
attraction if the vector field is complete, and (ii) a $C^{k\geq 1}$
diffeomorphism on the complement of the equilibrium if the vector field is
$C^k$ and the underlying space is not $5$-dimensional. We also show that the
$C^k$ statement in the $5$-dimensional case is equivalent to the
$4$-dimensional smooth Poincar\'{e} conjecture.
|
2502.07709
|
MAGELLAN: Metacognitive predictions of learning progress guide autotelic
LLM agents in large goal spaces
|
cs.AI
|
Open-ended learning agents must efficiently prioritize goals in vast
possibility spaces, focusing on those that maximize learning progress (LP).
When such autotelic exploration is achieved by LLM agents trained with online
RL in high-dimensional and evolving goal spaces, a key challenge for LP
prediction is modeling one's own competence, a form of metacognitive
monitoring. Traditional approaches either require extensive sampling or rely on
brittle expert-defined goal groupings. We introduce MAGELLAN, a metacognitive
framework that lets LLM agents learn to predict their competence and LP online.
By capturing semantic relationships between goals, MAGELLAN enables
sample-efficient LP estimation and dynamic adaptation to evolving goal spaces
through generalization. In an interactive learning environment, we show that
MAGELLAN improves LP prediction efficiency and goal prioritization, being the
only method allowing the agent to fully master a large and evolving goal space.
These results demonstrate how augmenting LLM agents with a metacognitive
ability for LP predictions can effectively scale curriculum learning to
open-ended goal spaces.
|
2502.07715
|
Near-Optimal Sample Complexity in Reward-Free Kernel-Based Reinforcement
Learning
|
cs.LG
|
Reinforcement Learning (RL) problems are being considered under increasingly
more complex structures. While tabular and linear models have been thoroughly
explored, the analytical study of RL under nonlinear function approximation,
especially kernel-based models, has recently gained traction for their strong
representational capacity and theoretical tractability. In this context, we
examine the question of statistical efficiency in kernel-based RL within the
reward-free RL framework, specifically asking: how many samples are required to
design a near-optimal policy? Existing work addresses this question under
restrictive assumptions about the class of kernel functions. We first explore
this question by assuming a generative model, then relax this assumption at the
cost of increasing the sample complexity by a factor of H, the length of the
episode. We tackle this fundamental problem using a broad class of kernels and
a simpler algorithm compared to prior work. Our approach derives new confidence
intervals for kernel ridge regression, specific to our RL setting, which may be
of broader applicability. We further validate our theoretical findings through
simulations.
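A generic kernel-ridge-regression confidence interval, of the kind such analyses build on, can be sketched as follows; the RBF kernel, the regularization `lam`, and the width multiplier `beta` are illustrative assumptions, while the paper derives its own RL-specific intervals:

```python
import numpy as np

def krr_with_confidence(X, y, Xq, lam=0.1, beta=2.0):
    """Kernel ridge regression mean plus a GP-style confidence band."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2)
    K = rbf(X, X)
    Kq = rbf(Xq, X)
    A = np.linalg.solve(K + lam * np.eye(len(X)), np.eye(len(X)))
    mean = Kq @ A @ y
    # Posterior-style variance: k(x, x) - k_q^T (K + lam I)^{-1} k_q.
    var = 1.0 - np.einsum('ij,jk,ik->i', Kq, A, Kq)
    sd = np.sqrt(np.maximum(var, 0.0))
    return mean, mean - beta * sd, mean + beta * sd

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 40)
Xq = np.linspace(-3, 3, 7)[:, None]
mean, lo, hi = krr_with_confidence(X, y, Xq)
print(bool(np.all((lo <= mean) & (mean <= hi))))
```

The width multiplier `beta` plays the role of the confidence parameter whose scaling with the kernel class is exactly what such sample-complexity analyses pin down.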
|
2502.07717
|
Making Language Models Robust Against Negation
|
cs.CL
|
Negation has been a long-standing challenge for language models. Previous
studies have shown that they struggle with negation in many natural language
understanding tasks. In this work, we propose a self-supervised method to make
language models more robust against negation. We introduce a novel task, Next
Sentence Polarity Prediction (NSPP), and a variation of the Next Sentence
Prediction (NSP) task. We show that BERT and RoBERTa further pre-trained on our
tasks outperform the off-the-shelf versions on nine negation-related
benchmarks. Most notably, our pre-training tasks yield between 1.8% and 9.1%
improvement on CondaQA, a large question-answering corpus requiring reasoning
over negation.
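The NSPP data construction can be sketched as follows; the negation-cue list and the rule assigning a polarity label are hypothetical stand-ins, since the abstract does not specify them:

```python
# Hypothetical cue list and labeling rule; the paper's actual
# construction may differ.
NEGATION_CUES = {"not", "no", "never", "n't", "without", "nobody", "nothing"}

def sentence_polarity(sentence):
    """1 if the sentence contains a negation cue, else 0."""
    tokens = sentence.lower().replace("n't", " n't").split()
    return int(any(t.strip(".,!?") in NEGATION_CUES for t in tokens))

def make_nspp_examples(document_sentences):
    """Self-supervised NSPP pairs: from each adjacent sentence pair,
    predict the polarity of the *next* sentence given the current one."""
    return [{"input": cur, "label": sentence_polarity(nxt)}
            for cur, nxt in zip(document_sentences, document_sentences[1:])]

doc = [
    "The experiment succeeded.",
    "However, the results were not conclusive.",
    "A follow-up study is planned.",
]
for ex in make_nspp_examples(doc):
    print(ex["label"], ex["input"])
```

Because the labels come from the raw text itself, examples can be mined from any unlabeled corpus, which is what makes the task self-supervised.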
|
2502.07718
|
Next-to-minimal weight of toric codes defined over hypersimplices
|
cs.IT math.AC math.AG math.IT
|
Toric codes are a type of evaluation code introduced by J.P. Hansen in 2000.
They are produced by evaluating (a vector space spanned by) polynomials at the
points of $(\mathbb{F}_q^*)^s$, with the monomials of these polynomials
related to a certain polytope. Toric codes related to hypersimplices are the
result of the evaluation of a vector space of square-free homogeneous
polynomials of degree $d$. The dimension and minimum distance of toric codes
related to hypersimplices have been determined by Jaramillo et al. in 2021. The
next-to-minimal weight in the case $d = 1$ has been determined by
Jaramillo-Velez et al. in 2023. In this work we use tools from Gr\"obner basis
theory to determine the next-to-minimal weight of these codes for $d$ such that
$3 \leq d \leq \frac{s - 2}{2}$ or $\frac{s + 2}{2} \leq d < s$.
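The underlying construction can be illustrated on a tiny instance ($s = 3$, $d = 2$ over $\mathbb{F}_5$, chosen for brute-force tractability and lying outside the theorem's parameter range): evaluate the square-free degree-$d$ monomials at every point of $(\mathbb{F}_q^*)^s$ and inspect the resulting code's weights.

```python
from itertools import product, combinations

q, s, d = 5, 3, 2                   # small example: F_5, s = 3, d = 2
units = list(range(1, q))           # F_q^* = {1, ..., q-1}
points = list(product(units, repeat=s))

# Square-free monomials of degree d: one per d-subset of the variables.
monomials = list(combinations(range(s), d))

def evaluate(mono, p):
    v = 1
    for i in mono:
        v = (v * p[i]) % q
    return v

# Rows of the generator matrix: each monomial evaluated at all points.
G = [[evaluate(m, p) for p in points] for m in monomials]

# Brute-force the minimum (Hamming) weight over all nonzero codewords.
min_wt = len(points)
for coeffs in product(range(q), repeat=len(G)):
    if not any(coeffs):
        continue
    wt = sum(1 for j in range(len(points))
             if sum(c * G[i][j] for i, c in enumerate(coeffs)) % q != 0)
    min_wt = min(min_wt, wt)

print(len(G), len(points), min_wt)
```

Here the three rows are independent, so the code has length $4^3 = 64$ and dimension $\binom{3}{2} = 3$; the brute-force loop recovers its minimum weight directly.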
|
2502.07721
|
TMLC-Net: Transferable Meta Label Correction for Noisy Label Learning
|
cs.LG cs.AI
|
The prevalence of noisy labels in real-world datasets poses a significant
impediment to the effective deployment of deep learning models. While
meta-learning strategies have emerged as a promising approach for addressing
this challenge, existing methods often suffer from limited transferability and
task-specific designs. This paper introduces TMLC-Net, a novel Transferable
Meta-Learner for Correcting Noisy Labels, designed to overcome these
limitations. TMLC-Net learns a general-purpose label correction strategy that
can be readily applied across diverse datasets and model architectures without
requiring extensive retraining or fine-tuning. Our approach integrates three
core components: (1) Normalized Noise Perception, which captures and normalizes
training dynamics to handle distribution shifts; (2) Time-Series Encoding,
which models the temporal evolution of sample statistics using a recurrent
neural network; and (3) Subclass Decoding, which predicts a corrected label
distribution based on the learned representations. We conduct extensive
experiments on benchmark datasets with various noise types and levels,
demonstrating that TMLC-Net consistently outperforms state-of-the-art methods
in terms of both accuracy and robustness to label noise. Furthermore, we
analyze the transferability of TMLC-Net, showcasing its adaptability to new
datasets and noise conditions, and establishing its potential as a broadly
applicable solution for robust deep learning in noisy environments.
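The three components can be sketched end-to-end on toy data. The exponential average standing in for the RNN encoder and the sigmoid blending rule in the decoder are illustrative assumptions, not the paper's architecture:

```python
import math

def normalize(xs):
    """(1) Normalized Noise Perception: z-score per-sample loss statistics
    so the corrector transfers across datasets with different loss scales."""
    m = sum(xs) / len(xs)
    sd = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5 or 1.0
    return [(x - m) / sd for x in xs]

def encode(history, decay=0.7):
    """(2) Time-Series Encoding: the paper uses a recurrent network; this
    sketch compresses each loss trajectory with an exponential average."""
    h = 0.0
    for v in history:
        h = decay * h + (1 - decay) * v
    return h

def decode(h, probs):
    """(3) Subclass Decoding: blend the given (possibly noisy) one-hot
    label toward the model's predictive distribution when the encoded
    loss history suggests the label is wrong (large h)."""
    w = 1.0 / (1.0 + math.exp(-h))   # in (0, 1): higher = more suspicious
    return [(1 - w) * p_label + w * p_model
            for p_label, p_model in zip(probs["label_onehot"], probs["model"])]

# Toy batch: sample 0 has a consistently high loss (likely mislabeled).
histories = [[2.5, 2.7, 2.9], [0.4, 0.3, 0.2], [0.5, 0.4, 0.5]]
flat = normalize([v for hist in histories for v in hist])
norm_hist = [flat[i * 3:(i + 1) * 3] for i in range(3)]
codes = [encode(h) for h in norm_hist]

sample0 = {"label_onehot": [1.0, 0.0, 0.0], "model": [0.1, 0.8, 0.1]}
corrected = decode(codes[0], sample0)
print([round(p, 2) for p in corrected])
```

On this toy batch, the suspicious sample's corrected distribution shifts mass from its given label toward the model's prediction, which is the qualitative behavior a learned corrector is trained to produce.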
|
2502.07726
|
DeepVL: Dynamics and Inertial Measurements-based Deep Velocity Learning
for Underwater Odometry
|
cs.RO
|
This paper presents a learned model to predict the robot-centric velocity of
an underwater robot through dynamics-aware proprioception. The method exploits
a recurrent neural network using as inputs inertial cues, motor commands, and
battery voltage readings alongside the hidden state of the previous time-step
to output robust velocity estimates and their associated uncertainty. An
ensemble of networks is utilized to enhance the velocity and uncertainty
predictions. Fusing the network's outputs into an Extended Kalman Filter,
alongside inertial predictions and barometer updates, the method enables
long-term underwater odometry without further exteroception. Furthermore, when
integrated into visual-inertial odometry, the method improves estimation
resilience when tracking an order of magnitude fewer features (as few as one)
compared to conventional visual-inertial
systems. Tested onboard an underwater robot deployed both in a laboratory pool
and the Trondheim Fjord, the method takes less than 5ms for inference either on
the CPU or the GPU of an NVIDIA Orin AGX and demonstrates less than 4% relative
position error in novel trajectories during complete visual blackout, and
approximately 2% relative error when a maximum of 2 visual features from a
monocular camera are available.
|
2502.07728
|
Verifying LLM-Generated Code in the Context of Software Verification
with Ada/SPARK
|
cs.SE cs.AI
|
Large language models (LLMs) have demonstrated remarkable code generation
capabilities, but the correctness of the generated code cannot be inherently
trusted. This paper explores the feasibility of using formal software
verification, specifically the SPARK framework for Ada, to ensure the
reliability of LLM-generated code. We present Marmaragan, a tool that leverages
an LLM in order to generate SPARK annotations for existing programs, enabling
formal verification of the code. The tool is benchmarked on a curated set of
SPARK programs, with annotations selectively removed to test specific
capabilities. The performance of Marmaragan with GPT-4o on the benchmark is
promising, with correct annotations having been generated for 50.7% of the
benchmark cases. The results establish a foundation for future work on
combining the power of LLMs with the reliability of formal software
verification.
|
2502.07730
|
DOGlove: Dexterous Manipulation with a Low-Cost Open-Source Haptic Force
Feedback Glove
|
cs.RO
|
Dexterous hand teleoperation plays a pivotal role in enabling robots to
achieve human-level manipulation dexterity. However, current teleoperation
systems often rely on expensive equipment and lack multi-modal sensory
feedback, restricting human operators' ability to perceive object properties
and perform complex manipulation tasks. To address these limitations, we
present DOGlove, a low-cost, precise, and haptic force feedback glove system
for teleoperation and manipulation. DOGlove can be assembled in hours at a cost
under 600 USD. It features a customized joint structure for 21-DoF motion
capture, a compact cable-driven torque transmission mechanism for 5-DoF
multidirectional force feedback, and a linear resonant actuator for 5-DoF
fingertip haptic feedback. Leveraging action and haptic force retargeting,
DOGlove enables precise and immersive teleoperation of dexterous robotic hands,
achieving high success rates in complex, contact-rich tasks. We further
evaluate DOGlove in scenarios without visual feedback, demonstrating the
critical role of haptic force feedback in task performance. In addition, we
utilize the collected demonstrations to train imitation learning policies,
highlighting the potential and effectiveness of DOGlove. DOGlove's hardware and
software system will be fully open-sourced at https://do-glove.github.io/.
|
2502.07732
|
Economics of Sourcing Human Data
|
cs.CY cs.AI cs.CL cs.CV cs.HC cs.LG
|
Progress in AI has relied on human-generated data, from annotator
marketplaces to the wider Internet. However, the widespread use of large
language models now threatens the quality and integrity of human-generated data
on these very platforms. We argue that this issue goes beyond the immediate
challenge of filtering AI-generated content--it reveals deeper flaws in how
data collection systems are designed. Existing systems often prioritize speed,
scale, and efficiency at the cost of intrinsic human motivation, leading to
declining engagement and data quality. We propose that rethinking data
collection systems to align with contributors' intrinsic motivations--rather
than relying solely on external incentives--can help sustain high-quality data
sourcing at scale while maintaining contributor trust and long-term
participation.
|
2502.07734
|
EdgeEar: Efficient and Accurate Ear Recognition for Edge Devices
|
cs.CV cs.AI
|
Ear recognition is a contactless and unobtrusive biometric technique with
applications across various domains. However, deploying high-performing ear
recognition models on resource-constrained devices is challenging, limiting
their applicability and widespread adoption. This paper introduces EdgeEar, a
lightweight model based on a proposed hybrid CNN-transformer architecture to
solve this problem. By incorporating low-rank approximations into specific
linear layers, EdgeEar reduces its parameter count by a factor of 50 compared
to the current state-of-the-art, bringing it below two million while
maintaining competitive accuracy. Evaluation on the Unconstrained Ear
Recognition Challenge (UERC2023) benchmark shows that EdgeEar achieves the
lowest EER while significantly reducing computational costs. These findings
demonstrate the feasibility of efficient and accurate ear recognition, which we
believe will contribute to the wider adoption of ear biometrics.
|
2502.07735
|
Revisiting Non-Acyclic GFlowNets in Discrete Environments
|
cs.LG stat.ML
|
Generative Flow Networks (GFlowNets) are a family of generative models that
learn to sample objects from a given probability distribution, potentially
known up to a normalizing constant. Instead of working in the object space,
GFlowNets proceed by sampling trajectories in an appropriately constructed
directed acyclic graph environment, greatly relying on the acyclicity of the
graph. In our paper, we revisit the theory that relaxes the acyclicity
assumption and present a simpler theoretical framework for non-acyclic
GFlowNets in discrete environments. Moreover, we provide various novel
theoretical insights related to training with fixed backward policies, the
nature of flow functions, and connections between entropy-regularized RL and
non-acyclic GFlowNets, which naturally generalize the respective concepts and
theoretical results from the acyclic setting. In addition, we experimentally
re-examine the concept of loss stability in non-acyclic GFlowNet training, as
well as validate our own theoretical findings.
|
2502.07737
|
Next Block Prediction: Video Generation via Semi-Autoregressive Modeling
|
cs.CV cs.AI
|
Next-Token Prediction (NTP) is a de facto approach for autoregressive (AR)
video generation, but it suffers from suboptimal unidirectional dependencies
and slow inference speed. In this work, we propose a semi-autoregressive
(semi-AR) framework, called Next-Block Prediction (NBP), for video generation.
By uniformly decomposing video content into equal-sized blocks (e.g., rows or
frames), we shift the generation unit from individual tokens to blocks,
allowing each token in the current block to simultaneously predict the
corresponding token in the next block. Unlike traditional AR modeling, our
framework employs bidirectional attention within each block, enabling tokens to
capture more robust spatial dependencies. By predicting multiple tokens in
parallel, NBP models significantly reduce the number of generation steps,
leading to faster and more efficient inference. Our model achieves FVD scores
of 103.3 on UCF101 and 25.5 on K600, outperforming the vanilla NTP model by an
average of 4.4. Furthermore, thanks to the reduced number of inference steps,
the NBP model generates 8.89 frames (128x128 resolution) per second, achieving
an 11x speedup. We also explored model scales ranging from 700M to 3B
parameters, observing significant improvements in generation quality, with FVD
scores dropping from 103.3 to 55.3 on UCF101 and from 25.5 to 19.5 on K600,
demonstrating the scalability of our approach.
|
2502.07738
|
EIQP: Execution-time-certified and Infeasibility-detecting QP Solver
|
eess.SY cs.SY math.OC
|
Solving real-time quadratic programming (QP) is a ubiquitous task in control
engineering, such as in model predictive control and control barrier
function-based QP. In such real-time scenarios, certifying that the employed QP
algorithm can either return a solution within a predefined level of optimality
or detect QP infeasibility before the predefined sampling time is a pressing
requirement. This article considers convex QP (including linear programming)
and adopts its homogeneous formulation to achieve infeasibility detection.
Exploiting this homogeneous formulation, this article proposes a novel
infeasible interior-point method (IPM) algorithm with the best theoretical
$O(\sqrt{n})$ iteration complexity that feasible IPM algorithms enjoy. The
iteration complexity is proved to be \textit{exact} (rather than an upper
bound), \textit{simple to calculate}, and \textit{data independent}, with the
value
$\left\lceil\frac{\log(\frac{n+1}{\epsilon})}{-\log(1-\frac{0.414213}{\sqrt{n+1}})}\right\rceil$
(where $n$ and $\epsilon$ denote the number of constraints and the predefined
optimality level, respectively), making it appealing to certify the execution
time of online time-varying convex QPs. The proposed algorithm is simple to
implement without requiring a line search procedure (uses the full Newton
step), and its C-code implementation (offering MATLAB, Julia, and Python
interfaces) and numerical examples are publicly available at
https://github.com/liangwu2019/EIQP.
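The stated iteration count is a closed-form expression and can be evaluated directly; a minimal sketch (the function name is illustrative, not from the EIQP codebase):

```python
import math

def eiqp_iteration_count(n: int, epsilon: float) -> int:
    """Exact, data-independent IPM iteration count from the abstract:
    ceil( log((n+1)/epsilon) / -log(1 - 0.414213/sqrt(n+1)) ),
    where n is the number of constraints and epsilon the optimality level."""
    numerator = math.log((n + 1) / epsilon)
    denominator = -math.log(1.0 - 0.414213 / math.sqrt(n + 1))
    return math.ceil(numerator / denominator)

# The count grows like O(sqrt(n) log(n/epsilon)), consistent with the
# O(sqrt(n)) iteration complexity claimed for the method.
print(eiqp_iteration_count(10, 1e-6))
print(eiqp_iteration_count(1000, 1e-6))
```

Because the count depends only on `n` and `epsilon`, the worst-case execution time of the solver can be certified offline, before deployment.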
|
2502.07739
|
HRP: High-Rank Preheating for Superior LoRA Initialization
|
cs.LG
|
This paper studies the crucial impact of initialization on the convergence
properties of Low-Rank Adaptation (LoRA). We theoretically demonstrate that
random initialization, a widely used schema, will likely lead LoRA to random
low-rank results, rather than the best low-rank result. While this issue can be
mitigated by adjusting initialization towards a well-informed direction, it
relies on prior knowledge of the target, which is typically unknown in
real-world scenarios. To approximate this well-informed initial direction, we
propose High-Rank Preheating (HRP), which fine-tunes high-rank LoRA for a few
steps and uses the singular value decomposition of the preheated result as a
superior initialization. HRP initialization is theory-supported to combine the
convergence strengths of high-rank LoRA and the generalization strengths of
low-rank LoRA. Extensive experiments demonstrate that HRP significantly
enhances LoRA's effectiveness across various models and tasks, achieving
performance comparable to full-parameter fine-tuning and outperforming other
initialization strategies.
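The SVD-truncation step described above can be sketched in a few lines (shapes, names, and the toy setup are illustrative; the preheating fine-tuning itself is omitted):

```python
import numpy as np

def hrp_init(delta_w: np.ndarray, r: int):
    """Truncate a preheated high-rank LoRA update delta_w to a rank-r
    initialization (B, A) via SVD, splitting singular values evenly."""
    U, S, Vt = np.linalg.svd(delta_w, full_matrices=False)
    B = U[:, :r] * np.sqrt(S[:r])            # shape (d_out, r)
    A = np.sqrt(S[:r])[:, None] * Vt[:r]     # shape (r, d_in)
    return B, A

# By Eckart-Young, B @ A is the best rank-r approximation of delta_w;
# here delta_w has exact rank 8, so r=8 recovers it (up to float error).
rng = np.random.default_rng(0)
delta_w = rng.standard_normal((64, 8)) @ rng.standard_normal((8, 32))
B, A = hrp_init(delta_w, r=8)
err = np.linalg.norm(delta_w - B @ A)
print(err)
```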
|
2502.07741
|
Advancing climate model interpretability: Feature attribution for Arctic
melt anomalies
|
cs.LG
|
The focus of our work is improving the interpretability of anomalies in
climate models and advancing our understanding of Arctic melt dynamics. The
Arctic and Antarctic ice sheets are experiencing rapid surface melting and
increased freshwater runoff, contributing significantly to global sea level
rise. Understanding the mechanisms driving snowmelt in these regions is
crucial. ERA5, a widely used reanalysis dataset in polar climate studies,
offers extensive climate variables and global data assimilation. However, its
snowmelt model employs an energy imbalance approach that may oversimplify the
complexity of surface melt. In contrast, the Glacier Energy and Mass Balance
(GEMB) model incorporates additional physical processes, such as snow
accumulation, firn densification, and meltwater percolation/refreezing,
providing a more detailed representation of surface melt dynamics. In this
research, we focus on analyzing surface snowmelt dynamics of the Greenland Ice
Sheet using feature attribution for anomalous melt events in ERA5 and GEMB
models. We present a novel unsupervised attribution method leveraging
counterfactual explanations to analyze detected anomalies in ERA5 and
GEMB. Our anomaly detection results are validated using MEaSUREs ground-truth
data, and the attributions are evaluated against established feature ranking
methods, including XGBoost, Shapley values, and Random Forest. Our attribution
framework identifies the physics behind each model and the climate features
driving melt anomalies. These findings demonstrate the utility of our
attribution method in enhancing the interpretability of anomalies in climate
models and advancing our understanding of Arctic melt dynamics.
|
2502.07746
|
HiPoNet: A Topology-Preserving Multi-View Neural Network For High
Dimensional Point Cloud and Single-Cell Data
|
cs.LG math.AT
|
In this paper, we propose HiPoNet, an end-to-end differentiable neural
network for regression, classification, and representation learning on
high-dimensional point clouds. Single-cell data can have high dimensionality
exceeding the capabilities of existing point-cloud methods tailored for 3D
data. Moreover, modern single-cell and spatial experiments now yield entire
cohorts of datasets (i.e., one per patient), necessitating models that can
process large, high-dimensional point clouds at scale. Most current approaches
build a single nearest-neighbor graph, discarding important geometric
information. In contrast, HiPoNet forms higher-order simplicial complexes
through learnable feature reweighting, generating multiple data views that
disentangle distinct biological processes. It then employs simplicial wavelet
transforms to extract multi-scale features - capturing both local and global
topology. We empirically show that these components preserve topological
information in the learned representations, and that HiPoNet significantly
outperforms state-of-the-art point-cloud and graph-based models on single-cell
data.
We also show an application of HiPoNet on spatial transcriptomics datasets
using spatial coordinates as one of the views. Overall, HiPoNet offers a
robust and scalable solution for high-dimensional data analysis.
|
2502.07747
|
WHODUNIT: Evaluation benchmark for culprit detection in mystery stories
|
cs.CL cs.AI
|
We present a novel data set, WhoDunIt, to assess the deductive reasoning
capabilities of large language models (LLM) within narrative contexts.
Constructed from open domain mystery novels and short stories, the dataset
challenges LLMs to identify the perpetrator after reading and comprehending the
story. To evaluate model robustness, we apply a range of character-level name
augmentations, including original names, name swaps, and substitutions with
well-known real and/or fictional entities from popular discourse. We further
use various prompting styles to investigate the influence of prompting on
deductive reasoning accuracy.
We conduct an evaluation study with state-of-the-art models, specifically
GPT-4o, GPT-4-turbo, and GPT-4o-mini, evaluated through multiple trials with
majority response selection to ensure reliability. The results demonstrate that
while LLMs perform reliably on unaltered texts, accuracy diminishes with
certain name substitutions, particularly those with wide recognition. This
dataset is publicly available.
|
2502.07749
|
Whole-Genome Phenotype Prediction with Machine Learning: Open Problems
in Bacterial Genomics
|
q-bio.GN cs.LG
|
How can we identify causal genetic mechanisms that govern bacterial traits?
Initial efforts entrusting machine learning models to handle the task of
predicting phenotype from genotype return high accuracy scores. However,
attempts to extract any meaning from the predictive models are found to be
corrupted by falsely identified "causal" features. Relying solely on pattern
recognition and correlations is unreliable, significantly so in bacterial
genomics settings where high-dimensionality and spurious associations are the
norm. Though it is not yet clear whether we can overcome this hurdle,
significant efforts are being made towards discovering potential high-risk
bacterial genetic variants. In view of this, we set up open problems
surrounding phenotype prediction from bacterial whole-genome datasets and
extending those to learning causal effects, and discuss challenges that impact
the reliability of a machine's decision-making when faced with datasets of this
nature.
|
2502.07750
|
PFedDST: Personalized Federated Learning with Decentralized Selection
Training
|
cs.LG cs.AI
|
Distributed Learning (DL) enables the training of machine learning models
across multiple devices, yet it faces challenges like non-IID data
distributions and device capability disparities, which can impede training
efficiency. Communication bottlenecks further complicate traditional Federated
Learning (FL) setups. To mitigate these issues, we introduce the Personalized
Federated Learning with Decentralized Selection Training (PFedDST) framework.
PFedDST enhances model training by allowing devices to strategically evaluate
and select peers based on a comprehensive communication score. This score
integrates loss, task similarity, and selection frequency, ensuring optimal
peer connections. This selection strategy is tailored to increase local
personalization and promote beneficial peer collaborations to strengthen the
stability and efficiency of the training process. Our experiments demonstrate
that PFedDST not only enhances model accuracy but also accelerates convergence.
This approach outperforms state-of-the-art methods in handling data
heterogeneity, delivering both faster and more effective training in diverse
and decentralized systems.
|
2502.07751
|
CausalGeD: Blending Causality and Diffusion for Spatial Gene Expression
Generation
|
cs.CV q-bio.GN
|
The integration of single-cell RNA sequencing (scRNA-seq) and spatial
transcriptomics (ST) data is crucial for understanding gene expression in
spatial context. Existing methods for such integration have limited
performance, with structural similarity often below 60\%. We attribute this
limitation to the failure to consider causal relationships between genes. We
present CausalGeD, which combines diffusion and autoregressive processes to
leverage these relationships. By generalizing the Causal Attention Transformer
from image generation to gene expression data, our model captures regulatory
mechanisms without predefined relationships. Across 10 tissue datasets,
CausalGeD outperformed state-of-the-art baselines by 5-32\% in key metrics,
including Pearson's correlation and structural similarity, advancing both
technical and biological insights.
|
2502.07752
|
Towards Efficient Optimizer Design for LLM via Structured Fisher
Approximation with a Low-Rank Extension
|
cs.LG cs.AI stat.ML
|
Designing efficient optimizers for large language models (LLMs) with
low-memory requirements and fast convergence is an important and challenging
problem. This paper makes a step towards the systematic design of such
optimizers through the lens of structured Fisher information matrix (FIM)
approximation. We show that many state-of-the-art efficient optimizers can be
viewed as solutions to FIM approximation (under the Frobenius norm) with
specific structural assumptions. Building on these insights, we propose two
design recommendations for practical efficient optimizers for LLMs, involving
the careful selection of structural assumptions to balance generality and
efficiency, and enhancing memory efficiency of optimizers with general
structures through a novel low-rank extension framework. We demonstrate how to
use each design approach by deriving new memory-efficient optimizers: Row and
Column Scaled SGD (RACS) and Adaptive low-dimensional subspace estimation
(Alice). Experiments on LLaMA pre-training (up to 1B parameters) validate the
effectiveness, showing faster and better convergence than existing
memory-efficient baselines and Adam with little memory overhead. Notably, Alice
achieves more than 2x faster convergence than Adam, while RACS delivers
strong performance on the 1B model with SGD-like memory.
|
2502.07753
|
Direct Ascent Synthesis: Revealing Hidden Generative Capabilities in
Discriminative Models
|
cs.CV
|
We demonstrate that discriminative models inherently contain powerful
generative capabilities, challenging the fundamental distinction between
discriminative and generative architectures. Our method, Direct Ascent
Synthesis (DAS), reveals these latent capabilities through multi-resolution
optimization of CLIP model representations. While traditional inversion
attempts produce adversarial patterns, DAS achieves high-quality image
synthesis by decomposing optimization across multiple spatial scales (1x1 to
224x224), requiring no additional training. This approach not only enables
diverse applications -- from text-to-image generation to style transfer -- but
also maintains natural image statistics ($1/f^2$ spectrum) and guides the generation
away from non-robust adversarial patterns. Our results demonstrate that
standard discriminative models encode substantially richer generative knowledge
than previously recognized, providing new perspectives on model
interpretability and the relationship between adversarial examples and natural
image synthesis.
|
2502.07754
|
MeshSplats: Mesh-Based Rendering with Gaussian Splatting Initialization
|
cs.GR cs.CV
|
Gaussian Splatting (GS) is a recent and pivotal technique in 3D computer
graphics. GS-based algorithms almost always bypass classical methods such as
ray tracing, which offers numerous inherent advantages for rendering. For
example, ray tracing is able to handle incoherent rays for advanced lighting
effects, including shadows and reflections. To address this limitation, we
introduce MeshSplats, a method which converts GS to a mesh-like format.
Following the completion of training, MeshSplats transforms Gaussian elements
into mesh faces, enabling rendering using ray tracing methods with all their
associated benefits. Our model can be utilized immediately following
transformation, yielding a mesh of slightly reduced quality without additional
training. Furthermore, we can enhance the reconstruction quality through the
application of a dedicated optimization algorithm that operates on mesh faces
rather than Gaussian components. The efficacy of our method is substantiated by
experimental results, underscoring its extensive applications in computer
graphics and image processing.
|
2502.07755
|
An Advanced NLP Framework for Automated Medical Diagnosis with DeBERTa
and Dynamic Contextual Positional Gating
|
cs.CL cs.AI
|
This paper presents a novel Natural Language Processing (NLP) framework for
enhancing medical diagnosis through the integration of advanced techniques in
data augmentation, feature extraction, and classification. The proposed
approach employs back-translation to generate diverse paraphrased datasets,
improving robustness and mitigating overfitting in classification tasks.
Leveraging Decoding-enhanced BERT with Disentangled Attention (DeBERTa) with
Dynamic Contextual Positional Gating (DCPG), the model captures fine-grained
contextual and positional relationships, dynamically adjusting the influence of
positional information based on semantic context to produce high-quality text
embeddings. For classification, an Attention-Based Feedforward Neural Network
(ABFNN) is utilized, effectively focusing on the most relevant features to
improve decision-making accuracy. Applied to the classification of symptoms,
clinical notes, and other medical texts, this architecture demonstrates its
ability to address the complexities of medical data. The combination of data
augmentation, contextual embedding generation, and advanced classification
mechanisms offers a robust and accurate diagnostic tool, with potential
applications in automated medical diagnosis and clinical decision support. This
method demonstrates the effectiveness of the proposed NLP framework for medical
diagnosis, achieving remarkable results with an accuracy of 99.78%, recall of
99.72%, precision of 99.79%, and an F1-score of 99.75%. These metrics not only
underscore the model's robust performance in classifying medical texts with
exceptional precision and reliability but also highlight its superiority over
existing methods, making it a highly promising tool for automated diagnostic
systems.
|
2502.07758
|
Novel computational workflows for natural and biomedical image
processing based on hypercomplex algebras
|
cs.CV cs.LG
|
Hypercomplex image processing extends conventional techniques in a unified
paradigm encompassing algebraic and geometric principles. This work leverages
quaternions and the two-dimensional orthogonal planes split framework
(splitting of a quaternion - representing a pixel - into pairs of orthogonal 2D
planes) for natural/biomedical image analysis through the following
computational workflows and outcomes: natural/biomedical image re-colorization,
natural image de-colorization, natural/biomedical image contrast enhancement,
computational re-staining and stain separation in histological images, and
performance gains in machine/deep learning pipelines for histological images.
The workflows are analyzed separately for natural and biomedical images to
showcase the effectiveness of the proposed approaches. The proposed workflows
can regulate color appearance (e.g. with alternative renditions and grayscale
conversion) and image contrast, be part of automated image processing pipelines
(e.g. isolating stain components, boosting learning models), and assist in
digital pathology applications (e.g. enhancing biomarker visibility, enabling
colorblind-friendly renditions). Employing only basic arithmetic and matrix
operations, this work offers a computationally accessible methodology - in the
hypercomplex domain - that showcases versatility and consistency across image
processing tasks and a range of computer vision and biomedical applications.
The proposed non-data-driven methods achieve comparable or better results
(particularly in cases involving well-known methods) to those reported in the
literature, showcasing the potential of robust theoretical frameworks with
practical effectiveness. Results, methods, and limitations are detailed
alongside discussion of promising extensions, emphasizing the potential of
feature-rich mathematical/computational frameworks for natural and biomedical
images.
|
2502.07760
|
Scalable Fingerprinting of Large Language Models
|
cs.CR cs.LG
|
Model fingerprinting has emerged as a powerful tool for model owners to
identify their shared model given API access. However, to lower false discovery
rate, fight fingerprint leakage, and defend against coalitions of model users
attempting to bypass detection, we argue that {\em scalability} is critical,
i.e., scaling up the number of fingerprints one can embed into a model. Hence,
we pose scalability as a crucial requirement for fingerprinting schemes. We
experiment with fingerprint design at a scale significantly larger than
previously considered, and introduce a new method, dubbed Perinucleus sampling,
to generate scalable, persistent, and harmless fingerprints. We demonstrate
that this scheme can add 24,576 fingerprints to a Llama-3.1-8B model -- two
orders of magnitude more than existing schemes -- without degrading the model's
utility. Our inserted fingerprints persist even after supervised fine-tuning on
standard post-training data. We further address security risks for
fingerprinting, and theoretically and empirically show how a scalable
fingerprinting scheme like ours can mitigate these risks.
|
2502.07764
|
Polynomial-Time Approximability of Constrained Reinforcement Learning
|
cs.DS cs.AI cs.LG
|
We study the computational complexity of approximating general constrained
Markov decision processes. Our primary contribution is the design of a
polynomial time $(0,\epsilon)$-additive bicriteria approximation algorithm for
finding optimal constrained policies across a broad class of recursively
computable constraints, including almost-sure, chance, expectation, and their
anytime variants. Matching lower bounds imply our approximation guarantees are
optimal so long as $P \neq NP$. The generality of our approach results in
answers to several long-standing open complexity questions in the constrained
reinforcement learning literature. Specifically, we are the first to prove
polynomial-time approximability for the following settings: policies under
chance constraints, deterministic policies under multiple expectation
constraints, policies under non-homogeneous constraints (i.e., constraints of
different types), and policies under constraints for continuous-state
processes.
|
2502.07771
|
Breaking Down Bias: On The Limits of Generalizable Pruning Strategies
|
cs.CL cs.AI cs.CY cs.LG
|
We employ model pruning to examine how LLMs conceptualize racial biases, and
whether a generalizable mitigation strategy for such biases appears feasible.
Our analysis yields several novel insights. We find that pruning can be an
effective method to reduce bias without significantly increasing anomalous
model behavior. Neuron-based pruning strategies generally yield better results
than approaches pruning entire attention heads. However, our results also show
that the effectiveness of either approach quickly deteriorates as pruning
strategies become more generalized. For instance, a model pruned to remove
racial biases in the context of financial decision-making generalizes poorly
to biases in commercial transactions. Overall, our analysis
suggests that racial biases are only partially represented as a general concept
within language models. The other part of these biases is highly
context-specific, suggesting that generalizable mitigation strategies may be of
limited effectiveness. Our findings have important implications for legal
frameworks surrounding AI. In particular, they suggest that an effective
mitigation strategy should include the allocation of legal responsibility on
those that deploy models in a specific use case.
|
2502.07772
|
Automatic Robot Task Planning by Integrating Large Language Model with
Genetic Programming
|
cs.RO cs.SY eess.SY
|
Accurate task planning is critical for controlling autonomous systems, such
as robots, drones, and self-driving vehicles. Behavior Trees (BTs) are
considered one of the most prominent control-policy-defining frameworks in task
planning, due to their modularity, flexibility, and reusability. Generating
reliable and accurate BT-based control policies for robotic systems remains
challenging and often requires domain expertise. In this paper, we present the
LLM-GP-BT technique that leverages the Large Language Model (LLM) and Genetic
Programming (GP) to automate the generation and configuration of BTs. The
LLM-GP-BT technique processes robot task commands expressed in human natural
language and converts them into accurate and reliable BT-based task plans in a
computationally efficient and user-friendly manner. The proposed technique is
systematically developed and validated through simulation experiments,
demonstrating its potential to streamline task planning for autonomous systems.
|
2502.07774
|
Optimistic Interior Point Methods for Sequential Hypothesis Testing by
Betting
|
cs.LG
|
The technique of "testing by betting" frames nonparametric sequential
hypothesis testing as a multiple-round game, where a player bets on future
observations that arrive in a streaming fashion, accumulates wealth that
quantifies evidence against the null hypothesis, and rejects the null once the
wealth exceeds a specified threshold while controlling the false positive
error. Designing an online learning algorithm that achieves a small regret in
the game can help rapidly accumulate the bettor's wealth, which in turn can
shorten the time to reject the null hypothesis under the alternative $H_1$.
However, many of the existing works employ the Online Newton Step (ONS) to
update within a halved decision space to avoid a gradient explosion issue,
which is potentially conservative for rapid wealth accumulation. In this paper,
we introduce a novel strategy utilizing interior-point methods in optimization
that allows updates across the entire interior of the decision space without
the risk of gradient explosion. Our approach not only maintains strong
statistical guarantees but also facilitates faster null hypothesis rejection in
critical scenarios, overcoming the limitations of existing approaches.
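The wealth process behind testing by betting can be sketched in a few lines; here a crude plug-in betting strategy stands in for ONS, and `lam_cap` contrasts the halved decision space with near-full bets (the strategy and all numbers are illustrative, not the paper's interior-point method):

```python
import random

def betting_test(observations, threshold=20.0, lam_cap=0.5):
    """Test by betting: wealth multiplies by (1 + lam * g) each round;
    reject the null (mean zero) once wealth crosses the threshold.
    The plug-in bet below is a crude stand-in for ONS; lam_cap=0.5
    mimics the halved decision space, lam_cap -> 1 allows fuller bets."""
    wealth, lam, mean_est = 1.0, 0.0, 0.0
    for t, g in enumerate(observations, 1):   # each g lies in [-1, 1]
        wealth *= 1.0 + lam * g
        if wealth >= threshold:
            return t, wealth                  # rejection time, final wealth
        mean_est += (g - mean_est) / t        # running mean of observations
        lam = max(-lam_cap, min(lam_cap, mean_est))
    return None, wealth                       # never rejected

random.seed(0)
# Under the alternative H1 the observations have positive mean 0.3.
data = [max(-1.0, min(1.0, random.gauss(0.3, 0.4))) for _ in range(2000)]
t_half, _ = betting_test(data, lam_cap=0.5)
t_full, _ = betting_test(data, lam_cap=0.99)
```

Both variants reject quickly here because the bet converges to the true mean; a larger decision space matters most when the optimal bet is aggressive.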
|
2502.07776
|
Auditing Prompt Caching in Language Model APIs
|
cs.CL cs.CR cs.LG
|
Prompt caching in large language models (LLMs) results in data-dependent
timing variations: cached prompts are processed faster than non-cached prompts.
These timing differences introduce the risk of side-channel timing attacks. For
example, if the cache is shared across users, an attacker could identify cached
prompts from fast API response times to learn information about other users'
prompts. Because prompt caching may cause privacy leakage, transparency around
the caching policies of API providers is important. To this end, we develop and
conduct statistical audits to detect prompt caching in real-world LLM API
providers. We detect global cache sharing across users in seven API providers,
including OpenAI, resulting in potential privacy leakage about users' prompts.
Timing variations due to prompt caching can also result in leakage of
information about model architecture. Namely, we find evidence that OpenAI's
embedding model is a decoder-only Transformer, which was previously not
publicly known.
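A minimal version of such a timing audit can be phrased as a permutation test on response latencies; the latency numbers below are simulated stand-ins for real API measurements, not the paper's data:

```python
import random

def permutation_pvalue(cached, control, n_perm=5000, seed=1):
    """One-sided permutation test: is mean(cached) < mean(control)?
    A small p-value means cached prompts are systematically faster,
    which is evidence of prompt caching (and a timing side channel)."""
    rng = random.Random(seed)
    observed = sum(control) / len(control) - sum(cached) / len(cached)
    pooled = cached + control
    k = len(cached)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = sum(pooled[k:]) / len(control) - sum(pooled[:k]) / k
        if diff >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)          # add-one smoothing

rng = random.Random(0)
# Simulated per-request latencies (seconds): cache hits ~40% faster.
cached  = [rng.gauss(0.12, 0.02) for _ in range(50)]
control = [rng.gauss(0.20, 0.02) for _ in range(50)]
p = permutation_pvalue(cached, control)
```

With a gap this large the p-value is essentially the resolution of the test, so an auditor would confidently flag cache sharing.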
|
2502.07778
|
Stay-Positive: A Case for Ignoring Real Image Features in Fake Image
Detection
|
cs.CV
|
Detecting AI-generated images is a challenging yet essential task. A primary
difficulty arises from the detector's tendency to rely on spurious patterns,
such as compression artifacts, which can influence its decisions. These issues
often stem from specific patterns that the detector associates with the real
data distribution, making it difficult to isolate the actual generative traces.
We argue that an image should be classified as fake if and only if it contains
artifacts introduced by the generative model. Based on this premise, we propose
Stay Positive, an algorithm designed to constrain the detector's focus to
generative artifacts while disregarding those associated with real data.
Experimental results demonstrate that detectors trained with Stay Positive
exhibit reduced susceptibility to spurious correlations, leading to improved
generalization and robustness to post-processing. Additionally, unlike
detectors that associate artifacts with real images, those that focus purely on
fake artifacts are better at detecting inpainted real images.
|
2502.07780
|
DarwinLM: Evolutionary Structured Pruning of Large Language Models
|
cs.LG cs.CL
|
Large Language Models (LLMs) have achieved significant success across various
NLP tasks. However, their massive computational costs limit their widespread
use, particularly in real-time applications. Structured pruning offers an
effective solution by compressing models and directly providing end-to-end
speed improvements, regardless of the hardware environment. Meanwhile,
different components of the model exhibit varying sensitivities towards
pruning, calling for \emph{non-uniform} model compression. However, a pruning
method should not only identify a capable substructure, but also account for
post-compression training. To this end, we propose \sysname, a method for
\emph{training-aware} structured pruning. \sysname builds upon an evolutionary
search process, generating multiple offspring models in each generation through
mutation, and selecting the fittest for survival. To assess the effect of
post-training, we incorporate a lightweight, multistep training process within
the offspring population, progressively increasing the number of tokens and
eliminating poorly performing models in each selection stage. We validate our
method through extensive experiments on Llama-2-7B, Llama-3.1-8B and
Qwen-2.5-14B-Instruct, achieving state-of-the-art performance for structured
pruning. For instance, \sysname surpasses ShearedLlama while requiring
$5\times$ less training data during post-compression training.
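The search loop can be sketched generically; the fitness surrogate, sensitivity profile, and selection schedule below are invented placeholders for the paper's actual model evaluation and multistep post-training:

```python
import random

def evolve(n_layers=6, generations=10, offspring=8, seed=0):
    """Toy sketch of training-aware evolutionary structured pruning:
    each candidate is a per-layer sparsity vector, and 'train_steps'
    stands in for the lightweight multistep post-training used to
    rank offspring. The fitness surrogate is invented for illustration."""
    rng = random.Random(seed)
    # Invented sensitivity profile: later layers tolerate more pruning.
    sens = [1.0 / (i + 1) for i in range(n_layers)]

    def fitness(cand, train_steps):
        # Reward overall sparsity, penalize sensitivity-weighted damage;
        # more 'training' recovers part of the damage.
        sparsity = sum(cand) / n_layers
        damage = sum(s * c * c for s, c in zip(sens, cand))
        return sparsity - damage / (1 + 0.1 * train_steps)

    parent = [0.5] * n_layers
    for _ in range(generations):
        kids = [[min(0.9, max(0.0, c + rng.gauss(0, 0.1))) for c in parent]
                for _ in range(offspring)]
        # Progressive selection: cheap ranking first, then survivors
        # get more training budget before the final pick.
        kids.sort(key=lambda k: fitness(k, train_steps=1), reverse=True)
        survivors = kids[: max(2, offspring // 4)]
        parent = max(survivors, key=lambda k: fitness(k, train_steps=10))
    return parent

best = evolve()
```

The non-uniform sparsity that emerges reflects the varying per-layer sensitivities the abstract describes.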
|
2502.07782
|
A Flag Decomposition for Hierarchical Datasets
|
cs.CV
|
Flag manifolds encode hierarchical nested sequences of subspaces and serve as
powerful structures for various computer vision and machine learning
applications. Despite their utility in tasks such as dimensionality reduction,
motion averaging, and subspace clustering, current applications are often
restricted to extracting flags using common matrix decomposition methods like
the singular value decomposition. Here, we address the need for a general
algorithm to factorize and work with hierarchical datasets. In particular, we
propose a novel, flag-based method that decomposes arbitrary hierarchical
real-valued data into a hierarchy-preserving flag representation in Stiefel
coordinates. Our work harnesses the potential of flag manifolds in applications
including denoising, clustering, and few-shot learning.
|
2502.07783
|
Curvature Tuning: Provable Training-free Model Steering From a Single
Parameter
|
cs.LG
|
The scaling of model size and data size has reshaped the paradigm of AI. As a
result, the common protocol to leverage the latest models is to steer them
towards a specific downstream task of interest through {\em fine-tuning}.
Despite its importance, the main methods for fine-tuning remain limited to full
or low-rank adapters--containing countless hyper-parameters and lacking
interpretability. In this paper, we take a step back and demonstrate how novel
and explainable post-training steering solutions can be derived theoretically
from {\em spline operators}, a rich mathematical framing of Deep Networks that
was recently developed. Our method--coined \textbf{Curvature Tuning (CT)}--has
a single parameter that provably modulates the curvature of the model's
decision boundary, thereby allowing training-free steering. This makes CT
both more efficient and interpretable than conventional fine-tuning methods. We
empirically validate its effectiveness in improving generalization and
robustness of pretrained models. For example, CT improves out-of-distribution
transfer performances of ResNet-18/50 by 2.57\%/1.74\% across seventeen
downstream datasets, and improves RobustBench robust accuracy by
11.76\%/348.44\%. Additionally, we apply CT to ReLU-based Swin-T/S, improving
their generalization on nine downstream datasets by 2.43\%/3.33\%. Our code is
available at
\href{https://github.com/Leon-Leyang/curvature-tuning}{https://github.com/Leon-Leyang/curvature-tuning}.
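As a rough intuition for single-parameter curvature steering, one can interpolate between a sharp ReLU and a smooth softplus with one knob; this is only an illustrative relaxation, not the paper's spline-derived operator:

```python
import math

def smooth_relu(x, beta):
    """Illustrative single-knob activation: beta = 0 recovers ReLU
    (sharp decision boundaries), beta -> 1 makes the unit increasingly
    smooth, i.e. more curved near the kink. This is NOT the paper's
    exact spline operator, just a sketch of training-free steering."""
    if beta <= 0.0:
        return max(0.0, x)
    t = (1.0 - beta) / beta                   # inverse temperature
    # Tempered softplus: (1/t) * log(1 + exp(t * x))
    return math.log1p(math.exp(t * x)) / t

# As beta -> 0 the activation approaches ReLU pointwise.
xs = (-2.0, -0.5, 0.0, 0.5, 2.0)
vals_relu = [max(0.0, x) for x in xs]
vals_soft = [smooth_relu(x, 0.05) for x in xs]
```

Swapping such an activation into a pretrained network changes the curvature of its decision boundary without any gradient updates, which is the sense in which steering is training-free.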
|
2502.07784
|
MatSwap: Light-aware material transfers in images
|
cs.CV cs.GR
|
We present MatSwap, a method to transfer materials to designated surfaces in
an image photorealistically. Such a task is non-trivial due to the large
entanglement of material appearance, geometry, and lighting in a photograph. In
the literature, material editing methods typically rely on either cumbersome
text engineering or extensive manual annotations requiring artist knowledge and
3D scene properties that are impractical to obtain. In contrast, we propose to
directly learn the relationship between the input material -- as observed on a
flat surface -- and its appearance within the scene, without the need for
explicit UV mapping. To achieve this, we rely on a custom light- and
geometry-aware diffusion model. We fine-tune a large-scale pre-trained
text-to-image model for material transfer using our synthetic dataset,
preserving its strong priors to ensure effective generalization to real images.
As a result, our method seamlessly integrates a desired material into the
target location in the photograph while retaining the identity of the scene. We
evaluate our method on synthetic and real images and show that it compares
favorably to recent work both qualitatively and quantitatively. We will release
our code and data upon publication.
|
2502.07785
|
Pippo: High-Resolution Multi-View Humans from a Single Image
|
cs.CV cs.GR
|
We present Pippo, a generative model capable of producing 1K resolution dense
turnaround videos of a person from a single casually clicked photo. Pippo is a
multi-view diffusion transformer and does not require any additional inputs -
e.g., a fitted parametric model or camera parameters of the input image. We
pre-train Pippo on 3B human images without captions, and conduct multi-view
mid-training and post-training on studio captured humans. During mid-training,
to quickly absorb the studio dataset, we denoise several (up to 48) views at
low-resolution, and encode target cameras coarsely using a shallow MLP. During
post-training, we denoise fewer views at high-resolution and use pixel-aligned
controls (e.g., Spatial anchor and Plucker rays) to enable 3D consistent
generations. At inference, we propose an attention biasing technique that
allows Pippo to simultaneously generate greater than 5 times as many views as
seen during training. Finally, we also introduce an improved metric to evaluate
3D consistency of multi-view generations, and show that Pippo outperforms
existing works on multi-view human generation from a single image.
|
2502.07786
|
Counterexample Guided Program Repair Using Zero-Shot Learning and
MaxSAT-based Fault Localization
|
cs.SE cs.AI
|
Automated Program Repair (APR) for introductory programming assignments
(IPAs) is motivated by the large number of student enrollments in programming
courses each year. Since providing feedback on IPAs requires substantial time
and effort from faculty, personalized feedback often involves suggesting fixes
to students' programs. Formal Methods (FM)-based semantic repair approaches,
which check a program's execution against a test suite or reference solution,
are effective but limited. These tools excel at identifying buggy parts but can
only fix programs if the correct implementation and the faulty one share the
same control flow graph. Conversely, Large Language Models (LLMs) are used for
APR but often make extensive instead of minimal rewrites. This leads to more
invasive fixes, making it harder for students to learn from their mistakes. In
summary, LLMs excel at completing strings, while FM-based fault localization
excels at identifying buggy parts of a program. In this paper, we propose a
novel approach that combines the strengths of both FM-based fault localization
and LLMs, via zero-shot learning, to enhance APR for IPAs. Our method uses
MaxSAT-based fault localization to identify buggy parts of a program, then
presents the LLM with a program sketch devoid of these buggy statements. This
hybrid approach follows a CEGIS loop to iteratively refine the program. We ask
the LLM to synthesize the missing parts, which are then checked against a test
suite. If the suggested program is incorrect, a counterexample from the test
suite is fed back to the LLM. Our experiments show that our counterexample
guided approach, using MaxSAT-based bug-free program sketches, significantly
improves the repair capabilities of all six evaluated LLMs. This method allows
LLMs to repair more programs with smaller fixes, outperforming other
configurations and state-of-the-art symbolic program repair tools.
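The CEGIS-style loop can be sketched with a mocked proposal source standing in for the LLM; the program sketch, candidate list, and test suite below are toy stand-ins:

```python
def cegis_repair(sketch, candidates, tests, max_iters=10):
    """Toy CEGIS loop: 'candidates' stands in for an LLM proposing
    fills for the hole left after fault localization removes buggy
    statements; failing tests act as counterexamples that would be
    fed back to steer the next proposal."""
    counterexamples = []
    for cand in candidates[:max_iters]:
        program = sketch.replace("<HOLE>", cand)
        scope = {}
        exec(program, scope)                  # materialize the candidate fix
        f = scope["absolute"]
        failing = [(x, want) for x, want in tests if f(x) != want]
        if not failing:
            return program, counterexamples   # all tests pass: done
        counterexamples.append(failing[0])    # feedback for the next round
    return None, counterexamples

SKETCH = "def absolute(x):\n    <HOLE>\n"
# Proposals an LLM might emit, ordered by plausibility (toy example).
CANDS = ["return x", "return -x", "return x if x >= 0 else -x"]
TESTS = [(3, 3), (-4, 4), (0, 0)]
fixed, cex = cegis_repair(SKETCH, CANDS, TESTS)
```

Because the hole is localized, each accepted fix is minimal by construction, which is exactly the pedagogical advantage the abstract claims over whole-program rewrites.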
|
2502.07789
|
Do AI assistants help students write formal specifications? A study with
ChatGPT and the B-Method
|
cs.CY cs.AI
|
This paper investigates the role of AI assistants, specifically OpenAI's
ChatGPT, in teaching formal methods (FM) to undergraduate students, using the
B-method as a formal specification technique. While existing studies
demonstrate the effectiveness of AI in coding tasks, no study reports on its
impact on formal specifications. We examine whether ChatGPT provides an
advantage when writing B-specifications and analyse student trust in its
outputs. Our findings indicate that the AI does not help students to enhance
the correctness of their specifications, with low trust correlating to better
outcomes. Additionally, we identify a behavioural pattern of interaction with
ChatGPT that may influence the correctness of B-specifications.
|
2502.07790
|
Can Generative AI be Egalitarian?
|
cs.CY cs.AI
|
The recent explosion of "foundation" generative AI models has been built upon
the extensive extraction of value from online sources, often without
corresponding reciprocation. This pattern mirrors and intensifies the
extractive practices of surveillance capitalism, while the potential for
enormous profit has challenged technology organizations' commitments to
responsible AI practices, raising significant ethical and societal concerns.
However, a promising alternative is emerging: the development of models that
rely on content willingly and collaboratively provided by users. This article
explores this "egalitarian" approach to generative AI, taking inspiration from
the successful model of Wikipedia. We explore the potential implications of
this approach for the design, development, and constraints of future foundation
models. We argue that such an approach is not only ethically sound but may also
lead to models that are more responsive to user needs, more diverse in their
training data, and ultimately more aligned with societal values. Furthermore,
we explore potential challenges and limitations of this approach, including
issues of scalability, quality control, and potential biases inherent in
volunteer-contributed content.
|
2502.07791
|
Simple demonstration of different types of coupling in multiphysics
numerical problems
|
cs.CE physics.comp-ph
|
Numerical modelling of coupled multiphysics phenomena is becoming an
increasingly important subject in applied mathematics. The main challenge in
teaching this subject is the complexity of both the mathematical models and
their numerical implementation. In this note, a simple demonstrator is proposed
that enables demonstration of and some hands-on experience with three types of
coupling commonly used in computational science: one-way coupling, explicit
sequential coupling, and full coupling. It makes use of a familiar nonlinear
heat equation in 1D, with the solution being a propagating heat wave.
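The demonstrator's PDE is easy to reproduce; below is a hedged sketch of an explicit finite-difference solve of u_t = (u u_x)_x, whose degenerate conductivity produces the propagating heat wave mentioned above (grid and time-step values are illustrative, and the coupling variants are not reproduced here):

```python
def heat_wave(nx=101, nt=2000, dt=2e-5):
    """Explicit finite-difference solve of the nonlinear heat equation
    u_t = (k(u) u_x)_x with k(u) = u on [0, 1], hot Dirichlet boundary
    at x = 0. The degenerate conductivity yields a sharp propagating
    front (a heat wave) rather than instantaneous smoothing."""
    dx = 1.0 / (nx - 1)
    r = dt / dx**2                    # need 2 * r * max(u) <= 1 for stability
    u = [1e-3] * nx                   # small background temperature
    u[0] = 1.0                        # fixed hot boundary
    for _ in range(nt):
        nxt = u[:]
        for i in range(1, nx - 1):
            kr = 0.5 * (u[i] + u[i + 1])      # face conductivity k(u) = u
            kl = 0.5 * (u[i - 1] + u[i])
            nxt[i] = u[i] + r * (kr * (u[i + 1] - u[i]) - kl * (u[i] - u[i - 1]))
        u = nxt
    return u

profile = heat_wave()
```

At the final time the front has travelled only partway across the domain, so points far from the boundary remain essentially at the background value.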
|
2502.07794
|
Regulatory Science Innovation for Generative AI and Large Language
Models in Health and Medicine: A Global Call for Action
|
cs.CY cs.AI
|
The integration of generative AI (GenAI) and large language models (LLMs) in
healthcare presents both unprecedented opportunities and challenges,
necessitating innovative regulatory approaches. GenAI and LLMs offer broad
applications, from automating clinical workflows to personalizing diagnostics.
However, the non-deterministic outputs, broad functionalities and complex
integration of GenAI and LLMs challenge existing medical device regulatory
frameworks, including the total product life cycle (TPLC) approach. Here we
discuss the constraints of the TPLC approach to GenAI and LLM-based medical
device regulation, and advocate for global collaboration in regulatory science
research. This serves as the foundation for developing innovative approaches
including adaptive policies and regulatory sandboxes, to test and refine
governance in real-world settings. International harmonization, as seen with
the International Medical Device Regulators Forum, is essential to manage
implications of LLMs for global health, including risks of widening health
inequities driven by inherent model biases. By engaging multidisciplinary
expertise, prioritizing iterative, data-driven approaches, and focusing on the
needs of diverse populations, global regulatory science research enables the
responsible and equitable advancement of LLM innovations in healthcare.
|
2502.07800
|
neuro2voc: Decoding Vocalizations from Neural Activity
|
q-bio.NC cs.LG eess.AS
|
Accurate decoding of neural spike trains and relating them to motor output is
a challenging task due to the inherent sparsity and length in neural spikes and
the complexity of brain circuits. This master project investigates experimental
methods for decoding zebra finch motor outputs (in both discrete syllables and
continuous spectrograms), from invasive neural recordings obtained from
Neuropixels.
There are three major achievements: (1) XGBoost with SHAP analysis trained on
spike rates revealed neuronal interaction patterns crucial for syllable
classification. (2) A novel method (tokenizing neural data with GPT-2) and
architecture (Mamba2) demonstrated potential for decoding syllables from
spikes. (3) A combined contrastive learning-VAE framework successfully
generated spectrograms from binned neural data.
This work establishes a promising foundation for neural decoding of complex
motor outputs and offers several novel methodological approaches for processing
sparse neural data.
|
2502.07802
|
Movie Weaver: Tuning-Free Multi-Concept Video Personalization with
Anchored Prompts
|
cs.CV cs.GR cs.LG
|
Video personalization, which generates customized videos using reference
images, has gained significant attention. However, prior methods typically
focus on single-concept personalization, limiting broader applications that
require multi-concept integration. Attempts to extend these models to multiple
concepts often lead to identity blending, which results in composite characters
with fused attributes from multiple sources. This challenge arises due to the
lack of a mechanism to link each concept with its specific reference image. We
address this with anchored prompts, which embed image anchors as unique tokens
within text prompts, guiding accurate referencing during generation.
Additionally, we introduce concept embeddings to encode the order of reference
images. Our approach, Movie Weaver, seamlessly weaves multiple
concepts--including face, body, and animal images--into one video, allowing
flexible combinations in a single model. The evaluation shows that Movie Weaver
outperforms existing methods for multi-concept video personalization in
identity preservation and overall quality.
|
2502.07803
|
Reasoning-as-Logic-Units: Scaling Test-Time Reasoning in Large Language
Models Through Logic Unit Alignment
|
cs.AI cs.LG
|
Chain-of-Thought (CoT) prompting has shown promise in enhancing the reasoning
capabilities of large language models (LLMs) by generating natural language
(NL) rationales that lead to the final answer. However, it struggles with
numerical computation, which has led to the development of
program-aided techniques. Despite their potential, a persistent challenge
remains: inconsistencies between LLM-reported reasoning steps and the logic in
generated programs, which we term ``reasoning hallucinations." This stems from
the inherent ambiguities of NL and the statistical nature of LLMs, which often
lack rigorous logical coherence. To address this challenge, we propose a novel
test-time scaling framework, Reasoning-as-Logic-Units (RaLU), which constructs
a more reliable reasoning path by aligning logical units between the generated
program and their corresponding NL descriptions. By decomposing the initially
generated program into discrete units using static analysis, RaLU engages in an
iterative dialogue with the LLM to judge, refine, and explain each unit. A
rewind-and-correct mechanism ensures alignment between code statements and task
requirements in each unit, ultimately forming a cohesive reasoning path under
the program's logic, from which the model reaches a final solution. Our
experiments demonstrate that RaLU significantly outperforms existing baselines
in mathematical reasoning (GSM8K, MATH) and algorithmic reasoning (HumanEval+,
MBPP+), underscoring its potential to advance LLM reasoning and programming by
offering enhanced accuracy and interpretability.
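The decomposition step can be illustrated with standard static analysis; here Python's `ast` module splits a generated program into top-level statement units (the one-statement-per-unit granularity is an assumption for illustration, not necessarily RaLU's):

```python
import ast

def logic_units(source):
    """Sketch of the decomposition step: split a generated program into
    statement-level units via static analysis, so each unit can later
    be judged, refined, and explained against its NL description."""
    tree = ast.parse(source)
    fn = tree.body[0]                         # assume a single function
    # Each top-level statement in the function body becomes one unit;
    # compound statements (loops, ifs) stay intact as single units.
    return [ast.get_source_segment(source, stmt) for stmt in fn.body]

PROGRAM = """def solve(nums):
    total = 0
    for x in nums:
        if x % 2 == 0:
            total += x
    return total
"""
units = logic_units(PROGRAM)
```

Keeping compound statements whole gives each unit a self-contained piece of logic to align with a natural-language rationale.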
|
2502.07806
|
Quantum Powered Credit Risk Assessment: A Novel Approach using hybrid
Quantum-Classical Deep Neural Network for Row-Type Dependent Predictive
Analysis
|
q-fin.CP cs.AI cs.LG
|
The integration of Quantum Deep Learning (QDL) techniques into the landscape
of financial risk analysis presents a promising avenue for innovation. This
study introduces a framework for credit risk assessment in the banking sector,
combining quantum deep learning techniques with adaptive modeling for Row-Type
Dependent Predictive Analysis (RTDPA). By leveraging RTDPA, the proposed
approach tailors predictive models to different loan categories, aiming to
enhance the accuracy and efficiency of credit risk evaluation. While this work
explores the potential of integrating quantum methods with classical deep
learning for risk assessment, it focuses on the feasibility and performance of
this hybrid framework rather than claiming transformative industry-wide
impacts. The findings offer insights into how quantum techniques can complement
traditional financial analysis, paving the way for further advancements in
predictive modeling for credit risk.
|
2502.07807
|
CP-Guard+: A New Paradigm for Malicious Agent Detection and Defense in
Collaborative Perception
|
cs.CR cs.AI cs.CV cs.LG
|
Collaborative perception (CP) is a promising method for safe connected and
autonomous driving, which enables multiple vehicles to share sensing
information to enhance perception performance. However, compared with
single-vehicle perception, the openness of a CP system makes it more vulnerable
to malicious attacks that can inject malicious information to mislead the
perception of an ego vehicle, resulting in severe risks for safe driving. To
mitigate such vulnerability, we first propose a new paradigm for malicious
agent detection that effectively identifies malicious agents at the feature
level without requiring verification of final perception results, significantly
reducing computational overhead. Building on this paradigm, we introduce
CP-GuardBench, the first comprehensive dataset provided to train and evaluate
various malicious agent detection methods for CP systems. Furthermore, we
develop a robust defense method called CP-Guard+, which enhances the margin
between the representations of benign and malicious features through a
carefully designed Dual-Centered Contrastive Loss (DCCLoss). Finally, we
conduct extensive experiments on both CP-GuardBench and V2X-Sim, and
demonstrate the superiority of CP-Guard+.
|
2502.07809
|
Analyzing the Resource Utilization of Lambda Functions on Mobile
Devices: Case Studies on Kotlin and Swift
|
cs.SE cs.CL cs.PF
|
With billions of smartphones in use globally, the daily time spent on these
devices contributes significantly to overall electricity consumption. Given
this scale, even minor reductions in smartphone power use could result in
substantial energy savings. This study explores the impact of Lambda functions
on resource consumption in mobile programming. While Lambda functions are known
for enhancing code readability and conciseness, their use does not add to the
functional capabilities of a programming language. Our research investigates
the implications of using Lambda functions in terms of battery utilization,
memory usage, and execution time compared to equivalent code structures without
Lambda functions. Our findings reveal that Lambda functions impose a
considerable resource overhead on mobile devices without offering additional
functionalities.
|
2502.07811
|
CrossVideoMAE: Self-Supervised Image-Video Representation Learning with
Masked Autoencoders
|
cs.CV
|
Current video-based Masked Autoencoders (MAEs) primarily focus on learning
effective spatiotemporal representations from a visual perspective, which may
lead the model to prioritize general spatial-temporal patterns but often
overlook nuanced semantic attributes like specific interactions or sequences
that define actions - such as action-specific features that align more closely
with human cognition for space-time correspondence. This can limit the model's
ability to capture the essence of certain actions that are contextually rich
and continuous. Humans are capable of mapping visual concepts, object view
invariance, and semantic attributes available in static instances to comprehend
natural dynamic scenes or videos. Existing MAEs for videos and static images
rely on separate datasets for videos and images, which may lack the rich
semantic attributes necessary for fully understanding the learned concepts,
especially when compared to using video and corresponding sampled frame images
together. To this end, we propose CrossVideoMAE, an end-to-end self-supervised
cross-modal contrastive learning MAE that effectively learns both video-level
and frame-level rich spatiotemporal representations and semantic attributes.
Our method integrates mutual spatiotemporal information from videos with
spatial information from sampled frames within a feature-invariant space, while
encouraging invariance to augmentations within the video domain. This objective
is achieved through jointly embedding features of visible tokens and combining
feature correspondence within and across modalities, which is critical for
acquiring rich, label-free guiding signals from both video and frame image
modalities in a self-supervised manner. Extensive experiments demonstrate that
our approach surpasses previous state-of-the-art methods and ablation studies
validate the effectiveness of our approach.
|
2502.07812
|
Unpaired Image Dehazing via Kolmogorov-Arnold Transformation of Latent
Features
|
cs.CV
|
This paper proposes an innovative framework for Unsupervised Image Dehazing
via Kolmogorov-Arnold Transformation, termed UID-KAT. Image dehazing is
recognized as a challenging and ill-posed vision task that requires complex
transformations and interpretations in the feature space. Recent advancements
have introduced Kolmogorov-Arnold Networks (KANs), inspired by the
Kolmogorov-Arnold representation theorem, as promising alternatives to
Multi-Layer Perceptrons (MLPs) since KANs can leverage their polynomial
foundation to more efficiently approximate complex functions while requiring
fewer layers than MLPs. Motivated by this potential, this paper explores the
use of KANs combined with adversarial training and contrastive learning to
model the intricate relationship between hazy and clear images. Adversarial
training is employed due to its capacity in producing high-fidelity images, and
contrastive learning promotes the model's emphasis on significant features
while suppressing the influence of irrelevant information. The proposed UID-KAT
framework is trained in an unsupervised setting to take advantage of the
abundance of real-world data and address the challenge of preparing paired
hazy/clean images. Experimental results show that UID-KAT achieves
state-of-the-art dehazing performance across multiple datasets and scenarios,
outperforming existing unpaired methods while reducing model complexity. The
source code for this work is publicly available at
https://github.com/tranleanh/uid-kat.
|
2502.07813
|
CryptoX : Compositional Reasoning Evaluation of Large Language Models
|
cs.CR cs.AI
|
The compositional reasoning capacity has long been regarded as critical to
the generalization and emergent intelligence of large language models (LLMs).
However, despite numerous reasoning-related benchmarks, the compositional
reasoning capacity of LLMs is rarely studied or quantified in the existing
benchmarks. In this paper, we introduce CryptoX, an evaluation framework that,
for the first time, combines existing benchmarks with cryptographic principles
to quantify the compositional reasoning capacity of LLMs. Building upon CryptoX, we
construct CryptoBench, which integrates these principles into several
benchmarks for systematic evaluation. We conduct detailed experiments on widely
used open-source and closed-source LLMs using CryptoBench, revealing a huge gap
between open-source and closed-source LLMs. We further conduct thorough
mechanistic interpretability experiments to reveal the inner mechanisms of LLMs'
compositional reasoning, involving subproblem decomposition, subproblem
inference, and summarizing subproblem conclusions. Through analysis based on
CryptoBench, we highlight the value of independently studying compositional
reasoning and emphasize the need to enhance the compositional reasoning
capabilities of LLMs.
|
2502.07814
|
Satellite Observations Guided Diffusion Model for Accurate
Meteorological States at Arbitrary Resolution
|
cs.LG cs.AI physics.ao-ph
|
Accurate acquisition of surface meteorological conditions at arbitrary
locations holds significant importance for weather forecasting and climate
simulation. Because meteorological states derived from satellite
observations are often provided in the form of low-resolution grid fields, the
direct application of spatial interpolation to obtain meteorological states for
specific locations often results in significant discrepancies when compared to
actual observations. Existing downscaling methods for acquiring meteorological
state information at higher resolutions commonly overlook the correlation with
satellite observations. To bridge the gap, we propose Satellite-observations
Guided Diffusion Model (SGD), a conditional diffusion model pre-trained on ERA5
reanalysis data with satellite observations (GridSat) as conditions, which is
employed for sampling downscaled meteorological states through a zero-shot
guided sampling strategy and patch-based methods. During the training process,
we propose to fuse the information from GridSat satellite observations into
ERA5 maps via the attention mechanism, enabling SGD to generate atmospheric
states that align more accurately with actual conditions. During sampling, we
employ optimizable convolutional kernels to simulate the upscaling process,
thereby generating high-resolution ERA5 maps using low-resolution ERA5 maps as
well as observations from weather stations as guidance. Moreover, our devised
patch-based method promotes SGD to generate meteorological states at arbitrary
resolutions. Experiments demonstrate SGD fulfills accurate meteorological
states downscaling to 6.25km.
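The zero-shot guidance idea, nudging a high-resolution sample toward consistency with a low-resolution observation, can be sketched with a fixed average-pooling operator in place of the paper's optimizable kernels (all names and values are illustrative):

```python
def avg_pool2(x):
    """2x2 average pooling: the assumed coarse-graining operator A."""
    n = len(x)
    return [[(x[2*i][2*j] + x[2*i][2*j+1] + x[2*i+1][2*j] + x[2*i+1][2*j+1]) / 4.0
             for j in range(n // 2)] for i in range(n // 2)]

def guidance_step(x, y_lr, lr=1.0):
    """One zero-shot guidance step: nudge the high-res sample x toward
    consistency with the low-res observation y_lr by descending
    ||A(x) - y_lr||^2. Each of the 4 pixels in a block contributes 1/4
    to the block mean, so its gradient is 2 * residual * (1/4)."""
    n = len(x)
    ax = avg_pool2(x)
    out = [row[:] for row in x]
    for i in range(n // 2):
        for j in range(n // 2):
            g = 0.5 * (ax[i][j] - y_lr[i][j])
            for di in (0, 1):
                for dj in (0, 1):
                    out[2*i + di][2*j + dj] -= lr * g
    return out

x0 = [[0.0] * 4 for _ in range(4)]            # current HR sample
y  = [[1.0, 2.0], [3.0, 4.0]]                 # LR observation
x1 = guidance_step(x0, y)
err0 = sum((a - b) ** 2 for ra, rb in zip(avg_pool2(x0), y) for a, b in zip(ra, rb))
err1 = sum((a - b) ** 2 for ra, rb in zip(avg_pool2(x1), y) for a, b in zip(ra, rb))
```

In a diffusion sampler this correction is interleaved with denoising steps, so the generated high-resolution field stays faithful to the coarse observation.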
|
2502.07815
|
Decoding Complexity: Intelligent Pattern Exploration with CHPDA (Context
Aware Hybrid Pattern Detection Algorithm)
|
cs.CR cs.AI
|
Detecting sensitive data such as Personally Identifiable Information (PII)
and Protected Health Information (PHI) is critical for data security platforms.
This study evaluates regex-based pattern matching algorithms and exact-match
search techniques to optimize detection speed, accuracy, and scalability. Our
benchmarking results indicate that Google RE2 provides the best balance of
speed (10-15 ms/MB), memory efficiency (8-16 MB), and accuracy (99.5%) among
regex engines, outperforming PCRE while maintaining broader hardware
compatibility than Hyperscan. For exact matching, Aho-Corasick demonstrated
superior performance (8 ms/MB) and scalability for large datasets. Performance
analysis revealed that regex processing time scales linearly with dataset size
and pattern complexity. A hybrid AI + Regex approach achieved the highest F1
score (91.6%) by improving recall and minimizing false positives. Device
benchmarking confirmed that our solution maintains efficient CPU and memory
usage on both high-performance and mid-range systems. Despite its
effectiveness, challenges remain, such as limited multilingual support and the
need for regular pattern updates. Future work should focus on expanding
language coverage, integrating data security and privacy management (DSPM) with
data loss prevention (DLP) tools, and enhancing regulatory compliance for
broader global adoption.
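The two detection modes the benchmark compares can be sketched with stdlib tools: a toy Aho-Corasick automaton for exact keyword matching and a compiled regex for structured PII such as SSNs (patterns and text are illustrative; the benchmarked engines, RE2 and Hyperscan, are vastly faster than this sketch):

```python
import re
from collections import deque

def build_aho(patterns):
    """Minimal Aho-Corasick automaton: a trie plus BFS-computed failure
    links, enabling one-pass multi-pattern exact matching."""
    goto, fail, out = [{}], [0], [set()]
    for pat in patterns:
        s = 0
        for ch in pat:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(pat)
    q = deque(goto[0].values())               # depth-1 nodes fail to root
    while q:
        s = q.popleft()
        for ch, t in goto[s].items():
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]            # inherit suffix matches
            q.append(t)
    return goto, fail, out

def search(text, automaton):
    goto, fail, out = automaton
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        for pat in out[s]:
            hits.append((i - len(pat) + 1, pat))
    return hits

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")    # structured PII via regex
KEYWORDS = ["ssn", "passport", "medical record"]

text = "Patient ssn on file: 123-45-6789; passport pending."
auto = build_aho(KEYWORDS)
exact_hits = search(text.lower(), auto)
regex_hits = SSN.findall(text)
```

A hybrid pipeline would pass candidates like these to an AI model for context-aware filtering, which is how the reported F1 gain is obtained.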
|
2502.07817
|
Temporal Model On Quantum Logic
|
cs.AI math.LO quant-ph
|
This paper introduces a unified theoretical framework for modeling temporal
memory dynamics, combining concepts from temporal logic, memory decay models,
and hierarchical contexts. The framework formalizes the evolution of
propositions over time using linear and branching temporal models,
incorporating exponential decay (Ebbinghaus forgetting curve) and reactivation
mechanisms via Bayesian updating. The hierarchical organization of memory is
represented using directed acyclic graphs to model recall dependencies and
interference. Novel insights include feedback dynamics, recursive influences in
memory chains, and the integration of entropy-based recall efficiency. This
approach provides a foundation for understanding memory processes across
cognitive and computational domains.
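The decay and reactivation components can be caricatured in a few lines; the stability parameter `s` and the multiplicative boost below are illustrative stand-ins for the paper's Bayesian updating, not its actual formalism:

```python
import math

def retention(t, s=1.0):
    """Ebbinghaus-style exponential forgetting: R(t) = exp(-t / s)."""
    return math.exp(-t / s)

def reactivate(s, strength=0.5):
    """Toy reactivation: each recall increases the stability s,
    slowing later decay (a crude stand-in for Bayesian updating)."""
    return s * (1.0 + strength)

# A proposition decays, is recalled, and then decays more slowly.
s = 1.0
r_before = retention(1.0, s)   # retention after one unit of time
s = reactivate(s)              # recall boosts stability to 1.5
r_after = retention(1.0, s)    # higher retention at the same elapsed time
```

The qualitative point is that reactivation reshapes the decay curve rather than merely resetting it, which is what gives memory chains their recursive feedback dynamics.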
|
2502.07819
|
Enhancing kidney transplantation through multi-agent kidney exchange
programs: A comprehensive review and optimization models
|
cs.AI math.OC
|
This paper presents a comprehensive review of the last two decades of
research on Kidney Exchange Programs (KEPs), systematically categorizing and
classifying key contributions to provide readers with a structured
understanding of advancements in the field. The review highlights the evolution
of KEP methodologies and lays the foundation for our contribution. We propose
three mathematical models aimed at improving both the quantity and quality of
kidney transplants. Model 1 maximizes the number of transplants by focusing on
compatibility based on blood type and PRA, without additional constraints.
Model 2 introduces a minimum Human Leukocyte Antigen (HLA) compatibility
threshold to enhance transplant quality, though this leads to fewer matches.
Model 3 extends the problem to a Multi-Agent Kidney Exchange Program (MKEP),
pooling incompatible donor-recipient pairs across multiple agents, resulting in
a higher number of successful transplants while ensuring fairness across
agents. Sensitivity analyses demonstrate trade-offs between transplant quantity
and quality, with Model 3 striking the optimal balance by leveraging
multi-agent collaboration to improve both the number and quality of
transplants. These findings underscore the potential benefits of more
integrated kidney exchange systems.
|
2502.07820
|
Low-Rank Compression for IMC Arrays
|
cs.AR cs.AI
|
In this study, we address the challenge of low-rank model compression in the
context of in-memory computing (IMC) architectures. Traditional pruning
approaches, while effective in model size reduction, necessitate additional
peripheral circuitry to manage complex dataflows and mitigate dislocation
issues, leading to increased area and energy overheads. To circumvent these
drawbacks, we propose leveraging low-rank compression techniques, which, unlike
pruning, streamline the dataflow and seamlessly integrate with IMC
architectures. However, low-rank compression presents its own set of
challenges, namely i) suboptimal IMC array utilization and ii) compromised
accuracy. To address these issues, we introduce a novel approach i) employing
shift and duplicate kernel (SDK) mapping technique, which exploits idle IMC
columns for parallel processing, and ii) group low-rank convolution, which
mitigates the information imbalance in the decomposed matrices. Our
experimental results demonstrate that our proposed method achieves up to 2.5x
speedup or +20.9% accuracy boost over existing pruning techniques.
|
2502.07821
|
Amnesia as a Catalyst for Enhancing Black Box Pixel Attacks in Image
Classification and Object Detection
|
cs.CV cs.AI
|
It is well known that query-based attacks tend to have relatively higher
success rates in adversarial black-box attacks. While research on black-box
attacks is actively being conducted, relatively few studies have focused on
pixel attacks that target only a limited number of pixels. In image
classification, query-based pixel attacks often rely on patches, which heavily
depend on randomness and neglect the fact that scattered pixels are more
suitable for adversarial attacks. Moreover, to the best of our knowledge,
query-based pixel attacks have not been explored in the field of object
detection. To address these issues, we propose a novel pixel-based black-box
attack called Remember and Forget Pixel Attack using Reinforcement
Learning (RFPAR), consisting of two main components: the Remember and Forget
processes. RFPAR mitigates randomness and avoids patch dependency by leveraging
rewards generated through a one-step RL algorithm to perturb pixels. RFPAR
effectively creates perturbed images that minimize the confidence scores while
adhering to limited pixel constraints. Furthermore, we advance our proposed
attack beyond image classification to object detection, where RFPAR reduces the
confidence scores of detected objects to avoid detection. Experiments on the
ImageNet-1K dataset for classification show that RFPAR outperformed
state-of-the-art query-based pixel attacks. For object detection, using the
MSCOCO dataset with YOLOv8 and DDQ, RFPAR demonstrates mAP reduction comparable
to state-of-the-art query-based attacks while requiring fewer queries. Further
experiments on the Argoverse dataset using YOLOv8 confirm that RFPAR
effectively removes objects on a larger-scale dataset. Our code is available at
https://github.com/KAU-QuantumAILab/RFPAR.
|
2502.07822
|
PDM-SSD: Single-Stage Three-Dimensional Object Detector With Point
Dilation
|
cs.CV cs.AI
|
Current point-based detectors can only learn from the provided points, with
limited receptive fields and insufficient global learning capabilities for such
targets. In this paper, we present a novel Point Dilation Mechanism for
single-stage 3D detection (PDM-SSD) that takes advantage of both point and grid
representations. Specifically, we first use a PointNet-style 3D backbone for
efficient feature encoding. Then, a neck with Point Dilation Mechanism (PDM) is
used to expand the feature space, which involves two key steps: point dilation
and feature filling. The former expands each sampled point into a fixed-size
grid centered on it in Euclidean space. The latter fills the unoccupied grid
cells with features for backpropagation, using spherical harmonic coefficients
and a Gaussian density function to encode direction and scale. Next, we associate
multiple dilation centers and fuse coefficients to obtain sparse grid features
through height compression. Finally, we design a hybrid detection head for
joint learning, where on one hand, the scene heatmap is predicted to complement
the voting point set for improved detection accuracy, and on the other hand,
the target probabilities of detected boxes are calibrated through feature fusion.
On the challenging Karlsruhe Institute of Technology and Toyota Technological
Institute (KITTI) dataset, PDM-SSD achieves state-of-the-art results for
multi-class detection among single-modal methods with an inference speed of 68
frames per second. We also demonstrate the advantages of PDM-SSD in detecting sparse and
incomplete objects through numerous object-level instances. Additionally, PDM
can serve as an auxiliary network to establish a connection between sampling
points and object centers, thereby improving the accuracy of the model without
sacrificing inference speed. Our code will be available at
https://github.com/AlanLiangC/PDM-SSD.git.
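The point dilation step can be sketched in 2D: each sampled point is expanded into a small block of grid cells around it, so empty space near sparse points gains feature slots. The cell size, radius, and 2D simplification below are illustrative assumptions, not the paper's 3D implementation:

```python
def dilate_points(points, cell=1.0, radius=1):
    """Point dilation (2D sketch): expand each sampled point into a
    (2*radius+1)^2 block of grid cells centered on its cell, giving
    the neighborhood of sparse points a place to hold filled features."""
    occupied = set()
    for (x, y) in points:
        ci, cj = int(x // cell), int(y // cell)
        for di in range(-radius, radius + 1):
            for dj in range(-radius, radius + 1):
                occupied.add((ci + di, cj + dj))
    return occupied

# Two distant sampled points each claim a 3x3 block of cells.
cells = dilate_points([(0.5, 0.5), (3.2, 0.4)], cell=1.0, radius=1)
```

In the paper's setting the unoccupied cells would then be filled with directional and scale features before height compression; here the sketch only shows which cells become available.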
|
2502.07823
|
Runtime Tunable Tsetlin Machines for Edge Inference on eFPGAs
|
cs.AR cs.AI cs.LG
|
Embedded Field-Programmable Gate Arrays (eFPGAs) allow for the design of
hardware accelerators of edge Machine Learning (ML) applications at a lower
power budget compared with traditional FPGA platforms. However, the limited
eFPGA logic and memory significantly constrain compute capabilities and model
size. As such, ML application deployment on eFPGAs is in direct contrast with
the most recent FPGA approaches developing architecture-specific
implementations and maximizing throughput over resource frugality. This paper
focuses on the opposite side of this trade-off: the proposed eFPGA accelerator
focuses on minimizing resource usage and allowing flexibility for on-field
recalibration over throughput. This allows for runtime changes in model size,
architecture, and input data dimensionality without offline resynthesis. This
is made possible through the use of a bitwise compressed inference architecture
of the Tsetlin Machine (TM) algorithm. TM compute does not require any
multiplication operations, being limited to bitwise AND, OR, NOT, and
summation operations. Additionally, TM model compression allows the entire
model to fit within the on-chip block RAM of the eFPGA. The paper uses this
accelerator to propose a strategy for runtime model tuning in the field. The
proposed approach uses 2.5x fewer Look-up-Tables (LUTs) and 3.38x fewer
registers than the current most resource-frugal design and achieves up to 129x
energy reduction compared with low-power microcontrollers running the same ML
application.
|
2502.07825
|
Pre-Trained Video Generative Models as World Simulators
|
cs.CV cs.AI
|
Video generative models pre-trained on large-scale internet datasets have
achieved remarkable success, excelling at producing realistic synthetic videos.
However, they often generate clips based on static prompts (e.g., text or
images), limiting their ability to model interactive and dynamic scenarios. In
this paper, we propose Dynamic World Simulation (DWS), a novel approach to
transform pre-trained video generative models into controllable world
simulators capable of executing specified action trajectories. To achieve
precise alignment between conditioned actions and generated visual changes, we
introduce a lightweight, universal action-conditioned module that seamlessly
integrates into any existing model. Instead of focusing on complex visual
details, we demonstrate that consistent dynamic transition modeling is the key
to building powerful world simulators. Building upon this insight, we further
introduce a motion-reinforced loss that enhances action controllability by
compelling the model to capture dynamic changes more effectively. Experiments
demonstrate that DWS can be versatilely applied to both diffusion and
autoregressive transformer models, achieving significant improvements in
generating action-controllable, dynamically consistent videos across games and
robotics domains. Moreover, to facilitate the applications of the learned world
simulator in downstream tasks such as model-based reinforcement learning, we
propose prioritized imagination to improve sample efficiency, demonstrating
competitive performance compared with state-of-the-art methods.
|
2502.07826
|
Deep Learning in Automated Power Line Inspection: A Review
|
cs.CV eess.IV
|
In recent years, power line maintenance has seen a paradigm shift by moving
towards computer vision-powered automated inspection. The utilization of an
extensive collection of videos and images has become essential for maintaining
the reliability, safety, and sustainability of electricity transmission. A
significant focus on applying deep learning techniques for enhancing power line
inspection processes has been observed in recent research. A comprehensive
review of existing studies has been conducted in this paper, to aid researchers
and industries in developing improved deep learning-based systems for analyzing
power line data. The conventional steps of data analysis in power line
inspections have been examined, and the body of current research has been
systematically categorized into two main areas: the detection of components and
the diagnosis of faults. A detailed summary of the diverse methods and
techniques employed in these areas has been encapsulated, providing insights
into their functionality and use cases. Special attention has been given to the
exploration of deep learning-based methodologies for the analysis of power line
inspection data, with an exposition of their fundamental principles and
practical applications. Moreover, a vision for future research directions has
been outlined, highlighting the need for advancements such as edge-cloud
collaboration, and multi-modal analysis among others. Thus, this paper serves
as a comprehensive resource for researchers delving into deep learning for
power line analysis, illuminating the extent of current knowledge and the
potential areas for future investigation.
|
2502.07827
|
Implicit Language Models are RNNs: Balancing Parallelization and
Expressivity
|
cs.LG cs.AI
|
State-space models (SSMs) and transformers dominate the language modeling
landscape. However, they are constrained to a lower computational complexity
than classical recurrent neural networks (RNNs), limiting their expressivity.
In contrast, RNNs lack parallelization during training, raising fundamental
questions about the trade-off between parallelization and expressivity. We
propose implicit SSMs, which iterate a transformation until convergence to a
fixed point. Theoretically, we show that implicit SSMs implement the non-linear
state-transitions of RNNs. Empirically, we find that only approximate
fixed-point convergence suffices, enabling the design of a scalable training
curriculum that largely retains parallelization, with full convergence required
only for a small subset of tokens. Our approach demonstrates superior
state-tracking capabilities on regular languages, surpassing transformers and
SSMs. We further scale implicit SSMs to natural language reasoning tasks and
pretraining of large-scale language models up to 1.3B parameters on 207B
tokens, representing, to our knowledge, the largest implicit model trained to date.
Notably, our implicit models outperform their explicit counterparts on standard
benchmarks.
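The fixed-point iteration at the heart of an implicit model can be sketched in scalar form; the weights, tolerance, and scalar simplification below are illustrative assumptions. Since tanh is 1-Lipschitz, taking |w| < 1 makes the map a contraction, which is what guarantees convergence:

```python
import math

def fixed_point_state(x_t, h_prev, w=0.5, u=0.8, tol=1e-10, max_iter=200):
    """Solve h = tanh(w*h + u*x_t + h_prev) by naive iteration.
    With |w| < 1 the map contracts, so iteration converges to the
    equilibrium; this mirrors, in scalar form, the idea that an
    implicit SSM's fixed point realizes a non-linear RNN transition."""
    h = h_prev
    for _ in range(max_iter):
        h_next = math.tanh(w * h + u * x_t + h_prev)
        if abs(h_next - h) < tol:
            return h_next
        h = h_next
    return h

# Roll the implicit transition over a toy token sequence.
h = 0.0
for x in [1.0, -0.5, 0.25]:
    h = fixed_point_state(x, h)
```

The paper's training curriculum exploits that most tokens need only a few such iterations (approximate convergence), retaining parallelism; this sketch runs every token to full convergence for clarity.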
|
2502.07828
|
Some things to know about achieving artificial general intelligence
|
q-bio.NC cs.AI
|
Current and foreseeable GenAI models are not capable of achieving artificial
general intelligence because they are burdened with anthropogenic debt. They
depend heavily on human input to provide well-structured problems,
architecture, and training data. They cast every problem as a language pattern
learning problem and are thus not capable of the kind of autonomy needed to
achieve artificial general intelligence. Current models succeed at their tasks
because people solve most of the problems to which these models are directed,
leaving only simple computations for the model to perform, such as gradient
descent. Another barrier is the need to recognize that there are multiple kinds
of problems, some of which cannot be solved by available computational methods
(for example, "insight problems"). Current methods for evaluating models
(benchmarks and tests) are not adequate to identify the generality of the
solutions, because it is impossible to infer the means by which a problem was
solved from the fact of its solution. A test could be passed, for example, by a
test-specific or a test-general method. It is a logical fallacy (affirming the
consequent) to infer a method of solution from the observation of success.
|
2502.07829
|
Preference Alignment on Diffusion Model: A Comprehensive Survey for
Image Generation and Editing
|
cs.CV cs.LG
|
The integration of preference alignment with diffusion models (DMs) has
emerged as a transformative approach to enhance image generation and editing
capabilities. Although integrating diffusion models with preference alignment
strategies poses significant challenges for novices at this intersection,
comprehensive and systematic reviews of this subject are still notably lacking.
To bridge this gap, this paper extensively surveys preference alignment with
diffusion models in image generation and editing. First, we systematically
review cutting-edge optimization techniques such as reinforcement learning with
human feedback (RLHF), direct preference optimization (DPO), and others,
highlighting their pivotal role in aligning preferences with DMs. Then, we
thoroughly explore the applications of aligning preferences with DMs in
autonomous driving, medical imaging, robotics, and more. Finally, we
comprehensively discuss the challenges of preference alignment with DMs. To our
knowledge, this is the first survey centered on preference alignment with DMs,
providing insights to drive future innovation in this dynamic area.
|
2502.07830
|
Captured by Captions: On Memorization and its Mitigation in CLIP Models
|
cs.CV cs.AI cs.LG
|
Multi-modal models, such as CLIP, have demonstrated strong performance in
aligning visual and textual representations, excelling in tasks like image
retrieval and zero-shot classification. Despite this success, the mechanisms by
which these models utilize training data, particularly the role of
memorization, remain unclear. In uni-modal models, both supervised and
self-supervised, memorization has been shown to be essential for
generalization. However, it is not well understood how these findings would
apply to CLIP, which incorporates elements from both supervised learning via
captions that provide a supervisory signal similar to labels, and from
self-supervised learning via the contrastive objective. To bridge this gap in
understanding, we propose a formal definition of memorization in CLIP (CLIPMem)
and use it to quantify memorization in CLIP models. Our results indicate that
CLIP's memorization behavior falls between the supervised and self-supervised
paradigms, with "mis-captioned" samples exhibiting the highest levels of
memorization. Additionally, we find that the text encoder contributes more to
memorization than the image encoder, suggesting that mitigation strategies
should focus on the text domain. Building on these insights, we propose
multiple strategies to reduce memorization while at the same time improving
utility, something that had not been shown before for traditional learning
paradigms, where reducing memorization typically results in a utility decrease.
|
2502.07832
|
SHARP: Accelerating Language Model Inference by SHaring Adjacent layers
with Recovery Parameters
|
cs.LG cs.AI
|
While Large language models (LLMs) have advanced natural language processing
tasks, their growing computational and memory demands make deployment on
resource-constrained devices like mobile phones increasingly challenging. In
this paper, we propose SHARP (SHaring Adjacent Layers with Recovery
Parameters), a novel approach to accelerate LLM inference by sharing parameters
across adjacent layers, thus reducing memory load overhead, while introducing
low-rank recovery parameters to maintain performance. Inspired by observations
that consecutive layers have similar outputs, SHARP employs a two-stage
recovery process: Single Layer Warmup (SLW), and Supervised Fine-Tuning (SFT).
The SLW stage aligns the outputs of the shared layers using L_2 loss, providing
a good initialization for the following SFT stage to further restore the model
performance. Extensive experiments demonstrate that SHARP can recover the
model's perplexity on various in-distribution tasks using no more than 50k
fine-tuning samples while reducing the number of stored MLP parameters by 38% to
65%. We also conduct several ablation studies of SHARP and show that replacing
layers towards the later parts of the model yields better performance
retention, and that different recovery parameterizations perform similarly when
parameter counts are matched. Furthermore, SHARP saves 42.8% in model storage
and reduces the total inference time by 42.2% compared to the original
Llama2-7b model on mobile devices. Our results highlight SHARP as an efficient
solution for reducing inference costs in deploying LLMs without the need for
pretraining-scale resources.
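The reported 38% to 65% reduction in stored MLP parameters can be illustrated with a back-of-the-envelope count. The square d x d layer shape, pair-wise sharing, and rank r = 128 below are assumptions for illustration, not the paper's exact configuration:

```python
def mlp_params(num_layers, d):
    """Baseline: one d x d weight matrix per layer (toy simplification)."""
    return num_layers * d * d

def sharp_params(num_layers, d, r):
    """Adjacent-pair sharing: each pair stores one shared d x d weight,
    plus per-layer low-rank recovery factors A (d x r) and B (r x d)."""
    shared = (num_layers // 2) * d * d
    recovery = num_layers * 2 * d * r
    return shared + recovery

base = mlp_params(32, 4096)
compressed = sharp_params(32, 4096, r=128)
reduction = 1 - compressed / base   # fraction of MLP parameters removed
```

With these toy numbers the reduction works out to 43.75%, inside the 38% to 65% range the abstract reports; the rank r trades recovery capacity against savings.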
|
2502.07834
|
MEMHD: Memory-Efficient Multi-Centroid Hyperdimensional Computing for
Fully-Utilized In-Memory Computing Architectures
|
cs.AR cs.AI cs.LG
|
The implementation of Hyperdimensional Computing (HDC) on In-Memory Computing
(IMC) architectures faces significant challenges due to the mismatch between
high-dimensional vectors and IMC array sizes, leading to inefficient memory
utilization and increased computation cycles. This paper presents MEMHD, a
Memory-Efficient Multi-centroid HDC framework designed to address these
challenges. MEMHD introduces a clustering-based initialization method and
quantization aware iterative learning for multi-centroid associative memory.
Through these approaches and its overall architecture, MEMHD achieves a
significant reduction in memory requirements while maintaining or improving
classification accuracy. Our approach achieves full utilization of IMC arrays
and enables one-shot (or few-shot) associative search. Experimental results
demonstrate that MEMHD outperforms state-of-the-art binary HDC models,
achieving up to 13.69% higher accuracy with the same memory usage, or 13.25x
more memory efficiency at the same accuracy level. Moreover, MEMHD reduces
computation cycles by up to 80x and array usage by up to 71x compared to
baseline IMC mapping methods when mapped to 128x128 IMC arrays, while
significantly improving energy and computation cycle efficiency.
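The multi-centroid associative search can be sketched as nearest-centroid classification under Hamming distance, where each class keeps several binary centroids sized to fit the IMC array. The toy dimensions and centroids below are illustrative assumptions, not MEMHD's learned memory:

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary vectors."""
    return sum(x != y for x, y in zip(a, b))

def predict(query, class_centroids):
    """Multi-centroid associative search: assign the query to the class
    of its nearest centroid; on IMC hardware this is one (or few)
    parallel associative lookups rather than a long compute loop."""
    best_class, best_dist = None, float("inf")
    for label, centroids in class_centroids.items():
        for c in centroids:
            d = hamming(query, c)
            if d < best_dist:
                best_class, best_dist = label, d
    return best_class

centroids = {
    "A": [[0, 0, 1, 1], [0, 1, 1, 1]],   # multiple centroids capture intra-class variety
    "B": [[1, 1, 0, 0]],
}
label = predict([0, 0, 1, 0], centroids)
```

Keeping several small centroids per class, instead of one huge hypervector, is what lets the memory fill IMC arrays fully and cut search cycles.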
|
2502.07835
|
Bridging LLM-Generated Code and Requirements: Reverse Generation
technique and SBC Metric for Developer Insights
|
cs.SE cs.AI
|
The rise of Large Language Models (LLMs) in software engineering,
particularly in code generation, has garnered significant attention. However,
assessing the quality of AI-generated code remains a challenge due to the
inherent complexity of programming tasks and the lack of robust evaluation
metrics that align well with human judgment. Traditional token-based metrics
such as BLEU and ROUGE, while commonly used in natural language processing,
exhibit weak correlations with human assessments in code intelligence and
verification tasks. Furthermore, these metrics are primarily research focused
and are not designed for seamless integration into the software development
lifecycle, limiting their practical utility for developers seeking to improve
code quality and security.
AI-assisted coding has been shown to be more beneficial for senior
developers, as they possess the expertise to critically evaluate the generated
code for correctness, completeness, and compliance. In contrast, junior
developers may struggle to identify hallucinations, missing functionality, or
incorrect logic in AI-generated code. To bridge this gap, this paper introduces
a novel scoring mechanism called the SBC score, which is based on a reverse
generation technique that leverages the natural language generation
capabilities of LLMs. Unlike direct code analysis, our approach reconstructs
system requirements from AI-generated code and compares them with the original
specifications to quantify accuracy. The SBC score combines semantic
similarity, BLEU, and completeness analysis, providing actionable insights to
developers by highlighting missing features and hallucinations. Our code and
datasets are available on GitHub.
|
2502.07836
|
Advancing Precision Oncology Through Modeling of Longitudinal and
Multimodal Data
|
q-bio.QM cs.LG
|
Cancer evolves continuously over time through a complex interplay of genetic,
epigenetic, microenvironmental, and phenotypic changes. This dynamic behavior
drives uncontrolled cell growth, metastasis, immune evasion, and therapy
resistance, posing challenges for effective monitoring and treatment. However,
today's data-driven research in oncology has primarily focused on
cross-sectional analysis using data from a single modality, limiting the
ability to fully characterize and interpret the disease's dynamic
heterogeneity. Advances in multiscale data collection and computational methods
now enable the discovery of longitudinal multimodal biomarkers for precision
oncology. Longitudinal data reveal patterns of disease progression and
treatment response that are not evident from single-timepoint data, enabling
timely abnormality detection and dynamic treatment adaptation. Multimodal data
integration offers complementary information from diverse sources for more
precise risk assessment and targeting of cancer therapy. In this review, we
survey methods of longitudinal and multimodal modeling, highlighting their
synergy in providing multifaceted insights for personalized care tailored to
the unique characteristics of a patient's cancer. We summarize the current
challenges and future directions of longitudinal multimodal analysis in
advancing precision oncology.
|
2502.07837
|
RoboBERT: An End-to-end Multimodal Robotic Manipulation Model
|
cs.RO cs.LG
|
Embodied intelligence integrates multiple modalities, enabling agents to
understand images, language, and actions simultaneously. However, existing
models typically depend on additional datasets or extensive pre-training to
maximize performance, consuming abundant training time and
expensive hardware cost. To tackle this issue, we present RoboBERT, a novel
end-to-end robotic manipulation model integrated with a unique training
strategy. This model utilizes a CNN-based diffusion policy, whose effectiveness
is enhanced and stabilized by separating the training processes for different
modalities. It also underscores the importance of data
augmentation, verifying various techniques to significantly boost performance.
Unlike models that depend on extra data or large foundation models, RoboBERT
achieves a highly competitive success rate while using only language-labeled
expert demonstrations and maintaining a relatively small model size.
Specifically, RoboBERT achieves an average length of 4.52 on the CALVIN
benchmark for the \(ABCD \rightarrow D\) task, setting a new state-of-the-art
(SOTA) record. Furthermore, when tested on a real robot, the model demonstrates
superior performance, achieving a higher success rate than other methods
trained with the same data. We propose that these concepts and methodologies of
RoboBERT demonstrate extensive versatility and compatibility, contributing
significantly to the development of lightweight multimodal robotic models. The
code can be accessed at https://github.com/PeterWangsicheng/RoboBERT.
|
2502.07838
|
NanoVLMs: How small can we go and still make coherent Vision Language
Models?
|
cs.CV cs.AI
|
Vision-Language Models (VLMs), such as GPT-4V and Llama 3.2 vision, have
garnered significant research attention for their ability to leverage Large
Language Models (LLMs) in multimodal tasks. However, their potential is
constrained by inherent challenges, including proprietary restrictions,
substantial computational demands, and limited accessibility. Smaller models,
such as GIT and BLIP, exhibit marked limitations, often failing to generate
coherent and consistent text beyond a few tokens, even with extensive training.
This underscores a pivotal inquiry: how small can a VLM be and still produce
fluent and consistent text? Drawing inspiration from the exceptional learning
process of 3-4-year-old children, who rely heavily on visual cues for
understanding and communication, we introduce two novel datasets: ShortDesc
(featuring concise image descriptions) and LongDesc (containing more detailed
image descriptions). These datasets consist of image-text pairs where the text
is restricted to the simple vocabulary and syntax typically used by young
children, generated with a scaled-down model, GPT-4o. Using these datasets, we
demonstrate that it is possible to train VLMs that are significantly smaller,
up to 10 times smaller than state-of-the-art (SOTA) small VLMs, while
maintaining architectural simplicity. To evaluate the outputs, we leverage
GPT-4o to grade the text, as if it were stories written by students, on
creativity, meaningfulness, and consistency, assigning scores out of 10. This
method addresses limitations of
standard benchmarks by accommodating unstructured outputs and providing a
multidimensional evaluation of the model capabilities. Our findings contribute
to the development of lightweight, accessible multimodal models for resource
constrained environments.
|
2502.07839
|
Optimal Actuator Attacks on Autonomous Vehicles Using Reinforcement
Learning
|
cs.RO cs.LG
|
With the increasing prevalence of autonomous vehicles (AVs), their
vulnerability to various types of attacks has grown, presenting significant
security challenges. In this paper, we propose a reinforcement learning
(RL)-based approach for designing optimal stealthy integrity attacks on AV
actuators. We also analyze the limitations of state-of-the-art RL-based secure
controllers developed to counter such attacks. Through extensive simulation
experiments, we demonstrate the effectiveness and efficiency of our proposed
method.
|
2502.07840
|
TranSplat: Surface Embedding-guided 3D Gaussian Splatting for
Transparent Object Manipulation
|
cs.CV cs.RO
|
Transparent object manipulation remains a significant challenge in robotics
due to the difficulty of acquiring accurate and dense depth measurements.
Conventional depth sensors often fail with transparent objects, resulting in
incomplete or erroneous depth data. Existing depth completion methods struggle
with interframe consistency and incorrectly model transparent objects as
Lambertian surfaces, leading to poor depth reconstruction. To address these
challenges, we propose TranSplat, a surface embedding-guided 3D Gaussian
Splatting method tailored for transparent objects. TranSplat uses a latent
diffusion model to generate surface embeddings that provide consistent and
continuous representations, making it robust to changes in viewpoint and
lighting. By integrating these surface embeddings with input RGB images,
TranSplat effectively captures the complexities of transparent surfaces,
enhancing the splatting of 3D Gaussians and improving depth completion.
Evaluations on synthetic and real-world transparent object benchmarks, as well
as robot grasping tasks, show that TranSplat achieves accurate and dense depth
completion, demonstrating its effectiveness in practical applications. We
open-source the synthetic dataset and model:
https://github.com/jeongyun0609/TranSplat
|
2502.07842
|
Column-wise Quantization of Weights and Partial Sums for Accurate and
Efficient Compute-In-Memory Accelerators
|
cs.AR cs.AI cs.LG
|
Compute-in-memory (CIM) is an efficient method for implementing deep neural
networks (DNNs) but suffers from substantial overhead from analog-to-digital
converters (ADCs), especially as ADC precision increases. Low-precision ADCs
can reduce this overhead but introduce partial-sum quantization errors
degrading accuracy. Additionally, low-bit weight constraints, imposed by cell
limitations and the need for multiple cells for higher-bit weights, present
further challenges. While fine-grained partial-sum quantization has been
studied to lower ADC resolution effectively, weight granularity, which limits
overall partial-sum quantized accuracy, remains underexplored. This work
addresses these challenges by aligning weight and partial-sum quantization
granularities at the column-wise level. Our method improves accuracy while
maintaining dequantization overhead, simplifies training by removing two-stage
processes, and ensures robustness to memory cell variations via independent
column-wise scale factors. We also propose an open-source CIM-oriented
convolution framework to handle fine-grained weights and partial-sums
efficiently, incorporating a novel tiling method and group convolution.
Experimental results on ResNet-20 (CIFAR-10, CIFAR-100) and ResNet-18
(ImageNet) show accuracy improvements of 0.99%, 2.69%, and 1.01%, respectively,
compared to the best-performing related works. Additionally, variation analysis
reveals the robustness of our method against memory cell variations. These
findings highlight the effectiveness of our quantization scheme in enhancing
accuracy and robustness while maintaining hardware efficiency in CIM-based DNN
implementations. Our code is available at
https://github.com/jiyoonkm/ColumnQuant.
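Aligning the two granularities amounts to giving every weight column its own quantization scale, matching the column-wise partial sums of a CIM crossbar. The symmetric rounding scheme and bit-width below are illustrative assumptions, not the paper's training procedure:

```python
def quantize_columns(matrix, n_bits=4):
    """Symmetric per-column quantization: each column gets its own scale
    factor, so weight granularity matches the column-wise partial sums
    produced by a CIM array (illustrative sketch only)."""
    qmax = 2 ** (n_bits - 1) - 1
    cols = len(matrix[0])
    scales = []
    q = [[0] * cols for _ in matrix]
    for j in range(cols):
        col_absmax = max(abs(row[j]) for row in matrix) or 1.0
        s = col_absmax / qmax
        scales.append(s)
        for i, row in enumerate(matrix):
            q[i][j] = max(-qmax, min(qmax, round(row[j] / s)))
    return q, scales

def dequantize(q, scales):
    """Recover approximate weights by rescaling each column."""
    return [[v * scales[j] for j, v in enumerate(row)] for row in q]

W = [[0.30, -1.20], [-0.15, 0.80], [0.07, 2.40]]
Q, S = quantize_columns(W)
W_hat = dequantize(Q, S)
```

Independent column scales also localize the effect of any one memory cell's variation to its own column, which is the intuition behind the robustness the abstract reports.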
|
2502.07843
|
Emotional EEG Classification using Upscaled Connectivity Matrices
|
cs.LG
|
In recent studies of emotional EEG classification, connectivity matrices have
been successfully employed as input to convolutional neural networks (CNNs),
which can effectively consider inter-regional interaction patterns in EEG.
However, we find that such an approach has a limitation that important patterns
in connectivity matrices may be lost during the convolutional operations in
CNNs. To resolve this issue, we propose and validate an idea to upscale the
connectivity matrices to strengthen the local patterns. Experimental results
demonstrate that this simple idea can significantly enhance the classification
performance.
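The upscaling idea admits a one-line sketch (illustrative only; the paper's exact interpolation scheme is not reproduced here): enlarge each connectivity entry into a block before feeding the matrix to a CNN, so local patterns survive the early convolutions.

```python
import numpy as np

def upscale_connectivity(conn, factor=4):
    """Nearest-neighbour upscaling of a connectivity matrix.

    Hypothetical sketch of the idea: replicate each entry into a
    factor x factor block to strengthen local patterns.
    """
    return np.kron(conn, np.ones((factor, factor), dtype=conn.dtype))

conn = np.random.default_rng(1).random((32, 32)).astype(np.float32)
big = upscale_connectivity(conn)
print(big.shape)  # (128, 128)
```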
|
2502.07844
|
The establishment of static digital humans and the integration with
spinal models
|
eess.IV cs.CV
|
Adolescent idiopathic scoliosis (AIS), a prevalent spinal deformity,
significantly affects individuals' health and quality of life. Conventional
imaging techniques, such as X-rays, computed tomography (CT), and magnetic
resonance imaging (MRI), offer static views of the spine. However, they are
restricted in capturing the dynamic changes of the spine and its interactions
with overall body motion. Therefore, developing new techniques to address these
limitations has become extremely important. Dynamic digital human modeling
represents a major breakthrough in digital medicine. It enables a
three-dimensional (3D) view of the spine as it changes during daily activities,
assisting clinicians in detecting deformities that might be missed in static
imaging. Although dynamic modeling holds great potential, constructing an
accurate static digital human model is a crucial initial step for
high-precision simulations. In this study, our focus is on constructing an
accurate static digital human model integrating the spine, which is vital for
subsequent dynamic digital human research on AIS. First, we generate human
point-cloud data by combining the 3D Gaussian method with the Skinned
Multi-Person Linear (SMPL) model from the patient's multi-view images. Then, we
fit a standard skeletal model to the generated human model. Next, we align the
real spine model reconstructed from CT images with the standard skeletal model.
We validated the resulting personalized spine model using X-ray data from six
AIS patients, with Cobb angles (used to measure the severity of scoliosis) as
evaluation metrics. The results indicate that the model's error was within 1
degree of the actual measurements. This study presents an important method for
constructing digital humans.
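For the evaluation metric, the Cobb angle between two vertebral endplate directions can be computed as below (a generic textbook formulation, not the authors' pipeline; the input vectors are hypothetical 2D endplate directions as seen on an X-ray).

```python
import numpy as np

def cobb_angle(upper_endplate, lower_endplate):
    """Cobb angle in degrees between two endplate direction vectors.

    Standard definition: the angle between the most-tilted upper and
    lower endplates bounding the curve.
    """
    u = np.asarray(upper_endplate, dtype=float)
    v = np.asarray(lower_endplate, dtype=float)
    cosang = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# A vector tilted by arctan(0.36397) ~ 20 degrees from horizontal:
print(round(cobb_angle([1.0, 0.0], [1.0, 0.36397]), 1))  # 20.0
```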
|
2502.07845
|
Spread them Apart: Towards Robust Watermarking of Generated Content
|
cs.CV cs.AI
|
Generative models that can produce realistic images have improved
significantly in recent years. The quality of the generated content has
increased drastically, so sometimes it is very difficult to distinguish between
the real images and the generated ones. Such an improvement comes at the price
of ethical concerns about the use of generative models: users of generative
models can improperly claim ownership of generated content protected by a
license. In this paper, we propose an approach to embed watermarks into the
generated content to allow future detection of the generated content and
identification of the user who generated it. The watermark is embedded during
the inference of the model, so the proposed approach does not require
retraining the model. We prove that the embedded watermarks are guaranteed to
be robust against additive perturbations
of a bounded magnitude. We apply our method to watermark diffusion models and
show that it matches state-of-the-art watermarking schemes in terms of
robustness to different types of synthetic watermark removal attacks.
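One classical construction with exactly this bounded-additive-perturbation guarantee is quantization index modulation; the sketch below is illustrative and not necessarily the paper's scheme. Each coefficient is snapped to a lattice whose parity encodes one bit, so any perturbation of magnitude below half the lattice step cannot flip a decoded bit.

```python
import numpy as np

def embed(x, bits, delta=1.0):
    """Snap each coefficient to the nearest lattice point of step delta
    whose parity equals the bit: y/delta is an integer with parity b."""
    bits = np.asarray(bits)
    q = np.round((x - bits * delta) / (2 * delta))
    return (2 * q + bits) * delta

def extract(y, delta=1.0):
    """Decode the bit as the parity of the nearest lattice index."""
    return np.rint(y / delta).astype(int) % 2

rng = np.random.default_rng(2)
x = rng.normal(scale=5.0, size=64)
bits = rng.integers(0, 2, size=64)
y = embed(x, bits)
noise = rng.uniform(-0.49, 0.49, size=64)  # bounded: |eps| < delta / 2
print((extract(y + noise) == bits).all())  # True
```

The robustness is geometric: the decoder rounds to the nearest lattice point, and a perturbation smaller than half the step cannot move a watermarked value past the midpoint between lattice points.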
|
2502.07846
|
Memory Analysis on the Training Course of DeepSeek Models
|
cs.PF cs.LG
|
We present a theoretical analysis of GPU memory consumption during the
training of DeepSeek models such as DeepSeek-v2 and DeepSeek-v3. Our primary
objective is to clarify the device-level memory requirements associated with
various distributed training configurations. Specifically, we examine critical
factors influencing memory usage, including micro-batch size, activation
recomputation policies, 3D parallelism, and ZeRO optimizations. It is important
to emphasize that the training policies discussed in this report are not
representative of DeepSeek's official configurations. Instead, they are
explored to provide a deeper understanding of memory dynamics in the training
of large-scale mixture-of-experts models.
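The kind of accounting such an analysis performs can be sketched roughly as follows (the function name, defaults, and byte layout are assumptions: bf16 parameters and gradients, fp32 Adam master weights and moments at 12 bytes/parameter, ZeRO-1 sharding optimizer states over the data-parallel group; activation memory, which depends on micro-batch size and recomputation policy, is omitted).

```python
def training_memory_gb(n_params_b, tp=1, pp=1, zero_stage=1, dp=8,
                       bytes_param=2, bytes_grad=2, bytes_opt=12):
    """Rough per-GPU static memory estimate (GB) for mixed-precision
    Adam training, illustrating the factors the report analyses."""
    params = n_params_b * 1e9 / (tp * pp)      # params held on this GPU
    mem = params * (bytes_param + bytes_grad)  # weights + gradients
    opt = params * bytes_opt                   # fp32 master + Adam moments
    if zero_stage >= 1:
        opt /= dp                              # ZeRO-1 shards opt states
    return (mem + opt) / 1e9

# 7B model, TP=2, PP=2, ZeRO-1 over 8 data-parallel ranks:
print(training_memory_gb(7, tp=2, pp=2, dp=8))  # 9.625 (GB)
```

Even this crude model shows why 3D parallelism and ZeRO interact: tensor and pipeline parallelism shrink the resident parameter count, while ZeRO shards the (much larger) optimizer states.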
|
2502.07847
|
Technical note on calibrating vision-language models under covariate
shift
|
cs.CV cs.LG
|
Despite being a successful example of emerging capability, vision-language
foundation models for low-shot vision classification have a limited ability to
sufficiently generalize to the target data distribution due to sample poverty,
leading to sensitivity to variations in the data. A popular mitigation strategy
is finetuning over multiple datasets, but domain generalization is expensive
when practiced in this manner. This work examines both covariate shift between
pre-training data and the underspecified target data, and \textit{confidence
misalignment}, where the model's prediction confidence is amplified by the
limited data availability. We propose \textit{Confidence-Calibrated Covariate
Shift Correction ($C3SC$)}, a unified framework to mitigate both covariate
shift and confidence misalignment. $C3SC$ leverages a Fisher information
penalty for covariate shift correction and a confidence misalignment penalty
(CMP) to lower confidence on misclassified examples. Experimental results
across various vision and covariate shift datasets demonstrate that $C3SC$
significantly improves calibration (ECE) by up to $5.82\%$. $C3SC$ also shows
better robustness, with a $3.5\%$ accuracy improvement on
challenging covariate shift datasets, making $C3SC$ a promising solution for
reliable real-world vision-language low-shot applications under distribution
shift.
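Since the calibration gains are reported in ECE, a minimal reference implementation of expected calibration error may be useful (standard equal-width confidence binning; this is a generic formulation, not the paper's code).

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=10):
    """ECE: bin predictions by confidence and average the
    |accuracy - confidence| gap, weighted by bin population."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - conf[mask].mean())
            ece += mask.mean() * gap
    return ece

conf = np.array([0.9, 0.8, 0.7, 0.6])      # predicted confidences
correct = np.array([1.0, 1.0, 0.0, 1.0])   # 1 if prediction was right
print(round(expected_calibration_error(conf, correct), 3))  # 0.35
```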
|
2502.07849
|
Understanding Classifier-Free Guidance: High-Dimensional Theory and
Non-Linear Generalizations
|
cs.LG cs.AI stat.ML
|
Recent studies have raised concerns about the effectiveness of
Classifier-Free Guidance (CFG), indicating that in low-dimensional settings, it
can lead to overshooting the target distribution and reducing sample diversity.
In this work, we demonstrate that in infinite and sufficiently high-dimensional
contexts CFG effectively reproduces the target distribution, revealing a
blessing-of-dimensionality result. Additionally, we explore finite-dimensional
effects, precisely characterizing overshoot and variance reduction. Based on
our analysis, we introduce non-linear generalizations of CFG. Through numerical
simulations on Gaussian mixtures and experiments on class-conditional and
text-to-image diffusion models, we validate our analysis and show that our
non-linear CFG offers improved flexibility and generation quality without
additional computation cost.
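The linear CFG rule that these non-linear variants generalize is standard and fits in two lines (sketch; `w` is the guidance scale, and the inputs stand in for denoiser outputs).

```python
import numpy as np

def cfg_combine(eps_uncond, eps_cond, w=7.5):
    """Linear classifier-free guidance: extrapolate the conditional
    score prediction away from the unconditional one by factor w."""
    return eps_uncond + w * (eps_cond - eps_uncond)

eu = np.zeros(4)  # stand-in for the unconditional denoiser output
ec = np.ones(4)   # stand-in for the conditional denoiser output
print(cfg_combine(eu, ec, w=2.0))  # [2. 2. 2. 2.]
```

At w=1 this reduces to plain conditional sampling; the overshoot analyzed in the abstract comes from the extrapolation at w>1.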
|
2502.07850
|
Mathematical reasoning and the computer
|
cs.AI
|
Computers have already changed the way that humans do mathematics: they
enable us to compute efficiently. But will they soon be helping us to reason?
And will they one day start reasoning themselves? We give an overview of recent
developments in neural networks, computer theorem provers and large language
models.
|