| id | title | categories | abstract |
|---|---|---|---|
2502.01941
|
Can LLMs Maintain Fundamental Abilities under KV Cache Compression?
|
cs.CL cs.AI
|
This paper investigates an under-explored challenge in large language models
(LLMs): the impact of KV cache compression methods on LLMs' fundamental
capabilities. While existing methods achieve impressive compression ratios on
long-context benchmarks, their effects on core model capabilities remain
understudied. We present a comprehensive empirical study evaluating prominent
KV cache compression methods across diverse tasks, spanning world knowledge,
commonsense reasoning, arithmetic reasoning, code generation, safety, and
long-context understanding and generation. Our analysis reveals that KV cache
compression methods exhibit task-specific performance degradation. Arithmetic
reasoning tasks prove particularly sensitive to aggressive compression, with
different methods showing performance drops of $17.4\%$-$43.3\%$. Notably, the
DeepSeek R1 Distill model exhibits more robust compression tolerance compared
to instruction-tuned models, showing only $9.67\%$-$25.53\%$ performance
degradation. Based on our analysis of attention patterns and cross-task
compression performance, we propose ShotKV, a novel compression approach that
distinctly handles prefill and decoding phases while maintaining shot-level
semantic coherence. Empirical results show that ShotKV achieves $9\%$-$18\%$
performance improvements on long-context generation tasks under aggressive
compression ratios.
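The scoring-based eviction that most KV cache compression methods share can be illustrated with a minimal sketch (this is a generic accumulated-attention heuristic, not the paper's ShotKV; shapes and the scoring rule are assumptions):

```python
# Generic score-based KV cache eviction: tokens with the lowest accumulated
# attention mass are dropped until the cache fits the budget. Illustrative
# only -- real methods differ in how scores are computed and when they evict.
import numpy as np

def evict_kv(keys, values, attn_weights, budget):
    """keys, values: (seq, dim); attn_weights: (queries, seq); keep `budget` tokens."""
    scores = attn_weights.sum(axis=0)             # accumulated attention per cached token
    keep = np.sort(np.argsort(scores)[-budget:])  # top-`budget` tokens, original order
    return keys[keep], values[keep], keep

rng = np.random.default_rng(0)
K, V = rng.normal(size=(8, 4)), rng.normal(size=(8, 4))
A = rng.random((3, 8))
Kc, Vc, kept = evict_kv(K, V, A, budget=4)
assert Kc.shape == (4, 4) and len(kept) == 4
```

The abstract's finding is that which tokens survive such eviction matters far more for arithmetic reasoning than for knowledge recall.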
|
2502.01942
|
Boundary-Driven Table-Filling with Cross-Granularity Contrastive
Learning for Aspect Sentiment Triplet Extraction
|
cs.CL cs.AI
|
The Aspect Sentiment Triplet Extraction (ASTE) task aims to extract aspect
terms, opinion terms, and their corresponding sentiment polarity from a given
sentence. It remains one of the most prominent subtasks in fine-grained
sentiment analysis. Most existing approaches frame triplet extraction as a 2D
table-filling process in an end-to-end manner, focusing primarily on word-level
interactions while often overlooking sentence-level representations. This
limitation hampers the model's ability to capture global contextual
information, particularly when dealing with multi-word aspect and opinion terms
in complex sentences. To address these issues, we propose boundary-driven
table-filling with cross-granularity contrastive learning (BTF-CCL) to enhance
the semantic consistency between sentence-level representations and word-level
representations. By constructing positive and negative sample pairs, the model
is forced to learn the associations at both the sentence level and the word
level. Additionally, a multi-scale, multi-granularity convolutional method is
proposed to capture rich semantic information better. Our approach can capture
sentence-level contextual information more effectively while maintaining
sensitivity to local details. Experimental results show that the proposed
method achieves state-of-the-art performance on public benchmarks according to
the F1 score.
|
2502.01943
|
DAMA: Data- and Model-aware Alignment of Multi-modal LLMs
|
cs.CV
|
Direct Preference Optimization (DPO) has shown effectiveness in aligning
multi-modal large language models (MLLM) with human preferences. However,
existing methods exhibit an imbalanced responsiveness to the data of varying
hardness, tending to overfit on the easy-to-distinguish data while underfitting
on the hard-to-distinguish data. In this paper, we propose Data- and
Model-aware DPO (DAMA) to dynamically adjust the optimization process from two
key aspects: (1) a data-aware strategy that incorporates data hardness, and (2)
a model-aware strategy that integrates real-time model responses. By combining
the two strategies, DAMA enables the model to effectively adapt to data with
varying levels of hardness. Extensive experiments on five benchmarks
demonstrate that DAMA not only significantly enhances trustworthiness but
also improves effectiveness on general tasks. For instance, on the
Object-HalBench, our DAMA-7B reduces response-level and mentioned-level
hallucination by 90.0% and 95.3%, respectively, surpassing the performance of
GPT-4V.
|
2502.01946
|
HeRCULES: Heterogeneous Radar Dataset in Complex Urban Environment for
Multi-session Radar SLAM
|
cs.RO cs.CV
|
Recently, radars have been widely featured in robotics for their robustness
in challenging weather conditions. Two commonly used radar types are spinning
radars and phased-array radars, each offering distinct sensor characteristics.
Existing datasets typically feature only a single type of radar, leading to the
development of algorithms limited to that specific kind. In this work, we
highlight that combining different radar types offers complementary advantages,
which can be leveraged through a heterogeneous radar dataset. Moreover, this
new dataset fosters research in multi-session and multi-robot scenarios where
robots are equipped with different types of radars. In this context, we
introduce the HeRCULES dataset, a comprehensive, multi-modal dataset with
heterogeneous radars, FMCW LiDAR, IMU, GPS, and cameras. This is the first
dataset to integrate 4D radar and spinning radar alongside FMCW LiDAR, offering
unparalleled localization, mapping, and place recognition capabilities. The
dataset covers diverse weather and lighting conditions and a range of urban
traffic scenarios, enabling a comprehensive analysis across various
environments. The sequence paths with multiple revisits and ground truth pose
for each sensor enhance its suitability for place recognition research. We
expect the HeRCULES dataset to facilitate odometry, mapping, place recognition,
and sensor fusion research. The dataset and development tools are available at
https://sites.google.com/view/herculesdataset.
|
2502.01949
|
LAYOUTDREAMER: Physics-guided Layout for Text-to-3D Compositional Scene
Generation
|
cs.CV cs.AI cs.GR
|
Recently, the field of text-guided 3D scene generation has garnered
significant attention. High-quality generation that aligns with physical
realism and high controllability is crucial for practical 3D scene
applications. However, existing methods face fundamental limitations: (i)
difficulty capturing complex relationships between multiple objects described
in the text, (ii) inability to generate physically plausible scene layouts, and
(iii) lack of controllability and extensibility in compositional scenes. In
this paper, we introduce LayoutDreamer, a framework that leverages 3D Gaussian
Splatting (3DGS) to facilitate high-quality, physically consistent
compositional scene generation guided by text. Specifically, given a text
prompt, we convert it into a directed scene graph and adaptively adjust the
density and layout of the initial compositional 3D Gaussians. Subsequently,
dynamic camera adjustments are made based on the training focal point to ensure
entity-level generation quality. Finally, by extracting directed dependencies
from the scene graph, we tailor physical and layout energy to ensure both
realism and flexibility. Comprehensive experiments demonstrate that
LayoutDreamer outperforms existing methods in both compositional scene
generation quality and semantic alignment. In particular, it achieves
state-of-the-art (SOTA) performance on the multiple-object generation metric
of T3Bench.
|
2502.01951
|
On the Emergence of Position Bias in Transformers
|
cs.LG
|
Recent studies have revealed various manifestations of position bias in
transformer architectures, from the "lost-in-the-middle" phenomenon to
attention sinks, yet a comprehensive theoretical understanding of how attention
masks and positional encodings shape these biases remains elusive. This paper
introduces a novel graph-theoretic framework to analyze position bias in
multi-layer attention. Modeling attention masks as directed graphs, we quantify
how tokens interact with contextual information based on their sequential
positions. We uncover two key insights: First, causal masking inherently biases
attention toward earlier positions, as tokens in deeper layers attend to
increasingly more contextualized representations of earlier tokens. Second, we
characterize the competing effects of the causal mask and relative positional
encodings, such as the decay mask and rotary positional encoding (RoPE): while
both mechanisms introduce distance-based decay within individual attention
maps, their aggregate effect across multiple attention layers -- coupled with
the causal mask -- leads to a trade-off between the long-term decay effects and
the cumulative importance of early sequence positions. Through controlled
numerical experiments, we not only validate our theoretical findings but also
reproduce position biases observed in real-world LLMs. Our framework offers a
principled foundation for understanding positional biases in transformers,
shedding light on the complex interplay of attention mechanism components and
guiding more informed architectural design.
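The first insight above, that causal masking biases attention toward early positions across layers, can be reproduced in a toy setting (uniform causal attention, no positional encoding; a simplification of the paper's graph-theoretic model):

```python
# With row-stochastic causal attention, composing layers concentrates
# attention mass on early positions: deeper layers attend to representations
# that are themselves already weighted toward the start of the sequence.
import numpy as np

n = 8
A = np.tril(np.ones((n, n)))
A /= A.sum(axis=1, keepdims=True)   # row i attends uniformly to tokens 0..i

depth1 = A[-1, 0]                   # last token's mass on position 0, one layer
depth2 = (A @ A)[-1, 0]            # two layers: mass on position 0 grows
assert depth2 > depth1
```

Iterating the product further drives the last row toward a point mass on position 0, the attention-sink-like limit the abstract alludes to.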
|
2502.01953
|
Local minima of the empirical risk in high dimension: General theorems
and convex examples
|
stat.ML cs.LG math.ST stat.TH
|
We consider a general model for high-dimensional empirical risk minimization
whereby the data $\mathbf{x}_i$ are $d$-dimensional isotropic Gaussian vectors,
the model is parametrized by $\mathbf{\Theta}\in\mathbb{R}^{d\times k}$, and
the loss depends on the data via the projection
$\mathbf{\Theta}^\mathsf{T}\mathbf{x}_i$. This setting covers as special cases
classical statistics methods (e.g. multinomial regression and other generalized
linear models), but also two-layer fully connected neural networks with $k$
hidden neurons. We use the Kac-Rice formula from Gaussian process theory to
derive a bound on the expected number of local minima of this empirical risk,
under the proportional asymptotics in which $n,d\to\infty$, with $n\asymp d$.
Via Markov's inequality, this bound allows us to determine the positions of these
minimizers (with exponential deviation bounds) and hence derive sharp
asymptotics on the estimation and prediction error. In this paper, we apply our
characterization to convex losses, where high-dimensional asymptotics were not
(in general) rigorously established for $k\ge 2$. We show that our approach is
tight and allows us to prove previously conjectured results. In addition, we
characterize the spectrum of the Hessian at the minimizer. A companion paper
applies our general result to non-convex examples.
|
2502.01954
|
Constrained belief updates explain geometric structures in transformer
representations
|
cs.LG
|
What computational structures emerge in transformers trained on next-token
prediction? In this work, we provide evidence that transformers implement
constrained Bayesian belief updating -- a parallelized version of partial
Bayesian inference shaped by architectural constraints. To do this, we
integrate the model-agnostic theory of optimal prediction with mechanistic
interpretability to analyze transformers trained on a tractable family of
hidden Markov models that generate rich geometric patterns in neural
activations. We find that attention heads carry out an algorithm with a natural
interpretation in the probability simplex, and create representations with
distinctive geometric structure. We show how both the algorithmic behavior and
the underlying geometry of these representations can be theoretically predicted
in detail -- including the attention pattern, OV-vectors, and embedding vectors
-- by modifying the equations for optimal future token predictions to account
for the architectural constraints of attention. Our approach provides a
principled lens on how gradient descent resolves the tension between optimal
prediction and architectural design.
|
2502.01956
|
DHP: Discrete Hierarchical Planning for Hierarchical Reinforcement
Learning Agents
|
cs.RO cs.AI cs.LG
|
In this paper, we address the challenge of long-horizon visual planning tasks
using Hierarchical Reinforcement Learning (HRL). Our key contribution is a
Discrete Hierarchical Planning (DHP) method, an alternative to traditional
distance-based approaches. We provide theoretical foundations for the method
and demonstrate its effectiveness through extensive empirical evaluations.
Our agent recursively predicts subgoals in the context of a long-term goal
and receives discrete rewards for constructing plans as compositions of
abstract actions. The method introduces a novel advantage estimation strategy
for tree trajectories, which inherently encourages shorter plans and enables
generalization beyond the maximum tree depth. The learned policy function
allows the agent to plan efficiently, requiring only $\log N$ computational
steps, making re-planning highly efficient. The agent, based on a soft
actor-critic (SAC) framework, is trained using on-policy imagination data.
Additionally, we propose a novel exploration strategy that enables the agent to
generate relevant training examples for the planning modules. We evaluate our
method on long-horizon visual planning tasks in a 25-room environment, where it
significantly outperforms previous benchmarks in terms of success rate and average
episode length. Furthermore, an ablation study highlights the individual
contributions of key modules to the overall performance.
|
2502.01959
|
MATCNN: Infrared and Visible Image Fusion Method Based on Multi-scale
CNN with Attention Transformer
|
cs.CV
|
While attention-based approaches have shown considerable progress in
enhancing image fusion and addressing the challenges posed by long-range
feature dependencies, their efficacy in capturing local features is compromised
by the lack of diverse receptive field extraction techniques. To overcome the
shortcomings of existing fusion methods in extracting multi-scale local
features and preserving global features, this paper proposes a novel
cross-modal image fusion approach based on a multi-scale convolutional neural
network with attention Transformer (MATCNN). MATCNN utilizes the multi-scale
fusion module (MSFM) to extract local features at different scales and employs
the global feature extraction module (GFEM) to extract global features.
Combining the two reduces the loss of detail features and improves global
feature representation. Simultaneously, an information mask is used to label
pertinent details within the images, aiming to increase the proportion of
salient infrared information and visible background textures preserved in the
fused images. Subsequently, a novel optimization
algorithm is developed, leveraging the mask to guide feature extraction through
the integration of content, structural similarity index measurement, and global
feature loss. Quantitative and qualitative evaluations are conducted across
various datasets, revealing that MATCNN effectively highlights infrared salient
targets, preserves additional details in visible images, and achieves better
fusion results for cross-modal images. The code of MATCNN will be available at
https://github.com/zhang3849/MATCNN.git.
|
2502.01960
|
MPIC: Position-Independent Multimodal Context Caching System for
Efficient MLLM Serving
|
cs.LG
|
Prevailing serving platforms currently employ context caching to accelerate
Multimodal Large Language Model (MLLM) inference. However, this approach
merely reuses the Key-Value (KV) cache of the initial sequence of the prompt,
resulting in full KV cache recomputation even if the prefix differs only
slightly. This becomes particularly inefficient in the context of
interleaved text and images, as well as multimodal retrieval-augmented
generation. This paper proposes position-independent caching as a more
effective approach for multimodal information management. We have designed and
implemented a caching system, named MPIC, to address both system-level and
algorithm-level challenges. MPIC stores the KV cache on local or remote disks
when receiving multimodal data, and calculates and loads the KV cache in
parallel during inference. To mitigate accuracy degradation, we have
incorporated integrated reuse and recompute mechanisms within the system. The
experimental results demonstrate that MPIC can achieve up to 54% reduction in
response time compared to existing context caching systems, while maintaining
negligible or no accuracy loss.
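The core idea of position-independent caching can be sketched by keying cached entries on chunk content rather than prompt position (a simplification; the KV payloads, and MPIC's parallel disk loading and reuse/recompute logic, are elided):

```python
# Position-independent cache sketch: KV entries are keyed by the content hash
# of each chunk (an image or text segment), so a cached chunk is reused even
# when its position in the prompt changes. The stored value is a stand-in for
# a real KV tensor.
import hashlib

cache = {}

def kv_for(chunk: bytes):
    key = hashlib.sha256(chunk).hexdigest()
    if key not in cache:
        cache[key] = f"kv({chunk[:8]!r})"   # hypothetical KV payload
    return cache[key]

kv_for(b"<image-bytes>")
kv_for(b"some text")
hit = kv_for(b"<image-bytes>")              # same chunk, new position: cache hit
assert len(cache) == 2
```

Prefix-based caching would miss here whenever anything before the image changed; content-keyed lookup does not.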
|
2502.01961
|
Hierarchical Consensus Network for Multiview Feature Learning
|
cs.CV cs.LG
|
Multiview feature learning aims to learn discriminative features by
integrating the distinct information in each view. However, most existing
methods still face significant challenges in learning view-consistency
features, which are crucial for effective multiview learning. Motivated by the
theories of CCA and contrastive learning in multiview feature learning, we
propose the hierarchical consensus network (HCN) in this paper. The HCN derives
three consensus indices for capturing the hierarchical consensus across views,
namely classifying consensus, coding consensus, and global
consensus. Specifically, classifying consensus reinforces class-level
correspondence between views from a CCA perspective, while coding consensus
closely resembles contrastive learning and reflects contrastive comparison of
individual instances. Global consensus aims to extract consensus information
from two perspectives simultaneously. By enforcing the hierarchical consensus,
the information within each view is better integrated to obtain more
comprehensive and discriminative features. The extensive experimental results
obtained on four multiview datasets demonstrate that the proposed method
significantly outperforms several state-of-the-art methods.
|
2502.01962
|
Memory Efficient Transformer Adapter for Dense Predictions
|
cs.CV
|
While current Vision Transformer (ViT) adapter methods have shown promising
accuracy, their inference speed is implicitly hindered by inefficient memory
access operations, e.g., standard normalization and frequent reshaping. In this
work, we propose META, a simple and fast ViT adapter that can improve the
model's memory efficiency and decrease memory time consumption by reducing the
inefficient memory access operations. Our method features a memory-efficient
adapter block that enables the common sharing of layer normalization between
the self-attention and feed-forward network layers, thereby reducing the
model's reliance on normalization operations. Within the proposed block, the
cross-shaped self-attention is employed to reduce the model's frequent
reshaping operations. Moreover, we augment the adapter block with a lightweight
convolutional branch that can enhance local inductive biases, particularly
beneficial for the dense prediction tasks, e.g., object detection, instance
segmentation, and semantic segmentation. The adapter block is finally
formulated in a cascaded manner to compute diverse head features, thereby
enriching the variety of feature representations. Empirically, extensive
evaluations on multiple representative datasets validate that META
substantially enhances prediction quality while achieving a new
state-of-the-art accuracy-efficiency trade-off. Theoretically, we demonstrate
that META exhibits superior generalization capability and stronger
adaptability.
|
2502.01968
|
Token Cleaning: Fine-Grained Data Selection for LLM Supervised
Fine-Tuning
|
cs.CL cs.AI
|
Recent studies show that in supervised fine-tuning (SFT) of large language
models (LLMs), data quality matters more than quantity. While most data
cleaning methods concentrate on filtering entire samples, the quality of
individual tokens within a sample can vary significantly. After pre-training,
even in high-quality samples, patterns or phrases that are not task-related can
be redundant or uninformative. Continuing to fine-tune on these patterns may
offer limited benefit and even degrade downstream task performance. In this
paper, we investigate token quality from a noisy-label perspective and propose
a generic token cleaning pipeline for SFT tasks. Our method filters out
uninformative tokens while preserving those carrying key task-specific
information. Specifically, we first evaluate token quality by examining the
influence of model updates on each token, then apply a threshold-based
separation. The token influence can be measured in a single pass with a fixed
reference model or iteratively with self-evolving reference models. The
benefits and limitations of both methods are analyzed theoretically by error
upper bounds. Extensive experiments show that our framework consistently
improves performance across multiple downstream tasks.
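The score-then-threshold step can be sketched as follows; the influence measure here (reference-model log-prob gain over the base model) and the dummy numbers are assumptions for illustration, not the paper's exact formulation:

```python
# Threshold-based token cleaning sketch: a token's influence is the log-prob
# gain of a fixed reference model over the base model; low-influence tokens
# are masked out of the SFT loss.
import numpy as np

def clean_tokens(base_logp, ref_logp, threshold=0.1):
    influence = ref_logp - base_logp        # per-token quality proxy
    return influence > threshold            # True = keep token in the loss

base = np.array([-2.0, -0.5, -3.0, -0.4])
ref  = np.array([-1.0, -0.5, -1.5, -0.4])
mask = clean_tokens(base, ref)
assert mask.tolist() == [True, False, True, False]
```

The iterative variant described in the abstract would periodically refit the reference model on the tokens currently kept.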
|
2502.01969
|
Mitigating Object Hallucinations in Large Vision-Language Models via
Attention Calibration
|
cs.CV cs.AI
|
Large Vision-Language Models (LVLMs) exhibit impressive multimodal reasoning
capabilities but remain highly susceptible to object hallucination, where
models generate responses that are not factually aligned with the visual
content. Recent works attribute this issue to an inherent bias of LVLMs where
vision token attention map has a fixed correlation with spatial position, and
propose to mitigate this issue by reordering visual tokens. However, we find
that different LVLMs exhibit different correlations between attention and
spatial position, which makes the existing solution difficult to generalize to
other LVLMs. To address this issue, we first introduce a training-free
solution, Uniform Attention Calibration (UAC), which estimates the bias from a
single meaningless input image and applies a calibration matrix to rectify
attention imbalances. To further alleviate the bias, we relax the assumption of
a single meaningless input in UAC and introduce a fine-tuning solution, Dynamic
Attention Calibration (DAC), which enforces consistent outputs wherever the
object is located in the image via a plug-and-play module. Comprehensive
experiments across multiple benchmarks demonstrate that UAC and DAC
significantly reduce object hallucination while improving general multimodal
alignment. Our methods achieve state-of-the-art performance across diverse LVLM
architectures on various metrics.
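One plausible form of the UAC-style correction is to divide the observed attention by the bias profile estimated from a meaningless input and renormalize (the exact calibration matrix in the paper may differ; this is an assumed multiplicative form):

```python
# Attention-calibration sketch (assumed form): estimate a positional bias map
# from a meaningless input, divide it out, and renormalize to a distribution.
import numpy as np

def calibrate(attn, bias, eps=1e-8):
    corrected = attn / (bias + eps)         # remove position-dependent bias
    return corrected / corrected.sum(axis=-1, keepdims=True)

bias = np.array([0.4, 0.3, 0.2, 0.1])       # attention on a meaningless image
attn = np.array([0.4, 0.3, 0.2, 0.1])       # a query exhibiting only the bias
out = calibrate(attn, bias)
assert np.allclose(out, 0.25, atol=1e-6)    # bias-only input becomes uniform
```

After calibration, a query that merely reproduced the positional bias attends uniformly, so any remaining non-uniformity reflects image content.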
|
2502.01971
|
Bottom-Up Reputation Promotes Cooperation with Multi-Agent Reinforcement
Learning
|
cs.MA
|
Reputation serves as a powerful mechanism for promoting cooperation in
multi-agent systems, as agents are more inclined to cooperate with those of
good social standing. While existing multi-agent reinforcement learning methods
typically rely on predefined social norms to assign reputations, the question
of how a population reaches a consensus on judgement when agents hold private,
independent views remains unresolved. In this paper, we propose a novel
bottom-up reputation learning method, Learning with Reputation Reward (LR2),
designed to promote cooperative behaviour through rewards shaping based on
assigned reputation. Our agent architecture includes a dilemma policy that
determines cooperation by considering the impact on neighbours, and an
evaluation policy that assigns reputations to affect the actions of neighbours
while optimizing self-objectives. It operates using local observations and
interaction-based rewards, without relying on centralized modules or predefined
norms. Our findings demonstrate the effectiveness and adaptability of LR2
across various spatial social dilemma scenarios. Interestingly, we find that
LR2 stabilizes and enhances cooperation not only with reward reshaping from
bottom-up reputation but also by fostering strategy clustering in structured
populations, thereby creating environments conducive to sustained cooperation.
|
2502.01972
|
Layer Separation: Adjustable Joint Space Width Images Synthesis in
Conventional Radiography
|
eess.IV cs.AI cs.CV cs.LG
|
Rheumatoid arthritis (RA) is a chronic autoimmune disease characterized by
joint inflammation and progressive structural damage. Joint space width (JSW)
is a critical indicator in conventional radiography for evaluating disease
progression, which has become a prominent research topic in computer-aided
diagnostic (CAD) systems. However, deep learning-based radiological CAD systems
for JSW analysis face significant challenges in data quality, including data
imbalance, limited variety, and annotation difficulties. This work introduced a
challenging image synthesis scenario and proposed Layer Separation Networks
(LSN) to accurately separate the soft tissue layer, the upper bone layer, and
the lower bone layer in conventional radiographs of finger joints. Using these
layers, the adjustable JSW images can be synthesized to address data quality
challenges and achieve ground truth (GT) generation. Experimental results
demonstrated that LSN-based synthetic images closely resemble real radiographs,
and significantly enhanced the performance in downstream tasks. The code and
dataset will be available.
|
2502.01976
|
CITER: Collaborative Inference for Efficient Large Language Model
Decoding with Token-Level Routing
|
cs.CL cs.AI cs.LG cs.PF
|
Large language models have achieved remarkable success in various tasks but
suffer from high computational costs during inference, limiting their
deployment in resource-constrained applications. To address this issue, we
propose a novel CITER (Collaborative Inference with Token-lEvel Routing)
framework that enables efficient collaboration between small and large language
models (SLMs & LLMs) through a token-level routing strategy. Specifically,
CITER routes non-critical tokens to an SLM for efficiency and routes critical
tokens to an LLM for generalization quality. We formulate router training as a
policy optimization, where the router receives rewards based on both the
quality of predictions and the inference costs of generation. This allows the
router to learn to predict token-level routing scores and make routing
decisions based on both the current token and the future impact of its
decisions. To further accelerate the reward evaluation process, we introduce a
shortcut that significantly reduces the cost of reward estimation and
improves the practicality of our approach. Extensive experiments on five
benchmark datasets demonstrate that CITER reduces the inference costs while
preserving high-quality generation, offering a promising solution for real-time
and resource-constrained applications. Our data and code are available at
https://github.com/aiming-lab/CITER.
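The routing decision itself is simple to sketch; the router score, threshold, and the two "models" below are stand-ins (CITER trains the router with policy optimization, which is omitted here):

```python
# Token-level routing sketch: cheap tokens go to the small model, critical
# tokens to the large model, chosen by a per-token router score.
def generate(tokens, router_score, slm, llm, tau=0.5):
    out = []
    for tok in tokens:
        model = slm if router_score(tok) < tau else llm   # cheap vs. accurate
        out.append(model(tok))
    return out

slm = lambda t: f"slm:{t}"                  # hypothetical small model
llm = lambda t: f"llm:{t}"                  # hypothetical large model
score = lambda t: 0.9 if t == "critical" else 0.1
result = generate(["a", "critical", "b"], score, slm, llm)
assert result == ["slm:a", "llm:critical", "slm:b"]
```

The interesting part, per the abstract, is that the learned score accounts for the future impact of each routing decision, not just the current token.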
|
2502.01977
|
AutoGUI: Scaling GUI Grounding with Automatic Functionality Annotations
from LLMs
|
cs.CV
|
User interface understanding with vision-language models has received much
attention due to its potential for enabling next-generation software
automation. However, existing UI datasets either only provide large-scale
context-free element annotations or contextualized functional descriptions for
elements at a much smaller scale. In this work, we propose the AutoGUI
pipeline for automatically annotating UI elements with detailed functionality
descriptions at scale. Specifically, we leverage large language models (LLMs)
to infer element functionality by comparing the UI content changes before and
after simulated interactions with specific UI elements. To improve annotation
quality, we propose LLM-aided rejection and verification, eliminating invalid
and incorrect annotations without human labor. We construct an
AutoGUI-704k dataset using the proposed pipeline, featuring
multi-resolution, multi-device screenshots, diverse data domains, and detailed
functionality annotations that have never been provided by previous datasets.
Human evaluation shows that the AutoGUI pipeline achieves annotation
correctness comparable to trained human annotators. Extensive experimental
results show that our AutoGUI-704k dataset remarkably enhances VLMs' UI
grounding capabilities, exhibits significant scaling effects, and outperforms
existing web pre-training data types. We envision AutoGUI as a scalable
pipeline for generating massive data to build GUI-oriented VLMs. AutoGUI
dataset can be viewed at this anonymous URL:
https://autogui-project.github.io/.
|
2502.01979
|
Gradient-Regularized Latent Space Modulation in Large Language Models
for Structured Contextual Synthesis
|
cs.CL
|
Generating structured textual content requires mechanisms that enforce
coherence, stability, and adherence to predefined constraints while maintaining
semantic fidelity. Conventional approaches often rely on rule-based heuristics
or fine-tuning strategies that lack flexibility and generalizability across
diverse tasks. The incorporation of Gradient-Regularized Latent Space
Modulation (GRLSM) introduces a novel paradigm for guiding text generation
through the application of structured constraints within the latent space. The
integration of gradient-based regularization mitigates abrupt variations in
latent representations, ensuring a smoother encoding process that enhances
structural consistency and logical progression within generated sequences.
Comparative evaluations demonstrate that latent space modulation leads to a
reduction in perplexity, increased coherence scores, and improved structural
alignment across multiple domains. Stability assessments further indicate that
the imposition of spectral norm constraints facilitates more controlled
variations in generated text, preserving semantic consistency under input
perturbations. Empirical results confirm that structured latent space
constraints not only refine the organization of generated outputs but also
enhance interpretability through more predictable and reliable synthesis
patterns. Performance metrics illustrate that the GRLSM framework substantially
reduces structural inconsistencies while preserving the generative flexibility
inherent in neural models.
|
2502.01980
|
Generative Data Mining with Longtail-Guided Diffusion
|
cs.LG cs.AI
|
It is difficult to anticipate the myriad challenges that a predictive model
will encounter once deployed. Common practice entails a reactive, cyclical
approach: model deployment, data mining, and retraining. We instead develop a
proactive longtail discovery process by imagining additional data during
training. In particular, we develop general model-based longtail signals,
including a differentiable, single forward pass formulation of epistemic
uncertainty that does not impact model parameters or predictive performance but
can flag rare or hard inputs. We leverage these signals as guidance to generate
additional training data from a latent diffusion model in a process we call
Longtail Guidance (LTG). Crucially, we can perform LTG without retraining the
diffusion model or the predictive model, and we do not need to expose the
predictive model to intermediate diffusion states. Data generated by LTG
exhibit semantically meaningful variation, yield significant generalization
improvements on image classification benchmarks, and can be analyzed to
proactively discover, explain, and address conceptual gaps in a predictive
model.
|
2502.01983
|
Diagrammatics of information
|
math-ph cs.IT math.IT math.MP
|
We introduce a diagrammatic perspective for Shannon entropy created by the
first author and Mikhail Khovanov and connect it to information theory and
mutual information. We also give two complete proofs that the $5$-term
dilogarithm deforms to the $4$-term infinitesimal dilogarithm.
|
2502.01984
|
Efficient Covering Using Reed--Solomon Codes
|
cs.IT eess.SP math.IT
|
We propose an efficient algorithm to find a Reed-Solomon (RS) codeword at a
distance within the covering radius of the code from any point in its ambient
Hamming space. To the best of the authors' knowledge, this is the first attempt
of its kind to solve the covering problem for RS codes. The proposed algorithm
leverages off-the-shelf decoding methods for RS codes, including the
Berlekamp-Welch algorithm for unique decoding and the Guruswami-Sudan algorithm
for list decoding. We also present theoretical and numerical results on the
capabilities of the proposed algorithm and, in particular, the average covering
radius resulting from it. Our numerical results suggest that the overlapping
Hamming spheres of radius close to the Guruswami-Sudan decoding radius centered
at the codewords cover most of the ambient Hamming space.
|
2502.01985
|
Ilargi: a GPU Compatible Factorized ML Model Training Framework
|
cs.LG cs.DC
|
Machine learning (ML) training over disparate data sources traditionally
involves materialization, which can impose substantial time and space overhead
due to data movement and replication. Factorized learning, which leverages
direct computation on disparate sources through linear algebra (LA) rewriting,
has emerged as a viable alternative to improve computational efficiency.
However, the adaptation of factorized learning to leverage the full
capabilities of modern LA-friendly hardware like GPUs has been limited, often
requiring manual intervention for algorithm compatibility. This paper
introduces Ilargi, a novel factorized learning framework that utilizes
matrix-represented data integration (DI) metadata to facilitate automatic
factorization across CPU and GPU environments without the need for costly
relational joins. Ilargi incorporates an ML-based cost estimator to
intelligently select between factorization and materialization based on data
properties, algorithm complexity, hardware environments, and their
interactions. This strategy ensures up to 8.9x speedups on GPUs and achieves
over 20% acceleration in batch ML training workloads, thereby enhancing the
practicability of ML training across diverse data integration scenarios and
hardware platforms. To our knowledge, this work is the first effort toward
GPU-compatible factorized learning.
|
2502.01986
|
DCT-Mamba3D: Spectral Decorrelation and Spatial-Spectral Feature
Extraction for Hyperspectral Image Classification
|
cs.CV eess.IV
|
Hyperspectral image classification presents challenges due to spectral
redundancy and complex spatial-spectral dependencies. This paper proposes a
novel framework, DCT-Mamba3D, for hyperspectral image classification.
DCT-Mamba3D incorporates: (1) a 3D spectral-spatial decorrelation module that
applies 3D discrete cosine transform basis functions to reduce both spectral
and spatial redundancy, enhancing feature clarity across dimensions; (2) a
3D-Mamba module that leverages a bidirectional state-space model to capture
intricate spatial-spectral dependencies; and (3) a global residual enhancement
module that stabilizes feature representation, improving robustness and
convergence. Extensive experiments on benchmark datasets show that our
DCT-Mamba3D outperforms the state-of-the-art methods in challenging scenarios
such as the same object in different spectra and different objects in the same
spectra.
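The spectral-spatial decorrelation module described above rests on a separable 3D DCT. A minimal sketch of that transform, assuming an orthonormal DCT-II applied along each axis of a hyperspectral cube (the surrounding module design is not reproduced here):

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix (rows are basis functions)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0] /= np.sqrt(2.0)  # first row rescaled so the matrix is orthonormal
    return m

def dct3(cube: np.ndarray) -> np.ndarray:
    """Separable 3D DCT: transform each axis of an (H, W, Bands) cube in turn."""
    out = cube
    for axis in range(3):
        d = dct_matrix(cube.shape[axis])
        out = np.moveaxis(np.tensordot(d, np.moveaxis(out, axis, 0), axes=1), 0, axis)
    return out

cube = np.random.default_rng(0).normal(size=(4, 5, 6))
coeffs = dct3(cube)
# The transform is orthonormal, so energy is preserved (Parseval's identity).
assert np.isclose((cube ** 2).sum(), (coeffs ** 2).sum())
```

Because the basis is orthonormal, the transform compacts correlated spectral-spatial energy into few coefficients without losing information.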
|
2502.01987
|
Online Adaptive Traversability Estimation through Interaction for
Unstructured, Densely Vegetated Environments
|
cs.RO
|
Navigating densely vegetated environments poses significant challenges for
autonomous ground vehicles. Learning-based systems typically use prior and
in-situ data to predict terrain traversability but often degrade in performance
when encountering out-of-distribution elements caused by rapid environmental
changes or novel conditions. This paper presents a novel, lidar-only, online
adaptive traversability estimation (TE) method that trains a model directly on
the robot using self-supervised data collected through robot-environment
interaction. The proposed approach utilises a probabilistic 3D voxel
representation to integrate lidar measurements and robot experience, creating a
salient environmental model. To ensure computational efficiency, a sparse
graph-based representation is employed to update temporally evolving voxel
distributions. Extensive experiments with an unmanned ground vehicle in natural
terrain demonstrate that the system adapts to complex environments with as
little as 8 minutes of operational data, achieving a Matthews Correlation
Coefficient (MCC) score of 0.63 and enabling safe navigation in densely
vegetated environments. This work examines different training strategies for
voxel-based TE methods and offers recommendations for training strategies to
improve adaptability. The proposed method is validated on a robotic platform
with limited computational resources (25W GPU), achieving accuracy comparable
to offline-trained models while maintaining reliable performance across varied
environments.
|
2502.01988
|
ReMiDi: Reconstruction of Microstructure Using a Differentiable
Diffusion MRI Simulator
|
eess.IV cs.GR cs.LG physics.med-ph
|
We propose ReMiDi, a novel method for inferring neuronal microstructure as
arbitrary 3D meshes using a differentiable diffusion Magnetic Resonance Imaging
(dMRI) simulator. We first implemented in PyTorch a differentiable dMRI
simulator that simulates the forward diffusion process using a finite-element
method on an input 3D microstructure mesh. To achieve significantly faster
simulations, we solve the differential equation semi-analytically using a
matrix formalism approach. Given a reference dMRI signal $S_{ref}$, we use the
differentiable simulator to iteratively update the input mesh such that it
matches $S_{ref}$ using gradient-based learning. Since directly optimizing the
3D coordinates of the vertices is challenging, particularly due to
ill-posedness of the inverse problem, we instead optimize a lower-dimensional
latent space representation of the mesh. The mesh is first encoded into
spectral coefficients, which are further encoded into a latent $\textbf{z}$
using an auto-encoder, and are then decoded back into the true mesh. We present
an end-to-end differentiable pipeline that simulates signals that can be tuned
to match a reference signal by iteratively updating the latent representation
$\textbf{z}$. We demonstrate the ability to reconstruct microstructures of
arbitrary shapes represented by finite-element meshes, with a focus on axonal
geometries found in the brain white matter, including bending, fanning and
beading fibers. Our source code will be made available online.
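The latent-space inverse loop can be illustrated in miniature: a toy orthonormal linear map stands in for the mesh decoder plus differentiable dMRI simulator, and gradient descent on the latent alone recovers the latent that reproduces a reference signal. All names and shapes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy orthonormal "decoder": latent z -> simulated signal (stands in for the
# mesh decoder composed with the differentiable simulator).
decode, _ = np.linalg.qr(rng.normal(size=(8, 3)))
z_true = np.array([0.5, -1.0, 2.0])
s_ref = decode @ z_true                    # reference signal S_ref to match

z = np.zeros(3)                            # start from an uninformative latent
for _ in range(1000):
    residual = decode @ z - s_ref          # forward-"simulated" signal error
    z -= 0.01 * (2 * decode.T @ residual)  # gradient step on the latent only

assert np.allclose(z, z_true, atol=1e-6)
```

Optimizing the low-dimensional latent rather than raw vertex coordinates is what keeps this inverse problem well behaved, mirroring the design choice in the abstract.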
|
2502.01989
|
T-SCEND: Test-time Scalable MCTS-enhanced Diffusion Model
|
cs.LG
|
We introduce Test-time Scalable MCTS-enhanced Diffusion Model (T-SCEND), a
novel framework that significantly improves diffusion model's reasoning
capabilities with better energy-based training and scaling up test-time
computation. We first show that na\"ively scaling up inference budget for
diffusion models yields marginal gain. To address this, the training of T-SCEND
consists of a novel linear-regression negative contrastive learning objective
to improve the performance-energy consistency of the energy landscape, and a KL
regularization to reduce adversarial sampling. During inference, T-SCEND
integrates the denoising process with a novel hybrid Monte Carlo Tree Search
(hMCTS), which sequentially performs best-of-N random search and MCTS as
denoising proceeds. On challenging reasoning tasks of Maze and Sudoku, we
demonstrate the effectiveness of T-SCEND's training objective and scalable
inference method. In particular, trained with Maze sizes of up to $6\times6$,
our T-SCEND solves $88\%$ of Maze problems with much larger sizes of
$15\times15$, while standard diffusion completely fails. Code to reproduce the
experiments can be found at https://github.com/AI4Science-WestlakeU/t_scend.
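The first stage of the hybrid search, best-of-N random search under an energy function, can be sketched as follows; the energy and proposal here are toy stand-ins, not the paper's denoiser-based versions.

```python
import numpy as np

def best_of_n(energy, propose, n: int, rng):
    """Best-of-N random search: draw n candidates, keep the lowest-energy one."""
    candidates = [propose(rng) for _ in range(n)]
    return candidates[int(np.argmin([energy(c) for c in candidates]))]

# Toy energy: squared distance to a hidden target in [-1, 1]^2.
target = np.array([0.3, -0.7])
energy = lambda x: float(np.sum((x - target) ** 2))
propose = lambda rng: rng.uniform(-1, 1, size=2)

best = best_of_n(energy, propose, 256, np.random.default_rng(0))
assert energy(best) < 0.2  # with 256 draws, one lands near the target
```

In hMCTS this cheap selection would hand its survivors to a tree search as denoising proceeds, trading breadth early for depth late.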
|
2502.01990
|
Rethinking Timesteps Samplers and Prediction Types
|
cs.LG cs.CV
|
Diffusion models suffer from the huge consumption of time and resources to
train. For example, diffusion models need hundreds of GPUs to train for several
weeks for a high-resolution generative task to meet the requirements of an
extremely large number of iterations and a large batch size. Training diffusion
models becomes a millionaire's game. With limited resources that fit only a
small batch size, training a diffusion model always fails. In this paper, we
investigate the key reasons behind the difficulties of training diffusion
models with limited resources. Through numerous experiments and demonstrations,
we identified a major factor: the significant variation in the training losses
across different timesteps, which can easily disrupt the progress made in
previous iterations. Moreover, different prediction types of $x_0$ exhibit
varying effectiveness depending on the task and timestep. We hypothesize that
using a mixed-prediction approach to identify the most accurate $x_0$
prediction type could potentially serve as a breakthrough in addressing this
issue. In this paper, we outline several challenges and insights, with the hope
of inspiring further research aimed at tackling the limitations of training
diffusion models with constrained resources, particularly for high-resolution
tasks.
|
2502.01991
|
Can LLMs Assist Annotators in Identifying Morality Frames? -- Case Study
on Vaccination Debate on Social Media
|
cs.CL cs.AI cs.CY cs.HC cs.SI
|
Nowadays, social media is pivotal in shaping public discourse, especially on
polarizing issues like vaccination, where diverse moral perspectives influence
individual opinions. In NLP, data scarcity and complexity of psycholinguistic
tasks, such as identifying morality frames, make relying solely on human
annotators costly, time-consuming, and prone to inconsistency due to cognitive
load. To address these issues, we leverage large language models (LLMs), which
are adept at adapting new tasks through few-shot learning, utilizing a handful
of in-context examples coupled with explanations that connect examples to task
principles. Our research explores LLMs' potential to assist human annotators in
identifying morality frames within vaccination debates on social media. We
employ a two-step process: generating concepts and explanations with LLMs,
followed by human evaluation using a "think-aloud" tool. Our study shows that
integrating LLMs into the annotation process enhances accuracy, reduces task
difficulty, and lowers cognitive load, suggesting a promising avenue for human-AI
collaboration in complex psycholinguistic tasks.
|
2502.01992
|
FinRLlama: A Solution to LLM-Engineered Signals Challenge at FinRL
Contest 2024
|
q-fin.TR cs.LG
|
In response to Task II of the FinRL Challenge at ACM ICAIF 2024, this study
proposes a novel prompt framework for fine-tuning large language models (LLM)
with Reinforcement Learning from Market Feedback (RLMF). Our framework
incorporates market-specific features and short-term price dynamics to generate
more precise trading signals. Traditional LLMs, while competent in sentiment
analysis, lack contextual alignment for financial market applications. To
bridge this gap, we fine-tune the LLaMA-3.2-3B-Instruct model using a custom
RLMF prompt design that integrates historical market data and reward-based
feedback. Our evaluation shows that this RLMF-tuned framework outperforms
baseline methods in signal consistency and achieves tighter trading outcomes;
it was awarded winner of Task II. The code for this project is available on GitHub.
|
2502.01993
|
One Diffusion Step to Real-World Super-Resolution via Flow Trajectory
Distillation
|
cs.CV
|
Diffusion models (DMs) have significantly advanced the development of
real-world image super-resolution (Real-ISR), but the computational cost of
multi-step diffusion models limits their application. One-step diffusion models
generate high-quality images in a single sampling step, greatly reducing
computational overhead and inference latency. However, most existing one-step
diffusion methods are constrained by the performance of the teacher model,
where poor teacher performance results in image artifacts. To address this
limitation, we propose FluxSR, a novel one-step diffusion Real-ISR technique
based on flow matching models. We use the state-of-the-art diffusion model
FLUX.1-dev as both the teacher model and the base model. First, we introduce
Flow Trajectory Distillation (FTD) to distill a multi-step flow matching model
into a one-step Real-ISR. Second, to improve image realism and address
high-frequency artifact issues in generated images, we propose TV-LPIPS as a
perceptual loss and introduce Attention Diversification Loss (ADL) as a
regularization term to reduce token similarity in the transformer, thereby
eliminating high-frequency artifacts. Comprehensive experiments demonstrate
that our method outperforms existing one-step diffusion-based Real-ISR methods.
The code and model will be released at https://github.com/JianzeLi-114/FluxSR.
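The LPIPS half of the proposed TV-LPIPS loss needs a pretrained perceptual network, so only the total-variation term is sketched here, assuming the common anisotropic formulation (the paper's exact variant is not specified in this abstract):

```python
import numpy as np

def total_variation(img: np.ndarray) -> float:
    """Anisotropic TV: sum of absolute horizontal and vertical differences.
    Penalizes high-frequency artifacts while leaving smooth regions untouched."""
    dh = np.abs(np.diff(img, axis=-1)).sum()
    dv = np.abs(np.diff(img, axis=-2)).sum()
    return float(dh + dv)

flat = np.ones((8, 8))
noisy = flat + np.random.default_rng(0).normal(scale=0.5, size=(8, 8))
# High-frequency noise raises the TV penalty; a constant image has none.
assert total_variation(flat) == 0.0
assert total_variation(noisy) > total_variation(flat)
```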
|
2502.01995
|
Theoretical and Practical Analysis of Fr\'echet Regression via
Comparison Geometry
|
stat.ML cs.AI cs.LG
|
Fr\'echet regression extends classical regression methods to non-Euclidean
metric spaces, enabling the analysis of data relationships on complex
structures such as manifolds and graphs. This work establishes a rigorous
theoretical analysis for Fr\'echet regression through the lens of comparison
geometry which leads to important considerations for its use in practice. The
analysis provides key results on the existence, uniqueness, and stability of
the Fr\'echet mean, along with statistical guarantees for nonparametric
regression, including exponential concentration bounds and convergence rates.
Additionally, insights into angle stability reveal the interplay between
curvature of the manifold and the behavior of the regression estimator in these
non-Euclidean contexts. Empirical experiments validate the theoretical
findings, demonstrating the effectiveness of proposed hyperbolic mappings,
particularly for data with heteroscedasticity, and highlighting the practical
usefulness of these results.
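The central object above, the Fréchet mean, generalizes the ordinary mean to metric spaces by minimizing the mean squared geodesic distance. A minimal sketch on the circle, using a grid-search argmin (an illustrative solver, not the paper's estimator):

```python
import numpy as np

def frechet_mean_circle(angles: np.ndarray, grid: int = 3600) -> float:
    """Frechet mean on S^1: candidate angle minimizing mean squared
    geodesic (arc-length) distance to the data."""
    cand = np.linspace(0, 2 * np.pi, grid, endpoint=False)
    # pairwise arc-length distances, wrapped into [0, pi]
    d = np.abs((angles[None, :] - cand[:, None] + np.pi) % (2 * np.pi) - np.pi)
    return float(cand[np.argmin((d ** 2).mean(axis=1))])

# For tightly clustered angles the Frechet mean matches the intuitive average.
angles = np.array([0.1, 0.2, 0.3])
assert abs(frechet_mean_circle(angles) - 0.2) < 0.01
```

Existence and uniqueness of this minimizer are exactly what the curvature conditions in the analysis above guarantee; on spaces with closed geodesics the minimizer can fail to be unique.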
|
2502.01998
|
Data Guard: A Fine-grained Purpose-based Access Control System for Large
Data Warehouses
|
cs.DB
|
The last few years have witnessed a spate of data protection regulations in
conjunction with an ever-growing appetite for data usage in large businesses,
thus presenting significant challenges for businesses to maintain compliance.
To address this conflict, we present Data Guard - a fine-grained, purpose-based
access control system for large data warehouses. Data Guard enables authoring
policies based on semantic descriptions of data and purpose of data access.
Data Guard then translates these policies into SQL views that mask data from
the underlying warehouse tables. At access time, Data Guard ensures compliance
by transparently routing each table access to the appropriate data-masking view
based on the purpose of the access, thus minimizing the effort of adopting Data
Guard in existing applications. Our enforcement solution allows masking data at
much finer granularities than what traditional solutions allow. In addition to
row and column level data masking, Data Guard can mask data at the sub-cell
level for columns with non-atomic data types such as structs, arrays, and maps.
This fine-grained masking allows Data Guard to preserve data utility for
consumers while ensuring compliance. We implemented a number of performance
optimizations to minimize the overhead of data masking operations. We perform
numerous experiments to identify the key factors that influence the data
masking overhead and demonstrate the efficiency of our implementation.
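The policy-to-view translation described above can be sketched as follows; the policy table, table names, and NULL-based masking rule are hypothetical illustrations, not Data Guard's actual policy language.

```python
# Hypothetical purpose-based policy: which columns each purpose may see in clear.
POLICIES = {
    "fraud_detection": {"user_id", "amount", "merchant"},
    "analytics": {"amount", "merchant"},
}

def masking_view(table: str, columns: list, purpose: str) -> str:
    """Emit a SQL view that masks (NULLs out) columns the purpose may not access."""
    allowed = POLICIES[purpose]
    exprs = [c if c in allowed else f"NULL AS {c}" for c in columns]
    return (f"CREATE VIEW {table}__{purpose} AS "
            f"SELECT {', '.join(exprs)} FROM {table}")

sql = masking_view("payments", ["user_id", "amount", "merchant"], "analytics")
assert sql == ("CREATE VIEW payments__analytics AS "
               "SELECT NULL AS user_id, amount, merchant FROM payments")
```

At access time, routing a query for purpose "analytics" to `payments__analytics` instead of `payments` enforces the policy without changing application SQL, which is the transparency property the abstract highlights.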
|
2502.02002
|
The Ball-Proximal (="Broximal") Point Method: a New Algorithm,
Convergence Theory, and Applications
|
math.OC cs.LG stat.ML
|
Non-smooth and non-convex global optimization poses significant challenges
across various applications, where standard gradient-based methods often
struggle. We propose the Ball-Proximal Point Method, Broximal Point Method, or
Ball Point Method (BPM) for short - a novel algorithmic framework inspired by
the classical Proximal Point Method (PPM) (Rockafellar, 1976), which, as we
show, sheds new light on several foundational optimization paradigms and
phenomena, including non-convex and non-smooth optimization, acceleration,
smoothing, adaptive stepsize selection, and trust-region methods. At the core
of BPM lies the ball-proximal ("broximal") operator, which arises from the
classical proximal operator by replacing the quadratic distance penalty by a
ball constraint. Surprisingly, and in sharp contrast with the sublinear rate of
PPM in the nonsmooth convex regime, we prove that BPM converges linearly and in
a finite number of steps in the same regime. Furthermore, by introducing the
concept of ball-convexity, we prove that BPM retains the same global
convergence guarantees under weaker assumptions, making it a powerful tool for
a broader class of potentially non-convex optimization problems. Just like PPM
plays the role of a conceptual method inspiring the development of practically
efficient algorithms and algorithmic elements, e.g., gradient descent, adaptive
step sizes, acceleration (Ahn & Sra, 2020), and "W" in AdamW (Zhuang et al.,
2022), we believe that BPM should be understood in the same manner: as a
blueprint and inspiration for further development.
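The broximal operator and BPM's finite-step convergence on a nonsmooth convex function can be sketched in one dimension; the grid-search minimizer over the ball is an illustrative implementation of the operator, not the paper's method.

```python
import numpy as np

def broximal_step(f, x: float, r: float, grid: int = 20001) -> float:
    """Ball-proximal ("broximal") operator: minimize f over the ball |y - x| <= r."""
    ys = np.linspace(x - r, x + r, grid)
    return float(ys[np.argmin(f(ys))])

f = np.abs                      # nonsmooth convex objective, minimizer at 0
x, r = 10.0, 1.0
steps = 0
while abs(x) > 1e-9 and steps < 100:
    x = broximal_step(f, x, r)  # each step moves distance r toward the minimizer
    steps += 1

# BPM hits the exact minimizer in finitely many steps: ceil(10 / 1) = 10,
# in sharp contrast with the sublinear rate of subgradient-type methods here.
assert steps == 10 and abs(x) < 1e-9
```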
|
2502.02004
|
Wavelet-based Positional Representation for Long Context
|
cs.CL
|
In the realm of large-scale language models, a significant challenge arises
when extrapolating sequences beyond the maximum allowable length. This is
because the model's position embedding mechanisms are limited to positions
encountered during training, thus preventing effective representation of
positions in longer sequences. We analyzed conventional position encoding
methods for long contexts and found the following characteristics. (1) When the
representation dimension is regarded as the time axis, Rotary Position
Embedding (RoPE) can be interpreted as a restricted wavelet transform using
Haar-like wavelets. However, because it uses only a fixed scale parameter, it
does not fully exploit the advantages of wavelet transforms, which capture the
fine movements of non-stationary signals using multiple scales (window sizes).
This limitation could explain why RoPE performs poorly in extrapolation. (2)
Previous research as well as our own analysis indicates that Attention with
Linear Biases (ALiBi) functions similarly to windowed attention, using windows
of varying sizes. However, it has limitations in capturing deep dependencies
because it restricts the receptive field of the model. From these insights, we
propose a new position representation method that captures multiple scales
(i.e., window sizes) by leveraging wavelet transforms without limiting the
model's attention field. Experimental results show that this new method
improves the performance of the model in both short and long contexts. In
particular, our method allows extrapolation of position information without
limiting the model's attention field.
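The fixed-scale rotation that the analysis above identifies in RoPE can be sketched directly; the implementation below follows the standard pairwise-rotation formulation and verifies the relative-position property that makes it attention-compatible.

```python
import numpy as np

def rope(x: np.ndarray, pos: int, base: float = 10000.0) -> np.ndarray:
    """Rotary position embedding: rotate consecutive dimension pairs of a
    head vector by position-dependent angles with a single fixed scale."""
    d = x.shape[-1]
    theta = base ** (-np.arange(d // 2) / (d // 2))  # one frequency per pair
    ang = pos * theta
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = x[0::2], x[1::2]
    out = np.empty_like(x)
    out[0::2] = x1 * cos - x2 * sin
    out[1::2] = x1 * sin + x2 * cos
    return out

q = np.random.default_rng(0).normal(size=8)
k = np.random.default_rng(1).normal(size=8)
# The attention score depends only on the relative offset (here, 3):
s1 = rope(q, 5) @ rope(k, 2)
s2 = rope(q, 103) @ rope(k, 100)
assert np.isclose(s1, s2)
```

Seen as a signal transform, each pair uses one fixed frequency scale, which is the "single scale parameter" restriction the wavelet view of the abstract relaxes.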
|
2502.02007
|
Reasoning Bias of Next Token Prediction Training
|
cs.CL cs.LG
|
Since the inception of Large Language Models (LLMs), the quest to efficiently
train them for superior reasoning capabilities has been a pivotal challenge.
The dominant training paradigm for LLMs is based on next token prediction
(NTP). Alternative methodologies, called Critical Token Prediction (CTP),
focused exclusively on specific critical tokens (such as the answer in Q\&A
dataset), aiming to reduce the overfitting of extraneous information and noise.
Contrary to initial assumptions, our research reveals that despite NTP's
exposure to noise during training, it surpasses CTP in reasoning ability. We
attribute this counterintuitive outcome to the regularizing influence of noise
on the training dynamics. Our empirical analysis shows that NTP-trained models
exhibit enhanced generalization and robustness across various benchmark
reasoning datasets, demonstrating greater resilience to perturbations and
achieving flatter loss minima. These findings illuminate that NTP is
instrumental in fostering reasoning abilities during pretraining, whereas CTP
is more effective for finetuning, thereby enriching our comprehension of
optimal training strategies in LLM development.
|
2502.02009
|
LLMSecConfig: An LLM-Based Approach for Fixing Software Container
Misconfigurations
|
cs.SE cs.AI cs.CR cs.LG
|
Security misconfigurations in Container Orchestrators (COs) can pose serious
threats to software systems. While Static Analysis Tools (SATs) can effectively
detect these security vulnerabilities, the industry currently lacks automated
solutions capable of fixing these misconfigurations. The emergence of Large
Language Models (LLMs), with their proven capabilities in code understanding
and generation, presents an opportunity to address this limitation. This study
introduces LLMSecConfig, an innovative framework that bridges this gap by
combining SATs with LLMs. Our approach leverages advanced prompting techniques
and Retrieval-Augmented Generation (RAG) to automatically repair security
misconfigurations while preserving operational functionality. Evaluation of
1,000 real-world Kubernetes configurations achieved a 94\% success rate while
maintaining a low rate of introducing new misconfigurations.
Our work makes a promising step towards automated container security
management, reducing the manual effort required for configuration maintenance.
|
2502.02013
|
Layer by Layer: Uncovering Hidden Representations in Language Models
|
cs.LG cs.AI cs.CL
|
From extracting features to generating text, the outputs of large language
models (LLMs) typically rely on their final layers, following the conventional
wisdom that earlier layers capture only low-level cues. However, our analysis
shows that intermediate layers can encode even richer representations, often
improving performance on a wide range of downstream tasks. To explain and
quantify these hidden-layer properties, we propose a unified framework of
representation quality metrics based on information theory, geometry, and
invariance to input perturbations. Our framework highlights how each model
layer balances information compression and signal preservation, revealing why
mid-depth embeddings can exceed the last layer's performance. Through extensive
experiments on 32 text-embedding tasks and comparisons across model
architectures (transformers, state-space models) and domains (language,
vision), we demonstrate that intermediate layers consistently provide stronger
features. These findings challenge the standard focus on final-layer embeddings
and open new directions for model analysis and optimization, including
strategic use of mid-layer representations for more robust and accurate AI
systems.
|
2502.02014
|
Analytical Lyapunov Function Discovery: An RL-based Generative Approach
|
cs.LG cs.AI cs.SC cs.SY eess.SY
|
Despite advances in learning-based methods, finding valid Lyapunov functions
for nonlinear dynamical systems remains challenging. Current neural network
approaches face two main issues: challenges in scalable verification and
limited interpretability. To address these, we propose an end-to-end framework
using transformers to construct analytical Lyapunov functions (local), which
simplifies formal verification, enhances interpretability, and provides
valuable insights for control engineers. Our framework consists of a
transformer-based trainer that generates candidate Lyapunov functions and a
falsifier that verifies candidate expressions and refines the model via
risk-seeking policy gradient. Unlike Alfarano et al. (2024), which utilizes
pre-training and seeks global Lyapunov functions for low-dimensional systems,
our model is trained from scratch via reinforcement learning (RL) and succeeds
in finding local Lyapunov functions for high-dimensional and non-polynomial
systems. Given the analytical nature of the candidates, we employ efficient
optimization methods for falsification during training and formal verification
tools for the final verification. We demonstrate the efficiency of our approach
on a range of nonlinear dynamical systems with up to ten dimensions and show
that it can discover Lyapunov functions not previously identified in the
control literature.
|
2502.02015
|
The Wisdom of Intellectually Humble Networks
|
cs.SI
|
People's collectively held beliefs can have significant social implications,
including on democratic processes and policies. Unfortunately, as people
interact with peers to form and update their beliefs, various cognitive and
social biases can hinder their collective wisdom. In this paper, we probe
whether and how the psychological construct of intellectual humility can
modulate collective wisdom in a networked interaction setting. Through
agent-based modeling and data-calibrated simulations, we provide a proof of
concept demonstrating that intellectual humility can foster more accurate
estimations while mitigating polarization in social networks. We investigate
the mechanisms behind the performance improvements and confirm robustness
across task settings and network structures. Our work can guide intervention
designs to capitalize on the promises of intellectual humility in boosting
collective wisdom in social networks.
|
2502.02016
|
A Periodic Bayesian Flow for Material Generation
|
cs.LG cs.AI
|
Generative modeling of crystal data distribution is an important yet
challenging task due to the unique periodic physical symmetry of crystals.
Diffusion-based methods have shown early promise in modeling crystal
distribution. More recently, Bayesian Flow Networks were introduced to
aggregate noisy latent variables, resulting in a variance-reduced parameter
space that has been shown to be advantageous for modeling Euclidean data
distributions with structural constraints (Song et al., 2023). Inspired by
this, we seek to unlock its potential for modeling variables located in
non-Euclidean manifolds, e.g., those within crystal structures, by overcoming
challenging theoretical issues. We introduce CrysBFN, a novel crystal
generation method by proposing a periodic Bayesian flow, which essentially
differs from the original Gaussian-based BFN by exhibiting non-monotonic
entropy dynamics. To successfully realize the concept of periodic Bayesian
flow, CrysBFN integrates a new entropy conditioning mechanism and empirically
demonstrates its significance compared to time-conditioning. Extensive
experiments over both crystal ab initio generation and crystal structure
prediction tasks demonstrate the superiority of CrysBFN, which consistently
achieves new state-of-the-art on all benchmarks. Surprisingly, we found that
CrysBFN enjoys a significant improvement in sampling efficiency, e.g., a ~100x
speedup (10 vs. 2000 network forward steps) compared with previous
diffusion-based methods on MP-20 dataset. Code is available at
https://github.com/wu-han-lin/CrysBFN.
|
2502.02017
|
Multi-Domain Graph Foundation Models: Robust Knowledge Transfer via
Topology Alignment
|
cs.SI cs.AI cs.LG
|
Recent advances in CV and NLP have inspired researchers to develop
general-purpose graph foundation models through pre-training across diverse
domains. However, a fundamental challenge arises from the substantial
differences in graph topologies across domains. Additionally, real-world graphs
are often sparse and prone to noisy connections and adversarial attacks. To
address these issues, we propose the Multi-Domain Graph Foundation Model
(MDGFM), a unified framework that aligns and leverages cross-domain topological
information to facilitate robust knowledge transfer. MDGFM bridges different
domains by adaptively balancing features and topology while refining original
graphs to eliminate noise and align topological structures. To further enhance
knowledge transfer, we introduce an efficient prompt-tuning approach. By
aligning topologies, MDGFM not only improves multi-domain pre-training but also
enables robust knowledge transfer to unseen domains. Theoretical analyses
provide guarantees of MDGFM's effectiveness and domain generalization
capabilities. Extensive experiments on both homophilic and heterophilic graph
datasets validate the robustness and efficacy of our method.
|
2502.02018
|
Dual Ensembled Multiagent Q-Learning with Hypernet Regularizer
|
cs.MA cs.LG
|
Overestimation in single-agent reinforcement learning has been extensively
studied. In contrast, overestimation in the multiagent setting has received
comparatively little attention, although it increases with the number of agents
and leads to severe learning instability. Previous works concentrate on
reducing overestimation in the estimation process of target Q-value. They
ignore the follow-up optimization process of online Q-network, thus making it
hard to fully address the complex multiagent overestimation problem. To solve
this challenge, in this study, we first establish an iterative
estimation-optimization analysis framework for multiagent value-mixing
Q-learning. Our analysis reveals that multiagent overestimation not only comes
from the computation of target Q-value but also accumulates in the online
Q-network's optimization. Motivated by this, we propose the Dual Ensembled
Multiagent Q-Learning with Hypernet Regularizer algorithm to tackle multiagent
overestimation from two aspects. First, we extend the random ensemble technique
into the estimation of target individual and global Q-values to derive a lower
update target. Second, we propose a novel hypernet regularizer on hypernetwork
weights and biases to constrain the optimization of online global Q-network to
prevent overestimation accumulation. Extensive experiments in MPE and SMAC show
that the proposed method successfully addresses overestimation across various
tasks.
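The first ingredient above, a random-ensemble target that lowers the update value, can be sketched as follows; the ensemble values and subset size are toy stand-ins for the individual/global Q-estimates in the paper.

```python
import numpy as np

def ensemble_target(q_values: np.ndarray, subset_size: int, rng) -> float:
    """Random-ensemble target: min over a random subset of ensemble Q-estimates,
    yielding a lower, less overestimation-prone update target."""
    idx = rng.choice(len(q_values), size=subset_size, replace=False)
    return float(q_values[idx].min())

rng = np.random.default_rng(0)
estimates = 1.0 + rng.normal(scale=0.3, size=10)  # noisy estimates of Q = 1.0
targets = [ensemble_target(estimates, subset_size=3, rng=rng) for _ in range(200)]
# Every subset-min target sits at or below the optimistic max-style target.
assert all(t <= estimates.max() for t in targets)
```

The hypernet regularizer is the complementary piece: it constrains the online network's weights so that whatever bias remains does not accumulate during optimization.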
|
2502.02020
|
Causal bandits with backdoor adjustment on unknown Gaussian DAGs
|
cs.LG stat.ME
|
The causal bandit problem aims to sequentially learn the intervention that
maximizes the expectation of a reward variable within a system governed by a
causal graph. Most existing approaches assume prior knowledge of the graph
structure, or impose unrealistically restrictive conditions on the graph. In
this paper, we assume a Gaussian linear directed acyclic graph (DAG) over arms
and the reward variable, and study the causal bandit problem when the graph
structure is unknown. We identify backdoor adjustment sets for each arm using
sequentially generated experimental and observational data during the decision
process, which allows us to estimate causal effects and construct upper
confidence bounds. By integrating estimates from both data sources, we develop
a novel bandit algorithm, based on modified upper confidence bounds, to
sequentially determine the optimal intervention. We establish both
case-dependent and case-independent upper bounds on the cumulative regret for
our algorithm, which improve upon the bounds of the standard multi-armed bandit
algorithms. Our empirical study demonstrates its advantage over existing
methods with respect to cumulative regret and computation time.
|
2502.02021
|
Multi-illuminant Color Constancy via Multi-scale Illuminant Estimation
and Fusion
|
cs.CV eess.IV
|
Multi-illuminant color constancy methods aim to eliminate local color casts
within an image through pixel-wise illuminant estimation. Existing methods
mainly employ deep learning to establish a direct mapping between an image and
its illumination map, which neglects the impact of image scales. To alleviate
this problem, we represent an illuminant map as the linear combination of
components estimated from multi-scale images. Furthermore, we propose a
tri-branch convolutional network to estimate multi-grained illuminant
distribution maps from multi-scale images. These multi-grained illuminant maps
are merged adaptively with an attentional illuminant fusion module. Through
comprehensive experimental analysis and evaluation, the results demonstrate the
effectiveness of our method, and it has achieved state-of-the-art performance.
|
2502.02024
|
UD-Mamba: A pixel-level uncertainty-driven Mamba model for medical image
segmentation
|
eess.IV cs.CV
|
Recent advancements have highlighted the Mamba framework, a state-space model
known for its efficiency in capturing long-range dependencies with linear
computational complexity. While Mamba has shown competitive performance in
medical image segmentation, it encounters difficulties in modeling local
features due to the sporadic nature of traditional location-based scanning
methods and the complex, ambiguous boundaries often present in medical images.
To overcome these challenges, we propose Uncertainty-Driven Mamba (UD-Mamba),
which redefines the pixel-order scanning process by incorporating channel
uncertainty into the scanning mechanism. UD-Mamba introduces two key scanning
techniques: 1) sequential scanning, which prioritizes regions with high
uncertainty by scanning in a row-by-row fashion, and 2) skip scanning, which
processes columns vertically, moving from high-to-low or low-to-high
uncertainty at fixed intervals. Sequential scanning efficiently clusters
high-uncertainty regions, such as boundaries and foreground objects, to improve
segmentation precision, while skip scanning enhances the interaction between
background and foreground regions, allowing for timely integration of
background information to support more accurate foreground inference.
Recognizing the advantages of scanning from certain to uncertain areas, we
introduce four learnable parameters to balance the importance of features
extracted from different scanning methods. Additionally, a cosine consistency
loss is employed to mitigate the drawbacks of transitioning between uncertain
and certain regions during the scanning process. Our method demonstrates robust
segmentation performance, validated across three distinct medical imaging
datasets involving pathology, dermatological lesions, and cardiac tasks.
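The two scanning orders can be illustrated with a small numpy sketch that reorders a feature map according to a per-pixel uncertainty map. The shapes, the mean-uncertainty scoring, and the fixed-interval scheme below are illustrative assumptions, not the exact UD-Mamba implementation:

```python
import numpy as np

def sequential_scan(feat, uncert):
    # Order rows by descending mean uncertainty, then flatten row-by-row,
    # so high-uncertainty regions (boundaries, foreground) are visited first.
    order = np.argsort(-uncert.mean(axis=1))
    return feat[order].reshape(-1, feat.shape[-1]), order

def skip_scan(feat, uncert, stride=2):
    # Visit columns at fixed intervals along the uncertainty ordering,
    # interleaving high- and low-uncertainty columns so background context
    # is integrated alongside foreground inference.
    col_order = np.argsort(-uncert.mean(axis=0))
    picked = np.concatenate([col_order[i::stride] for i in range(stride)])
    return feat[:, picked].transpose(1, 0, 2).reshape(-1, feat.shape[-1]), picked
```

Both functions return a flattened token sequence plus the visiting order, which is the permutation a state-space model would consume.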
|
2502.02026
|
ContinuouSP: Generative Model for Crystal Structure Prediction with
Invariance and Continuity
|
cs.LG cond-mat.mtrl-sci
|
The discovery of new materials using crystal structure prediction (CSP) based
on generative machine learning models has become a significant research topic
in recent years. In this paper, we study invariance and continuity in
generative machine learning for CSP. We propose a new model, called
ContinuouSP, which effectively handles symmetry and periodicity in crystals. We
clearly formulate the invariance and the continuity, and construct a model
based on the energy-based model. Our preliminary evaluation demonstrates the
effectiveness of this model with the CSP task.
|
2502.02027
|
From Fog to Failure: How Dehazing Can Harm Clear Image Object Detection
|
cs.CV cs.AI
|
This study explores the challenges of integrating human visual cue-based
dehazing into object detection, given the selective nature of human perception.
While human vision adapts dynamically to environmental conditions,
computational dehazing does not always enhance detection uniformly. We propose
a multi-stage framework where a lightweight detector identifies regions of
interest (RoIs), which are then enhanced via spatial attention-based dehazing
before final detection by a heavier model. Though effective in foggy
conditions, this approach unexpectedly degrades the performance on clear
images. We analyze this phenomenon, investigate possible causes, and offer
insights for designing hybrid pipelines that balance enhancement and detection.
Our findings highlight the need for selective preprocessing and challenge
assumptions about universal benefits from cascading transformations.
|
2502.02028
|
Fine-tuning Language Models for Recipe Generation: A Comparative
Analysis and Benchmark Study
|
cs.CL cs.AI
|
This research explores the recipe generation task by fine-tuning various very
small language models, with a focus on developing robust evaluation metrics and
comparing different language models on this open-ended task. This study
presents extensive experiments
with multiple model architectures, ranging from T5-small (Raffel et al., 2023)
and SmolLM-135M (Allal et al., 2024) to Phi-2 (Research, 2023), implementing
both traditional NLP metrics and custom domain-specific evaluation metrics. Our
novel evaluation framework incorporates recipe-specific metrics for assessing
content quality and introduces approaches to allergen substitution. The results
indicate that, while larger models generally perform better on standard
metrics, the relationship between model size and recipe quality is more nuanced
when considering domain-specific metrics. SmolLM-360M and SmolLM-1.7B
demonstrate comparable performance despite their size difference before and
after fine-tuning, while fine-tuning Phi-2 shows notable limitations in recipe
generation despite its larger parameter count. The comprehensive evaluation
framework and allergen substitution systems provide valuable insights for
future work in recipe generation and broader NLG tasks that require domain
expertise and safety considerations.
|
2502.02029
|
MORPH-LER: Log-Euclidean Regularization for Population-Aware Image
Registration
|
cs.CV cs.LG
|
Spatial transformations that capture population-level morphological
statistics are critical for medical image analysis. Commonly used smoothness
regularizers for image registration fail to integrate population statistics,
leading to anatomically inconsistent transformations. Inverse consistency
regularizers promote geometric consistency but lack population morphometrics
integration. Regularizers that constrain deformations to a low-dimensional
manifold address this. However, they prioritize reconstruction over
interpretability and neglect diffeomorphic properties, such as group
composition and inverse consistency. We introduce MORPH-LER, a Log-Euclidean
regularization framework for population-aware unsupervised image registration.
MORPH-LER learns population morphometrics from spatial transformations to guide
and regularize registration networks, ensuring anatomically plausible
deformations. It features a bottleneck autoencoder that computes the principal
logarithm of deformation fields via iterative square-root predictions. It
creates a linearized latent space that respects diffeomorphic properties and
enforces inverse consistency. By integrating a registration network with a
diffeomorphic autoencoder, MORPH-LER produces smooth, meaningful deformation
fields. The framework offers two main contributions: (1) a data-driven
regularization strategy that incorporates population-level anatomical
statistics to enhance transformation validity and (2) a linearized latent space
that enables compact and interpretable deformation fields for efficient
population morphometrics analysis. We validate MORPH-LER across two families of
deep learning-based registration networks, demonstrating its ability to produce
anatomically accurate, computationally efficient, and statistically meaningful
transformations on the OASIS-1 brain imaging dataset.
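The principal logarithm that the bottleneck autoencoder approximates through iterative square-root predictions follows the classical inverse scaling-and-squaring identity log(A) = 2^k * log(A^(1/2^k)): repeated square roots drive the matrix toward the identity, where the first-order approximation log(X) ~ X - I is accurate. A numpy sketch, restricted to symmetric positive-definite inputs as a simplifying assumption:

```python
import numpy as np

def sqrtm_spd(a):
    # Principal square root of a symmetric positive-definite matrix
    # via eigendecomposition.
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(w)) @ v.T

def logm_by_square_roots(a, k=10):
    # Take k successive square roots so the matrix approaches the identity,
    # apply log(X) ~ X - I near the identity, and undo the scaling:
    # log(A) = 2^k * log(A^(1/2^k)).
    x = a
    for _ in range(k):
        x = sqrtm_spd(x)
    return (2.0 ** k) * (x - np.eye(a.shape[0]))
```

Deformation-field Jacobians require the general (non-symmetric) principal logarithm, which is what the learned square-root predictor targets; this sketch only shows the underlying iteration.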
|
2502.02032
|
Heteroscedastic Double Bayesian Elastic Net
|
stat.ME cs.AI stat.ML
|
In many practical applications, regression models are employed to uncover
relationships between predictors and a response variable, yet the common
assumption of constant error variance is frequently violated. This issue is
further compounded in high-dimensional settings where the number of predictors
exceeds the sample size, necessitating regularization for effective estimation
and variable selection. To address this problem, we propose the Heteroscedastic
Double Bayesian Elastic Net (HDBEN), a novel framework that jointly models the
mean and log-variance using hierarchical Bayesian priors incorporating both
$\ell_1$ and $\ell_2$ penalties. Our approach simultaneously induces sparsity
and grouping in the regression coefficients and variance parameters, capturing
complex variance structures in the data. Theoretical results demonstrate that
the proposed HDBEN achieves posterior concentration, variable selection
consistency, and asymptotic normality under mild conditions, justifying
its behavior. Simulation studies further illustrate that HDBEN outperforms
existing methods, particularly in scenarios characterized by heteroscedasticity
and high dimensionality.
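A rough frequentist analogue of HDBEN's model is a MAP-style objective: the Gaussian negative log-likelihood with mean X·beta and log-variance Z·gamma, plus l1 and l2 penalties on both coefficient vectors. The subgradient-descent sketch below is an illustrative assumption, not the paper's hierarchical Bayesian procedure:

```python
import numpy as np

def fit_hdben_map(X, Z, y, l1=0.01, l2=0.01, lr=0.1, steps=3000):
    # Jointly fit mean coefficients beta and log-variance coefficients gamma
    # by (sub)gradient descent on the heteroscedastic Gaussian NLL
    #   0.5 * mean((y - X@beta)^2 / exp(Z@gamma) + Z@gamma)
    # plus elastic-net (l1 + l2) penalties on both coefficient vectors.
    n = len(y)
    beta = np.zeros(X.shape[1])
    gamma = np.zeros(Z.shape[1])
    for _ in range(steps):
        resid = y - X @ beta
        inv_var = np.exp(-Z @ gamma)
        g_beta = -X.T @ (resid * inv_var) / n + l1 * np.sign(beta) + 2 * l2 * beta
        g_gamma = 0.5 * Z.T @ (1.0 - resid ** 2 * inv_var) / n \
            + l1 * np.sign(gamma) + 2 * l2 * gamma
        beta -= lr * g_beta
        gamma -= lr * g_gamma
    return beta, gamma
```

The coupling is visible in the gradients: shrinking the estimated variance rescales the mean-model residuals, which is what a homoscedastic elastic net ignores.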
|
2502.02033
|
On Iso-Dual MDS Codes From Elliptic Curves
|
cs.IT math.IT
|
For a linear code $C$ over a finite field, if its dual code $C^{\perp}$ is
equivalent to $C$ itself, then the code $C$ is said to be {\it isometry-dual}. In
this paper, we first confirm a conjecture about the isometry-dual MDS elliptic
codes proposed by Han and Ren. Subsequently, two constructions of isometry-dual
maximum distance separable (MDS) codes from elliptic curves are presented. The
new code length $n$ satisfies $n\le\frac{q+\lfloor2\sqrt{q}\rfloor-1}{2}$ when
$q$ is even and $n\le\frac{q+\lfloor2\sqrt{q}\rfloor-3}{2}$ when $q$ is odd.
Additionally, we consider the hull dimension of both constructions. In the case
of finite fields with even characteristics, an isometry-dual MDS code is
equivalent to a self-dual MDS code and a linear complementary dual MDS code.
Finally, we apply our results to entanglement-assisted quantum error correcting
codes (EAQECCs) and obtain two new families of MDS EAQECCs.
|
2502.02034
|
Improving Wireless Federated Learning via Joint Downlink-Uplink
Beamforming over Analog Transmission
|
cs.IT eess.SP math.IT
|
Federated learning (FL) over wireless networks using analog transmission can
efficiently utilize the communication resource but is susceptible to errors
caused by noisy wireless links. In this paper, assuming a multi-antenna base
station, we jointly design downlink-uplink beamforming to maximize FL training
convergence over time-varying wireless channels. We derive the round-trip model
updating equation and use it to analyze the FL training convergence to capture
the effects of downlink and uplink beamforming and the local model training on
the global model update. Aiming to maximize the FL training convergence rate,
we propose a low-complexity joint downlink-uplink beamforming (JDUBF)
algorithm, which adopts a greedy approach to decompose the multi-round joint
optimization and convert it into per-round online joint optimization problems.
The per-round problem is further decomposed into three subproblems over a block
coordinate descent framework, where we show that each subproblem can be
efficiently solved by projected gradient descent with fast closed-form updates.
An efficient initialization method that leads to a closed-form initial point is
also proposed to accelerate the convergence of JDUBF. Simulations demonstrate
that JDUBF substantially outperforms the conventional separate-link beamforming
design.
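The per-subproblem solver pattern, projected gradient descent with a closed-form projection, can be sketched with a toy quadratic objective and a transmit-power ball constraint; both are assumptions for illustration, since the paper's actual objective is the FL convergence bound:

```python
import numpy as np

def project_ball(w, p_max):
    # Closed-form projection onto the power constraint ||w||^2 <= p_max:
    # rescale onto the boundary whenever the constraint is violated.
    n = np.linalg.norm(w)
    return w if n ** 2 <= p_max else w * np.sqrt(p_max) / n

def projected_gradient_descent(grad, w0, p_max, lr=0.1, steps=200):
    # Alternate a gradient step with the closed-form projection; this is
    # the fast per-subproblem update pattern inside a block coordinate
    # descent loop.
    w = project_ball(w0, p_max)
    for _ in range(steps):
        w = project_ball(w - lr * grad(w), p_max)
    return w
```

Because the projection is closed-form, each inner iteration is cheap, which is what makes per-round online optimization feasible.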
|
2502.02036
|
From Human Hands to Robotic Limbs: A Study in Motor Skill Embodiment for
Telemanipulation
|
cs.RO cs.AI
|
This paper presents a teleoperation system for controlling a redundant degree
of freedom robot manipulator using human arm gestures. We propose a GRU-based
Variational Autoencoder to learn a latent representation of the manipulator's
configuration space, capturing its complex joint kinematics. A fully connected
neural network maps human arm configurations into this latent space, allowing
the system to mimic and generate corresponding manipulator trajectories in real
time through the VAE decoder. The proposed method shows promising results in
teleoperating the manipulator, enabling the generation of novel manipulator
configurations from human features that were not present during training.
|
2502.02040
|
M2R2: Mixture of Multi-Rate Residuals for Efficient Transformer
Inference
|
cs.CL cs.AI cs.LG
|
Residual transformations enhance the representational depth and expressive
power of large language models (LLMs). However, applying static residual
transformations across all tokens in auto-regressive generation leads to a
suboptimal trade-off between inference efficiency and generation fidelity.
Existing methods, including Early Exiting, Skip Decoding, and Mixture-of-Depth
address this by modulating the residual transformation based on token-level
complexity. Nevertheless, these approaches predominantly consider the distance
traversed by tokens through the model layers, neglecting the underlying
velocity of residual evolution. We introduce Mixture of Multi-rate Residuals
(M2R2), a framework that dynamically modulates residual velocity to improve
early alignment, enhancing inference efficiency. Evaluations on
reasoning-oriented tasks such as Koala, Self-Instruct, WizardLM, and MT-Bench
show M2R2
surpasses state-of-the-art distance-based strategies, balancing generation
quality and speedup. In a self-speculative decoding setup, M2R2 achieves up to
2.8x speedups on MT-Bench, outperforming methods like 2-model speculative
decoding, Medusa, LookAhead Decoding, and DEED. In Mixture-of-Experts (MoE)
architectures, integrating early residual alignment with ahead-of-time expert
loading into high-bandwidth memory (HBM) accelerates decoding, reduces
expert-switching bottlenecks, and achieves a 2.9x speedup, making it highly
effective in resource-constrained environments.
|
2502.02046
|
Contextual Memory Reweaving in Large Language Models Using Layered
Latent State Reconstruction
|
cs.CL
|
Memory retention in deep neural architectures faces ongoing limitations in
the ability to process and recall extended contextual information. Token
dependencies degrade as sequence length increases, leading
to a decline in coherence and factual consistency across longer outputs. A
structured approach is introduced to mitigate this issue through the reweaving
of latent states captured at different processing layers, reinforcing token
representations over extended sequences. The proposed Contextual Memory
Reweaving framework incorporates a Layered Latent State Reconstruction
mechanism to systematically integrate past contextual embeddings without
introducing external memory modules. Experimental results demonstrate
improvements in recall accuracy across a range of sequence lengths, with
notable gains in the retention of rarely occurring tokens and numerical
reasoning consistency. Further analysis of computational efficiency indicates
that the additional processing overhead remains within acceptable thresholds,
enabling scalability across different model sizes. Evaluations in long-form
text generation and ambiguous query resolution highlight the capacity of memory
reweaving to enhance continuity and reduce inconsistencies over extended
outputs. Attention weight distributions reveal more structured allocation
patterns, suggesting that reweaved latent states contribute to improved
contextual awareness. The findings establish a framework for refining memory
retention mechanisms in language models, addressing long-standing challenges in
handling complex, multi-step reasoning tasks.
|
2502.02047
|
AmaSQuAD: A Benchmark for Amharic Extractive Question Answering
|
cs.CL
|
This research presents a novel framework for translating extractive
question-answering datasets into low-resource languages, as demonstrated by the
creation of the AmaSQuAD dataset, a translation of SQuAD 2.0 into Amharic. The
methodology addresses challenges related to misalignment between translated
questions and answers, as well as the presence of multiple answer instances in
the translated context. For this purpose, we used cosine similarity utilizing
embeddings from a fine-tuned BERT-based model for Amharic and Longest Common
Subsequence (LCS). Additionally, we fine-tune the XLM-R model on the AmaSQuAD
synthetic dataset for Amharic Question-Answering. The results show an
improvement in baseline performance, with the fine-tuned model achieving an
increase in the F1 score from 36.55% to 44.41% and 50.01% to 57.5% on the
AmaSQuAD development dataset. Moreover, the model demonstrates improvement on
the human-curated AmQA dataset, increasing the F1 score from 67.80% to 68.80%
and the exact match score from 52.50% to 52.66%. The AmaSQuAD dataset is
publicly available.
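The disambiguation step, scoring candidate answer occurrences by combining embedding cosine similarity with a longest-common-subsequence-style string match, can be sketched as follows. The character-count embedding and the equal weighting are toy assumptions standing in for the fine-tuned Amharic BERT embeddings:

```python
import numpy as np
from difflib import SequenceMatcher

def toy_embed(text, dim=64):
    # Toy character-count embedding; stands in for embeddings from a
    # fine-tuned BERT-based model for Amharic.
    v = np.zeros(dim)
    for i, ch in enumerate(text):
        v[(ord(ch) + i) % dim] += 1.0
    return v

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def lcs_ratio(a, b):
    # LCS-style string similarity via difflib's matching blocks.
    return SequenceMatcher(None, a, b).ratio()

def pick_answer_span(candidates, answer, embed=toy_embed):
    # Among multiple occurrences of a translated answer in the context,
    # keep the occurrence most similar to the translated answer string.
    scores = [0.5 * cosine(embed(c), embed(answer)) + 0.5 * lcs_ratio(c, answer)
              for c in candidates]
    return candidates[int(np.argmax(scores))]
```

The combination matters: embeddings tolerate paraphrase and inflection, while the string-level match anchors the choice to the surface form appearing in the context.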
|
2502.02048
|
Efficient Domain Adaptation of Multimodal Embeddings using Contrastive
Learning
|
cs.LG cs.CL cs.CV
|
Recent advancements in machine learning (ML), natural language processing
(NLP), and foundational models have shown promise for real-life applications in
critical, albeit compute-constrained fields like healthcare.
In such areas, combining foundational models with supervised ML offers
potential for automating tasks like diagnosis and treatment planning, but the
limited availability of onsite computational resources poses significant
challenges to applying these technologies effectively: Current approaches
either yield subpar results when using pretrained models without task-specific
adaptation, or require substantial computational resources for fine-tuning,
which is often a barrier to entry in such environments.
This renders them inaccessible in applications where performance and quality
standards are high, but computational resources are scarce.
To bridge the gap between best-in-class performance and accessibility, we
propose a novel method for adapting foundational, multimodal embeddings to
downstream tasks, without the need for expensive fine-tuning processes.
Our method leverages frozen embeddings from Large Language Models (LLMs) and
Vision Models, and uses contrastive learning to train a small, task-specific
nonlinear projection that can be used in the downstream task, without having to
fine-tune the original foundational models.
We show that this efficient procedure leads to significant performance
improvements across various downstream tasks, and perhaps more importantly with
minimal computational overhead, offering a practical solution for the use of
advanced, foundational ML models in resource-constrained settings.
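A minimal numpy sketch of the approach: frozen text and image embeddings pass through a small trainable nonlinear projection, and a symmetric InfoNCE-style contrastive loss pulls matched pairs together. The layer sizes and the exact loss form are assumptions for illustration:

```python
import numpy as np

def project(E, W1, W2):
    # Small trainable nonlinear head; the frozen foundation-model
    # embeddings E are never updated.
    return np.maximum(E @ W1, 0.0) @ W2

def info_nce(text_z, image_z, tau=0.1):
    # Contrastive loss: matched text/image pairs (same row index) are
    # positives; every other pair in the batch is a negative.
    t = text_z / (np.linalg.norm(text_z, axis=1, keepdims=True) + 1e-12)
    v = image_z / (np.linalg.norm(image_z, axis=1, keepdims=True) + 1e-12)
    logits = t @ v.T / tau
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Only W1 and W2 would be trained; keeping the embedding models frozen is what makes the adaptation cheap enough for compute-constrained settings.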
|
2502.02050
|
RECCS: Realistic Cluster Connectivity Simulator for Synthetic Network
Generation
|
cs.SI
|
The limited availability of useful ground-truth communities in real-world
networks presents a challenge to evaluating and selecting a "best" community
detection method for a given network or family of networks. The use of
synthetic networks with planted ground-truths is one way to address this
challenge. While several synthetic network generators can be used for this
purpose, Stochastic Block Models (SBMs), when provided input parameters from
real-world networks and clusterings, are well suited to producing networks that
retain the properties of the network they are intended to model. We report,
however, that SBMs can produce disconnected ground-truth clusters, even under
conditions where the input clusters are connected. In this study, we describe
the REalistic Cluster Connectivity Simulator (RECCS), which, while retaining
approximately the same quality for other network and cluster parameters,
creates an SBM synthetic network and then modifies it to ensure an improved fit
to cluster connectivity. We report results using parameters obtained from
clustered real-world networks ranging up to 13.9 million nodes in size, and
demonstrate an improvement over the unmodified use of SBMs for network
generation.
|
2502.02051
|
Sound Judgment: Properties of Consequential Sounds Affecting
Human-Perception of Robots
|
cs.RO cs.HC cs.SD eess.AS
|
Positive human-perception of robots is critical to achieving sustained use of
robots in shared environments. One key factor affecting human-perception of
robots are their sounds, especially the consequential sounds which robots (as
machines) must produce as they operate. This paper explores qualitative
responses from 182 participants to gain insight into human-perception of robot
consequential sounds. Participants viewed videos of different robots performing
their typical movements, and responded to an online survey regarding their
perceptions of robots and the sounds they produce. Topic analysis was used to
identify common properties of robot consequential sounds that participants
liked, disliked, wanted, or wanted robots to avoid producing. Alongside
expected reports of disliking high-pitched and loud sounds,
many participants preferred informative and audible sounds (over no sound) to
provide predictability of purpose and trajectory of the robot. Rhythmic sounds
were preferred over acute or continuous sounds, and many participants wanted
more natural sounds (such as wind or cat purrs) in-place of machine-like noise.
The results presented in this paper support future research on methods to
improve consequential sounds produced by robots by highlighting features of
sounds that cause negative perceptions, and providing insights into sound
profile changes for improvement of human-perception of robots, thus enhancing
human robot interaction.
|
2502.02052
|
Multimaterial topology optimization for finite strain elastoplasticity:
theory, methods, and applications
|
cs.CE math.OC
|
Plasticity is inherent to many engineering materials such as metals. While it
can degrade the load-carrying capacity of structures via material yielding, it
can also protect structures through plastic energy dissipation. To fully
harness plasticity, here we present the theory, method, and application of a
topology optimization framework that simultaneously optimizes structural
geometries and material phases to customize the stiffness, strength, and
structural toughness of designs experiencing finite strain elastoplasticity.
The framework accurately predicts structural responses by employing a rigorous,
mechanics-based elastoplasticity theory that ensures isochoric plastic flow. It
also effectively identifies optimal material phase distributions using a
gradient-based optimizer, where gradient information is obtained via a reversed
adjoint method to address history dependence, along with automatic
differentiation to compute the complex partial derivatives. We demonstrate the
framework by optimizing a range of 2D and 3D elastoplastic structures,
including energy-dissipating dampers, load-carrying beams, impact-resisting
bumpers, and cold working profiled sheets. These optimized multimaterial
structures reveal important mechanisms for improving design performance under
large deformation, such as the transition from kinematic to isotropic hardening
with increasing displacement amplitudes and the formation of twisted regions
that concentrate stress, enhancing plastic energy dissipation. Through the
superior performance of these optimized designs, we demonstrate the framework's
effectiveness in tailoring elastoplastic responses across various spatial
configurations, material types, hardening behaviors, and combinations of
candidate materials. This work offers a systematic approach for optimizing
next-generation multimaterial structures with elastoplastic behaviors under
large deformations.
|
2502.02054
|
RAPID: Robust and Agile Planner Using Inverse Reinforcement Learning for
Vision-Based Drone Navigation
|
cs.RO cs.AI cs.CV cs.LG
|
This paper introduces a learning-based visual planner for agile drone flight
in cluttered environments. The proposed planner generates collision-free
waypoints in milliseconds, enabling drones to perform agile maneuvers in
complex environments without building separate perception, mapping, and
planning modules. Learning-based methods, such as behavior cloning (BC) and
reinforcement learning (RL), demonstrate promising performance in visual
navigation but still face inherent limitations. BC is susceptible to
compounding errors due to limited expert imitation, while RL struggles with
reward function design and sample inefficiency. To address these limitations,
this paper proposes an inverse reinforcement learning (IRL)-based framework for
high-speed visual navigation. By leveraging IRL, it is possible to reduce the
number of interactions with simulation environments and improve the capability to
deal with high-dimensional spaces while preserving the robustness of RL
policies. A motion primitive-based path planning algorithm collects an expert
dataset with privileged map data from diverse environments, ensuring
comprehensive scenario coverage. By leveraging both the acquired expert and
learner dataset gathered from the agent's interactions with the simulation
environments, a robust reward function and policy are learned across diverse
states. While the proposed method is trained in a simulation environment only,
it can be directly applied to real-world scenarios without additional training
or tuning. The performance of the proposed method is validated in both
simulation and real-world environments, including forests and various
structures. The trained policy achieves an average speed of 7 m/s and a maximum
speed of 8.8 m/s in real flight experiments. To the best of our knowledge, this
is the first work to successfully apply an IRL framework for high-speed visual
navigation of drones.
|
2502.02060
|
CH-MARL: Constrained Hierarchical Multiagent Reinforcement Learning for
Sustainable Maritime Logistics
|
cs.AI cs.MA
|
Addressing global challenges such as greenhouse gas emissions and resource
inequity demands advanced AI-driven coordination among autonomous agents. We
propose CH-MARL (Constrained Hierarchical Multiagent Reinforcement Learning), a
novel framework that integrates hierarchical decision-making with dynamic
constraint enforcement and fairness-aware reward shaping. CH-MARL employs a
real-time constraint-enforcement layer to ensure adherence to global emission
caps, while incorporating fairness metrics that promote equitable resource
distribution among agents. Experiments conducted in a simulated maritime
logistics environment demonstrate considerable reductions in emissions, along
with improvements in fairness and operational efficiency. Beyond this
domain-specific success, CH-MARL provides a scalable, generalizable solution to
multi-agent coordination challenges in constrained, dynamic settings, thus
advancing the state of the art in reinforcement learning.
|
2502.02061
|
Reason4Rec: Large Language Models for Recommendation with Deliberative
User Preference Alignment
|
cs.IR
|
While recent advancements in aligning Large Language Models (LLMs) with
recommendation tasks have shown great potential and promising performance
overall, these aligned recommendation LLMs still face challenges in complex
scenarios. This is primarily due to the current alignment approach focusing on
optimizing LLMs to generate user feedback directly, without incorporating
deliberation. To overcome this limitation and develop more reliable LLMs for
recommendations, we propose a new Deliberative Recommendation task, which
incorporates explicit reasoning about user preferences as an additional
alignment goal. We then introduce the Reasoning-powered Recommender framework
for deliberative user preference alignment, designed to enhance reasoning
capabilities by utilizing verbalized user feedback in a step-wise manner to
tackle this task. The framework employs collaborative step-wise experts and
tailored training strategies for each expert. Experimental results across three
real-world datasets demonstrate the rationality of the deliberative task
formulation and the superior performance of the proposed framework in improving
both prediction accuracy and reasoning quality.
|
2502.02063
|
CASIM: Composite Aware Semantic Injection for Text to Motion Generation
|
cs.CV cs.AI cs.GR
|
Recent advances in generative modeling and tokenization have driven
significant progress in text-to-motion generation, leading to enhanced quality
and realism in generated motions. However, effectively leveraging textual
information for conditional motion generation remains an open challenge. We
observe that current approaches, primarily relying on fixed-length text
embeddings (e.g., CLIP) for global semantic injection, struggle to capture the
composite nature of human motion, resulting in suboptimal motion quality and
controllability. To address this limitation, we propose the Composite Aware
Semantic Injection Mechanism (CASIM), comprising a composite-aware semantic
encoder and a text-motion aligner that learns the dynamic correspondence
between text and motion tokens. Notably, CASIM is model and
representation-agnostic, readily integrating with both autoregressive and
diffusion-based methods. Experiments on HumanML3D and KIT benchmarks
demonstrate that CASIM consistently improves motion quality, text-motion
alignment, and retrieval scores across state-of-the-art methods. Qualitative
analyses further highlight the superiority of our composite-aware approach over
fixed-length semantic injection, enabling precise motion control from text
prompts and stronger generalization to unseen text inputs.
|
2502.02066
|
Anticipate & Act : Integrating LLMs and Classical Planning for Efficient
Task Execution in Household Environments
|
cs.RO cs.CL cs.LG
|
Assistive agents performing household tasks such as making the bed or cooking
breakfast often compute and execute actions that accomplish one task at a time.
However, efficiency can be improved by anticipating upcoming tasks and
computing an action sequence that jointly achieves these tasks.
State-of-the-art methods for task anticipation use data-driven deep networks
and Large Language Models (LLMs), but they do so at the level of high-level
tasks and/or require many training examples. Our framework leverages the
generic knowledge of LLMs through a small number of prompts to perform
high-level task anticipation, using the anticipated tasks as goals in a
classical planning system to compute a sequence of finer-granularity actions
that jointly achieve these goals. We ground and evaluate our framework's
abilities in realistic scenarios in the VirtualHome environment and demonstrate
a 31% reduction in execution time compared with a system that does not consider
upcoming tasks.
|
2502.02067
|
AdaptBot: Combining LLM with Knowledge Graphs and Human Input for
Generic-to-Specific Task Decomposition and Knowledge Refinement
|
cs.RO cs.AI cs.CL cs.LG
|
Embodied agents assisting humans are often asked to complete a new task in a
new scenario. An agent preparing a particular dish in the kitchen based on a
known recipe may be asked to prepare a new dish or to perform cleaning tasks in
the storeroom. There may not be sufficient resources, e.g., time or labeled
examples, to train the agent for these new situations. Large Language Models
(LLMs) trained on considerable knowledge across many domains are able to
predict a sequence of abstract actions for such new tasks and scenarios,
although it may not be possible for the agent to execute this action sequence
due to task-, agent-, or domain-specific constraints. Our framework addresses
these challenges by leveraging the generic predictions provided by LLM and the
prior domain-specific knowledge encoded in a Knowledge Graph (KG), enabling an
agent to quickly adapt to new tasks and scenarios. The robot also solicits and
uses human input as needed to refine its existing knowledge. Based on
experimental evaluation over cooking and cleaning tasks in simulation domains,
we demonstrate that the interplay between LLM, KG, and human input leads to
substantial performance gains compared with just using the LLM output.
|
2502.02068
|
Robust and Secure Code Watermarking for Large Language Models via
ML/Crypto Codesign
|
cs.CR cs.CL cs.LG
|
This paper introduces RoSeMary, the first-of-its-kind ML/Crypto codesign
watermarking framework that regulates LLM-generated code to avoid intellectual
property rights violations and inappropriate misuse in software development.
High-quality watermarks adhering to the detectability-fidelity-robustness
tri-objective are limited due to codes' low-entropy nature. Watermark
verification, however, often needs to reveal the signature and requires
re-encoding new ones for code reuse, potentially compromising the
system's usability. To overcome these challenges, RoSeMary obtains high-quality
watermarks by training the watermark insertion and extraction modules
end-to-end to ensure (i) unaltered watermarked code functionality and (ii)
enhanced detectability and robustness leveraging pre-trained CodeT5 as the
insertion backbone to enlarge the code syntactic and variable rename
transformation search space. In the deployment, RoSeMary uses zero-knowledge
proofs for secure verification without revealing the underlying signatures.
Extensive evaluations demonstrate that RoSeMary achieves high detection
accuracy while preserving code functionality. RoSeMary is also robust against
attacks and provides efficient, secure watermark verification.
|
2502.02069
|
LoRA-TTT: Low-Rank Test-Time Training for Vision-Language Models
|
cs.CV
|
The rapid advancements in vision-language models (VLMs), such as CLIP, have
intensified the need to address distribution shifts between training and
testing datasets. Although prior Test-Time Training (TTT) techniques for VLMs
have demonstrated robust performance, they predominantly rely on tuning text
prompts, a process that demands substantial computational resources and is
heavily dependent on entropy-based loss. In this paper, we propose LoRA-TTT, a
novel TTT method that leverages Low-Rank Adaptation (LoRA), applied exclusively
to the image encoder of VLMs. By introducing LoRA and updating only its
parameters during test time, our method offers a simple yet effective TTT
approach, retaining the model's initial generalization capability while
achieving substantial performance gains with minimal memory and runtime
overhead. Additionally, we introduce a highly efficient reconstruction loss
tailored for TTT. Our method can adapt to diverse domains by combining these
two losses, without increasing memory consumption or runtime. Extensive
experiments on two benchmarks, covering 15 datasets, demonstrate that our
method improves the zero-shot top-1 accuracy of CLIP-ViT-B/16 by an average of
5.79% on the OOD benchmark and 1.36% on the fine-grained benchmark, efficiently
surpassing test-time prompt tuning, without relying on any external models or
cache.
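The mechanism LoRA-TTT relies on, a frozen weight plus a trainable low-rank update where only the low-rank factors are touched at test time, can be sketched in a few lines. The rank, scaling, and initialization below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

class LoRALinear:
    """Linear layer y = x @ (W + (alpha/r) * B @ A)^T with W frozen.

    Only the low-rank factors A and B would be updated at test time;
    zero-initializing B means the layer starts at exactly the
    pretrained behavior.
    """
    def __init__(self, W, r=4, alpha=4.0, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        out_dim, in_dim = W.shape
        self.W = W                                   # frozen pretrained weight
        self.A = rng.normal(0.0, 0.01, (r, in_dim))  # trainable low-rank factor
        self.B = np.zeros((out_dim, r))              # trainable, zero-init
        self.scale = alpha / r

    def forward(self, x):
        return x @ (self.W + self.scale * self.B @ self.A).T
```

Because B starts at zero, the adapted layer initially reproduces the pretrained output, which is one reason LoRA-style adaptation preserves the model's initial generalization capability.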
|
2502.02071
|
Sequential Multi-objective Multi-agent Reinforcement Learning Approach
for Predictive Maintenance
|
eess.SY cs.SY
|
Existing predictive maintenance (PdM) methods typically focus solely on
whether to replace system components without considering the costs incurred by
inspection. However, a well-considered approach should be able to minimize
Remaining Useful Life (RUL) at engine replacement while maximizing inspection
interval. To achieve this, multi-agent reinforcement learning (MARL) can be
introduced. However, due to the sequential and mutually constraining nature of
these two objectives, conventional MARL is not applicable. Therefore, this paper
introduces a novel framework and develops a Sequential Multi-objective
Multi-agent Proximal Policy Optimization (SMOMA-PPO) algorithm. Furthermore, to
provide comprehensive and effective degradation information to RL agents, we
also employ a Gated Recurrent Unit (GRU), quantile regression, and probability
distribution fitting to develop a GRU-based RUL Prediction (GRP) model.
Experiments demonstrate that the GRP method significantly improves the accuracy
of RUL predictions in the later stages of system operation compared to existing
methods. When incorporating its output into SMOMA-PPO, we achieve at least a
15% reduction in average RUL without unscheduled replacements (UR), nearly a
10% increase in inspection interval, and an overall decrease in maintenance
costs. Importantly, our approach offers a new perspective for addressing
multi-objective maintenance planning with sequential constraints, effectively
enhancing system reliability and reducing maintenance expenses.
|
2502.02072
|
ASCenD-BDS: Adaptable, Stochastic and Context-aware framework for
Detection of Bias, Discrimination and Stereotyping
|
cs.CL cs.AI cs.CY
|
The rapid evolution of Large Language Models (LLMs) has transformed natural
language processing but raises critical concerns about biases inherent in their
deployment and use across diverse linguistic and sociocultural contexts. This
paper presents a framework named ASCenD BDS (Adaptable, Stochastic and
Context-aware framework for Detection of Bias, Discrimination and
Stereotyping). The framework presents an adaptive, stochastic, and context-aware
approach to detecting bias, discrimination, and stereotyping across categories
such as gender, caste, age, disability, socioeconomic status, and linguistic
variation. Existing frameworks rely heavily on datasets to generate scenarios
for detecting bias, discrimination, and stereotyping; examples include Civil
Comments, Wino Gender, WinoBias, BOLD, CrowS Pairs, and BBQ. However, such
approaches provide point solutions and hence only a finite number of
assessment scenarios. The proposed framework overcomes this limitation through
features that enable adaptability, stochasticity, and context awareness.
Context awareness can be customized for any nation, culture, or sub-culture
(for example, an organization's unique culture). In this paper, context
awareness is established for the Indian context, leveraging content from the
Indian Census 2011 for a common categorization. The framework is built on
Category, Sub-Category, STEM, X-Factor, and Synonym elements, which together
enable adaptability, stochasticity, and context awareness; it is described in
detail in Section 3. In total, more than 800 STEMs, 10 Categories, and 31
unique Sub-Categories were developed by a team of consultants at Saint Fox
Consultancy Private Ltd, and the concept has been tested in SFCLabs as part of
product development.
|
2502.02074
|
Rethinking stance detection: A theoretically-informed research agenda
for user-level inference using language models
|
cs.CL
|
Stance detection has emerged as a popular task in natural language processing
research, enabled largely by the abundance of target-specific social media
data. While there has been considerable research on the development of stance
detection models, datasets, and applications, we highlight important gaps
pertaining to (i) a lack of theoretical conceptualization of stance, and (ii)
the treatment of stance at an individual- or user-level, as opposed to
message-level. In this paper, we first review the interdisciplinary origins of
stance as an individual-level construct to highlight relevant attributes (e.g.,
psychological features) that might be useful to incorporate in stance detection
models. Further, we argue that recent pre-trained and large language models
(LLMs) might offer a way to flexibly infer such user-level attributes and/or
incorporate them in modelling stance. To better illustrate this, we briefly
review and synthesize the emerging corpus of studies on using LLMs for
inferring stance, and specifically on incorporating user attributes in such
tasks. We conclude by proposing a four-point agenda for pursuing stance
detection research that is theoretically informed, inclusive, and practically
impactful.
|
2502.02076
|
Position Paper: Building Trust in Synthetic Data for Clinical AI
|
cs.LG cs.CV
|
Deep generative models and synthetic medical data have shown significant
promise in addressing key challenges in healthcare, such as privacy concerns,
data bias, and the scarcity of realistic datasets. While research in this area
has grown rapidly and demonstrated substantial theoretical potential, its
practical adoption in clinical settings remains limited. Despite the benefits
synthetic data offers, questions surrounding its reliability and credibility
persist, leading to a lack of trust among clinicians. This position paper
argues that fostering trust in synthetic medical data is crucial for its
clinical adoption. It aims to spark a discussion on the viability of synthetic
medical data in clinical practice, particularly in the context of current
advancements in AI. We present empirical evidence from brain tumor segmentation
to demonstrate that the quality, diversity, and proportion of synthetic data
directly impact trust in clinical AI models. Our findings provide insights to
improve the deployment and acceptance of synthetic data-driven AI systems in
real-world clinical workflows.
|
2502.02079
|
Online Clustering of Dueling Bandits
|
cs.LG cs.AI
|
The contextual multi-armed bandit (MAB) is a widely used framework for
problems requiring sequential decision-making under uncertainty, such as
recommendation systems. In applications involving a large number of users, the
performance of contextual MAB can be significantly improved by facilitating
collaboration among multiple users. This has been achieved by the clustering of
bandits (CB) methods, which adaptively group the users into different clusters
and achieve collaboration by allowing the users in the same cluster to share
data. However, classical CB algorithms typically rely on numerical reward
feedback, which may not be practical in certain real-world applications. For
instance, in recommendation systems, it is more realistic and reliable to
solicit preference feedback between pairs of recommended items rather than
absolute rewards. To address this limitation, we introduce the first
"clustering of dueling bandit algorithms" to enable collaborative
decision-making based on preference feedback. We propose two novel algorithms:
(1) Clustering of Linear Dueling Bandits (COLDB) which models the user reward
functions as linear functions of the context vectors, and (2) Clustering of
Neural Dueling Bandits (CONDB) which uses a neural network to model complex,
non-linear user reward functions. Both algorithms are supported by rigorous
theoretical analyses, demonstrating that user collaboration leads to improved
regret bounds. Extensive empirical evaluations on synthetic and real-world
datasets further validate the effectiveness of our methods, establishing their
potential in real-world applications involving multiple users with
preference-based feedback.
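In the linear setting (COLDB), preference feedback between two arms is commonly modeled with a logistic link over the difference of linear rewards. A minimal sketch of that feedback model follows; whether COLDB uses exactly this link is an assumption:

```python
import numpy as np

def pref_prob(theta, x_i, x_j):
    """P(arm i is preferred over arm j) under a linear reward model
    r(x) = theta @ x with a logistic (Bradley-Terry-style) comparison
    link: the larger the reward gap, the more decisive the preference."""
    return 1.0 / (1.0 + np.exp(-(theta @ x_i - theta @ x_j)))
```

The probabilities are symmetric by construction: P(i over j) + P(j over i) = 1, which is the consistency property dueling-bandit analyses typically rely on.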
|
2502.02083
|
Improving Power Plant CO2 Emission Estimation with Deep Learning and
Satellite/Simulated Data
|
cs.CV eess.IV
|
CO2 emissions from power plants, as significant super emitters, contribute
substantially to global warming. Accurate quantification of these emissions is
crucial for effective climate mitigation strategies. While satellite-based
plume inversion offers a promising approach, challenges arise from data
limitations and the complexity of atmospheric conditions. This study addresses
these challenges by (a) expanding the available dataset through the integration
of NO2 data from Sentinel-5P, generating continuous XCO2 maps, and
incorporating real satellite observations from OCO-2/3 for over 71 power plants
in data-scarce regions; and (b) employing a customized U-Net model capable of
handling diverse spatio-temporal resolutions for emission rate estimation. Our
results demonstrate significant improvements in emission rate accuracy compared
to previous methods. By leveraging this enhanced approach, we can enable near
real-time, precise quantification of major CO2 emission sources, supporting
environmental protection initiatives and informing regulatory frameworks.
|
2502.02085
|
A New Rejection Sampling Approach to $k$-$\mathtt{means}$++ With
Improved Trade-Offs
|
cs.DS cs.LG
|
The $k$-$\mathtt{means}$++ seeding algorithm (Arthur & Vassilvitskii, 2007)
is widely used in practice for the $k$-means clustering problem where the goal
is to cluster a dataset $\mathcal{X} \subset \mathbb{R} ^d$ into $k$ clusters.
The popularity of this algorithm is due to its simplicity and provable
guarantee of being $O(\log k)$ competitive with the optimal solution in
expectation. However, its running time is $O(|\mathcal{X}|kd)$, making it
expensive for large datasets.
In this work, we present a simple and effective rejection sampling based
approach for speeding up $k$-$\mathtt{means}$++.
Our first method runs in time $\tilde{O}(\mathtt{nnz} (\mathcal{X}) + \beta
k^2d)$ while still being $O(\log k )$ competitive in expectation. Here, $\beta$
is a parameter which is the ratio of the variance of the dataset to the optimal
$k$-$\mathtt{means}$ cost in expectation and $\tilde{O}$ hides logarithmic
factors in $k$ and $|\mathcal{X}|$.
Our second method presents a new trade-off between computational cost and
solution quality. It incurs an additional scale-invariant factor of $
k^{-\Omega( m/\beta)} \operatorname{Var} (\mathcal{X})$ in addition to the
$O(\log k)$ guarantee of $k$-$\mathtt{means}$++ improving upon a result of
(Bachem et al, 2016a) who get an additional factor of
$m^{-1}\operatorname{Var}(\mathcal{X})$ while still running in time
$\tilde{O}(\mathtt{nnz}(\mathcal{X}) + mk^2d)$. We perform extensive empirical
evaluations to validate our theoretical results and to show the effectiveness
of our approach on real datasets.
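For reference, the $O(|\mathcal{X}|kd)$ seeding procedure being accelerated can be sketched in a few lines. This is the classic D²-sampling baseline of Arthur & Vassilvitskii, not the paper's rejection-sampling variant:

```python
import numpy as np

def kmeans_pp_seed(X, k, rng):
    """Classic k-means++ seeding: after a uniform first center, each new
    center is drawn with probability proportional to the squared distance
    to the nearest center chosen so far (D^2 sampling)."""
    n = X.shape[0]
    centers = [X[rng.integers(n)]]
    d2 = np.sum((X - centers[0]) ** 2, axis=1)   # squared dist to nearest center
    for _ in range(k - 1):
        idx = rng.choice(n, p=d2 / d2.sum())     # D^2-proportional draw
        centers.append(X[idx])
        d2 = np.minimum(d2, np.sum((X - X[idx]) ** 2, axis=1))
    return np.array(centers)
```

Each of the k rounds scans all n points to refresh the distances, which is exactly the cost the rejection-sampling approach avoids.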
|
2502.02088
|
IPO: Iterative Preference Optimization for Text-to-Video Generation
|
cs.CV cs.AI
|
Video foundation models have advanced significantly with the help of network
upgrades as well as model scale-up. However, they still struggle to meet
application requirements due to unsatisfactory generation quality. To solve
this problem, we propose to align video foundation models with human
preferences from the perspective of post-training. Specifically, we introduce
an Iterative Preference Optimization (IPO) strategy that enhances generated
video quality by incorporating human feedback. IPO exploits a critic model to
judge video generations, either via pairwise ranking as in Direct Preference
Optimization or via point-wise scoring as in Kahneman-Tversky Optimization.
Given this, IPO optimizes video foundation models with guidance from these
preference signals, improving generated video quality in subject consistency,
motion smoothness, aesthetic quality, and more. In addition, IPO builds the
critic model on a multi-modal large language model, which enables it to
automatically assign preference labels without the need for retraining or
relabeling. In this way, IPO can efficiently perform multi-round preference
optimization in an iterative manner, without tedious manual labeling.
Comprehensive experiments demonstrate that
the proposed IPO can effectively improve the video generation quality of a
pretrained model and help a model with only 2B parameters surpass the one with
5B parameters. Besides, IPO achieves new state-of-the-art performance on VBench
benchmark. We will release our source codes, models as well as dataset to
advance future research and applications.
|
2502.02091
|
Efficient Dynamic Scene Editing via 4D Gaussian-based Static-Dynamic
Separation
|
cs.CV
|
Recent 4D dynamic scene editing methods require editing thousands of 2D
images used for dynamic scene synthesis and updating the entire scene with
additional training loops, resulting in several hours of processing to edit a
single dynamic scene. Therefore, these methods are not scalable with respect to
the temporal dimension of the dynamic scene (i.e., the number of timesteps). In
this work, we propose an efficient dynamic scene editing method that is more
scalable in terms of temporal dimension. To achieve computational efficiency,
we leverage a 4D Gaussian representation that models a 4D dynamic scene by
combining static 3D Gaussians with a Hexplane-based deformation field, which
handles dynamic information. We then perform editing solely on the static 3D
Gaussians, which is the minimal but sufficient component required for visual
editing. To resolve the misalignment between the edited 3D Gaussians and the
deformation field potentially resulting from the editing process, we
additionally conduct a refinement stage using a score distillation mechanism.
Extensive editing results demonstrate that our method is efficient, reducing
editing time by more than half compared to existing methods, while achieving
high editing quality that better follows user instructions.
|
2502.02092
|
Sum of Squared Extended $\eta$-$\mu$ and $\kappa$-$\mu$ RVs: A New
Framework Applied to FR3 and Sub-THz Systems
|
cs.IT eess.SP math.IT
|
The analysis of systems operating in future frequency ranges calls for a
proper statistical channel characterization through generalized fading models.
In this paper, we adopt the Extended $\eta$-$\mu$ and $\kappa$-$\mu$ models to
characterize the propagation in FR3 and the sub-THz band, respectively. For
these models, we develop a new exact representation of the sum of squared
independent and identically distributed random variables, which can be used to
express the power of the received signal in multi-antenna systems. Unlike
existing ones, the proposed analytical framework is remarkably tractable and
computationally efficient, and thus can be conveniently employed to analyze
systems with massive antenna arrays. For both the Extended $\eta$-$\mu$ and
$\kappa$-$\mu$ distributions, we derive novel expressions for the probability
density function and cumulative distribution function, we analyze their
convergence and truncation error, and we discuss the computational complexity
and implementation aspects. Moreover, we derive expressions for the outage and
coverage probability, bit error probability for coherent binary modulations,
and symbol error probability for M-ary phase-shift keying and quadrature
amplitude modulation. Lastly, we provide an extensive performance evaluation of
FR3 and sub-THz systems focusing on a downlink scenario where a single-antenna
user is served by a base station employing maximum ratio transmission.
|
2502.02095
|
LongDPO: Unlock Better Long-form Generation Abilities for LLMs via
Critique-augmented Stepwise Information
|
cs.CL
|
Long-form generation is crucial for academic paper writing and repo-level code
generation. Despite this, current models, including GPT-4o, still exhibit
unsatisfactory performance. Existing methods that utilize preference learning
with outcome supervision often fail to provide detailed feedback for extended
contexts. This shortcoming can lead to content that does not fully satisfy
query requirements, resulting in issues such as length deviations and
diminished quality. In this paper, we propose enhancing long-form generation by
incorporating process supervision. We employ Monte Carlo Tree Search to gather
stepwise preference pairs, utilizing a global memory pool to maintain
consistency. To address the issue of suboptimal candidate selection, we
integrate external critiques to refine and improve the quality of the
preference pairs. Finally, we apply step-level DPO using the collected stepwise
preference pairs. Experimental results show that our method improves length and
quality on long-form generation benchmarks, with almost lossless performance on
general benchmarks across various model backbones.
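The step-level DPO applied here reuses the standard DPO objective. A minimal sketch of that loss on one (chosen, rejected) pair of log-probabilities follows; the MCTS-based pair collection, global memory pool, and critique refinement are not shown:

```python
import numpy as np

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Direct Preference Optimization loss for one preference pair:
    -log sigmoid(beta * (policy-vs-reference log-ratio of the chosen
    response minus that of the rejected response))."""
    margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    return -np.log(1.0 / (1.0 + np.exp(-margin)))  # -log sigmoid(margin)
```

When the policy and the frozen reference agree, the margin is zero and the loss is log 2; the loss falls as the policy learns to favor the chosen step over the rejected one.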
|
2502.02096
|
Dual-Flow: Transferable Multi-Target, Instance-Agnostic Attacks via
In-the-wild Cascading Flow Optimization
|
cs.CV
|
Adversarial attacks are widely used to evaluate model robustness, and in
black-box scenarios, the transferability of these attacks becomes crucial.
Existing generator-based attacks have excellent generalization and
transferability due to their instance-agnostic nature. However, when training
generators for multi-target tasks, the success rate of transfer attacks is
relatively low due to the limitations of the model's capacity. To address these
challenges, we propose a novel Dual-Flow framework for multi-target
instance-agnostic adversarial attacks, utilizing Cascading Distribution Shift
Training to develop an adversarial velocity function. Extensive experiments
demonstrate that Dual-Flow significantly improves transferability over previous
multi-target generative attacks. For example, it increases the success rate
from Inception-v3 to ResNet-152 by 34.58%. Furthermore, our attack method shows
substantially stronger robustness against defense mechanisms, such as
adversarially trained models.
|
2502.02097
|
VerteNet -- A Multi-Context Hybrid CNN Transformer for Accurate
Vertebral Landmark Localization in Lateral Spine DXA Images
|
cs.CV
|
Lateral Spine Image (LSI) analysis is important for medical diagnosis,
treatment planning, and detailed spinal health assessments. Although modalities
like Computed Tomography and Digital X-ray Imaging are commonly used, Dual
Energy X-ray Absorptiometry (DXA) is often preferred due to lower radiation
exposure, seamless capture, and cost-effectiveness. Accurate Vertebral Landmark
Localization (VLL) on LSIs is important to detect spinal conditions like
kyphosis and lordosis, as well as to assess Abdominal Aortic Calcification
(AAC) using Inter-Vertebral Guides (IVGs). Nonetheless, few automated VLL
methodologies have concentrated on DXA LSIs. We present VerteNet, a hybrid
CNN-Transformer model featuring a novel dual-resolution attention mechanism in
self and cross-attention domains, referred to as Dual Resolution Self-Attention
(DRSA) and Dual Resolution Cross-Attention (DRCA). These mechanisms capture the
diverse frequencies in DXA images by operating at two different feature map
resolutions. Additionally, we design a Multi-Context Feature Fusion Block
(MCFB) that efficiently integrates the features using DRSA and DRCA. We train
VerteNet on 620 DXA LSIs from various machines and achieve superior results
compared to existing methods. We also design an algorithm that utilizes
VerteNet's predictions in estimating the Region of Interest (ROI) to detect
potential abdominal aorta cropping, where inadequate soft tissue hinders
calcification assessment. Additionally, we present a small proof-of-concept
study to show that IVGs generated from VLL information can improve inter-reader
correlation in AAC scoring, addressing two key areas of disagreement in expert
AAC-24 scoring: IVG placement and quality control for full abdominal aorta
assessment. The code for this work can be found at
https://github.com/zaidilyas89/VerteNet.
|
2502.02100
|
Topic Modeling in Marathi
|
cs.CL cs.LG
|
While topic modeling in English has become a prevalent and well-explored
area, venturing into topic modeling for Indic languages remains relatively
rare. The limited availability of resources, diverse linguistic structures, and
unique challenges posed by Indic languages contribute to the scarcity of
research and applications in this domain. Despite the growing interest in
natural language processing and machine learning, there exists a noticeable gap
in the comprehensive exploration of topic modeling methodologies tailored
specifically for languages such as Hindi, Marathi, Tamil, and others. In this
paper, we examine several topic modeling approaches applied to the Marathi
language. Specifically, we compare various BERT and non-BERT approaches,
including multilingual and monolingual BERT models, using topic coherence and
topic diversity as evaluation metrics. Our analysis provides insights into the
performance of these approaches for Marathi language topic modeling. The key
finding of the paper is that BERTopic, when combined with BERT models trained
on Indic languages, outperforms LDA in terms of topic modeling performance.
|
2502.02103
|
Neural Networks Learn Distance Metrics
|
cs.LG cs.AI stat.ML
|
Neural networks may naturally favor distance-based representations, where
smaller activations indicate closer proximity to learned prototypes. This
contrasts with intensity-based approaches, which rely on activation magnitudes.
To test this hypothesis, we conducted experiments with six MNIST architectural
variants constrained to learn either distance or intensity representations. Our
results reveal that the underlying representation affects model performance. We
develop a novel geometric framework that explains these findings and introduce
OffsetL2, a new architecture based on Mahalanobis distance equations, to
further validate this framework. This work highlights the importance of
considering distance-based learning in neural network design.
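A distance-based unit in the sense described above outputs the Mahalanobis distance to a learned prototype, so smaller activations indicate closer inputs. A minimal sketch follows; OffsetL2's exact parameterization is not given in the abstract, so the prototype and inverse covariance here are stand-ins:

```python
import numpy as np

def mahalanobis(x, mu, cov_inv):
    """Mahalanobis distance sqrt((x - mu)^T S^{-1} (x - mu)) of input x
    to a learned prototype mu; with S = I it reduces to the Euclidean
    distance, so this generalizes a plain L2 distance unit."""
    d = np.asarray(x, dtype=float) - mu
    return float(np.sqrt(d @ cov_inv @ d))
```
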
|
2502.02104
|
Concept-Aware Latent and Explicit Knowledge Integration for Enhanced
Cognitive Diagnosis
|
cs.LG
|
Cognitive diagnosis infers students' mastery of specific knowledge concepts
from historical response logs. However, existing cognitive diagnostic models
(CDMs) represent student proficiency from a unidimensional perspective, which
cannot comprehensively assess mastery of each knowledge concept. Moreover, the
Q-matrix binarizes the relationship between exercises and knowledge concepts
and therefore cannot represent their latent relationships. In particular, as
the granularity of knowledge attributes becomes finer, the Q-matrix grows
correspondingly incomplete, and the sparse binary (0/1) representation fails
to capture the intricate relationships among knowledge concepts. To address
these issues, we propose a Concept-aware Latent and Explicit Knowledge Integration
model for cognitive diagnosis (CLEKI-CD). Specifically, a multidimensional
vector is constructed according to the students' mastery and exercise
difficulty for each knowledge concept from multiple perspectives, which
enhances the representation capabilities of the model. Moreover, a latent
Q-matrix is generated by our proposed attention-based knowledge aggregation
method, and it can uncover the coverage degree of exercises over latent
knowledge. The latent Q-matrix can supplement the sparse explicit Q-matrix with
the inherent relationships among knowledge concepts, and mitigate the knowledge
coverage problem. Furthermore, we employ a combined cognitive diagnosis layer
to integrate both latent and explicit knowledge, further enhancing cognitive
diagnosis performance. Extensive experiments on real-world datasets demonstrate
that CLEKI-CD outperforms the state-of-the-art models. The proposed CLEKI-CD is
promising in practical applications in the field of intelligent education, as
it exhibits good interpretability with diagnostic results.
|
2502.02109
|
Causally-informed Deep Learning towards Explainable and Generalizable
Outcomes Prediction in Critical Care
|
cs.LG cs.AI
|
Recent advances in deep learning (DL) have prompted the development of
high-performing early warning score (EWS) systems, predicting clinical
deteriorations such as acute kidney injury, acute myocardial infarction, or
circulatory failure. DL models have proven to be powerful tools for various
tasks but come with the cost of lacking interpretability and limited
generalizability, hindering their clinical applications. To develop a practical
EWS system applicable to various outcomes, we propose a causally-informed
explainable early prediction model, which leverages causal discovery to
identify the underlying causal relationships behind a prediction and thus
offers two unique advantages: it provides an explicit interpretation of the
prediction while exhibiting decent performance in unfamiliar environments.
Benefiting from these features, our approach achieves superior accuracy for 6
different critical deteriorations and better generalizability across different
patient groups, compared to various baseline algorithms. In addition, we
provide explicit causal pathways to serve as references for assisting clinical
diagnosis and potential interventions. The proposed approach enhances the
practical application of deep learning in various medical scenarios.
|
2502.02112
|
The Induced Matching Distance: A Novel Topological Metric with
Applications in Robotics
|
math.AT cs.RO
|
This paper introduces the induced matching distance, a novel topological
metric designed to compare discrete structures represented by a symmetric
non-negative function. We apply this notion to analyze agent trajectories over
time. We use dynamic time warping to measure trajectory similarity and compute
the 0-dimensional persistent homology to identify relevant connected
components, which, in our context, correspond to groups of similar
trajectories. To track the evolution of these components across time, we
compute induced matching distances, which preserve the coherence of their
dynamic behavior. We then obtain a 1-dimensional signal that quantifies the
consistency of trajectory groups over time. Our experiments demonstrate that
our approach effectively differentiates between various agent behaviors,
highlighting its potential as a robust tool for topological analysis in
robotics and related fields.
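The trajectory-similarity step is standard dynamic time warping; a minimal sketch for 1-D trajectories follows (the paper's trajectories may be higher-dimensional, and the persistent-homology and matching-distance computations are not shown):

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping distance between two 1-D trajectories via the
    standard O(len(a) * len(b)) recurrence; the pairwise DTW matrix over
    trajectories is a symmetric non-negative function of the kind the
    induced matching distance is built on."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of: insertion, deletion, match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Unlike a plain pointwise distance, DTW tolerates re-timing: a trajectory and its time-stretched copy have distance zero.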
|
2502.02118
|
BRIDLE: Generalized Self-supervised Learning with Quantization
|
cs.LG cs.CV
|
Self-supervised learning has been a powerful approach for learning meaningful
representations from unlabeled data across various domains, reducing the
reliance on large labeled datasets. Inspired by BERT's success in capturing
deep bidirectional contexts in natural language processing, similar frameworks
have been adapted to other modalities such as audio, with models like BEATs
extending the bidirectional training paradigm to audio signals using vector
quantization (VQ). However, these frameworks face challenges, notably their
dependence on a single codebook for quantization, which may not capture the
complex, multifaceted nature of signals. In addition, inefficiencies in
codebook utilization lead to underutilized code vectors. To address these
limitations, we introduce BRIDLE (Bidirectional Residual Quantization
Interleaved Discrete Learning Encoder), a self-supervised encoder pretraining
framework that incorporates residual quantization (RQ) into the bidirectional
training process, and is generalized for pretraining with audio, image, and
video. Using multiple hierarchical codebooks, RQ enables fine-grained
discretization in the latent space, enhancing representation quality. BRIDLE
involves an interleaved training procedure between the encoder and tokenizer.
We evaluate BRIDLE on audio understanding tasks using classification
benchmarks, achieving state-of-the-art results, and demonstrate competitive
performance on image classification and video classification tasks, showing
consistent improvements over traditional VQ methods in downstream performance.
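The residual quantization at the core of BRIDLE can be sketched in a few lines: each codebook quantizes the residual left by the previous stage. This is a minimal NumPy sketch of generic RQ; BRIDLE's codebook training and encoder-tokenizer interleaving are not shown:

```python
import numpy as np

def residual_quantize(x, codebooks):
    """Quantize each vector in x with a hierarchy of codebooks: stage t
    quantizes the residual left by stages 1..t-1, giving a finer
    discretization than any single codebook of the same total size."""
    x = np.asarray(x, dtype=float)
    residual = x.copy()
    recon = np.zeros_like(x)
    codes = []
    for cb in codebooks:                    # cb: (num_codes, dim)
        dist = np.linalg.norm(residual[:, None, :] - cb[None, :, :], axis=-1)
        idx = dist.argmin(axis=1)           # nearest code per vector
        q = cb[idx]
        codes.append(idx)
        recon += q
        residual -= q                       # hand the residual to the next stage
    return np.stack(codes, axis=1), recon
```

With codebooks that tile the space at coarse then fine scales, each additional stage shrinks the reconstruction error, which is the fine-grained latent discretization the abstract refers to.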
|
2502.02121
|
BILBO: BILevel Bayesian Optimization
|
cs.LG stat.ML
|
Bilevel optimization is characterized by a two-level optimization structure,
where the upper-level problem is constrained by optimal lower-level solutions,
and such structures are prevalent in real-world problems. The constraint by
optimal lower-level solutions poses significant challenges, especially in
noisy, constrained, and derivative-free settings, as repeating lower-level
optimizations is sample inefficient and predicted lower-level solutions may be
suboptimal. We present BILevel Bayesian Optimization (BILBO), a novel Bayesian
optimization algorithm for general bilevel problems with blackbox functions,
which optimizes both upper- and lower-level problems simultaneously, without
the repeated lower-level optimization required by existing methods. BILBO
samples from confidence-bound-based trusted sets, which bound the lower-level
suboptimality. Moreover, BILBO selects only one function
query per iteration, where the function query selection strategy incorporates
the uncertainty of estimated lower-level solutions and includes a conditional
reassignment of the query to encourage exploration of the lower-level
objective. The performance of BILBO is theoretically guaranteed with a
sublinear regret bound for commonly used kernels and is empirically evaluated
on several synthetic and real-world problems.
|
2502.02129
|
Deep Neural Cellular Potts Models
|
cs.LG q-bio.QM
|
The cellular Potts model (CPM) is a powerful computational method for
simulating collective spatiotemporal dynamics of biological cells. To drive the
dynamics, CPMs rely on physics-inspired Hamiltonians. However, as first
principles remain elusive in biology, these Hamiltonians only approximate the
full complexity of real multicellular systems. To address this limitation, we
propose NeuralCPM, a more expressive cellular Potts model that can be trained
directly on observational data. At the core of NeuralCPM lies the Neural
Hamiltonian, a neural network architecture that respects universal symmetries
in collective cellular dynamics. Moreover, this approach enables seamless
integration of domain knowledge by combining known biological mechanisms and
the expressive Neural Hamiltonian into a hybrid model. Our evaluation with
synthetic and real-world multicellular systems demonstrates that NeuralCPM is
able to model cellular dynamics that cannot be accounted for by traditional
analytical Hamiltonians.
|
2502.02132
|
How Memory in Optimization Algorithms Implicitly Modifies the Loss
|
cs.LG cs.AI math.OC stat.ML
|
In modern optimization methods used in deep learning, each update depends on
the history of previous iterations, often referred to as memory, and this
dependence decays fast as the iterates go further into the past. For example,
gradient descent with momentum has exponentially decaying memory through
exponentially averaged past gradients. We introduce a general technique for
identifying a memoryless algorithm that approximates an optimization algorithm
with memory. It is obtained by replacing all past iterates in the update by the
current one, and then adding a correction term arising from memory (also a
function of the current iterate). This correction term can be interpreted as a
perturbation of the loss, and the nature of this perturbation can inform how
memory implicitly (anti-)regularizes the optimization dynamics. As an
application of our theory, we find that Lion does not have the kind of implicit
anti-regularization induced by memory that AdamW does, providing a theory-based
explanation for Lion's better generalization performance recently documented.
|
2502.02133
|
Synthesis of Model Predictive Control and Reinforcement Learning: Survey
and Classification
|
eess.SY cs.AI cs.LG cs.SY
|
Model predictive control (MPC) and reinforcement learning (RL) are two
successful control techniques for Markov decision processes. Both approaches
are derived from similar fundamental
principles, and both are widely used in practical applications, including
robotics, process control, energy systems, and autonomous driving. Despite
their similarities, MPC and RL follow distinct paradigms that emerged from
diverse communities and different requirements. Various technical
discrepancies, particularly the role of an environment model as part of the
algorithm, lead to methodologies with nearly complementary advantages. Due to
their orthogonal benefits, research interest in combination methods has
recently increased significantly, leading to a large and growing set of complex
ideas leveraging MPC and RL. This work illuminates the differences,
similarities, and fundamentals that allow for different combination algorithms
and categorizes existing work accordingly. Particularly, we focus on the
versatile actor-critic RL approach as a basis for our categorization and
examine how the online optimization approach of MPC can be used to improve the
overall closed-loop performance of a policy.
|
2502.02135
|
Standard Neural Computation Alone Is Insufficient for Logical
Intelligence
|
cs.AI cs.LG
|
Neural networks, as currently designed, fall short of achieving true logical
intelligence. Modern AI models rely on standard neural computation
(inner-product-based transformations and nonlinear activations) to
approximate patterns from data. While effective for inductive learning, this
architecture lacks the structural guarantees necessary for deductive inference
and logical consistency. As a result, deep networks struggle with rule-based
reasoning, structured generalization, and interpretability without extensive
post-hoc modifications. This position paper argues that standard neural layers
must be fundamentally rethought to integrate logical reasoning. We advocate for
Logical Neural Units (LNUs): modular components that embed differentiable
approximations of logical operations (e.g., AND, OR, NOT) directly within
neural architectures. We critique existing neurosymbolic approaches, highlight
the limitations of standard neural computation for logical inference, and
present LNUs as a necessary paradigm shift in AI. Finally, we outline a roadmap
for implementation, discussing theoretical foundations, architectural
integration, and key challenges for future research.
|
2502.02140
|
An Information-Theoretic Analysis of Thompson Sampling with Infinite
Action Spaces
|
stat.ML cs.LG
|
This paper studies the Bayesian regret of the Thompson Sampling algorithm for
bandit problems, building on the information-theoretic framework introduced by
Russo and Van Roy (2015). Specifically, it extends the rate-distortion analysis
of Dong and Van Roy (2018), which provides near-optimal bounds for linear
bandits. A limitation of these results is the assumption of a finite action
space. We address this by extending the analysis to settings with infinite and
continuous action spaces. Additionally, we specialize our results to bandit
problems with expected rewards that are Lipschitz continuous with respect to
the action space, deriving a regret bound that explicitly accounts for the
complexity of the action space.
|
2502.02144
|
DOC-Depth: A novel approach for dense depth ground truth generation
|
cs.CV cs.RO
|
Accurate depth information is essential for many computer vision
applications. Yet, no available dataset recording method allows for fully dense
accurate depth estimation in a large-scale dynamic environment. In this paper,
we introduce DOC-Depth, a novel, efficient and easy-to-deploy approach for
dense depth generation from any LiDAR sensor. After reconstructing a
consistent dense 3D environment using LiDAR odometry, we automatically address
dynamic object occlusions thanks to DOC, our state-of-the-art dynamic object
classification method. Additionally, DOC-Depth is fast and scalable, allowing
for the creation of unbounded datasets in terms of size and time. We
demonstrate the effectiveness of our approach on the KITTI dataset, improving
its density from 16.1% to 71.2%, and release this new fully dense depth
annotation to facilitate future research in the domain. We also showcase
results using various LiDAR sensors and in multiple environments. All software
components are publicly available for the research community.
|
2502.02145
|
Risk-Aware Driving Scenario Analysis with Large Language Models
|
cs.AI cs.CL cs.RO
|
Large Language Models (LLMs) can capture nuanced contextual relationships,
reasoning, and complex problem-solving. By leveraging their ability to process
and interpret large-scale information, LLMs have shown potential to address
domain-specific challenges, including those in autonomous driving systems. This
paper proposes a novel framework that leverages LLMs for risk-aware analysis of
generated driving scenarios. We hypothesize that LLMs can effectively evaluate
whether driving scenarios generated by autonomous driving testing simulators
are safety-critical. To validate this hypothesis, we conducted an empirical
evaluation to assess the effectiveness of LLMs in performing this task. This
framework will also provide feedback to generate new safety-critical
scenarios by using an adversarial method to modify existing non-critical
scenarios and to test their effectiveness in validating motion planning
algorithms. Code and
scenarios are available at:
https://github.com/yuangao-tum/Riskaware-Scenario-analyse
|
2502.02150
|
On the Guidance of Flow Matching
|
cs.CV cs.LG
|
Flow matching has shown state-of-the-art performance in various generative
tasks, ranging from image generation to decision-making, where guided
generation is pivotal. However, the guidance of flow matching is more general
than and thus substantially different from that of its predecessor, diffusion
models. Therefore, the challenge in guidance for general flow matching remains
largely underexplored. In this paper, we propose the first framework of general
guidance for flow matching. From this framework, we derive a family of guidance
techniques that can be applied to general flow matching. These include a new
training-free asymptotically exact guidance, novel training losses for
training-based guidance, and two classes of approximate guidance that cover
classical gradient guidance methods as special cases. We theoretically
investigate these different methods to give a practical guideline for choosing
suitable methods in different scenarios. Experiments on synthetic datasets,
image inverse problems, and offline reinforcement learning demonstrate the
effectiveness of our proposed guidance methods and verify the correctness of
our flow matching guidance framework. Code to reproduce the experiments can be
found at https://github.com/AI4Science-WestlakeU/flow_guidance.
|
2502.02153
|
Vulnerability Mitigation for Safety-Aligned Language Models via
Debiasing
|
cs.AI cs.CL cs.LG
|
Safety alignment is an essential research topic for real-world AI
applications. Despite the multifaceted nature of safety and trustworthiness in
AI, current safety alignment methods often focus on a comprehensive notion of
safety. By carefully assessing models from the existing safety-alignment
methods, we found that, while they generally improved overall safety
performance, they failed to ensure safety in specific categories. Our study
first identified the difficulty of eliminating such vulnerabilities without
sacrificing the model's helpfulness. We observed that, while smaller KL penalty
parameters, increased training iterations, and dataset cleansing can enhance
safety, they do not necessarily improve the trade-off between safety and
helpfulness. We discovered that safety alignment could even induce undesired
effects and result in a model that prefers generating negative tokens leading
to rejective responses, regardless of the input context. To address this, we
introduced a learning-free method, Token-level Safety-Debiased Inference
(TSDI), to estimate and correct this bias during the generation process using
randomly constructed prompts. Our experiments demonstrated that our method
could enhance the model's helpfulness while maintaining safety, thus improving
the trade-off Pareto-front.
|
2502.02163
|
Progressive Correspondence Regenerator for Robust 3D Registration
|
cs.CV
|
Obtaining enough high-quality correspondences is crucial for robust
registration. Existing correspondence refinement methods mostly follow the
paradigm of outlier removal, which either fails to correctly identify the
accurate correspondences under extreme outlier ratios, or select too few
correct correspondences to support robust registration. To address this
challenge, we propose a novel approach named Regor, a progressive
correspondence regenerator that generates higher-quality matches while
remaining sufficiently robust to numerous outliers. In each iteration, we first apply
prior-guided local grouping and generalized mutual matching to generate the
local region correspondences. A powerful center-aware three-point consistency
is then presented to achieve local correspondence correction, instead of
removal. Further, we employ global correspondence refinement to obtain accurate
correspondences from a global perspective. Through progressive iterations, this
process yields a large number of high-quality correspondences. Extensive
experiments on both indoor and outdoor datasets demonstrate that the proposed
Regor significantly outperforms existing outlier removal techniques. More
critically, our approach obtains 10 times more correct correspondences than
outlier removal methods. As a result, our method is able to achieve robust
registration even with weak features. The code will be released.
|