| id | title | categories | abstract |
|---|---|---|---|
2502.00687
|
A Flexible Precision Scaling Deep Neural Network Accelerator with
Efficient Weight Combination
|
cs.AR cs.SY eess.SY
|
Deploying mixed-precision neural networks on edge devices is friendly to
hardware resource and power budgets. Supporting fully mixed-precision neural
network inference requires flexible hardware accelerators for continuously
varying precision operations. However, previous works suffer from low hardware
utilization and high overhead in their reconfigurable logic. In this paper, we
propose an efficient accelerator for 2- to 8-bit precision scaling with serial
activation inputs and parallel preloaded weights. First, we set two loading
modes for the weight operands and decompose the weights into the corresponding
bitwidths, which extends the supported weight precisions efficiently. Then, to
improve the hardware utilization of low-precision operations, we design an
architecture that performs bit-serial MAC operations with a systolic dataflow,
in which the partial sums are combined spatially. Furthermore, we design an
efficient carry-save adder tree supporting both signed and unsigned summation
across rows. Experimental results show that the proposed accelerator,
synthesized in TSMC 28nm CMOS technology, achieves a peak throughput of
4.09 TOPS and a peak energy efficiency of 68.94 TOPS/W at 2/2-bit operations.
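As a rough behavioral illustration of the bit-serial MAC idea described in the abstract (a functional Python sketch, not the hardware design; the function name and bit width are made up):

```python
def bit_serial_mac(activations, weights, act_bits=8):
    """Behavioral model of a bit-serial MAC: activation bits arrive one per
    cycle (LSB first) while the full weights are held in parallel; each
    cycle's partial product is shifted by the bit position and accumulated."""
    acc = 0
    for cycle in range(act_bits):
        for a, w in zip(activations, weights):
            bit = (a >> cycle) & 1       # serial activation bit this cycle
            acc += (bit * w) << cycle    # weight stays in parallel form
    return acc

# Equals the plain dot product: 3*2 + 5*4
assert bit_serial_mac([3, 5], [2, 4]) == 26
```

Because only the activations are serialized, signed (negative) weights work unchanged, which loosely mirrors why the adder tree must handle both signed and unsigned summation.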
|
2502.00688
|
High-Order Matching for One-Step Shortcut Diffusion Models
|
cs.CV cs.AI cs.LG
|
One-step shortcut diffusion models [Frans, Hafner, Levine and Abbeel, ICLR
2025] have shown potential in vision generation, but their reliance on
first-order trajectory supervision is fundamentally limited. The Shortcut
model's simplistic velocity-only approach fails to capture intrinsic manifold
geometry, leading to erratic trajectories, poor geometric alignment, and
instability, especially in high-curvature regions. These shortcomings stem from
its inability to model mid-horizon dependencies or complex distributional
features, leaving it ill-equipped for robust generative modeling. In this work,
we introduce HOMO (High-Order Matching for One-Step Shortcut Diffusion), a
game-changing framework that leverages high-order supervision to revolutionize
distribution transportation. By incorporating acceleration, jerk, and beyond,
HOMO not only fixes the flaws of the Shortcut model but also achieves
unprecedented smoothness, stability, and geometric precision. Theoretically, we
prove that HOMO's high-order supervision ensures superior approximation
accuracy, outperforming first-order methods. Empirically, HOMO dominates in
complex settings, particularly in high-curvature regions where the Shortcut
model struggles. Our experiments show that HOMO delivers smoother trajectories
and better distributional alignment, setting a new standard for one-step
generative models.
|
2502.00690
|
Dissecting Submission Limit in Desk-Rejections: A Mathematical Analysis
of Fairness in AI Conference Policies
|
cs.LG cs.AI cs.CY cs.DL
|
As AI research surges in both impact and volume, conferences have imposed
submission limits to maintain paper quality and alleviate organizational
pressure. In this work, we examine the fairness of desk-rejection systems under
submission limits and reveal that existing practices can result in substantial
inequities. Specifically, we formally define the paper submission limit problem
and identify a critical dilemma: when the number of authors exceeds three, it
becomes impossible to reject papers solely based on excessive submissions
without negatively impacting innocent authors. Thus, this issue may unfairly
affect early-career researchers, as their submissions may be penalized due to
co-authors with significantly higher submission counts, while senior
researchers with numerous papers face minimal consequences. To address this, we
propose an optimization-based fairness-aware desk-rejection mechanism and
formally define two fairness metrics: individual fairness and group fairness.
We prove that optimizing individual fairness is NP-hard, whereas group fairness
can be efficiently optimized via linear programming. Through case studies, we
demonstrate that our proposed system ensures greater equity than existing
methods, including those used in CVPR 2025, offering a more socially just
approach to managing excessive submissions in AI conferences.
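The underlying combinatorial problem can be made concrete with a toy instance (all numbers hypothetical; the paper optimizes fairness objectives, with group fairness solved via linear programming, whereas this sketch simply brute-forces the integer feasibility version):

```python
from itertools import product

# Toy instance: 5 papers, 3 authors; papers[p] is the author set of paper p.
papers = [{0, 1}, {0, 2}, {0}, {1, 2}, {2}]
limit = 2  # hypothetical per-author submission cap

def feasible(reject):
    """After desk-rejections, every author keeps at most `limit` papers."""
    for a in range(3):
        kept = sum(1 for p, s in enumerate(papers) if a in s and not reject[p])
        if kept > limit:
            return False
    return True

# Brute-force the smallest rejection set (an LP relaxation scales instead).
best = min((r for r in product([0, 1], repeat=len(papers)) if feasible(r)),
           key=sum)
print(sum(best))  # minimum number of desk-rejections
```

Note how rejecting paper 1 alone satisfies both over-limit authors, yet it also penalizes its other co-author, which is exactly the coupling the abstract identifies for papers with more than three authors.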
|
2502.00691
|
Learning Autonomous Code Integration for Math Language Models
|
cs.AI cs.CL cs.LG
|
Recent advances in mathematical problem-solving with language models (LMs)
integrate chain-of-thought (CoT) reasoning and code execution to harness their
complementary strengths. However, existing hybrid frameworks exhibit a critical
limitation: they depend on externally dictated instructions or rigid
code-integration templates, lacking metacognitive awareness -- the capacity to
dynamically evaluate intrinsic capabilities and autonomously determine when and
how to integrate tools. This rigidity motivates our study of autonomous code
integration, enabling models to adapt tool-usage strategies as their reasoning
abilities evolve during training.
While reinforcement learning (RL) shows promise for boosting LLM reasoning at
scale (e.g., DeepSeek-R1), we demonstrate its inefficiency in learning
autonomous code integration due to inadequate exploration of the vast
combinatorial space of CoT-code interleaving patterns. To address this
challenge, we propose a novel Expectation-Maximization (EM) framework that
synergizes structured exploration (E-step) with off-policy RL optimization
(M-step), creating a self-reinforcing cycle between metacognitive tool-use
decisions and evolving capabilities. Experiments reveal our method achieves
superior results through improved exploration. Notably, our 7B model improves
by over 11% on MATH500 and by 9.4% on AIME without o1-like CoT.
|
2502.00694
|
Leveraging Large Language Models to Predict Antibody Biological Activity
Against Influenza A Hemagglutinin
|
cs.LG cs.AI q-bio.QM
|
Monoclonal antibodies (mAbs) represent one of the most prevalent FDA-approved
modalities for treating autoimmune diseases, infectious diseases, and cancers.
However, discovery and development of therapeutic antibodies remains a
time-consuming and expensive process. Recent advancements in machine learning
(ML) and artificial intelligence (AI) have shown significant promise in
revolutionizing antibody discovery and optimization. In particular, models that
predict antibody biological activity enable in-silico evaluation of binding and
functional properties; such models can prioritize antibodies with the highest
likelihoods of success in costly and time-intensive laboratory testing
procedures. We here explore an AI model for predicting the binding and receptor
blocking activity of antibodies against influenza A hemagglutinin (HA)
antigens. Our present model is developed with the MAMMAL framework for
biologics discovery to predict antibody-antigen interactions using only
sequence information. To evaluate the model's performance, we tested it under
various data split conditions to mimic real-world scenarios.
Our models achieved an AUROC $\geq$ 0.91 for predicting the activity of
existing antibodies against seen HAs and an AUROC of 0.9 for unseen HAs. For
novel antibody activity prediction, the AUROC was 0.73, which further declined
to 0.63-0.66 under stringent constraints on similarity to existing antibodies.
These results demonstrate the potential of AI foundation models to transform
antibody design by reducing dependence on extensive laboratory testing and
enabling more efficient prioritization of antibody candidates. Moreover, our
findings emphasize the critical importance of diverse and comprehensive
antibody datasets to improve the generalization of prediction models,
particularly for novel antibody development.
|
2502.00695
|
TMI-CLNet: Triple-Modal Interaction Network for Chronic Liver Disease
Prognosis From Imaging, Clinical, and Radiomic Data Fusion
|
cs.CV cs.AI
|
Chronic liver disease represents a significant health challenge worldwide, and
accurate prognostic evaluations are essential for personalized treatment plans.
Recent evidence suggests that integrating multimodal data, such as computed
tomography imaging, radiomic features, and clinical information, can provide
more comprehensive prognostic information. However, modalities have an inherent
heterogeneity, and incorporating additional modalities may exacerbate the
challenges of heterogeneous data fusion. Moreover, existing multimodal fusion
methods often struggle to adapt to richer medical modalities, making it
difficult to capture inter-modal relationships. To overcome these limitations,
we present the Triple-Modal Interaction Chronic Liver Network (TMI-CLNet).
Specifically, we develop an Intra-Modality Aggregation module and a
Triple-Modal Cross-Attention Fusion module, which are designed to eliminate
intra-modality redundancy and extract cross-modal information, respectively.
Furthermore, we design a Triple-Modal Feature Fusion loss function to align
feature representations across modalities. Extensive experiments on the liver
prognosis dataset demonstrate that our approach significantly outperforms
existing state-of-the-art unimodal models and other multi-modal techniques. Our
code is available at https://github.com/Mysterwll/liver.git.
|
2502.00698
|
MM-IQ: Benchmarking Human-Like Abstraction and Reasoning in Multimodal
Models
|
cs.AI cs.CV
|
IQ testing has served as a foundational methodology for evaluating human
cognitive capabilities, deliberately decoupling assessment from linguistic
background, language proficiency, or domain-specific knowledge to isolate core
competencies in abstraction and reasoning. Yet, artificial intelligence
research currently lacks systematic benchmarks to quantify these critical
cognitive dimensions in multimodal systems. To address this critical gap, we
propose MM-IQ, a comprehensive evaluation framework comprising 2,710
meticulously curated test items spanning 8 distinct reasoning paradigms.
Through systematic evaluation of leading open-source and proprietary
multimodal models, our benchmark reveals striking limitations: even
state-of-the-art architectures achieve only marginally superior performance to
random chance (27.49% vs. 25% baseline accuracy). This substantial performance
chasm highlights the inadequacy of current multimodal systems in approximating
fundamental human reasoning capacities, underscoring the need for
paradigm-shifting advancements to bridge this cognitive divide.
|
2502.00700
|
S2CFormer: Reorienting Learned Image Compression from Spatial
Interaction to Channel Aggregation
|
cs.CV eess.IV
|
Transformers have achieved significant success in learned image compression
(LIC), with Swin Transformers emerging as the mainstream choice for nonlinear
transforms. A common belief is that their sophisticated spatial operations
contribute most to their efficacy. However, the crucial role of the
feed-forward network (FFN) based Channel Aggregation module within the
transformer architecture has been largely overlooked, and the over-design of
spatial operations leads to a suboptimal trade-off between decoding latency and
R-D performance. In this paper, we reevaluate the key factors behind the
competence of transformers in LIC. By replacing spatial operations with
identity mapping, we are surprised to find that channel operations alone can
approach the R-D performance of the leading methods. This solid lower bound of
performance emphasizes that the presence of channel aggregation is more
essential for the LIC model to achieve competitive performance, while the
previously complex spatial interactions are partly redundant. Based on this
insight, we initiate the "S2CFormer" paradigm, a general architecture that
reorients the focus of LIC from Spatial Interaction to Channel Aggregation. We
present two instantiations of the S2CFormer: S2C-Conv, and S2C-Attention. Each
one incorporates a simple operator for spatial interaction and serves as
nonlinear transform blocks for our LIC models. Both models demonstrate
state-of-the-art (SOTA) R-D performance and significantly faster decoding
speed. These results also motivate further exploration of advanced FFN
structures to enhance the R-D performance while maintaining model efficiency.
With these foundations, we introduce S2C-Hybrid, an enhanced LIC model that
combines the strengths of different S2CFormer instantiations. This model
outperforms all the existing methods on several datasets, setting a new
benchmark for efficient and high-performance LIC.
|
2502.00705
|
Optimization for Neural Operators can Benefit from Width
|
cs.LG math.OC
|
Neural Operators that directly learn mappings between function spaces, such
as Deep Operator Networks (DONs) and Fourier Neural Operators (FNOs), have
received considerable attention. Despite the universal approximation guarantees
for DONs and FNOs, there is currently no optimization convergence guarantee for
learning such networks using gradient descent (GD). In this paper, we address
this open problem by presenting a unified framework for optimization based on
GD and applying it to establish convergence guarantees for both DONs and FNOs.
In particular, we show that the losses associated with both of these neural
operators satisfy two conditions -- restricted strong convexity (RSC) and
smoothness -- that guarantee a decrease in their loss values under GD.
Remarkably, these two conditions are satisfied for each neural operator due to
different reasons associated with the architectural differences of the
respective models. One takeaway that emerges from the theory is that wider
networks should lead to better optimization convergence for both DONs and FNOs.
We present empirical results on canonical operator learning problems to support
our theoretical results.
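The descent mechanism the abstract appeals to can be sketched with a standard argument (generic constants $\alpha, \beta$; this illustrates how RSC-plus-smoothness conditions yield a guaranteed GD decrease, and is not the paper's exact statement):

```latex
% \beta-smoothness bounds the loss along a GD step \theta_{t+1} = \theta_t - \eta \nabla L(\theta_t):
L(\theta_{t+1}) \le L(\theta_t) - \eta\Bigl(1 - \tfrac{\beta\eta}{2}\Bigr)\|\nabla L(\theta_t)\|^2
% With \eta = 1/\beta this gives a decrease of \tfrac{1}{2\beta}\|\nabla L(\theta_t)\|^2.
% An RSC-type condition lower-bounds the gradient by the suboptimality,
% \|\nabla L(\theta)\|^2 \ge 2\alpha\,(L(\theta) - L^*), turning the decrease into a contraction:
L(\theta_{t+1}) - L^{*} \le \Bigl(1 - \tfrac{\alpha}{\beta}\Bigr)\bigl(L(\theta_t) - L^{*}\bigr)
```

Under this reading, the width claim is intuitive: wider networks tend to improve the effective $\alpha/\beta$ ratio, tightening the contraction factor.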
|
2502.00706
|
Model Provenance Testing for Large Language Models
|
cs.CR cs.CL cs.LG
|
Large language models are increasingly customized through fine-tuning and
other adaptations, creating challenges in enforcing licensing terms and
managing downstream impacts. Tracking model origins is crucial both for
protecting intellectual property and for identifying derived models when biases
or vulnerabilities are discovered in foundation models. We address this
challenge by developing a framework for testing model provenance: whether one
model is derived from another. Our approach is based on the key observation
that real-world model derivations preserve significant similarities in model
outputs that can be detected through statistical analysis. Using only black-box
access to models, we employ multiple hypothesis testing to compare model
similarities against a baseline established by unrelated models. On two
comprehensive real-world benchmarks spanning models from 30M to 4B parameters
and comprising over 600 models, our tester achieves 90-95% precision and 80-90%
recall in identifying derived models. These results demonstrate the viability
of systematic provenance verification in production environments even when only
API access is available.
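The statistical test can be sketched on toy stand-ins (a hypothetical illustration; the functions, toy "models", and threshold are made up, not the paper's benchmark or test statistic):

```python
import random
import statistics

def agreement(m1, m2, prompts):
    """Fraction of prompts on which two black-box models agree exactly."""
    return sum(m1(p) == m2(p) for p in prompts) / len(prompts)

def provenance_score(candidate_sim, baseline_sims):
    """How many standard deviations the candidate pair sits above the
    baseline similarity distribution of unrelated model pairs."""
    mu = statistics.mean(baseline_sims)
    sigma = statistics.stdev(baseline_sims)
    return (candidate_sim - mu) / sigma

prompts = list(range(200))

# Toy black-box models: the derived model mostly copies its base.
base = lambda p: p % 7
derived = lambda p: p % 7 if p % 10 else (p % 7) + 1  # agrees 90% of the time

def make_unrelated(seed):
    rng = random.Random(seed)
    table = [rng.randrange(7) for _ in prompts]
    return lambda p: table[p]

baseline = [agreement(base, make_unrelated(s), prompts) for s in range(6)]
cand = agreement(base, derived, prompts)
print(round(cand, 2), round(provenance_score(cand, baseline), 1))
```

The key observation carries through even in this toy: a genuine derivation preserves far more output agreement than any unrelated pair, so a simple deviation score separates the two cleanly.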
|
2502.00708
|
PhiP-G: Physics-Guided Text-to-3D Compositional Scene Generation
|
cs.CV cs.AI
|
Text-to-3D asset generation has advanced significantly under the supervision
of 2D diffusion priors. However, when dealing with compositional scenes,
existing methods encounter several challenges: (1) failure to ensure that
composite scene layouts comply with physical laws; (2) difficulty in
accurately capturing the assets and relationships described in complex scene
descriptions; and (3) limited autonomous asset-generation capabilities among
layout approaches leveraging large language models (LLMs). To address these
challenges, we propose a novel framework for compositional scene generation,
PhiP-G, which seamlessly integrates generation techniques with layout guidance
based on a world model. Leveraging LLM-based agents, PhiP-G analyzes the
complex scene description to generate a scene graph, and integrates a
multimodal 2D generation agent and a 3D Gaussian generation method for
targeted asset creation. For the layout stage, PhiP-G employs a physical pool
with adhesion capabilities and a visual supervision agent, forming a world model for layout
prediction and planning. Extensive experiments demonstrate that PhiP-G
significantly enhances the generation quality and physical rationality of the
compositional scenes. Notably, PhiP-G attains state-of-the-art (SOTA)
performance in CLIP scores, achieves parity with the leading methods in
generation quality as measured by the T$^3$Bench, and improves efficiency by
24x.
|
2502.00709
|
RankFlow: A Multi-Role Collaborative Reranking Workflow Utilizing Large
Language Models
|
cs.IR
|
In an Information Retrieval (IR) system, reranking plays a critical role by
sorting candidate passages according to their relevance to a specific query.
This process demands a nuanced understanding of the variations among passages
linked to the query. In this work, we introduce RankFlow, a multi-role
reranking workflow that leverages the capabilities of Large Language Models
(LLMs) and role specializations to improve reranking performance. RankFlow
enlists LLMs to fulfill four distinct roles: the query Rewriter, the pseudo
Answerer, the passage Summarizer, and the Reranker. This orchestrated approach
enables RankFlow to: (1) accurately interpret queries, (2) draw upon LLMs'
extensive pre-existing knowledge, (3) distill passages into concise versions,
and (4) assess passages in a comprehensive manner, resulting in notably better
reranking results. Our experimental results reveal that RankFlow outperforms
existing leading approaches on widely recognized IR benchmarks, such as
TREC-DL, BEIR, and NovelEval. Additionally, we investigate the individual
contributions of each role in RankFlow. Code is available at
https://github.com/jincan333/RankFlow.
|
2502.00711
|
VIKSER: Visual Knowledge-Driven Self-Reinforcing Reasoning Framework
|
cs.CV cs.AI
|
Visual reasoning refers to the task of solving questions about visual
information. Current visual reasoning methods typically employ pre-trained
vision-language model (VLM) strategies or deep neural network approaches.
However, existing efforts are constrained by limited reasoning
interpretability and hindered by underspecification in the question text.
Additionally, the absence of fine-grained visual knowledge limits the precise
understanding of subject behavior in visual reasoning tasks.
To address these issues, we propose VIKSER (Visual Knowledge-Driven
Self-Reinforcing Reasoning Framework). Specifically, VIKSER, trained using
knowledge distilled from large language models, extracts fine-grained visual
knowledge with the assistance of visual relationship detection techniques.
Subsequently, VIKSER utilizes the fine-grained visual knowledge to paraphrase
underspecified questions. Additionally, we design a novel prompting
method called Chain-of-Evidence (CoE), which leverages the power of ``evidence
for reasoning'' to endow VIKSER with interpretable reasoning capabilities.
Meanwhile, the integration of self-reflection technology empowers VIKSER with
the ability to learn and improve from its mistakes. Experiments conducted on
widely used datasets demonstrate that VIKSER achieves new state-of-the-art
(SOTA) results in relevant tasks.
|
2502.00712
|
Registration-Enhanced Segmentation Method for Prostate Cancer in
Ultrasound Images
|
eess.IV cs.AI cs.CV
|
Prostate cancer is a major cause of cancer-related deaths in men, where early
detection greatly improves survival rates. Although MRI-TRUS fusion biopsy
offers superior accuracy by combining MRI's detailed visualization with TRUS's
real-time guidance, it is a complex and time-intensive procedure that relies
heavily on manual annotations, leading to potential errors. To address these
challenges, we propose a fully automatic MRI-TRUS fusion-based segmentation
method that identifies prostate tumors directly in TRUS images without
requiring manual annotations. Unlike traditional multimodal fusion approaches
that rely on naive data concatenation, our method integrates a
registration-segmentation framework to align and leverage spatial information
between MRI and TRUS modalities. This alignment enhances segmentation accuracy
and reduces reliance on manual effort. Our approach was validated on a dataset
of 1,747 patients from Stanford Hospital, achieving an average Dice coefficient
of 0.212, outperforming TRUS-only (0.117) and naive MRI-TRUS fusion (0.132)
methods, with significant improvements (p $<$ 0.01). This framework
demonstrates the potential for reducing the complexity of prostate cancer
diagnosis and provides a flexible architecture applicable to other multimodal
medical imaging tasks.
|
2502.00714
|
Harnessing Discrete Differential Geometry: A Virtual Playground for the
Bilayer Soft Robotics
|
cs.RO cond-mat.soft
|
Soft robots have garnered significant attention due to their promising
applications across various domains. A hallmark of these systems is their
bilayer structure, where strain mismatch caused by differential expansion
between layers induces complex deformations. Despite progress in theoretical
modeling and numerical simulation, accurately capturing their dynamic behavior,
especially during environmental interactions, remains challenging. This study
presents a novel simulation environment based on the Discrete Elastic Rod (DER)
model to address the challenge. By leveraging discrete differential geometry
(DDG), the DER approach offers superior convergence compared to conventional
methods like Finite Element Method (FEM), particularly in handling contact
interactions -- an essential aspect of soft robot dynamics in real-world
scenarios. Our simulation framework incorporates key features of bilayer
structures, including stretching, bending, twisting, and inter-layer coupling.
This enables the exploration of a wide range of dynamic behaviors for bilayer
soft robots, such as gripping, crawling, jumping, and swimming. The insights
gained from this work provide a robust foundation for the design and control of
advanced bilayer soft robotic systems.
|
2502.00716
|
UPL: Uncertainty-aware Pseudo-labeling for Imbalance Transductive Node
Classification
|
cs.LG
|
Graph-structured datasets often suffer from class imbalance, which
complicates node classification tasks. In this work, we address this issue by
first providing an upper bound on population risk for imbalanced transductive
node classification. We then propose a simple and novel algorithm,
Uncertainty-aware Pseudo-labeling (UPL). Our approach leverages pseudo-labels
assigned to unlabeled nodes to mitigate the adverse effects of imbalance on
classification accuracy. Furthermore, the UPL algorithm enhances the accuracy
of pseudo-labeling by reducing training noise of pseudo-labels through a novel
uncertainty-aware approach. We comprehensively evaluate the UPL algorithm
across various benchmark datasets, demonstrating its superior performance
compared to existing state-of-the-art methods.
|
2502.00717
|
MINT: Mitigating Hallucinations in Large Vision-Language Models via
Token Reduction
|
cs.CV
|
Hallucination has been a long-standing and inevitable problem that hinders
the application of Large Vision-Language Models (LVLMs) in domains that require
high reliability. Various methods pursue improvements through data annotation
or training strategies, yet place less emphasis on the LVLM's inherent
problems. To fill this gap, we delve into the attention mechanism of the
decoding process in the LVLM. Intriguingly, our investigation uncovers the
prevalent attention redundancy within the hierarchical architecture of the
LVLM, manifesting as overextended image processing in deep layers and an
overabundance of non-essential image tokens. Based on this observation, we
propose MINT, a novel training-free decoding strategy (MItigating
hallucinations via tokeN reducTion). Specifically, we dynamically intensify the
LVLM's local perception capability by masking its attention to irrelevant image
tokens. In addition, we use contrastive decoding that pushes the model to focus
more on those key image regions. Our full method aims to guide the model in
concentrating more on key visual elements during generation. Extensive
experimental results on several popular public benchmarks show that our
approach achieves a 4% improvement in mitigating hallucinations caused by
distracted perception compared to the original models. Meanwhile, our approach
is shown to make the model perceive 5% more visual points even though we
reduce the number of image tokens.
|
2502.00718
|
"I am bad": Interpreting Stealthy, Universal and Robust Audio Jailbreaks
in Audio-Language Models
|
cs.LG cs.SD eess.AS
|
The rise of multimodal large language models has introduced innovative
human-machine interaction paradigms but also significant challenges in machine
learning safety. Audio-Language Models (ALMs) are especially relevant due to
the intuitive nature of spoken communication, yet little is known about their
failure modes. This paper explores audio jailbreaks targeting ALMs, focusing on
their ability to bypass alignment mechanisms. We construct adversarial
perturbations that generalize across prompts, tasks, and even base audio
samples, demonstrating the first universal jailbreaks in the audio modality,
and show that these remain effective in simulated real-world conditions. Beyond
demonstrating attack feasibility, we analyze how ALMs interpret these audio
adversarial examples and reveal them to encode imperceptible first-person toxic
speech, suggesting that the most effective perturbations for eliciting toxic
outputs specifically embed linguistic features within the audio signal. These
results have important implications for understanding the interactions between
different modalities in multimodal models, and offer actionable insights for
enhancing defenses against adversarial audio attacks.
|
2502.00719
|
Vision and Language Reference Prompt into SAM for Few-shot Segmentation
|
cs.CV
|
Segment Anything Model (SAM) represents a large-scale segmentation model that
enables powerful zero-shot capabilities with flexible prompts. While SAM can
segment any object in zero-shot, it requires user-provided prompts for each
target image and does not attach any label information to masks. Few-shot
segmentation models addressed these issues by inputting annotated reference
images as prompts to SAM and can segment specific objects in target images
without user-provided prompts. Previous SAM-based few-shot segmentation models
only use annotated reference images as prompts, resulting in limited accuracy
due to a lack of reference information. In this paper, we propose a novel
few-shot segmentation model, Vision and Language reference Prompt into SAM
(VLP-SAM), that utilizes the visual information of the reference images and the
semantic information of the text labels by inputting not only images but also
language as reference information. In particular, VLP-SAM is a simple and
scalable structure with minimal learnable parameters, which inputs prompt
embeddings with vision-language information into SAM using a multimodal
vision-language model. To demonstrate the effectiveness of VLP-SAM, we
conducted experiments on the PASCAL-5i and COCO-20i datasets, and achieved high
performance in the few-shot segmentation task, outperforming the previous
state-of-the-art model by a large margin (6.3% and 9.5% in mIoU, respectively).
Furthermore, VLP-SAM demonstrates its generality in unseen objects that are not
included in the training data. Our code is available at
https://github.com/kosukesakurai1/VLP-SAM.
|
2502.00724
|
Learned Bayesian Cram\'er-Rao Bound for Unknown Measurement Models Using
Score Neural Networks
|
eess.SP cs.AI cs.LG stat.ML
|
The Bayesian Cram\'er-Rao bound (BCRB) is a crucial tool in signal processing
for assessing the fundamental limitations of any estimation problem as well as
benchmarking within a Bayesian framework. However, the BCRB cannot be computed
without full knowledge of the prior and the measurement distributions. In this
work, we propose a fully learned Bayesian Cram\'er-Rao bound (LBCRB) that
learns both the prior and the measurement distributions. Specifically, we
suggest two approaches to obtain the LBCRB: the Posterior Approach and the
Measurement-Prior Approach. The Posterior Approach provides a simple method to
obtain the LBCRB, whereas the Measurement-Prior Approach enables us to
incorporate domain knowledge to improve the sample complexity and
interpretability. To achieve this, we introduce a Physics-encoded score
neural network which enables us to easily incorporate such domain knowledge
into a neural network. We study the learning errors of the two suggested
approaches theoretically, and validate them numerically. We demonstrate the two
approaches on several signal processing examples, including a linear
measurement problem with unknown mixing and Gaussian noise covariance matrices,
frequency estimation, and quantized measurement. In addition, we test our
approach on a nonlinear signal processing problem of frequency estimation with
real-world underwater ambient noise.
|
2502.00725
|
Understanding and Mitigating the High Computational Cost in Path Data
Diffusion
|
cs.LG
|
Advancements in mobility services, navigation systems, and smart
transportation technologies have made it possible to collect large amounts of
path data. Modeling the distribution of this path data, known as the Path
Generation (PG) problem, is crucial for understanding urban mobility patterns
and developing intelligent transportation systems. Recent studies have explored
using diffusion models to address the PG problem due to their ability to
capture multimodal distributions and support conditional generation. A recent
work devises a diffusion process explicitly in graph space and achieves
state-of-the-art performance. However, this method suffers from a high computation
cost in terms of both time and memory, which prohibits its application. In this
paper, we analyze this method both theoretically and experimentally and find
that the main culprit of its high computation cost is its explicit design of
the diffusion process in graph space. To improve efficiency, we devise a
Latent-space Path Diffusion (LPD) model, which operates in latent space instead
of graph space. Our LPD significantly reduces both time and memory costs by up
to 82.8% and 83.1%, respectively. Despite these reductions, our approach does
not suffer from performance degradation. It outperforms the state-of-the-art
method in most scenarios by 24.5%~34.0%.
|
2502.00726
|
Perspectives for Direct Interpretability in Multi-Agent Deep
Reinforcement Learning
|
cs.AI
|
Multi-Agent Deep Reinforcement Learning (MADRL) has proven effective in
solving complex problems in robotics or games, yet most of the trained models
are hard to interpret. While learning intrinsically interpretable models
remains a prominent approach, its scalability and flexibility are limited in
handling complex tasks or multi-agent dynamics. This paper advocates for direct
interpretability, generating post hoc explanations directly from trained
models, as a versatile and scalable alternative, offering insights into agents'
behaviour, emergent phenomena, and biases without altering models'
architectures. We explore modern methods, including relevance backpropagation,
knowledge editing, model steering, activation patching, sparse autoencoders, and
circuit discovery, to highlight their applicability to single-agent,
multi-agent, and training process challenges. By addressing MADRL
interpretability, we propose directions aiming to advance active topics such as
team identification, swarm coordination and sample efficiency.
|
2502.00728
|
Meta-Prompt Optimization for LLM-Based Sequential Decision Making
|
cs.LG
|
Large language models (LLMs) have recently been employed as agents to solve
sequential decision-making tasks such as Bayesian optimization and multi-armed
bandits (MAB). These works usually adopt an LLM for sequential action selection
by providing it with a fixed, manually designed meta-prompt. However, numerous
previous works have found that the prompt has a significant impact on the
performance of the LLM, which calls for a method to automatically optimize the
meta-prompt for LLM-based agents. Unfortunately, the non-stationarity in the
reward observations during LLM-based sequential decision-making makes
meta-prompt optimization highly challenging. To address this challenge, we draw
inspiration from adversarial bandit algorithms, which are inherently capable
of handling non-stationary reward observations. Building on this foundation, we
propose our EXPonential-weight algorithm for prompt Optimization (EXPO) to
automatically optimize the task description and meta-instruction in the
meta-prompt for LLM-based agents. We also extend EXPO to additionally optimize
the exemplars (i.e., history of interactions) in the meta-prompt to further
enhance the performance, hence introducing our EXPO-ES algorithm. We use
extensive experiments to show that our algorithms significantly improve the
performance of LLM-based sequential decision-making.
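The adversarial-bandit inspiration behind EXPO can be made concrete. The sketch below is the classic EXP3 exponential-weight scheme over a pool of candidate meta-prompts, not the paper's EXPO algorithm itself; `run_agent` is a hypothetical stand-in for one round of LLM-based decision making that returns a reward in [0, 1], and exploration mixing is omitted for brevity.

```python
import math
import random

def exp3_prompt_selection(candidate_prompts, run_agent, rounds=100, eta=0.1):
    """EXP3-style adversarial bandit over candidate meta-prompts.

    Illustrative only: `run_agent` stands in for the actual LLM-agent
    interaction and must return a reward in [0, 1].
    """
    k = len(candidate_prompts)
    weights = [1.0] * k
    for _ in range(rounds):
        total = sum(weights)
        probs = [w / total for w in weights]
        i = random.choices(range(k), weights=probs)[0]
        reward = run_agent(candidate_prompts[i])  # possibly non-stationary
        # Importance-weighted exponential update: only the pulled arm moves.
        weights[i] *= math.exp(eta * reward / probs[i])
    best = max(range(k), key=weights.__getitem__)
    return candidate_prompts[best]
```

Because only the pulled arm's weight is updated with an importance-weighted reward, the scheme remains robust to the non-stationary rewards the abstract highlights.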
|
2502.00729
|
Selective Response Strategies for GenAI
|
cs.AI cs.GT cs.SI
|
The rise of Generative AI (GenAI) has significantly impacted human-based
forums like Stack Overflow, which are essential for generating high-quality
data. This creates a negative feedback loop, hindering the development of GenAI
systems, which rely on such data to provide accurate responses. In this paper,
we provide a possible remedy: A novel strategy we call selective response.
Selective response implies that GenAI could strategically provide inaccurate
(or conservative) responses to queries involving emerging topics and novel
technologies, thereby driving users to use human-based forums like Stack
Overflow. We show that selective response can potentially have a compounding
effect on the data generation process, increasing both GenAI's revenue and user
welfare in the long term. From an algorithmic perspective, we propose an
approximately optimal approach to maximize GenAI's revenue under social welfare
constraints. From a regulatory perspective, we derive necessary and sufficient
conditions for selective response to yield welfare improvements.
|
2502.00730
|
Spatio-Temporal Progressive Attention Model for EEG Classification in
Rapid Serial Visual Presentation Task
|
cs.CV
|
As a type of multi-dimensional sequential data, the spatial and temporal
dependencies of electroencephalogram (EEG) signals should be further
investigated. Thus, in this paper, we propose a novel spatial-temporal
progressive attention model (STPAM) to improve EEG classification in rapid
serial visual presentation (RSVP) tasks. STPAM first adopts three distinct
spatial experts to learn the spatial topological information of brain regions
progressively, which is used to minimize the interference of irrelevant brain
regions. Concretely, each expert filters the EEG electrodes in the relevant
brain regions to serve as prior knowledge for the next expert, ensuring that
subsequent experts gradually focus their attention on
information from significant EEG electrodes. This process strengthens the
effect of the important brain regions. Then, based on the above-obtained
feature sequence with spatial information, three temporal experts are adopted
to capture the temporal dependence by progressively assigning attention to the
crucial EEG slices. In addition to the above EEG classification method, we
build a novel Infrared RSVP EEG Dataset (IRED), the first to be based on dim
infrared images with small targets, and conduct extensive
experiments on it. The results show that our STPAM can achieve better
performance than all the compared methods.
|
2502.00734
|
CycleGuardian: A Framework for Automatic Respiratory Sound Classification
Based on Improved Deep Clustering and Contrastive Learning
|
cs.SD cs.AI eess.AS
|
Auscultation plays a pivotal role in early respiratory and pulmonary disease
diagnosis. Despite the emergence of deep learning-based methods for automatic
respiratory sound classification post-Covid-19, limited datasets impede
performance enhancement. Distinguishing between normal and abnormal respiratory
sounds poses challenges due to the coexistence of normal respiratory components
and noise components in both types. Moreover, different abnormal respiratory
sounds exhibit similar anomalous features, hindering their differentiation.
Besides, existing state-of-the-art models suffer from excessive parameter size,
impeding deployment on resource-constrained mobile platforms. To address these
issues, we design a lightweight network CycleGuardian and propose a framework
based on an improved deep clustering and contrastive learning. We first
generate a hybrid spectrogram for feature diversity and group spectrograms to
facilitate the capture of intermittent abnormal sounds. Then, CycleGuardian
integrates a deep clustering module with a similarity-constrained clustering
component to improve the ability to capture abnormal features and a contrastive
learning module with group mixing for enhanced abnormal feature discernment.
Multi-objective optimization enhances overall performance during training. In
experiments on the ICBHI2017 dataset, following the official split method and
without any pre-trained weights, our method achieves Sp: 82.06$\%$, Se:
44.47$\%$, and Score: 63.26$\%$ with a network model size of 38M. Compared to
the current best model, our method leads by nearly 7$\%$, achieving the best
performance to date. Additionally, we deploy the network on Android devices,
showcasing a comprehensive intelligent respiratory sound auscultation system.
|
2502.00735
|
`Do as I say not as I do': A Semi-Automated Approach for Jailbreak
Prompt Attack against Multimodal LLMs
|
cs.CR cs.AI cs.SE
|
Large Language Models (LLMs) have seen widespread applications across various
domains due to their growing ability to process diverse types of input data,
including text, audio, image and video. While LLMs have demonstrated
outstanding performance in understanding and generating contexts for different
scenarios, they are vulnerable to prompt-based attacks, which are mostly via
text input. In this paper, we introduce the first voice-based jailbreak attack
against multimodal LLMs, termed the Flanking Attack, which can process
different types of input simultaneously. Our work is
motivated by recent advancements in monolingual voice-driven large language
models, which have introduced new attack surfaces beyond traditional text-based
vulnerabilities for LLMs. To investigate these risks, we examine
state-of-the-art multimodal LLMs that can be accessed via different types of
inputs, such as audio, focusing on how adversarial prompts can bypass their
defense mechanisms. We propose a novel strategy in which the disallowed prompt
is flanked by benign, narrative-driven prompts. This strategy is integrated
into the Flanking Attack, which attempts to humanize the interaction context
and execute the attack through a fictional setting. Further, to better
evaluate the attack
performance, we present a semi-automated self-assessment framework for policy
violation detection. We demonstrate that Flanking Attack is capable of
manipulating state-of-the-art LLMs into generating misaligned and forbidden
outputs, which achieves an average attack success rate ranging from 0.67 to
0.93 across seven forbidden scenarios.
|
2502.00737
|
Scalable Sobolev IPM for Probability Measures on a Graph
|
stat.ML cs.LG
|
We investigate the Sobolev IPM problem for probability measures supported on
a graph metric space. Sobolev IPM is an important instance of integral
probability metrics (IPM), and is obtained by constraining a critic function
within a unit ball defined by the Sobolev norm. In particular, it has been used
to compare probability measures and is crucial for several theoretical works in
machine learning. However, to our knowledge, there are no efficient algorithmic
approaches to compute Sobolev IPM effectively, which hinders its practical
applications. In this work, we establish a relation between Sobolev norm and
weighted $L^p$-norm, and leverage it to propose a \emph{novel regularization}
for Sobolev IPM. By exploiting the graph structure, we demonstrate that the
regularized Sobolev IPM provides a \emph{closed-form} expression for fast
computation. This advancement addresses long-standing computational challenges,
and paves the way to apply Sobolev IPM for practical applications, even in
large-scale settings. Additionally, the regularized Sobolev IPM is negative
definite. Utilizing this property, we design positive-definite kernels upon the
regularized Sobolev IPM, and provide preliminary evidence of their advantages
on document classification and topological data analysis for measures on a
graph.
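For concreteness, the general IPM form the abstract refers to has the standard textbook definition below (this is the generic construction, not the paper's specific regularization):

```latex
% Integral probability metric with critic class \mathcal{F}:
\gamma_{\mathcal{F}}(\mu, \nu) \;=\; \sup_{f \in \mathcal{F}}
  \left| \int f \, d\mu - \int f \, d\nu \right|,
% Sobolev IPM: the critic class is the unit ball of the Sobolev norm,
\mathcal{F} \;=\; \bigl\{ f : \|f\|_{W^{1,p}} \le 1 \bigr\}.
```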
|
2502.00739
|
Orlicz-Sobolev Transport for Unbalanced Measures on a Graph
|
stat.ML cs.LG
|
Moving beyond $L^p$ geometric structure, Orlicz-Wasserstein (OW) leverages a
specific class of convex functions for Orlicz geometric structure. While OW
remarkably helps to advance certain machine learning approaches, it has a high
computational complexity due to its two-level optimization formula. Recently,
Le et al. (2024) exploit graph structure to propose generalized Sobolev
transport (GST), a scalable variant of OW. However, GST assumes that
input measures have the same mass. Unlike optimal transport (OT), it is
nontrivial to incorporate a mass constraint to extend GST for measures on a
graph, possibly having different total mass. In this work, we propose to take a
step back by considering the entropy partial transport (EPT) for nonnegative
measures on a graph. By leveraging Caffarelli & McCann (2010)'s observations,
EPT can be reformulated as a standard complete OT between two corresponding
balanced measures. Consequently, we develop a novel EPT with Orlicz geometric
structure, namely Orlicz-EPT, for unbalanced measures on a graph. Especially,
by exploiting the dual EPT formulation and geometric structures of the
graph-based Orlicz-Sobolev space, we derive a novel regularization to propose
Orlicz-Sobolev transport (OST). The resulting distance can be efficiently
computed by simply solving a univariate optimization problem, unlike the
high-computational two-level optimization problem for Orlicz-EPT. Additionally,
we derive geometric structures for the OST and draw its relations to other
transport distances. We empirically show that OST is several-order faster than
Orlicz-EPT. We further illustrate preliminary evidences on the advantages of
OST for document classification, and several tasks in topological data
analysis.
|
2502.00744
|
CoNNect: A Swiss-Army-Knife Regularizer for Pruning of Neural Networks
|
cs.LG
|
Pruning encompasses a range of techniques aimed at increasing the sparsity of
neural networks (NNs). These techniques can generally be framed as minimizing a
loss function subject to an $L_0$-norm constraint. This paper introduces
CoNNect, a novel differentiable regularizer for sparse NN training that ensures
connectivity between input and output layers. CoNNect integrates with
established pruning strategies and supports both structured and unstructured
pruning. We prove that CoNNect approximates $L_0$-regularization, guaranteeing
maximally connected network structures while avoiding issues like layer
collapse. Numerical experiments demonstrate that CoNNect improves classical
pruning strategies and enhances state-of-the-art one-shot pruners, such as
DepGraph and LLM-pruner.
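The connectivity idea can be sketched for a feed-forward stack of weight matrices. This is our illustrative reading of a connectivity-style penalty, not the paper's exact CoNNect formula; it is written with NumPy for clarity, whereas in training it would live in an autodiff framework so the penalty stays differentiable.

```python
import numpy as np

def connectivity_regularizer(weights, eps=1e-12):
    """Connectivity-style penalty (illustrative, not the paper's formula).

    Entry (i, j) of the chained product of |W_l| aggregates the weight of
    all paths from input j to output i, so the total can only vanish when
    the network disconnects; penalizing -log(total) therefore discourages
    layer collapse.
    """
    conn = np.abs(weights[0])
    for w in weights[1:]:
        conn = np.abs(w) @ conn  # chain path weights layer by layer
    return -np.log(conn.sum() + eps)
```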
|
2502.00745
|
BEEM: Boosting Performance of Early Exit DNNs using Multi-Exit
Classifiers as Experts
|
cs.LG cs.CL cs.CV
|
Early Exit (EE) techniques have emerged as a means to reduce inference
latency in Deep Neural Networks (DNNs). The latency improvement and accuracy in
these techniques crucially depend on the criteria used to make exit decisions.
We propose a new decision criterion, BEEM, in which exit classifiers are
treated as experts and their confidence scores are aggregated. The confidence
scores are aggregated only if neighbouring experts are consistent in their
predictions as the sample passes through them, thus capturing their ensemble
effect. A sample exits when the aggregated confidence value exceeds a
threshold. The threshold is set using the error rates of the intermediate
exits, aiming to surpass the
performance of conventional DNN inference. Experimental results on the COCO
dataset for Image captioning and GLUE datasets for various language tasks
demonstrate that our method enhances the performance of state-of-the-art EE
methods, achieving speed-ups by a factor of 1.5x to 2.1x. Compared to the
final layer, accuracy is comparable on the harder image captioning task and
improves on the easier language tasks. The source code for this
work is publicly available at https://github.com/Div290/BEEM1/tree/main
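Our reading of the decision rule described above can be sketched as follows; the function and variable names are ours, and the exact aggregation in the paper may differ.

```python
def beem_exit(exit_preds, exit_confs, thresholds):
    """Sketch of a BEEM-style early-exit rule (illustrative reading)."""
    agg, prev = 0.0, None
    for i, (pred, conf) in enumerate(zip(exit_preds, exit_confs)):
        if prev is None or pred == prev:
            agg += conf          # consistent neighbours: accumulate confidence
        else:
            agg = conf           # disagreement: restart the aggregate
        prev = pred
        if agg >= thresholds[i]:
            return i, pred       # exit early at classifier i
    return len(exit_preds) - 1, exit_preds[-1]  # fall through to final exit
```

The reset on disagreement is what makes the aggregate reflect an ensemble of consistent experts rather than a single confident outlier.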
|
2502.00747
|
Universal Post-Processing Networks for Joint Optimization of Modules in
Task-Oriented Dialogue Systems
|
cs.CL cs.AI
|
Post-processing networks (PPNs) are components that modify the outputs of
arbitrary modules in task-oriented dialogue systems and are optimized using
reinforcement learning (RL) to improve the overall task completion capability
of the system. However, previous PPN-based approaches have been limited to
handling only a subset of modules within a system, which poses a significant
limitation in improving the system performance. In this study, we propose a
joint optimization method for post-processing the outputs of all modules using
universal post-processing networks (UniPPNs), which are language-model-based
networks that can modify the outputs of arbitrary modules in a system as a
sequence-transformation task. Moreover, our RL algorithm, which employs a
module-level Markov decision process, enables fine-grained value and advantage
estimation for each module, thereby stabilizing joint learning for
post-processing the outputs of all modules. Through both simulation-based and
human evaluation experiments using the MultiWOZ dataset, we demonstrated that
UniPPN outperforms conventional PPNs in the task completion capability of
task-oriented dialogue systems.
|
2502.00749
|
An Event-Based Perception Pipeline for a Table Tennis Robot
|
cs.RO cs.CV
|
Table tennis robots gained traction over the last years and have become a
popular research challenge for control and perception algorithms. Fast and
accurate ball detection is crucial for enabling a robotic arm to rally the ball
back successfully. So far, most table tennis robots use conventional,
frame-based cameras for the perception pipeline. However, frame-based cameras
suffer from motion blur if the frame rate is not high enough for fast-moving
objects. Event-based cameras, on the other hand, do not have this drawback
since pixels report changes in intensity asynchronously and independently,
leading to an event stream with a temporal resolution on the order of
microseconds. To
the best of our knowledge, we present the first real-time perception pipeline
for a table tennis robot that uses only event-based cameras. We show that
compared to a frame-based pipeline, event-based perception pipelines have an
update rate which is an order of magnitude higher. This is beneficial for the
estimation and prediction of the ball's position, velocity, and spin, resulting
in lower mean errors and uncertainties. These improvements are an advantage for
the robot control, which has to be fast, given the short time a table tennis
ball is flying until the robot has to hit back.
|
2502.00752
|
Zero-Shot Warning Generation for Misinformative Multimodal Content
|
cs.AI cs.CL cs.IR
|
The widespread prevalence of misinformation poses significant societal
concerns. Out-of-context misinformation, where authentic images are paired with
false text, is particularly deceptive and easily misleads audiences. Most
existing detection methods primarily evaluate image-text consistency but often
lack sufficient explanations, which are essential for effectively debunking
misinformation. We present a model that detects multimodal misinformation
through cross-modality consistency checks, requiring minimal training time.
Additionally, we propose a lightweight model that achieves competitive
performance using only one-third of the parameters. We also introduce a
dual-purpose zero-shot learning task for generating contextualized warnings,
enabling automated debunking and enhancing user comprehension. Qualitative and
human evaluations of the generated warnings highlight both the potential and
limitations of our approach.
|
2502.00753
|
Mirror Descent Under Generalized Smoothness
|
math.OC cs.LG
|
Smoothness is crucial for attaining fast rates in first-order optimization.
However, many optimization problems in modern machine learning involve
non-smooth objectives. Recent studies relax the smoothness assumption by
allowing the Lipschitz constant of the gradient to grow with respect to the
gradient norm, which accommodates a broad range of objectives in practice.
Despite this progress, existing generalizations of smoothness are restricted to
Euclidean geometry with $\ell_2$-norm and only have theoretical guarantees for
optimization in the Euclidean space. In this paper, we address this limitation
by introducing a new $\ell*$-smoothness concept that measures the norm of
Hessian in terms of a general norm and its dual, and establish convergence for
mirror-descent-type algorithms, matching the rates under the classic
smoothness. Notably, we propose a generalized self-bounding property that
facilitates bounding the gradients via controlling suboptimality gaps, serving
as a principal component for convergence analysis. Beyond deterministic
optimization, we establish an anytime convergence for stochastic mirror descent
based on a new bounded noise condition that encompasses the widely adopted
bounded or affine noise assumptions.
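The mirror-descent-type algorithms the abstract analyzes follow the standard textbook update below; the paper's novelty lies in the $\ell^*$-smoothness condition on the Hessian norm, which is described only at the level of the abstract here.

```latex
% Bregman divergence induced by a mirror map \psi:
D_\psi(x, y) \;=\; \psi(x) - \psi(y) - \langle \nabla\psi(y),\, x - y \rangle,
% Mirror descent update with step size \eta_t:
x_{t+1} \;=\; \operatorname*{arg\,min}_{x \in \mathcal{X}}
  \bigl\{ \eta_t \langle \nabla f(x_t),\, x \rangle + D_\psi(x, x_t) \bigr\}.
```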
|
2502.00754
|
Continuity-Preserving Convolutional Autoencoders for Learning Continuous
Latent Dynamical Models from Images
|
cs.LG cs.CV
|
Continuous dynamical systems are cornerstones of many scientific and
engineering disciplines. While machine learning offers powerful tools to model
these systems from trajectory data, challenges arise when these trajectories
are captured as images, resulting in pixel-level observations that are discrete
in nature. Consequently, a naive application of a convolutional autoencoder can
result in latent coordinates that are discontinuous in time. To resolve this,
we propose continuity-preserving convolutional autoencoders (CpAEs) to learn
continuous latent states and their corresponding continuous latent dynamical
models from discrete image frames. We present a mathematical formulation for
learning dynamics from image frames, which illustrates issues with previous
approaches and motivates our methodology based on promoting the continuity of
convolution filters, thereby preserving the continuity of the latent states.
This approach enables CpAEs to produce latent states that evolve continuously
with the underlying dynamics, leading to more accurate latent dynamical models.
Extensive experiments across various scenarios demonstrate the effectiveness of
CpAEs.
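As one plausible illustration of "promoting the continuity of convolution filters" (the paper's actual mechanism may differ), a finite-difference smoothness penalty on a kernel could look like this:

```python
import numpy as np

def filter_smoothness_penalty(kernel):
    """Hypothetical continuity-promoting penalty (our illustration).

    Squared finite differences between neighbouring taps of a 2-D
    convolution kernel: smooth filters respond continuously to sub-pixel
    shifts of the input image, which supports continuous latent states.
    """
    dx = np.diff(kernel, axis=-1)
    dy = np.diff(kernel, axis=-2)
    return float((dx ** 2).sum() + (dy ** 2).sum())
```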
|
2502.00757
|
AgentBreeder: Mitigating the AI Safety Impact of Multi-Agent Scaffolds
|
cs.CR cs.AI cs.NE
|
Scaffolding Large Language Models (LLMs) into multi-agent systems often
improves performance on complex tasks, but the safety impact of such scaffolds
has not been as thoroughly explored. In this paper, we introduce AGENTBREEDER,
a framework for multi-objective evolutionary search over scaffolds. Our
REDAGENTBREEDER evolves scaffolds towards jailbreaking the base LLM while
achieving high task success, whereas BLUEAGENTBREEDER instead aims to combine
safety with task reward. We evaluate the systems discovered by the different
instances of AGENTBREEDER and popular baselines using widely recognized
reasoning, mathematics, and safety benchmarks. Our work highlights and
mitigates the safety risks due to multi-agent scaffolding.
|
2502.00758
|
Structural Latency Perturbation in Large Language Models Through
Recursive State Induction
|
cs.CL
|
Computational efficiency has remained a critical consideration in scaling
high-capacity language models, with inference latency and resource consumption
presenting significant constraints on real-time applications. The study has
introduced a structured latency perturbation mechanism that modifies
computational pathways through recursive state induction, enabling dynamic
suppression of redundant activations while preserving generative fidelity. A
formal mathematical framework has been established to describe recursive
perturbations, ensuring that modifications remain adaptive rather than
statically imposed. Experiments have demonstrated that applying recursive state
adjustments reduces inference latency across varying sequence lengths, with
longer text generations benefiting from cumulative efficiency improvements.
Comparative evaluations against structured pruning and quantization have
indicated that latency gains can be achieved without compromising token
retention or memory utilization. The analysis of computational overhead has
suggested that selectively suppressing redundant activations contributes to
improved power efficiency, particularly in scenarios requiring extended text
generation. An assessment of linguistic stability has shown that token-level
consistency remains largely intact under controlled perturbation thresholds,
reinforcing the viability of structural latency modifications as an alternative
to weight-centric optimization techniques. The results have supported the
hypothesis that recursive state induction offers an effective method for
reducing computational complexity without requiring architectural modifications
or external augmentation.
|
2502.00760
|
Privacy Preserving Properties of Vision Classifiers
|
cs.LG cs.CR cs.CV
|
Vision classifiers are often trained on proprietary datasets containing
sensitive information, yet the models themselves are frequently shared openly
under the privacy-preserving assumption. Although these models are assumed to
protect sensitive information in their training data, the extent to which this
assumption holds for different architectures remains unexplored. This
assumption is challenged by inversion attacks which attempt to reconstruct
training data from model weights, exposing significant privacy vulnerabilities.
In this study, we systematically evaluate the privacy-preserving properties of
vision classifiers across diverse architectures, including Multi-Layer
Perceptrons (MLPs), Convolutional Neural Networks (CNNs), and Vision
Transformers (ViTs). Using network inversion-based reconstruction techniques,
we assess the extent to which these architectures memorize and reveal training
data, quantifying the relative ease of reconstruction across models. Our
analysis highlights how architectural differences, such as input
representation, feature extraction mechanisms, and weight structures, influence
privacy risks. By comparing these architectures, we identify which are more
resilient to inversion attacks and examine the trade-offs between model
performance and privacy preservation, contributing to the development of secure
and privacy-respecting machine learning models for sensitive applications. Our
findings provide actionable insights into the design of secure and
privacy-aware machine learning systems, emphasizing the importance of
evaluating architectural decisions in sensitive applications involving
proprietary or personal data.
|
2502.00761
|
FIRE: Flexible Integration of Data Quality Ratings for Effective
Pre-Training
|
cs.CL
|
Selecting high-quality data can significantly improve the pretraining
efficiency of large language models (LLMs). Existing methods generally rely on
heuristic techniques and single-quality signals, limiting their ability to
evaluate data quality comprehensively. In this work, we propose FIRE, a
flexible and scalable framework for integrating multiple data quality raters,
which allows for a comprehensive assessment of data quality across various
dimensions. FIRE aligns multiple quality signals into a unified space, and
integrates diverse data quality raters to provide a comprehensive quality
signal for each data point. Further, we introduce a progressive data selection
scheme based on FIRE that iteratively refines the selection of high-quality
data points. Experiments on the SlimPajama dataset reveal that FIRE outperforms
other data selection methods and significantly enhances the pretrained model
across a wide range of downstream tasks, with a 2.9% average performance
improvement over Random and reducing the FLOPs necessary to achieve a certain
performance level by more than half.
|
2502.00762
|
On Overlap Ratio in Defocused Electron Ptychography
|
eess.SP cs.IR physics.app-ph physics.med-ph
|
Four-dimensional Scanning Transmission Electron Microscopy (4D STEM) with
data acquired using a defocused electron probe is a promising tool for
characterising complex biological specimens and materials through a phase
retrieval process known as Electron Ptychography (EP). The efficacy of 4D STEM
acquisition and the resulting quality of EP reconstruction depends on the
overlap ratio of adjacent illuminated areas. This paper demonstrates how the
overlap ratio impacts the data redundancy and the quality of the EP
reconstruction. We define two quantities as a function of the overlap ratio
that are independent of both the object and the EP algorithm. Subsequently, we
evaluate an EP algorithm for varying overlap ratios using simulated 4D STEM
datasets. Notably, a 40% or greater overlap ratio yields stable, high-quality
reconstructions.
|
2502.00767
|
Learning-Based TSP-Solvers Tend to Be Overly Greedy
|
cs.LG cs.AI cs.DS
|
Deep learning has shown significant potential in solving combinatorial
optimization problems such as the Euclidean traveling salesman problem (TSP).
However, most training and test instances for existing TSP algorithms are
generated randomly from specific distributions like uniform distribution. This
has led to a lack of analysis and understanding of the performance of deep
learning algorithms in out-of-distribution (OOD) generalization scenarios,
which has a close relationship with the worst-case performance in the
combinatorial optimization field. For data-driven algorithms, the statistical
properties of randomly generated datasets are critical. This study constructs a
statistical measure called nearest-neighbor density to verify the asymptotic
properties of randomly generated datasets and reveal the greedy behavior of
learning-based solvers, i.e., always choosing the nearest neighbor nodes to
construct the solution path. Based on this statistical measure, we develop
interpretable data augmentation methods that rely on distribution shifts or
instance perturbations and validate that the performance of the learning-based
solvers degenerates much on such augmented data. Moreover, fine-tuning
learning-based solvers with augmented data further enhances their
generalization abilities. In short, we decipher the limitations of
learning-based TSP solvers tending to be overly greedy, which may have profound
implications for AI-empowered combinatorial optimization solvers.
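The greedy behaviour and a nearest-neighbour statistic can be illustrated with a small sketch. The mean nearest-neighbour distance below is a simple related statistic; the paper's "nearest-neighbor density" measure is similar in spirit but defined by the authors.

```python
import math

def nearest_neighbor_tour(points):
    """Greedy nearest-neighbour tour construction -- the behaviour the
    paper finds learned solvers to imitate."""
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda j: math.dist(last, points[j]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

def mean_nearest_neighbor_distance(points):
    """Mean distance from each node to its nearest neighbour."""
    return sum(
        min(math.dist(p, q) for j, q in enumerate(points) if j != i)
        for i, p in enumerate(points)
    ) / len(points)
```

Instances perturbed to shift this statistic away from the uniform-distribution regime are exactly where a solver that always follows the nearest neighbour starts to degrade.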
|
2502.00775
|
ATA: Adaptive Task Allocation for Efficient Resource Management in
Distributed Machine Learning
|
cs.LG cs.DC math.OC stat.ML
|
Asynchronous methods are fundamental for parallelizing computations in
distributed machine learning. They aim to accelerate training by fully
utilizing all available resources. However, their greedy approach can lead to
inefficiencies, consuming more computation than required, especially when
computation times vary across devices. If the computation times were known in
advance, training could be fast and resource-efficient by assigning more tasks
to faster workers. The challenge lies in achieving this optimal allocation
without prior knowledge of the computation time distributions. In this paper,
we propose ATA (Adaptive Task Allocation), a method that adapts to
heterogeneous and random distributions of worker computation times. Through
rigorous theoretical analysis, we show that ATA identifies the optimal task
allocation and performs comparably to methods with prior knowledge of
computation times. Experimental results further demonstrate that ATA is
resource-efficient, significantly reducing costs compared to the greedy
approach, which can be arbitrarily expensive depending on the number of
workers.
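As a rough intuition for "assigning more tasks to faster workers", here is a naive proportional allocator from empirical mean computation times. The function name and batch setting are ours; ATA itself adapts online without prior knowledge of the time distributions and comes with theoretical guarantees.

```python
def adaptive_allocation(observed_times, total_tasks):
    """Naive proportional task allocator (illustrative, not ATA itself).

    `observed_times[w]` is a list of observed computation times for
    worker w; tasks are assigned in proportion to estimated speed.
    """
    # Estimated speed = inverse of the empirical mean computation time.
    speeds = [len(ts) / sum(ts) for ts in observed_times]
    total_speed = sum(speeds)
    alloc = [round(total_tasks * s / total_speed) for s in speeds]
    # Fix rounding drift so the allocations sum exactly to total_tasks.
    alloc[alloc.index(max(alloc))] += total_tasks - sum(alloc)
    return alloc
```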
|
2502.00779
|
Role of Mixup in Topological Persistence Based Knowledge Distillation
for Wearable Sensor Data
|
cs.LG cs.AI eess.SP
|
The analysis of wearable sensor data has enabled many successes in several
applications. To represent the high-sampling rate time-series with sufficient
detail, the use of topological data analysis (TDA) has been considered, and it
is found that TDA can complement other time-series features. Nonetheless, due
to the large time consumption and high computational resource requirements of
extracting topological features through TDA, it is difficult to deploy
topological knowledge in various applications. To tackle this problem,
knowledge distillation (KD) can be adopted, which is a technique facilitating
model compression and transfer learning to generate a smaller model by
transferring knowledge from a larger network. By leveraging multiple teachers
in KD, both time-series and topological features can be transferred, and
finally, a superior student using only time-series data is distilled. On the
other hand, mixup has been popularly used as a robust data augmentation
technique to enhance model performance during training. Mixup and KD employ
similar learning strategies. In KD, the student model learns from the smoothed
distribution generated by the teacher model, while mixup creates smoothed
labels by blending two labels. Hence, this common smoothness serves as the
connecting link that establishes a connection between these two methods. In
this paper, we analyze the role of mixup in KD with time-series as well as
topological persistence, employing multiple teachers. We present a
comprehensive analysis of various methods in KD and mixup on wearable sensor
data.
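The smoothing parallel drawn above can be made concrete: mixup blends inputs and labels, while KD trains the student on temperature-softened teacher distributions. The sketch below shows the generic single-teacher formulas, not the paper's multi-teacher topological setup.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Standard mixup: a convex combination of two samples and labels."""
    if rng is None:
        rng = np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

def kd_soft_loss(student_logits, teacher_logits, temperature=4.0):
    """Cross-entropy between temperature-softened teacher and student
    distributions -- the smoothed-target view that links KD to mixup.
    A multi-teacher setup would average several teacher distributions."""
    def soften(z):
        e = np.exp((z - z.max()) / temperature)
        return e / e.sum()
    p_t, p_s = soften(teacher_logits), soften(student_logits)
    return float(-(p_t * np.log(p_s + 1e-12)).sum())
```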
|
2502.00780
|
Constructing Fundamentals for the Theory of Proportions and Symbolic
Allusions Applied Interdisciplinarily
|
cs.IT math.IT q-bio.NC
|
The Theory of Proportions and Symbolic Allusions Applied Interdisciplinarily
(TPASAI) is a framework that integrates mathematics, linguistics, psychology,
and game theory to uncover hidden patterns and proportions in reality. Its
central idea is that numerical encoding of symbols, dates, and language can
reveal recurring structures and connections that reflect universal principles.
By applying fractal analysis, the theory identifies patterns across different
scales, offering a unifying perspective on the structure of the world. One key
aspect of TPASAI is symbolic analysis, which allows for the reinterpretation of
traumatic experiences in psychotherapy. For example, assigning numerical values
to elements like fingers, dates, or words can help individuals uncover
meaningful associations between personal experiences and collective symbols.
This approach encourages cognitive flexibility and provides a therapeutic
avenue for recontextualizing emotions. The theory also incorporates principles
of game theory, which frame reality as a system of symbolic "codes" governed by
rules that can be understood and strategically used. This perspective is
especially useful for psychological conditions like obsessive-compulsive
disorder (OCD), enabling patients to approach their obsessions as decipherable
patterns rather than rigid constraints. TPASAI has practical applications in
psychology, education, and technology. In education, it aids in teaching
mathematical and linguistic concepts by exploring connections between symbolic
representations and real-world events. In technology, the methodology can be
employed in ciphering and natural language processing. The innovation of TPASAI
lies in its ability to merge the structured rigor of mathematics with the
interpretative flexibility of symbolic analysis, offering a deeper
understanding of events and relationships.
|
2502.00782
|
Transfer Learning in Physics-Informed Neural Networks: Full Fine-Tuning,
Lightweight Fine-Tuning, and Low-Rank Adaptation
|
cs.LG
|
AI for PDEs has garnered significant attention, particularly Physics-Informed
Neural Networks (PINNs). However, PINNs are typically limited to solving
specific problems, and any changes in problem conditions necessitate
retraining. Therefore, we explore the generalization capability of transfer
learning in the strong and energy form of PINNs across different boundary
conditions, materials, and geometries. The transfer learning methods we employ
include full fine-tuning, lightweight fine-tuning, and Low-Rank Adaptation
(LoRA). The results demonstrate that full fine-tuning and LoRA can significantly
improve convergence speed while providing a slight enhancement in accuracy.
|
2502.00783
|
A method for estimating forest carbon storage distribution density via
artificial intelligence generated content model
|
cs.CV eess.IV
|
Forest is the most significant land-based carbon storage mechanism. The
forest carbon sink can effectively decrease the atmospheric CO2 concentration
and mitigate climate change. Remote sensing estimation not only ensures high
accuracy of data, but also enables large-scale area observation. Optical images
make long-term monitoring possible, a promising direction for future carbon
storage estimation research. We chose Huize County, Qujing
City, Yunnan Province, China as the study area, took GF-1 WFV satellite image
as the data, introduced the KD-VGG module to extract the initial features, and
proposed the improved implicit diffusion model (IIDM). The results showed that:
(1) The VGG-19 module after knowledge distillation can realize the initial
feature extraction, reduce the inference time and improve the accuracy in the
case of reducing the number of model parameters. (2) The Attention + MLP module
was added for feature fusion to obtain the relationship between global and
local features and realized the restoration of high-fidelity images in the
continuous scale range. (3) The IIDM model proposed in this paper achieved the
highest estimation accuracy, with an RMSE of 28.68, 13.16 lower than that of
the regression model, a reduction of about 31.45%. In carbon storage
estimation, the generative model extracted deeper features, and its performance
was significantly better than that of the other models. It demonstrated the feasibility of
artificial intelligence-generated content (AIGC) in the field of quantitative
remote sensing and provided valuable insights for the study of carbon
neutralization effect. By combining the actual characteristics of the forest,
the regional carbon storage estimate at 16-meter resolution was used to
provide a significant theoretical basis for the formulation of
forest carbon sink regulation.
|
2502.00784
|
Estimating forest carbon stocks from high-resolution remote sensing
imagery by reducing domain shift with style transfer
|
cs.CV eess.IV
|
Forests function as crucial carbon reservoirs on land, and their carbon sinks
can efficiently reduce atmospheric CO2 concentrations and mitigate climate
change. Currently, the overall trend for monitoring and assessing forest carbon
stocks is to integrate ground monitoring sample data with satellite remote
sensing imagery. This style of analysis facilitates large-scale observation.
However, these techniques require improvement in accuracy. We used GF-1 WFV and
Landsat TM images to analyze Huize County, Qujing City, Yunnan Province in
China. Using the style transfer method, we introduced Swin Transformer to
extract global features through attention mechanisms, converting the carbon
stock estimation into an image translation task.
|
2502.00791
|
Vision-centric Token Compression in Large Language Model
|
cs.CL cs.CV
|
Large Language Models (LLMs) have revolutionized natural language processing,
excelling in handling longer sequences. However, the inefficiency and
redundancy in processing extended in-context tokens remain a challenge. Many
attempts to address this rely on compressing tokens with smaller text encoders,
yet we question whether text encoders are truly indispensable. Our journey
leads to an unexpected discovery-a much smaller vision encoder, applied
directly to sequences of text tokens, can rival text encoders on text tasks.
When pre-trained on large amounts of data and transferred to multiple mid-sized
or small text understanding benchmarks, VIST achieves comparable results with
16% fewer FLOPs and 50% less memory usage. We further uncover significant token
redundancy and devise a frequency-based masking strategy to guide the focus of
the visual encoder toward the most critical tokens. Interestingly, we observe
the trained visual encoder performs like a summarizer, selectively ignoring
less important words such as prepositions and conjunctions. This approach
delivers remarkable results, outperforming traditional text encoder-based
methods by 5.7% on average over benchmarks like TriviaQA, NQ, PopQA, TREF,
SST2, and SST5, setting a new standard for token efficiency in LLMs.
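As a toy illustration of the frequency-based masking idea, a sketch that keeps the rarest tokens and masks the most common ones (the keep ratio and the exact selection criterion are assumptions, not taken from the paper):

```python
def frequency_mask(tokens, corpus_counts, keep_ratio=0.5):
    """Mask high-frequency (typically uninformative) tokens so the visual
    encoder can focus on rarer, more critical ones. `corpus_counts` maps a
    token to its corpus frequency; unseen tokens count as rarest."""
    n_keep = max(1, int(len(tokens) * keep_ratio))
    ranked = sorted(set(tokens), key=lambda t: corpus_counts.get(t, 0))
    keep = set(ranked[:n_keep])
    return [t if t in keep else "[MASK]" for t in tokens]
```

On a sentence like "the cat sat on the mat", high-count function words ("the", "on") are masked while content words survive, mirroring the observed summarizer-like behavior.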
|
2502.00792
|
RTBAgent: A LLM-based Agent System for Real-Time Bidding
|
cs.AI
|
Real-Time Bidding (RTB) enables advertisers to place competitive bids on
impression opportunities instantaneously, striving for cost-effectiveness in a
highly competitive landscape. Although RTB has widely benefited from the
utilization of technologies such as deep learning and reinforcement learning,
the reliability of related methods often encounters challenges due to the
discrepancies between online and offline environments and the rapid
fluctuations of online bidding. To handle these challenges, RTBAgent is
proposed as the first RTB agent system based on large language models (LLMs),
which synchronizes real competitive advertising bidding environments and
obtains bidding prices through an integrated decision-making process.
Specifically, building on the reasoning ability of LLMs, RTBAgent is further
tailored for RTB through auxiliary modules, i.e., a click-through rate
estimation model, expert strategy knowledge, and daily
reflection. In addition, we propose a two-step decision-making process and
multi-memory retrieval mechanism, which enables RTBAgent to review historical
decisions and transaction records and subsequently make decisions more adaptive
to market changes in real-time bidding. Empirical testing with real advertising
datasets demonstrates that RTBAgent significantly enhances profitability. The
RTBAgent code will be publicly accessible at:
https://github.com/CaiLeng/RTBAgent.
|
2502.00795
|
Data Fusion for Full-Range Response Reconstruction via Diffusion Models
|
cs.CE
|
Accurately capturing the full-range response of structures is crucial in
structural health monitoring (SHM) for ensuring safety and operational
integrity. However, limited sensor deployment due to cost, accessibility, or
scale often hinders comprehensive monitoring. This paper presents a novel data
fusion framework utilizing diffusion models to reconstruct the full-range
structural response from sparse and heterogeneous sensor measurements. We
incorporate Diffusion Posterior Sampling (DPS) into the reconstruction
framework, using sensor measurements as probabilistic constraints to guide the
sampling process. A lightweight neural network serves as the surrogate forward
model within the DPS algorithm, which maps full-range structural responses to
local sensor data. This approach enables flexibility in sensor configurations
while reducing computational costs. The proposed framework is validated on a
steel plate shear wall exhibiting nonlinear responses. Comparative experiments
are conducted with three forward models. Among these, the neural network
surrogate model achieves a desirable reconstruction accuracy, with a weighted
mean absolute percentage error (WMAPE) as low as 1.57%, while also
demonstrating superior adaptability and computational efficiency. Additional
experiments explore the impact of sensor placement strategies and noise levels.
Results show that even under sparse measurements or high noise conditions, the
WMAPE remains capped at 15%, demonstrating robustness in challenging
scenarios. The proposed framework opens new possibilities for probabilistic
modeling and decision-making in SHM, offering a novel data fusion approach for
full-range monitoring of structures.
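The measurement-guided sampling can be sketched in one step. This is a minimal DPS-style guidance update under the simplifying assumption of a linear forward operator `A` in place of the paper's neural surrogate (names and the norm-based step scaling are illustrative):

```python
import numpy as np

def dps_guidance_step(x_t, x0_hat, y_obs, A, step_scale=1.0):
    """Nudge the current diffusion sample toward measurement consistency.
    x_t:    current noisy sample (full-range response, flattened)
    x0_hat: denoised estimate of the clean response at this step
    y_obs:  sparse sensor measurements acting as probabilistic constraints
    A:      forward operator mapping full response -> sensor readings
    The gradient of -0.5 * ||y_obs - A @ x0_hat||^2 w.r.t. x0_hat is
    A.T @ residual; it is applied here with residual-norm scaling."""
    residual = y_obs - A @ x0_hat
    grad = A.T @ residual
    return x_t + step_scale * grad / (np.linalg.norm(residual) + 1e-8)
```

When the denoised estimate already matches the sensors, the residual vanishes and the sample is left untouched; otherwise the correction pulls the reconstruction toward the observed data.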
|
2502.00796
|
Task-Specific Adaptation with Restricted Model Access
|
cs.CV
|
The emergence of foundational models has greatly improved performance across
various downstream tasks, with fine-tuning often yielding even better results.
However, existing fine-tuning approaches typically require access to model
weights and layers, leading to challenges such as managing multiple model
copies or inference pipelines, inefficiencies in edge device optimization, and
concerns over proprietary rights, privacy, and exposure to unsafe model
variants. In this paper, we address these challenges by exploring "Gray-box"
fine-tuning approaches, where the model's architecture and weights remain
hidden, allowing only gradient propagation. We introduce a novel yet simple and
effective framework that adapts to new tasks using two lightweight learnable
modules at the model's input and output. Additionally, we present a less
restrictive variant that offers more entry points into the model, balancing
performance with model exposure. We evaluate our approaches across several
backbones on benchmarks such as text-image alignment, text-video alignment, and
sketch-image alignment. Results show that our Gray-box approaches are
competitive with full-access fine-tuning methods, despite having limited access
to the model.
|
2502.00798
|
Deep Neural Network for Phonon-Assisted Optical Spectra in
Semiconductors
|
cond-mat.mtrl-sci cs.LG
|
Phonon-assisted optical absorption in semiconductors is crucial for
understanding and optimizing optoelectronic devices, yet its accurate
simulation remains a significant challenge in computational materials science.
We present an efficient approach that combines deep learning tight-binding (TB)
and potential models to efficiently calculate the phonon-assisted optical
absorption in semiconductors with $ab$ $initio$ accuracy. Our strategy enables
efficient sampling of atomic configurations through molecular dynamics and
rapid computation of electronic structure and optical properties from the TB
models. We demonstrate its efficacy by calculating the temperature-dependent
optical absorption spectra and band gap renormalization of Si and GaAs due to
electron-phonon coupling over a temperature range of 100-400 K. Our results
show excellent agreement with experimental data, capturing both indirect and
direct absorption processes, including subtle features like the Urbach tail.
This approach offers a powerful tool for studying complex materials with high
accuracy and efficiency, paving the way for high-throughput screening of
optoelectronic materials.
|
2502.00800
|
Adversarial Semantic Augmentation for Training Generative Adversarial
Networks under Limited Data
|
cs.CV eess.IV
|
Generative adversarial networks (GANs) have made remarkable achievements in
synthesizing images in recent years. Typically, training GANs requires massive
data, and the performance of GANs deteriorates significantly when training data
is limited. To improve the synthesis performance of GANs in low-data regimes,
existing approaches use various data augmentation techniques to enlarge the
training sets. However, it is identified that these augmentation techniques may
leak or even alter the data distribution. To remedy this, we propose an
adversarial semantic augmentation (ASA) technique to enlarge the training data
at the semantic level instead of the image level. Concretely, considering
semantic features usually encode informative information of images, we estimate
the covariance matrices of semantic features for both real and generated images
to find meaningful transformation directions. Such directions translate
original features to another semantic representation, e.g., changing the
backgrounds or expressions of the human face dataset. Moreover, we derive an
upper bound of the expected adversarial loss. By optimizing the upper bound,
our semantic augmentation is implicitly achieved. Such design avoids redundant
sampling of the augmented features and introduces negligible computation
overhead, making our approach computationally efficient. Extensive experiments
on both few-shot and large-scale datasets demonstrate that our method
consistently improves synthesis quality under various data regimes, and
further visualization and analysis results suggest the satisfactory
versatility of our proposed method.
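The upper-bound trick resembles implicit semantic data augmentation: rather than sampling augmented features, each non-target logit is inflated by the variance of its margin under the feature covariance. A sketch under the simplifying assumption of one shared covariance matrix (the paper estimates class- and source-specific covariances):

```python
import numpy as np

def augmented_logits(features, W, b, Sigma, y, lam):
    """Logits for a closed-form upper bound of the expected loss under
    Gaussian feature augmentation f ~ N(f, lam * Sigma).
    features: (N, D), W: (C, D), b: (C,), Sigma: (D, D), y: (N,) labels.
    Each logit j is shifted by 0.5*lam*(w_j - w_y)^T Sigma (w_j - w_y);
    the target-class shift is zero."""
    logits = features @ W.T + b                  # (N, C)
    delta = W[None, :, :] - W[y][:, None, :]     # w_j - w_{y_i}, (N, C, D)
    var = np.einsum('ncd,de,nce->nc', delta, Sigma, delta)
    return logits + 0.5 * lam * var
```

Feeding these adjusted logits into a standard softmax cross-entropy realizes the augmentation implicitly, with no explicit sampling of augmented features.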
|
2502.00801
|
Environment-Driven Online LiDAR-Camera Extrinsic Calibration
|
cs.CV cs.AI cs.RO
|
LiDAR-camera extrinsic calibration (LCEC) is the core for data fusion in
computer vision. Existing methods typically rely on customized calibration
targets or fixed scene types, lacking the flexibility to handle variations in
sensor data and environmental contexts. This paper introduces EdO-LCEC, the
first environment-driven, online calibration approach that achieves human-like
adaptability. Inspired by the human perceptual system, EdO-LCEC incorporates a
generalizable scene discriminator to actively interpret environmental
conditions, creating multiple virtual cameras that capture detailed spatial and
textural information. To overcome cross-modal feature matching challenges
between LiDAR and camera, we propose dual-path correspondence matching (DPCM),
which leverages both structural and textural consistency to achieve reliable
3D-2D correspondences. Our approach formulates the calibration process as a
spatial-temporal joint optimization problem, utilizing global constraints from
multiple views and scenes to improve accuracy, particularly in sparse or
partially overlapping sensor views. Extensive experiments on real-world
datasets demonstrate that EdO-LCEC achieves state-of-the-art performance,
providing reliable and precise calibration across diverse, challenging
environments.
|
2502.00802
|
Fisher-Guided Selective Forgetting: Mitigating The Primacy Bias in Deep
Reinforcement Learning
|
cs.LG cs.AI
|
Deep Reinforcement Learning (DRL) systems often tend to overfit to early
experiences, a phenomenon known as the primacy bias (PB). This bias can
severely hinder learning efficiency and final performance, particularly in
complex environments. This paper presents a comprehensive investigation of PB
through the lens of the Fisher Information Matrix (FIM). We develop a framework
characterizing PB through distinct patterns in the FIM trace, identifying
critical memorization and reorganization phases during learning. Building on
this understanding, we propose Fisher-Guided Selective Forgetting (FGSF), a
novel method that leverages the geometric structure of the parameter space to
selectively modify network weights, preventing early experiences from
dominating the learning process. Empirical results across DeepMind Control
Suite (DMC) environments show that FGSF consistently outperforms baselines,
particularly in complex tasks. We analyze the different impacts of PB on actor
and critic networks, the role of replay ratios in exacerbating the effect, and
the effectiveness of even simple noise injection methods. Our findings provide
a deeper understanding of PB and practical mitigation strategies, offering a
FIM-based geometric perspective for advancing DRL.
|
2502.00803
|
ProPINN: Demystifying Propagation Failures in Physics-Informed Neural
Networks
|
cs.LG
|
Physics-informed neural networks (PINNs) have earned high expectations in
solving partial differential equations (PDEs), but their optimization usually
faces thorny challenges due to the unique derivative-dependent loss function.
By analyzing the loss distribution, previous research observed the propagation
failure phenomenon of PINNs, intuitively described as the correct supervision
for model outputs cannot ``propagate'' from initial states or boundaries to the
interior domain. Going beyond intuitive understanding, this paper provides the
first formal and in-depth study of propagation failure and its root cause.
Based on a detailed comparison with classical finite element methods, we
ascribe the failure to the conventional single-point-processing architecture of
PINNs and further prove that propagation failure is essentially caused by the
lower gradient correlation of PINN models on nearby collocation points.
Compared to superficial loss maps, this new perspective provides a more precise
quantitative criterion to identify where and why PINN fails. The theoretical
finding also inspires us to present a new PINN architecture, named ProPINN,
which can effectively unite the gradient of region points for better
propagation. ProPINN can reliably resolve PINN failure modes and significantly
surpass advanced Transformer-based models with a 46% relative improvement.
|
2502.00806
|
UniGraph2: Learning a Unified Embedding Space to Bind Multimodal Graphs
|
cs.LG
|
Existing foundation models, such as CLIP, aim to learn a unified embedding
space for multimodal data, enabling a wide range of downstream web-based
applications like search, recommendation, and content classification. However,
these models often overlook the inherent graph structures in multimodal
datasets, where entities and their relationships are crucial. Multimodal graphs
(MMGs) represent such graphs where each node is associated with features from
different modalities, while the edges capture the relationships between these
entities. On the other hand, existing graph foundation models primarily focus
on text-attributed graphs (TAGs) and are not designed to handle the
complexities of MMGs. To address these limitations, we propose UniGraph2, a
novel cross-domain graph foundation model that enables general representation
learning on MMGs, providing a unified embedding space. UniGraph2 employs
modality-specific encoders alongside a graph neural network (GNN) to learn a
unified low-dimensional embedding space that captures both the multimodal
information and the underlying graph structure. We propose a new cross-domain
multi-graph pre-training algorithm at scale to ensure effective transfer
learning across diverse graph domains and modalities. Additionally, we adopt a
Mixture of Experts (MoE) component to align features from different domains and
modalities, ensuring coherent and robust embeddings that unify the information
across modalities. Extensive experiments on a variety of multimodal graph tasks
demonstrate that UniGraph2 significantly outperforms state-of-the-art models in
tasks such as representation learning, transfer learning, and multimodal
generative tasks, offering a scalable and flexible solution for learning on
MMGs.
|
2502.00808
|
Synthetic Artifact Auditing: Tracing LLM-Generated Synthetic Data Usage
in Downstream Applications
|
cs.LG cs.CR cs.CY
|
Large language models (LLMs) have facilitated the generation of high-quality,
cost-effective synthetic data for developing downstream models and conducting
statistical analyses in various domains. However, the increased reliance on
synthetic data may pose potential negative impacts. Numerous studies have
demonstrated that LLM-generated synthetic data can perpetuate and even amplify
societal biases and stereotypes, and produce erroneous outputs known as
``hallucinations'' that deviate from factual knowledge. In this paper, we aim
to audit artifacts, such as classifiers, generators, or statistical plots, to
identify those trained on or derived from synthetic data and raise user
awareness, thereby reducing unexpected consequences and risks in downstream
applications. To this end, we take the first step to introduce synthetic
artifact auditing to assess whether a given artifact is derived from
LLM-generated synthetic data. We then propose an auditing framework with three
methods including metric-based auditing, tuning-based auditing, and
classification-based auditing. These methods operate without requiring the
artifact owner to disclose proprietary training details. We evaluate our
auditing framework on three text classification tasks, two text summarization
tasks, and two data visualization tasks across three training scenarios. Our
evaluation demonstrates the effectiveness of all proposed auditing methods
across all these tasks. For instance, black-box metric-based auditing can
achieve an average accuracy of $0.868 \pm 0.071$ for auditing classifiers and
$0.880 \pm 0.052$ for auditing generators using only 200 random queries across
three scenarios. We hope our research will enhance model transparency and
regulatory compliance, ensuring the ethical and responsible use of synthetic
data.
|
2502.00814
|
Disentangling Length Bias In Preference Learning Via
Response-Conditioned Modeling
|
cs.LG cs.CL
|
Reinforcement Learning from Human Feedback (RLHF) has achieved considerable
success in aligning large language models (LLMs) by modeling human preferences
with a learnable reward model and employing a reinforcement learning algorithm
to maximize the reward model's scores. However, these reward models are
susceptible to exploitation through various superficial confounding factors,
with length bias emerging as a particularly significant concern. Moreover,
while the pronounced impact of length bias on preference modeling suggests that
LLMs possess an inherent sensitivity to length perception, our preliminary
investigations reveal that fine-tuned LLMs consistently struggle to adhere to
explicit length instructions. To address these two limitations, we propose a
novel framework wherein the reward model explicitly differentiates between
human semantic preferences and response length requirements. Specifically, we
introduce a Response-conditioned Bradley-Terry (Rc-BT) model that enhances the
reward model's capability in mitigating length bias and following length
instructions through training on our augmented dataset. Furthermore, we propose
the Rc-DPO algorithm to leverage the Rc-BT model for direct policy optimization
(DPO) of LLMs, simultaneously mitigating length bias and promoting adherence to
length instructions. Extensive evaluations demonstrate that our approach
substantially improves both preference modeling and length instruction
compliance, with its effectiveness validated across various foundational models
and preference datasets.
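At its core, such a reward model is trained with a Bradley-Terry preference loss; Rc-BT additionally conditions comparisons on a response-length requirement. Only the vanilla BT core is sketched here, in a numerically stable form:

```python
import math

def bradley_terry_loss(r_chosen, r_rejected):
    """-log sigmoid(r_chosen - r_rejected), averaged over preference pairs.
    Splits the computation by the sign of the margin so that exp() never
    overflows for large |margin|."""
    def nll(m):
        # -log sigmoid(m) = log(1 + e^{-m}) = -m + log(1 + e^{m}) for m < 0
        return math.log1p(math.exp(-m)) if m >= 0 else -m + math.log1p(math.exp(m))
    margins = [c - r for c, r in zip(r_chosen, r_rejected)]
    return sum(nll(m) for m in margins) / len(margins)
```

A larger reward margin for the chosen response drives the loss toward zero; a zero margin gives log 2, the chance-level value.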
|
2502.00816
|
Sundial: A Family of Highly Capable Time Series Foundation Models
|
cs.LG
|
We introduce Sundial, a family of native, flexible, and scalable time series
foundation models. To predict the next-patch's distribution, we propose a
TimeFlow Loss based on flow-matching, which facilitates native pre-training of
Transformers on time series without discrete tokenization. Conditioned on
arbitrary-length time series, our model is pre-trained without specifying any
prior distribution and can generate multiple probable predictions, achieving
flexibility in representation learning beyond using parametric densities.
Towards time series foundation models, we leverage minimal but crucial
adaptations of Transformers and curate TimeBench with 1 trillion time points,
comprising mostly real-world datasets and synthetic data. By mitigating mode
collapse through TimeFlow Loss, we pre-train a family of Sundial models on
TimeBench, which exhibit unprecedented model capacity and generalization
performance on zero-shot forecasting. In addition to presenting good scaling
behavior, Sundial achieves new state-of-the-art on both point forecasting and
probabilistic forecasting benchmarks. We believe that Sundial's pioneering
generative paradigm will facilitate a wide variety of forecasting scenarios.
|
2502.00817
|
Probing Large Language Models in Reasoning and Translating Complex
Linguistic Puzzles
|
cs.CL
|
This paper investigates the utilization of Large Language Models (LLMs) for
solving complex linguistic puzzles, a domain requiring advanced reasoning and
adept translation capabilities akin to human cognitive processes. We explore
specific prompting techniques designed to enhance the ability of LLMs to reason and
elucidate their decision-making pathways, with a focus on Input-Output
Prompting (IO), Chain-of-Thought Prompting (CoT), and Solo Performance
Prompting (SPP). Utilizing datasets from the Puzzling Machine Competition and
various Linguistics Olympiads, we employ a comprehensive set of metrics to
assess the performance of GPT-4 0603, a prominent LLM, across these prompting
methods. Our findings illuminate the potential of LLMs in linguistic reasoning
and complex translation tasks, highlighting their capabilities and identifying
limitations in the context of linguistic puzzles. This research contributes
significantly to the broader field of Natural Language Processing (NLP) by
providing insights into the optimization of LLM applications for improved
reasoning and translation accuracy, thereby enriching the ongoing dialogue in
NLP advancements.
|
2502.00818
|
Error-quantified Conformal Inference for Time Series
|
stat.ML cs.LG
|
Uncertainty quantification in time series prediction is challenging due to
the temporal dependence and distribution shift on sequential data. Conformal
inference provides a pivotal and flexible instrument for assessing the
uncertainty of machine learning models through prediction sets. Recently, a
series of online conformal inference methods updated thresholds of prediction
sets by performing online gradient descent on a sequence of quantile loss
functions. A drawback of such methods is that they only use the information of
revealed non-conformity scores via miscoverage indicators but ignore error
quantification, namely the distance between the non-conformity score and the
current threshold. To accurately leverage the dynamics of the miscoverage error, we
propose \textit{Error-quantified Conformal Inference} (ECI) by smoothing the
quantile loss function. ECI introduces a continuous and adaptive feedback scale
with the miscoverage error, rather than simple binary feedback in existing
methods. We establish a long-term coverage guarantee for ECI under arbitrary
dependence and distribution shift. The extensive experimental results show that
ECI can achieve valid miscoverage control and output tighter prediction sets
than other baselines.
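The difference from binary-feedback online conformal methods fits in a single update rule. The sigmoid and the temperature `tau` below are illustrative smoothing choices, not necessarily the paper's exact formulation:

```python
import math

def eci_threshold_update(theta, score, alpha, eta, tau=1.0):
    """One online update of the conformal threshold theta.
    Classic adaptive conformal: theta += eta * (1{score > theta} - alpha),
    which uses only the binary miscoverage indicator. An ECI-style update
    replaces it with a smooth feedback that scales with the error size
    score - theta, i.e., how far the non-conformity score lies from the
    current threshold."""
    feedback = 1.0 / (1.0 + math.exp(-(score - theta) / tau))
    return theta + eta * (feedback - alpha)
```

Scores far above the threshold push it up by nearly eta*(1 - alpha), scores far below pull it down by nearly eta*alpha, and near-threshold scores produce proportionally gentler corrections.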
|
2502.00820
|
OOD Detection with immature Models
|
cs.LG cs.CV
|
Likelihood-based deep generative models (DGMs) have gained significant
attention for their ability to approximate the distributions of
high-dimensional data. However, these models lack a performance guarantee in
assigning higher likelihood values to in-distribution (ID) inputs, data the
models are trained on, compared to out-of-distribution (OOD) inputs. This
counter-intuitive behaviour is particularly pronounced when ID inputs are more
complex than OOD data points. One potential approach to address this challenge
involves leveraging the gradient of a data point with respect to the parameters
of the DGMs. A recent OOD detection framework proposed estimating the joint
density of layer-wise gradient norms for a given data point as a model-agnostic
method, demonstrating superior performance compared to the Typicality Test
across likelihood-based DGMs and image dataset pairs. In particular, most
existing methods presuppose access to fully converged models, the training of
which is both time-intensive and computationally demanding. In this work, we
demonstrate that using immature models, stopped at early stages of training, can
mostly achieve equivalent or even superior results on this downstream task
compared to mature models capable of generating high-quality samples that
closely resemble ID data. This novel finding enhances our understanding of how
DGMs learn the distribution of ID data and highlights the potential of
leveraging partially trained models for downstream tasks. Furthermore, we offer
a possible explanation for this unexpected behaviour through the concept of
support overlap.
|
2502.00823
|
Online Learning of Pure States is as Hard as Mixed States
|
quant-ph cs.LG
|
Quantum state tomography, the task of learning an unknown quantum state, is a
fundamental problem in quantum information. In standard settings, the
complexity of this problem depends significantly on the type of quantum state
that one is trying to learn, with pure states being substantially easier to
learn than general mixed states. A natural question is whether this separation
holds for any quantum state learning setting. In this work, we consider the
online learning framework and prove the surprising result that learning pure
states in this setting is as hard as learning mixed states. More specifically,
we show that both classes share almost the same sequential fat-shattering
dimension, leading to identical regret scaling under the $L_1$-loss. We also
generalize previous results on full quantum state tomography in the online
setting to learning the density matrix only partially, using smoothed analysis.
|
2502.00826
|
Weak Supervision Dynamic KL-Weighted Diffusion Models Guided by Large
Language Models
|
cs.CL
|
In this paper, we present a novel method for improving text-to-image
generation by combining Large Language Models (LLMs) with diffusion models, a
hybrid approach aimed at achieving both higher quality and efficiency in image
synthesis from text descriptions. Our approach introduces a new dynamic
KL-weighting strategy to optimize the diffusion process, along with
incorporating semantic understanding from pre-trained LLMs to guide the
generation process. The proposed method significantly improves both the visual
quality and alignment of generated images with text descriptions, addressing
challenges such as computational inefficiency, instability in training, and
robustness to textual variability. We evaluate our method on the COCO dataset
and demonstrate its superior performance over traditional GAN-based models,
both quantitatively and qualitatively. Extensive experiments, including
ablation studies and human evaluations, confirm that our method outperforms
existing approaches in terms of image realism, relevance to the input text, and
overall aesthetic quality. Our approach also shows promise in scalability to
other multimodal tasks, making it a versatile solution for a wide range of
generative applications.
|
2502.00828
|
Decision-informed Neural Networks with Large Language Model Integration
for Portfolio Optimization
|
q-fin.PM cs.AI q-fin.CP
|
This paper addresses the critical disconnect between prediction and decision
quality in portfolio optimization by integrating Large Language Models (LLMs)
with decision-focused learning. We demonstrate both theoretically and
empirically that minimizing the prediction error alone leads to suboptimal
portfolio decisions. We aim to exploit the representational power of LLMs for
investment decisions. An attention mechanism processes asset relationships,
temporal dependencies, and macro variables, which are then directly integrated
into a portfolio optimization layer. This enables the model to capture complex
market dynamics and align predictions with the decision objectives. Extensive
experiments on S\&P100 and DOW30 datasets show that our model consistently
outperforms state-of-the-art deep learning models. In addition, gradient-based
analyses show that our model prioritizes the assets most crucial to decision
making, thus mitigating the effects of prediction errors on portfolio
performance. These findings underscore the value of integrating decision
objectives into predictions for more robust and context-aware portfolio
management.
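The prediction/decision disconnect claimed above can be made concrete with a toy two-asset sketch (all numbers invented for illustration; this is not the paper's model): a forecast with lower mean-squared error can still rank assets incorrectly, so a greedy allocation based on it underperforms.

```python
import numpy as np

# Toy illustration (invented numbers) of the prediction/decision gap
# that motivates decision-focused learning: the forecast with the
# LOWER mean-squared error ranks the two assets incorrectly, so a
# greedy allocation based on it picks the worse asset.
true_returns = np.array([0.05, 0.06])            # asset 1 is better

forecast_low_mse = np.array([0.055, 0.054])      # tiny errors, wrong ranking
forecast_high_mse = np.array([0.02, 0.09])       # big errors, right ranking

def mse(pred):
    return float(np.mean((pred - true_returns) ** 2))

assert mse(forecast_low_mse) < mse(forecast_high_mse)

# Greedy decision: put all weight on the asset with the highest forecast.
realized_low = true_returns[np.argmax(forecast_low_mse)]
realized_high = true_returns[np.argmax(forecast_high_mse)]
assert realized_low < realized_high              # better MSE, worse decision
```

Minimizing prediction error alone therefore does not guarantee good decisions, which is exactly the gap a decision-focused objective closes.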
|
2502.00829
|
A Comprehensive Analysis on LLM-based Node Classification Algorithms
|
cs.LG cs.SI
|
Node classification is a fundamental task in graph analysis, with broad
applications across various fields. Recent breakthroughs in Large Language
Models (LLMs) have enabled LLM-based approaches for this task. Although many
studies demonstrate the impressive performance of LLM-based methods, the lack
of clear design guidelines may hinder their practical application. In this
work, we aim to establish such guidelines through a fair and systematic
comparison of these algorithms. As a first step, we developed LLMNodeBed, a
comprehensive codebase and testbed for node classification using LLMs. It
includes ten datasets, eight LLM-based algorithms, and three learning
paradigms, and is designed for easy extension with new methods and datasets.
Subsequently, we conducted extensive experiments, training and evaluating over
2,200 models, to determine the key settings (e.g., learning paradigms and
homophily) and components (e.g., model size) that affect performance. Our
findings uncover eight insights, e.g., (1) LLM-based methods can significantly
outperform traditional methods in a semi-supervised setting, while the
advantage is marginal in a supervised setting; (2) Graph Foundation Models can
beat open-source LLMs but still fall short of strong LLMs like GPT-4o in a
zero-shot setting. We hope that the release of LLMNodeBed, along with our
insights, will facilitate reproducible research and inspire future studies in
this field. Codes and datasets are released at
\href{https://llmnodebed.github.io/}{https://llmnodebed.github.io/}.
|
2502.00832
|
Generalization of Medical Large Language Models through Cross-Domain
Weak Supervision
|
cs.CL
|
The advancement of large language models (LLMs) has opened new frontiers in
natural language processing, particularly in specialized domains like
healthcare. In this paper, we propose the Incremental Curriculum-Based
Fine-Tuning (ICFT) framework to enhance the generative capabilities of medical
large language models (MLLMs). ICFT combines curriculum-based learning,
dual-stage memory coordination, and parameter-efficient fine-tuning to enable a
progressive transition from general linguistic knowledge to strong
domain-specific expertise. Experimental results across diverse medical NLP
tasks, including question answering, preference classification, and response
generation, demonstrate that ICFT consistently outperforms state-of-the-art
baselines, achieving improvements in both accuracy and efficiency. Further
analysis reveals the framework's ability to generalize to unseen data, reduce
errors, and deliver diverse, contextually relevant medical responses. These
findings establish ICFT as a robust and scalable solution for adapting LLMs to
the medical domain, offering practical benefits for real-world healthcare
applications.
|
2502.00833
|
Cross multiscale vision transformer for deep fake detection
|
cs.CV
|
The proliferation of deep fake technology poses significant challenges to
digital media authenticity, necessitating robust detection mechanisms. This
project evaluates deep fake detection using the SP Cup's 2025 deep fake
detection challenge dataset. We focused on exploring various deep learning
models for detecting deep fake content, utilizing traditional deep learning
techniques alongside newer architectures. Our approach involved training a
series of models and rigorously assessing their performance using metrics such
as accuracy.
|
2502.00834
|
Boosting Adversarial Robustness and Generalization with Structural Prior
|
cs.LG cs.CR cs.NE
|
This work investigates a novel approach to boost adversarial robustness and
generalization by incorporating structural prior into the design of deep
learning models. Specifically, our study surprisingly reveals that existing
dictionary learning-inspired convolutional neural networks (CNNs) provide a
false sense of security against adversarial attacks. To address this, we
propose Elastic Dictionary Learning Networks (EDLNets), a novel ResNet
architecture that significantly enhances adversarial robustness and
generalization. This novel and effective approach is supported by a theoretical
robustness analysis using influence functions. Moreover, extensive and reliable
experiments demonstrate consistent and significant performance improvement on
open robustness leaderboards such as RobustBench, surpassing state-of-the-art
baselines. To the best of our knowledge, this is the first work to discover and
validate that structural prior can reliably enhance deep learning robustness
under strong adaptive attacks, unveiling a promising direction for future
research.
|
2502.00835
|
CAIMAN: Causal Action Influence Detection for Sample Efficient
Loco-manipulation
|
cs.RO cs.LG
|
Enabling legged robots to perform non-prehensile loco-manipulation with large
and heavy objects is crucial for enhancing their versatility. However, this is
a challenging task, often requiring sophisticated planning strategies or
extensive task-specific reward shaping, especially in unstructured scenarios
with obstacles. In this work, we present CAIMAN, a novel framework for learning
loco-manipulation that relies solely on sparse task rewards. We leverage causal
action influence to detect states where the robot is in control over other
entities in the environment, and use this measure as an intrinsically motivated
objective to enable sample-efficient learning. We employ a hierarchical control
strategy, combining a low-level locomotion policy with a high-level policy that
prioritizes task-relevant velocity commands. Through simulated and real-world
experiments, including object manipulation with obstacles, we demonstrate the
framework's superior sample efficiency, adaptability to diverse environments,
and successful transfer to hardware without fine-tuning. The proposed approach
paves the way for scalable, robust, and autonomous loco-manipulation in
real-world applications.
|
2502.00837
|
Explainability in Practice: A Survey of Explainable NLP Across Various
Domains
|
cs.CL cs.AI
|
Natural Language Processing (NLP) has become a cornerstone in many critical
sectors, including healthcare, finance, and customer relationship management.
This is especially true with the development and use of advanced models such as
GPT-based architectures and BERT, which are widely used in decision-making
processes. However, the black-box nature of these advanced NLP models has
created an urgent need for transparency and explainability. This review
explores explainable NLP (XNLP) with a focus on its practical deployment and
real-world applications, examining its implementation and the challenges faced
in domain-specific contexts. The paper underscores the importance of
explainability in NLP and provides a comprehensive perspective on how XNLP can
be designed to meet the unique demands of various sectors, from healthcare's
need for clear insights to finance's emphasis on fraud detection and risk
assessment. Additionally, this review aims to bridge the knowledge gap in XNLP
literature by offering a domain-specific exploration and discussing
underrepresented areas such as real-world applicability, metric evaluation, and
the role of human interaction in model assessment. The paper concludes by
suggesting future research directions that could enhance the understanding and
broader application of XNLP.
|
2502.00840
|
Activation Approximations Can Incur Safety Vulnerabilities Even in
Aligned LLMs: Comprehensive Analysis and Defense
|
cs.CR cs.AI
|
Large Language Models (LLMs) have showcased remarkable capabilities across
various domains. Accompanying the evolving capabilities and expanding
deployment scenarios of LLMs, their deployment challenges escalate due to their
sheer scale and the advanced yet complex activation designs prevalent in
notable model series, such as Llama, Gemma, and Mistral. These challenges have
become particularly pronounced in resource-constrained deployment scenarios,
where mitigating inference efficiency bottlenecks is imperative. Among various
recent efforts, activation approximation has emerged as a promising avenue for
pursuing inference efficiency, sometimes considered indispensable in
applications such as private inference. Despite achieving substantial speedups
with minimal impact on utility, even appearing sound and practical for
real-world deployment, the safety implications of activation approximations
remain unclear. In this work, we fill this critical gap in LLM safety by
conducting the first systematic safety evaluation of activation approximations.
Our safety vetting spans seven state-of-the-art techniques across three popular
categories, revealing consistent safety degradation across ten safety-aligned
LLMs.
|
2502.00843
|
VLM-Assisted Continual learning for Visual Question Answering in
Self-Driving
|
cs.CV
|
In this paper, we propose a novel approach for solving the Visual Question
Answering (VQA) task in autonomous driving by integrating Vision-Language
Models (VLMs) with continual learning. In autonomous driving, VQA plays a vital
role in enabling the system to understand and reason about its surroundings.
However, traditional models often struggle with catastrophic forgetting when
sequentially exposed to new driving tasks, such as perception, prediction, and
planning, each requiring different forms of knowledge. To address this
challenge, we present a novel continual learning framework that combines VLMs
with selective memory replay and knowledge distillation, reinforced by
task-specific projection layer regularization. The knowledge distillation
allows a previously trained model to act as a "teacher" to guide the model
through subsequent tasks, minimizing forgetting. Meanwhile, task-specific
projection layers calculate the loss based on the divergence of feature
representations, ensuring continuity in learning and reducing the shift between
tasks. Evaluated on the DriveLM dataset, our framework shows substantial
performance improvements, with gains ranging from 21.40% to 32.28% across
various metrics. These results highlight the effectiveness of combining
continual learning with VLMs in enhancing the resilience and reliability of VQA
systems in autonomous driving. We will release our source code.
|
2502.00846
|
Federated Generalised Variational Inference: A Robust Probabilistic
Federated Learning Framework
|
cs.LG stat.ML
|
We introduce FedGVI, a probabilistic Federated Learning (FL) framework that
is provably robust to both prior and likelihood misspecification. FedGVI
addresses limitations in both frequentist and Bayesian FL by providing unbiased
predictions under model misspecification, with calibrated uncertainty
quantification. Our approach generalises previous FL approaches, specifically
Partitioned Variational Inference (Ashman et al., 2022), by allowing robust and
conjugate updates, decreasing computational complexity at the clients. We offer
theoretical analysis in terms of fixed-point convergence, optimality of the
cavity distribution, and provable robustness. Additionally, we empirically
demonstrate the effectiveness of FedGVI in terms of improved robustness and
predictive performance on multiple synthetic and real world classification data
sets.
|
2502.00847
|
SecPE: Secure Prompt Ensembling for Private and Robust Large Language
Models
|
cs.CR cs.AI
|
With the growing popularity of LLMs among the general public,
privacy-preserving and adversarial robustness have become two pressing demands
for LLM-based services, which have largely been pursued separately but rarely
jointly. In this paper, we present, to the best of our knowledge, one of the
first attempts towards robust and private LLM inference by tightly integrating
two
disconnected fields: private inference and prompt ensembling. The former
protects users' privacy by encrypting inference data transmitted and processed
by LLMs, while the latter enhances adversarial robustness by yielding an
aggregated output from multiple prompted LLM responses. Although each is widely
recognized as effective individually, applying private inference to prompt
ensembling entails new challenges that render a naive combination of existing
techniques inefficient. To overcome these hurdles, we propose SecPE, which
designs efficient fully homomorphic encryption (FHE) counterparts for the core
algorithmic building blocks of prompt ensembling. We conduct extensive
experiments on 8 tasks to evaluate the accuracy, robustness, and efficiency of
SecPE. The results show that SecPE maintains high clean accuracy and offers
better robustness at the expense of merely $2.5\%$ efficiency overhead compared
to baseline private inference methods, indicating a satisfactory
``accuracy-robustness-efficiency'' tradeoff. For the efficiency of the
encrypted Argmax operation that incurs major slowdown for prompt ensembling,
SecPE is 35.4x faster than the state-of-the-art peers, which can be of
independent interest beyond this work.
|
2502.00848
|
RealRAG: Retrieval-augmented Realistic Image Generation via
Self-reflective Contrastive Learning
|
cs.CV
|
Recent text-to-image generative models, e.g., Stable Diffusion V3 and Flux,
have achieved notable progress. However, these models are strongly restricted
to their limited knowledge, a.k.a., their own fixed parameters, that are
trained with closed datasets. This leads to significant hallucinations or
distortions when facing fine-grained and unseen novel real-world objects, e.g.,
the appearance of the Tesla Cybertruck. To this end, we present the first
real-object-based retrieval-augmented generation framework (RealRAG), which
augments fine-grained and unseen novel object generation by learning and
retrieving real-world images to overcome the knowledge gaps of generative
models. Specifically, to integrate missing memory for unseen novel object
generation, we train a reflective retriever by self-reflective contrastive
learning, which injects the generator's knowledge into the self-reflective
negatives, ensuring that the retrieved augmented images compensate for the
model's missing knowledge. Furthermore, the real-object-based framework
integrates fine-grained visual knowledge for the generative models, tackling
the distortion problem and improving the realism for fine-grained object
generation. Our RealRAG is superior in its modular application to all types of
state-of-the-art text-to-image generative models and also delivers remarkable
performance boosts with all of them, such as a gain of 16.18% FID score with
the auto-regressive model on the Stanford Car benchmark.
|
2502.00850
|
Dual Alignment Maximin Optimization for Offline Model-based RL
|
cs.LG cs.AI
|
Offline reinforcement learning agents face significant deployment challenges
due to the synthetic-to-real distribution mismatch. While most prior research
has focused on improving the fidelity of synthetic sampling and incorporating
off-policy mechanisms, the directly integrated paradigm often fails to ensure
consistent policy behavior in biased models and underlying environmental
dynamics, which inherently arise from discrepancies between behavior and
learning policies. In this paper, we first shift the focus from model
reliability to policy discrepancies while optimizing for expected returns, and
then self-consistently incorporate synthetic data, deriving a novel
actor-critic paradigm, Dual Alignment Maximin Optimization (DAMO). It is a
unified framework to ensure both model-environment policy consistency and
synthetic and offline data compatibility. The inner minimization performs dual
conservative value estimation, aligning policies and trajectories to avoid
out-of-distribution states and actions, while the outer maximization ensures
that policy improvements remain consistent with inner value estimates.
Empirical evaluations demonstrate that DAMO effectively ensures model and
policy alignments, achieving competitive performance across diverse benchmark
tasks.
|
2502.00854
|
High-Dimensional Bayesian Optimization Using Both Random and Supervised
Embeddings
|
math.OC cs.LG stat.ML
|
Bayesian optimization (BO) is one of the most powerful strategies to solve
computationally expensive-to-evaluate blackbox optimization problems. However,
BO methods are conventionally used for optimization problems of small dimension
because of the curse of dimensionality. In this paper, a high-dimensional
optimization method incorporating linear embedding subspaces of small dimension
is proposed to efficiently perform the optimization. An adaptive learning
strategy for these linear embeddings is carried out in conjunction with the
optimization. The resulting BO method, named efficient global optimization
coupled with random and supervised embedding (EGORSE), combines in an adaptive
way both random and supervised linear embeddings. EGORSE has been compared to
state-of-the-art algorithms and tested on academic examples with a number of
design variables ranging from 10 to 600. The obtained results show the high
potential of EGORSE to solve high-dimensional blackbox optimization problems,
in terms of both CPU time and the limited number of calls to the expensive
blackbox simulation.
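The "random embedding" half of an EGORSE-style method can be sketched in a few lines (a toy illustration under our own assumptions, with plain random search standing in for the actual BO acquisition loop): draw a linear map A from a low-dimensional subspace into the ambient space, search over the low-dimensional variable z, and evaluate the expensive blackbox only at x = A z.

```python
import numpy as np

# Sketch of optimization in a random linear embedding: search a
# d-dimensional subspace of a D-dimensional blackbox problem.
# The sphere objective, dimensions, and random-search inner loop are
# illustrative stand-ins, not EGORSE's actual components.
def f(x):                        # toy expensive blackbox: sphere function
    return float(np.sum(x ** 2))

D, d = 100, 2                    # ambient and embedding dimensions
rng = np.random.default_rng(0)
A = rng.normal(size=(D, d)) / np.sqrt(d)   # random embedding map

best = np.inf
for _ in range(200):             # stand-in for a BO acquisition loop
    z = rng.uniform(-1, 1, d)    # candidate in the embedded subspace
    best = min(best, f(A @ z))   # blackbox is only called on D-dim points

assert best < 10.0               # far better than typical random D-dim points
```

The key property is that every blackbox call happens in the full D-dimensional space while the search itself runs in d dimensions, which is what makes the approach tractable for large D.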
|
2502.00855
|
Psychometric-Based Evaluation for Theorem Proving with Large Language
Models
|
cs.AI
|
Large language models (LLMs) for formal theorem proving have become a
prominent research focus. At present, the proving ability of these LLMs is
mainly evaluated through proof pass rates on datasets such as miniF2F. However,
this evaluation method overlooks the varying importance of theorems. As a
result, it fails to highlight the real performance disparities between LLMs and
leads to high evaluation costs. This study proposes a psychometric-based
evaluation method for theorem proving with LLMs, comprising two main
components: Dataset Annotation and Adaptive Evaluation. First, we propose a
metric calculation method to annotate the dataset with difficulty and
discrimination metrics. Specifically, we annotate each theorem in the miniF2F
dataset and grade them into varying difficulty levels according to the
performance of LLMs, resulting in an enhanced dataset: miniF2F-Graded.
Experimental results show that the difficulty grading in miniF2F-Graded better
reflects the theorem difficulty perceived by LLMs. Second, we design an
adaptive evaluation method to dynamically select the most suitable theorems for
testing based on the annotated metrics and the real-time performance of LLMs.
We apply this method to evaluate 10 LLMs. The results show that our method
finely highlights the performance disparities between LLMs. It also reduces
evaluation costs by using only 23% of the theorems in the dataset.
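A single adaptive-selection step of the kind described above can be sketched with standard psychometric (2PL item-response) formulas; the theorem pool, parameter values, and the Fisher-information selection rule are our illustrative choices and may differ from the paper's exact method.

```python
import math

# Sketch of an IRT-style adaptive selection step: pick the theorem
# whose annotated difficulty b and discrimination a maximize Fisher
# information at the model's current ability estimate theta.
# The 2PL formulas are standard psychometrics; the pool is made up.
def p_solve(theta, a, b):
    # 2PL item response: probability the model proves the theorem.
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    p = p_solve(theta, a, b)
    return a * a * p * (1.0 - p)     # information of a 2PL item

# (discrimination a, difficulty b) per theorem.
pool = {"thm_easy": (1.0, -2.0), "thm_mid": (1.5, 0.1), "thm_hard": (0.8, 3.0)}
theta = 0.0                          # current ability estimate of the LLM
best = max(pool, key=lambda t: fisher_info(theta, *pool[t]))
assert best == "thm_mid"             # difficulty closest to ability wins
```

Selecting maximally informative theorems is what lets an adaptive scheme discriminate between models with far fewer test items than a full-dataset pass.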
|
2502.00857
|
HintEval: A Comprehensive Framework for Hint Generation and Evaluation
for Questions
|
cs.CL cs.IR
|
Large Language Models (LLMs) are transforming how people find information,
and many users turn nowadays to chatbots to obtain answers to their questions.
Despite the instant access to abundant information that LLMs offer, it is still
important to promote critical thinking and problem-solving skills. Automatic
hint generation is a new task that aims to support humans in answering
questions on their own by creating hints that guide users toward answers
without directly revealing them. In this context, hint evaluation focuses on
measuring the quality of hints, helping to improve the hint generation
approaches. However, resources for hint research are currently spanning
different formats and datasets, while the evaluation tools are missing or
incompatible, making it hard for researchers to compare and test their models.
To overcome these challenges, we introduce HintEval, a Python library that
makes it easy to access diverse datasets and provides multiple approaches to
generate and evaluate hints. HintEval aggregates the scattered resources into a
single toolkit that supports a range of research goals and enables a clear,
multi-faceted, and reliable evaluation. The proposed library also includes
detailed online documentation, helping users quickly explore its features and
get started. By reducing barriers to entry and encouraging consistent
evaluation practices, HintEval offers a major step forward for facilitating
hint generation and analysis research within the NLP/IR community.
|
2502.00858
|
Learning to Plan with Personalized Preferences
|
cs.AI cs.HC
|
Effective integration of AI agents into daily life requires them to
understand and adapt to individual human preferences, particularly in
collaborative roles. Although recent studies on embodied intelligence have
advanced significantly, they typically adopt generalized approaches that
overlook personal preferences in planning. We address this limitation by
developing agents that not only learn preferences from few demonstrations but
also learn to adapt their planning strategies based on these preferences. Our
research leverages the observation that preferences, though implicitly
expressed through minimal demonstrations, can generalize across diverse
planning scenarios. To systematically evaluate this hypothesis, we introduce
the Preference-based Planning (PbP) benchmark, an embodied benchmark featuring
hundreds of diverse preferences spanning from atomic actions to complex
sequences. Our evaluation of SOTA methods reveals that while symbol-based
approaches show promise in scalability, significant challenges remain in
learning to generate and execute plans that satisfy personalized preferences.
We further demonstrate that incorporating learned preferences as intermediate
representations in planning significantly improves the agent's ability to
construct personalized plans. These findings establish preferences as a
valuable abstraction layer for adaptive planning, opening new directions for
research in preference-guided plan generation and execution.
|
2502.00859
|
FedRIR: Rethinking Information Representation in Federated Learning
|
cs.LG cs.DC
|
Mobile and Web-of-Things (WoT) devices at the network edge generate vast
amounts of data for machine learning applications, yet privacy concerns hinder
centralized model training. Federated Learning (FL) allows clients (devices) to
collaboratively train a shared model coordinated by a central server without
transferring private data, but inherent statistical heterogeneity among clients
presents challenges, often leading to a dilemma between clients' needs for
personalized local models and the server's goal of building a generalized
global model. Existing FL methods typically prioritize either global
generalization or local personalization, resulting in a trade-off between these
two objectives and limiting the full potential of diverse client data. To
address this challenge, we propose a novel framework that simultaneously
enhances global generalization and local personalization by Rethinking
Information Representation in the Federated learning process (FedRIR).
Specifically, we introduce Masked Client-Specific Learning (MCSL), which
isolates and extracts fine-grained client-specific features tailored to each
client's unique data characteristics, thereby enhancing personalization.
Concurrently, the Information Distillation Module (IDM) refines the global
shared features by filtering out redundant client-specific information,
resulting in a purer and more robust global representation that enhances
generalization. By integrating the refined global features with the isolated
client-specific features, we construct enriched representations that
effectively capture both global patterns and local nuances, thereby improving
the performance of downstream tasks on the client. The code is available at
https://github.com/Deep-Imaging-Group/FedRIR.
|
2502.00861
|
Multivariable Stochastic Newton-Based Extremum Seeking with Delays
|
math.OC cs.SY eess.SY
|
This paper presents a Newton-based stochastic extremum-seeking control method
for real-time optimization in multi-input systems with distinct input delays.
It combines predictor-based feedback and Hessian inverse estimation via
stochastic perturbations to enable delay compensation with user-defined
convergence rates. The method ensures exponential stability and convergence
near the unknown extremum, even under long delays. It extends to multi-input,
single-output systems with cross-coupled channels. Stability is analyzed using
backstepping and infinite-dimensional averaging. Numerical simulations
demonstrate its effectiveness in handling time-delayed channels, showcasing
both the challenges and benefits of real-time optimization in distributed
parameter settings.
|
2502.00865
|
Predicting potentially unfair clauses in Chilean terms of services with
natural language processing
|
cs.CL cs.AI cs.CY cs.LG
|
This study addresses the growing concern of information asymmetry in consumer
contracts, exacerbated by the proliferation of online services with complex
Terms of Service that are rarely even read. Although research on automatic
analysis methods exists, the problem is aggravated by its general focus
on English-language Machine Learning approaches and on major jurisdictions,
such as the European Union. We introduce a new methodology and a substantial
dataset addressing this gap. We propose a novel annotation scheme with four
categories and a total of 20 classes, and apply it to 50 online Terms of
Service used in Chile. Our evaluation of transformer-based models highlights
how factors like language- and/or domain-specific pre-training, few-shot sample
size, and model architecture affect the detection and classification of
potentially abusive clauses. Results show a large variability in performance
for the different tasks and models, with the highest macro-F1 scores for the
detection task ranging from 79% to 89% and micro-F1 scores up to 96%, while
macro-F1 scores for the classification task range from 60% to 70% and micro-F1
scores from 64% to 80%. Notably, this is the first Spanish-language multi-label
classification dataset for legal clauses, applying Chilean law and offering a
comprehensive evaluation of Spanish-language models in the legal domain. Our
work lays the ground for future research in method development for rarely
considered legal analysis and potentially leads to practical applications to
support consumers in Chile and Latin America as a whole.
|
2502.00869
|
STAF: Sinusoidal Trainable Activation Functions for Implicit Neural
Representation
|
cs.CV
|
Implicit Neural Representations (INRs) have emerged as a powerful framework
for modeling continuous signals. The spectral bias of ReLU-based networks is a
well-established limitation, restricting their ability to capture fine-grained
details in target signals. While previous works have attempted to mitigate this
issue through frequency-based encodings or architectural modifications, these
approaches often introduce additional complexity and do not fully address the
underlying challenge of learning high-frequency components efficiently. We
introduce Sinusoidal Trainable Activation Functions (STAF), designed to
directly tackle this limitation by enabling networks to adaptively learn and
represent complex signals with higher precision and efficiency. STAF inherently
modulates its frequency components, allowing for self-adaptive spectral
learning. This capability significantly improves convergence speed and
expressivity, making STAF highly effective for both signal representations and
inverse problems. Through extensive evaluations, we demonstrate that STAF
outperforms state-of-the-art (SOTA) methods in accuracy and reconstruction
fidelity with superior Peak Signal-to-Noise Ratio (PSNR). These results
establish STAF as a robust solution for overcoming spectral bias and the
capacity-convergence gap, making it valuable for computer graphics and related
fields. Our codebase is publicly accessible on the
https://github.com/AlirezaMorsali/STAF.
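A trainable sinusoidal activation in the spirit of STAF can be sketched as f(x) = Σᵢ aᵢ sin(ωᵢ x + φᵢ), with amplitudes, frequencies, and phases as learnable parameters. This numpy sketch uses our own parameter names and initialization, not the authors' implementation, and omits the backprop that would actually train a, ω, φ.

```python
import numpy as np

class SinusoidalActivation:
    """Trainable activation f(x) = sum_i a_i * sin(w_i * x + phi_i).

    Illustrative sketch in the spirit of STAF: the amplitudes a,
    frequencies w, and phases phi would be learned by backprop in a
    real INR. Names and initialization are our own assumptions.
    """
    def __init__(self, num_terms=4, seed=0):
        rng = np.random.default_rng(seed)
        self.a = rng.normal(size=num_terms)            # amplitudes
        self.w = rng.uniform(1.0, 30.0, num_terms)     # frequencies
        self.phi = rng.uniform(0.0, 2 * np.pi, num_terms)  # phases

    def __call__(self, x):
        x = np.asarray(x, dtype=float)[..., None]      # broadcast over terms
        return np.sum(self.a * np.sin(self.w * x + self.phi), axis=-1)

act = SinusoidalActivation()
y = act(np.linspace(-1.0, 1.0, 5))
assert y.shape == (5,) and np.isfinite(y).all()
```

Because the frequencies ωᵢ are themselves trainable, the network can shift spectral capacity toward the high-frequency content of the target signal, which is the mechanism behind the self-adaptive spectral learning described above.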
|
2502.00870
|
FedHPD: Heterogeneous Federated Reinforcement Learning via Policy
Distillation
|
cs.LG cs.AI cs.MA
|
Federated Reinforcement Learning (FedRL) improves sample efficiency while
preserving privacy; however, most existing studies assume homogeneous agents,
limiting its applicability in real-world scenarios. This paper investigates
FedRL in black-box settings with heterogeneous agents, where each agent employs
distinct policy networks and training configurations without disclosing their
internal details. Knowledge Distillation (KD) is a promising method for
facilitating knowledge sharing among heterogeneous models, but it faces
challenges related to the scarcity of public datasets and limitations in
knowledge representation when applied to FedRL. To address these challenges, we
propose Federated Heterogeneous Policy Distillation (FedHPD), which solves the
problem of heterogeneous FedRL by utilizing action probability distributions as
a medium for knowledge sharing. We provide a theoretical analysis of FedHPD's
convergence under standard assumptions. Extensive experiments corroborate that
FedHPD shows significant improvements across various reinforcement learning
benchmark tasks, further validating our theoretical findings. Moreover,
additional experiments demonstrate that FedHPD operates effectively without the
need for an elaborate selection of public datasets.
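Using action probability distributions as the distillation medium can be sketched as follows (a toy under our own assumptions: the uniform averaging of teacher distributions and the interpolation update are illustrative choices, not necessarily FedHPD's exact aggregation rule):

```python
import numpy as np

# Sketch of black-box knowledge sharing via action distributions:
# heterogeneous agents expose only their policies' action probabilities
# on shared states, and a student moves toward the averaged teacher
# distribution, reducing a KL divergence. Aggregation and update rules
# here are our illustrative choices.
def kl(p, q, eps=1e-12):
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Action distributions from two heterogeneous agents on one state.
agent_a = np.array([0.70, 0.20, 0.10])
agent_b = np.array([0.50, 0.30, 0.20])
consensus = (agent_a + agent_b) / 2.0      # shared distillation target

student = np.array([0.34, 0.33, 0.33])
before = kl(consensus, student)
student = 0.5 * student + 0.5 * consensus  # one crude distillation step
after = kl(consensus, student)
assert after < before                      # student moved toward consensus
```

Note that nothing about the agents' network architectures or training configurations is exchanged, which is what makes the medium suitable for the black-box, heterogeneous setting.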
|
2502.00871
|
Modified Adaptive Tree-Structured Parzen Estimator for Hyperparameter
Optimization
|
cs.LG
|
In this paper, we review hyperparameter optimization methods for machine
learning models, with a particular focus on the Adaptive Tree-Structured Parzen
Estimator (ATPE) algorithm. We propose several modifications to ATPE and assess
their efficacy on a diverse set of standard benchmark functions. Experimental
results demonstrate that the proposed modifications significantly improve the
effectiveness of ATPE hyperparameter optimization on selected benchmarks, a
finding that holds practical relevance for their application in real-world
machine learning / optimization tasks.
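For context, one step of the base TPE algorithm that ATPE modifies can be sketched like this (a simplified toy: Gaussian KDEs, the gamma = 0.25 split, and the 1-D objective are standard-flavored but illustrative choices):

```python
import numpy as np

# Minimal sketch of one Tree-structured Parzen Estimator (TPE) step:
# split past trials into "good" and "bad" by a loss quantile, fit a
# density to each group, and propose the candidate maximizing the
# good/bad density ratio. Objective and constants are toys.
rng = np.random.default_rng(0)

def objective(x):
    return (x - 2.0) ** 2            # unknown optimum at x = 2

xs = rng.uniform(-5.0, 5.0, 50)      # past hyperparameter trials
losses = objective(xs)
cut = np.quantile(losses, 0.25)      # gamma split
good, bad = xs[losses <= cut], xs[losses > cut]

def kde(points, x, bw=0.5):          # simple Gaussian kernel density
    return np.mean(np.exp(-0.5 * ((x - points[:, None]) / bw) ** 2), axis=0)

cand = rng.uniform(-5.0, 5.0, 200)
proposal = cand[np.argmax(kde(good, cand) / (kde(bad, cand) + 1e-12))]
assert abs(proposal - 2.0) < 2.0     # proposal lands near the optimum
```

ATPE's modifications act on top of this loop (e.g., how the split and densities adapt), which is why improvements there transfer directly to practical hyperparameter searches.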
|
2502.00873
|
Language Models Use Trigonometry to Do Addition
|
cs.AI cs.CL cs.LG
|
Mathematical reasoning is an increasingly important indicator of large
language model (LLM) capabilities, yet we lack understanding of how LLMs
process even simple mathematical tasks. To address this, we reverse engineer
how three mid-sized LLMs compute addition. We first discover that numbers are
represented in these LLMs as a generalized helix, which is strongly causally
implicated for the tasks of addition and subtraction, and is also causally
relevant for integer division, multiplication, and modular arithmetic. We then
propose that LLMs compute addition by manipulating this generalized helix using
the "Clock" algorithm: to solve $a+b$, the helices for $a$ and $b$ are
manipulated to produce the $a+b$ answer helix which is then read out to model
logits. We model influential MLP outputs, attention head outputs, and even
individual neuron preactivations with these helices and verify our
understanding with causal interventions. By demonstrating that LLMs represent
numbers on a helix and manipulate this helix to perform addition, we present
the first representation-level explanation of an LLM's mathematical capability.
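The "Clock" mechanism described above can be reproduced as a toy (our own sketch of the idea, not the authors' probe of LLM activations): encode an integer a as the angles 2πa/T on several periods T, add two encodings via the angle-addition identities, and decode the nearest integer.

```python
import numpy as np

# Toy "Clock" algorithm on a generalized helix: integers live as
# angles on several periods; addition is rotation composition.
# Periods and decoding are illustrative choices.
PERIODS = [2, 5, 10, 100]

def encode(a):
    # One (cos, sin) pair per period -> a point on the helix.
    return np.array([[np.cos(2 * np.pi * a / T),
                      np.sin(2 * np.pi * a / T)] for T in PERIODS])

def add(ra, rb):
    # Angle-addition identities applied per period:
    # cos(x+y) = cos x cos y - sin x sin y
    # sin(x+y) = sin x cos y + cos x sin y
    cos = ra[:, 0] * rb[:, 0] - ra[:, 1] * rb[:, 1]
    sin = ra[:, 1] * rb[:, 0] + ra[:, 0] * rb[:, 1]
    return np.stack([cos, sin], axis=1)

def decode(r, candidates=range(100)):
    # Read out the integer whose encoding is closest to r.
    return min(candidates, key=lambda c: float(np.sum((encode(c) - r) ** 2)))

assert decode(add(encode(17), encode(25))) == 42
```

The point of the multiple periods is redundancy: each period tracks the sum modulo T, and together they pin down the answer, mirroring how the paper's helices combine to produce the a+b logit.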
|
2502.00874
|
Paper Copilot: The Artificial Intelligence and Machine Learning
Community Should Adopt a More Transparent and Regulated Peer Review Process
|
cs.DL cs.AI cs.CV cs.CY
|
The rapid growth of submissions to top-tier Artificial Intelligence (AI) and
Machine Learning (ML) conferences has prompted many venues to transition from
closed to open review platforms. Some have fully embraced open peer reviews,
allowing public visibility throughout the process, while others adopt hybrid
approaches, such as releasing reviews only after final decisions or keeping
reviews private despite using open peer review systems. In this work, we
analyze the strengths and limitations of these models, highlighting the growing
community interest in transparent peer review. To support this discussion, we
examine insights from Paper Copilot, a website launched two years ago to
aggregate and analyze AI / ML conference data while engaging a global audience.
The site has attracted over 200,000 early-career researchers, particularly
those aged 18-34 from 177 countries, many of whom are actively engaged in the
peer review process. Drawing on our findings, this position paper advocates for
a more transparent, open, and well-regulated peer review process, aiming to foster
greater community involvement and propel advancements in the field.
|
2502.00879
|
Towards Automation of Cognitive Modeling using Large Language Models
|
cs.LG
|
Computational cognitive models, which formalize theories of cognition, enable
researchers to quantify cognitive processes and arbitrate between competing
theories by fitting models to behavioral data. Traditionally, these models are
handcrafted, which requires significant domain knowledge, coding expertise, and
time investment. Previous work has demonstrated that Large Language Models
(LLMs) are adept at pattern recognition in-context, solving complex problems,
and generating executable code. In this work, we leverage these abilities to
explore the potential of LLMs in automating the generation of cognitive models
based on behavioral data. We evaluated the LLM in two different tasks: model
identification (relating data to a source model), and model generation
(generating the underlying cognitive model). We performed these tasks across
two cognitive domains - decision making and learning. In the case of data
simulated from canonical cognitive models, we found that the LLM successfully
identified and generated the ground truth model. In the case of human data,
where behavioral noise and lack of knowledge of the true underlying process
pose significant challenges, the LLM generated models that are identical or
close to the winning model from cognitive science literature. Our findings
suggest that LLMs can have a transformative impact on cognitive modeling. With
this project, we aim to contribute to an ongoing effort of automating
scientific discovery in cognitive science.
|
2502.00882
|
Worth Their Weight: Randomized and Regularized Block Kaczmarz Algorithms
without Preprocessing
|
cs.LG cs.NA math.NA math.OC stat.ML
|
Due to the ever-growing amounts of data leveraged for machine learning and
scientific computing, it is increasingly important to develop algorithms that
sample only a small portion of the data at a time. In the case of linear
least-squares, the randomized block Kaczmarz method (RBK) is an appealing
example of such an algorithm, but its convergence is only understood under
sampling distributions that require potentially prohibitively expensive
preprocessing steps. To address this limitation, we analyze RBK when the data
is sampled uniformly, showing that its iterates converge in a Monte Carlo sense
to a $\textit{weighted}$ least-squares solution. Unfortunately, for general
problems the condition number of the weight matrix and the variance of the
iterates can become arbitrarily large. We resolve these issues by incorporating
regularization into the RBK iterations. Numerical experiments, including
examples arising from natural gradient optimization, suggest that the
regularized algorithm, ReBlocK, outperforms minibatch stochastic gradient
descent for realistic problems that exhibit fast singular value decay.
|
2502.00883
|
SimPER: A Minimalist Approach to Preference Alignment without
Hyperparameters
|
cs.LG cs.CL
|
Existing preference optimization objectives for language model alignment
require additional hyperparameters that must be extensively tuned to achieve
optimal performance, increasing both the complexity and time required for
fine-tuning large language models. In this paper, we propose a simple yet
effective hyperparameter-free preference optimization algorithm for alignment.
We observe that promising performance can be achieved simply by optimizing
inverse perplexity, which is calculated as the inverse of the exponentiated
average log-likelihood of the chosen and rejected responses in the preference
dataset. The resulting simple learning objective, SimPER, is easy to implement
and eliminates the need for expensive hyperparameter tuning and a reference
model, making it both computationally and memory efficient. Extensive
experiments on widely used real-world benchmarks, including MT-Bench,
AlpacaEval 2, and 10 key benchmarks of the Open LLM Leaderboard with 5 base
models, demonstrate that SimPER consistently and significantly outperforms
existing approaches-even without any hyperparameters or a reference model. For
example, despite its simplicity, SimPER outperforms state-of-the-art methods by
up to 5.7 points on AlpacaEval 2 and achieves the highest average ranking
across 10 benchmarks on the Open LLM Leaderboard. The source code for SimPER is
publicly available at: https://github.com/tengxiao1/SimPER.
|
2502.00885
|
Algorithmic Stability of Stochastic Gradient Descent with Momentum under
Heavy-Tailed Noise
|
stat.ML cs.LG math.OC math.PR
|
Understanding the generalization properties of optimization algorithms under
heavy-tailed noise has gained growing attention. However, the existing
theoretical results mainly focus on stochastic gradient descent (SGD) and the
analysis of heavy-tailed optimizers beyond SGD is still missing. In this work,
we establish generalization bounds for SGD with momentum (SGDm) under
heavy-tailed gradient noise. We first consider the continuous-time limit of
SGDm, i.e., a Levy-driven stochastic differential equation (SDE), and establish
quantitative Wasserstein algorithmic stability bounds for a class of
potentially non-convex loss functions. Our bounds reveal a remarkable
observation: For quadratic loss functions, we show that SGDm admits a worse
generalization bound in the presence of heavy-tailed noise, indicating that the
interaction of momentum and heavy tails can be harmful for generalization. We
then extend our analysis to discrete-time and develop a uniform-in-time
discretization error bound, which, to our knowledge, is the first result of its
kind for SDEs with degenerate noise. This result shows that, with appropriately
chosen step-sizes, the discrete dynamics retain the generalization properties
of the limiting SDE. We illustrate our theory on both synthetic quadratic
problems and neural networks.
|
2502.00893
|
ToddlerBot: Open-Source ML-Compatible Humanoid Platform for
Loco-Manipulation
|
cs.RO
|
Learning-based robotics research driven by data demands a new approach to
robot hardware design-one that serves as both a platform for policy execution
and a tool for embodied data collection to train policies. We introduce
ToddlerBot, a low-cost, open-source humanoid robot platform designed for
scalable policy learning and research in robotics and AI. ToddlerBot enables
seamless acquisition of high-quality simulation and real-world data. The
plug-and-play zero-point calibration and transferable motor system
identification ensure a high-fidelity digital twin, enabling zero-shot policy
transfer from simulation to the real world. A user-friendly teleoperation
interface facilitates streamlined real-world data collection for learning motor
skills from human demonstrations. Utilizing its data collection ability and
anthropomorphic design, ToddlerBot is an ideal platform to perform whole-body
loco-manipulation. Additionally, ToddlerBot's compact size (0.56m, 3.4kg)
ensures safe operation in real-world environments. Reproducibility is achieved
with an entirely 3D-printed, open-source design and commercially available
components, keeping the total cost under 6,000 USD. Comprehensive documentation
allows assembly and maintenance with basic technical expertise, as validated by
a successful independent replication of the system. We demonstrate ToddlerBot's
capabilities through arm span, payload, endurance tests, loco-manipulation
tasks, and a collaborative long-horizon scenario where two robots tidy a toy
session together. By advancing ML-compatibility, capability, and
reproducibility, ToddlerBot provides a robust platform for scalable learning
and dynamic policy execution in robotics research.
|
2502.00894
|
MorphBPE: A Morpho-Aware Tokenizer Bridging Linguistic Complexity for
Efficient LLM Training Across Morphologies
|
cs.CL cs.AI
|
Tokenization is fundamental to Natural Language Processing (NLP), directly
impacting model efficiency and linguistic fidelity. While Byte Pair Encoding
(BPE) is widely used in Large Language Models (LLMs), it often disregards
morpheme boundaries, leading to suboptimal segmentation, particularly in
morphologically rich languages. We introduce MorphBPE, a morphology-aware
extension of BPE that integrates linguistic structure into subword tokenization
while preserving statistical efficiency. Additionally, we propose two
morphology-based evaluation metrics: (i) Morphological Consistency F1-Score,
which quantifies the consistency between morpheme sharing and token sharing,
contributing to LLM training convergence, and (ii) Morphological Edit Distance,
which measures alignment between morphemes and tokens concerning
interpretability. Experiments on English, Russian, Hungarian, and Arabic across
300M and 1B parameter LLMs demonstrate that MorphBPE consistently reduces
cross-entropy loss, accelerates convergence, and improves morphological
alignment scores. Fully compatible with existing LLM pipelines, MorphBPE
requires minimal modifications for integration. The MorphBPE codebase and
tokenizer playground will be available at:
https://github.com/llm-lab-org/MorphBPE and https://tokenizer.llm-lab.org
|
2502.00896
|
LoR-VP: Low-Rank Visual Prompting for Efficient Vision Model Adaptation
|
cs.CV
|
Visual prompting has gained popularity as a method for adapting pre-trained
models to specific tasks, particularly in the realm of parameter-efficient
tuning. However, existing visual prompting techniques often pad the prompt
parameters around the image, limiting the interaction between the visual
prompts and the original image to a small set of patches while neglecting the
inductive bias present in shared information across different patches. In this
study, we conduct a thorough preliminary investigation to identify and address
these limitations. We propose a novel visual prompt design, introducing
Low-Rank matrix multiplication for Visual Prompting (LoR-VP), which enables
shared and patch-specific information across rows and columns of image pixels.
Extensive experiments across seven network architectures and four datasets
demonstrate significant improvements in both performance and efficiency
compared to state-of-the-art visual prompting methods, achieving up to 6 times
faster training times, utilizing 18 times fewer visual prompt parameters, and
delivering a 3.1% improvement in performance. The code is available at
https://github.com/jincan333/LoR-VP.
|
2502.00897
|
Multi-frequency wavefield solutions for variable velocity models using
meta-learning enhanced low-rank physics-informed neural network
|
cs.LG physics.geo-ph
|
Physics-informed neural networks (PINNs) face significant challenges in
modeling multi-frequency wavefields in complex velocity models due to their
slow convergence, difficulty in representing high-frequency details, and lack
of generalization to varying frequencies and velocity scenarios. To address
these issues, we propose Meta-LRPINN, a novel framework that combines low-rank
parameterization using singular value decomposition (SVD) with meta-learning
and frequency embedding. Specifically, we decompose the weights of PINN's
hidden layers using SVD and introduce an innovative frequency embedding
hypernetwork (FEH) that links input frequencies with the singular values,
enabling efficient and frequency-adaptive wavefield representation.
Meta-learning is employed to provide robust initialization, improving
optimization stability and reducing training time. Additionally, we implement
adaptive rank reduction and FEH pruning during the meta-testing phase to
further enhance efficiency. Numerical experiments, which are presented on
multi-frequency scattered wavefields for different velocity models, demonstrate
that Meta-LRPINN achieves much fast convergence speed and much high accuracy
compared to baseline methods such as Meta-PINN and vanilla PINN. Also, the
proposed framework shows strong generalization to out-of-distribution
frequencies while maintaining computational efficiency. These results highlight
the potential of our Meta-LRPINN for scalable and adaptable seismic wavefield
modeling.
|
2502.00899
|
HASSLE-free: A unified Framework for Sparse plus Low-Rank Matrix
Decomposition for LLMs
|
stat.ML cs.LG
|
The impressive capabilities of large foundation models come at a cost of
substantial computing resources to serve them. Compressing these pre-trained
models is of practical interest as it can democratize deploying them to the
machine learning community at large by lowering the costs associated with
inference. A promising compression scheme is to decompose foundation models'
dense weights into a sum of sparse plus low-rank matrices. In this paper, we
design a unified framework coined HASSLE-free for (semi-structured) sparse plus
low-rank matrix decomposition of foundation models. Our framework introduces
the local layer-wise reconstruction error objective for this decomposition; we
demonstrate that prior work solves a relaxation of this optimization problem,
and we provide efficient and scalable methods to minimize the exact introduced
optimization problem. HASSLE-free substantially outperforms state-of-the-art
methods in terms of the introduced objective and a wide range of LLM evaluation
benchmarks. For the Llama3-8B model with a 2:4 sparsity component plus a
64-rank component decomposition, a compression scheme for which recent work
shows important inference acceleration on GPUs, HASSLE-free reduces the test
perplexity by 12% for the WikiText-2 dataset and reduces the gap (compared to
the dense model) of the average of eight popular zero-shot tasks by 15%
compared to existing methods.
|