| id | title | categories | abstract |
|---|---|---|---|
| 2502.00299 | ChunkKV: Semantic-Preserving KV Cache Compression for Efficient Long-Context LLM Inference | cs.CL | To reduce memory costs in long-context inference with Large Language Models (LLMs), many recent works focus on compressing the key-value (KV) cache of different tokens. However, we identify that previous KV cache compression methods measure token importance individually, neglecting the dependencies between tokens that characterize real-world language. In light of this, we introduce ChunkKV, which groups the tokens in a chunk as a basic compression unit, retaining the most informative semantic chunks while discarding the less important ones. Furthermore, observing that ChunkKV exhibits higher similarity in the preserved indices across different layers, we propose layer-wise index reuse to further reduce computational overhead. We evaluated ChunkKV on cutting-edge long-context benchmarks including LongBench and Needle-In-A-HayStack, as well as the GSM8K and JailbreakV in-context learning benchmarks. Our experiments with instruction-tuned and multi-step reasoning (O1 and R1) LLMs achieve up to 10% performance improvement under aggressive compression ratios compared to existing methods. |
| 2502.00300 | Uncertainty Quantification of Wind Gust Predictions in the Northeast US: An Evidential Neural Network and Explainable Artificial Intelligence Approach | cs.LG physics.ao-ph stat.ML | Machine learning has shown promise in reducing bias in numerical weather model predictions of wind gusts. Yet, such models underperform in predicting high gusts, even with additional observations, due to the right-skewed distribution of gusts. Uncertainty quantification (UQ) addresses this by identifying when predictions are reliable or need cautious interpretation. Using data from 61 extratropical storms in the Northeastern USA, we introduce the evidential neural network (ENN) as a novel approach for UQ in gust predictions, leveraging atmospheric variables from the Weather Research and Forecasting (WRF) model as features and gust observations as targets. Explainable artificial intelligence (XAI) techniques demonstrated that key predictive features also contributed to higher uncertainty. Estimated uncertainty correlated with storm intensity and spatial gust gradients. The ENN allowed constructing gust prediction intervals without requiring an ensemble. From an operational perspective, providing gust forecasts with quantified uncertainty enhances stakeholders' confidence in risk assessment and response planning for extreme gust events. |
| 2502.00301 | Contextual Morphogenesis in Large Language Models: A Novel Approach to Self-Organizing Token Representations | cs.CL | Token representations influence the efficiency and adaptability of language models, yet conventional tokenization strategies impose rigid segmentation boundaries that do not adjust dynamically to evolving contextual relationships. The introduction of contextual morphogenesis establishes a self-organizing mechanism that restructures token boundaries based on learned contextual dependencies, allowing embeddings to evolve progressively across iterative processing steps. Empirical evaluations demonstrate that dynamically adjusted tokenization contributes to reductions in perplexity while maintaining representational stability, particularly in linguistically complex domains where static segmentation fails to capture nuanced dependencies. Computational trade-offs associated with self-organizing token structures indicate that additional processing overhead remains within feasible limits, provided that optimization strategies account for segmentation update efficiency. Comparative assessments across different linguistic corpora suggest that adaptive tokenization preserves interpretability while improving alignment with contextual cues, reinforcing the potential of morphogenetic segmentation mechanisms to refine predictive accuracy. Stability analyses confirm that evolving token structures maintain consistent segmentation behaviors across varied text distributions, ensuring that representational adaptations remain linguistically coherent. The effectiveness of contextual morphogenesis in refining structural stability and predictive performance highlights its viability as an alternative to traditional tokenization methods. Further analysis of computational efficiency considerations suggests that hybrid strategies integrating both static and dynamic segmentation techniques may offer a balanced approach to optimizing representational flexibility while maintaining inference efficiency. |
| 2502.00302 | Learning to Fuse Temporal Proximity Networks: A Case Study in Chimpanzee Social Interactions | stat.ML cs.AI cs.LG math.OC math.ST stat.TH | How can we identify groups of primate individuals which could be conjectured to drive social structure? To address this question, one of us has collected a time series of data for social interactions between chimpanzees. Here we use a network representation, leading to the task of combining these data into a time series of a single weighted network per time stamp, where different proximities should be given different weights reflecting their relative importance. We optimize these proximity-type weights in a principled way, using an innovative loss function which rewards structural consistency across time. The approach is empirically validated by carefully designed synthetic data. Using statistical tests, we provide a way of identifying groups of individuals that stay related for a significant length of time. Applying the approach to the chimpanzee data set, we detect cliques in the animal social network time series, which can be validated by real-world intuition from prior research and qualitative observations by chimpanzee experts. |
| 2502.00304 | HoP: Homeomorphic Polar Learning for Hard Constrained Optimization | cs.LG cs.AI math.OC | Constrained optimization demands highly efficient solvers, which has promoted the development of learn-to-optimize (L2O) approaches. As a data-driven method, L2O leverages neural networks to efficiently produce approximate solutions. However, a significant challenge remains in ensuring both the optimality and feasibility of neural networks' output. To tackle this issue, we introduce Homeomorphic Polar Learning (HoP) to solve star-convex hard-constrained optimization by embedding a homeomorphic mapping in neural networks. The bijective structure enables end-to-end training without extra penalty or correction. We evaluate HoP across a variety of synthetic optimization tasks and real-world applications in wireless communications. In all cases, HoP achieves solutions closer to the optimum than existing L2O methods while strictly maintaining feasibility. |
| 2502.00305 | DEUCE: Dual-diversity Enhancement and Uncertainty-awareness for Cold-start Active Learning | cs.CL cs.AI cs.IR | Cold-start active learning (CSAL) selects valuable instances from an unlabeled dataset for manual annotation. It provides high-quality data at a low annotation cost for label-scarce text classification. However, existing CSAL methods overlook weak classes and hard representative examples, resulting in biased learning. To address these issues, this paper proposes a novel dual-diversity enhancing and uncertainty-aware (DEUCE) framework for CSAL. Specifically, DEUCE leverages a pretrained language model (PLM) to efficiently extract textual representations, class predictions, and predictive uncertainty. Then, it constructs a Dual-Neighbor Graph (DNG) to combine information on both textual diversity and class diversity, ensuring a balanced data distribution. It further propagates uncertainty information via density-based clustering to select hard representative instances. DEUCE performs well in selecting class-balanced and hard representative data via dual diversity and informativeness. Experiments on six NLP datasets demonstrate the superiority and efficiency of DEUCE. |
| 2502.00306 | Riddle Me This! Stealthy Membership Inference for Retrieval-Augmented Generation | cs.CR cs.AI cs.CL cs.IR cs.LG | Retrieval-Augmented Generation (RAG) enables Large Language Models (LLMs) to generate grounded responses by leveraging external knowledge databases without altering model parameters. Although the absence of weight tuning prevents leakage via model parameters, it introduces the risk of inference adversaries exploiting retrieved documents in the model's context. Existing methods for membership inference and data extraction often rely on jailbreaking or carefully crafted unnatural queries, which can be easily detected or thwarted with query rewriting techniques common in RAG systems. In this work, we present the Interrogation Attack (IA), a membership inference technique targeting documents in the RAG datastore. By crafting natural-text queries that are answerable only with the target document's presence, our approach demonstrates successful inference with just 30 queries while remaining stealthy; straightforward detectors identify adversarial prompts from existing methods up to ~76x more frequently than those generated by our attack. We observe a 2x improvement in TPR@1%FPR over prior inference attacks across diverse RAG configurations, all while costing less than $0.02 per document inference. |
| 2502.00307 | A Diffusion Model Translator for Efficient Image-to-Image Translation | cs.CV | Applying diffusion models to image-to-image translation (I2I) has recently received increasing attention due to its practical applications. Previous attempts inject information from the source image into each denoising step for an iterative refinement, thus resulting in a time-consuming implementation. We propose an efficient method that equips a diffusion model with a lightweight translator, dubbed a Diffusion Model Translator (DMT), to accomplish I2I. Specifically, we first offer theoretical justification that, in employing the pioneering DDPM work for the I2I task, it is both feasible and sufficient to transfer the distribution from one domain to another only at some intermediate step. We further observe that the translation performance highly depends on the chosen timestep for domain transfer, and therefore propose a practical strategy to automatically select an appropriate timestep for a given task. We evaluate our approach on a range of I2I applications, including image stylization, image colorization, segmentation to image, and sketch to image, to validate its efficacy and general utility. The comparisons show that our DMT surpasses existing methods in both quality and efficiency. Code will be made publicly available. |
| 2502.00309 | Decentralized Inference for Spatial Data Using Low-Rank Models | stat.ML cs.LG stat.CO stat.ME | Advancements in information technology have enabled the creation of massive spatial datasets, driving the need for scalable and efficient computational methodologies. While offering viable solutions, centralized frameworks are limited by vulnerabilities such as single-point failures and communication bottlenecks. This paper presents a decentralized framework tailored for parameter inference in spatial low-rank models to address these challenges. A key obstacle arises from the spatial dependence among observations, which prevents the log-likelihood from being expressed as a summation, a critical requirement for decentralized optimization approaches. To overcome this challenge, we propose a novel objective function leveraging the evidence lower bound, which facilitates the use of decentralized optimization techniques. Our approach employs a block descent method integrated with multi-consensus and dynamic consensus averaging for effective parameter optimization. We prove the convexity of the new objective function in the vicinity of the true parameters, ensuring the convergence of the proposed method. Additionally, we present the first theoretical results establishing the consistency and asymptotic normality of the estimator within the context of spatial low-rank models. Extensive simulations and real-world data experiments corroborate these theoretical findings, showcasing the robustness and scalability of the framework. |
| 2502.00310 | SigWavNet: Learning Multiresolution Signal Wavelet Network for Speech Emotion Recognition | cs.SD cs.AI cs.CL eess.AS | In the field of human-computer interaction and psychological assessment, speech emotion recognition (SER) plays an important role in deciphering emotional states from speech signals. Despite advancements, challenges persist due to system complexity, feature distinctiveness issues, and noise interference. This paper introduces a new end-to-end (E2E) deep learning multi-resolution framework for SER, addressing these limitations by extracting meaningful representations directly from raw waveform speech signals. By leveraging the properties of the fast discrete wavelet transform (FDWT), including the cascade algorithm, conjugate quadrature filter, and coefficient denoising, our approach introduces a learnable model for both wavelet bases and denoising through deep learning techniques. The framework incorporates an activation function for learnable asymmetric hard thresholding of wavelet coefficients. Our approach exploits the capabilities of wavelets for effective localization in both the time and frequency domains. We then combine one-dimensional dilated convolutional neural networks (1D dilated CNN) with a spatial attention layer and bidirectional gated recurrent units (Bi-GRU) with a temporal attention layer to efficiently capture the nuanced spatial and temporal characteristics of emotional features. By handling variable-length speech without segmentation and eliminating the need for pre- or post-processing, the proposed model outperformed state-of-the-art methods on the IEMOCAP and EMO-DB datasets. The source code of this paper is shared in the GitHub repository: https://github.com/alaaNfissi/SigWavNet-Learning-Multiresolution-Signal-Wavelet-Network-for-Speech-Emotion-Recognition. |
| 2502.00311 | Sparse Gradient Compression for Fine-Tuning Large Language Models | cs.LG | Fine-tuning large language models (LLMs) for downstream tasks has become increasingly crucial due to their widespread use and the growing availability of open-source models. However, the high memory costs associated with fine-tuning remain a significant challenge, especially as models increase in size. To address this, parameter-efficient fine-tuning (PEFT) methods have been proposed to minimize the number of parameters required for fine-tuning LLMs. However, these approaches often tie the number of optimizer states to the dimensions of the model parameters, limiting flexibility and control during fine-tuning. In this paper, we propose sparse gradient compression (SGC), a training regime designed to address these limitations. Our approach leverages inherent sparsity in gradients to compress optimizer states by projecting them onto a low-dimensional subspace, with dimensionality independent of the original model's parameters. By enabling optimizer state updates in an arbitrary low-dimensional subspace, SGC offers a flexible tradeoff between memory efficiency and performance. We demonstrate through experiments that SGC can decrease memory usage in optimizer states more effectively than existing PEFT methods. Furthermore, by fine-tuning LLMs on various downstream tasks, we show that SGC can deliver superior performance while substantially lowering optimizer state memory requirements, particularly in both data-limited and memory-limited settings. |
| 2502.00313 | Distributive Fairness in Large Language Models: Evaluating Alignment with Human Values | cs.GT cs.AI cs.CL cs.MA | The growing interest in employing large language models (LLMs) for decision-making in social and economic contexts has raised questions about their potential to function as agents in these domains. A significant number of societal problems involve the distribution of resources, where fairness, along with economic efficiency, plays a critical role in the desirability of outcomes. In this paper, we examine whether LLM responses adhere to fundamental fairness concepts such as equitability, envy-freeness, and Rawlsian maximin, and investigate their alignment with human preferences. We evaluate the performance of several LLMs, providing a comparative benchmark of their ability to reflect these measures. Our results demonstrate a lack of alignment between current LLM responses and human distributional preferences. Moreover, LLMs are unable to utilize money as a transferable resource to mitigate inequality. Nonetheless, we demonstrate a stark contrast when (some) LLMs are tasked with selecting from a predefined menu of options rather than generating one. In addition, we analyze the robustness of LLM responses to variations in semantic factors (e.g., intentions or personas) or non-semantic prompting changes (e.g., templates or orderings). Finally, we highlight potential strategies aimed at enhancing the alignment of LLM behavior with well-established fairness concepts. |
| 2502.00314 | A Study on the Performance of U-Net Modifications in Retroperitoneal Tumor Segmentation | eess.IV cs.CV | The retroperitoneum hosts a variety of tumors, including rare benign and malignant types, which pose diagnostic and treatment challenges due to their infrequency and proximity to vital structures. Estimating tumor volume is difficult due to their irregular shapes, and manual segmentation is time-consuming. Automatic segmentation using U-Net and its variants, incorporating Vision Transformer (ViT) elements, has shown promising results but struggles with high computational demands. To address this, architectures like the Mamba State Space Model (SSM) and Extended Long-Short Term Memory (xLSTM) offer efficient solutions by handling long-range dependencies with lower resource consumption. This study evaluates U-Net enhancements, including CNN, ViT, Mamba, and xLSTM, on a new in-house CT dataset and a public organ segmentation dataset. The proposed ViLU-Net model integrates Vi-blocks for improved segmentation. Results highlight xLSTM's efficiency in the U-Net framework. The code is publicly accessible on GitHub. |
| 2502.00315 | MonoDINO-DETR: Depth-Enhanced Monocular 3D Object Detection Using a Vision Foundation Model | cs.CV | This paper proposes novel methods to enhance the performance of monocular 3D object detection models by leveraging the generalized feature extraction capabilities of a vision foundation model. Unlike traditional CNN-based approaches, which often suffer from inaccurate depth estimation and rely on multi-stage object detection pipelines, this study employs a Vision Transformer (ViT)-based foundation model as the backbone, which excels at capturing global features for depth estimation. It integrates a detection transformer (DETR) architecture to improve both depth estimation and object detection performance in a one-stage manner. Specifically, a hierarchical feature fusion block is introduced to extract richer visual features from the foundation model, further enhancing feature extraction capabilities. Depth estimation accuracy is further improved by incorporating a relative depth estimation model trained on large-scale data and fine-tuning it through transfer learning. Additionally, the use of queries in the transformer's decoder, which consider reference points and the dimensions of 2D bounding boxes, enhances recognition performance. The proposed model outperforms recent state-of-the-art methods, as demonstrated through quantitative and qualitative evaluations on the KITTI 3D benchmark and a custom dataset collected from high-elevation racing environments. Code is available at https://github.com/JihyeokKim/MonoDINO-DETR. |
| 2502.00317 | DIST: Efficient k-Clique Listing via Induced Subgraph Trie | cs.DB | Listing k-cliques plays a fundamental role in various data mining tasks, such as community detection and mining of cohesive substructures. Existing algorithms for the k-clique listing problem are built upon a general framework, which finds k-cliques by recursively finding (k-1)-cliques within subgraphs induced by the out-neighbors of each vertex. However, this framework has an inherent inefficiency: it repeatedly finds smaller cliques within certain subgraphs. In this paper, we propose an algorithm, DIST, for the k-clique listing problem. In contrast to existing works, the main idea in our approach is to compute each clique in the given graph only once and store it in a data structure called the Induced Subgraph Trie, which allows us to retrieve the cliques efficiently. Furthermore, we propose a method to prune the search space based on a novel concept called the soft embedding of an l-tree, which further improves the running time. We show the superiority of our approach in terms of time and space usage through comprehensive experiments conducted on real-world networks; DIST outperforms the state-of-the-art algorithm by up to two orders of magnitude in both single-threaded and parallel experiments. |
| 2502.00318 | Sub-Sequential Physics-Informed Learning with State Space Model | cs.LG cs.NA math.NA | Physics-Informed Neural Networks (PINNs) are a class of deep-learning-based numerical solvers for partial differential equations (PDEs). Existing PINNs often suffer from failure modes in which they are unable to propagate the patterns of initial conditions. We discover that these failure modes are caused by the simplicity bias of neural networks and the mismatch between the PDE's continuity and the PINN's discrete sampling. We reveal that the State Space Model (SSM) can be a continuous-discrete articulation allowing initial condition propagation, and that simplicity bias can be eliminated by aligning a sequence of moderate granularity. Accordingly, we propose PINNMamba, a novel framework that introduces sub-sequence modeling with SSM. Experimental results show that PINNMamba can reduce errors by up to 86.3% compared with state-of-the-art architectures. Our code is available at https://github.com/miniHuiHui/PINNMamba. |
| 2502.00319 | Physics-Inspired Distributed Radio Map Estimation | cs.LG cs.DC eess.SP | To gain panoramic awareness of spectrum coverage in complex wireless environments, data-driven learning approaches have recently been introduced for radio map estimation (RME). While existing deep learning based methods conduct RME given spectrum measurements gathered from dispersed sensors in the region of interest, they rely on centralized data at a fusion center, which raises critical concerns about data privacy leakage and high communication overhead. Federated learning (FL) enhances data security and communication efficiency in RME by allowing multiple clients to collaborate in model training without directly sharing local data. However, the performance of FL-based RME can be hindered by task heterogeneity across clients due to their unavailable or inaccurate landscaping information. To fill this gap, in this paper, we propose a physics-inspired distributed RME solution in the absence of landscaping information. The main idea is to develop a novel distributed RME framework empowered by the domain knowledge of radio propagation models, and by designing a new distributed learning approach that splits the entire RME model into two modules. A global autoencoder module is shared among clients to capture the common pathloss influence on the radio propagation pattern, while a client-specific autoencoder module focuses on learning the individual features produced by local shadowing effects from the unique building distributions in the local environment. Simulation results show that our proposed method outperforms the benchmarks. |
| 2502.00320 | $k$-SVD with Gradient Descent | cs.LG math.OC | We show that gradient descent with a simple, universal rule for step-size selection provably finds the $k$-SVD, i.e., the $k\geq 1$ largest singular values and corresponding vectors, of any matrix, despite nonconvexity. There has been substantial progress towards this in the past few years, where existing results are able to establish such guarantees for the *exact-parameterized* and *over-parameterized* settings, with an oracle-provided choice of step size. But guarantees for the generic setting, with a step-size selection that does not require oracle-provided information, have remained a challenge. We overcome this challenge and establish that gradient descent with an appealingly simple adaptive step size (akin to preconditioning) and random initialization enjoys global linear convergence in the generic setting. Our convergence analysis reveals that the gradient method has an attracting region, and within this attracting region, the method behaves like Heron's method (a.k.a. the Babylonian method). Empirically, we validate the theoretical results. The emergence of modern compute infrastructure for iterative optimization, coupled with this work, is likely to provide means to solve $k$-SVD for very large matrices. |
| 2502.00321 | MIM: Multi-modal Content Interest Modeling Paradigm for User Behavior Modeling | cs.IR cs.AI | Click-Through Rate (CTR) prediction is a crucial task in recommendation systems, online searches, and advertising platforms, where accurately capturing users' real interests in content is essential for performance. However, existing methods heavily rely on ID embeddings, which fail to reflect users' true preferences for content such as images and titles. This limitation becomes particularly evident in cold-start and long-tail scenarios, where traditional approaches struggle to deliver effective results. To address these challenges, we propose a novel Multi-modal Content Interest Modeling paradigm (MIM), which consists of three key stages: Pre-training, Content-Interest-Aware Supervised Fine-Tuning (C-SFT), and Content-Interest-Aware UBM (CiUBM). The pre-training stage adapts foundational models to domain-specific data, enabling the extraction of high-quality multi-modal embeddings. The C-SFT stage bridges the semantic gap between content and user interests by leveraging user behavior signals to guide the alignment of embeddings with user preferences. Finally, the CiUBM stage integrates multi-modal embeddings and ID-based collaborative filtering signals into a unified framework. Comprehensive offline experiments and online A/B tests conducted on Taobao, one of the world's largest e-commerce platforms, demonstrated the effectiveness and efficiency of the MIM method. The method has been successfully deployed online, achieving a significant increase of +14.14% in CTR and +4.12% in RPM, showcasing its industrial applicability and substantial impact on platform performance. To promote further research, we have publicly released the code and dataset at https://pan.quark.cn/s/8fc8ec3e74f3. |
| 2502.00322 | MODS: Moderating a Mixture of Document Speakers to Summarize Debatable Queries in Document Collections | cs.CL cs.IR | Query-focused summarization (QFS) gives a summary of documents to answer a query. Past QFS work assumes queries have one answer, ignoring debatable ones (Is law school worth it?). We introduce Debatable QFS (DQFS), a task to create summaries that answer debatable queries via documents with opposing perspectives; summaries must comprehensively cover all sources and balance perspectives, favoring no side. These goals elude LLM QFS systems, which: 1) lack structured content plans, failing to guide LLMs to write balanced summaries, and 2) use the same query to retrieve contexts across documents, failing to cover all perspectives specific to each document's content. To overcome this, we design MODS, a multi-LLM framework mirroring human panel discussions. MODS treats documents as individual Speaker LLMs and has a Moderator LLM that picks speakers to respond to tailored queries for planned topics. Speakers use tailored queries to retrieve relevant contexts from their documents and supply perspectives, which are tracked in a rich outline, yielding a content plan to guide the final summary. Experiments on ConflictingQA with controversial web queries and DebateQFS, our new dataset of debate queries from Debatepedia, show MODS beats SOTA by 38-59% in topic paragraph coverage and balance, based on new citation metrics. Users also find MODS's summaries to be readable and more balanced. |
| 2502.00329 | CoddLLM: Empowering Large Language Models for Data Analytics | cs.DB cs.AI | Large Language Models (LLMs) have the potential to revolutionize data analytics by simplifying tasks such as data discovery and SQL query synthesis through natural language interactions. This work serves as a pivotal first step toward the development of foundation models explicitly designed for data analytics applications. To propel this vision forward, we unveil a new data recipe for post-training LLMs, enhancing their comprehension of data management and empowering them to tackle complex real-world analytics tasks. Specifically, our innovative approach includes a scalable synthetic data generation method that enables the creation of a broad spectrum of topics centered on data representation and manipulation. Furthermore, we introduce two new tasks that seamlessly bridge tables and text. We show that such tasks can enhance models' understanding of schema creation and the nuanced translation between natural language and tabular data. Leveraging this data recipe, we post-train a new foundation model, named CoddLLM, based on Mistral-NeMo-12B. To assess the language understanding and reasoning capabilities of LLMs in the realm of data analytics, we contribute AnalyticsMMLU, a benchmark containing thousands of multiple-choice questions on databases, data analysis, and machine learning. Our focus on data discovery has resulted in the contribution of three comprehensive benchmarks that address both database and data lake scenarios. CoddLLM not only excels in performance but also sets a new standard, achieving the highest average accuracy across eight datasets. It outperforms GPT-3.5-Turbo on AnalyticsMMLU, exceeding GPT-4o by 12.1% in table selection and showing an average improvement of 24.9% in Text-to-SQL compared to the base model. |
| 2502.00330 | From Few to Many: Self-Improving Many-Shot Reasoners Through Iterative Optimization and Generation | cs.LG cs.AI stat.ML | Recent advances in long-context large language models (LLMs) have led to the emerging paradigm of many-shot in-context learning (ICL), where it is observed that scaling to many more demonstration examples beyond the conventional few-shot setup in the context can lead to performance benefits. However, despite its promise, it is unclear what aspects dominate the benefits and whether simply scaling to more examples is the most effective way of improving many-shot ICL. In this work, we first provide an analysis of the factors driving many-shot ICL, and we find that 1) many-shot performance can often still be attributed to a few disproportionately influential examples and 2) identifying such influential examples ("optimize") and using them as demonstrations to regenerate new examples ("generate") can lead to further improvements. Inspired by these findings, we propose BRIDGE, an algorithm that alternates between an optimize step, using Bayesian optimization to discover influential sets of examples, and a generate step, reusing this set to expand the reasoning paths of the examples back to the many-shot regime automatically. On Gemini, Claude, and Mistral LLMs of different sizes, we show that BRIDGE leads to significant improvements across a diverse set of tasks, including symbolic reasoning, numerical reasoning, and code generation. |
2502.00333
|
BiMaCoSR: Binary One-Step Diffusion Model Leveraging Flexible Matrix
Compression for Real Super-Resolution
|
cs.CV
|
While super-resolution (SR) methods based on diffusion models (DM) have
demonstrated inspiring performance, their deployment is impeded by heavy
memory and computation demands. Recent works apply two kinds of methods to
compress or accelerate the DM. One compresses the DM to 1-bit, a.k.a.
binarization, alleviating storage and computation pressure. The other distills
the multi-step DM into a single step, significantly speeding up the inference
process. Nonetheless, deploying DMs on resource-limited edge devices remains
out of reach. To address this problem, we propose BiMaCoSR,
which combines binarization and one-step distillation to obtain extreme
compression and acceleration. To prevent the catastrophic collapse of the model
caused by binarization, we propose a sparse matrix branch (SMB) and a low-rank
matrix branch (LRMB). Both auxiliary branches pass full-precision (FP)
information, but in different ways. SMB absorbs extreme values, and its output
is high-rank, carrying abundant FP information. In contrast, LRMB is inspired
by LoRA and is initialized with the top-r SVD components, outputting a
low-rank representation. The computation and storage overhead of the proposed
branches is negligible. Comprehensive comparison experiments
show that BiMaCoSR outperforms current state-of-the-art binarization methods
and achieves competitive performance compared with the FP one-step model.
BiMaCoSR achieves a 23.8x compression ratio and a 27.4x speedup over its FP
counterpart. Our code and model are available at
https://github.com/Kai-Liu001/BiMaCoSR.
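As an illustrative sketch (not the paper's implementation), the LRMB initialization from the top-r SVD components, paired with a generic sign-based binarizer, might look like the following. The `binarize` scale and the residual-based `forward` composition are assumptions for illustration; the paper's exact quantizer and branch wiring may differ.

```python
import numpy as np

def lrmb_init(W, r):
    """Initialize the low-rank matrix branch (LRMB) from the top-r SVD
    components of the full-precision weight W, so A @ B is the best
    rank-r approximation of W (Eckart-Young)."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :r] * S[:r]   # absorb singular values into the left factor
    B = Vt[:r, :]
    return A, B

def binarize(W):
    """Generic 1-bit quantizer: per-tensor scale times sign."""
    alpha = np.mean(np.abs(W))
    return alpha * np.sign(W)

def forward(x, W, r):
    """Binarized main path plus the low-rank FP branch, with the main
    path binarizing only what the branch does not capture."""
    A, B = lrmb_init(W, r)
    W_bin = binarize(W - A @ B)
    return x @ (W_bin + A @ B).T
```

The rank-r factorization keeps only `r * (out + in)` FP values, which is why the abstract can claim the branch overhead is negligible.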
|
2502.00334
|
UGPhysics: A Comprehensive Benchmark for Undergraduate Physics Reasoning
with Large Language Models
|
cs.CL cs.AI
|
Large language models (LLMs) have demonstrated remarkable capabilities in
solving complex reasoning tasks, particularly in mathematics. However, the
domain of physics reasoning presents unique challenges that have received
significantly less attention. Existing benchmarks often fall short in
evaluating LLMs' abilities on the breadth and depth of undergraduate-level
physics, underscoring the need for a comprehensive evaluation. To fill this
gap, we introduce UGPhysics, a large-scale and comprehensive benchmark
specifically designed to evaluate UnderGraduate-level Physics (UGPhysics)
reasoning with LLMs. UGPhysics includes 5,520 undergraduate-level physics
problems in both English and Chinese, covering 13 subjects with seven different
answer types and four distinct physics reasoning skills, all rigorously
screened for data leakage. Additionally, we develop a Model-Assistant
Rule-based Judgment (MARJ) pipeline specifically tailored for assessing answer
correctness of physics problems, ensuring accurate evaluation. Our evaluation
of 31 leading LLMs shows that the highest overall accuracy is only 49.8%
(achieved by OpenAI-o1-mini), emphasizing the need for models with stronger
physics reasoning skills beyond math abilities. We hope UGPhysics, along with MARJ,
will drive future advancements in AI for physics reasoning. Codes and data are
available at https://github.com/YangLabHKUST/UGPhysics .
|
2502.00336
|
Denoising Score Matching with Random Features: Insights on Diffusion
Models from Precise Learning Curves
|
cs.LG stat.ML
|
We derive asymptotically precise expressions for test and train errors of
denoising score matching (DSM) in generative diffusion models. The score
function is parameterized by random features neural networks, with the target
distribution being $d$-dimensional standard Gaussian. We operate in a regime
where the dimension $d$, number of data samples $n$, and number of features $p$
tend to infinity while keeping the ratios $\psi_n=\frac{n}{d}$ and
$\psi_p=\frac{p}{d}$ fixed. By characterizing the test and train errors, we
identify regimes of generalization and memorization in diffusion models.
Furthermore, our work sheds light on the conditions enhancing either
generalization or memorization. Consistent with prior empirical observations,
our findings indicate that the model complexity ($p$) and the number of noise
samples per data sample ($m$) used during DSM significantly influence
generalization and memorization behaviors.
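For reference, the DSM objective with $m$ noise draws per data sample, in its common isotropic-Gaussian form, is shown below; the abstract does not spell out the paper's exact parameterization of $s_\theta$ via random features, so this is the standard textbook form only.

```latex
\mathcal{L}_{\mathrm{DSM}}(\theta)
  = \frac{1}{nm}\sum_{i=1}^{n}\sum_{j=1}^{m}
    \left\| s_\theta\!\left(x_i + \sigma z_{ij}\right)
            + \frac{z_{ij}}{\sigma} \right\|^2,
  \qquad z_{ij} \sim \mathcal{N}(0, I_d).
```

Here $n$, $m$, and the feature count $p$ are the quantities whose ratios to $d$ the paper holds fixed in its asymptotic analysis.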
|
2502.00338
|
OneForecast: A Universal Framework for Global and Regional Weather
Forecasting
|
cs.LG physics.ao-ph
|
Accurate weather forecasts are important for disaster prevention,
agricultural planning, and water resource management. Traditional numerical
weather prediction (NWP) methods offer physically interpretable high-accuracy
predictions but are computationally expensive and fail to fully leverage
rapidly growing historical data. In recent years, deep learning methods have
made significant progress in weather forecasting, but challenges remain, such
as balancing global and regional high-resolution forecasts, excessive smoothing
in extreme event predictions, and insufficient dynamic system modeling. To
address these issues, this paper proposes a global-regional nested weather
forecasting framework based on graph neural networks (GNNs). By combining a
dynamic system perspective with multi-grid theory, we construct a multi-scale
graph structure and densify the target region to capture local high-frequency
features. We introduce an adaptive information propagation mechanism, using
dynamic gating units to deeply integrate node and edge features for more
accurate extreme event forecasting. For high-resolution regional forecasts, we
propose a neural nested grid method to mitigate boundary information loss.
Experimental results show that the proposed method performs excellently across
global to regional scales and short-term to long-term forecasts, especially in
extreme event predictions (e.g., typhoons), significantly improving forecast
accuracy. Our codes are available at https://github.com/YuanGao-YG/OneForecast.
|
2502.00339
|
Challenges and Innovations in LLM-Powered Fake News Detection: A
Synthesis of Approaches and Future Directions
|
cs.CL cs.CY
|
The pervasiveness of the dissemination of fake news through social media
platforms poses critical risks to the trust of the general public, societal
stability, and democratic institutions. This challenge calls for novel
methodologies in detection, which can keep pace with the dynamic and
multi-modal nature of misinformation. Recent works power detection with
large language models, multimodal frameworks, graph-based methodologies, and
adversarial training. Across these approaches, a key highlight emerges:
enhanced LLMs improve accuracy through richer semantics, and cross-modality
fusion yields more robust detection. The review
further identifies critical gaps in adaptability to dynamic social media
trends, real-time, and cross-platform detection capabilities, as well as the
ethical challenges thrown up by the misuse of LLMs. Future directions underline
the development of style-agnostic models, cross-lingual detection frameworks,
and robust policies with a view to mitigating LLM-driven misinformation. This
synthesis thus lays a concrete foundation for researchers and practitioners
committed to reinforcing fake news detection systems against the growing
complexity of the digital landscape.
|
2502.00340
|
Enhancing Token Filtering Efficiency in Large Language Model Training
with Collider
|
cs.LG cs.CL cs.DC
|
Token filtering has been proposed to enhance the utility of large language
models (LLMs) by eliminating inconsequential tokens during training. While
using fewer tokens should reduce computational workloads, existing studies have
not succeeded in achieving higher efficiency. This is primarily due to
insufficient sparsity when tokens are filtered only in the output layers, and
to inefficient sparse GEMM (General Matrix Multiplication) even when sparsity
is sufficient.
This paper presents Collider, a system unleashing the full efficiency of
token filtering in LLM training. At its core, Collider filters activations of
inconsequential tokens across all layers to maintain sparsity. Additionally, it
features an automatic workflow that transforms sparse GEMM into
dimension-reduced dense GEMM for optimized efficiency. Evaluations on three
LLMs-TinyLlama-1.1B, Qwen2.5-1.5B, and Phi1.5-1.4B-demonstrate that Collider
reduces backpropagation time by up to 35.1% and end-to-end training time by up
to 22.0% when filtering 40% of tokens. Utility assessments of training
TinyLlama on 15B tokens indicate that Collider sustains the utility
advancements of token filtering, improving model utility by a relative 16.3%
compared to regular training, and reduces training time from 4.7 days to 3.5
days using 8 GPUs. Collider is designed for easy integration into existing LLM
training frameworks, allowing systems already using token filtering to
accelerate training with just one line of code.
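The core transformation, replacing a sparse GEMM over mostly-zeroed token rows with a smaller dense GEMM over only the kept tokens, can be sketched in miniature as follows; this is a simplification of Collider's automatic workflow, not its actual implementation.

```python
import numpy as np

def filtered_matmul(X, W, keep_mask):
    """Gather the kept token rows, run one dense GEMM on the reduced
    matrix, and scatter the results back; dropped token rows stay zero.
    X: (tokens, hidden), W: (hidden, out), keep_mask: (tokens,) bool."""
    kept = np.flatnonzero(keep_mask)
    Y = np.zeros((X.shape[0], W.shape[1]), dtype=X.dtype)
    Y[kept] = X[kept] @ W   # dense GEMM touches only the kept rows
    return Y
```

With 40% of tokens filtered, the dense GEMM runs on 60% of the rows, which is where the backpropagation-time savings reported in the abstract would come from.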
|
2502.00342
|
Embodied Intelligence for 3D Understanding: A Survey on 3D Scene
Question Answering
|
cs.CV
|
3D Scene Question Answering (3D SQA) represents an interdisciplinary task
that integrates 3D visual perception and natural language processing,
empowering intelligent agents to comprehend and interact with complex 3D
environments. Recent advances in large multimodal modelling have driven the
creation of diverse datasets and spurred the development of instruction-tuning
and zero-shot methods for 3D SQA. However, this rapid progress introduces
challenges, particularly in achieving unified analysis and comparison across
datasets and baselines. This paper presents the first comprehensive survey of
3D SQA, systematically reviewing datasets, methodologies, and evaluation
metrics while highlighting critical challenges and future opportunities in
dataset standardization, multimodal fusion, and task design.
|
2502.00343
|
A Novel Approach to Translate Structural Aggregation Queries to
MapReduce Code
|
cs.DB cs.DC
|
Data management applications are growing and require more attention,
especially in the "big data" era. Thus, supporting such applications with novel
and efficient algorithms that achieve higher performance is critical. Array
database management systems are one way to support these applications by
dealing with data represented in n-dimensional data structures. For instance,
software like SciDB and RasDaMan can be powerful tools to achieve the required
performance on large-scale problems with multidimensional data. Like their
relational counterparts, these management systems support specific array query
languages as the user interface. As a popular programming model, MapReduce
allows large-scale data analysis, facilitates query processing, and is used as
a DB engine. Nevertheless, one major obstacle is the low productivity of
developing MapReduce applications. Unlike high-level declarative languages such
as SQL, MapReduce jobs are written in a low-level descriptive language, often
requiring massive programming efforts and complicated debugging processes. This
work presents a system that supports translating array queries expressed in the
Array Query Language (AQL) in SciDB into MapReduce jobs. We focus on
translating some unique structural aggregations, including circular, grid,
hierarchical, and sliding aggregations. Unlike traditional aggregations in
relational DBs, these structural aggregations are designed explicitly for array
manipulation. Thus, our work can be considered an array-view counterpart of
existing SQL to MapReduce translators like HiveQL and YSmart. Our translator
supports structural aggregations over arrays to meet various array
manipulations. The translator also supports user-defined aggregation functions
with minimal user effort. We show that our translator can generate optimized
MapReduce code, which performs better than the short handwritten code by up to
10.84x.
|
2502.00344
|
FinchGPT: a Transformer based language model for birdsong analysis
|
cs.CL
|
The long-range dependencies among the tokens, which originate from
hierarchical structures, are a defining hallmark of human language. However,
whether similar dependencies exist within the sequential vocalization of
non-human animals remains a topic of investigation. Transformer architectures,
known for their ability to model long-range dependencies among tokens, provide
a powerful tool for investigating this phenomenon. In this study, we employed
the Transformer architecture to analyze the songs of Bengalese finch (Lonchura
striata domestica), which are characterized by their highly variable and
complex syllable sequences. To this end, we developed FinchGPT, a
Transformer-based model trained on a textualized corpus of birdsongs, which
outperformed other architecture models in this domain. Attention weight
analysis revealed that FinchGPT effectively captures long-range dependencies
within syllable sequences. Furthermore, reverse-engineering approaches
demonstrated the impact of computational and biological manipulations on its
performance: restricting FinchGPT's attention span and disrupting birdsong
syntax through the ablation of specific brain nuclei markedly influenced the
model's outputs. Our study highlights the transformative potential of large
language models (LLMs) in deciphering the complexities of animal vocalizations,
offering a novel framework for exploring the structural properties of non-human
communication systems while shedding light on the computational distinctions
between biological brains and artificial neural networks.
|
2502.00345
|
The Composite Task Challenge for Cooperative Multi-Agent Reinforcement
Learning
|
cs.LG cs.AI cs.MA
|
The significant role of division of labor (DOL) in promoting cooperation is
widely recognized in real-world applications. Many cooperative multi-agent
reinforcement learning (MARL) methods have incorporated the concept of DOL to
improve cooperation among agents. However, existing testbeds typically use
tasks where DOL is not a necessary feature for achieving optimal policies.
Additionally, the full utilization of the DOL concept in MARL methods remains
unrealized due to the absence of appropriate tasks. To enhance the generality
and applicability of MARL methods in real-world scenarios, it is necessary to
develop tasks that demand multi-agent DOL and cooperation. In this paper, we
propose a series of tasks designed to meet these requirements, drawing on
real-world rules to guide their design. We guarantee that DOL and cooperation
are necessary conditions for completing the tasks and introduce three factors
to expand the diversity of the proposed tasks to cover more realistic
situations. We evaluate 10 cooperative MARL methods on the proposed tasks. The
results indicate that all baselines perform poorly on these tasks. To further
validate the solvability of these tasks, we also propose simplified variants.
Experimental results show that the baselines are able to handle these
simplified variants, providing evidence of the solvability of the proposed
tasks. The source files are available at https://github.com/Yurui-Li/CTC.
|
2502.00346
|
Actor Critic with Experience Replay-based automatic treatment planning
for prostate cancer intensity modulated radiotherapy
|
cs.LG cs.AI physics.med-ph
|
Background: Real-time treatment planning in IMRT is challenging due to
complex beam interactions. AI has improved automation, but existing models
require large, high-quality datasets and lack universal applicability. Deep
reinforcement learning (DRL) offers a promising alternative by mimicking human
trial-and-error planning.
Purpose: Develop a stochastic policy-based DRL agent for automatic treatment
planning with efficient training, broad applicability, and robustness against
adversarial attacks using Fast Gradient Sign Method (FGSM).
Methods: Using the Actor-Critic with Experience Replay (ACER) architecture,
the agent tunes treatment planning parameters (TPPs) in inverse planning.
Training is based on prostate cancer IMRT cases, using dose-volume histograms
(DVHs) as input. The model is trained on a single patient case, validated on
two independent cases, and tested on 300+ plans across three datasets. Plan
quality is assessed using ProKnow scores, and robustness is tested against
adversarial attacks.
Results: Despite training on a single case, the model generalizes well.
Before ACER-based planning, the mean plan score was 6.20$\pm$1.84; after,
93.09% of cases achieved a perfect score of 9, with a mean of 8.93$\pm$0.27.
The agent effectively prioritizes optimal TPP tuning and remains robust against
adversarial attacks.
Conclusions: The ACER-based DRL agent enables efficient, high-quality
treatment planning in prostate cancer IMRT, demonstrating strong
generalizability and robustness.
|
2502.00348
|
Personalized Denoising Implicit Feedback for Robust Recommender System
|
cs.IR
|
While implicit feedback is foundational to modern recommender systems,
factors such as human error, uncertainty, and ambiguity in user behavior
inevitably introduce significant noise into this feedback, adversely affecting
the accuracy and robustness of recommendations. To address this issue, existing
methods typically aim to reduce the training weight of noisy feedback or
discard it entirely, based on the observation that noisy interactions often
exhibit higher losses in the overall loss distribution. However, we identify
two key issues: (1) there is a significant overlap between normal and noisy
interactions in the overall loss distribution, and (2) this overlap becomes
even more pronounced when transitioning from pointwise loss functions (e.g.,
BCE loss) to pairwise loss functions (e.g., BPR loss). This overlap leads
traditional methods to misclassify noisy interactions as normal, and vice
versa. To tackle these challenges, we further investigate the loss overlap and
find that for a given user, there is a clear distinction between normal and
noisy interactions in the user's personal loss distribution. Based on this
insight, we propose a resampling strategy to Denoise using the user's Personal
Loss distribution, named PLD, which reduces the probability of noisy
interactions being optimized. Specifically, during each optimization iteration,
we create a candidate item pool for each user and resample the items from this
pool based on the user's personal loss distribution, prioritizing normal
interactions. Additionally, we conduct a theoretical analysis to validate PLD's
effectiveness and suggest ways to further enhance its performance. Extensive
experiments conducted on three datasets with varying noise ratios demonstrate
PLD's efficacy and robustness.
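A minimal sketch of the resampling idea, assuming an exponential down-weighting of high-loss items (the paper's exact weighting scheme may differ):

```python
import numpy as np

def pld_resample(personal_losses, n_samples, rng):
    """Resample item indices from one user's candidate pool with
    probability decreasing in that user's personal loss, so likely-noisy
    (high-loss) interactions are optimized less often."""
    shifted = personal_losses - personal_losses.min()  # numerical stability
    weights = np.exp(-shifted)
    probs = weights / weights.sum()
    return rng.choice(len(personal_losses), size=n_samples, p=probs)
```

The key point from the abstract is that the weighting is computed per user, since the normal/noisy separation is clear in each user's personal loss distribution but not in the global one.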
|
2502.00350
|
OrcaLoca: An LLM Agent Framework for Software Issue Localization
|
cs.SE cs.AI
|
Recent developments in Large Language Model (LLM) agents are revolutionizing
Autonomous Software Engineering (ASE), enabling automated coding, problem
fixes, and feature improvements. However, localization -- precisely identifying
software problems by navigating to relevant code sections -- remains a
significant challenge. Current approaches often yield suboptimal results due to
a lack of effective integration between LLM agents and precise code search
mechanisms. This paper introduces OrcaLoca, an LLM agent framework that
improves accuracy for software issue localization by integrating priority-based
scheduling for LLM-guided action, action decomposition with relevance scoring,
and distance-aware context pruning. Experimental results demonstrate that
OrcaLoca becomes the new open-source state-of-the-art (SOTA) in function match
rate (65.33%) on SWE-bench Lite. It also improves the final resolved rate of an
open-source framework by 6.33 percentage points through its patch generation
integration.
|
2502.00351
|
Multi-Order Hyperbolic Graph Convolution and Aggregated Attention for
Social Event Detection
|
cs.SI cs.AI
|
Social event detection (SED) is a task focused on identifying specific
real-world events and has broad applications across various domains. It is
integral to many mobile applications with social features, including major
platforms like Twitter, Weibo, and Facebook. By enabling the analysis of social
events, SED provides valuable insights for businesses to understand consumer
preferences and supports public services in handling emergencies and disaster
management. Due to the hierarchical structure of event detection data,
traditional approaches in Euclidean space often fall short in capturing the
complexity of such relationships. While existing methods in both Euclidean and
hyperbolic spaces have shown promising results, they tend to overlook
multi-order relationships between events. To address these limitations, this
paper introduces a novel framework, Multi-Order Hyperbolic Graph Convolution
with Aggregated Attention (MOHGCAA), designed to enhance the performance of
SED. Experimental results demonstrate significant improvements under both
supervised and unsupervised settings. To further validate the effectiveness and
robustness of the proposed framework, we conducted extensive evaluations across
multiple datasets, confirming its superiority in tackling common challenges in
social event detection.
|
2502.00352
|
A Differentiated Reward Method for Reinforcement Learning based
Multi-Vehicle Cooperative Decision-Making Algorithms
|
cs.AI cs.MA cs.RO
|
Reinforcement learning (RL) shows great potential for optimizing
multi-vehicle cooperative driving strategies through the state-action-reward
feedback loop, but it still faces challenges such as low sample efficiency.
This paper proposes a differentiated reward method based on steady-state
transition systems, which incorporates state transition gradient information
into the reward design by analyzing traffic flow characteristics, aiming to
optimize action selection and policy learning in multi-vehicle cooperative
decision-making. The performance of the proposed method is validated in RL
algorithms such as MAPPO, MADQN, and QMIX under varying autonomous vehicle
penetration. The results show that the differentiated reward method
significantly accelerates training convergence and outperforms the centering
reward and other baselines in terms of traffic efficiency, safety, and action
rationality.
Additionally, the method demonstrates strong scalability and environmental
adaptability, providing a novel approach for multi-agent cooperative
decision-making in complex traffic scenarios.
|
2502.00354
|
PM-MOE: Mixture of Experts on Private Model Parameters for Personalized
Federated Learning
|
cs.LG cs.AI cs.CR
|
Federated learning (FL) has gained widespread attention for its
privacy-preserving and collaborative learning capabilities. Due to significant
statistical heterogeneity, traditional FL struggles to generalize a shared
model across diverse data domains. Personalized federated learning addresses
this issue by dividing the model into a globally shared part and a locally
private part, with the local model correcting representation biases introduced
by the global model. Nevertheless, locally converged parameters more accurately
capture domain-specific knowledge, and current methods overlook the potential
benefits of these parameters. To address these limitations, we propose PM-MoE
architecture. This architecture integrates a mixture of personalized modules
with energy-based denoising of personalized modules, enabling each client to
select beneficial personalized parameters from other clients. We applied the
PM-MoE architecture to nine recent model-split-based personalized federated
learning algorithms, achieving performance improvements with minimal additional
training. Extensive experiments on six widely adopted datasets and two
heterogeneity settings validate the effectiveness of our approach. The source
code is available at \url{https://github.com/dannis97500/PM-MOE}.
|
2502.00355
|
Sampling in High-Dimensions using Stochastic Interpolants and
Forward-Backward Stochastic Differential Equations
|
cs.LG stat.ML
|
We present a class of diffusion-based algorithms to draw samples from
high-dimensional probability distributions given their unnormalized densities.
Ideally, our methods can transport samples from a Gaussian distribution to a
specified target distribution in finite time. Our approach relies on the
stochastic interpolants framework to define a time-indexed collection of
probability densities that bridge a Gaussian distribution to the target
distribution. Subsequently, we derive a diffusion process that obeys the
aforementioned probability density at each time instant. Obtaining such a
diffusion process involves solving certain Hamilton-Jacobi-Bellman PDEs. We
solve these PDEs using the theory of forward-backward stochastic differential
equations (FBSDE) together with machine learning-based methods. Through
numerical experiments, we demonstrate that our algorithm can effectively draw
samples from distributions that conventional methods struggle to handle.
|
2502.00358
|
Do Audio-Visual Segmentation Models Truly Segment Sounding Objects?
|
cs.SD cs.AI cs.LG cs.MM eess.AS
|
Unlike traditional visual segmentation, audio-visual segmentation (AVS)
requires the model not only to identify and segment objects but also to
determine whether they are sound sources. Recent AVS approaches, leveraging
transformer architectures and powerful foundation models like SAM, have
achieved impressive performance on standard benchmarks. Yet, an important
question remains: Do these models genuinely integrate audio-visual cues to
segment sounding objects? In this paper, we systematically investigate this
issue in the context of robust AVS. Our study reveals a fundamental bias in
current methods: they tend to generate segmentation masks based predominantly
on visual salience, irrespective of the audio context. This bias results in
unreliable predictions when sounds are absent or irrelevant. To address this
challenge, we introduce AVSBench-Robust, a comprehensive benchmark
incorporating diverse negative audio scenarios including silence, ambient
noise, and off-screen sounds. We also propose a simple yet effective approach
combining balanced training with negative samples and classifier-guided
similarity learning. Our extensive experiments show that state-of-the-art AVS
methods consistently fail under negative audio conditions, demonstrating the
prevalence of visual bias. In contrast, our approach achieves remarkable
improvements in both standard metrics and robustness measures, maintaining
near-perfect false positive rates while preserving high-quality segmentation
performance.
|
2502.00359
|
Exploring Representation-Aligned Latent Space for Better Generation
|
cs.LG
|
Generative models serve as powerful tools for modeling the real world, with
mainstream diffusion models, particularly those based on the latent diffusion
model paradigm, achieving remarkable progress across various tasks, such as
image and video synthesis. Latent diffusion models are typically trained using
Variational Autoencoders (VAEs), interacting with VAE latents rather than the
real samples. While this generative paradigm speeds up training and inference,
the quality of the generated outputs is limited by the latents' quality.
Traditional VAE latents are often seen as spatial compression in pixel space
and lack explicit semantic representations, which are essential for modeling
the real world. In this paper, we introduce ReaLS (Representation-Aligned
Latent Space), which integrates semantic priors to improve generation
performance. Extensive experiments show that fundamental DiT and SiT trained on
ReaLS can achieve a 15% improvement in FID metric. Furthermore, the enhanced
semantic latent space enables more perceptual downstream tasks, such as
segmentation and depth estimation.
|
2502.00360
|
Shape from Semantics: 3D Shape Generation from Multi-View Semantics
|
cs.CV cs.GR
|
We propose ``Shape from Semantics'', which is able to create 3D models whose
geometry and appearance match given semantics when observed from different
views. Traditional ``Shape from X'' tasks usually use visual input (e.g., RGB
images or depth maps) to reconstruct geometry, imposing strict constraints that
limit creative exploration. In applications such as Shadow Art and Wire Art,
the embedded semantics of a design are often hard to grasp through direct
observation, and the works rely heavily on specific setups for proper display. To
address these limitations, our framework uses semantics as input, greatly
expanding the design space to create objects that integrate multiple semantic
elements and are easily discernible by observers. Considering that this task
requires a rich imagination, we adopt various generative models and
structure-to-detail pipelines. Specifically, we adopt multi-semantics Score
Distillation Sampling (SDS) to distill 3D geometry and appearance from 2D
diffusion models, ensuring that the initial shape is consistent with the
semantic input. We then use image restoration and video generation models to
add more details as supervision. Finally, we introduce neural signed distance
field (SDF) representation to achieve detailed shape reconstruction. Our
framework generates meshes with complex details, well-structured geometry,
coherent textures, and smooth transitions, resulting in visually appealing and
eye-catching designs. Project page: https://shapefromsemantics.github.io
|
2502.00361
|
Soft Diffusion Actor-Critic: Efficient Online Reinforcement Learning for
Diffusion Policy
|
cs.LG
|
Diffusion policies have achieved superior performance in imitation learning
and offline reinforcement learning (RL) due to their rich expressiveness.
However, the vanilla diffusion training procedure requires samples from the
target distribution, which is impossible in online RL since we cannot sample
from the optimal policy, making training diffusion policies highly non-trivial
in online RL. Backpropagating the policy gradient through the diffusion
process incurs huge
computational costs and instability, thus being expensive and impractical. To
enable efficient diffusion policy training for online RL, we propose Soft
Diffusion Actor-Critic (SDAC), exploiting the viewpoint of diffusion models as
noise-perturbed energy-based models. The proposed SDAC relies solely on the
state-action value function as the energy functions to train diffusion
policies, bypassing sampling from the optimal policy while maintaining
lightweight computations. We conducted comprehensive comparisons on MuJoCo
benchmarks. The empirical results show that SDAC outperforms all recent online
RL methods for diffusion policies on most tasks, and improves by more than 120%
over soft actor-critic on complex locomotion tasks such as Humanoid and Ant.
|
2502.00362
|
Left-Deep Join Order Selection with Higher-Order Unconstrained Binary
Optimization on Quantum Computers
|
quant-ph cs.DB
|
Join order optimization is among the most crucial query optimization
problems, and its central position is also evident in the new research field
where quantum computing is applied to database optimization and data
management. In the field, join order optimization is the most studied database
problem, usually tackled with a quadratic unconstrained binary optimization
model, which is solved with various meta-heuristics such as quantum annealing,
quantum approximate optimization algorithm, or variational quantum eigensolver.
In this work, we continue developing quantum computing techniques for join
order optimization by presenting three novel quantum optimization algorithms.
These algorithms are based on a higher-order unconstrained binary optimization
model, which is a generalization of the quadratic model and has not previously
been applied to database problems. Theoretically, these optimization problems
naturally map to universal quantum computers and quantum annealers. Compared to
previous research, two of our algorithms are the first quantum algorithms to
precisely model the join order cost function. We prove theoretical bounds by
showing that these two methods encode the same plans as the dynamic programming
algorithm without cross-products, which provides the optimal result up to
cross-products. The third algorithm reaches at least as good plans as the
greedy algorithm without cross-products. These results establish an important theoretical connection between classical and quantum algorithms for join order selection that has not been studied in previous research. To
demonstrate our algorithms' practical usability, we have conducted an
experimental evaluation on thousands of clique, cycle, star, tree, and chain
query graphs using quantum and classical solvers.
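The dynamic-programming baseline that the two exact algorithms are shown to match can be sketched as follows: a minimal, illustrative left-deep join-order DP without cross-products under a simple C_out cost model (sum of intermediate result sizes). This is a toy sketch under assumptions, not the paper's implementation; the function name and cost model are illustrative.

```python
from itertools import combinations

def left_deep_dp(cards, edges, sel):
    """Left-deep join-order DP without cross-products (C_out cost model).

    cards: {relation: cardinality}
    edges: set of frozenset({a, b}) pairs from the query graph
    sel:   {frozenset({a, b}): join selectivity}
    Returns (best_cost, best_order) over connected left-deep plans.
    """
    rels = list(cards)
    # best[S] = (cost, order, size) of the cheapest left-deep plan joining set S
    best = {frozenset([r]): (0.0, (r,), float(cards[r])) for r in rels}
    for k in range(2, len(rels) + 1):
        for subset in combinations(rels, k):
            S = frozenset(subset)
            cand = None
            for r in S:                      # r is the relation joined last
                T = S - {r}
                if T not in best:
                    continue
                # no cross-products: r must connect to some relation in T
                joining = [e for e in edges if r in e and (e - {r}) <= T]
                if not joining:
                    continue
                cost_T, order_T, size_T = best[T]
                size = size_T * cards[r]
                for e in joining:
                    size *= sel[e]
                cost = cost_T + size         # C_out: sum of intermediate sizes
                if cand is None or cost < cand[0]:
                    cand = (cost, order_T + (r,), size)
            if cand is not None:
                best[S] = cand
    return best[frozenset(rels)][:2]
```

On a chain query A-B-C with strongly differing cardinalities, the DP prefers starting from the small end of the chain rather than the large one.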
|
2502.00363
|
Machine Learning Models for Reinforced Concrete Pipes Condition
Prediction: The State-of-the-Art Using Artificial Neural Networks and
Multiple Linear Regression in a Wisconsin Case Study
|
cs.LG cond-mat.mtrl-sci
|
The aging sewer infrastructure in the U.S., covering 2.1 million kilometers,
encounters increasing structural issues, resulting in around 75,000 yearly
sanitary sewer overflows that present serious economic, environmental, and
public health hazards. Conventional inspection techniques and deterministic
models do not account for the unpredictable nature of sewer decline, whereas
probabilistic methods depend on extensive historical data, which is frequently
lacking or incomplete. This research aims to enhance predictive accuracy for the condition of sewer pipelines through two machine learning models, artificial neural networks (ANNs) and multiple linear regression (MLR), by integrating
factors such as pipe age, material, diameter, environmental influences, and
PACP ratings. ANNs utilized ReLU activation functions and Adam optimization,
whereas MLR applied regularization to address multicollinearity, with both
models assessed through metrics like RMSE, MAE, and R2. The findings indicated
that ANNs surpassed MLR, attaining an R2 of 0.9066 compared to MLR's 0.8474,
successfully modeling nonlinear relationships while preserving generalization.
MLR, on the other hand, offered enhanced interpretability by pinpointing
significant predictors such as residual buildup. As a result, pipeline
degradation is driven by pipe length, age, and pipe diameter as key predictors,
while depth, soil type, and segment show minimal influence in this analysis.
Future studies ought to prioritize hybrid models that merge the accuracy of
ANNs with the interpretability of MLR, incorporating advanced methods such as
SHAP analysis and transfer learning to improve scalability in managing
infrastructure and promoting environmental sustainability.
|
2502.00365
|
What should an AI assessor optimise for?
|
cs.LG cs.AI
|
An AI assessor is an external, ideally independent system that predicts an indicator, e.g., a loss value, of another AI system. Assessors can leverage information from the test results of many other AI systems and have the flexibility of being trained on any loss function or scoring rule: from squared error to toxicity metrics. Here we address the question: is it always optimal to train the assessor for the target metric? Or could it be better to train for a different metric and then map predictions back to the target metric? Using twenty regression and classification problems with tabular data, we experimentally explore this question for, respectively, regression losses and classification scores with monotonic and non-monotonic mappings and find that, contrary to intuition, optimising for more informative metrics is not generally better. Surprisingly, some monotonic transformations are promising. For example, the logistic loss is useful for minimising absolute or quadratic errors in regression, and the logarithmic score helps maximise quadratic or spherical scores in classification.
|
2502.00366
|
Prostate-Specific Foundation Models for Enhanced Detection of Clinically
Significant Cancer
|
eess.IV cs.CV
|
Accurate prostate cancer diagnosis remains challenging. Even when using MRI,
radiologists exhibit low specificity and significant inter-observer
variability, leading to potential delays or inaccuracies in identifying
clinically significant cancers. This leads to numerous unnecessary biopsies and
risks of missing clinically significant cancers. Here we present prostate
vision contrastive network (ProViCNet), prostate organ-specific vision
foundation models for Magnetic Resonance Imaging (MRI) and Trans-Rectal
Ultrasound imaging (TRUS) for comprehensive cancer detection. ProViCNet was
trained and validated using 4,401 patients across six institutions, as a
prostate cancer detection model on radiology images relying on patch-level
contrastive learning guided by biopsy confirmed radiologist annotations.
ProViCNet demonstrated consistent performance across multiple internal and
external validation cohorts with area under the receiver operating curve values
ranging from 0.875 to 0.966, significantly outperforming radiologists in the
reader study (0.907 versus 0.805, p<0.001) for mpMRI, while achieving 0.670 to
0.740 for TRUS. We also integrated ProViCNet with standard PSA to develop a
virtual screening test, and we showed that we can maintain the high sensitivity
for detecting clinically significant cancers while more than doubling
specificity from 15% to 38% (p<0.001), thereby substantially reducing
unnecessary biopsies. These findings highlight ProViCNet's potential to enhance prostate cancer diagnostic accuracy and reduce unnecessary biopsies, thereby optimizing diagnostic pathways.
|
2502.00372
|
NAVER: A Neuro-Symbolic Compositional Automaton for Visual Grounding
with Explicit Logic Reasoning
|
cs.CV
|
Visual Grounding (VG) tasks, such as referring expression detection and
segmentation tasks are important for linking visual entities to context,
especially in complex reasoning tasks that require detailed query
interpretation. This paper explores VG beyond basic perception, highlighting
challenges for methods that require reasoning like human cognition. Recent
advances in large language models (LLMs) and vision-language models (VLMs)
have improved abilities for visual comprehension, contextual understanding, and
reasoning. These methods are mainly split into end-to-end and compositional
methods, with the latter offering more flexibility. Compositional approaches
that integrate LLMs and foundation models show promising performance but still
struggle with complex reasoning with language-based logical representations. To
address these limitations, we propose NAVER, a compositional visual grounding
method that integrates explicit probabilistic logic reasoning within a
finite-state automaton, equipped with a self-correcting mechanism. This design
improves robustness and interpretability in inference through explicit logic
reasoning. Our results show that NAVER achieves SoTA performance compared with
recent end-to-end and compositional baselines. The code is available at
https://github.com/ControlNet/NAVER .
|
2502.00373
|
Generalized Lie Symmetries in Physics-Informed Neural Operators
|
cs.LG physics.comp-ph
|
Physics-informed neural operators (PINOs) have emerged as powerful tools for
learning solution operators of partial differential equations (PDEs). Recent
research has demonstrated that incorporating Lie point symmetry information can
significantly enhance the training efficiency of PINOs, primarily through
techniques like data, architecture, and loss augmentation. In this work, we
focus on the latter, highlighting that point symmetries oftentimes result in no
training signal, limiting their effectiveness in many problems. To address
this, we propose a novel loss augmentation strategy that leverages evolutionary
representatives of point symmetries, a specific class of generalized symmetries
of the underlying PDE. These generalized symmetries provide a richer set of
generators compared to standard symmetries, leading to a more informative
training signal. We demonstrate that leveraging evolutionary representatives
enhances the performance of neural operators, resulting in improved data
efficiency and accuracy during training.
|
2502.00374
|
A Unit-based System and Dataset for Expressive Direct Speech-to-Speech
Translation
|
cs.CL cs.CV cs.MM cs.SD eess.AS
|
Current research in speech-to-speech translation (S2ST) primarily
concentrates on translation accuracy and speech naturalness, often overlooking
key elements like paralinguistic information, which is essential for conveying
emotions and attitudes in communication. To address this, our research
introduces a novel, carefully curated multilingual dataset from various movie
audio tracks. Each dataset pair is precisely matched for paralinguistic
information and duration. We enhance this by integrating multiple prosody
transfer techniques, aiming for translations that are accurate,
natural-sounding, and rich in paralinguistic details. Our experimental results
confirm that our model retains more paralinguistic information from the source
speech while maintaining high standards of translation accuracy and
naturalness.
|
2502.00375
|
Scalable Framework for Classifying AI-Generated Content Across
Modalities
|
cs.CV
|
The rapid growth of generative AI technologies has heightened the importance
of effectively distinguishing between human and AI-generated content, as well
as classifying outputs from diverse generative models. This paper presents a
scalable framework that integrates perceptual hashing, similarity measurement,
and pseudo-labeling to address these challenges. Our method enables the
incorporation of new generative models without retraining, ensuring
adaptability and robustness in dynamic scenarios. Comprehensive evaluations on
the Defactify4 dataset demonstrate competitive performance in text and image
classification tasks, achieving high accuracy across both distinguishing human
and AI-generated content and classifying among generative methods. These
results highlight the framework's potential for real-world applications as
generative AI continues to evolve. Source code is publicly available at
https://github.com/ffyyytt/defactify4.
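The pipeline described above, perceptual hashing, similarity measurement, and pseudo-labeling to enrol a new generative model without retraining, can be illustrated with a minimal sketch. The tiny average hash, the `max_dist` threshold, and the enrolment scheme below are assumptions for illustration only, not the paper's implementation:

```python
def average_hash(img):
    """Perceptual 'average hash': one bit per pixel, set if pixel > image mean."""
    flat = [p for row in img for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Similarity measurement: Hamming distance between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def classify(img, references, max_dist=2):
    """Assign the label of the nearest reference hash, or pseudo-label a
    new class when nothing is within max_dist (a new generator appears)."""
    h = average_hash(img)
    label, dist = min(((lbl, hamming(h, ref)) for lbl, ref in references.items()),
                      key=lambda t: t[1])
    if dist <= max_dist:
        return label
    new_label = f"pseudo_{len(references)}"
    references[new_label] = h  # enrol the new source without retraining
    return new_label
```

The key property the abstract emphasizes, adaptability without retraining, shows up here as nothing more than adding a new reference hash to the dictionary.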
|
2502.00376
|
SSRepL-ADHD: Adaptive Complex Representation Learning Framework for ADHD
Detection from Visual Attention Tasks
|
cs.LG cs.HC eess.SP
|
Self-Supervised Representation Learning (SSRepL) can capture meaningful and robust representations of Attention Deficit Hyperactivity Disorder (ADHD) data and has the potential to improve model performance on downstream detection of other neurodevelopmental disorders (NDDs). In
this paper, a novel SSRepL and Transfer Learning (TL)-based framework that
incorporates a Long Short-Term Memory (LSTM) and a Gated Recurrent Units (GRU)
model is proposed to detect children with potential symptoms of ADHD. This
model uses Electroencephalogram (EEG) signals extracted during visual attention tasks to accurately detect ADHD, improving EEG signal quality through preprocessing steps such as normalization, filtering, and data balancing. For the experimental analysis, we
use three different models: 1) an SSRepL and TL-based LSTM-GRU model named
SSRepL-ADHD, which integrates LSTM and GRU layers to capture temporal
dependencies in the data, 2) lightweight SSRepL-based DNN model (LSSRepL-DNN),
and 3) Random Forest (RF). In the study, these models are thoroughly evaluated
using well-known performance metrics (i.e., accuracy, precision, recall, and
F1-score). The results show that the proposed SSRepL-ADHD model achieves the highest accuracy, 81.11%, although difficulties associated with dataset imbalance and feature selection remain.
|
2502.00377
|
When End-to-End is Overkill: Rethinking Cascaded Speech-to-Text
Translation
|
cs.CL cs.AI cs.MM cs.SD eess.AS
|
Though end-to-end speech-to-text translation has been a great success, we argue that the cascaded speech-to-text translation model, usually criticized for error propagation between the automatic speech recognition (ASR) and machine translation (MT) models, still has its place. In this paper, we
explore the benefits of incorporating multiple candidates from ASR and
self-supervised speech features into MT. Our analysis reveals that the primary
cause of cascading errors stems from the increased divergence between similar
samples in the speech domain when mapped to the text domain. By including
multiple candidates and self-supervised speech features, our approach allows
the machine translation model to choose the right words and ensure precise
translation using various speech samples. This strategy minimizes error spread
and takes advantage of large ASR and MT datasets, along with pre-trained ASR/MT
models, while addressing associated issues.
|
2502.00379
|
Latent Action Learning Requires Supervision in the Presence of
Distractors
|
cs.CV cs.AI cs.LG
|
Recently, latent action learning, pioneered by Latent Action Policies (LAPO), has shown remarkable pre-training efficiency on observation-only data,
offering potential for leveraging vast amounts of video available on the web
for embodied AI. However, prior work has focused on distractor-free data, where
changes between observations are primarily explained by ground-truth actions.
Unfortunately, real-world videos contain action-correlated distractors that may
hinder latent action learning. Using the Distracting Control Suite (DCS), we
empirically investigate the effect of distractors on latent action learning and
demonstrate that LAPO struggles in such scenarios. We propose LAOM, a simple LAPO
modification that improves the quality of latent actions by 8x, as measured by
linear probing. Importantly, we show that providing supervision with
ground-truth actions, as few as 2.5% of the full dataset, during latent action
learning improves downstream performance by 4.2x on average. Our findings
suggest that integrating supervision during Latent Action Models (LAM) training
is critical in the presence of distractors, challenging the conventional
pipeline of first learning LAM and only then decoding from latent to
ground-truth actions.
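Latent-action quality above is measured by linear probing. As a purely illustrative sketch (not the paper's evaluation code), a one-dimensional probe can be fit in closed form and scored with R^2:

```python
def linear_probe_r2(latents, actions):
    """Linear probe: closed-form least-squares fit from a 1-D latent to a
    ground-truth action; returns R^2 as a proxy for latent-action quality."""
    n = len(latents)
    mx = sum(latents) / n
    my = sum(actions) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(latents, actions))
    varx = sum((x - mx) ** 2 for x in latents)
    w = cov / varx                       # slope of the probe
    b = my - w * mx                      # intercept
    ss_res = sum((y - (w * x + b)) ** 2 for x, y in zip(latents, actions))
    ss_tot = sum((y - my) ** 2 for y in actions)
    return 1 - ss_res / ss_tot
```

A latent that encodes the action cleanly probes to R^2 near 1, while a distractor-corrupted latent probes much lower, which is the kind of gap the abstract's "8x improvement" refers to.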
|
2502.00380
|
CoHiRF: A Scalable and Interpretable Clustering Framework for
High-Dimensional Data
|
cs.LG stat.ML
|
Clustering high-dimensional data poses significant challenges due to the
curse of dimensionality, scalability issues, and the presence of noisy and
irrelevant features. We propose Consensus Hierarchical Random Feature (CoHiRF),
a novel clustering method designed to address these challenges effectively.
CoHiRF leverages random feature selection to mitigate noise and dimensionality
effects, repeatedly applies K-Means clustering in reduced feature spaces, and
combines results through a unanimous consensus criterion. This iterative
approach constructs a cluster assignment matrix, where each row records the
cluster assignments of a sample across repetitions, enabling the identification
of stable clusters by comparing identical rows. Clusters are organized
hierarchically, enabling the interpretation of the hierarchy to gain insights
into the dataset. CoHiRF is computationally efficient with a running time
comparable to K-Means, scalable to massive datasets, and exhibits robust
performance against state-of-the-art methods such as SC-SRGF, HDBSCAN, and
OPTICS. Experimental results on synthetic and real-world datasets confirm the
method's ability to reveal meaningful patterns while maintaining scalability,
making it a powerful tool for high-dimensional data analysis.
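The loop described above, repeated K-Means on random feature subsets, a per-sample assignment row, and unanimous consensus over identical rows, can be sketched in pure Python. This is a toy re-implementation under assumptions, not the authors' code; `kmeans_labels` is a minimal Lloyd's iteration and all parameter names are illustrative:

```python
import random

def kmeans_labels(X, k, iters=20, rng=None):
    """Minimal Lloyd's K-Means; returns a cluster label per row of X."""
    rng = rng or random
    centers = [list(x) for x in rng.sample(X, k)]
    labels = [0] * len(X)
    for _ in range(iters):
        for i, x in enumerate(X):
            labels[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(x, centers[c])))
        for c in range(k):
            members = [X[i] for i in range(len(X)) if labels[i] == c]
            if members:
                centers[c] = [sum(col) / len(col) for col in zip(*members)]
    return labels

def cohirf_consensus(X, k=2, repeats=8, n_feats=2, seed=0):
    """CoHiRF-style consensus: cluster on random feature subsets, then
    group samples whose assignment rows agree across all repetitions."""
    rng = random.Random(seed)
    dim = len(X[0])
    rows = [[] for _ in X]
    for _ in range(repeats):
        feats = rng.sample(range(dim), n_feats)
        sub = [[x[f] for f in feats] for x in X]
        for i, lbl in enumerate(kmeans_labels(sub, k, rng=rng)):
            rows[i].append(lbl)
    # unanimous consensus: identical assignment rows -> same stable cluster
    ids = {}
    return [ids.setdefault(tuple(r), len(ids)) for r in rows]
```

Samples end up in the same consensus cluster only if every repetition agrees, which is how the method filters out clusterings driven by noisy or irrelevant features.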
|
2502.00382
|
Masked Generative Nested Transformers with Decode Time Scaling
|
cs.CV cs.AI cs.LG
|
Recent advances in visual generation have made significant strides in
producing content of exceptional quality. However, most methods suffer from a
fundamental problem - a bottleneck of inference computational efficiency. Most
of these algorithms involve multiple passes over a transformer model to
generate tokens or denoise inputs. However, the model size is kept consistent
throughout all iterations, which makes it computationally expensive. In this
work, we aim to address this issue primarily through two key ideas - (a) not
all parts of the generation process need equal compute, and we design a decode
time model scaling schedule to utilize compute effectively, and (b) we can
cache and reuse some of the computation. Combining these two ideas leads to
using smaller models to process more tokens while large models process fewer
tokens. These different-sized models do not increase the parameter size, as
they share parameters. We rigorously experiment with ImageNet 256$\times$256,
UCF101, and Kinetics600 to showcase the efficacy of the proposed method for
image/video generation and frame prediction. Our experiments show that with
almost $3\times$ less compute than baseline, our model obtains competitive
performance.
|
2502.00384
|
It's Not Just a Phase: On Investigating Phase Transitions in Deep
Learning-based Side-channel Analysis
|
cs.CR cs.LG
|
Side-channel analysis (SCA) represents a realistic threat where the attacker
can observe unintentional information to obtain secret data. Evaluation labs
also use the same SCA techniques in the security certification process. The
results in the last decade have shown that machine learning, especially deep
learning, is an extremely powerful SCA approach, allowing the breaking of
protected devices while achieving optimal attack performance. Unfortunately,
deep learning operates as a black-box, making it less useful for security
evaluators who must understand how attacks work to prevent them in the future.
This work demonstrates that mechanistic interpretability can effectively scale
to realistic scenarios where relevant information is sparse and well-defined
interchange interventions to the input are impossible due to side-channel
protections. Concretely, we reverse engineer the features the network learns
during phase transitions, eventually retrieving secret masks, allowing us to
move from black-box to white-box evaluation.
|
2502.00385
|
The Impact of Persona-based Political Perspectives on Hateful Content
Detection
|
cs.CL cs.AI
|
While pretraining language models with politically diverse content has been
shown to improve downstream task fairness, such approaches require significant
computational resources often inaccessible to many researchers and
organizations. Recent work has established that persona-based prompting can
introduce political diversity in model outputs without additional training.
However, it remains unclear whether such prompting strategies can achieve
results comparable to political pretraining for downstream tasks. We
investigate this question using persona-based prompting strategies in
multimodal hate-speech detection tasks, specifically focusing on hate speech in
memes. Our analysis reveals that when mapping personas onto a political compass
and measuring persona agreement, inherent political positioning has
surprisingly little correlation with classification decisions. Notably, this
lack of correlation persists even when personas are explicitly injected with
stronger ideological descriptors. Our findings suggest that while LLMs can
exhibit political biases in their responses to direct political questions,
these biases may have less impact on practical classification tasks than
previously assumed. This raises important questions about the necessity of
computationally expensive political pretraining for achieving fair performance
in downstream tasks.
|
2502.00386
|
Efficient Adaptive Label Refinement for Label Noise Learning
|
cs.CV
|
Deep neural networks are highly susceptible to overfitting noisy labels,
which leads to degraded performance. Existing methods address this issue by
employing manually defined criteria, aiming to achieve optimal partitioning in
each iteration to avoid fitting noisy labels while thoroughly learning clean
samples. However, this often results in overly complex and difficult-to-train
models. To address this issue, we decouple the tasks of avoiding fitting
incorrect labels and thoroughly learning clean samples and propose a simple yet
highly applicable method called Adaptive Label Refinement (ALR). First,
inspired by label refurbishment techniques, we update the original hard labels
to soft labels using the model's predictions to reduce the risk of fitting
incorrect labels. Then, by introducing the entropy loss, we gradually `harden'
the high-confidence soft labels, guiding the model to better learn from clean
samples. This approach is simple and efficient, requiring no prior knowledge of
noise or auxiliary datasets, making it more accessible compared to existing
methods. We validate ALR's effectiveness through experiments on benchmark
datasets with artificial label noise (CIFAR-10/100) and real-world datasets
with inherent noise (ANIMAL-10N, Clothing1M, WebVision). The results show that
ALR outperforms state-of-the-art methods.
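The two decoupled steps above, refurbishing hard labels into soft labels from model predictions and then "hardening" high-confidence soft labels, can be sketched in isolation. Here temperature sharpening stands in for the effect of the entropy loss; all function names and constants are illustrative assumptions, not the paper's implementation:

```python
import math

def refine_label(soft, pred, momentum=0.7):
    """Refurbish: move the soft label toward the model's prediction,
    reducing the risk of fitting an incorrect hard label."""
    return [momentum * s + (1 - momentum) * p for s, p in zip(soft, pred)]

def entropy(p):
    """Shannon entropy of a probability vector."""
    return -sum(x * math.log(x) for x in p if x > 0)

def harden(soft, temperature=0.5):
    """Entropy-style sharpening of a high-confidence soft label
    (temperature < 1 lowers entropy, pushing toward one-hot)."""
    scaled = [p ** (1 / temperature) for p in soft]
    z = sum(scaled)
    return [p / z for p in scaled]
```

Starting from a noisy one-hot label, repeated refinement converges toward the model's (presumably cleaner) prediction, and hardening then lowers its entropy so training behaves as if on a clean sample.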
|
2502.00392
|
RefDrone: A Challenging Benchmark for Referring Expression Comprehension
in Drone Scenes
|
cs.CV
|
Drones have become prevalent robotic platforms with diverse applications,
showing significant potential in Embodied Artificial Intelligence (Embodied
AI). Referring Expression Comprehension (REC) enables drones to locate objects
based on natural language expressions, a crucial capability for Embodied AI.
Despite advances in REC for ground-level scenes, aerial views introduce unique
challenges including varying viewpoints, occlusions and scale variations. To
address this gap, we introduce RefDrone, a REC benchmark for drone scenes.
RefDrone reveals three key challenges in REC: 1) multi-scale and small-scale
target detection; 2) multi-target and no-target samples; 3) complex environment
with rich contextual expressions. To efficiently construct this dataset, we
develop RDAgent (referring drone annotation framework with multi-agent system),
a semi-automated annotation tool for REC tasks. RDAgent ensures high-quality
contextual expressions and reduces annotation cost. Furthermore, we propose
Number GroundingDINO (NGDINO), a novel method designed to handle multi-target
and no-target cases. NGDINO explicitly learns and utilizes the number of
objects referred to in the expression. Comprehensive experiments with
state-of-the-art REC methods demonstrate that NGDINO achieves superior
performance on both the proposed RefDrone and the existing gRefCOCO datasets.
The dataset and code will be publicly available at
https://github.com/sunzc-sunny/refdrone.
|
2502.00395
|
FlexCloud: Direct, Modular Georeferencing and Drift-Correction of Point
Cloud Maps
|
cs.RO cs.CV
|
Current software stacks for real-world applications of autonomous driving
leverage map information to ensure reliable localization, path planning, and
motion prediction. An important field of research is the generation of point
cloud maps, referring to the topic of simultaneous localization and mapping
(SLAM). As most recent developments do not include global position data, the
resulting point cloud maps suffer from internal distortion and missing
georeferencing, preventing their use for map-based localization approaches.
Therefore, we propose FlexCloud for an automatic georeferencing of point cloud
maps created from SLAM. Our approach is designed to work modularly with
different SLAM methods, utilizing only the generated local point cloud map and
its odometry. Using the corresponding GNSS positions enables direct
georeferencing without additional control points. By leveraging a 3D
rubber-sheet transformation, we can correct distortions within the map caused
by long-term drift while maintaining its structure. Our approach enables the
creation of consistent, globally referenced point cloud maps from data
collected by a mobile mapping system (MMS). The source code of our work is
available at https://github.com/TUMFTM/FlexCloud.
|
2502.00396
|
Dexterous Cable Manipulation: Taxonomy, Multi-Fingered Hand Design, and
Long-Horizon Manipulation
|
cs.RO
|
Existing research on cable manipulation has relied on two-fingered grippers, which makes it difficult to perform the cable manipulation tasks that humans routinely perform. However, unlike dexterous manipulation of rigid objects,
the development of dexterous cable manipulation skills in robotics remains
underexplored due to the unique challenges posed by a cable's deformability and
inherent uncertainty. In addition, using a dexterous hand introduces specific
difficulties in tasks, such as cable grasping, pulling, and in-hand bending,
for which no dedicated task definitions, benchmarks, or evaluation metrics
exist. Furthermore, we observed that most existing dexterous hands are designed
with structures identical to humans', typically featuring only one thumb, which
often limits their effectiveness during dexterous cable manipulation. Lastly, existing non-task-specific methods lack the generalization ability to solve these cable manipulation tasks or are unsuitable for the available hardware. We make three contributions to real-world dexterous cable manipulation: (1) We first defined and organized a set
manipulation in the following steps: (1) We first defined and organized a set
of dexterous cable manipulation tasks into a comprehensive taxonomy, covering
most short-horizon action primitives and long-horizon tasks for one-handed
cable manipulation. This taxonomy revealed that coordination between the thumb
and the index finger is critical for cable manipulation, which decomposes
long-horizon tasks into simpler primitives. (2) We designed a novel
five-fingered hand with 25 degrees of freedom (DoF), featuring two symmetric
thumb-index configurations and a rotatable joint on each fingertip, which
enables dexterous cable manipulation. (3) We developed a demonstration
collection pipeline for this non-anthropomorphic hand, which is difficult to
operate by previous motion capture methods.
|
2502.00397
|
Minimalistic Video Saliency Prediction via Efficient Decoder & Spatio
Temporal Action Cues
|
cs.CV
|
This paper introduces ViNet-S, a 36MB model based on the ViNet architecture
with a U-Net design, featuring a lightweight decoder that significantly reduces
model size and parameters without compromising performance. Additionally,
ViNet-A (148MB) incorporates spatio-temporal action localization (STAL)
features, differing from traditional video saliency models that use action
classification backbones. Our studies show that an ensemble of ViNet-S and
ViNet-A, by averaging predicted saliency maps, achieves state-of-the-art
performance on three visual-only and six audio-visual saliency datasets,
outperforming transformer-based models in both parameter efficiency and
real-time performance, with ViNet-S reaching over 1000 fps.
|
2502.00401
|
Spectro-Riemannian Graph Neural Networks
|
cs.LG cs.AI stat.ML
|
Can integrating spectral and curvature signals unlock new potential in graph
representation learning? Non-Euclidean geometries, particularly Riemannian
manifolds such as hyperbolic (negative curvature) and spherical (positive
curvature), offer powerful inductive biases for embedding complex graph
structures like scale-free, hierarchical, and cyclic patterns. Meanwhile,
spectral filtering excels at processing signal variations across graphs, making
it effective in homophilic and heterophilic settings. Leveraging both can
significantly enhance the learned representations. To this end, we propose
Spectro-Riemannian Graph Neural Networks (CUSP) - the first graph
representation learning paradigm that unifies both CUrvature (geometric) and
SPectral insights. CUSP is a mixed-curvature spectral GNN that learns spectral
filters to optimize node embeddings in products of constant-curvature manifolds
(hyperbolic, spherical, and Euclidean). Specifically, CUSP introduces three
novel components: (a) Cusp Laplacian, an extension of the traditional graph
Laplacian based on Ollivier-Ricci curvature, designed to capture the curvature
signals better; (b) Cusp Filtering, which employs multiple Riemannian graph
filters to obtain cues from various bands in the eigenspectrum; and (c) Cusp
Pooling, a hierarchical attention mechanism combined with a curvature-based
positional encoding to assess the relative importance of differently curved
substructures in our graph. Empirical evaluation across eight homophilic and
heterophilic datasets demonstrates the superiority of CUSP in node
classification and link prediction tasks, with a gain of up to 5.3% over
state-of-the-art models.
|
2502.00402
|
Enhancing Highway Safety: Accident Detection on the A9 Test Stretch
Using Roadside Sensors
|
cs.CV
|
Road traffic injuries are the leading cause of death for people aged 5-29,
resulting in about 1.19 million deaths each year. To reduce these fatalities,
it is essential to address human errors like speeding, drunk driving, and
distractions. Additionally, faster accident detection and quicker medical
response can help save lives. We propose an accident detection framework that
combines a rule-based approach with a learning-based one. We introduce a
dataset of real-world highway accidents featuring high-speed crash sequences.
It includes 294,924 labeled 2D boxes, 93,012 labeled 3D boxes, and track IDs
across 48,144 frames captured at 10 Hz using four roadside cameras and LiDAR
sensors. The dataset covers ten object classes and is released in the OpenLABEL
format. Our experiments and analysis demonstrate the reliability of our method.
|
2502.00404
|
Exploring Linear Attention Alternative for Single Image Super-Resolution
|
cs.CV eess.IV
|
Deep learning-based single-image super-resolution (SISR) technology focuses
on enhancing low-resolution (LR) images into high-resolution (HR) ones.
Although significant progress has been made, challenges remain in computational
complexity and quality, particularly in remote sensing image processing. To
address these issues, we propose our Omni-Scale RWKV Super-Resolution
(OmniRWKVSR) model which presents a novel approach that combines the Receptance
Weighted Key Value (RWKV) architecture with feature extraction techniques such
as Visual RWKV Spatial Mixing (VRSM) and Visual RWKV Channel Mixing (VRCM),
aiming to overcome the limitations of existing methods and achieve superior
SISR performance. This work provides an effective solution for high-quality image reconstruction. On the 4x super-resolution task, we achieved an average improvement of 0.26% in PSNR and 0.16% in SSIM over the MambaIR model.
|
2502.00406
|
ALU: Agentic LLM Unlearning
|
cs.AI cs.CL
|
Information removal or suppression in large language models (LLMs) is a
desired functionality, useful in AI regulation, legal compliance, safety, and
privacy. LLM unlearning methods aim to remove information on demand from LLMs.
Current LLM unlearning methods struggle to balance the unlearning efficacy and
utility due to the competing nature of these objectives. Keeping the unlearning
process computationally feasible without assuming access to the model weights
is an overlooked area. We present the first agentic LLM unlearning (ALU)
method, a multi-agent, retrain-free, model-agnostic approach to LLM unlearning
that achieves effective unlearning while preserving the utility. Our ALU
framework unlearns by involving multiple LLM agents, each designed for a
specific step in the unlearning process, without the need to update model
weights for any of the agents in the framework. Users can easily request any
set of unlearning instances in any sequence, and ALU seamlessly adapts in real
time. This is facilitated without requiring any changes in the underlying LLM
model. Through extensive experiments on established benchmarks (TOFU, WMDP,
WPU) and jailbreaking techniques (many shot, target masking, other languages),
we demonstrate that ALU consistently stands out as the most robust LLM
unlearning framework among current state-of-the-art methods while incurring a
low constant-time cost. We further highlight ALU's superior performance
compared to existing methods when evaluated at scale. Specifically, ALU is
assessed on up to 1000 unlearning targets, exceeding the evaluation scope of
all previously proposed LLM unlearning methods.
|
2502.00407
|
Causal Abstraction Learning based on the Semantic Embedding Principle
|
cs.LG cs.AI
|
Structural causal models (SCMs) allow us to investigate complex systems at
multiple levels of resolution. The causal abstraction (CA) framework formalizes
the mapping between high- and low-level SCMs. We address CA learning in a
challenging and realistic setting, where SCMs are inaccessible, interventional
data is unavailable, and sample data is misaligned. A key principle of our
framework is $\textit{semantic embedding}$, formalized as the high-level
distribution lying on a subspace of the low-level one. This principle naturally
links linear CA to the geometry of the $\textit{Stiefel manifold}$. We present
a category-theoretic approach to SCMs that enables the learning of a CA by
finding a morphism between the low- and high-level probability measures,
adhering to the semantic embedding principle. Consequently, we formulate a
general CA learning problem. As an application, we solve this problem for
linear CA, considering Gaussian measures and the Kullback-Leibler divergence as
the objective. Given the nonconvexity of the learning task, we develop three
algorithms building upon existing paradigms for Riemannian optimization. We
demonstrate that the proposed methods succeed on both synthetic and real-world
brain data with different degrees of prior information about the structure of
CA.
|
2502.00408
|
Segment Anything for Histopathology
|
eess.IV cs.CV
|
Nucleus segmentation is an important analysis task in digital pathology.
However, methods for automatic segmentation often struggle with new data from a
different distribution, requiring users to manually annotate nuclei and retrain
data-specific models. Vision foundation models (VFMs), such as the Segment
Anything Model (SAM), offer a more robust alternative for automatic and
interactive segmentation. Despite their success in natural images, a foundation
model for nucleus segmentation in histopathology is still missing. Initial
efforts to adapt SAM have shown some success, but did not yet introduce a
comprehensive model for diverse segmentation tasks. To close this gap, we
introduce PathoSAM, a VFM for nucleus segmentation, based on training SAM on a
diverse dataset. Our extensive experiments show that it is the new
state-of-the-art model for automatic and interactive nucleus instance
segmentation in histopathology. We also demonstrate how it can be adapted for
other segmentation tasks, including semantic nucleus segmentation. For this
task, we show that it yields results better than popular methods, while not yet
beating the state-of-the-art, CellViT. Our models are open-source and
compatible with popular tools for data annotation. We also provide scripts for
whole-slide image segmentation. Our code and models are publicly available at
https://github.com/computational-cell-analytics/patho-sam.
|
2502.00409
|
Doing More with Less -- Implementing Routing Strategies in Large
Language Model-Based Systems: An Extended Survey
|
cs.AI cs.CL
|
Large Language Models (LLM)-based systems, i.e. interconnected elements that
include an LLM as a central component (e.g., conversational agents), are
typically monolithic static architectures that rely on a single LLM for all
user queries. However, they often require different preprocessing strategies,
levels of reasoning, or knowledge. Generalist LLMs (e.g., GPT-4) trained on
very large multi-topic corpora can perform well on a variety of tasks, but they
require significant financial, energy, and hardware resources that may not be
justified for basic tasks, potentially incurring unnecessary costs for a given
query. To overcome this problem, a routing mechanism routes user queries
to the most suitable components, such as smaller LLMs or experts in specific
topics. This approach may improve response quality while minimising costs.
Routing can be expanded to other components of the conversational agent
architecture, such as the selection of optimal embedding strategies. This paper
explores key considerations for integrating routing into LLM-based systems,
focusing on resource management, cost definition, and strategy selection. Our
main contributions include a formalisation of the problem, a novel taxonomy of
existing approaches emphasising relevance and resource efficiency, and a
comparative analysis of these strategies in relation to industry practices.
Finally, we identify critical challenges and directions for future research.
|
2502.00412
|
TROI: Cross-Subject Pretraining with Sparse Voxel Selection for Enhanced
fMRI Visual Decoding
|
cs.CV
|
fMRI (functional Magnetic Resonance Imaging) visual decoding involves
decoding the original image from brain signals elicited by visual stimuli. This
often relies on manually labeled ROIs (Regions of Interest) to select brain
voxels. However, these ROIs can contain redundant information and noise,
reducing decoding performance. Additionally, the lack of automated ROI labeling
methods hinders the practical application of fMRI visual decoding technology,
especially for new subjects. This work presents TROI (Trainable Region of
Interest), a novel two-stage, data-driven ROI labeling method for cross-subject
fMRI decoding tasks, particularly when subject samples are limited. TROI
leverages labeled ROIs in the dataset to pretrain an image decoding backbone on
a cross-subject dataset, enabling efficient optimization of the input layer for
new subjects without retraining the entire model from scratch. In the first
stage, we introduce a voxel selection method that combines sparse mask training
and low-pass filtering to quickly generate the voxel mask and determine input
layer dimensions. In the second stage, we apply a learning rate rewinding
strategy to fine-tune the input layer for downstream tasks. Experimental
results on the same small-sample dataset as the baseline, for brain visual
retrieval and reconstruction tasks, show that our voxel selection method
surpasses the state-of-the-art method MindEye2, which uses an annotated ROI
mask.
|
2502.00413
|
Predictive modeling and anomaly detection in large-scale web portals
through the CAWAL framework
|
cs.LG cs.IR
|
This study presents an approach that uses session and page view data
collected through the CAWAL framework, enriched through specialized processes,
for advanced predictive modeling and anomaly detection in web usage mining
(WUM) applications. Traditional WUM methods often rely on web server logs,
which limit data diversity and quality. Integrating application logs with web
analytics, the CAWAL framework creates comprehensive session and page view
datasets, providing a more detailed view of user interactions and effectively
addressing these limitations. This integration enhances data diversity and
quality while eliminating the preprocessing stage required in conventional WUM,
leading to greater process efficiency. The enriched datasets, created by
cross-integrating session and page view data, were applied to advanced machine
learning models, such as Gradient Boosting and Random Forest, which are known
for their effectiveness in capturing complex patterns and modeling non-linear
relationships. These models achieved over 92% accuracy in predicting user
behavior and significantly improved anomaly detection capabilities. The results
show that this approach offers detailed insights into user behavior and system
performance metrics, making it a reliable solution for improving large-scale
web portals' efficiency, reliability, and scalability.
|
2502.00414
|
Social media polarization during conflict: Insights from an ideological
stance dataset on Israel-Palestine Reddit comments
|
cs.CL
|
In politically sensitive scenarios like wars, social media serves as a
platform for polarized discourse and expressions of strong ideological stances.
While prior studies have explored ideological stance detection in general
contexts, limited attention has been given to conflict-specific settings. This
study addresses this gap by analyzing 9,969 Reddit comments related to the
Israel-Palestine conflict, collected between October 2023 and August 2024. The
comments were categorized into three stance classes: Pro-Israel, Pro-Palestine,
and Neutral. Various approaches, including machine learning, pre-trained
language models, neural networks, and prompt engineering strategies for
open-source large language models (LLMs), were employed to classify these stances.
Performance was assessed using metrics such as accuracy, precision, recall, and
F1-score. Among the tested methods, the Scoring and Reflective Re-read prompt
in Mixtral 8x7B demonstrated the highest performance across all metrics. This
study provides comparative insights into the effectiveness of different models
for detecting ideological stances in highly polarized social media contexts.
The dataset used in this research is publicly available for further exploration
and validation.
|
2502.00415
|
MarketSenseAI 2.0: Enhancing Stock Analysis through LLM Agents
|
q-fin.CP cs.AI cs.CL cs.MA q-fin.PM
|
MarketSenseAI is a novel framework for holistic stock analysis which
leverages Large Language Models (LLMs) to process financial news, historical
prices, company fundamentals and the macroeconomic environment to support
decision making in stock analysis and selection. In this paper, we present the
latest advancements to MarketSenseAI, driven by rapid technological progress in
LLMs. Through a novel architecture combining Retrieval-Augmented Generation
and LLM agents, the framework processes SEC filings and earnings calls, while
enriching macroeconomic analysis through systematic processing of diverse
institutional reports. We demonstrate a significant improvement in fundamental
analysis accuracy over the previous version. Empirical evaluation on S\&P 100
stocks over two years (2023-2024) shows MarketSenseAI achieving cumulative
returns of 125.9% compared to the index return of 73.5%, while maintaining
comparable risk profiles. Further validation on S\&P 500 stocks during 2024
demonstrates the framework's scalability, delivering a 33.8% higher Sortino
ratio than the market. This work marks a significant advancement in applying
LLM technology to financial analysis, offering insights into the robustness of
LLM-driven investment strategies.
|
2502.00416
|
GO-GAN: Geometry Optimization Generative Adversarial Network for
Achieving Optimized Structures with Targeted Physical Properties
|
cs.CE
|
This paper presents GO-GAN, a novel Generative Adversarial Network (GAN)
architecture for geometry optimization (GO), specifically to generate
structures based on user-specified input parameters. The architecture for
GO-GAN proposed here combines a \texttt{Pix2Pix} GAN with a new input
mechanism, involving a dynamic batch gradient descent-based training loop that
leverages dataset symmetries. The model, implemented here using
\texttt{TensorFlow} and \texttt{Keras}, is trained using input images
representing scalar physical properties generated by a custom MATLAB code.
After training, GO-GAN rapidly generates optimized geometries from input images
representing scalar inputs of the physical properties. Results demonstrate
GO-GAN's ability to produce acceptable designs with desirable variations. These
variations arise from the influence of the discriminator during training and
are of practical significance, ensuring adherence to specifications while
enabling creative exploration of the design space.
|
2502.00418
|
Parameter Efficient Fine-Tuning of Segment Anything Model
|
cs.CV
|
Segmentation is an important analysis task for biomedical images, enabling
the study of individual organelles, cells or organs. Deep learning has
massively improved segmentation methods, but challenges remain in
generalization to new conditions, requiring costly data annotation. Vision
foundation models, such as Segment Anything Model (SAM), address this issue
through broad segmentation capabilities. However, these models still require
finetuning on annotated data, albeit with fewer annotations, to achieve
optimal results for new conditions. As a downside, they require more
computational resources. This makes parameter-efficient finetuning (PEFT)
relevant for their application. We contribute the first comprehensive study of
PEFT for SAM applied to biomedical segmentation by evaluating 9 PEFT methods on
diverse datasets. We also provide an implementation of QLoRA for vision
transformers and a new approach for resource-efficient finetuning of SAM. Our
code is publicly available at
https://github.com/computational-cell-analytics/peft-sam.
|
2502.00421
|
Sagalee: an Open Source Automatic Speech Recognition Dataset for Oromo
Language
|
cs.CL cs.SD eess.AS
|
We present a novel Automatic Speech Recognition (ASR) dataset for the Oromo
language, a widely spoken language in Ethiopia and neighboring regions. The
dataset was collected through a crowd-sourcing initiative, encompassing a
diverse range of speakers and phonetic variations. It consists of 100 hours of
real-world audio recordings paired with transcriptions, covering read speech in
both clean and noisy environments. This dataset addresses the critical need for
ASR resources for the underrepresented Oromo language. To show its
applicability for the ASR task, we conducted experiments using the Conformer
model, achieving a Word Error Rate (WER) of 15.32% with hybrid CTC and AED loss
and WER of 18.74% with pure CTC loss. Additionally, fine-tuning the Whisper
model resulted in a significantly improved WER of 10.82%. These results
establish baselines for Oromo ASR, highlighting both the challenges and the
potential for improving ASR performance in Oromo. The dataset is publicly
available at https://github.com/turinaf/sagalee and we encourage its use for
further research and development in Oromo speech processing.
|
2502.00423
|
Stochastic Linear Bandits with Latent Heterogeneity
|
cs.LG stat.ME stat.ML
|
This paper addresses the critical challenge of latent heterogeneity in online
decision-making, where individual responses to business actions vary due to
unobserved characteristics. While existing approaches in data-driven
decision-making have focused on observable heterogeneity through contextual
features, they fall short when heterogeneity stems from unobservable factors
such as lifestyle preferences and personal experiences. We propose a novel
latent heterogeneous bandit framework that explicitly models this unobserved
heterogeneity in customer responses, with promotion targeting as our primary
example. Our methodology introduces an innovative algorithm that simultaneously
learns latent group memberships and group-specific reward functions. Through
theoretical analysis and empirical validation using data from a mobile commerce
platform, we establish high-probability bounds for parameter estimation,
convergence rates for group classification, and comprehensive regret bounds.
Notably, our theoretical analysis reveals two distinct types of regret
measures: a ``strong regret'' against an oracle with perfect knowledge of
customer memberships, which remains non-sub-linear due to inherent
classification uncertainty, and a ``regular regret'' against an oracle aware
only of deterministic components, for which our algorithm achieves a sub-linear
rate that is minimax optimal in horizon length and dimension. We further
demonstrate that existing bandit algorithms ignoring latent heterogeneity incur
constant average regret that accumulates linearly over time. Our framework
provides practitioners with new tools for decision-making under latent
heterogeneity and extends to various business applications, including
personalized pricing, resource allocation, and inventory management.
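The alternating idea sketched in the abstract (jointly learning latent group memberships and group-specific reward functions) can be illustrated with a toy, purely hypothetical implementation; the variable names, the 1-D linear reward model y = theta_g * x, and the least-squares refit are my assumptions, not the paper's actual algorithm:

```python
import random

# Hedged toy sketch (not the paper's algorithm): alternate between
# (1) assigning each customer to the latent group whose reward parameter
# best explains that customer's history and (2) refitting each group's
# parameter by 1-D least squares. Rewards follow y = theta_g * x + noise.

def fit_latent_groups(histories, n_groups=2, n_iters=20, seed=0):
    rng = random.Random(seed)
    thetas = [rng.uniform(-1.0, 1.0) for _ in range(n_groups)]
    assign = [0] * len(histories)
    for _ in range(n_iters):
        # Classification step: pick the group with smallest squared error.
        for i, hist in enumerate(histories):
            errs = [sum((y - t * x) ** 2 for x, y in hist) for t in thetas]
            assign[i] = errs.index(min(errs))
        # Estimation step: closed-form 1-D least squares per group.
        for g in range(n_groups):
            pts = [p for i, h in enumerate(histories) if assign[i] == g for p in h]
            sxx = sum(x * x for x, _ in pts)
            if sxx > 0:
                thetas[g] = sum(x * y for x, y in pts) / sxx
    return thetas, assign

def make_history(theta, n, rng):
    # One customer's (action, reward) pairs with small Gaussian noise.
    hist = []
    for _ in range(n):
        x = rng.uniform(0.0, 1.0)
        hist.append((x, theta * x + rng.gauss(0.0, 0.05)))
    return hist

# Two hidden groups with opposite responses to the same action.
rng = random.Random(1)
true_theta = [2.0, -1.0]
labels = [i % 2 for i in range(40)]
histories = [make_history(true_theta[g], 15, rng) for g in labels]
thetas, assign = fit_latent_groups(histories)
```

On this synthetic data the recovered parameters approach {2.0, -1.0} and the inferred memberships match the true groups up to label switching, which mirrors the abstract's point that ignoring latent heterogeneity (fitting a single shared parameter) would cancel the two groups against each other.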
|
2502.00425
|
MQuant: Unleashing the Inference Potential of Multimodal Large Language
Models via Full Static Quantization
|
cs.CV cs.AI
|
Multimodal large language models (MLLMs) have garnered widespread attention
due to their ability to understand multimodal input. However, their large
parameter sizes and substantial computational demands severely hinder their
practical deployment and application. While quantization is an effective way to
reduce model size and inference latency, its application to MLLMs remains
underexplored. In this paper, we propose MQuant, a post-training quantization
(PTQ) framework designed to tackle the unique challenges of multimodal large
language models (MLLMs). Conventional quantization often struggles with MLLMs
because of (a) high inference latency from large visual token counts, (b)
distributional disparities between visual and textual tokens, and (c) extreme
outliers introduced by Hadamard-based transformations. To address these issues,
MQuant introduces: Modality-Specific Static Quantization (MSQ), assigning
distinct static scales for visual vs. textual tokens; Attention-Invariant
Flexible Switching (AIFS), reordering tokens to preserve causal attention while
eliminating expensive token-wise scale computations; Rotation Magnitude
Suppression (RMS), mitigating weight outliers arising from online Hadamard
rotations. On five mainstream MLLMs (including Qwen-VL, MiniCPM-V, CogVLM2),
MQuant under W4A8 achieves near-floating-point accuracy (<1% degradation) while
reducing inference latency by up to 30%, significantly outperforming existing
PTQ baselines. MQuant effectively bridges the gap toward efficient and accurate
MLLM inference on resource-constrained devices. Code will be
released.
|
2502.00426
|
TEST-V: TEst-time Support-set Tuning for Zero-shot Video Classification
|
cs.CV
|
Recently, adapting Vision Language Models (VLMs) to zero-shot visual
classification by tuning class embedding with a few prompts (Test-time Prompt
Tuning, TPT) or replacing class names with generated visual samples
(support-set) has shown promising results. However, TPT cannot avoid the
semantic gap between modalities while the support-set cannot be tuned. To this
end, we draw on each other's strengths and propose a novel framework namely
TEst-time Support-set Tuning for zero-shot Video Classification (TEST-V). It
first dilates the support-set with multiple prompts (Multi-prompting
Support-set Dilation, MSD) and then erodes the support-set via learnable
weights to mine key cues dynamically (Temporal-aware Support-set Erosion, TSE).
Specifically, i) MSD expands the support samples for each class based on
multiple prompts queried from LLMs to enrich the diversity of the support-set.
ii) TSE tunes the support-set with factorized learnable weights according to
the temporal prediction consistency in a self-supervised manner to mine pivotal
supporting cues for each class. $\textbf{TEST-V}$ achieves state-of-the-art
results across four benchmarks and has good interpretability for the
support-set dilation and erosion.
|
2502.00429
|
How Do Model Export Formats Impact the Development of ML-Enabled
Systems? A Case Study on Model Integration
|
cs.SE cs.LG
|
Machine learning (ML) models are often integrated into ML-enabled systems to
provide software functionality that would otherwise be impossible. This
integration requires the selection of an appropriate ML model export format,
for which many options are available. These formats are crucial for ensuring a
seamless integration, and choosing a suboptimal one can negatively impact
system development. However, little evidence is available to guide
practitioners during the export format selection.
We therefore evaluated various model export formats regarding their impact on
the development of ML-enabled systems from an integration perspective. Based on
the results of a preliminary questionnaire survey (n=17), we designed an
extensive embedded case study with two ML-enabled systems in three versions
with different technologies. We then analyzed the effect of five popular export
formats, namely ONNX, Pickle, TensorFlow's SavedModel, PyTorch's TorchScript,
and Joblib. In total, we studied 30 units of analysis (2 systems x 3 tech
stacks x 5 formats) and collected data via structured field notes.
The holistic qualitative analysis of the results indicated that ONNX offered
the most efficient integration and portability across most cases. SavedModel
and TorchScript were very convenient to use in Python-based systems, but
otherwise required workarounds (TorchScript more than SavedModel). SavedModel
also allowed the easy incorporation of preprocessing logic into a single file,
which made it scalable for complex deep learning use cases. Pickle and Joblib
were the most challenging to integrate, even in Python-based systems. Regarding
technical support, all model export formats had strong technical documentation
and strong community support across platforms such as Stack Overflow and
Reddit. Practitioners can use our findings to inform the selection of ML export
formats suited to their context.
|
2502.00432
|
Community Membership Hiding via Gradient-based Optimization
|
cs.SI
|
We tackle the problem of \emph{community membership hiding}, which involves
strategically altering a network's structure to obscure a target node's
membership in a specific community identified by a detection algorithm. We
reformulate the original discrete counterfactual graph objective as a
differentiable constrained optimization task. To solve this, we propose
\method{}, a gradient-based method that modifies the network's structure within
the feasible bounds for an individual target node, effectively concealing its
membership. Experimental results across multiple datasets and community
detection algorithms show that our approach surpasses existing baselines,
offering a better balance between accuracy and computational efficiency.
|
2502.00433
|
CAT Pruning: Cluster-Aware Token Pruning For Text-to-Image Diffusion
Models
|
cs.CV
|
Diffusion models have revolutionized generative tasks, especially in the
domain of text-to-image synthesis; however, their iterative denoising process
demands substantial computational resources. In this paper, we present a novel
acceleration strategy that integrates token-level pruning with caching
techniques to tackle this computational challenge. By employing noise relative
magnitude, we identify significant token changes across denoising iterations.
Additionally, we enhance token selection by incorporating spatial clustering
and ensuring distributional balance. Our experiments reveal a
50%-60% reduction in computational costs while preserving the performance of
the model, thereby markedly increasing the efficiency of diffusion models. The
code is available at https://github.com/ada-cheng/CAT-Pruning
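The selection criterion described above (noise relative magnitude plus spatial clustering with distributional balance) can be sketched in a hedged toy form; the function name, the per-cluster proportional quota, and the toy data below are my assumptions for illustration, not the released CAT-Pruning code:

```python
# Hedged sketch (assumed form, not the paper's code) of cluster-aware token
# selection: rank tokens by the relative magnitude of their change between
# two denoising steps, but keep a proportional share from every spatial
# cluster to preserve distributional balance.

def select_tokens(prev, curr, cluster_ids, keep_ratio=0.5):
    # Noise relative magnitude per token.
    scores = [abs(c - p) / (abs(p) + 1e-8) for p, c in zip(prev, curr)]
    clusters = {}
    for idx, cid in enumerate(cluster_ids):
        clusters.setdefault(cid, []).append(idx)
    selected = []
    for members in clusters.values():
        members.sort(key=lambda i: scores[i], reverse=True)
        k = max(1, round(keep_ratio * len(members)))  # balanced per-cluster share
        selected.extend(members[:k])
    return sorted(selected)

# Toy example: 8 tokens in 2 spatial clusters; keep the half that changed most.
prev = [1.0] * 8
curr = [1.1, 2.0, 1.0, 1.05, 3.0, 1.0, 1.2, 1.0]
kept = select_tokens(prev, curr, [0, 0, 0, 0, 1, 1, 1, 1])
```

Without the per-cluster quota, a plain global top-k over the same scores could drain one spatial region entirely, which is the imbalance the clustering step guards against.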
|
2502.00434
|
Compilation and Fast Model Counting beyond CNF
|
cs.CC cs.AI cs.LO
|
Circuits in deterministic decomposable negation normal form (d-DNNF) are
representations of Boolean functions that enable linear-time model counting.
This paper strengthens our theoretical knowledge of what classes of functions
can be efficiently transformed, or compiled, into d-DNNF. Our main contribution
is the fixed-parameter tractable (FPT) compilation of conjunctions of specific
constraints parameterized by incidence treewidth. This subsumes the known
result for CNF. The constraints in question are all functions representable by
constant-width ordered binary decision diagrams (OBDDs) for all variable
orderings. For instance, this includes parity constraints and cardinality
constraints with constant threshold. The running time of the FPT compilation is
singly exponential in the incidence treewidth but hides large constants in the
exponent. To balance that, we give a more efficient FPT algorithm for model
counting that applies to a sub-family of the constraints and does not require
compilation.
|
2502.00435
|
SatMamba: Development of Foundation Models for Remote Sensing Imagery
Using State Space Models
|
cs.CV
|
Foundation models refer to deep learning models pretrained on large unlabeled
datasets through self-supervised algorithms. In the Earth science and remote
sensing communities, there is growing interest in transforming the use of Earth
observation data, including satellite and aerial imagery, through foundation
models. Various foundation models have been developed for remote sensing, such
as those for multispectral, high-resolution, and hyperspectral images, and have
demonstrated superior performance on various downstream tasks compared to
traditional supervised models. These models are evolving rapidly, with
capabilities to handle multispectral, multitemporal, and multisensor data. Most
studies use masked autoencoders in combination with Vision Transformers (ViTs)
as the backbone for pretraining. While the models showed promising performance,
ViTs face challenges, such as quadratic computational scaling with input
length, which may limit performance on multiband and multitemporal data with
long sequences. This research aims to address these challenges by proposing
SatMamba, a new pretraining framework that combines masked autoencoders with
State Space Model, offering linear computational scaling. Experiments on
high-resolution imagery across various downstream tasks show promising results,
paving the way for more efficient foundation models and unlocking the full
potential of Earth observation data. The source code is available at
https://github.com/mdchuc/HRSFM.
|
2502.00436
|
Secure Data Reconstruction: A Direct Data-Driven Approach
|
eess.SY cs.SY
|
This paper addresses the problem of secure data reconstruction for unknown
systems, where data collected from the system are susceptible to malicious
manipulation. We aim to recover the real trajectory without prior knowledge of
the system model. To achieve this, a behavioral language is used to represent
the system, describing it using input/output trajectories instead of
state-space models. We consider two attack scenarios. In the first scenario, up
to $k$ entries of the collected data are malicious. On the other hand, the
second scenario assumes that at most $k$ channels from sensors or actuators can
be compromised, implying that any data collected from these channels might be
falsified. For both scenarios, we formulate the trajectory recovery problem as
an optimization problem and introduce sufficient conditions to ensure
successful recovery of the true data. Since finding exact solutions to these
problems can be computationally inefficient, we further approximate them using
an $\ell_1$-norm and group Least Absolute Shrinkage and Selection Operator
(LASSO). We demonstrate that under certain conditions, these approximation
problems also find the true trajectory while maintaining low computational
complexity. Finally, we extend the proposed algorithms to noisy data. By
reconstructing the secure trajectory, this work serves as a safeguard mechanism
for subsequent data-driven control methods.
|
2502.00439
|
UniAttn: Reducing Inference Costs via Softmax Unification for
Post-Training LLMs
|
cs.CL
|
Post-training is essential for adapting Large Language Models (LLMs) to
real-world applications. Deploying post-trained models faces significant
challenges due to substantial memory overhead and noticeable inference latency.
Existing work has identified significant redundancies in LLMs and proposed
efficient architectures, namely intra-layer KV sharing and cross-layer KV
sharing. However, intra-layer KV sharing still results in high inference costs,
while cross-layer KV sharing leads to significant performance degradation. As a
result, both methods remain suboptimal for post-training pre-trained LLMs. In
this paper, we identify that the \texttt{Softmax} operation is a primary
bottleneck for LLM inference and discover that it is actually highly redundant
during post-training. We propose Softmax \textbf{Uni}fication in
\textbf{Att}e\textbf{n}tion (\textbf{UniAttn}), a novel post-training method
that unifies Softmax activations across transformer blocks to reduce LLM
inference costs. Additionally, UniAttn adopts a linear projection to compensate
for the errors induced by Softmax unification. Experiments show that UniAttn
matches the performance of standard post-training while significantly reducing
inference costs, outperforming existing efficient architectures during
post-training. Our code will be available at
\url{https://github.com/Bostoncake/UniAttn}.
|
2502.00443
|
Model-Free Predictive Control: Introductory Algebraic Calculations, and
a Comparison with HEOL and ANNs
|
eess.SY cs.AI cs.SY
|
Model predictive control (MPC) is a popular control engineering practice, but
requires a sound knowledge of the model. Model-free predictive control (MFPC),
a burning issue today, also related to reinforcement learning (RL) in AI, is
reformulated here via a linear differential equation with constant
coefficients, thanks to a new perspective on optimal control combined with
recent advances in the field of model-free control. It replaces Dynamic
Programming, the Hamilton-Jacobi-Bellman equation, and Pontryagin's Maximum
Principle. The computing burden is low and the implementation is
straightforward. Two nonlinear examples, a chemical reactor and a two-tank
system, illustrate our approach. A comparison with the HEOL setting, where some
expertise about the process model is needed, shows only a slight superiority of
the latter. A recent identification of the two-tank system via a complex ANN
architecture suggests that full modeling and the corresponding machine learning
machinery are not always necessary, neither in control nor, more generally, in
AI.
|
2502.00448
|
HERA: Improving Long Document Summarization using Large Language Models
with Context Packaging and Reordering
|
cs.CL
|
Despite the rapid growth in the context length of large language models (LLMs),
they still perform poorly on long document summarization. An important reason
is that relevant information about an event is scattered throughout long
documents, and the messy narrative order impairs LLMs' accurate understanding
and utilization of long documents. To address these issues, we propose a novel
summary generation framework called HERA. Specifically, we first segment a long
document by its semantic structure, then retrieve text segments about the same
event, and finally reorder them to form the input context. We evaluate our
approach on two long document summarization
datasets. The experimental results show that HERA outperforms foundation models
in ROUGE, BERTScore and faithfulness metrics, while HERA does not require
additional fine-tuning and resources.
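The segment-retrieve-reorder pipeline described above can be sketched with a minimal, hedged toy implementation; the function names, the paragraph-based segmentation, and the bag-of-words cosine similarity used to group segments "about the same event" are simplifying assumptions of mine, not HERA's actual retrieval components:

```python
from collections import Counter
import math
import re

# Hedged toy sketch (not the authors' implementation): segment a long
# document, group segments that appear to discuss the same event by lexical
# similarity, and reorder them so related segments sit adjacently in the
# context passed to the summarizer.

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    num = sum(a[w] * b[w] for w in a if w in b)
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def reorder_context(document, sim_threshold=0.15):
    # Segment on blank lines (a stand-in for semantic segmentation).
    segments = [s.strip() for s in document.split("\n\n") if s.strip()]
    vecs = [Counter(re.findall(r"[a-z']+", s.lower())) for s in segments]
    ordered, used = [], set()
    for i in range(len(segments)):
        if i in used:
            continue
        # Start an "event" group with segment i, then pull in every later
        # unused segment that is lexically similar to it.
        group = [i] + [j for j in range(i + 1, len(segments))
                       if j not in used and cosine(vecs[i], vecs[j]) >= sim_threshold]
        used.update(group)
        ordered.extend(segments[j] for j in group)
    return "\n\n".join(ordered)

doc = ("The merger was announced in January.\n\n"
       "Quarterly profits rose sharply.\n\n"
       "Regulators approved the merger in March.\n\n"
       "Analysts credited profits to strong sales.")
```

Running `reorder_context(doc)` places the two merger segments next to each other and the two profit segments next to each other, illustrating how reordering can undo a scattered narrative before summarization.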
|
2502.00451
|
Towards Privacy-aware Mental Health AI Models: Advances, Challenges, and
Opportunities
|
cs.CL cs.AI
|
Mental illness is a widespread and debilitating condition with substantial
societal and personal costs. Traditional diagnostic and treatment approaches,
such as self-reported questionnaires and psychotherapy sessions, often impose
significant burdens on both patients and clinicians, limiting accessibility and
efficiency. Recent advances in Artificial Intelligence (AI), particularly in
Natural Language Processing and multimodal techniques, hold great potential for
recognizing and addressing conditions such as depression, anxiety, bipolar
disorder, schizophrenia, and post-traumatic stress disorder. However, privacy
concerns, including the risk of sensitive data leakage from datasets and
trained models, remain a critical barrier to deploying these AI systems in
real-world clinical settings. These challenges are amplified in multimodal
methods, where personal identifiers such as voice and facial data can be
misused. This paper presents a critical and comprehensive study of the privacy
challenges associated with developing and deploying AI models for mental
health. We further prescribe potential solutions, including data anonymization,
synthetic data generation, and privacy-preserving model training, to strengthen
privacy safeguards in practical applications. Additionally, we discuss
evaluation frameworks to assess the privacy-utility trade-offs in these
approaches. By addressing these challenges, our work aims to advance the
development of reliable, privacy-aware AI tools to support clinical
decision-making and improve mental health outcomes.
|
2502.00455
|
Line Balancing in the Modern Garment Industry
|
cs.RO
|
This article presents applied research on line balancing in the modern
garment industry, focusing on the significant impact of intelligent hanger
systems and hanger lines on the stitching process, guided by Lean Methodology
principles for garment modernization. Without the implementation of
line balancing technology, the garment manufacturing process using hanger
systems cannot improve output rates. The case study demonstrates that
implementing intelligent line balancing in a straightforward practical setup
facilitates lean practices combined with a digitalization system and automation.
This approach illustrates how to enhance output and reduce accumulated work in
progress.
|
2502.00456
|
Explorations of the Softmax Space: Knowing When the Neural Network
Doesn't Know...
|
cs.LG cs.CV
|
Ensuring the reliability and safety of automated decision-making is crucial.
This paper proposes a new approach for measuring the reliability of predictions
in machine learning models. We analyze how the outputs of a trained neural
network change using clustering to measure distances between outputs and class
centroids. We propose this distance as a metric to evaluate the confidence of
predictions. We assign each prediction to a cluster whose centroid is the mean
softmax output over all correct predictions of a given class. We then define
the safety threshold for a class as the smallest distance from an incorrect
prediction to that class's centroid. We evaluate the approach on
the MNIST and CIFAR-10 datasets using a Convolutional Neural Network and a
Vision Transformer, respectively. The results show that our approach is
consistent across these datasets and network models, and indicate that the
proposed metric can offer an efficient way of determining when automated
predictions are acceptable and when they should be deferred to human operators.
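As a minimal sketch of the centroid-and-threshold procedure described above (function names and the use of Euclidean distance are illustrative assumptions; the paper's exact clustering details may differ):

```python
import numpy as np

def class_centroids(softmax_outputs, labels, preds, n_classes):
    """Mean softmax vector over the correct predictions of each class."""
    centroids = np.zeros((n_classes, softmax_outputs.shape[1]))
    for c in range(n_classes):
        mask = (labels == c) & (preds == c)   # correctly predicted class c
        centroids[c] = softmax_outputs[mask].mean(axis=0)
    return centroids

def safety_thresholds(softmax_outputs, labels, preds, centroids):
    """Smallest distance from an incorrect prediction of class c to
    the class-c centroid; infinite if no incorrect prediction exists."""
    n_classes = centroids.shape[0]
    thresholds = np.full(n_classes, np.inf)
    for c in range(n_classes):
        mask = (preds == c) & (labels != c)   # wrongly predicted as c
        if mask.any():
            d = np.linalg.norm(softmax_outputs[mask] - centroids[c], axis=1)
            thresholds[c] = d.min()
    return thresholds

def accept(softmax_vec, centroids, thresholds):
    """Accept a prediction only if it is closer to its class centroid
    than the nearest known incorrect prediction of that class."""
    c = int(np.argmax(softmax_vec))
    return np.linalg.norm(softmax_vec - centroids[c]) < thresholds[c]
```

A prediction falling outside its class's safety threshold would be deferred to a human operator rather than accepted automatically.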
|
2502.00459
|
AudioGenX: Explainability on Text-to-Audio Generative Models
|
cs.SD cs.AI cs.LG eess.AS
|
Text-to-audio generation models (TAG) have achieved significant advances in
generating audio conditioned on text descriptions. However, a critical
challenge lies in the lack of transparency regarding how each textual input
impacts the generated audio. To address this issue, we introduce AudioGenX, an
Explainable AI (XAI) method that provides explanations for text-to-audio
generation models by highlighting the importance of input tokens. AudioGenX
optimizes an Explainer by leveraging factual and counterfactual objective
functions to provide faithful explanations at the audio token level. This
method offers a detailed and comprehensive understanding of the relationship
between text inputs and audio outputs, enhancing both the explainability and
trustworthiness of TAG models. Extensive experiments demonstrate the
effectiveness of AudioGenX in producing faithful explanations, benchmarked
against existing methods using novel evaluation metrics specifically designed
for audio generation tasks.
|
2502.00461
|
On Multiquantum Bits, Segre Embeddings and Coxeter Chambers
|
quant-ph cs.IT math.AG math.IT math.NT
|
This work explores the interplay between quantum information theory,
algebraic geometry, and number theory, with a particular focus on multiqubit
systems, their entanglement structure, and their classification via geometric
embeddings. The Segre embedding, a fundamental construction in algebraic
geometry, provides an algebraic framework to distinguish separable and
entangled states, encoding quantum correlations in projective geometry. We
develop a systematic study of qubit moduli spaces, illustrating the geometric
structure of entanglement through hypercube constructions and Coxeter chamber
decompositions.
We establish a bijection between the Segre embeddings of tensor products of
projective spaces and binary words of length $n-1$, structured as an
$(n-1)$-dimensional hypercube, where adjacency corresponds to a single Segre
operation. This reveals a combinatorial structure underlying the hierarchy of
embeddings, with direct implications for quantum error correction schemes. The
symmetry of the Segre variety under the Coxeter group of type $A$ allows us to
analyze quantum states and errors through the lens of reflection groups,
viewing separable states as lying in distinct Coxeter chambers on a Segre
variety. The transitive action of the permutation group on these chambers
provides a natural method for tracking errors in quantum states and potentially
reversing them. Beyond foundational aspects, we highlight relations between
Segre varieties and Dixon elliptic curves, drawing connections between
entanglement and number theory.
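The stated bijection between Segre embeddings and binary words of length $n-1$ can be illustrated combinatorially. The sketch below (an interpretation with illustrative names, not the paper's construction) enumerates the hypercube vertices and the edges given by a single bit flip, i.e. one Segre operation:

```python
from itertools import product

def segre_hypercube(n):
    """Vertices: binary words of length n-1, indexing the Segre
    embeddings of tensor products of n projective spaces.
    Edges: pairs of words differing in exactly one position
    (a single Segre operation), forming an (n-1)-dimensional hypercube."""
    words = [''.join(w) for w in product('01', repeat=n - 1)]
    edges = [(u, v) for i, u in enumerate(words)
             for v in words[i + 1:]
             if sum(a != b for a, b in zip(u, v)) == 1]
    return words, edges
```

For $n$ qubits this yields $2^{n-1}$ vertices and $(n-1)\,2^{n-2}$ edges, the standard hypercube counts.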
|
2502.00462
|
MambaGlue: Fast and Robust Local Feature Matching With Mamba
|
cs.CV cs.RO
|
In recent years, robust matching methods using deep learning-based approaches
have been actively studied and improved in computer vision tasks. However,
there remains a persistent demand for both robust and fast matching techniques.
To address this, we propose a novel Mamba-based local feature matching
approach, called MambaGlue. Mamba is an emerging architecture rapidly gaining
recognition for its superior speed in both training and inference, along with
promising performance compared with Transformer architectures. In particular,
we propose two modules: a) a MambaAttention mixer
to simultaneously and selectively understand the local and global context
through the Mamba-based self-attention structure and b) a deep confidence score
regressor, a multi-layer perceptron (MLP)-based architecture that
evaluates a score indicating how confidently matching predictions correspond to
the ground-truth correspondences. Consequently, our MambaGlue achieves a
balance between robustness and efficiency in real-world applications. As
verified on various public datasets, we demonstrate that our MambaGlue yields a
substantial performance improvement over baseline approaches while maintaining
fast inference speed. Our code will be available on
https://github.com/url-kaist/MambaGlue
|
2502.00463
|
Efficient Over-parameterized Matrix Sensing from Noisy Measurements via
Alternating Preconditioned Gradient Descent
|
cs.LG math.OC stat.ML
|
We consider the noisy matrix sensing problem in the over-parameterization
setting, where the estimated rank $r$ is larger than the true rank $r_\star$.
Specifically, our main objective is to recover a matrix $ X_\star \in
\mathbb{R}^{n_1 \times n_2} $ with rank $ r_\star $ from noisy measurements
using an over-parameterized factorized form $ LR^\top $, where $ L \in
\mathbb{R}^{n_1 \times r}, \, R \in \mathbb{R}^{n_2 \times r} $ and $
\min\{n_1, n_2\} \ge r > r_\star $, with the true rank $ r_\star $ being
unknown. Recently, preconditioning methods have been proposed to accelerate the
convergence of the matrix sensing problem compared to vanilla gradient descent,
incorporating preconditioning terms $ (L^\top L + \lambda I)^{-1} $ and $
(R^\top R + \lambda I)^{-1} $ into the original gradient. However, these
methods require careful tuning of the damping parameter $\lambda$ and are
sensitive to initial points and step size. To address these limitations, we
propose the alternating preconditioned gradient descent (APGD) algorithm, which
alternately updates the two factor matrices, eliminating the need for the
damping parameter and enabling faster convergence with larger step sizes. We
theoretically prove that APGD achieves near-optimal error convergence at a
linear rate, starting from arbitrary random initializations. Through extensive
experiments, we validate our theoretical results and demonstrate that APGD
outperforms other methods, achieving the fastest convergence rate. Notably,
both our theoretical analysis and experimental results illustrate that APGD
does not rely on the initialization procedure, making it more practical and
versatile.
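A minimal sketch of the alternating preconditioned update, assuming for simplicity an identity sensing operator (the matrix is observed directly) and a plain Frobenius loss; here the pseudo-inverse of the other factor's Gram matrix plays the role of a damping-free preconditioner. These simplifications are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def apgd(Y, r, step=0.5, iters=300, seed=0):
    """Alternating preconditioned gradient descent sketch for
    min_{L,R} ||L R^T - Y||_F^2 in the over-parameterized setting
    (estimated rank r may exceed the true rank of Y).
    Each factor's gradient is right-multiplied by the pseudo-inverse
    of the other factor's Gram matrix, so no damping parameter lambda
    is needed, and the factors start from random initialization."""
    rng = np.random.default_rng(seed)
    n1, n2 = Y.shape
    L = rng.standard_normal((n1, r))
    R = rng.standard_normal((n2, r))
    for _ in range(iters):
        E = L @ R.T - Y                                   # residual
        L = L - step * E @ R @ np.linalg.pinv(R.T @ R)    # update L, R fixed
        E = L @ R.T - Y
        R = R - step * E.T @ L @ np.linalg.pinv(L.T @ L)  # update R, L fixed
    return L, R
```

With `step=1` each half-step reduces to an exact alternating least-squares solve; smaller steps give the damped variant sketched here.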
|
2502.00464
|
Evaluation of End-to-End Continuous Spanish Lipreading in Different Data
Conditions
|
cs.CV
|
Visual speech recognition remains an open research problem in which, since the
auditory sense is dispensed with, challenges such as visual ambiguities,
inter-personal variability among speakers, and the complex modeling of silence
must be addressed. Nonetheless, remarkable results have recently been
achieved in the field thanks to the availability of large-scale databases and
the use of powerful attention mechanisms. Besides, multiple languages apart
from English are nowadays a focus of interest. This paper presents noticeable
advances in automatic continuous lipreading for Spanish. First, an end-to-end
system based on the hybrid CTC/Attention architecture is presented. Experiments
are conducted on two corpora of disparate nature, reaching state-of-the-art
results that significantly improve the best performance obtained to date for
both databases. In addition, a thorough ablation study is carried out, where it
is studied how the different components that form the architecture influence
the quality of speech recognition. Then, a rigorous error analysis is carried
out to investigate the different factors that could affect the learning of the
automatic system. Finally, a new Spanish lipreading benchmark is consolidated.
Code and trained models are available at
https://github.com/david-gimeno/evaluating-end2end-spanish-lipreading.
|
2502.00465
|
Enhance Learning Efficiency of Oblique Decision Tree via Feature
Concatenation
|
cs.LG cs.AI stat.ML
|
Oblique Decision Tree (ODT) separates the feature space by linear
projections, as opposed to the conventional Decision Tree (DT) that forces
axis-parallel splits. ODT has been proven to have a stronger representation
ability than DT, as it provides a way to create shallower tree structures while
still approximating complex decision boundaries. However, its learning
efficiency is still insufficient, since the linear projections cannot be
transmitted to the child nodes, resulting in a waste of model parameters. In
this work, we propose an enhanced ODT method with Feature Concatenation
(\texttt{FC-ODT}), which enables in-model feature transformation to transmit
the projections along the decision paths. Theoretically, we prove that our
method enjoys a faster consistency rate w.r.t. the tree depth, indicating that
our method possesses a significant advantage in generalization performance,
especially for shallow trees. Experiments show that \texttt{FC-ODT} can
outperform the other state-of-the-art decision trees with a limited tree depth.
|
2502.00466
|
Enhancing Memory and Imagination Consistency in Diffusion-based World
Models via Linear-Time Sequence Modeling
|
cs.LG
|
World models are crucial for enabling agents to simulate and plan within
environments, yet existing approaches struggle with long-term dependencies and
inconsistent predictions. We introduce EDELINE, a novel framework that
integrates diffusion models with linear-time state space models to enhance
memory retention and temporal consistency. EDELINE employs a recurrent
embedding module based on Mamba SSMs for processing unbounded sequences, a
unified architecture for joint reward and termination prediction, and dynamic
loss harmonization to balance multi-task learning. Our results across multiple
benchmarks demonstrate EDELINE's superiority and robustness over prior
baselines in long-horizon tasks.
|
2502.00470
|
Distributed Primal-Dual Algorithms: Unification, Connections, and
Insights
|
math.OC cs.LG stat.ML
|
We study primal-dual algorithms for general empirical risk minimization
problems in distributed settings, focusing on two prominent classes of
algorithms. The first class is the communication-efficient distributed dual
coordinate ascent (CoCoA), derived from the coordinate ascent method for
solving the dual problem. The second class is the alternating direction method
of multipliers (ADMM), including consensus ADMM, linearized ADMM, and proximal
ADMM. We demonstrate that both classes of algorithms can be transformed into a
unified update form that involves only primal and dual variables. This
discovery reveals key connections between the two classes of algorithms: CoCoA
can be interpreted as a special case of proximal ADMM for solving the dual
problem, while consensus ADMM is closely related to a proximal ADMM algorithm.
This unification further suggests that, by adjusting the augmented Lagrangian
parameter, we can easily enable the ADMM variants to outperform the CoCoA
variants. We further explore linearized versions of ADMM and analyze the
effects of tuning parameters on these ADMM variants in the distributed setting.
Our theoretical findings are supported by extensive simulation studies and
real-world data analysis.
|