| id | title | categories | abstract |
|---|---|---|---|
2501.13734
|
Sample complexity of data-driven tuning of model hyperparameters in
neural networks with structured parameter-dependent dual function
|
cs.LG
|
Modern machine learning algorithms, especially deep learning based
techniques, typically involve careful hyperparameter tuning to achieve the best
performance. Despite the surge of intense interest in practical techniques like
Bayesian optimization and random search based approaches to automating this
laborious and compute intensive task, the fundamental learning theoretic
complexity of tuning hyperparameters for deep neural networks is poorly
understood. Inspired by this glaring gap, we initiate the formal study of
hyperparameter tuning complexity in deep learning through a recently introduced
data driven setting. We assume that we have a series of deep learning tasks,
and we have to tune hyperparameters to do well on average over the distribution
of tasks. A major difficulty is that the utility function as a function of the
hyperparameter is very volatile and furthermore, it is given implicitly by an
optimization problem over the model parameters. To tackle this challenge, we
introduce a new technique to characterize the discontinuities and oscillations
of the utility function on any fixed problem instance as we vary the
hyperparameter; our analysis relies on subtle concepts including tools from
differential/algebraic geometry and constrained optimization. This can be used
to show that the learning theoretic complexity of the corresponding family of
utility functions is bounded. We instantiate our results and provide sample
complexity bounds for two concrete applications: tuning a hyperparameter that
interpolates between neural activation functions, and setting the kernel
parameter in graph neural networks.
|
2501.13735
|
A Study of the Plausibility of Attention between RNN Encoders in Natural
Language Inference
|
cs.CL
|
Attention maps in neural models for NLP are appealing to explain the decision
made by a model, hopefully emphasizing words that justify the decision. While
many empirical studies hint that attention maps can provide such justification
from the analysis of sound examples, only a few assess the plausibility of
explanations based on attention maps, i.e., the usefulness of attention maps
for humans to understand the decision. These studies furthermore focus on text
classification. In this paper, we report on a preliminary assessment of
attention maps in a sentence comparison task, namely natural language
inference. We compare the cross-attention weights between two RNN encoders with
human-based and heuristic-based annotations on the eSNLI corpus. We show that
the heuristic reasonably correlates with human annotations and can thus
facilitate evaluation of plausible explanations in sentence comparison tasks.
Raw attention weights however remain only loosely related to a plausible
explanation.
|
2501.13736
|
Discrete Layered Entropy, Conditional Compression and a Tighter Strong
Functional Representation Lemma
|
cs.IT math.IT
|
We study a quantity called discrete layered entropy, which approximates the
Shannon entropy within a logarithmic gap. Compared to the Shannon entropy, the
discrete layered entropy is piecewise linear, approximates the expected length
of the optimal one-to-one non-prefix-free encoding, and satisfies an elegant
conditioning property. These properties make it useful for approximating the
Shannon entropy in linear programming, studying the optimal length of
conditional encoding, and bounding the entropy of monotonic mixture
distributions. In particular, it can give a bound for the strong functional
representation lemma that improves upon the best bound (as long as the mutual
information is at least 2).
|
2501.13743
|
GPT-HTree: A Decision Tree Framework Integrating Hierarchical Clustering
and Large Language Models for Explainable Classification
|
cs.LG
|
This paper introduces GPT-HTree, a framework combining hierarchical
clustering, decision trees, and large language models (LLMs) to address the
challenge of accurate yet explainable classification. By leveraging
hierarchical clustering to segment individuals based
on salient features, resampling techniques to balance class distributions, and
decision trees to tailor classification paths within each cluster, GPT-HTree
ensures both accuracy and interpretability. LLMs enhance the framework by
generating human-readable cluster descriptions, bridging quantitative analysis
with actionable insights.
|
2501.13744
|
Centralized Versus Distributed Routing for Large-Scale Satellite
Networks
|
cs.NI cs.SY eess.SY
|
An important choice in the design of satellite networks is whether the
routing decisions are made in a distributed manner onboard the satellite, or
centrally on a ground-based controller. We study the tradeoff between
centralized and distributed routing in large-scale satellite networks. In
particular, we consider a centralized routing scheme that has access to global
but delayed network state information and a distributed routing scheme that has
access to local but real-time network state information. For both routing
schemes, we analyze the throughput and delay performance of shortest-path
algorithms in networks with and without buffers onboard the satellites. We show
that distributed routing outperforms centralized routing when the rate of
changes in the network link state is comparable to the inherent propagation and
transmission delays. In particular, we show that in highly dynamic networks
without buffers, the distributed scheme achieves higher throughput than a
centralized scheme. In networks with buffers, the distributed scheme achieves
lower delays with the same throughput.
|
2501.13746
|
EICopilot: Search and Explore Enterprise Information over Large-scale
Knowledge Graphs with LLM-driven Agents
|
cs.IR cs.AI
|
The paper introduces EICopilot, a novel agent-based solution enhancing
search and exploration of enterprise registration data within extensive online
knowledge graphs like those detailing legal entities, registered capital, and
major shareholders. Traditional methods necessitate text-based queries and
manual subgraph explorations, often resulting in time-consuming processes.
EICopilot, deployed as a chatbot via Baidu Enterprise Search, improves this
landscape by utilizing Large Language Models (LLMs) to interpret natural
language queries. This solution automatically generates and executes Gremlin
scripts, providing efficient summaries of complex enterprise relationships.
Distinct features include a data pre-processing pipeline that compiles and annotates
representative queries into a vector database of examples for In-context
learning (ICL), a comprehensive reasoning pipeline combining Chain-of-Thought
with ICL to enhance Gremlin script generation for knowledge graph search and
exploration, and a novel query masking strategy that improves intent
recognition for heightened script accuracy. Empirical evaluations demonstrate
the superior performance of EICopilot, including speed and accuracy, over
baseline methods, with the \emph{Full Mask} variant achieving a syntax error
rate reduction to as low as 10.00% and an execution correctness of up to
82.14%. These components collectively contribute to superior querying
capabilities and summarization of intricate datasets, positioning EICopilot as
a groundbreaking tool in the exploration and exploitation of large-scale
knowledge graphs for enterprise information search.
|
2501.13748
|
Exact Soft Analytical Side-Channel Attacks using Tractable Circuits
|
cs.LG cs.CR
|
Detecting weaknesses in cryptographic algorithms is of utmost importance for
designing secure information systems. The state-of-the-art soft analytical
side-channel attack (SASCA) uses physical leakage information to make
probabilistic predictions about intermediate computations and combines these
"guesses" with the known algorithmic logic to compute the posterior
distribution over the key. This attack is commonly performed via loopy belief
propagation, which, however, lacks guarantees in terms of convergence and
inference quality. In this paper, we develop a fast and exact inference method
for SASCA, denoted as ExSASCA, by leveraging knowledge compilation and
tractable probabilistic circuits. When attacking the Advanced Encryption
Standard (AES), the most widely used encryption algorithm to date, ExSASCA
outperforms SASCA by more than 31% top-1 success rate absolute. By leveraging
sparse belief messages, this performance is achieved with little more
computational cost than SASCA, and about 3 orders of magnitude less than exact
inference via exhaustive enumeration. Even with dense belief messages, ExSASCA
still uses 6 times less computation than exhaustive inference.
|
2501.13751
|
On Disentangled Training for Nonlinear Transform in Learned Image
Compression
|
eess.IV cs.CV
|
Learned image compression (LIC) has demonstrated superior rate-distortion
(R-D) performance compared to traditional codecs, but suffers from training
inefficiency: training a state-of-the-art model from scratch can take more
than two weeks. Existing LIC methods overlook the slow convergence caused
by compacting energy in learning nonlinear transforms. In this paper, we first
reveal that such energy compaction consists of two components, i.e., feature
decorrelation and uneven energy modulation. On such basis, we propose a linear
auxiliary transform (AuxT) to disentangle energy compaction in training
nonlinear transforms. The proposed AuxT obtains a coarse approximation to achieve
efficient energy compaction such that distribution fitting with the nonlinear
transforms can be simplified to fine details. We then develop wavelet-based
linear shortcuts (WLSs) for AuxT that leverage wavelet-based downsampling and
orthogonal linear projection for feature decorrelation and subband-aware
scaling for
|
2501.13756
|
Solving the long-tailed distribution problem by exploiting the synergies
and balance of different techniques
|
cs.CV cs.AI cs.LG
|
In real-world data, long-tailed distributions are common, making it
challenging for models trained with empirical risk minimisation to learn and
classify tail classes effectively. While many studies have sought to improve
long tail recognition by altering the data distribution in the feature space
and adjusting model decision boundaries, research on the synergy and corrective
approach among various methods is limited. Our study delves into three
long-tail recognition techniques: Supervised Contrastive Learning (SCL),
Rare-Class Sample Generator (RSG), and Label-Distribution-Aware Margin Loss
(LDAM). SCL enhances intra-class clusters based on feature similarity and
promotes clear inter-class separability but tends to favour dominant classes
only. When RSG is integrated into the model, we observed that the intra-class
features further cluster towards the class centre, which demonstrates a
synergistic effect together with SCL's principle of enhancing intra-class
clustering. RSG generates new tail features and compensates for the tail
feature space squeezed by SCL. Similarly, LDAM is known to introduce a larger
margin specifically for tail classes; we demonstrate that LDAM further bolsters
the model's performance on tail classes when combined with the more explicit
decision boundaries achieved by SCL and RSG. Furthermore, SCL can compensate
for the dominant class accuracy sacrificed by RSG and LDAM. Our research
emphasises the synergy and balance among the three techniques, with each
amplifying the strengths of the others and mitigating their shortcomings. Our
experiment on long-tailed distribution datasets, using an end-to-end
architecture, yields competitive results by enhancing tail class accuracy
without compromising dominant class performance, achieving a balanced
improvement across all classes.
|
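For reference, the class-dependent margins of LDAM follow $\Delta_j = C / n_j^{1/4}$ (Cao et al., 2019), so the tail classes discussed above receive the largest margins. A quick sketch with hypothetical class sizes (our own toy code, not the paper's):

```python
# Hypothetical sketch of per-class LDAM margins: margin_j = C / n_j^(1/4),
# so tail classes with few samples receive larger margins.
# The constant C and the class counts below are our own choices.
def ldam_margins(class_counts, C=0.5):
    return [C / n ** 0.25 for n in class_counts]

counts = [1000, 100, 10]  # head, mid, tail class sizes
print(ldam_margins(counts))  # margins grow as class size shrinks
```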
2501.13758
|
2-Tier SimCSE: Elevating BERT for Robust Sentence Embeddings
|
cs.CL cs.AI cs.LG
|
Effective sentence embeddings that capture semantic nuances and generalize
well across diverse contexts are crucial for natural language processing tasks.
We address this challenge by applying SimCSE (Simple Contrastive Learning of
Sentence Embeddings) using contrastive learning to fine-tune the minBERT model
for sentiment analysis, semantic textual similarity (STS), and paraphrase
detection. Our contributions include experimenting with three different dropout
techniques, namely standard dropout, curriculum dropout, and adaptive dropout,
to tackle overfitting, proposing a novel 2-Tier SimCSE Fine-tuning Model that
combines both unsupervised and supervised SimCSE on STS task, and exploring
transfer learning potential for Paraphrase and SST tasks. Our findings
demonstrate the effectiveness of SimCSE, with the 2-Tier model achieving
superior performance on the STS task, with an average test score of 0.742
across all three downstream tasks. The error analysis reveals
challenges in handling complex sentiments and reliance on lexical overlap for
paraphrase detection, highlighting areas for future research. The ablation
study revealed that removing Adaptive Dropout in the Single-Task Unsupervised
SimCSE Model led to improved performance on the STS task, indicating
overfitting due to added parameters. Transfer learning from SimCSE models on
Paraphrase and SST tasks did not enhance performance, suggesting limited
transferability of knowledge from the STS task.
|
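As a refresher, the unsupervised SimCSE objective underlying the 2-Tier model treats two dropout-noised encodings of the same sentence as a positive pair and other in-batch sentences as negatives. A minimal sketch (our own illustration with toy embeddings, not the authors' code):

```python
import math

def cos(u, v):
    """Cosine similarity between two dense vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def simcse_loss(z1, z2, temp=0.05):
    """InfoNCE loss: z1[i], z2[i] are two dropout-noised encodings of
    sentence i; other rows of z2 serve as in-batch negatives."""
    n, loss = len(z1), 0.0
    for i in range(n):
        logits = [cos(z1[i], z2[j]) / temp for j in range(n)]
        log_z = math.log(sum(math.exp(l) for l in logits))
        loss += log_z - logits[i]  # cross-entropy, positive at index i
    return loss / n

# Toy 2-D "sentence embeddings": matched rows are the positive pairs.
z1 = [[1.0, 0.1], [0.1, 1.0]]
z2 = [[0.9, 0.2], [0.2, 0.9]]
print(simcse_loss(z1, z2))
```

Shuffling the positives (so row i of z2 no longer matches row i of z1) raises this loss, which is the behavior the contrastive objective rewards.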
2501.13762
|
On Deciding the Data Complexity of Answering Linear Monadic Datalog
Queries with LTL Operators (Extended Version)
|
cs.AI cs.CC cs.LO
|
Our concern is the data complexity of answering linear monadic datalog
queries whose atoms in the rule bodies can be prefixed by operators of linear
temporal logic LTL. We first observe that, for data complexity, answering any
connected query with operators $\bigcirc/\bigcirc^-$ (at the next/previous
moment) is either in AC0, or in $ACC0\!\setminus\!AC0$, or $NC^1$-complete, or
LogSpace-hard and in NLogSpace. Then we show that the problem of deciding
LogSpace-hardness of answering such queries is PSpace-complete, while checking
membership in the classes AC0 and ACC0 as well as $NC^1$-completeness can be
done in ExpSpace. Finally, we prove that membership in AC0 or in ACC0,
$NC^1$-completeness, and LogSpace-hardness are undecidable for queries with
operators $\Diamond_f/\Diamond_p$ (sometime in the future/past) provided that
$NC^1 \ne NLogSpace$, and $LogSpace \ne NLogSpace$.
|
2501.13763
|
Integrating Causality with Neurochaos Learning: Proposed Approach and
Research Agenda
|
cs.LG cs.AI
|
Deep learning, implemented via neural networks, has revolutionized machine
learning by providing methods for complex tasks such as object
detection/classification and prediction. However, architectures based on deep
neural networks have started to yield diminishing returns, primarily due to
their statistical nature and inability to capture causal structure in the
training data. Another issue with deep learning is its high energy consumption,
which is not that desirable from a sustainability perspective.
Therefore, alternative approaches are being considered to address these
issues, both of which are inspired by the functioning of the human brain. One
approach is causal learning, which takes into account causality among the items
in the dataset on which the neural network is trained. It is expected that this
will help minimize the spurious correlations that are prevalent in the learned
representations of deep neural networks. The other approach is Neurochaos
Learning, a recent development, which draws its inspiration from the nonlinear
chaotic firing intrinsic to neurons in biological neural networks
(brain/central nervous system). Both approaches have shown improved results
over just deep learning alone.
To that end, in this position paper, we investigate how causal and neurochaos
learning approaches can be integrated together to produce better results,
especially in domains that contain linked data. We propose an approach for this
integration to enhance classification, prediction and reinforcement learning.
We also propose a set of research questions that need to be investigated in
order to make this integration a reality.
|
2501.13766
|
UGMathBench: A Diverse and Dynamic Benchmark for Undergraduate-Level
Mathematical Reasoning with Large Language Models
|
cs.CL cs.AI
|
Large Language Models (LLMs) have made significant strides in mathematical
reasoning, underscoring the need for a comprehensive and fair evaluation of
their capabilities. However, existing benchmarks often fall short, either
lacking extensive coverage of undergraduate-level mathematical problems or
potentially suffering from test-set contamination. To address these issues, we
introduce UGMathBench, a diverse and dynamic benchmark specifically designed
for evaluating undergraduate-level mathematical reasoning with LLMs.
UGMathBench comprises 5,062 problems across 16 subjects and 111 topics,
featuring 10 distinct answer types. Each problem includes three randomized
versions, with additional versions planned for release as leading open-source
LLMs reach saturation on UGMathBench. Furthermore, we propose two key metrics:
effective accuracy (EAcc), which measures the percentage of correctly solved
problems across all three versions, and reasoning gap ($\Delta$), which
assesses reasoning robustness by calculating the difference between the average
accuracy across all versions and EAcc. Our extensive evaluation of 23 leading
LLMs reveals that the highest EAcc achieved is 56.3\% by OpenAI-o1-mini, with
large $\Delta$ values observed across different models. This highlights the
need for future research aimed at developing "large reasoning models" with high
EAcc and $\Delta = 0$. We anticipate that the release of UGMathBench, along
with its detailed evaluation codes, will serve as a valuable resource to
advance the development of LLMs in solving mathematical problems.
|
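The two metrics above are simple to state precisely; a minimal sketch (our own illustrative code with hypothetical result data, not the benchmark's implementation) follows:

```python
# Illustrative sketch of the two UGMathBench metrics described above.
# results[p][v] is True iff the model solved problem p in randomized
# version v; three versions per problem.

def effective_accuracy(results):
    """EAcc: fraction of problems solved correctly in *all* versions."""
    return sum(all(versions) for versions in results) / len(results)

def reasoning_gap(results):
    """Delta: average accuracy across all versions minus EAcc."""
    n_versions = len(results[0])
    avg_acc = sum(sum(v) for v in results) / (len(results) * n_versions)
    return avg_acc - effective_accuracy(results)

# Hypothetical results: 4 problems, 3 versions each.
results = [
    [True, True, True],     # robustly solved
    [True, False, True],    # brittle: fails one rephrasing
    [False, False, False],  # never solved
    [True, True, False],
]
print(effective_accuracy(results))       # 0.25
print(round(reasoning_gap(results), 4))  # 0.3333
```

A model with $\Delta = 0$ solves a problem either in every version or in none, which is the robustness target the abstract argues for.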
2501.13767
|
An Efficient Diffusion-based Non-Autoregressive Solver for Traveling
Salesman Problem
|
cs.LG
|
Recent advances in neural models have shown considerable promise in solving
Traveling Salesman Problems (TSPs) without relying on much hand-crafted
engineering. However, while non-autoregressive (NAR) approaches benefit from
faster inference through parallelism, they typically deliver solutions of
inferior quality compared to autoregressive ones. To enhance the solution
quality while maintaining fast inference, we propose DEITSP, a diffusion model
with efficient iterations tailored for TSP that operates in a NAR manner.
Firstly, we introduce a one-step diffusion model that integrates the controlled
discrete noise addition process with self-consistency enhancement, enabling
optimal solution prediction through simultaneous denoising of multiple
solutions. Secondly, we design a dual-modality graph transformer to bolster the
extraction and fusion of features from node and edge modalities, while further
accelerating the inference with fewer layers. Thirdly, we develop an efficient
iterative strategy that alternates between adding and removing noise to improve
exploration compared to previous diffusion methods. Additionally, we devise a
scheduling framework to progressively refine the solution space by adjusting
noise levels, facilitating a smooth search for optimal solutions. Extensive
experiments on real-world and large-scale TSP instances demonstrate that DEITSP
performs favorably against existing neural approaches in terms of solution
quality, inference latency, and generalization ability. Our code is available
at $\href{https://github.com/DEITSP/DEITSP}{https://github.com/DEITSP/DEITSP}$.
|
2501.13772
|
Tune In, Act Up: Exploring the Impact of Audio Modality-Specific Edits
on Large Audio Language Models in Jailbreak
|
cs.SD cs.AI cs.LG cs.MM eess.AS
|
Large Language Models (LLMs) demonstrate remarkable zero-shot performance
across various natural language processing tasks. The integration of multimodal
encoders extends their capabilities, enabling the development of Multimodal
Large Language Models that process vision, audio, and text. However, these
capabilities also raise significant security concerns, as these models can be
manipulated to generate harmful or inappropriate content through jailbreak.
While extensive research explores the impact of modality-specific input edits
on text-based LLMs and Large Vision-Language Models in jailbreak, the effects
of audio-specific edits on Large Audio-Language Models (LALMs) remain
underexplored. Hence, this paper addresses this gap by investigating how
audio-specific edits influence LALMs' inference in jailbreak scenarios. We
introduce the Audio Editing Toolbox (AET), which enables audio-modality edits
such as tone adjustment, word emphasis, and noise injection, and the Edited
Audio Datasets (EADs), a comprehensive audio jailbreak benchmark. We also
conduct extensive evaluations of state-of-the-art LALMs to assess their
robustness under different audio edits. This work lays the groundwork for
future explorations on audio-modality interactions in LALMs security.
|
2501.13773
|
Do Large Language Models Truly Understand Geometric Structures?
|
cs.CL
|
Geometric ability is a significant challenge for large language models (LLMs)
due to the need for advanced spatial comprehension and abstract thinking.
Existing datasets primarily evaluate LLMs on their final answers, but they
cannot truly measure their understanding of geometric structures, as LLMs
can arrive at correct answers by coincidence. To fill this gap, we introduce
the GeomRel dataset, designed to evaluate LLMs' understanding of geometric
structures by isolating the core step of geometric relationship identification
in problem-solving. Using this benchmark, we conduct thorough evaluations of
diverse LLMs and identify key limitations in understanding geometric
structures. We further propose the Geometry Chain-of-Thought (GeoCoT) method,
which enhances LLMs' ability to identify geometric relationships, resulting in
significant performance improvements.
|
2501.13776
|
Crossfire: An Elastic Defense Framework for Graph Neural Networks Under
Bit Flip Attacks
|
cs.LG
|
Bit Flip Attacks (BFAs) are a well-established class of adversarial attacks,
originally developed for Convolutional Neural Networks within the computer
vision domain. Most recently, these attacks have been extended to target Graph
Neural Networks (GNNs), revealing significant vulnerabilities. This new
development naturally raises questions about the best strategies to defend GNNs
against BFAs, a challenge for which no solutions currently exist. Given the
applications of GNNs in critical fields, any defense mechanism must not only
maintain network performance, but also verifiably restore the network to its
pre-attack state. Verifiably restoring the network to its pre-attack state also
eliminates the need for costly evaluations on test data to ensure network
quality. We offer the first insights into the effectiveness of existing honeypot-
and hashing-based defenses against BFAs adapted from the computer vision domain
to GNNs, and characterize the shortcomings of these approaches. To overcome
their limitations, we propose Crossfire, a hybrid approach that exploits weight
sparsity and combines hashing and honeypots with bit-level correction of
out-of-distribution weight elements to restore network integrity. Crossfire is
retraining-free and does not require labeled data. Averaged over 2,160
experiments on six benchmark datasets, Crossfire offers a 21.8% higher
probability than its competitors of reconstructing a GNN attacked by a BFA to
its pre-attack state. These experiments cover up to 55 bit flips from various
attacks. Moreover, it improves post-repair prediction quality by 10.85%.
Computational and storage overheads are negligible compared to the inherent
complexity of even the simplest GNNs.
|
2501.13778
|
Explainable XR: Understanding User Behaviors of XR Environments using
LLM-assisted Analytics Framework
|
cs.HC cs.CL
|
We present Explainable XR, an end-to-end framework for analyzing user
behavior in diverse eXtended Reality (XR) environments by leveraging Large
Language Models (LLMs) for data interpretation assistance. Existing XR user
analytics frameworks face challenges in handling cross-virtuality - AR, VR, MR
- transitions, multi-user collaborative application scenarios, and the
complexity of multimodal data. Explainable XR addresses these challenges by
providing a virtuality-agnostic solution for the collection, analysis, and
visualization of immersive sessions. We propose three main components in our
framework: (1) A novel user data recording schema, called User Action
Descriptor (UAD), that can capture the users' multimodal actions, along with
their intents and the contexts; (2) a platform-agnostic XR session recorder,
and (3) a visual analytics interface that offers LLM-assisted insights tailored
to the analysts' perspectives, facilitating the exploration and analysis of the
recorded XR session data. We demonstrate the versatility of Explainable XR
through five use-case scenarios, in both individual and collaborative XR
applications across virtualities. Our technical evaluation and user studies
show that Explainable XR provides a highly usable analytics solution for
understanding user actions and delivering multifaceted, actionable insights
into user behaviors in immersive environments.
|
2501.13779
|
Not Every AI Problem is a Data Problem: We Should Be Intentional About
Data Scaling
|
cs.LG cs.AI
|
While Large Language Models require more and more data to train and scale,
rather than looking for any data to acquire, we should consider what types of
tasks are more likely to benefit from data scaling. We should be intentional in
our data acquisition. We argue that the topology of data itself informs which
tasks to prioritize in data scaling, and shapes the development of the next
generation of compute paradigms for tasks where data scaling is inefficient, or
even insufficient.
|
2501.13780
|
Matrix Completion in Group Testing: Bounds and Simulations
|
cs.IT cs.LG math.IT
|
The main goal of group testing is to identify a small number of defective
items in a large population of items. A test on a subset of items is positive
if the subset contains at least one defective item and negative otherwise. In
non-adaptive design, all tests can be tested simultaneously and represented by
a measurement matrix in which a row and a column represent a test and an item,
respectively. An entry in row $i$ and column $j$ is 1 if item $j$ belongs to
the test $i$ and is 0 otherwise. Given an unknown set of defective items, the
objective is to design a measurement matrix such that, by observing its
corresponding outcome vector, the defective items can be recovered efficiently.
A basic premise of this approach is that the measurement matrix remains
unchanged throughout the course of generating the outcome vector and recovering
defective items. In this paper, we study the case in which some entries in the
measurement matrix are erased, called \emph{the missing measurement matrix},
before the recovery phase of the defective items, and our objective is to fully
recover the measurement matrix from the missing measurement matrix. In
particular, we show that some specific rows with erased entries provide
information aiding the recovery while others do not. Assuming that the
measurement matrix entries and the erasures follow Bernoulli distributions, we show that before the
erasing event happens, sampling sufficient sets of defective items and their
corresponding outcome vectors can help us recover the measurement matrix from
the missing measurement matrix.
|
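To make the setup concrete, here is a small self-contained sketch (our own toy example, not from the paper) of a Bernoulli measurement matrix and its outcome vector:

```python
import random

# Toy non-adaptive group-testing instance (illustrative only).
random.seed(0)
n_tests, n_items = 6, 10

# Bernoulli(0.3) measurement matrix: M[i][j] = 1 iff item j is in test i.
M = [[int(random.random() < 0.3) for _ in range(n_items)]
     for _ in range(n_tests)]

defective = {2, 7}  # the unknown defective set

# Outcome vector: test i is positive iff it contains a defective item.
outcome = [int(any(M[i][j] for j in defective)) for i in range(n_tests)]
print(outcome)
```

Erasing some entries of M afterwards would yield the "missing measurement matrix" the paper studies; recovery then amounts to inferring the erased 0/1 entries from sampled pairs of defective sets and their outcome vectors.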
2501.13782
|
Defending against Adversarial Malware Attacks on ML-based Android
Malware Detection Systems
|
cs.CR cs.AI cs.LG cs.SE
|
Android malware presents a persistent threat to users' privacy and data
integrity. To combat this, researchers have proposed machine learning-based
(ML-based) Android malware detection (AMD) systems. However, adversarial
Android malware attacks compromise the detection integrity of the ML-based AMD
systems, raising significant concerns. Existing defenses against adversarial
Android malware provide protections against feature space attacks which
generate adversarial feature vectors only, leaving protection against realistic
threats from problem space attacks which generate real adversarial malware an
open problem. In this paper, we address this gap by proposing ADD, a practical
adversarial Android malware defense framework designed as a plug-in to enhance
the adversarial robustness of the ML-based AMD systems against problem space
attacks. Our extensive evaluation across various ML-based AMD systems
demonstrates that ADD is effective against state-of-the-art problem space
adversarial Android malware attacks. Additionally, ADD shows the defense
effectiveness in enhancing the adversarial robustness of real-world antivirus
solutions.
|
2501.13784
|
Rate-Distortion Region for Distributed Indirect Source Coding with
Decoder Side Information
|
cs.IT math.IT
|
This paper studies a variant of the rate-distortion problem motivated by
task-oriented semantic communication and distributed learning systems, where
$M$ correlated sources are independently encoded for a central decoder. The
decoder has access to correlated side information in addition to the messages
received from the encoders and aims to recover a latent random variable under a
given distortion constraint, rather than recovering the sources themselves. We
characterize the exact rate-distortion function for the case where the sources
are conditionally independent given the side information. Furthermore, we
develop a distributed Blahut-Arimoto (BA) algorithm to numerically compute the
rate-distortion function. Numerical examples are provided to demonstrate the
effectiveness of the proposed approach in calculating the rate-distortion
region.
|
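For intuition, the classical single-source Blahut-Arimoto iteration that the distributed algorithm generalizes can be sketched as follows (our own minimal version; the paper's variant additionally handles multiple encoders and decoder side information):

```python
import math

def blahut_arimoto(p_x, dist, s, n_iter=100):
    """Trace one point of R(D): p_x is the source pmf, dist[x][y] the
    distortion measure, s < 0 the slope parameter of the R(D) curve."""
    nx, ny = len(p_x), len(dist[0])
    q = [1.0 / ny] * ny  # reproduction distribution, refined each pass
    for _ in range(n_iter):
        # p(y|x) proportional to q(y) * exp(s * d(x, y))
        cond = []
        for x in range(nx):
            w = [q[y] * math.exp(s * dist[x][y]) for y in range(ny)]
            z = sum(w)
            cond.append([wi / z for wi in w])
        q = [sum(p_x[x] * cond[x][y] for x in range(nx)) for y in range(ny)]
    rate = sum(p_x[x] * cond[x][y] * math.log2(cond[x][y] / q[y])
               for x in range(nx) for y in range(ny) if cond[x][y] > 0)
    distortion = sum(p_x[x] * cond[x][y] * dist[x][y]
                     for x in range(nx) for y in range(ny))
    return rate, distortion

# Binary uniform source, Hamming distortion: here R(D) = 1 - H(D).
R, D = blahut_arimoto([0.5, 0.5], [[0, 1], [1, 0]], s=-2.0)
print(R, D)
```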
2501.13786
|
Fast Iterative and Task-Specific Imputation with Online Learning
|
cs.LG
|
Missing feature values are a significant hurdle for downstream
machine-learning tasks such as classification and regression. However, they are
pervasive in multiple real-life use cases, for instance, in drug discovery
research. Moreover, imputation methods might be time-consuming and offer few
guarantees on the imputation quality, especially for not-missing-at-random
mechanisms. We propose an imputation approach named F3I based on the iterative
improvement of a K-nearest neighbor imputation that learns the weights for each
neighbor of a data point, optimizing for the most likely distribution of points
over data points. This algorithm can also be jointly trained with a downstream
task on the imputed values. We provide a theoretical analysis of the imputation
quality by F3I for several types of missing mechanisms. We also demonstrate the
performance of F3I on both synthetic data sets and real-life drug repurposing
and handwritten-digit recognition data.
|
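As background, a plain K-nearest-neighbor imputation, which F3I iteratively improves by learning per-neighbor weights, looks roughly like this; the uniform weighting below is our simplification, not the paper's method, and all names are our own:

```python
# Generic KNN imputation sketch: fill None entries from the k closest
# fully observed rows, measuring distance on observed coordinates only.
def knn_impute(rows, k=2):
    complete = [r for r in rows if None not in r]
    out = []
    for r in rows:
        if None not in r:
            out.append(list(r))
            continue
        obs = [j for j, v in enumerate(r) if v is not None]
        # k nearest complete rows by squared L2 on the shared features
        nearest = sorted(
            complete, key=lambda c: sum((r[j] - c[j]) ** 2 for j in obs)
        )[:k]
        out.append([
            v if v is not None else sum(c[j] for c in nearest) / len(nearest)
            for j, v in enumerate(r)
        ])
    return out

data = [[1.0, 2.0], [1.1, 2.2], [5.0, 6.0], [1.0, None]]
print(knn_impute(data)[3])  # -> [1.0, 2.1]: average of the two nearest rows
```

F3I replaces the uniform average with learned per-neighbor weights and repeats the pass, optionally backpropagating a downstream task loss through the imputed values.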
2501.13787
|
Parameter-Efficient Fine-Tuning for Foundation Models
|
cs.CL cs.AI cs.LG
|
This survey delves into the realm of Parameter-Efficient Fine-Tuning (PEFT)
within the context of Foundation Models (FMs). PEFT, a cost-effective
fine-tuning technique, minimizes parameters and computational complexity while
striving for optimal downstream task performance. FMs, like ChatGPT, DALL-E,
and LLaVA specialize in language understanding, generative tasks, and
multimodal tasks, trained on diverse datasets spanning text, images, and
videos. The diversity of FMs guides various adaptation strategies for PEFT.
Therefore, this survey aims to provide a comprehensive overview of PEFT
techniques applied to diverse FMs and address critical gaps in understanding
the techniques, trends, and applications. We start by providing a detailed
development of FMs and PEFT. Subsequently, we systematically review the key
categories and core mechanisms of PEFT across diverse FMs to offer a
comprehensive understanding of trends. We also explore the most recent
applications across various FMs to demonstrate the versatility of PEFT,
shedding light on the integration of systematic PEFT methods with a range of
FMs. Furthermore, we identify potential research and development directions for
improving PEFTs in the future. This survey provides a valuable resource for
both newcomers and experts seeking to understand and use the power of PEFT
across FMs. All reviewed papers are listed at
\url{https://github.com/THUDM/Awesome-Parameter-Efficient-Fine-Tuning-for-Foundation-Models}.
|
2501.13790
|
Local Steps Speed Up Local GD for Heterogeneous Distributed Logistic
Regression
|
cs.LG
|
We analyze two variants of Local Gradient Descent applied to distributed
logistic regression with heterogeneous, separable data and show convergence at
the rate $O(1/KR)$ for $K$ local steps and sufficiently large $R$ communication
rounds. In contrast, all existing convergence guarantees for Local GD applied
to any problem are at least $\Omega(1/R)$, meaning they fail to show the
benefit of local updates. The key to our improved guarantee is showing progress
on the logistic regression objective when using a large stepsize $\eta \gg
1/K$, whereas prior analysis depends on $\eta \leq 1/K$.
|
2501.13794
|
Unveiling the Power of Noise Priors: Enhancing Diffusion Models for
Mobile Traffic Prediction
|
cs.LG
|
Accurate prediction of mobile traffic, \textit{i.e.,} network traffic from
cellular base stations, is crucial for optimizing network performance and
supporting urban development. However, the non-stationary nature of mobile
traffic, driven by human activity and environmental changes, leads to both
regular patterns and abrupt variations. Diffusion models excel in capturing
such complex temporal dynamics due to their ability to capture the inherent
uncertainties. Most existing approaches prioritize designing novel denoising
networks but often neglect the critical role of noise itself, potentially
leading to sub-optimal performance. In this paper, we introduce a novel
perspective by emphasizing the role of noise in the denoising process. Our
analysis reveals that noise fundamentally shapes mobile traffic predictions,
exhibiting distinct and consistent patterns. We propose NPDiff, a framework
that decomposes noise into \textit{prior} and \textit{residual} components,
with the \textit{prior} derived from data dynamics, enhancing the model's
ability to capture both regular and abrupt variations. NPDiff can seamlessly
integrate with various diffusion-based prediction models, delivering
predictions that are effective, efficient, and robust. Extensive experiments
demonstrate that it achieves superior performance with an improvement over
30\%, offering a new perspective on leveraging diffusion models in this domain.
|
2501.13795
|
Training-Free Zero-Shot Temporal Action Detection with Vision-Language
Models
|
cs.CV
|
Existing zero-shot temporal action detection (ZSTAD) methods predominantly
use fully supervised or unsupervised strategies to recognize unseen activities.
However, these training-based methods are prone to domain shifts and incur
high computational costs, which hinder their practical applicability in
real-world scenarios. In this paper, unlike previous works, we propose a
training-Free Zero-shot temporal Action Detection (FreeZAD) method, leveraging
existing vision-language (ViL) models to directly classify and localize unseen
activities within untrimmed videos without any additional fine-tuning or
adaptation. We mitigate the need for explicit temporal modeling and reliance on
pseudo-label quality by designing the LOGarithmic decay weighted
Outer-Inner-Contrastive Score (LogOIC) and frequency-based Actionness
Calibration. Furthermore, we introduce a test-time adaptation (TTA) strategy
using Prototype-Centric Sampling (PCS) to expand FreeZAD, enabling ViL models
to adapt more effectively for ZSTAD. Extensive experiments on the THUMOS14 and
ActivityNet-1.3 datasets demonstrate that our training-free method outperforms
state-of-the-art unsupervised methods while requiring only 1/13 of the runtime.
When equipped with TTA, the enhanced method further narrows the gap with fully
supervised methods.
|
2501.13796
|
PromptMono: Cross Prompting Attention for Self-Supervised Monocular
Depth Estimation in Challenging Environments
|
cs.CV
|
Considerable efforts have been made to improve monocular depth estimation
under ideal conditions. However, in challenging environments, monocular depth
estimation still faces difficulties. In this paper, we introduce visual prompt
learning for predicting depth across different environments within a unified
model, and present a self-supervised learning framework called PromptMono. It
employs a set of learnable parameters as visual prompts to capture
domain-specific knowledge. To integrate prompting information into image
representations, a novel gated cross prompting attention (GCPA) module is
proposed, which enhances the depth estimation in diverse conditions. We
evaluate the proposed PromptMono on the Oxford Robotcar dataset and the
nuScenes dataset. Experimental results demonstrate the superior performance of
the proposed method.
|
2501.13804
|
Towards Real-World Validation of a Physics-Based Ship Motion Prediction
Model
|
eess.SY cs.RO cs.SY
|
The maritime industry aims towards a sustainable future, which requires
significant improvements in operational efficiency. Current approaches focus on
minimising fuel consumption and emissions through greater autonomy. Efficient
and safe autonomous navigation requires high-fidelity ship motion models
applicable to real-world conditions. Although physics-based ship motion models
can predict ships' motion with sub-second resolution, their validation in
real-world conditions is rarely found in the literature. This study presents a
physics-based 3D dynamics motion model that is tailored to a container-ship,
and compares its predictions against real-world voyages. The model integrates
vessel motion over time and accounts for its hydrodynamic behavior under
different environmental conditions. The model's predictions are evaluated
against real vessel data both visually and using multiple distance measures.
Both methodologies demonstrate that the model's predictions align closely with
the real-world trajectories of the container-ship.
|
2501.13805
|
EgoHand: Ego-centric Hand Pose Estimation and Gesture Recognition with
Head-mounted Millimeter-wave Radar and IMUs
|
cs.CV
|
Recent advanced Virtual Reality (VR) headsets, such as the Apple Vision Pro,
employ bottom-facing cameras to detect hand gestures and inputs, which offers
users significant convenience in VR interactions. However, these bottom-facing
cameras can sometimes be inconvenient and pose a risk of unintentionally
exposing sensitive information, such as private body parts or personal
surroundings. To mitigate these issues, we introduce EgoHand. This system
provides an alternative solution by integrating millimeter-wave radar and IMUs
for hand gesture recognition, thereby offering users an additional option for
gesture interaction that enhances privacy protection. To accurately recognize
hand gestures, we devise a two-stage skeleton-based gesture recognition scheme.
In the first stage, a novel end-to-end Transformer architecture is employed to
estimate the coordinates of hand joints. Subsequently, these estimated joint
coordinates are utilized for gesture recognition. Extensive experiments
involving 10 subjects show that EgoHand can detect hand gestures with 90.8%
accuracy. Furthermore, EgoHand demonstrates robust performance across a variety
of cross-domain tests, including different users, dominant hands, body
postures, and scenes.
|
2501.13806
|
Generation of reusable learning objects from digital medical
collections: An analysis based on the MASMDOA framework
|
cs.CL cs.HC
|
Learning Objects represent a widespread approach to structuring instructional
materials in a large variety of educational contexts. The main aim of this work
is to analyze, from a qualitative point of view, the process of
generating reusable learning objects (RLOs) followed by Clavy, a tool that can
be used to retrieve data from multiple medical knowledge sources and
reconfigure such sources in diverse multimedia-based structures and
organizations. From these organizations, Clavy is able to generate learning
objects which can be adapted to various instructional healthcare scenarios with
several types of user profiles and distinct learning requirements. Moreover,
Clavy provides the capability of exporting these learning objects through
educational standard specifications, which improves their reusability features.
The analysis insights highlight the importance of having a tool able to
transfer knowledge from the available digital medical collections to learning
objects that can be easily accessed by medical students and healthcare
practitioners through the most popular e-learning platforms.
|
2501.13810
|
Learning to Help in Multi-Class Settings
|
cs.LG cs.AI
|
Deploying complex machine learning models on resource-constrained devices is
challenging due to limited computational power, memory, and model
retrainability. To address these limitations, a hybrid system can be
established by augmenting the local model with a server-side model, where
samples are selectively deferred by a rejector and then sent to the server for
processing. The hybrid system enables efficient use of computational resources
while minimizing the overhead associated with server usage. The recently
proposed Learning to Help (L2H) model trains a server model given a fixed local
(client) model, differing from the Learning to Defer (L2D) framework, which
trains the client for a fixed (expert) server. In both L2D and L2H, the
training includes learning a rejector at the client to determine when to query
the server. In this work, we extend the L2H model from binary to multi-class
classification problems and demonstrate its applicability in a number of
different scenarios of practical interest in which access to the server may be
limited by cost, availability, or policy. We derive a stage-switching surrogate
loss function that is differentiable, convex, and consistent with the Bayes
rule corresponding to the 0-1 loss for the L2H model. Experiments show that our
proposed methods offer an efficient and practical solution for multi-class
classification in resource-constrained environments.
|
2501.13812
|
By-Example Synthesis of Vector Textures
|
cs.CV cs.GR
|
We propose a new method for synthesizing an arbitrarily sized novel vector
texture given a single raster exemplar. Our method first segments the exemplar
to extract the primary textons, and then clusters them based on visual
similarity. We then compute a descriptor to capture each texton's neighborhood
which contains the inter-category relationships that are used at synthesis
time. Next, we use a simple procedure to both extract and place the secondary
textons behind the primary polygons. Finally, our method constructs a gradient
field for the background which is defined by a set of data points and colors.
The colors of the secondary polygons are also adjusted to better match the
gradient field. To compare our work with other methods, we use a wide range of
perceptual-based metrics.
|
2501.13814
|
On entropy-constrained Gaussian channel capacity via the moment problem
|
cs.IT math.IT math.PR
|
We study the capacity of the power-constrained additive Gaussian channel with
an entropy constraint at the input. In particular, we characterize this
capacity in the low signal-to-noise ratio regime, as a corollary of the
following general result on a moment matching problem: we show that for any
continuous random variable with finite moments, the largest number of initial
moments that can be matched by a discrete random variable of sufficiently small
but positive entropy is three.
|
2501.13816
|
Large Language Model driven Policy Exploration for Recommender Systems
|
cs.IR
|
Recent advancements in Recommender Systems (RS) have incorporated
Reinforcement Learning (RL), framing the recommendation as a Markov Decision
Process (MDP). However, offline RL policies trained on static user data are
vulnerable to distribution shift when deployed in dynamic online environments.
Additionally, excessive focus on exploiting short-term relevant items can
hinder exploration, leading to suboptimal recommendations and negatively
impacting long-term user gains. Online RL-based RS also face challenges in
production deployment, due to the risks of exposing users to untrained or
unstable policies. Large Language Models (LLMs) offer a promising solution to
mimic user objectives and preferences for pre-training policies offline to
enhance the initial recommendations in online settings. Effectively managing
distribution shift and balancing exploration are crucial for improving RL-based
RS, especially when leveraging LLM-based pre-training. To address these
challenges, we propose an Interaction-Augmented Learned Policy (iALP) that
utilizes user preferences distilled from an LLM. Our approach involves
prompting the LLM with user states to extract item preferences, learning
rewards based on feedback, and updating the RL policy using an actor-critic
framework. Furthermore, to deploy iALP in an online scenario, we introduce an
adaptive variant, A-iALP, that implements a simple fine-tuning strategy
(A-iALP$_{ft}$), and an adaptive approach (A-iALP$_{ap}$) designed to mitigate
issues with compromised policies and limited exploration. Experiments across
three simulated environments demonstrate that A-iALP introduces substantial
performance improvements.
|
2501.13817
|
Temporal Logic Guided Safe Navigation for Autonomous Vehicles
|
cs.RO cs.FL cs.SY eess.SY
|
Safety verification for autonomous vehicles (AVs) and ground robots is
crucial for ensuring reliable operation given their uncertain environments.
Formal language tools provide a robust and sound method to verify safety rules
for such complex cyber-physical systems. In this paper, we propose a hybrid
approach that combines the strengths of formal verification languages like
Linear Temporal Logic (LTL) and Signal Temporal Logic (STL) to generate safe
trajectories and optimal control inputs for autonomous vehicle navigation. We
implement a symbolic path planning approach using LTL to generate a formally
safe reference trajectory. A mixed integer linear programming (MILP) solver is
then used on this reference trajectory to solve for the control inputs while
satisfying the state, control and safety constraints described by STL. We test
our proposed solution on two environments and compare the results with popular
path planning algorithms. In contrast to conventional path planning algorithms,
our formally safe solution excels in handling complex specification scenarios
while ensuring both safety and comparable computation times.
|
2501.13818
|
Ensuring Medical AI Safety: Explainable AI-Driven Detection and
Mitigation of Spurious Model Behavior and Associated Data
|
cs.AI cs.CV cs.LG
|
Deep neural networks are increasingly employed in high-stakes medical
applications, despite their tendency for shortcut learning in the presence of
spurious correlations, which can have potentially fatal consequences in
practice. Detecting and mitigating shortcut behavior is a challenging task that
often requires significant labeling efforts from domain experts. To alleviate
this problem, we introduce a semi-automated framework for the identification of
spurious behavior from both data and model perspective by leveraging insights
from eXplainable Artificial Intelligence (XAI). This allows the retrieval of
spurious data points and the detection of model circuits that encode the
associated prediction rules. Moreover, we demonstrate how these shortcut
encodings can be used for XAI-based sample- and pixel-level data annotation,
providing valuable information for bias mitigation methods to unlearn the
undesired shortcut behavior. We show the applicability of our framework using
four medical datasets across two modalities, featuring controlled and
real-world spurious correlations caused by data artifacts. We successfully
identify and mitigate these biases in VGG16, ResNet50, and contemporary Vision
Transformer models, ultimately increasing their robustness and applicability
for real-world medical tasks.
|
2501.13820
|
Consistent spectral clustering in sparse tensor block models
|
math.ST cs.LG math.PR stat.TH
|
High-order clustering aims to classify objects in multiway datasets that are
prevalent in various fields such as bioinformatics, social network analysis,
and recommendation systems. These tasks often involve data that is sparse and
high-dimensional, presenting significant statistical and computational
challenges. This paper introduces a tensor block model specifically designed
for sparse integer-valued data tensors. We propose a simple spectral clustering
algorithm augmented with a trimming step to mitigate noise fluctuations, and
identify a density threshold that ensures the algorithm's consistency. Our
approach models sparsity using a sub-Poisson noise concentration framework,
accommodating heavier-than-sub-Gaussian tails. Remarkably, this natural class
of tensor block models is closed under aggregation across arbitrary modes.
Consequently, we obtain a comprehensive framework for evaluating the tradeoff
between signal loss and noise reduction during data aggregation. The analysis
is based on a novel concentration bound for sparse random Gram matrices. The
theoretical findings are illustrated through simulation experiments.
|
2501.13824
|
Hallucinations Can Improve Large Language Models in Drug Discovery
|
cs.CL cs.AI
|
Concerns about hallucinations in Large Language Models (LLMs) have been
raised by researchers, yet their potential in areas where creativity is vital,
such as drug discovery, merits exploration. In this paper, we come up with the
hypothesis that hallucinations can improve LLMs in drug discovery. To verify
this hypothesis, we use LLMs to describe the SMILES string of molecules in
natural language and then incorporate these descriptions as part of the prompt
to address specific tasks in drug discovery. Evaluated on seven LLMs and five
classification tasks, our findings confirm the hypothesis: LLMs can achieve
better performance with text containing hallucinations. Notably, Llama-3.1-8B
achieves an 18.35% gain in ROC-AUC compared to the baseline without
hallucination. Furthermore, hallucinations generated by GPT-4o provide the most
consistent improvements across models. Additionally, we conduct empirical
analyses and a case study to investigate key factors affecting performance and
the underlying reasons. Our research sheds light on the potential use of
hallucinations for LLMs and offers new perspectives for future research
leveraging LLMs in drug discovery.
|
2501.13825
|
Sample-Based Piecewise Linear Power Flow Approximations Using
Second-Order Sensitivities
|
math.OC cs.SY eess.SY
|
The inherent nonlinearity of the power flow equations poses significant
challenges in accurately modeling power systems, particularly when employing
linearized approximations. Although power flow linearizations provide
computational efficiency, they can fail to fully capture nonlinear behavior
across diverse operating conditions. To improve approximation accuracy, we
propose conservative piecewise linear approximations (CPLA) of the power flow
equations, which are designed to consistently over- or under-estimate the
quantity of interest, ensuring conservative behavior in optimization. The
flexibility provided by piecewise linear functions can yield improved accuracy
relative to standard linear approximations. However, applying CPLA across all
dimensions of the power flow equations could introduce significant
computational complexity, especially for large-scale optimization problems. In
this paper, we propose a strategy that selectively targets dimensions
exhibiting significant nonlinearities. Using a second-order sensitivity
analysis, we identify the directions where the power flow equations exhibit the
most significant curvature and tailor the CPLAs to improve accuracy in these
specific directions. This approach reduces the computational burden while
maintaining high accuracy, making it particularly well-suited for mixed-integer
programming problems involving the power flow equations.
|
2501.13826
|
Video-MMMU: Evaluating Knowledge Acquisition from Multi-Discipline
Professional Videos
|
cs.CV cs.CL
|
Humans acquire knowledge through three cognitive stages: perceiving
information, comprehending knowledge, and adapting knowledge to solve novel
problems. Videos serve as an effective medium for this learning process,
facilitating a progression through these cognitive stages. However, existing
video benchmarks fail to systematically evaluate the knowledge acquisition
capabilities in Large Multimodal Models (LMMs). To address this gap, we
introduce Video-MMMU, a multi-modal, multi-disciplinary benchmark designed to
assess LMMs' ability to acquire and utilize knowledge from videos. Video-MMMU
features a curated collection of 300 expert-level videos and 900
human-annotated questions across six disciplines, evaluating knowledge
acquisition through stage-aligned question-answer pairs: Perception,
Comprehension, and Adaptation. A proposed knowledge gain metric,
{\Delta}knowledge, quantifies improvement in performance after video viewing.
Evaluation of LMMs reveals a steep decline in performance as cognitive demands
increase and highlights a significant gap between human and model knowledge
acquisition, underscoring the need for methods to enhance LMMs' capability to
learn and adapt from videos.
|
2501.13828
|
PhotoGAN: Generative Adversarial Neural Network Acceleration with
Silicon Photonics
|
cs.AR cs.LG
|
Generative Adversarial Networks (GANs) are at the forefront of AI innovation,
driving advancements in areas such as image synthesis, medical imaging, and
data augmentation. However, the unique computational operations within GANs,
such as transposed convolutions and instance normalization, introduce
significant inefficiencies when executed on traditional electronic
accelerators, resulting in high energy consumption and suboptimal performance.
To address these challenges, we introduce PhotoGAN, the first silicon-photonic
accelerator designed to handle the specialized operations of GAN models. By
leveraging the inherent high throughput and energy efficiency of silicon
photonics, PhotoGAN offers an innovative, reconfigurable architecture capable
of accelerating transposed convolutions and other GAN-specific layers. The
accelerator also incorporates a sparse computation optimization technique to
reduce redundant operations, improving computational efficiency. Our
experimental results demonstrate that PhotoGAN achieves at least 4.4x higher
GOPS and 2.18x lower energy-per-bit (EPB) compared to state-of-the-art
accelerators, including GPUs and TPUs. These findings showcase PhotoGAN as a
promising solution for the next generation of GAN acceleration, providing
substantial gains in both performance and energy efficiency.
|
2501.13829
|
MV-GMN: State Space Model for Multi-View Action Recognition
|
cs.CV
|
Recent advancements in multi-view action recognition have largely relied on
Transformer-based models. While effective and adaptable, these models often
require substantial computational resources, especially in scenarios with
multiple views and multiple temporal sequences. Addressing this limitation,
this paper introduces the MV-GMN model, a state-space model specifically
designed to efficiently aggregate multi-modal data (RGB and skeleton),
multi-view perspectives, and multi-temporal information for action recognition
with reduced computational complexity. The MV-GMN model employs an innovative
Multi-View Graph Mamba network comprising a series of MV-GMN blocks. Each block
includes a proposed Bidirectional State Space Block and a GCN module. The
Bidirectional State Space Block introduces four scanning strategies, including
view-prioritized and time-prioritized approaches. The GCN module leverages
rule-based and KNN-based methods to construct the graph network, effectively
integrating features from different viewpoints and temporal instances.
Demonstrating its efficacy, MV-GMN outperforms state-of-the-art methods on several
datasets, achieving notable accuracies of 97.3\% and 96.7\% on the NTU RGB+D
120 dataset in cross-subject and cross-view scenarios, respectively. MV-GMN
also surpasses Transformer-based baselines while requiring only linear
inference complexity, underscoring the model's ability to reduce computational
load and enhance the scalability and applicability of multi-view action
recognition technologies.
|
2501.13830
|
A space-decoupling framework for optimization on bounded-rank matrices
with orthogonally invariant constraints
|
math.OC cs.AI cs.LG
|
Imposing additional constraints on low-rank optimization has garnered growing
interest. However, the geometry of coupled constraints hampers the
well-developed low-rank structure and makes the problem intricate. To this end,
we propose a space-decoupling framework for optimization on bounded-rank
matrices with orthogonally invariant constraints. The ``space-decoupling" is
reflected in several ways. We show that the tangent cone of coupled constraints
is the intersection of tangent cones of each constraint. Moreover, we decouple
the intertwined bounded-rank and orthogonally invariant constraints into two
spaces, leading to optimization on a smooth manifold. Implementing Riemannian
algorithms on this manifold is painless as long as the geometry of additional
constraints is known. In addition, we unveil the equivalence between the
reformulated problem and the original problem. Numerical experiments on
real-world applications -- spherical data fitting, graph similarity measuring,
low-rank SDP, model reduction of Markov processes, reinforcement learning, and
deep learning -- validate the superiority of the proposed framework.
|
2501.13831
|
Predicting Compact Phrasal Rewrites with Large Language Models for ASR
Post Editing
|
cs.CL cs.AI cs.LG
|
Large Language Models (LLMs) excel at rewriting tasks such as text style
transfer and grammatical error correction. While there is considerable overlap
between the inputs and outputs in these tasks, the decoding cost still
increases with output length, regardless of the amount of overlap. By
leveraging the overlap between the input and the output, Kaneko and Okazaki
(2023) proposed model-agnostic edit span representations to compress the
rewrites to save computation. They reported an output length reduction rate of
nearly 80% with minimal accuracy impact in four rewriting tasks. In this paper,
we propose alternative edit phrase representations inspired by phrase-based
statistical machine translation. We systematically compare our phrasal
representations with their span representations. We apply the LLM rewriting
model to the task of Automatic Speech Recognition (ASR) post editing and show
that our target-phrase-only edit representation has the best
efficiency-accuracy trade-off. On the LibriSpeech test set, our method closes
50-60% of the WER gap between the edit span model and the full rewrite model
while losing only 10-20% of the length reduction rate of the edit span model.
|
2501.13833
|
On the Reasoning Capacity of AI Models and How to Quantify It
|
cs.AI cs.CL cs.IT math.IT
|
Recent advances in Large Language Models (LLMs) have intensified the debate
surrounding the fundamental nature of their reasoning capabilities. While
achieving high performance on benchmarks such as GPQA and MMLU, these models
exhibit limitations in more complex reasoning tasks, highlighting the need for
more rigorous evaluation methodologies. We propose a novel phenomenological
approach that goes beyond traditional accuracy metrics to probe the underlying
mechanisms of model behavior, establishing a framework that could broadly
impact how we analyze and understand AI systems. Using positional bias in
multiple-choice reasoning tasks as a case study, we demonstrate how systematic
perturbations can reveal fundamental aspects of model decision-making. To
analyze these behaviors, we develop two complementary phenomenological models:
a Probabilistic Mixture Model (PMM) that decomposes model responses into
reasoning, memorization, and guessing components and an Information-Theoretic
Consistency (ITC) analysis that quantifies the relationship between model
confidence and strategy selection. Through controlled experiments on reasoning
benchmarks, we show that true reasoning remains challenging for current models,
with apparent success often relying on sophisticated combinations of
memorization and pattern matching rather than genuine logical deduction. More
fundamentally, we demonstrate that accuracy alone often overstates a model's
reasoning abilities, as model behavior can be characterized through underlying
mechanisms in the phase space of cognitive strategies, revealing how models
dynamically balance different approaches when responding to queries. This
framework enables quantitative criteria for real-world deployments, allowing
applications to specify reliability thresholds based on strategy distributions
rather than aggregate performance metrics.
|
2501.13836
|
Think Outside the Data: Colonial Biases and Systemic Issues in Automated
Moderation Pipelines for Low-Resource Languages
|
cs.CL cs.HC
|
Most social media users come from non-English speaking countries in the
Global South. Despite the widespread prevalence of harmful content in these
regions, current moderation systems repeatedly struggle in low-resource
languages spoken there. In this work, we examine the challenges AI researchers
and practitioners face when building moderation tools for low-resource
languages. We conducted semi-structured interviews with 22 AI researchers and
practitioners specializing in automatic detection of harmful content in four
diverse low-resource languages from the Global South. These are: Tamil from
South Asia, Swahili from East Africa, Maghrebi Arabic from North Africa, and
Quechua from South America. Our findings reveal that social media companies'
restrictions on researchers' access to data exacerbate the historical
marginalization of these languages, which have long lacked datasets for
studying online harms. Moreover, common preprocessing techniques and language
models, predominantly designed for data-rich English, fail to account for the
linguistic complexity of low-resource languages. This leads to critical errors
when moderating content in Tamil, Swahili, Arabic, and Quechua, which are
morphologically richer than English. Based on our findings, we establish that
the precarities in current moderation pipelines are rooted in deep systemic
inequities and continue to reinforce historical power imbalances. We conclude
by discussing multi-stakeholder approaches to improve moderation for
low-resource languages.
|
2501.13848
|
Where Do You Go? Pedestrian Trajectory Prediction using Scene Features
|
cs.CV cs.AI cs.LG
|
Accurate prediction of pedestrian trajectories is crucial for enhancing the
safety of autonomous vehicles and reducing traffic fatalities involving
pedestrians. While numerous studies have focused on modeling interactions among
pedestrians to forecast their movements, the influence of environmental factors
and scene-object placements has been comparatively underexplored. In this
paper, we present a novel trajectory prediction model that integrates both
pedestrian interactions and environmental context to improve prediction
accuracy. Our approach captures spatial and temporal interactions among
pedestrians within a sparse graph framework. To account for pedestrian-scene
interactions, we employ advanced image enhancement and semantic segmentation
techniques to extract detailed scene features. These scene and interaction
features are then fused through a cross-attention mechanism, enabling the model
to prioritize relevant environmental factors that influence pedestrian
movements. Finally, a temporal convolutional network processes the fused
features to predict future pedestrian trajectories. Experimental results
demonstrate that our method significantly outperforms existing state-of-the-art
approaches, achieving ADE and FDE values of 0.252 and 0.372 meters,
respectively, underscoring the importance of incorporating both social
interactions and environmental context in pedestrian trajectory prediction.
|
2501.13851
|
Large Vision-Language Models for Knowledge-Grounded Data Annotation of
Memes
|
cs.LG
|
Memes have emerged as a powerful form of communication, integrating visual
and textual elements to convey humor, satire, and cultural messages. Existing
research has focused primarily on aspects such as emotion classification, meme
generation, propagation, interpretation, figurative language, and
sociolinguistics, but has often overlooked deeper meme comprehension and
meme-text retrieval. To address these gaps, this study introduces
ClassicMemes-50-templates (CM50), a large-scale dataset consisting of over
33,000 memes, centered around 50 popular meme templates. We also present an
automated knowledge-grounded annotation pipeline leveraging large
vision-language models to produce high-quality image captions, meme captions,
and literary device labels, overcoming the labor-intensive demands of manual
annotation. Additionally, we propose a meme-text retrieval CLIP model (mtrCLIP)
that utilizes cross-modal embedding to enhance meme analysis, significantly
improving retrieval performance. Our contributions include: (1) a novel dataset
for large-scale meme study, (2) a scalable meme annotation framework, and (3) a
fine-tuned CLIP for meme-text retrieval, all aimed at advancing the
understanding and analysis of memes at scale.
|
2501.13855
|
First Lessons Learned of an Artificial Intelligence Robotic System for
Autonomous Coarse Waste Recycling Using Multispectral Imaging-Based Methods
|
cs.CV cs.LG cs.RO
|
Current disposal facilities for coarse-grained waste perform manual sorting
of materials with heavy machinery. Large quantities of recyclable materials are
lost to coarse waste, so more effective sorting processes must be developed to
recover them. Two key aspects to automate the sorting process are object
detection with material classification in mixed piles of waste, and autonomous
control of hydraulic machinery. Because most objects in those accumulations of
waste are damaged or destroyed, object detection alone is not feasible in the
majority of cases. To address these challenges, we propose a classification of
materials with multispectral images of ultraviolet (UV), visual (VIS), near
infrared (NIR), and short-wave infrared (SWIR) spectra. A solution for
autonomous control of hydraulic heavy machinery for sorting bulky waste is
being investigated using cost-effective cameras and artificial
intelligence-based controllers.
|
2501.13858
|
The Lock Generative Adversarial Network for Medical Waveform Anomaly
Detection
|
cs.CE
|
Waveform signal analysis is a complex and important task in medical care. For
example, mechanical ventilators are critical life-support machines, but they
can cause serious injury to patients if they are out of synchronization with
the patients' own breathing reflex. This asynchrony is revealed by the
waveforms showing flow and pressure histories. Likewise, electrocardiograms
record the electrical activity of a patient's heart as a set of waveforms, and
anomalous waveforms can reveal important disease states. In both cases, subtle
variations in a complex waveform carry important information for patient care,
yet these signals may be missed or misinterpreted by human caregivers.
We report on the design of a novel Lock Generative Adversarial Network
architecture for anomaly detection in raw or summarized medical waveform data.
The proposed architecture uses alternating optimization of the generator and
discriminator networks to solve the convergence dilemma. Furthermore, the
fidelity of the generator networks' outputs to the actual distribution of
anomalous data is improved via synthetic minority oversampling. We evaluate
this new architecture on one ventilator asynchrony dataset, and two
electrocardiogram datasets, finding that the performance was either equal or
superior to the state-of-the art on all three.
|
2501.13859
|
Dual-Modal Prototype Joint Learning for Compositional Zero-Shot Learning
|
cs.CV
|
Compositional Zero-Shot Learning (CZSL) aims to recognize novel compositions
of attributes and objects by leveraging knowledge learned from seen
compositions. Recent approaches have explored the use of Vision-Language Models
(VLMs) to align textual and visual modalities. These methods typically employ
prompt engineering, parameter-tuning, and modality fusion to generate rich
textual prototypes that serve as class prototypes for CZSL. However, the
modality gap results in textual prototypes being unable to fully capture the
optimal representations of all class prototypes, particularly those with
fine-grained features, which can be directly obtained from the visual modality.
In this paper, we propose a novel Dual-Modal Prototype Joint Learning framework
for the CZSL task. Our approach, based on VLMs, introduces prototypes in both
the textual and visual modalities. The textual prototype is optimized to
capture broad conceptual information, aiding the model's generalization across
unseen compositions. Meanwhile, the visual prototype is used to mitigate the
classification errors caused by the modality gap and capture fine-grained
details to distinguish images with similar appearances. To effectively optimize
these prototypes, we design specialized decomposition modules and a joint
learning strategy that enrich the features from both modalities. These
prototypes not only capture key category information during training but also
serve as crucial reference targets during inference. Experimental results
demonstrate that our approach achieves state-of-the-art performance in the
closed-world setting and competitive performance in the open-world setting
across three publicly available CZSL benchmarks. These findings validate the
effectiveness of our method in advancing compositional generalization.
|
2501.13864
|
Autoencoders for Anomaly Detection are Unreliable
|
cs.LG cs.AI
|
Autoencoders are frequently used for anomaly detection, both in the
unsupervised and semi-supervised settings. They rely on the assumption that
when trained using the reconstruction loss, they will be able to reconstruct
normal data more accurately than anomalous data. Some recent works have posited
that this assumption may not always hold, but little has been done to study the
validity of the assumption in theory. In this work we show that this assumption
indeed does not hold, and illustrate that anomalies, lying far away from normal
data, can be perfectly reconstructed in practice. We revisit the theory of
failure of linear autoencoders for anomaly detection by showing how they can
perfectly reconstruct out of bounds, or extrapolate undesirably, and note how
this can be dangerous in safety critical applications. We connect this to
non-linear autoencoders through experiments on both tabular data and real-world
image data, the two primary application areas of autoencoders for anomaly
detection.
|
2501.13865
|
Threshold Selection for Iterative Decoding of $(v,w)$-regular Binary
Codes
|
cs.CR cs.IT math.IT
|
Iterative bit flipping decoders are an efficient and effective decoder choice
for decoding codes which admit a sparse parity-check matrix. Among these,
sparse $(v,w)$-regular codes, which include LDPC and MDPC codes, are of
particular interest both for efficient data correction and the design of
cryptographic primitives. A key aspect of the decoding process is the choice of
the bit flipping thresholds, which can be determined either statically or
during the decoder execution, using information coming from the initial
syndrome value and its updates. In this work, we analyze a two-iteration
parallel hard-decision bit flipping decoder and propose concrete criteria for threshold
determination, backed by a closed form model. In doing so, we introduce a new
tightly fitting model for the distribution of the Hamming weight of the
syndrome after the first decoder iteration, and substantial improvements in the
estimation of the decoding failure rate (DFR) with respect to existing
approaches.
|
2501.13868
|
Lost in Siting: The Hidden Carbon Cost of Inequitable Residential Solar
Installations
|
cs.CE
|
The declining cost of solar photovoltaics (PV) combined with strong federal
and state-level incentives have resulted in a high number of residential solar
PV installations in the US. However, these installations are concentrated in
particular regions, such as California, and demographics, such as high-income
Asian neighborhoods. This inequitable distribution creates an illusion that
further increasing residential solar installations will become increasingly
challenging. Furthermore, while the inequity in solar installations has
received attention, no prior comprehensive work has been done on understanding
whether our current trajectory of residential solar adoption is energy- and
carbon-efficient. In this paper, we reveal the hidden energy and carbon cost of
the inequitable distribution of existing installations. Using US-based data on
carbon offset potential, the amount of avoided carbon emissions from using
rooftop PV instead of electric grid energy, and the number of existing solar
installations, we surprisingly observe that locations and demographics with a
higher carbon offset potential have fewer existing installations. For instance,
neighborhoods with relatively higher Black populations have 7.4% higher carbon
offset potential than average but 36.7% fewer installations; lower-income
neighborhoods have 14.7% higher potential and 47% fewer installations. We
propose several equity- and carbon-aware solar siting strategies. In evaluating
these strategies, we develop Sunsight, a toolkit that combines
simulation/visualization tools and our relevant datasets, which we are
releasing publicly. Our projections show that a multi-objective siting strategy
can address two problems at once; namely, it can improve societal outcomes in
terms of distributional equity and simultaneously improve the carbon-efficiency
(i.e., climate impact) of current installation trends by up to 39.8%.
|
2501.13876
|
FAST-LIVO2 on Resource-Constrained Platforms: LiDAR-Inertial-Visual
Odometry with Efficient Memory and Computation
|
cs.RO
|
This paper presents a lightweight LiDAR-inertial-visual odometry system
optimized for resource-constrained platforms. It integrates a
degeneration-aware adaptive visual frame selector into an error-state iterated
Kalman filter (ESIKF) with sequential updates, improving computational efficiency
significantly while maintaining a similar level of robustness. Additionally, a
memory-efficient mapping structure combining a locally unified visual-LiDAR map
and a long-term visual map achieves a good trade-off between performance and
memory usage. Extensive experiments on x86 and ARM platforms demonstrate the
system's robustness and efficiency. On the Hilti dataset, our system achieves a
33% reduction in per-frame runtime and 47% lower memory usage compared to
FAST-LIVO2, with only a 3 cm increase in RMSE. Despite this slight accuracy
trade-off, our system remains competitive, outperforming state-of-the-art
(SOTA) LIO methods such as FAST-LIO2 and most existing LIVO systems. These
results validate the system's capability for scalable deployment on
resource-constrained edge computing platforms.
|
2501.13878
|
Eye Gaze as a Signal for Conveying User Attention in Contextual AI
Systems
|
cs.HC cs.CV
|
Advanced multimodal AI agents can now collaborate with users to solve
challenges in the world. We explore eye tracking's role in such interaction to
convey a user's attention relative to the physical environment. We hypothesize
that this knowledge improves contextual understanding for AI agents. By
observing hours of human-object interactions, we first measure the relationship
between an eye tracker's signal quality and its ability to reliably place gaze
on nearby physical objects. We then conduct experiments which relay the user's
scanpath history as additional context when querying multimodal agents. Our results
show that eye tracking provides high value as a user attention signal and can
convey information about the user's current task and interests to the agent.
|
2501.13880
|
A RAG-Based Institutional Assistant
|
cs.CL
|
Although large language models (LLMs) demonstrate strong text generation
capabilities, they struggle in scenarios requiring access to structured
knowledge bases or specific documents, limiting their effectiveness in
knowledge-intensive tasks. To address this limitation, retrieval-augmented
generation (RAG) models have been developed, enabling generative models to
incorporate relevant document fragments into their inputs. In this paper, we
design and evaluate a RAG-based virtual assistant specifically tailored for the
University of São Paulo. Our system architecture comprises two key modules: a
retriever and a generative model. We experiment with different types of models
for both components, adjusting hyperparameters such as chunk size and the
number of retrieved documents. Our optimal retriever model achieves a Top-5
accuracy of 30%, while our most effective generative model scores 22.04%
against ground truth answers. Notably, when the correct document chunks are
supplied to the LLMs, accuracy significantly improves to 54.02%, an increase of
over 30 percentage points. Conversely, without contextual input, performance
declines to 13.68%. These findings highlight the critical role of database
access in enhancing LLM performance. They also reveal the limitations of
current semantic search methods in accurately identifying relevant documents
and underscore the ongoing challenges LLMs face in generating precise
responses.
|
2501.13883
|
Utilizing Evolution Strategies to Train Transformers in Reinforcement
Learning
|
cs.LG cs.NE
|
We explore the capability of evolution strategies to train an agent whose
policy is based on a transformer architecture in a reinforcement learning setting.
We performed experiments using OpenAI's highly parallelizable evolution
strategy to train a Decision Transformer in the Humanoid locomotion environment
and in Atari game environments, testing the ability of this black-box
optimization technique to train even such relatively large and complicated
models (compared to those previously tested in the literature). We also
proposed a method to aid the training by first pretraining the model before
using the OpenAI-ES to train it further, and tested its effectiveness. The
examined evolution strategy proved to be, in general, capable of achieving
strong results and managed to obtain high-performing agents. The pretraining
thus proved unnecessary; still, it helped us observe and formulate several
further insights.
|
2501.13884
|
Exploring Finetuned Audio-LLM on Heart Murmur Features
|
eess.AS cs.AI cs.SD
|
Large language models (LLMs) for audio have excelled in recognizing and
analyzing human speech, music, and environmental sounds. However, their
potential for understanding other types of sounds, particularly biomedical
sounds, remains largely underexplored despite significant scientific interest.
In this study, we focus on diagnosing cardiovascular diseases using
phonocardiograms, i.e., heart sounds. Most existing deep neural network (DNN)
paradigms are restricted to heart murmur classification (healthy vs unhealthy)
and do not predict other acoustic features of the murmur such as timing,
grading, harshness, pitch, and quality, which are important in helping
physicians diagnose the underlying heart conditions. We propose to finetune an
audio LLM, Qwen2-Audio, on the PhysioNet CirCor DigiScope phonocardiogram (PCG)
dataset and evaluate its performance in classifying 11 expert-labeled murmur
features. Additionally, we aim to achieve a more noise-robust and generalizable
system by exploring a preprocessing segmentation algorithm using an audio
representation model, SSAMBA. Our results indicate that the LLM-based model
outperforms state-of-the-art methods in 8 of the 11 features and performs
comparably in the remaining 3. Moreover, the LLM successfully classifies
long-tail murmur features with limited training data, a task at which all
previous methods have failed. These findings underscore the potential of
audio LLMs as assistants to human cardiologists in enhancing heart disease
diagnosis.
|
2501.13885
|
Quantum model reduction for continuous-time quantum filters
|
quant-ph cs.SY eess.SY math-ph math.MP
|
The use of quantum stochastic models is widespread in dynamical reduction,
simulation of open systems, feedback control and adaptive estimation. In many
applications only part of the information contained in the filter's state is
actually needed to reconstruct the target observable quantities; thus, filters
of smaller dimensions could be in principle implemented to perform the same
task. In this work, we propose a systematic method to find, when possible,
reduced-order quantum filters that are capable of exactly reproducing the
evolution of expectation values of interest. In contrast with existing
reduction techniques, the reduced model we obtain is exact and in the form of a
Belavkin filtering equation, ensuring physical interpretability. This is
attained by leveraging tools from the theory of both minimal realization and
non-commutative conditional expectations. The proposed procedure is tested on
prototypical examples, laying the groundwork for applications in quantum
trajectory simulation and quantum feedback control.
|
2501.13887
|
What Does an Audio Deepfake Detector Focus on? A Study in the Time
Domain
|
cs.LG cs.SD eess.AS
|
Adding explanations to audio deepfake detection (ADD) models will boost their
real-world application by providing insight on the decision making process. In
this paper, we propose a relevancy-based explainable AI (XAI) method to analyze
the predictions of transformer-based ADD models. We compare against standard
Grad-CAM and SHAP-based methods, using quantitative faithfulness metrics as
well as a partial spoof test, to comprehensively analyze the relative
importance of different temporal regions in an audio signal. We consider large
datasets, unlike previous works where only limited utterances are studied, and
find that the XAI methods differ in their explanations. The proposed
relevancy-based XAI method performs the best overall on a variety of metrics.
Further investigation on the relative importance of speech/non-speech, phonetic
content, and voice onsets/offsets suggest that the XAI results obtained from
analyzing limited utterances do not necessarily hold when evaluated on large
datasets.
|
2501.13888
|
Multimodal Sensor Dataset for Monitoring Older Adults Post Lower-Limb
Fractures in Community Settings
|
cs.LG cs.CV
|
Lower-Limb Fractures (LLF) are a major health concern for older adults, often
leading to reduced mobility and prolonged recovery, potentially impairing daily
activities and independence. During recovery, older adults frequently face
social isolation and functional decline, complicating rehabilitation and
adversely affecting physical and mental health. Multi-modal sensor platforms
that continuously collect data and analyze it using machine-learning algorithms
can remotely monitor this population and infer health outcomes. They can also
alert clinicians to individuals at risk of isolation and decline. This paper
presents a new publicly available multi-modal sensor dataset, MAISON-LLF,
collected from older adults recovering from LLF in community settings. The
dataset includes data from smartphone and smartwatch sensors, motion detectors,
sleep-tracking mattresses, and clinical questionnaires on isolation and
decline. The dataset was collected from ten older adults living alone at home
for eight weeks each, totaling 560 days of 24-hour sensor data. For technical
validation, supervised machine-learning and deep-learning models were developed
using the sensor and clinical questionnaire data, providing a foundational
comparison for the research community.
|
2501.13889
|
Generating Realistic Forehead-Creases for User Verification via
Conditioned Piecewise Polynomial Curves
|
cs.CV
|
We propose a trait-specific image generation method that models forehead
creases geometrically using B-spline and B\'ezier curves. This approach ensures
the realistic generation of both principal creases and non-prominent crease
patterns, effectively constructing detailed and authentic forehead-crease
images. These geometrically rendered images serve as visual prompts for a
diffusion-based Edge-to-Image translation model, which generates corresponding
mated samples. The resulting novel synthetic identities are then used to train
a forehead-crease verification network. To enhance intra-subject diversity in
the generated samples, we employ two strategies: (a) perturbing the control
points of B-splines under defined constraints to maintain label consistency,
and (b) applying image-level augmentations to the geometric visual prompts,
such as dropout and elastic transformations, specifically tailored to crease
patterns. By integrating the proposed synthetic dataset with real-world data,
our method significantly improves the performance of forehead-crease
verification systems under a cross-database verification protocol.
|
2501.13890
|
Federated Granger Causality Learning for Interdependent Clients with
State Space Representation
|
cs.LG stat.ML
|
Advanced sensors and IoT devices have improved the monitoring and control of
complex industrial enterprises. They have also created an interdependent fabric
of geographically distributed process operations (clients) across these
enterprises. Granger causality is an effective approach to detect and quantify
interdependencies by examining how one client's state affects others over time.
Understanding these interdependencies captures how localized events, such as
faults and disruptions, can propagate throughout the system, possibly causing
widespread operational impacts. However, the large volume and complexity of
industrial data pose challenges in modeling these interdependencies. This paper
develops a federated approach to learning Granger causality. We utilize a
linear state space system framework that leverages low-dimensional state
estimates to analyze interdependencies. This addresses bandwidth limitations
and the computational burden commonly associated with centralized data
processing. We propose augmenting the client models with the Granger causality
information learned by the server through a Machine Learning (ML) function. We
examine the co-dependence between the augmented client and server models and
reformulate the framework as a standalone ML algorithm providing conditions for
its sublinear and linear convergence rates. We also study the convergence of
the framework to a centralized oracle model. Moreover, we include a
differential privacy analysis to ensure data security while preserving causal
insights. Using synthetic data, we conduct comprehensive experiments to
demonstrate the robustness of our approach to perturbations in causality, and
its scalability with respect to communication size, the number of clients, and
the dimensions of the raw data. We also evaluate the performance on two
real-world industrial
control system datasets by reporting the volume of data saved by
decentralization.
|
2501.13893
|
Pix2Cap-COCO: Advancing Visual Comprehension via Pixel-Level Captioning
|
cs.CV cs.AI cs.LG
|
We present Pix2Cap-COCO, the first panoptic pixel-level caption dataset
designed to advance fine-grained visual understanding. To achieve this, we
carefully design an automated annotation pipeline that prompts GPT-4V to
generate pixel-aligned, instance-specific captions for individual objects
within images, enabling models to learn more granular relationships between
objects and their contexts. This approach results in 167,254 detailed captions,
with an average of 22.94 words per caption. Building on Pix2Cap-COCO, we
introduce a novel task, panoptic segmentation-captioning, which challenges
models to recognize instances in an image and provide detailed descriptions for
each simultaneously. To benchmark this task, we design a robust baseline based
on X-Decoder. The experimental results demonstrate that Pix2Cap-COCO is a
particularly challenging dataset, as it requires models to excel in both
fine-grained visual understanding and detailed language generation.
Furthermore, we leverage Pix2Cap-COCO for Supervised Fine-Tuning (SFT) on large
multimodal models (LMMs) to enhance their performance. For example, training
with Pix2Cap-COCO significantly improves the performance of GPT4RoI, yielding
gains of +1.4% CIDEr, +0.4% ROUGE, and +0.5% SPICE on the Visual Genome
dataset, and strengthens its region understanding ability on ViP-BENCH, with an
overall improvement of +5.1%, including notable increases in recognition
accuracy (+11.2%) and language generation quality (+22.2%).
|
2501.13896
|
GUI-Bee: Align GUI Action Grounding to Novel Environments via Autonomous
Exploration
|
cs.CL cs.AI cs.CV cs.LG
|
Graphical User Interface (GUI) action grounding is a critical step in GUI
automation that maps language instructions to actionable elements on GUI
screens. Most recent works of GUI action grounding leverage large GUI datasets
to fine-tune MLLMs. However, the fine-tuning data always covers limited GUI
environments, and we find the performance of the resulting model deteriorates
in novel environments. We argue that the GUI grounding models should be further
aligned to the novel environments to reveal their full potential, when the
inference is known to involve novel environments, i.e., environments not used
during the previous fine-tuning. To realize this, we first propose GUI-Bee, an
MLLM-based autonomous agent, to collect high-quality, environment-specific data
through exploration and then continuously fine-tune GUI grounding models with
the collected data. Our agent leverages a novel Q-value-Incentive In-Context
Reinforcement Learning (Q-ICRL) method to optimize exploration efficiency and
data quality. Additionally, we introduce NovelScreenSpot, a benchmark for
testing how well the data can help align GUI action grounding models to novel
environments and demonstrate the effectiveness of data collected by GUI-Bee in
the experiments. Furthermore, we conduct an ablation study to validate the
Q-ICRL method in enhancing the efficiency of GUI-Bee. Project page:
https://gui-bee.github.io
|
2501.13898
|
PointOBB-v3: Expanding Performance Boundaries of Single Point-Supervised
Oriented Object Detection
|
cs.CV cs.AI
|
With the growing demand for oriented object detection (OOD), recent studies
on point-supervised OOD have attracted significant interest. In this paper, we
propose PointOBB-v3, a stronger single point-supervised OOD framework. Compared
to existing methods, it generates pseudo rotated boxes without additional
priors and incorporates support for the end-to-end paradigm. PointOBB-v3
functions by integrating three unique image views: the original view, a resized
view, and a rotated/flipped (rot/flp) view. Based on the views, a scale
augmentation module and an angle acquisition module are constructed. In the
first module, a Scale-Sensitive Consistency (SSC) loss and a Scale-Sensitive
Feature Fusion (SSFF) module are introduced to improve the model's ability to
estimate object scale. To achieve precise angle predictions, the second module
employs symmetry-based self-supervised learning. Additionally, we introduce an
end-to-end version that eliminates the pseudo-label generation process by
integrating a detector branch and introduces an Instance-Aware Weighting (IAW)
strategy to focus on high-quality predictions. We conducted extensive
experiments on the DIOR-R, DOTA-v1.0/v1.5/v2.0, FAIR1M, STAR, and RSAR
datasets. Across all these datasets, our method achieves an average improvement
in accuracy of 3.56% in comparison to previous state-of-the-art methods. The
code will be available at https://github.com/ZpyWHU/PointOBB-v3.
|
2501.13904
|
Privacy-Preserving Personalized Federated Prompt Learning for Multimodal
Large Language Models
|
cs.LG
|
Multimodal Large Language Models (LLMs) are pivotal in revolutionizing
customer support and operations by integrating multiple modalities such as
text, images, and audio. Federated Prompt Learning (FPL) is a recently proposed
approach that combines pre-trained multimodal LLMs such as vision-language
models with federated learning to create personalized, privacy-preserving AI
systems. However, balancing the competing goals of personalization,
generalization, and privacy remains a significant challenge.
Over-personalization can lead to overfitting, reducing generalizability, while
stringent privacy measures, such as differential privacy, can hinder both
personalization and generalization. In this paper, we propose a Differentially
Private Federated Prompt Learning (DP-FPL) approach to tackle this challenge by
leveraging a low-rank factorization scheme to capture generalization while
maintaining a residual term that preserves expressiveness for personalization.
To ensure privacy, we introduce a novel method where we apply local
differential privacy to the two low-rank components of the local prompt, and
global differential privacy to the global prompt. Our approach mitigates the
impact of privacy noise on the model performance while balancing the tradeoff
between personalization and generalization. Extensive experiments demonstrate
the effectiveness of our approach over other benchmarks.
|
2501.13905
|
On Learning Representations for Tabular Data Distillation
|
cs.LG
|
Dataset distillation generates a small set of information-rich instances from
a large dataset, resulting in reduced storage requirements, privacy or
copyright risks, and computational costs for downstream modeling, though much
of the research has focused on the image data modality. We study tabular data
distillation, which brings in novel challenges such as the inherent feature
heterogeneity and the common use of non-differentiable learning models (such as
decision tree ensembles and nearest-neighbor predictors). To mitigate these
challenges, we present $\texttt{TDColER}$, a tabular data distillation
framework via column embeddings-based representation learning. To evaluate this
framework, we also present a tabular data distillation benchmark, ${{\sf \small
TDBench}}$. Based on an elaborate evaluation on ${{\sf \small TDBench}}$,
resulting in 226,890 distilled datasets and 548,880 models trained on them, we
demonstrate that $\texttt{TDColER}$ is able to boost the distilled data quality
of off-the-shelf distillation schemes by 0.5-143% across 7 different tabular
learning models.
|
2501.13906
|
Universal optimality of $T$-avoiding spherical codes and designs
|
math.CO cs.IT math.IT math.MG
|
Given an open set (a union of open intervals) $T\subset [-1,1]$, we introduce
the concepts of $T$-avoiding spherical codes and designs, that is, spherical
codes that have no inner products in the set $T$. We show that certain codes
found in the minimal vectors of the Leech lattice, as well as the minimal
vectors of the Barnes--Wall lattice and codes derived from strongly regular
graphs, are universally optimal in the restricted class of $T$-avoiding codes.
We also extend a result of Delsarte--Goethals--Seidel about codes with three
inner products $\alpha, \beta, \gamma$ (in our terminology
$(\alpha,\beta)$-avoiding $\gamma$-codes). Parallel to the notion of tight
spherical designs, we also derive that these codes are minimal (tight)
$T$-avoiding spherical designs of fixed dimension and strength. In some cases,
we also find that codes under consideration have maximal cardinality in their
$T$-avoiding class for given dimension and minimum distance.
|
2501.13908
|
Graph Neural Controlled Differential Equations For Collaborative
Filtering
|
cs.IR
|
Graph Convolution Networks (GCNs) are widely considered state-of-the-art for
recommendation systems. Several studies in the field of recommendation systems
have attempted to apply collaborative filtering (CF) into the Neural ODE
framework. These studies follow the same idea as LightGCN, which either removes
the weight matrix or replaces it with a discrete one. However, we argue that weight
control is critical for neural ODE-based methods. The importance of weight in
creating tailored graph convolution for each node is crucial, and employing a
fixed/discrete weight means it cannot adjust over time within the ODE function.
This rigidity in the graph convolution reduces its adaptability, consequently
hindering the performance of recommendations. In this study, to create an
optimal control for Neural ODE-based recommendation, we introduce a new method
called Graph Neural Controlled Differential Equations for Collaborative
Filtering (CDE-CF). Our method improves the performance of the Graph ODE-based
method by incorporating weight control in a continuous manner. To evaluate our
approach, we conducted experiments on various datasets. The results show that
our method surpasses competing baselines, including GCNs-based models and
state-of-the-art Graph ODE-based methods.
|
2501.13912
|
Analysis of Indic Language Capabilities in LLMs
|
cs.CL
|
This report evaluates the performance of text-in text-out Large Language
Models (LLMs) to understand and generate Indic languages. This evaluation is
used to identify and prioritize Indic languages suited for inclusion in safety
benchmarks. We conduct this study by reviewing existing evaluation studies and
datasets, as well as a set of twenty-eight LLMs that support Indic languages. We
analyze the LLMs on the basis of the training data, license for model and data,
type of access and model developers. We also compare Indic language performance
across evaluation datasets and find significant disparities in performance
across Indic languages. Hindi is the most widely represented language in
models. While model performance roughly correlates with the number of speakers
for the top five languages, the assessment beyond that varies.
|
2501.13915
|
Binary Diffusion Probabilistic Model
|
cs.CV
|
We introduce the Binary Diffusion Probabilistic Model (BDPM), a novel
generative model optimized for binary data representations. While denoising
diffusion probabilistic models (DDPMs) have demonstrated notable success in
tasks like image synthesis and restoration, traditional DDPMs rely on
continuous data representations and mean squared error (MSE) loss for training,
applying Gaussian noise models that may not be optimal for discrete or binary
data structures. BDPM addresses this by decomposing images into bitplanes and
employing XOR-based noise transformations, with a denoising model trained using
binary cross-entropy loss. This approach enables precise noise control and
computationally efficient inference, significantly lowering computational costs
and improving model convergence. When evaluated on image restoration tasks such
as image super-resolution, inpainting, and blind image restoration, BDPM
outperforms state-of-the-art methods on the FFHQ, CelebA, and CelebA-HQ
datasets. Notably, BDPM requires fewer inference steps than traditional DDPM
models to reach optimal results, showcasing enhanced inference efficiency.
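The abstract's core mechanism, decomposing images into bitplanes and corrupting them with XOR noise, can be illustrated with a minimal sketch (the flip-probability schedule and shapes here are hypothetical, not the paper's actual configuration):

```python
import numpy as np

def to_bitplanes(img):
    # uint8 image (H, W) -> 8 binary planes (8, H, W), least-significant bit first
    return np.stack([(img >> b) & 1 for b in range(8)]).astype(np.uint8)

def from_bitplanes(planes):
    # inverse: reassemble the uint8 image from its bitplanes
    return sum(planes[b].astype(np.uint16) << b for b in range(8)).astype(np.uint8)

def xor_noise(planes, flip_prob, rng):
    # "noising" step: XOR each bit with an independent Bernoulli mask;
    # applying the same mask again undoes the corruption exactly
    mask = (rng.random(planes.shape) < flip_prob).astype(np.uint8)
    return planes ^ mask, mask

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
planes = to_bitplanes(img)
noisy, mask = xor_noise(planes, flip_prob=0.1, rng=rng)
recovered = from_bitplanes(noisy ^ mask)  # XOR is its own inverse
```

A denoising model trained with binary cross-entropy would then predict the flipped bits (or the clean planes) from `noisy`, exploiting the fact that the noise process is exactly invertible at the bit level.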
|
2501.13916
|
PBM-VFL: Vertical Federated Learning with Feature and Sample Privacy
|
cs.LG
|
We present Poisson Binomial Mechanism Vertical Federated Learning (PBM-VFL),
a communication-efficient Vertical Federated Learning algorithm with
Differential Privacy guarantees. PBM-VFL combines Secure Multi-Party
Computation with the recently introduced Poisson Binomial Mechanism to protect
parties' private datasets during model training. We define the novel concept of
feature privacy and analyze end-to-end feature and sample privacy of our
algorithm. We compare sample privacy loss in VFL with privacy loss in HFL. We
also provide the first theoretical characterization of the relationship between
privacy budget, convergence error, and communication cost in
differentially-private VFL. Finally, we empirically show that our model
performs well with high levels of privacy.
|
2501.13918
|
Improving Video Generation with Human Feedback
|
cs.CV cs.AI cs.GR cs.LG
|
Video generation has achieved significant advances through rectified flow
techniques, but issues like unsmooth motion and misalignment between videos and
prompts persist. In this work, we develop a systematic pipeline that harnesses
human feedback to mitigate these problems and refine the video generation
model. Specifically, we begin by constructing a large-scale human preference
dataset focused on modern video generation models, incorporating pairwise
annotations across multi-dimensions. We then introduce VideoReward, a
multi-dimensional video reward model, and examine how annotations and various
design choices impact its rewarding efficacy. From a unified reinforcement
learning perspective aimed at maximizing reward with KL regularization, we
introduce three alignment algorithms for flow-based models by extending those
from diffusion models. These include two training-time strategies: direct
preference optimization for flow (Flow-DPO) and reward weighted regression for
flow (Flow-RWR), and an inference-time technique, Flow-NRG, which applies
reward guidance directly to noisy videos. Experimental results indicate that
VideoReward significantly outperforms existing reward models, and Flow-DPO
demonstrates superior performance compared to both Flow-RWR and standard
supervised fine-tuning methods. Additionally, Flow-NRG lets users assign custom
weights to multiple objectives during inference, meeting personalized video
quality needs. Project page: https://gongyeliu.github.io/videoalign.
|
2501.13919
|
Temporal Preference Optimization for Long-Form Video Understanding
|
cs.CV cs.AI cs.CL cs.LG cs.RO
|
Despite significant advancements in video large multimodal models
(video-LMMs), achieving effective temporal grounding in long-form videos
remains a challenge for existing models. To address this limitation, we propose
Temporal Preference Optimization (TPO), a novel post-training framework
designed to enhance the temporal grounding capabilities of video-LMMs through
preference learning. TPO adopts a self-training approach that enables models to
differentiate between well-grounded and less accurate temporal responses by
leveraging curated preference datasets at two granularities: localized temporal
grounding, which focuses on specific video segments, and comprehensive temporal
grounding, which captures extended temporal dependencies across entire video
sequences. By optimizing on these preference datasets, TPO significantly
enhances temporal understanding while reducing reliance on manually annotated
data. Extensive experiments on three long-form video understanding
benchmarks--LongVideoBench, MLVU, and Video-MME--demonstrate the effectiveness
of TPO across two state-of-the-art video-LMMs. Notably, LLaVA-Video-TPO
establishes itself as the leading 7B model on the Video-MME benchmark,
underscoring the potential of TPO as a scalable and efficient solution for
advancing temporal reasoning in long-form video understanding. Project page:
https://ruili33.github.io/tpo_website.
|
2501.13920
|
IMAGINE-E: Image Generation Intelligence Evaluation of State-of-the-art
Text-to-Image Models
|
cs.CV cs.CL cs.LG
|
With the rapid development of diffusion models, text-to-image(T2I) models
have made significant progress, showcasing impressive abilities in prompt
following and image generation. Recently launched models such as FLUX.1 and
Ideogram2.0, along with others like Dall-E3 and Stable Diffusion 3, have
demonstrated exceptional performance across various complex tasks, raising
questions about whether T2I models are moving towards general-purpose
applicability. Beyond traditional image generation, these models exhibit
capabilities across a range of fields, including controllable generation, image
editing, video, audio, 3D, and motion generation, as well as computer vision
tasks like semantic segmentation and depth estimation. However, current
evaluation frameworks are insufficient to comprehensively assess these models'
performance across expanding domains. To thoroughly evaluate these models, we
developed IMAGINE-E and tested six prominent models: FLUX.1, Ideogram2.0,
Midjourney, Dall-E3, Stable Diffusion 3, and Jimeng. Our evaluation is divided
into five key domains: structured output generation, realism and physical
consistency, specific-domain generation, challenging-scenario generation, and
multi-style creation tasks. This comprehensive assessment highlights each
model's strengths and limitations, particularly the outstanding performance of
FLUX.1 and Ideogram2.0 in structured and specific domain tasks, underscoring
the expanding applications and potential of T2I models as foundational AI
tools. This study provides valuable insights into the current state and future
trajectory of T2I models as they evolve towards general-purpose usability.
Evaluation scripts will be released at https://github.com/jylei16/Imagine-e.
|
2501.13921
|
The Breeze 2 Herd of Models: Traditional Chinese LLMs Based on Llama
with Vision-Aware and Function-Calling Capabilities
|
cs.CL
|
Llama-Breeze2 (hereinafter referred to as Breeze2) is a suite of advanced
multi-modal language models, available in 3B and 8B parameter configurations,
specifically designed to enhance Traditional Chinese language representation.
Building upon the Llama 3.2 model family, we continue the pre-training of
Breeze2 on an extensive corpus to enhance the linguistic and cultural heritage
of Traditional Chinese. In addition to language modeling capabilities, we
significantly augment the models with function calling and vision understanding
capabilities. At the time of this publication, as far as we are aware, absent
reasoning-inducing prompts, the Breeze2 models are the strongest-performing in
Traditional Chinese function calling and image understanding in their size class.
The effectiveness of Breeze2 is benchmarked across various tasks, including
Taiwan general knowledge, instruction-following, long context, function
calling, and vision understanding. We are publicly releasing all Breeze2 models
under the Llama 3.2 Community License. We also showcase the capabilities of the
model running on a mobile platform via a mobile application, which we also
open-source.
|
2501.13923
|
Efficient Mitigation of Error Floors in Quantum Error Correction using
Non-Binary Low-Density Parity-Check Codes
|
quant-ph cs.IT math.IT
|
In this paper, we propose an efficient method to reduce error floors in
quantum error correction using non-binary low-density parity-check (LDPC)
codes. We identify and classify cycle structures in the parity-check matrix
where estimated noise becomes trapped, and develop tailored decoding methods
for each cycle type. For Type-I cycles, we propose a method to make the
difference between estimated and true noise degenerate. Type-II cycles are
shown to be uncorrectable, while for Type-III cycles, we utilize the fact that
cycles in non-binary LDPC codes do not necessarily correspond to codewords,
allowing us to estimate the true noise. Our method significantly improves
decoding performance and reduces error floors.
|
2501.13924
|
Towards Robust Multimodal Open-set Test-time Adaptation via Adaptive
Entropy-aware Optimization
|
cs.CV cs.AI cs.LG
|
Test-time adaptation (TTA) has demonstrated significant potential in
addressing distribution shifts between training and testing data. Open-set
test-time adaptation (OSTTA) aims to adapt a source pre-trained model online to
an unlabeled target domain that contains unknown classes. This task becomes
more challenging when multiple modalities are involved. Existing methods have
primarily focused on unimodal OSTTA, often filtering out low-confidence samples
without addressing the complexities of multimodal data. In this work, we
present Adaptive Entropy-aware Optimization (AEO), a novel framework
specifically designed to tackle Multimodal Open-set Test-time Adaptation
(MM-OSTTA) for the first time. Our analysis shows that the entropy difference
between known and unknown samples in the target domain strongly correlates with
MM-OSTTA performance. To leverage this, we propose two key components:
Unknown-aware Adaptive Entropy Optimization (UAE) and Adaptive Modality
Prediction Discrepancy Optimization (AMP). These components enhance the
model's ability to distinguish unknown-class samples during online adaptation by
amplifying the entropy difference between known and unknown samples. To
thoroughly evaluate our proposed methods in the MM-OSTTA setting, we establish
a new benchmark derived from existing datasets. This benchmark includes two
downstream tasks and incorporates five modalities. Extensive experiments across
various domain shift situations demonstrate the efficacy and versatility of the
AEO framework. Additionally, we highlight the strong performance of AEO in
long-term and continual MM-OSTTA settings, both of which are challenging and
highly relevant to real-world applications. Our source code is available at
https://github.com/donghao51/AEO.
|
2501.13925
|
GeoPixel: Pixel Grounding Large Multimodal Model in Remote Sensing
|
cs.CV
|
Recent advances in large multimodal models (LMMs) have recognized
fine-grained grounding as an imperative factor of visual understanding and
dialogue. However, the benefits of such representation in LMMs are limited to
the natural image domain, and these models perform poorly for remote sensing
(RS). The distinct overhead viewpoint, scale variation, and presence of small
objects in high-resolution RS imagery present a unique challenge in
region-level comprehension. Moreover, the development of the grounding
conversation capability of LMMs within RS is hindered by the lack of granular,
RS domain-specific grounded data. Addressing these limitations, we propose
GeoPixel - the first end-to-end high resolution RS-LMM that supports
pixel-level grounding. This capability allows fine-grained visual perception by
generating interleaved masks in conversation. GeoPixel supports up to 4K HD
resolution in any aspect ratio, ideal for high-precision RS image analysis. To
support the grounded conversation generation (GCG) in RS imagery, we curate a
visually grounded dataset GeoPixelD through a semi-automated pipeline that
utilizes set-of-marks prompting and spatial priors tailored for RS data to
methodically control the data generation process. GeoPixel demonstrates
superior performance in pixel-level comprehension, surpassing existing LMMs in
both single-target and multi-target segmentation tasks. Our methodological
ablation studies validate the effectiveness of each component in the overall
architecture. Our code and data will be publicly released.
|
2501.13926
|
Can We Generate Images with CoT? Let's Verify and Reinforce Image
Generation Step by Step
|
cs.CV cs.AI cs.CL
|
Chain-of-Thought (CoT) reasoning has been extensively explored in large
models to tackle complex understanding tasks. However, it remains an open
question whether such strategies can be applied to verifying and reinforcing
image generation scenarios. In this paper, we provide the first comprehensive
investigation of the potential of CoT reasoning to enhance autoregressive image
generation. We focus on three techniques: scaling test-time computation for
verification, aligning model preferences with Direct Preference Optimization
(DPO), and integrating these techniques for complementary effects. Our results
demonstrate that these approaches can be effectively adapted and combined to
significantly improve image generation performance. Furthermore, given the
pivotal role of reward models in our findings, we propose the Potential
Assessment Reward Model (PARM) and PARM++, specialized for autoregressive image
generation. PARM adaptively assesses each generation step through a potential
assessment approach, merging the strengths of existing reward models, and
PARM++ further introduces a reflection mechanism to self-correct unsatisfactory
generated images. Using our investigated reasoning strategies, we enhance a
baseline model, Show-o, to achieve superior results, with a significant +24%
improvement on the GenEval benchmark, surpassing Stable Diffusion 3 by +15%. We
hope our study provides unique insights and paves a new path for integrating
CoT reasoning with autoregressive image generation. Code and models are
released at https://github.com/ZiyuGuo99/Image-Generation-CoT
|
2501.13927
|
CRPO: Confidence-Reward Driven Preference Optimization for Machine
Translation
|
cs.CL cs.AI cs.CV
|
Large language models (LLMs) have shown great potential in natural language
processing tasks, but their application to machine translation (MT) remains
challenging due to pretraining on English-centric data and the complexity of
reinforcement learning from human feedback (RLHF). Direct Preference
Optimization (DPO) has emerged as a simpler and more efficient alternative, but
its performance depends heavily on the quality of preference data. To address
this, we propose Confidence-Reward driven Preference Optimization (CRPO), a
novel method that combines reward scores with model confidence to improve data
selection for fine-tuning. CRPO selects challenging sentence pairs where the
model is uncertain or underperforms, leading to more effective learning. While
primarily designed for LLMs, CRPO also generalizes to encoder-decoder models
like NLLB, demonstrating its versatility. Empirical results show that CRPO
outperforms existing methods such as RS-DPO, RSO and MBR score in both
translation accuracy and data efficiency.
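The abstract describes selecting pairs where the model is uncertain yet the reward signal is clear; a hypothetical scoring rule capturing that intuition might look like the following (the exact CRPO formula is not given in the abstract, so the score and field names here are illustrative assumptions):

```python
# Hypothetical confidence-reward selection rule: favor preference pairs with a
# large reward gap where the model's own confidence is low. Illustrative only;
# not the paper's exact scoring function.
def select_pairs(candidates, top_k):
    # candidates: preference pairs annotated with reward scores and a
    # model-confidence value (e.g. derived from sequence log-probability)
    def score(c):
        reward_gap = c["reward_chosen"] - c["reward_rejected"]
        return reward_gap - c["confidence"]
    return sorted(candidates, key=score, reverse=True)[:top_k]

candidates = [
    {"id": "a", "reward_chosen": 2.0, "reward_rejected": 0.0, "confidence": 0.5},
    {"id": "b", "reward_chosen": 2.0, "reward_rejected": 0.0, "confidence": 1.5},
    {"id": "c", "reward_chosen": 0.5, "reward_rejected": 0.0, "confidence": 0.1},
]
selected = select_pairs(candidates, top_k=2)
```

The selected pairs would then be fed to DPO-style fine-tuning, concentrating the training signal on examples the model has not already mastered.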
|
2501.13928
|
Fast3R: Towards 3D Reconstruction of 1000+ Images in One Forward Pass
|
cs.CV cs.AI cs.GR cs.RO
|
Multi-view 3D reconstruction remains a core challenge in computer vision,
particularly in applications requiring accurate and scalable representations
across diverse perspectives. Current leading methods such as DUSt3R employ a
fundamentally pairwise approach, processing images in pairs and necessitating
costly global alignment procedures to reconstruct from multiple views. In this
work, we propose Fast 3D Reconstruction (Fast3R), a novel multi-view
generalization to DUSt3R that achieves efficient and scalable 3D reconstruction
by processing many views in parallel. Fast3R's Transformer-based architecture
forwards N images in a single forward pass, bypassing the need for iterative
alignment. Through extensive experiments on camera pose estimation and 3D
reconstruction, Fast3R demonstrates state-of-the-art performance, with
significant improvements in inference speed and reduced error accumulation.
These results establish Fast3R as a robust alternative for multi-view
applications, offering enhanced scalability without compromising reconstruction
accuracy.
|
2501.13935
|
Low rank matrix completion and realization of graphs: results and
problems
|
math.HO cs.DM cs.LG math.CO math.GT
|
The Netflix problem (from machine learning) asks the following. Given a
ratings matrix in which each entry $(i,j)$ represents the rating of movie $j$
by customer $i$, if customer $i$ has watched movie $j$, and is otherwise
missing, we would like to predict the remaining entries in order to make good
recommendations to customers on what to watch next. The remaining entries are
predicted so as to minimize the {\it rank} of the completed matrix.
In this survey we study a more general problem, in which instead of knowing
specific matrix elements, we know linear relations on such elements. We
describe applications of these results to embeddings of graphs in surfaces
(more precisely, embeddings with rotation systems, and embeddings modulo 2).
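As a concrete toy instance of the Netflix setting described above, the sketch below completes a rank-2 ratings matrix with a few hidden entries by alternating least squares, a standard heuristic for low-rank completion (not a method from this survey; sizes and the hidden-entry pattern are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# ground-truth rank-2 "ratings" matrix: 6 customers x 5 movies
U_true = rng.standard_normal((6, 2))
V_true = rng.standard_normal((5, 2))
M = U_true @ V_true.T
mask = np.ones(M.shape, dtype=bool)
for i, j in [(0, 0), (1, 2), (3, 4)]:  # unwatched movies: entries to predict
    mask[i, j] = False

# alternating least squares: fit rank-2 factors using observed entries only
r, lam = 2, 1e-9
U = rng.standard_normal((6, r))
V = rng.standard_normal((5, r))
for _ in range(200):
    for i in range(M.shape[0]):
        obs = mask[i]
        U[i] = np.linalg.solve(V[obs].T @ V[obs] + lam * np.eye(r),
                               V[obs].T @ M[i, obs])
    for j in range(M.shape[1]):
        obs = mask[:, j]
        V[j] = np.linalg.solve(U[obs].T @ U[obs] + lam * np.eye(r),
                               U[obs].T @ M[obs, j])

# prediction error on the hidden entries of the completed matrix U @ V.T
hidden_err = np.abs(U @ V.T - M)[~mask].max()
```

Because the data are exactly rank 2 and each row and column retains enough observed entries, the hidden ratings are recovered up to numerical precision; the survey's generalization replaces known entries with known linear relations among entries.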
|
2501.13936
|
Evaluating Computational Accuracy of Large Language Models in Numerical
Reasoning Tasks for Healthcare Applications
|
cs.AI cs.CL cs.LG
|
Large Language Models (LLMs) have emerged as transformative tools in the
healthcare sector, demonstrating remarkable capabilities in natural language
understanding and generation. However, their proficiency in numerical
reasoning, particularly in high-stakes domains like clinical applications,
remains underexplored. Numerical reasoning is critical in healthcare
applications, influencing patient outcomes, treatment planning, and resource
allocation. This study investigates the computational accuracy of LLMs in
numerical reasoning tasks within healthcare contexts. Using a curated dataset
of 1,000 numerical problems, encompassing real-world scenarios such as dosage
calculations and lab result interpretations, the performance of a refined LLM
based on the GPT-3 architecture was evaluated. The methodology includes prompt
engineering, integration of fact-checking pipelines, and application of
regularization techniques to enhance model accuracy and generalization. Key
metrics such as precision, recall, and F1-score were utilized to assess the
model's efficacy. The results indicate an overall accuracy of 84.10%, with
stronger performance on straightforward numerical tasks and difficulties with
multi-step reasoning. The integration of a fact-checking pipeline improved
accuracy by 11%, underscoring the importance of validation mechanisms. This
research highlights the potential of LLMs in healthcare numerical reasoning and
identifies avenues for further refinement to support critical decision-making
in clinical environments. The findings aim to contribute to the development of
reliable, interpretable, and contextually relevant AI tools for healthcare.
|
2501.13941
|
GaussMark: A Practical Approach for Structural Watermarking of Language
Models
|
cs.CR cs.AI cs.CL cs.LG
|
Recent advances in Large Language Models (LLMs) have led to significant
improvements in natural language processing tasks, but their ability to
generate human-quality text raises significant ethical and operational concerns
in settings where it is important to recognize whether or not a given text was
generated by a human. Thus, recent work has focused on developing techniques
for watermarking LLM-generated text, i.e., introducing an almost imperceptible
signal that allows a provider equipped with a secret key to determine if given
text was generated by their model. Current watermarking techniques are often
not practical due to concerns with generation latency, detection time,
degradation in text quality, or robustness. Many of these drawbacks come from
the focus on token-level watermarking, which ignores the inherent structure of
text. In this work, we introduce a new scheme, GaussMark, that is simple and
efficient to implement, has formal statistical guarantees on its efficacy,
comes at no cost in generation latency, and embeds the watermark into the
weights of the model itself, providing a structural watermark. Our approach is
based on Gaussian independence testing and is motivated by recent empirical
observations that minor additive corruptions to LLM weights can result in
models of identical (or even improved) quality. We show that by adding a small
amount of Gaussian noise to the weights of a given LLM, we can watermark the
model in a way that is statistically detectable by a provider who retains the
secret key. We provide formal statistical bounds on the validity and power of
our procedure. Through an extensive suite of experiments, we demonstrate that
GaussMark is reliable, efficient, and relatively robust to corruptions such as
insertions, deletions, substitutions, and roundtrip translations and can be
instantiated with essentially no loss in model quality.
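The keyed-Gaussian idea can be illustrated with a small toy. Note the caveats: this sketch assumes direct access to both the base and watermarked weights, whereas the paper's detector is a statistical test run by the key-holding provider on generated text; the key value, noise scale, and matrix size are arbitrary:

```python
import numpy as np

SECRET_KEY = 1234  # provider's secret key (hypothetical value)

def watermark_weights(W, key, sigma=1e-2):
    # add a small, key-seeded Gaussian perturbation to one weight matrix
    rng = np.random.default_rng(key)
    return W + sigma * rng.standard_normal(W.shape)

def detection_stat(W_obs, W_base, key, sigma=1e-2):
    # correlate the observed perturbation with the key-seeded direction;
    # without the watermark this normalized statistic stays near zero
    rng = np.random.default_rng(key)
    g = rng.standard_normal(W_base.shape)
    delta = (W_obs - W_base) / sigma
    return float((delta * g).sum() / np.linalg.norm(g))

base = np.random.default_rng(0).standard_normal((64, 64))
marked = watermark_weights(base, SECRET_KEY)
```

For the watermarked matrix the statistic concentrates near the norm of the key-seeded noise (large and positive), while an unmarked matrix yields a near-zero value, which is the separation the formal test exploits.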
|
2501.13942
|
Prompt-Based Monte Carlo Tree Search for Mitigating Hallucinations in
Large Models
|
cs.AI
|
With the rapid development of large models in the field of artificial
intelligence, how to enhance their application capabilities in handling complex
problems in the field of scientific research remains a challenging problem to
be solved. This study proposes an improved Monte Carlo Tree Search (MCTS)
method based on prompt words. In the simulation search stage, it introduces
dynamic adjustment of exploration parameters and adaptive selection strategies,
which can better balance exploration and exploitation, thereby reducing the
hallucination phenomenon. This paper takes the four subsets of the SciEval
dataset as the test objects, and compares the Glm-4-flash+Improved MCTS method
with the methods of several existing models. The results show that the Improved
MCTS method performs better, providing new ideas and methods for the
application of large models in the field of scientific research.
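The abstract does not specify the adjustment schedule, so the following is a hypothetical sketch of a UCT selection rule whose exploration constant decays with node depth and visit count, illustrating the "dynamic adjustment of exploration parameters" idea (the decay constants are invented for illustration):

```python
import math

def adaptive_c(c0, depth, visits):
    # hypothetical schedule: shrink the exploration constant with depth and
    # visit count, shifting from exploration toward exploitation over time
    return c0 / (1.0 + 0.1 * depth) / math.sqrt(1.0 + visits / 50.0)

def uct_score(total_reward, visits, parent_visits, c):
    # standard UCT value: mean reward plus an exploration bonus
    return total_reward / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select(children, parent_visits, depth, c0=1.4):
    # children: list of (total_reward, visit_count); returns the child index
    # maximizing UCT under the adaptive exploration constant
    scores = [
        uct_score(q, n, parent_visits, adaptive_c(c0, depth, n))
        for q, n in children
    ]
    return scores.index(max(scores))
```

For example, with children `[(5, 10), (3, 3)]` and 13 parent visits, the lightly visited second child keeps a larger exploration bonus and is selected, whereas a heavily visited node's bonus shrinks, which is the exploration-exploitation balance the method targets.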
|
2501.13943
|
Language Representation Favored Zero-Shot Cross-Domain Cognitive
Diagnosis
|
cs.CL cs.AI cs.CY cs.LG
|
Cognitive diagnosis aims to infer students' mastery levels based on their
historical response logs. However, existing cognitive diagnosis models (CDMs),
which rely on ID embeddings, often have to train specific models on specific
domains. This limitation may hinder their direct practical application in
various target domains, such as different subjects (e.g., Math, English and
Physics) or different education platforms (e.g., ASSISTments, Junyi Academy and
Khan Academy). To address this issue, this paper proposes the language
representation favored zero-shot cross-domain cognitive diagnosis (LRCD).
Specifically, LRCD first analyzes the behavior patterns of students, exercises
and concepts in different domains, and then describes the profiles of students,
exercises and concepts using textual descriptions. Via recent advanced
text-embedding modules, these profiles can be transformed into vectors in a
unified language space. Moreover, to address the discrepancy between the
language space and the cognitive diagnosis space, we propose language-cognitive
mappers in LRCD to learn the mapping from the former to the latter. Then, these
profiles can be easily and efficiently integrated and trained with existing
CDMs. Extensive experiments show that training LRCD on real-world datasets can
achieve commendable zero-shot performance across different target domains, and
in some cases, it can even achieve competitive performance with some classic
CDMs trained on the full response data on target domains. Notably, we
surprisingly find that LRCD can also provide interesting insights into the
differences between various subjects (such as humanities and sciences) and
sources (such as primary and secondary education).
|
2501.13944
|
Fanar: An Arabic-Centric Multimodal Generative AI Platform
|
cs.CL cs.AI
|
We present Fanar, a platform for Arabic-centric multimodal generative AI
systems, that supports language, speech and image generation tasks. At the
heart of Fanar are Fanar Star and Fanar Prime, two highly capable Arabic Large
Language Models (LLMs) that are best in the class on well established
benchmarks for similar sized models. Fanar Star is a 7B (billion) parameter
model that was trained from scratch on nearly 1 trillion clean and deduplicated
Arabic, English and Code tokens. Fanar Prime is a 9B parameter model
continually trained from the Gemma-2 9B base model on the same 1 trillion token
set. Both models are concurrently deployed and designed to address different
types of prompts transparently routed through a custom-built orchestrator. The
Fanar platform provides many other capabilities including a customized Islamic
Retrieval Augmented Generation (RAG) system for handling religious prompts, a
Recency RAG for summarizing information about current or recent events that
have occurred after the pre-training data cut-off date. The platform provides
additional cognitive capabilities including in-house bilingual speech
recognition that supports multiple Arabic dialects, voice and image generation
that is fine-tuned to better reflect regional characteristics. Finally, Fanar
provides an attribution service that can be used to verify the authenticity of
fact based generated content.
The design, development, and implementation of Fanar was entirely undertaken
at Hamad Bin Khalifa University's Qatar Computing Research Institute (QCRI) and
was sponsored by Qatar's Ministry of Communications and Information Technology
to enable sovereign AI technology development.
|
2501.13945
|
Self-Explanation in Social AI Agents
|
cs.CL cs.AI cs.CY
|
Social AI agents interact with members of a community, thereby changing the
behavior of the community. For example, in online learning, an AI social
assistant may connect learners and thereby enhance social interaction. These
social AI assistants too need to explain themselves in order to enhance
transparency and trust with the learners. We present a method of
self-explanation that uses introspection over a self-model of an AI social
assistant. The self-model is captured as a functional model that specifies how
the methods of the agent use knowledge to achieve its tasks. The process of
generating self-explanations uses Chain of Thought to reflect on the self-model
and ChatGPT to provide explanations about its functioning. We evaluate the
self-explanation of the AI social assistant for completeness and correctness.
We also report on its deployment in a live class.
|
2501.13946
|
Hallucination Mitigation using Agentic AI Natural Language-Based
Frameworks
|
cs.CL cs.AI cs.MA
|
Hallucinations remain a significant challenge in current Generative AI
models, undermining trust in AI systems and their reliability. This study
investigates how orchestrating multiple specialized Artificial Intelligent
Agents can help mitigate such hallucinations, with a focus on systems
leveraging Natural Language Processing (NLP) to facilitate seamless agent
interactions. To achieve this, we design a pipeline that introduces over three
hundred prompts, purposefully crafted to induce hallucinations, into a
front-end agent. The outputs are then systematically reviewed and refined by
second- and third-level agents, each employing distinct large language models
and tailored strategies to detect unverified claims, incorporate explicit
disclaimers, and clarify speculative content. Additionally, we introduce a set
of novel Key Performance Indicators (KPIs) specifically designed to evaluate
hallucination score levels. A dedicated fourth-level AI agent is employed to
evaluate these KPIs, providing detailed assessments and ensuring accurate
quantification of shifts in hallucination-related behaviors. A core component
of this investigation is the use of the OVON (Open Voice Network) framework,
which relies on universal NLP-based interfaces to transfer contextual
information among agents. Through structured JSON messages, each agent
communicates its assessment of the hallucination likelihood and the reasons
underlying questionable content, thereby enabling the subsequent stage to
refine the text without losing context. The results demonstrate that employing
multiple specialized agents capable of interoperating with each other through
NLP-based agentic frameworks can yield promising outcomes in hallucination
mitigation, ultimately bolstering trust within the AI community.
|
2501.13947
|
A Comprehensive Survey on Integrating Large Language Models with
Knowledge-Based Methods
|
cs.CL cs.AI
|
The rapid development of artificial intelligence has brought about
substantial advancements in the field. One promising direction is the
integration of Large Language Models (LLMs) with structured knowledge-based
systems. This approach aims to enhance AI capabilities by combining the
generative language understanding of LLMs with the precise knowledge
representation of structured systems. This survey explores the synergy between
LLMs and knowledge bases, focusing on real-world applications and addressing
associated technical, operational, and ethical challenges. Through a
comprehensive literature review, the study identifies critical issues and
evaluates existing solutions. The paper highlights the benefits of integrating
generative AI with knowledge bases, including improved data contextualization,
enhanced model accuracy, and better utilization of knowledge resources. The
findings provide a detailed overview of the current state of research, identify
key gaps, and offer actionable recommendations. These insights contribute to
advancing AI technologies and support their practical deployment across various
sectors.
|
2501.13948
|
Longitudinal Abuse and Sentiment Analysis of Hollywood Movie Dialogues
using LLMs
|
cs.CL cs.AI
|
Over the past decades, there has been an increasing concern about the
prevalence of abusive and violent content in Hollywood movies. This study uses
Large Language Models (LLMs) to explore the longitudinal abuse and sentiment
analysis of Hollywood Oscar and blockbuster movie dialogues from 1950 to 2024.
By employing fine-tuned LLMs, we analyze subtitles for over a thousand movies
categorised into four genres to examine the trends and shifts in emotional and
abusive content over the past seven decades. Our findings reveal significant
temporal changes in movie dialogues, which reflect broader social and cultural
influences. Overall, the emotional tendencies in the films are diverse, and the
detection of abusive content also exhibits significant fluctuations. The
results show a gradual rise in abusive content in recent decades, reflecting
social norms and regulatory policy changes. Genres such as thrillers still
present a higher frequency of abusive content that emphasises the ongoing
narrative role of violence and conflict. At the same time, underlying positive
emotions such as humour and optimism remain prevalent in most of the movies.
Furthermore, the gradual increase of abusive content in movie dialogues has
been significant over the last two decades, where Oscar-nominated movies
overtook the top ten blockbusters.
|
2501.13949
|
Can OpenAI o1 Reason Well in Ophthalmology? A 6,990-Question
Head-to-Head Evaluation Study
|
cs.CL cs.AI
|
Question: What is the performance and reasoning ability of OpenAI o1 compared
to other large language models in addressing ophthalmology-specific questions?
Findings: This study evaluated OpenAI o1 and five LLMs using 6,990
ophthalmological questions from MedMCQA. O1 achieved the highest accuracy
(0.88) and macro-F1 score but ranked third in reasoning capabilities based on
text-generation metrics. Across subtopics, o1 ranked first in ``Lens'' and
``Glaucoma'' but second to GPT-4o in ``Corneal and External Diseases'',
``Vitreous and Retina'' and ``Oculoplastic and Orbital Diseases''. Subgroup
analyses showed o1 performed better on queries with longer ground truth
explanations.
Meaning: O1's reasoning enhancements may not fully extend to ophthalmology,
underscoring the need for domain-specific refinements to optimize performance
in specialized fields like ophthalmology.
|
2501.13950
|
DEFEND: A Large-scale 1M Dataset and Foundation Model for Tobacco
Addiction Prevention
|
cs.CV
|
While tobacco advertising innovates at unprecedented speed, traditional
surveillance methods remain frozen in time, especially in the context of social
media. The lack of large-scale, comprehensive datasets and sophisticated
monitoring systems has created a widening gap between industry advancement and
public health oversight. This paper addresses this critical challenge by
introducing Tobacco-1M, a comprehensive dataset of one million tobacco product
images with hierarchical labels spanning 75 product categories, and DEFEND, a
novel foundation model for tobacco product understanding. Our approach
integrates a Feature Enhancement Module for rich multimodal representation
learning, a Local-Global Visual Coherence mechanism for detailed feature
discrimination, and an Enhanced Image-Text Alignment strategy for precise
product characterization. Experimental results demonstrate DEFEND's superior
performance, achieving 83.1% accuracy in product classification and 73.8% in
visual question-answering tasks, outperforming existing methods by significant
margins. Moreover, the model exhibits robust zero-shot learning capabilities
with 45.6% accuracy on novel product categories. This work provides regulatory
bodies and public health researchers with powerful tools for monitoring
emerging tobacco products and marketing strategies, potentially revolutionizing
approaches to tobacco control and public health surveillance.
|
2501.13951
|
A Layered Multi-Expert Framework for Long-Context Mental Health
Assessments
|
cs.CL cs.AI
|
Long-form mental health assessments pose unique challenges for large language
models (LLMs), which often exhibit hallucinations or inconsistent reasoning
when handling extended, domain-specific contexts. We introduce Stacked
Multi-Model Reasoning (SMMR), a layered framework that leverages multiple LLMs
and specialized smaller models as coequal 'experts'. Early layers isolate
short, discrete subtasks, while later layers integrate and refine these partial
outputs through more advanced long-context models. We evaluate SMMR on the
DAIC-WOZ depression-screening dataset and 48 curated case studies with
psychiatric diagnoses, demonstrating consistent improvements over single-model
baselines in terms of accuracy, F1-score, and PHQ-8 error reduction. By
harnessing diverse 'second opinions', SMMR mitigates hallucinations, captures
subtle clinical nuances, and enhances reliability in high-stakes mental health
assessments. Our findings underscore the value of multi-expert frameworks for
more trustworthy AI-driven screening.
|
2501.13952
|
The Dual-use Dilemma in LLMs: Do Empowering Ethical Capacities Make a
Degraded Utility?
|
cs.CL cs.AI
|
Recent years have witnessed extensive efforts to enhance Large Language
Models (LLMs) across various domains, alongside growing attention to their
ethical implications. However, a critical challenge remains largely overlooked:
LLMs must balance between rejecting harmful requests for safety and
accommodating legitimate ones for utility. This paper presents a Direct
Preference Optimization (DPO) based alignment framework that achieves better
overall performance by addressing this ethical-utility trade-off, using
chemical domain applications as a proof-of-concept. Our alignment pipeline
starts with a GPT-assisted three-phase data generation scheme, in which we
create LibraChemQA, a chemical question-answering dataset comprising 31.6k
triplet instances. By incorporating an innovative balanced seed in the data
generation process, our framework systematically considers both legitimate and
illegitimate requests. The framework also introduces a rephrasing mechanism for
efficient data augmentation that enhances the model's chemical comprehension.
We further develop a novel hybrid evaluation scheme with LLM judges for precise
assessment of both safety and utility. Experimental results demonstrate our
model's substantial improvements in overall performance where both safety and
utility are considered - our resulting model, LibraChem, outperforms leading
LLMs including Claude-3, GPT-4o, and LLaMA-3 by margins of 13.44%, 7.16%, and
7.10% respectively on our released benchmark.
|
2501.13953
|
Redundancy Principles for MLLMs Benchmarks
|
cs.CL cs.AI
|
With the rapid iteration of Multi-modality Large Language Models (MLLMs) and
the evolving demands of the field, the number of benchmarks produced annually
has surged into the hundreds. This rapid growth has inevitably led to
significant redundancy among benchmarks. Therefore, it is crucial to take a
step back and critically assess the current state of redundancy and propose
targeted principles for constructing effective MLLM benchmarks. In this paper,
we focus on redundancy from three key perspectives: 1) Redundancy of benchmark
capability dimensions, 2) Redundancy in the number of test questions, and 3)
Cross-benchmark redundancy within specific domains. Through a comprehensive
analysis of hundreds of MLLMs' performance across more than 20 benchmarks, we
aim to quantitatively measure the level of redundancy in existing MLLM
evaluations, provide valuable insights to guide the future development of MLLM
benchmarks, and offer strategies to refine and address redundancy issues
effectively.
|