| id | title | categories | abstract |
|---|---|---|---|
2502.08870
|
When and why randomised exploration works (in linear bandits)
|
cs.LG stat.ML
|
We provide an approach for the analysis of randomised exploration algorithms
like Thompson sampling that does not rely on forced optimism or posterior
inflation. With this, we demonstrate that in the $d$-dimensional linear bandit
setting, when the action space is smooth and strongly convex, randomised
exploration algorithms enjoy an $n$-step regret bound of the order $O(d\sqrt{n}
\log(n))$. Notably, this shows for the first time that there exist non-trivial
linear bandit settings where Thompson sampling can achieve optimal dimension
dependence in the regret.
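The randomised-exploration setting the abstract analyzes can be illustrated with a minimal numpy sketch of vanilla Thompson sampling (no forced optimism or posterior inflation) on a finite-armed linear bandit. The arm set, noise level, and Gaussian prior below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def linear_thompson_sampling(actions, theta_star, n_steps=300, noise_sd=0.1, seed=0):
    """Vanilla Thompson sampling for a linear bandit (numpy sketch).

    actions: (K, d) array of arms; theta_star: true parameter (toy input)."""
    rng = np.random.default_rng(seed)
    d = actions.shape[1]
    # Gaussian posterior over theta via Bayesian ridge regression: precision B, moments b.
    B = np.eye(d)
    b = np.zeros(d)
    means = actions @ theta_star
    opt = means.max()
    regret = 0.0
    for _ in range(n_steps):
        mu = np.linalg.solve(B, b)
        cov = np.linalg.inv(B)
        theta_t = rng.multivariate_normal(mu, cov)    # sample from the posterior
        k = int(np.argmax(actions @ theta_t))         # act greedily w.r.t. the sample
        x = actions[k]
        r = x @ theta_star + noise_sd * rng.normal()  # noisy linear reward
        B += np.outer(x, x)                           # posterior update
        b += r * x
        regret += opt - means[k]
    return regret
```

The sketch accumulates pseudo-regret against the best fixed arm; the paper's contribution is the analysis showing such unmodified sampling achieves $O(d\sqrt{n}\log(n))$ regret on smooth, strongly convex action sets.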
|
2502.08873
|
Robust Graph-Based Semi-Supervised Learning via $p$-Conductances
|
cs.LG cs.DM math.OC
|
We study the problem of semi-supervised learning on graphs in the regime
where data labels are scarce or possibly corrupted. We propose an approach
called $p$-conductance learning that generalizes the $p$-Laplace and Poisson
learning methods by introducing an objective reminiscent of $p$-Laplacian
regularization and an affine relaxation of the label constraints. This leads to
a family of probability measure mincut programs that balance sparse edge
removal with accurate distribution separation. Our theoretical analysis
connects these programs to well-known variational and probabilistic problems on
graphs (including randomized cuts, effective resistance, and Wasserstein
distance) and provides motivation for robustness when labels are diffused via
the heat kernel. Computationally, we develop a semismooth Newton-conjugate
gradient algorithm and extend it to incorporate class-size estimates when
converting the continuous solutions into label assignments. Empirical results
on computer vision and citation datasets demonstrate that our approach achieves
state-of-the-art accuracy in low label-rate, corrupted-label, and partial-label
regimes.
|
2502.08874
|
Data Sensor Fusion In Digital Twin Technology For Enhanced Capabilities
In A Home Environment
|
cs.AI cs.LG eess.SP
|
This paper investigates the integration of data sensor fusion in digital twin
technology to bolster home environment capabilities, particularly in the
context of challenges brought on by the coronavirus pandemic and its economic
effects. The study underscores the crucial role of digital transformation in
not just adapting to, but also mitigating disruptions during the fourth
industrial revolution. Using the Wit Motion sensor, data were collected for
activities such as walking, working, sitting, and lying, captured as
accelerometer, gyroscope, and magnetometer readings. The research integrates
Cyber-physical systems, IoT, AI, and robotics to fortify digital twin
capabilities.
The paper compares sensor fusion methods, including feature-level fusion,
decision-level fusion, and Kalman filter fusion, alongside machine learning
models like SVM, GBoost, and Random Forest to assess model effectiveness.
Results show that sensor fusion significantly improves the accuracy and
reliability of these models, as it compensates for individual sensor
weaknesses, particularly with magnetometers. Despite higher accuracy in ideal
conditions, integrating data from multiple sensors ensures more consistent and
reliable results in real-world settings, thereby establishing a robust system
that can be confidently applied in practical scenarios.
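The feature-level fusion this abstract compares can be sketched as concatenating per-sensor feature vectors into one input for a downstream classifier. The mean/standard-deviation features and window shape below are assumptions for illustration, not the paper's pipeline:

```python
import numpy as np

def extract_features(window):
    """Simple time-domain features per axis: mean and standard deviation.

    window: (n_samples, 3) array of x/y/z readings from one sensor."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

def fuse_feature_level(accel, gyro, mag):
    """Feature-level fusion: concatenate the feature vectors extracted from
    each sensor stream, so a single classifier sees all modalities at once."""
    return np.concatenate([extract_features(accel),
                           extract_features(gyro),
                           extract_features(mag)])
```

Decision-level fusion would instead classify each stream separately and combine the predictions; Kalman filter fusion combines the raw signals through a state-space model before feature extraction.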
|
2502.08881
|
WENDy for Nonlinear-in-Parameters ODEs
|
cs.LG stat.ME stat.ML
|
The Weak-form Estimation of Non-linear Dynamics (WENDy) algorithm is extended
to accommodate systems of ordinary differential equations that are
nonlinear-in-parameters. The extension rests on derived analytic expressions
for a likelihood function, its gradient and its Hessian matrix. WENDy makes use
of these to approximate a maximum likelihood estimator based on optimization
routines suited for non-convex optimization problems. The resulting parameter
estimation algorithm has better accuracy, a substantially larger domain of
convergence, and is often orders of magnitude faster than the conventional
output error least squares method (based on forward solvers).
The algorithm is efficiently implemented in Julia. We demonstrate the
algorithm's ability to accommodate the weak form optimization for both additive
normal and multiplicative log-normal noise, and present results on a suite of
benchmark systems of ordinary differential equations. In order to demonstrate
the practical benefits of our approach, we present extensive comparisons
between our method and output error methods in terms of accuracy, precision,
bias, and coverage.
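The numerical core the abstract describes, maximizing a likelihood from analytic gradient and Hessian expressions, can be sketched as a generic Newton iteration. The interface below is hypothetical and stands in for WENDy's actual weak-form likelihood:

```python
import numpy as np

def newton_mle(grad, hess, theta0, steps=20):
    """Generic Newton iteration for maximising a log-likelihood, given
    callables for its analytic gradient and Hessian (as WENDy derives).

    Note: a production solver would add safeguards (line search,
    trust region) for the non-convex case the paper targets."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(steps):
        theta = theta - np.linalg.solve(hess(theta), grad(theta))
    return theta
```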
|
2502.08882
|
2D Integrated Bayesian Tomography of Plasma Electron Density Profile for
HL-3 Based on Gaussian Process
|
cs.LG
|
This paper introduces an integrated Bayesian model that combines line
integral measurements and point values using Gaussian Process (GP). The
proposed method leverages Gaussian Process Regression (GPR) to incorporate
point values into 2D profiles and employs coordinate mapping to integrate
magnetic flux information for 2D inversion. The average relative error of the
reconstructed profile, using the integrated Bayesian tomography model with
normalized magnetic flux, is as low as $3.60\times10^{-4}$. Additionally, sensitivity
tests were conducted on the number of grids, the standard deviation of
synthetic diagnostic data, and noise levels, laying a solid foundation for the
application of the model to experimental data. This work not only achieves
accurate 2D inversion using the integrated Bayesian model but also provides a
robust framework for decoupling pressure information from equilibrium
reconstruction, thus making it possible to optimize equilibrium reconstruction
using inversion results.
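The Gaussian Process Regression step used to incorporate point values can be sketched in a few lines of numpy. The RBF kernel, zero prior mean, and hyperparameters below are illustrative assumptions, not the paper's tuned model:

```python
import numpy as np

def rbf(a, b, ell=0.3, sf=1.0):
    """Squared-exponential kernel between 1D input arrays a and b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

def gp_predict(x_train, y_train, x_test, noise=1e-3):
    """GP regression posterior mean and variance (zero prior mean)."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    var = rbf(x_test, x_test).diagonal() - np.einsum(
        'ij,ij->i', Ks, np.linalg.solve(K, Ks.T).T)
    return mean, var
```

The integrated model in the paper additionally conditions on line-integral measurements and maps coordinates through the magnetic flux surfaces; this sketch shows only the point-value regression component.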
|
2502.08884
|
ShapeLib: designing a library of procedural 3D shape abstractions with
Large Language Models
|
cs.CV cs.AI cs.GR
|
Procedural representations are desirable, versatile, and popular shape
encodings. Authoring them, either manually or using data-driven procedures,
remains challenging, as a well-designed procedural representation should be
compact, intuitive, and easy to manipulate. A long-standing problem in shape
analysis studies how to discover a reusable library of procedural functions,
with semantically aligned exposed parameters, that can explain an entire shape
family. We present ShapeLib as the first method that leverages the priors of
frontier LLMs to design a library of 3D shape abstraction functions. Our system
accepts two forms of design intent: text descriptions of functions to include
in the library and a seed set of exemplar shapes. We discover procedural
abstractions that match this design intent by proposing, and then validating,
function applications and implementations. The discovered shape functions in
the library are not only expressive but also generalize beyond the seed set to
a full family of shapes. We train a recognition network that learns to infer
shape programs based on our library from different visual modalities
(primitives, voxels, point clouds). Our shape functions have parameters that
are semantically interpretable and can be modified to produce plausible shape
variations. We show that this allows inferred programs to be successfully
manipulated by an LLM given a text prompt. We evaluate ShapeLib on different
datasets and show clear advantages over existing methods and alternative
formulations.
|
2502.08886
|
Generative AI for Internet of Things Security: Challenges and
Opportunities
|
cs.CR cs.AI
|
As Generative AI (GenAI) continues to gain prominence and utility across
various sectors, their integration into the realm of Internet of Things (IoT)
security evolves rapidly. This work delves into an examination of the
state-of-the-art literature and practical applications on how GenAI could
improve and be applied in the security landscape of IoT. Our investigation aims
to map the current state of GenAI implementation within IoT security, exploring
their potential to fortify security measures further. Through the compilation,
synthesis, and analysis of the latest advancements in GenAI technologies
applied to IoT, this paper not only introduces fresh insights into the field,
but also lays the groundwork for future research directions. It explains the
prevailing challenges within IoT security, discusses the effectiveness of GenAI
in addressing these issues, and identifies significant research gaps through
MITRE Mitigations. Accompanied with three case studies, we provide a
comprehensive overview of the progress and future prospects of GenAI
applications in IoT security. This study serves as a foundational resource to
improve IoT security through the innovative application of GenAI, thus
contributing to the broader discourse on IoT security and technology
integration.
|
2502.08888
|
LLM-Enhanced Multiple Instance Learning for Joint Rumor and Stance
Detection with Social Context Information
|
cs.CL
|
The proliferation of misinformation, such as rumors on social media, has
drawn significant attention, prompting various expressions of stance among
users. Although rumor detection and stance detection are distinct tasks, they
can complement each other. Rumors can be identified by cross-referencing
stances in related posts, and stances are influenced by the nature of the
rumor. However, existing stance detection methods often require post-level
stance annotations, which are costly to obtain. We propose a novel LLM-enhanced
MIL approach to jointly predict post stance and claim class labels, supervised
solely by claim labels, using an undirected microblog propagation model. Our
weakly supervised approach relies only on bag-level labels of claim veracity,
aligning with multi-instance learning (MIL) principles. To achieve this, we
transform the multi-class problem into multiple MIL-based binary classification
problems. We then employ a discriminative attention layer to aggregate the
outputs from these classifiers into finer-grained classes. Experiments
conducted on three rumor datasets and two stance datasets demonstrate the
effectiveness of our approach, highlighting strong connections between rumor
veracity and expressed stances in responding posts. Our method shows promising
performance in joint rumor and stance detection compared to the
state-of-the-art methods.
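The MIL aggregation step, turning post-level (instance) predictions into a claim-level (bag) score via a discriminative attention layer, can be sketched as follows. The linear attention scorer and sigmoid classifier are simple stand-ins, not the paper's architecture:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def mil_bag_score(instance_feats, w_att, w_cls):
    """Aggregate instance (post) scores into one bag (claim) score.

    instance_feats: (n_posts, d); w_att, w_cls: (d,) toy weight vectors."""
    att = softmax(instance_feats @ w_att)                 # attention weight per post
    scores = 1.0 / (1.0 + np.exp(-(instance_feats @ w_cls)))  # per-post stance score
    return float(att @ scores)                            # attention-weighted bag score
```

Training only needs the bag-level label (claim veracity): the loss on the bag score backpropagates through the attention weights, which is what lets post-level stance emerge without post-level annotation.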
|
2502.08889
|
Linear-Time User-Level DP-SCO via Robust Statistics
|
cs.LG cs.CR cs.DS stat.ML
|
User-level differentially private stochastic convex optimization (DP-SCO) has
garnered significant attention due to the paramount importance of safeguarding
user privacy in modern large-scale machine learning applications. Current
methods, such as those based on differentially private stochastic gradient
descent (DP-SGD), often struggle with high noise accumulation and suboptimal
utility due to the need to privatize every intermediate iterate. In this work,
we introduce a novel linear-time algorithm that leverages robust statistics,
specifically the median and trimmed mean, to overcome these challenges. Our
approach uniquely bounds the sensitivity of all intermediate iterates of SGD
with gradient estimation based on robust statistics, thereby significantly
reducing the gradient estimation noise for privacy purposes and enhancing the
privacy-utility trade-off. By sidestepping the repeated privatization required
by previous methods, our algorithm not only achieves an improved theoretical
privacy-utility trade-off but also maintains computational efficiency. We
complement our algorithm with an information-theoretic lower bound, showing
that our upper bound is optimal up to logarithmic factors and the dependence on
$\epsilon$. This work sets the stage for more robust and efficient
privacy-preserving techniques in machine learning, with implications for future
research and application in the field.
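The robust aggregation the abstract leans on can be illustrated with a coordinate-wise trimmed mean over per-user gradients. The trimming fraction is an assumption, and the paper's sensitivity analysis and privatization steps are omitted:

```python
import numpy as np

def trimmed_mean(grads, trim_frac=0.1):
    """Coordinate-wise trimmed mean of per-user gradient estimates.

    Sort each coordinate across users, drop the top and bottom trim_frac,
    and average the rest. Discarding the extremes bounds how much any
    single user can move the estimate, which is what lets the algorithm
    control sensitivity without privatizing every iterate."""
    g = np.sort(grads, axis=0)
    k = int(trim_frac * g.shape[0])
    return g[k:g.shape[0] - k].mean(axis=0)
```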
|
2502.08896
|
Communication is All You Need: Persuasion Dataset Construction via
Multi-LLM Communication
|
cs.CL cs.AI
|
Large Language Models (LLMs) have shown proficiency in generating persuasive
dialogue, yet concerns about the fluency and sophistication of their outputs
persist. This paper presents a multi-LLM communication framework designed to
enhance the generation of persuasive data automatically. This framework
facilitates the efficient production of high-quality, diverse linguistic
content with minimal human oversight. Through extensive evaluations, we
demonstrate that the generated data excels in naturalness, linguistic
diversity, and the strategic use of persuasion, even in complex scenarios
involving social taboos. The framework also proves adept at generalizing across
novel contexts. Our results highlight the framework's potential to
significantly advance research in both computational and social science domains
concerning persuasive communication.
|
2502.08898
|
Learning in Strategic Queuing Systems with Small Buffers
|
cs.GT cs.AI cs.MA
|
Routers in networking use simple learning algorithms to find the best way to
deliver packets to their desired destination. This simple, myopic and
distributed decision system makes large queuing systems simple to operate, but
at the same time, the system needs more capacity than would be required if all
traffic were centrally coordinated. In a recent paper, Gaitonde and Tardos (EC
2020 and JACM 2023) initiate the study of such systems, modeling them as an
infinitely repeated game in which routers compete for servers and the system
maintains a state (number of packets held by each queue) resulting from
outcomes of previous rounds. Queues get to send a packet at each step to one of
the servers, and servers attempt to process only one of the arriving packets,
modeling routers. However, their model assumes that servers have no buffers at
all, so queues have to resend all packets that were not served successfully.
They show that, even with hugely increased server capacity relative to what is
needed in the centrally-coordinated case, ensuring that the system is stable
requires using timestamps and priority for older packets. We consider a system
with two important changes, which make the model more realistic: first we add a
very small buffer to each server, allowing it to hold on to a single packet to
be served later (even if it fails to serve it); and second, we do not require
timestamps or priority for older packets. Our main result is to show that when
queues are learning, a small constant factor increase in server capacity,
compared to what would be needed if centrally coordinating, suffices to keep
the system stable, even if servers select randomly among packets arriving
simultaneously. This work contributes to the growing literature on the impact
of selfish learning in systems with carryover effects between rounds: when
outcomes in the present round affect the game in the future.
|
2502.08900
|
Can Uniform Meaning Representation Help GPT-4 Translate from Indigenous
Languages?
|
cs.CL
|
While ChatGPT and GPT-based models are able to effectively perform many tasks
without additional fine-tuning, they struggle with tasks related to extremely
low-resource and indigenous languages. Uniform Meaning Representation
(UMR), a semantic representation designed to capture the meaning of texts in
many languages, is well-poised to be leveraged in the development of
low-resource language technologies. In this work, we explore the downstream
technical utility of UMR for low-resource languages by incorporating it into
GPT-4 prompts. Specifically, we examine the ability of GPT-4 to perform
translation from three indigenous languages (Navajo, Ar\'apaho, and Kukama),
with and without demonstrations, as well as with and without UMR annotations.
Ultimately we find that in the majority of our test cases, integrating UMR into
the prompt results in a statistically significant increase in performance,
which is a promising indication of future applications of the UMR formalism.
|
2502.08902
|
CoL3D: Collaborative Learning of Single-view Depth and Camera Intrinsics
for Metric 3D Shape Recovery
|
cs.CV
|
Recovering the metric 3D shape from a single image is particularly relevant
for robotics and embodied intelligence applications, where accurate spatial
understanding is crucial for navigation and interaction with environments.
Usually, the mainstream approaches achieve it through monocular depth
estimation. However, without camera intrinsics, the 3D metric shape cannot be
recovered from depth alone. In this study, we theoretically demonstrate that
depth serves as a 3D prior constraint for estimating camera intrinsics and
uncover the reciprocal relations between these two elements. Motivated by this,
we propose a collaborative learning framework for jointly estimating depth and
camera intrinsics, named CoL3D, to learn metric 3D shapes from single images.
Specifically, CoL3D adopts a unified network and performs collaborative
optimization at three levels: depth, camera intrinsics, and 3D point clouds.
For camera intrinsics, we design a canonical incidence field mechanism as a
prior that enables the model to learn the residual incident field for enhanced
calibration. Additionally, we incorporate a shape similarity measurement loss
in the point cloud space, which improves the quality of 3D shapes essential for
robotic applications. As a result, when training and testing on a single
dataset with in-domain settings, CoL3D delivers outstanding performance in both
depth estimation and camera calibration across several indoor and outdoor
benchmark datasets, which leads to remarkable 3D shape quality for the
perception capabilities of robots.
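The dependency the abstract highlights, that metric shape needs both depth and intrinsics, follows directly from pinhole back-projection, sketched here with toy inputs:

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Lift a depth map to a metric 3D point cloud via the pinhole model:
    X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth.

    Without the intrinsics (fx, fy, cx, cy) the metric scale and shape
    of the point cloud are unrecoverable from the depth map alone."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)  # (H, W, 3)
```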
|
2502.08903
|
3D-Grounded Vision-Language Framework for Robotic Task Planning:
Automated Prompt Synthesis and Supervised Reasoning
|
cs.RO cs.AI
|
Vision-language models (VLMs) have achieved remarkable success in scene
understanding and perception tasks, enabling robots to plan and execute actions
adaptively in dynamic environments. However, most multimodal large language
models lack robust 3D scene localization capabilities, limiting their
effectiveness in fine-grained robotic operations. Additionally, challenges such
as low recognition accuracy, inefficiency, poor transferability, and
reliability hinder their use in precision tasks. To address these limitations,
we propose a novel framework that integrates a 2D prompt synthesis module by
mapping 2D images to point clouds, and incorporates a small language model
(SLM) for supervising VLM outputs. The 2D prompt synthesis module enables VLMs,
trained on 2D images and text, to autonomously extract precise 3D spatial
information without manual intervention, significantly enhancing 3D scene
understanding. Meanwhile, the SLM supervises VLM outputs, mitigating
hallucinations and ensuring reliable, executable robotic control code
generation. Our framework eliminates the need for retraining in new
environments, thereby improving cost efficiency and operational robustness.
Experimental results show that the proposed framework achieved a 96.0\% Task Success
Rate (TSR), outperforming other methods. Ablation studies demonstrated the
critical role of both the 2D prompt synthesis module and the output supervision
module (which, when removed, caused a 67\% TSR drop). These findings validate
the framework's effectiveness in improving 3D recognition, task planning, and
robotic task execution.
|
2502.08904
|
MIH-TCCT: Mitigating Inconsistent Hallucinations in LLMs via
Event-Driven Text-Code Cyclic Training
|
cs.AI
|
Recent methodologies utilizing synthetic datasets have aimed to address
inconsistent hallucinations in large language models (LLMs); however, these
approaches are primarily tailored to specific tasks, limiting their
generalizability. Inspired by the strong performance of code-trained models in
logic-intensive domains, we propose a novel framework that leverages
event-based text to generate corresponding code and employs cyclic training to
transfer the logical consistency of code to natural language effectively. Our
method significantly reduces inconsistent hallucinations across three leading
LLMs and two categories of natural language tasks while maintaining overall
performance. This framework effectively alleviates hallucinations without
necessitating adaptation to downstream tasks, demonstrating generality and
providing new perspectives to tackle the challenge of inconsistent
hallucinations.
|
2502.08905
|
DiffoRA: Enabling Parameter-Efficient LLM Fine-Tuning via Differential
Low-Rank Matrix Adaptation
|
cs.CV
|
The Parameter-Efficient Fine-Tuning (PEFT) methods have been extensively
researched for large language models in the downstream tasks. Among all the
existing approaches, the Low-Rank Adaptation (LoRA) has gained popularity for
its streamlined design by incorporating low-rank matrices into existing
pre-trained models. Though effective, LoRA allocates every module an identical
low-rank matrix, which ignores the varying properties and contributions across
different components. Moreover, the existing adaptive LoRA solutions rely
highly on intuitive importance scoring indicators to adjust the interior rank
of the decomposition matrices. In this paper, we propose a new PEFT scheme
called DiffoRA, which is theoretically grounded and enables module-wise
adoption of LoRA. At the core of our DiffoRA lies a Differential Adaptation
Matrix (DAM) to determine which module is the most suitable and essential for
fine-tuning. We explain how the designed matrix impacts the convergence rate
and generalization capability of a pre-trained model. Furthermore, we construct
the DAM via continuous relaxation and discretization with weight-sharing
optimizations. We fully implement our DiffoRA and design comprehensive
experiments to evaluate its performance. The experimental results demonstrate
that our approach achieves the best model accuracy over all the
state-of-the-art baselines across various benchmarks.
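For reference, the base LoRA computation that DiffoRA builds on adds a trainable low-rank update $BA$ to a frozen weight matrix; a minimal numpy sketch, with shapes and scaling chosen for illustration:

```python
import numpy as np

def lora_forward(x, W0, A, B, alpha=1.0):
    """LoRA forward pass: frozen weight W0 plus low-rank update B @ A.

    x: (n, d_in); W0: (d_out, d_in) frozen; A: (r, d_in), B: (d_out, r)
    trainable, with rank r << min(d_in, d_out)."""
    return x @ (W0 + alpha * (B @ A)).T
```

Plain LoRA gives every module the same rank-r update; DiffoRA's Differential Adaptation Matrix instead decides per module whether (and how) to adapt.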
|
2502.08908
|
Reinforced Large Language Model is a formal theorem prover
|
cs.AI
|
To take advantage of Large Language Models in theorem formalization and proof,
we propose a reinforcement learning framework to iteratively optimize the
pretrained LLM by rolling out next tactics and comparing them with the expected
ones. The experimental results show that it helps to achieve higher accuracy
than a directly fine-tuned LLM.
|
2502.08909
|
Towards Automated Fact-Checking of Real-World Claims: Exploring Task
Formulation and Assessment with LLMs
|
cs.CL cs.AI
|
Fact-checking is necessary to address the increasing volume of
misinformation. Traditional fact-checking relies on manual analysis to verify
claims, but it is slow and resource-intensive. This study establishes baseline
comparisons for Automated Fact-Checking (AFC) using Large Language Models
(LLMs) across multiple labeling schemes (binary, three-class, five-class) and
extends traditional claim verification by incorporating analysis, verdict
classification, and explanation in a structured setup to provide comprehensive
justifications for real-world claims. We evaluate Llama-3 models of varying
sizes (3B, 8B, 70B) on 17,856 claims collected from PolitiFact (2007-2024)
using evidence retrieved via restricted web searches. We utilize TIGERScore as
a reference-free evaluation metric to score the justifications. Our results
show that larger LLMs consistently outperform smaller LLMs in classification
accuracy and justification quality without fine-tuning. We find that smaller
LLMs in a one-shot scenario provide comparable task performance to fine-tuned
Small Language Models (SLMs) with large context sizes, while larger LLMs
consistently surpass them. Evidence integration improves performance across all
models, with larger LLMs benefiting most. Distinguishing between nuanced labels
remains challenging, emphasizing the need for further exploration of labeling
schemes and alignment with evidence. Our findings demonstrate the potential of
retrieval-augmented AFC with LLMs.
|
2502.08910
|
InfiniteHiP: Extending Language Model Context Up to 3 Million Tokens on
a Single GPU
|
cs.CL cs.LG
|
In modern large language models (LLMs), handling very long context lengths
presents significant challenges as it causes slower inference speeds and
increased memory costs. Additionally, most existing pre-trained LLMs fail to
generalize beyond their original training sequence lengths. To enable efficient
and practical long-context utilization, we introduce InfiniteHiP, a novel, and
practical LLM inference framework that accelerates processing by dynamically
eliminating irrelevant context tokens through a modular hierarchical token
pruning algorithm. Our method also allows generalization to longer sequences by
selectively applying various RoPE adjustment methods according to the internal
attention patterns within LLMs. Furthermore, we offload the key-value cache to
host memory during inference, significantly reducing GPU memory pressure. As a
result, InfiniteHiP enables the processing of up to 3 million tokens on a
single L40s 48GB GPU -- 3x larger -- without any permanent loss of context
information. Our framework achieves an 18.95x speedup in attention decoding for
a 1 million token context without requiring additional training. We implement
our method in the SGLang framework and demonstrate its effectiveness and
practicality through extensive evaluations.
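The idea of dynamically eliminating irrelevant context tokens can be illustrated with a toy chunk-level pruning pass over a key cache. The scoring rule below (keep the chunks whose best key most matches the query) is a simplified stand-in for InfiniteHiP's hierarchical algorithm, not its actual implementation:

```python
import numpy as np

def prune_context(keys, query, keep_frac=0.25, chunk=4):
    """Toy chunk-level context pruning: score each chunk of the key cache
    by its best query-key dot product and retain only the top chunks.

    keys: (n_tokens, d) cached keys; query: (d,) current query vector.
    Returns the token indices that survive pruning."""
    n = (len(keys) // chunk) * chunk
    scores = (keys[:n] @ query).reshape(-1, chunk).max(axis=1)  # per-chunk score
    k = max(1, int(keep_frac * len(scores)))
    keep = np.sort(np.argsort(scores)[-k:])                     # chunks to keep
    return (keep[:, None] * chunk + np.arange(chunk)).ravel()
```

Attention is then computed only over the retained tokens; applying such a pass hierarchically at several granularities is what makes the cost sub-linear in the raw context length.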
|
2502.08914
|
Diffusion Models Through a Global Lens: Are They Culturally Inclusive?
|
cs.CV cs.AI
|
Text-to-image diffusion models have recently enabled the creation of visually
compelling, detailed images from textual prompts. However, their ability to
accurately represent various cultural nuances remains an open question. In our
work, we introduce the CultDiff benchmark, evaluating whether state-of-the-art
diffusion models can generate culturally specific images spanning ten
countries. We show that these models often fail to generate cultural artifacts
in architecture, clothing, and food, especially for underrepresented country
regions, by conducting a fine-grained analysis of different similarity aspects,
revealing significant disparities in cultural relevance, description fidelity,
and realism compared to real-world reference images. With the collected human
evaluations, we develop a neural-based image-image similarity metric, namely,
CultDiff-S, to predict human judgment on real and generated images with
cultural artifacts. Our work highlights the need for more inclusive generative
AI systems and equitable dataset representation over a wide range of cultures.
|
2502.08916
|
PathFinder: A Multi-Modal Multi-Agent System for Medical Diagnostic
Decision-Making Applied to Histopathology
|
cs.CV cs.AI cs.CL cs.MA
|
Diagnosing diseases through histopathology whole slide images (WSIs) is
fundamental in modern pathology but is challenged by the gigapixel scale and
complexity of WSIs. Trained histopathologists overcome this challenge by
navigating the WSI, looking for relevant patches, taking notes, and compiling
them to produce a final holistic diagnostic. Traditional AI approaches, such as
multiple instance learning and transformer-based models, fall short of such a
holistic, iterative, multi-scale diagnostic procedure, limiting their adoption
in the real-world. We introduce PathFinder, a multi-modal, multi-agent
framework that emulates the decision-making process of expert pathologists.
PathFinder integrates four AI agents, the Triage Agent, Navigation Agent,
Description Agent, and Diagnosis Agent, that collaboratively navigate WSIs,
gather evidence, and provide comprehensive diagnoses with natural language
explanations. The Triage Agent classifies the WSI as benign or risky; if risky,
the Navigation and Description Agents iteratively focus on significant regions,
generating importance maps and descriptive insights of sampled patches.
Finally, the Diagnosis Agent synthesizes the findings to determine the
patient's diagnostic classification. Our experiments show that PathFinder
outperforms state-of-the-art methods in skin melanoma diagnosis by 8% while
offering inherent explainability through natural language descriptions of
diagnostically relevant patches. Qualitative analysis by pathologists shows
that the Description Agent's outputs are of high quality and comparable to
GPT-4o. PathFinder is also the first AI-based system to surpass the average
performance of pathologists in this challenging melanoma classification task by
9%, setting a new record for efficient, accurate, and interpretable AI-assisted
diagnostics in pathology. Data, code and models available at
https://pathfinder-dx.github.io/
|
2502.08918
|
CLEAR: Cluster-based Prompt Learning on Heterogeneous Graphs
|
cs.LG
|
Prompt learning has attracted increasing attention in the graph domain as a
means to bridge the gap between pretext and downstream tasks. Existing studies
on heterogeneous graph prompting typically use feature prompts to modify node
features for specific downstream tasks, which do not concern the structure of
heterogeneous graphs. Such a design also overlooks information from the
meta-paths, which are core to learning the high-order semantics of the
heterogeneous graphs. To address these issues, we propose CLEAR, a
Cluster-based prompt LEARNING model on heterogeneous graphs. We present cluster
prompts that reformulate downstream tasks as heterogeneous graph
reconstruction. In this way, we align the pretext and downstream tasks to share
the same training objective. Additionally, our cluster prompts are also
injected into the meta-paths such that the prompt learning process incorporates
high-order semantic information entailed by the meta-paths. Extensive
experiments on downstream tasks confirm the superiority of CLEAR. It
consistently outperforms state-of-the-art models, achieving up to 5%
improvement on the F1 metric for node classification.
|
2502.08920
|
Exploring Emotion-Sensitive LLM-Based Conversational AI
|
cs.HC cs.AI
|
Conversational AI chatbots have become increasingly common within the
customer service industry. Despite improvements in their emotional development,
they often lack the authenticity of real customer service interactions or the
competence of service providers. By comparing emotion-sensitive and
emotion-insensitive LLM-based chatbots across 30 participants, we aim to
explore how emotional sensitivity in chatbots influences perceived competence
and overall customer satisfaction in service interactions. Additionally, we
employ sentiment analysis techniques to analyze and interpret the emotional
content of user inputs. We highlight that perceptions of chatbot
trustworthiness and competence were higher in the case of the emotion-sensitive
chatbot, even if issue resolution rates were not affected. We discuss
implications of improved user satisfaction from emotion-sensitive chatbots and
potential applications in support services.
|
2502.08921
|
Detecting Malicious Concepts Without Image Generation in AIGC
|
cs.CR cs.CV
|
The task of text-to-image generation has achieved tremendous success in
practice, with emerging concept generation models capable of producing highly
personalized and customized content. Fervor for concept generation is
increasing rapidly among users, and platforms for concept sharing have sprung
up. The concept owners may upload malicious concepts and disguise them with
non-malicious text descriptions and example images to deceive users into
downloading and generating malicious content. The platform needs a quick method
to determine whether a concept is malicious to prevent the spread of malicious
concepts. However, simply relying on concept image generation to judge whether
a concept is malicious requires time and computational resources. Especially,
as the number of concepts uploaded and downloaded on the platform continues to
increase, this approach becomes impractical and poses a risk of generating
malicious content. In this paper, we propose Concept QuickLook, the first
systematic study of malicious concept detection, which performs detection based
solely on concept files without generating any images.
We define malicious concepts and design two work modes for detection: concept
matching and fuzzy detection. Extensive experiments demonstrate that the
proposed Concept QuickLook can detect malicious concepts effectively and is
practical for concept sharing platforms. We also design robustness experiments
to further validate the effectiveness of the solution. We hope this work
initiates the task of malicious concept detection and inspires future research.
|
2502.08922
|
Self-Consistency of the Internal Reward Models Improves Self-Rewarding
Language Models
|
cs.AI
|
Aligning Large Language Models (LLMs) with human preferences is crucial for
their deployment in real-world applications. Recent advancements in
Self-Rewarding Language Models suggest that an LLM can use its internal reward
models (such as LLM-as-a-Judge) \cite{yuanself} to generate preference data,
improving alignment performance without costly human annotation. However, we
find that different internal reward models within the same LLM often generate
inconsistent preferences. This inconsistency raises concerns about the
reliability of self-generated preference data, hinders overall alignment
performance, and highlights the need for further research to ensure reliable
and coherent alignment with human preferences. To address this limitation, we
propose Self-Consistent Internal Rewards (SCIR), a novel framework designed to
enhance consistency among internal reward models during training. In each
training step, we collect preference predictions from multiple pre-defined
internal reward models and enforce consistency and confidence through an
inconsistency penalty mechanism, thereby improving the reliability of these
internal reward models. We selectively use data with consistent predictions for
preference optimization, ensuring the quality of the preference data. By
employing self-consistent internal rewards, our method significantly improves
the alignment performance and reward modeling capability of LLMs, outperforming
baseline methods by a notable margin.
|
2502.08923
|
CopySpec: Accelerating LLMs with Speculative Copy-and-Paste Without
Compromising Quality
|
cs.CL cs.AI cs.LG
|
We introduce CopySpec, an innovative technique designed to tackle the
inefficiencies LLMs face when generating responses that closely resemble
previous outputs. CopySpec identifies repeated sequences in the model's chat
history and speculates that the same tokens will follow, enabling seamless
copying without compromising output quality or requiring additional GPU memory.
To evaluate the effectiveness of our approach, we conducted experiments using
five LLMs and five datasets: MT-Bench, CNN/DM, GSM-8K, HumanEval, and our newly
created dataset, MT-Redundant. MT-Redundant, introduced in this paper,
transforms the second turn of MT-Bench into a request for variations of the
first turn's answer, simulating real-world scenarios where users request
modifications to prior responses. Our results demonstrate significant
speed-ups: up to 2.35x on CNN/DM, 3.08x on the second turn of select
MT-Redundant categories, and 2.66x on the third turn of GSM-8K's
self-correction tasks. Moreover, we show that CopySpec integrates seamlessly
with speculative decoding, yielding an average 49% additional speed-up over
speculative decoding for the second turn of MT-Redundant across all eight
categories. While LLMs, even with speculative decoding, suffer from slower
inference as context sizes grow, CopySpec leverages the expanded context to
accelerate inference, making it faster as the context size increases. Our code
and dataset are publicly available at https://github.com/RazvanDu/CopySpec.
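The copy-from-context idea can be sketched in a few lines (a toy illustration, not the paper's implementation; the function name and the `suffix_len`/`max_copy` parameters are ours): match the current suffix of tokens against an earlier occurrence in the sequence and propose the tokens that followed that occurrence as speculative candidates, which the model then verifies in a single forward pass.

```python
def speculate_from_history(tokens, suffix_len=3, max_copy=8):
    """Toy copy-speculation: if the last `suffix_len` tokens occurred
    earlier in the sequence, propose the tokens that followed that
    earlier occurrence as speculative continuations."""
    if len(tokens) < suffix_len:
        return []
    suffix = tokens[-suffix_len:]
    # Scan backwards for the most recent earlier match of the suffix.
    for start in range(len(tokens) - suffix_len - 1, -1, -1):
        if tokens[start:start + suffix_len] == suffix:
            return tokens[start + suffix_len:start + suffix_len + max_copy]
    return []

history = list("the cat sat on the mat. the cat s")
print("".join(speculate_from_history(history)))  # -> "at on th"
```

In a real system the proposed tokens would be accepted only where they agree with the model's own predictions, which is what keeps output quality unchanged.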
|
2502.08924
|
Escaping Collapse: The Strength of Weak Data for Large Language Model
Training
|
cs.LG cs.AI cs.CL
|
Synthetically-generated data plays an increasingly large role in training
large language models. However, while synthetic data has been found to be
useful, studies have also shown that without proper curation it can cause LLM
performance to plateau, or even "collapse", after many training iterations. In
this paper, we formalize this question and develop a theoretical framework to
investigate how much curation is needed in order to ensure that LLM performance
continually improves. We find that the requirements are nearly minimal. We
describe a training procedure that converges to an optimal LLM even if almost
all of the non-synthetic training data is of poor quality. Our analysis is
inspired by boosting, a classic machine learning technique that leverages a
very weak learning algorithm to produce an arbitrarily good classifier. Our
training procedure subsumes many recently proposed methods for training LLMs on
synthetic data, and thus our analysis sheds light on why they are successful,
and also suggests opportunities for future improvement. We present experiments
that validate our theory, and show that dynamically focusing labeling resources
on the most challenging examples -- in much the same way that boosting focuses
the efforts of the weak learner -- leads to improved performance.
|
2502.08927
|
Dynamic watermarks in images generated by diffusion models
|
cs.CV
|
High-fidelity text-to-image diffusion models have revolutionized visual
content generation, but their widespread use raises significant ethical
concerns, including intellectual property protection and the misuse of
synthetic media. To address these challenges, we propose a novel multi-stage
watermarking framework for diffusion models, designed to establish copyright
and trace generated images back to their source. Our multi-stage watermarking
technique involves embedding: (i) a fixed watermark that is localized in the
diffusion model's learned noise distribution and (ii) a human-imperceptible,
dynamic watermark in generated images, leveraging a fine-tuned decoder. By
leveraging the Structural Similarity Index Measure (SSIM) and cosine
similarity, we adapt the watermark's shape and color to the generated content
while maintaining robustness. We demonstrate that our method enables reliable
source model verification through watermark classification, even when the
dynamic watermark is adjusted for content-specific variations. To support
further research, we generate a dataset of watermarked images and introduce a
methodology to evaluate the statistical impact of watermarking on generated
content. Additionally, we rigorously test our framework against various attack
scenarios, demonstrating its robustness and minimal impact on image quality.
Our work advances the field of AI-generated content security by providing a
scalable solution for model ownership verification and misuse prevention.
|
2502.08932
|
On the Promise for Assurance of Differentiable Neurosymbolic Reasoning
Paradigms
|
cs.AI cs.CV
|
Creating usable and deployable Artificial Intelligence (AI) systems requires a
level of assurance in performance under many different conditions.
Many times, deployed machine learning systems will require more classic logic
and reasoning performed through neurosymbolic programs jointly with artificial
neural network sensing. While many prior works have examined the assurance of a
single component of the system solely with either the neural network alone or
entire enterprise systems, very few works have examined the assurance of
integrated neurosymbolic systems. Within this work, we assess the assurance of
end-to-end fully differentiable neurosymbolic systems that are an emerging
method to create data-efficient and more interpretable models. We perform this
investigation using Scallop, an end-to-end neurosymbolic library, across
classification and reasoning tasks in both the image and audio domains. We
assess assurance across adversarial robustness, calibration, user performance
parity, and interpretability of solutions for catching misaligned solutions. We
find through our empirical results that end-to-end neurosymbolic methods
present unique opportunities for assurance beyond their data efficiency,
though not across the board. We find that this class of neurosymbolic models has higher
assurance in cases where arithmetic operations are defined and where there is
high dimensionality to the input space, where fully neural counterparts
struggle to learn robust reasoning operations. We identify the relationship
between neurosymbolic models' interpretability and their ability to catch
shortcuts that later result in increased adversarial vulnerability despite
performance parity.
Finally, we find that the promise of data efficiency is typically only in the
case of class imbalanced reasoning problems.
|
2502.08933
|
AutoLike: Auditing Social Media Recommendations through User
Interactions
|
cs.LG
|
Modern social media platforms, such as TikTok, Facebook, and YouTube, rely on
recommendation systems to personalize content for users based on user
interactions with endless streams of content, such as "For You" pages. However,
these complex algorithms can inadvertently deliver problematic content related
to self-harm, mental health, and eating disorders. We introduce AutoLike, a
framework to audit recommendation systems in social media platforms for topics
of interest and their sentiments. To automate the process, we formulate the
problem as a reinforcement learning problem. AutoLike drives the recommendation
system to serve a particular type of content through interactions (e.g.,
liking). We apply the AutoLike framework to the TikTok platform as a case
study. We evaluate how well AutoLike identifies TikTok content automatically
across nine topics of interest; and conduct eight experiments to demonstrate
how well it drives TikTok's recommendation system towards particular topics and
sentiments. AutoLike has the potential to assist regulators in auditing
recommendation systems for problematic content. (Warning: This paper contains
qualitative examples that may be viewed as offensive or harmful.)
|
2502.08938
|
Reevaluating Policy Gradient Methods for Imperfect-Information Games
|
cs.LG
|
In the past decade, motivated by the putative failure of naive self-play deep
reinforcement learning (DRL) in adversarial imperfect-information games,
researchers have developed numerous DRL algorithms based on fictitious play
(FP), double oracle (DO), and counterfactual regret minimization (CFR). In
light of recent results of the magnetic mirror descent algorithm, we
hypothesize that simpler generic policy gradient methods like PPO are
competitive with or superior to these FP, DO, and CFR-based DRL approaches. To
facilitate the resolution of this hypothesis, we implement and release the
first broadly accessible exact exploitability computations for four large
games. Using these games, we conduct the largest-ever exploitability comparison
of DRL algorithms for imperfect-information games. Over 5600 training runs, FP,
DO, and CFR-based approaches fail to outperform generic policy gradient
methods. Code is available at https://github.com/nathanlct/IIG-RL-Benchmark and
https://github.com/gabrfarina/exp-a-spiel .
|
2502.08939
|
TokenSynth: A Token-based Neural Synthesizer for Instrument Cloning and
Text-to-Instrument
|
cs.SD cs.AI
|
Recent advancements in neural audio codecs have enabled the use of tokenized
audio representations in various audio generation tasks, such as
text-to-speech, text-to-audio, and text-to-music generation. Leveraging this
approach, we propose TokenSynth, a novel neural synthesizer that utilizes a
decoder-only transformer to generate desired audio tokens from MIDI tokens and
CLAP (Contrastive Language-Audio Pretraining) embedding, which has
timbre-related information. Our model is capable of performing instrument
cloning, text-to-instrument synthesis, and text-guided timbre manipulation
without any fine-tuning. This flexibility enables diverse sound design and
intuitive timbre control. We evaluated the quality of the synthesized audio,
the timbral similarity between synthesized and target audio/text, and synthesis
accuracy (i.e., how accurately it follows the input MIDI) using objective
measures. TokenSynth demonstrates the potential of leveraging advanced neural
audio codecs and transformers to create powerful and versatile neural
synthesizers. The source code, model weights, and audio demos are available at:
https://github.com/KyungsuKim42/tokensynth
|
2502.08940
|
Towards Understanding Why Data Augmentation Improves Generalization
|
cs.CV cs.LG stat.ML
|
Data augmentation is a cornerstone technique in deep learning, widely used to
improve model generalization. Traditional methods like random cropping and
color jittering, as well as advanced techniques such as CutOut, Mixup, and
CutMix, have achieved notable success across various domains. However, the
mechanisms by which data augmentation improves generalization remain poorly
understood, and existing theoretical analyses typically focus on individual
techniques without a unified explanation. In this work, we present a unified
theoretical framework that elucidates how data augmentation enhances
generalization through two key effects: partial semantic feature removal and
feature mixing. Partial semantic feature removal reduces the model's reliance
on individual feature, promoting diverse feature learning and better
generalization. Feature mixing, by scaling down original semantic features and
introducing noise, increases training complexity, driving the model to develop
more robust features. Advanced methods like CutMix integrate both effects,
achieving complementary benefits. Our theoretical insights are further
supported by experimental results, validating the effectiveness of this unified
perspective.
|
2502.08941
|
Analysis of Off-Policy $n$-Step TD-Learning with Linear Function
Approximation
|
cs.LG cs.AI
|
This paper analyzes multi-step temporal difference (TD)-learning algorithms
within the ``deadly triad'' scenario, characterized by linear function
approximation, off-policy learning, and bootstrapping. In particular, we prove
that $n$-step TD-learning algorithms converge to a solution as the sampling
horizon $n$ increases sufficiently. The paper is divided into two parts. In the
first part, we comprehensively examine the fundamental properties of their
model-based deterministic counterparts, including projected value iteration and
gradient descent algorithms, which can be viewed as prototype deterministic
algorithms whose analysis plays a pivotal role in understanding and developing
their model-free reinforcement learning counterparts. In particular, we prove
that these algorithms converge to meaningful solutions when $n$ is sufficiently
large. Based on these findings, in the second part, two $n$-step TD-learning
algorithms are proposed and analyzed, which can be seen as the model-free
reinforcement learning counterparts of the model-based deterministic
algorithms.
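As a toy illustration of the n-step bootstrapped target with linear value approximation (an on-policy sketch under our own naming, not the paper's off-policy algorithms): the target sums n discounted rewards and bootstraps from the value estimate n steps ahead.

```python
import numpy as np

def n_step_td_update(w, phi, rewards, gamma, alpha):
    """One n-step TD update with linear value approximation:
    target = r_1 + gamma*r_2 + ... + gamma^{n-1}*r_n + gamma^n * w.phi(s_{t+n}).
    `phi` holds the n+1 feature vectors phi(s_t), ..., phi(s_{t+n});
    `rewards` holds r_1, ..., r_n."""
    n = len(rewards)
    target = sum((gamma ** k) * rewards[k] for k in range(n))
    target += (gamma ** n) * (w @ phi[n])      # bootstrap term
    td_error = target - w @ phi[0]
    return w + alpha * td_error * phi[0]       # update toward the n-step target

w = np.zeros(2)
phi = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
w = n_step_td_update(w, phi, rewards=[1.0, 0.5], gamma=0.9, alpha=0.1)
print(w)  # [0.145 0.   ]
```

Increasing n shifts weight from the (possibly biased) bootstrap term toward sampled rewards, which is the mechanism behind the convergence results for sufficiently large n.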
|
2502.08942
|
Language in the Flow of Time: Time-Series-Paired Texts Weaved into a
Unified Temporal Narrative
|
cs.LG cs.AI
|
While many advances in time series models focus exclusively on numerical
data, research on multimodal time series, particularly those involving
contextual textual information commonly encountered in real-world scenarios,
remains in its infancy. Consequently, effectively integrating the text modality
remains challenging. In this work, we highlight an intuitive yet significant
observation that has been overlooked by existing works: time-series-paired
texts exhibit periodic properties that closely mirror those of the original
time series. Building on this insight, we propose a novel framework, Texts as
Time Series (TaTS), which considers the time-series-paired texts to be
auxiliary variables of the time series. TaTS can be plugged into any existing
numerical-only time series models and enable them to handle time series data
with paired texts effectively. Through extensive experiments on both multimodal
time series forecasting and imputation tasks across benchmark datasets with
various existing time series models, we demonstrate that TaTS enhances
predictive performance and outperforms existing approaches without modifying
model architectures.
|
2502.08943
|
Beyond the Singular: The Essential Role of Multiple Generations in
Effective Benchmark Evaluation and Analysis
|
cs.CL cs.AI cs.LG
|
Large language models (LLMs) have demonstrated significant utility in
real-world applications, exhibiting impressive capabilities in natural language
processing and understanding. Benchmark evaluations are crucial for assessing
the capabilities of LLMs as they can provide a comprehensive assessment of
their strengths and weaknesses. However, current evaluation methods often
overlook the inherent randomness of LLMs by employing deterministic generation
strategies or relying on a single random sample, resulting in unaccounted
sampling variance and unreliable benchmark score estimates. In this paper, we
propose a hierarchical statistical model that provides a more comprehensive
representation of the benchmarking process by incorporating both benchmark
characteristics and LLM randomness. We show that leveraging multiple
generations improves the accuracy of estimating the benchmark score and reduces
variance. We also introduce $\mathbb P\left(\text{correct}\right)$, a
prompt-level difficulty score based on correct ratios, providing fine-grained
insights into individual prompts. Additionally, we create a data map that
visualizes the difficulty and semantics of prompts, enabling error detection and
quality control in benchmark construction.
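In its simplest form, the multiple-generations estimate described above reduces to per-prompt correct ratios averaged over the benchmark (a minimal sketch with hypothetical grading outcomes; the function name is ours):

```python
def benchmark_score(correct_matrix):
    """Estimate the benchmark score and per-prompt difficulty from
    multiple generations: correct_matrix[i][j] is 1 if generation j
    for prompt i was graded correct. P(correct) per prompt is the
    correct ratio; the benchmark score averages it over prompts."""
    p_correct = [sum(row) / len(row) for row in correct_matrix]
    score = sum(p_correct) / len(p_correct)
    return score, p_correct

# Hypothetical grading outcomes: 3 prompts x 4 generations each.
outcomes = [[1, 1, 1, 0],   # easy prompt
            [1, 0, 0, 0],   # hard prompt
            [1, 1, 0, 0]]   # medium prompt
score, p = benchmark_score(outcomes)
print(score, p)  # 0.5 [0.75, 0.25, 0.5]
```

Averaging over k generations shrinks the per-prompt sampling variance by roughly a factor of k relative to a single random sample, which is the intuition the hierarchical model formalizes.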
|
2502.08946
|
The Stochastic Parrot on LLM's Shoulder: A Summative Assessment of
Physical Concept Understanding
|
cs.CL cs.AI cs.CV cs.LG
|
In a systematic way, we investigate a widely asked question: Do LLMs really
understand what they say?, which relates to the more familiar term Stochastic
Parrot. To this end, we propose a summative assessment over a carefully
designed physical concept understanding task, PhysiCo. Our task alleviates the
memorization issue via the usage of grid-format inputs that abstractly describe
physical phenomena. The grids represent varying levels of understanding, from
the core phenomenon and application examples to analogies with other abstract
patterns in the grid world. A comprehensive study on our task demonstrates: (1)
state-of-the-art LLMs, including GPT-4o, o1 and Gemini 2.0 flash thinking, lag
behind humans by ~40%; (2) the stochastic parrot phenomenon is present in LLMs,
as they fail on our grid task but can describe and recognize the same concepts
well in natural language; (3) our task challenges the LLMs due to intrinsic
difficulties rather than the unfamiliar grid format, as in-context learning and
fine-tuning on data of the same format added little to their performance.
|
2502.08947
|
Structured Convergence in Large Language Model Representations via
Hierarchical Latent Space Folding
|
cs.CL
|
Token representations in high-dimensional latent spaces often exhibit
redundancy, limiting computational efficiency and reducing structural coherence
across model layers. Hierarchical latent space folding introduces a structured
transformation mechanism that enforces a multi-scale organization within
learned embeddings, refining representational compactness while preserving
essential contextual distinctions. The proposed approach incorporates dynamic
folding operations that iteratively adjust token embeddings through structured
transformations, influencing both short-range and long-range dependencies in
sequential processing tasks. Empirical evaluation demonstrates a reduction in
representational variance across layers, contributing to more stable perplexity
distributions and enhancing predictive confidence in text generation. The
structured redistribution of attention head utilization leads to more efficient
allocation of computational resources, particularly in deeper layers, where
hierarchical refinements improve contextual abstraction. Comparative analysis
of activation sparsity patterns suggests that hierarchical adjustments
selectively reinforce critical pathways while reducing computational overhead
in non-essential regions of the model. Statistical assessments of token
reordering frequencies reveal that hierarchical modifications introduce subtle
shifts in sequential dependencies, improving contextual alignment while
maintaining syntactic correctness. Computational trade-offs associated with
hierarchical folding introduce marginal increases in training time per epoch,
yet empirical findings indicate that inference efficiency benefits from the
structured representation adjustments. The results highlight the impact of
hierarchical latent space folding on optimizing model performance through
improved representation structuring and computational efficiency.
|
2502.08949
|
Self-Supervised Graph Contrastive Pretraining for Device-level
Integrated Circuits
|
cs.LG
|
Self-supervised graph representation learning has driven significant
advancements in domains such as social network analysis, molecular design, and
electronics design automation (EDA). However, prior works in EDA have mainly
focused on the representation of gate-level digital circuits, failing to
capture analog and mixed-signal circuits. To address this gap, we introduce
DICE: Device-level Integrated Circuits Encoder, the first self-supervised
pretrained graph neural network (GNN) model for any circuit expressed at the
device level. DICE is a message-passing neural network (MPNN) trained through
graph contrastive learning, and its pretraining process is simulation-free,
incorporating two novel data augmentation techniques. Experimental results
demonstrate that DICE achieves substantial performance gains across three
downstream tasks, underscoring its effectiveness for both analog and digital
circuits.
|
2502.08950
|
Single-Agent Planning in a Multi-Agent System: A Unified Framework for
Type-Based Planners
|
cs.MA cs.GT
|
We consider a general problem where an agent is in a multi-agent environment
and must plan for herself without any prior information about her opponents. At
each moment, this pivotal agent is faced with a trade-off between exploiting
her currently accumulated information about the other agents and exploring
further to improve future (re-)planning. We propose a theoretic framework that
unifies a spectrum of planners for the pivotal agent to address this trade-off.
The planner at one end of this spectrum aims to find exact solutions, while
those towards the other end yield approximate solutions as the problem scales
up. Beyond theoretical analysis, we also implement \textbf{13} planners and
conduct experiments in a specific domain called \textit{multi-agent route
planning} with the number of agents \textbf{up to~50}, to compare their
performances in various scenarios. One interesting observation comes from a
class of planners that we call \textit{safe-agents} and their enhanced variants
by incorporating domain-specific knowledge, which is a simple special case
under the proposed general framework, but performs sufficiently well in most
cases. Our unified framework, as well as those induced planners, provides new
insights on multi-agent decision-making, with potential applications to related
areas such as mechanism design.
|
2502.08953
|
Integrated Optimization and Game Theory Framework for Fair Cost
Allocation in Community Microgrids
|
eess.SY cs.LG cs.SY
|
Fair cost allocation in community microgrids remains a significant challenge
due to the complex interactions between multiple participants with varying load
profiles, distributed energy resources, and storage systems. Traditional cost
allocation methods often fail to adequately address the dynamic nature of
participant contributions and benefits, leading to inequitable distribution of
costs and reduced participant satisfaction. This paper presents a novel
framework integrating multi-objective optimization with cooperative game theory
for fair and efficient microgrid operation and cost allocation. The proposed
approach combines mixed-integer linear programming for optimal resource
dispatch with Shapley value analysis for equitable benefit distribution,
ensuring both system efficiency and participant satisfaction. The framework was
validated using real-world data across six distinct operational scenarios,
demonstrating significant improvements in both technical and economic
performance. Results show peak demand reductions ranging from 7.8% to 62.6%,
solar utilization rates reaching 114.8% through effective storage integration,
and cooperative gains of up to $1,801.01 per day. The Shapley value-based
allocation achieved balanced benefit-cost distributions, with net positions
ranging from -16.0% to +14.2% across different load categories, ensuring
sustainable participant cooperation.
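The Shapley value component can be illustrated with a tiny cooperative game (a generic permutation-based computation with hypothetical coalition savings, not the paper's microgrid model): each participant is credited with its average marginal contribution over all join orders.

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values by averaging each player's marginal
    contribution over all join orders (fine for small coalitions)."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            phi[p] += value(frozenset(coalition)) - before
    fact = 1
    for k in range(2, n + 1):
        fact *= k  # n! join orders
    return {p: v / fact for p, v in phi.items()}

# Hypothetical daily cooperative savings ($) for coalitions of three loads.
savings = {frozenset(): 0, frozenset("A"): 0, frozenset("B"): 0,
           frozenset("C"): 0, frozenset("AB"): 40, frozenset("AC"): 50,
           frozenset("BC"): 30, frozenset("ABC"): 90}
print(shapley_values("ABC", lambda s: savings[s]))
# {'A': 35.0, 'B': 25.0, 'C': 30.0}
```

The allocation is efficient by construction (the shares sum to the grand-coalition savings of $90), which is what makes it attractive for equitable benefit distribution.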
|
2502.08954
|
Medicine on the Edge: Comparative Performance Analysis of On-Device LLMs
for Clinical Reasoning
|
cs.CL
|
The deployment of Large Language Models (LLM) on mobile devices offers
significant potential for medical applications, enhancing privacy, security,
and cost-efficiency by eliminating reliance on cloud-based services and keeping
sensitive health data local. However, the performance and accuracy of on-device
LLMs in real-world medical contexts remain underexplored. In this study, we
benchmark publicly available on-device LLMs using the AMEGA dataset, evaluating
accuracy, computational efficiency, and thermal limitations across various
mobile devices. Our results indicate that compact general-purpose models like
Phi-3 Mini achieve a strong balance between speed and accuracy, while medically
fine-tuned models such as Med42 and Aloe attain the highest accuracy. Notably,
deploying LLMs on older devices remains feasible, with memory constraints
posing a greater challenge than raw processing power. Our study underscores the
potential of on-device LLMs for healthcare while emphasizing the need for more
efficient inference and models tailored to real-world clinical reasoning.
|
2502.08957
|
Training Trajectory Predictors Without Ground-Truth Data
|
cs.RO
|
This paper presents a framework capable of accurately and smoothly estimating
position, heading, and velocity. Using this high-quality input, we propose a
system based on Trajectron++, able to consistently generate precise trajectory
predictions. Unlike conventional models that require ground-truth data for
training, our approach eliminates this dependency. Our analysis demonstrates
that poor quality input leads to noisy and unreliable predictions, which can be
detrimental to navigation modules. We evaluate both input data quality and
model output to illustrate the impact of input noise. Furthermore, we show that
our estimation system enables effective training of trajectory prediction
models even with limited data, producing robust predictions across different
environments. Accurate estimations are crucial for deploying trajectory
prediction models in real-world scenarios, and our system ensures meaningful
and reliable results across various application contexts.
|
2502.08958
|
Biologically Plausible Brain Graph Transformer
|
cs.LG cs.AI
|
State-of-the-art brain graph analysis methods fail to fully encode the
small-world architecture of brain graphs (accompanied by the presence of hubs
and functional modules), and therefore lack biological plausibility to some
extent. This limitation hinders their ability to accurately represent the
brain's structural and functional properties, thereby restricting the
effectiveness of machine learning models in tasks such as brain disorder
detection. In this work, we propose a novel Biologically Plausible Brain Graph
Transformer (BioBGT) that encodes the small-world architecture inherent in
brain graphs. Specifically, we present a network entanglement-based node
importance encoding technique that captures the structural importance of nodes
in global information propagation during brain graph communication,
highlighting the biological properties of the brain structure. Furthermore, we
introduce a functional module-aware self-attention to preserve the functional
segregation and integration characteristics of brain graphs in the learned
representations. Experimental results on three benchmark datasets demonstrate
that BioBGT outperforms state-of-the-art models, enhancing biologically
plausible brain graph representations for various brain graph analytical tasks.
|
2502.08960
|
A Comprehensive Survey on Imbalanced Data Learning
|
cs.LG
|
With the expansion of data availability, machine learning (ML) has achieved
remarkable breakthroughs in both academia and industry. However, imbalanced
data distributions are prevalent in various types of raw data and severely
hinder the performance of ML by biasing the decision-making processes. To
deepen the understanding of imbalanced data and facilitate the related research
and applications, this survey systematically analyzes various real-world data
formats and organizes existing research on different data formats into four
distinct categories: data re-balancing, feature representation, training
strategy, and ensemble learning. This structured analysis helps researchers
comprehensively understand the pervasive nature of imbalance across diverse
data formats, thereby paving a clearer path toward achieving specific research
goals. We provide an overview of relevant open-source libraries, spotlight
current challenges, and offer novel insights aimed at fostering future
advancements in this critical area of study.
|
2502.08963
|
Modeling Time-evolving Causality over Data Streams
|
cs.LG
|
Given an extensive, semi-infinite collection of multivariate coevolving data
sequences (e.g., sensor/web activity streams) whose observations influence each
other, how can we discover the time-changing cause-and-effect relationships in
co-evolving data streams? How efficiently can we reveal dynamical patterns that
allow us to forecast future values? In this paper, we present a novel streaming
method, ModePlait, which is designed for modeling such causal relationships
(i.e., time-evolving causality) in multivariate co-evolving data streams and
forecasting their future values. The solution relies on characteristics of the
causal relationships that evolve over time in accordance with the dynamic
changes of exogenous variables. ModePlait has the following properties: (a)
Effective: it discovers the time-evolving causality in multivariate co-evolving
data streams by detecting the transitions of distinct dynamical patterns
adaptively. (b) Accurate: it enables both the discovery of time-evolving
causality and the forecasting of future values in a streaming fashion. (c)
Scalable: our algorithm does not depend on data stream length and thus is
applicable to very large sequences. Extensive experiments on both synthetic and
real-world datasets demonstrate that our proposed model outperforms
state-of-the-art methods in terms of discovering the time-evolving causality as
well as forecasting.
|
2502.08966
|
RTBAS: Defending LLM Agents Against Prompt Injection and Privacy Leakage
|
cs.CR cs.AI
|
Tool-Based Agent Systems (TBAS) allow Language Models (LMs) to use external
tools for tasks beyond their standalone capabilities, such as searching
websites, booking flights, or making financial transactions. However, these
tools greatly increase the risks of prompt injection attacks, where malicious
content hijacks the LM agent to leak confidential data or trigger harmful
actions. Existing defenses (OpenAI GPTs) require user confirmation before every
tool call, placing onerous burdens on users. We introduce Robust TBAS (RTBAS),
which automatically detects and executes tool calls that preserve integrity and
confidentiality, requiring user confirmation only when these safeguards cannot
be ensured. RTBAS adapts Information Flow Control to the unique challenges
presented by TBAS. We present two novel dependency screeners, using
LM-as-a-judge and attention-based saliency, to overcome these challenges.
Experimental results on the AgentDojo Prompt Injection benchmark show RTBAS
prevents all targeted attacks with only a 2% loss of task utility when under
attack, and further tests confirm its ability to obtain near-oracle performance
on detecting both subtle and direct privacy leaks.
|
2502.08967
|
Low Complexity Artificial Noise Aided Beam Focusing Design in Near-Field
Terahertz Communications
|
cs.IT math.IT
|
In this paper, we develop a novel low-complexity artificial noise (AN) aided
beam focusing scheme in a near-field terahertz wiretap communication system. In
this system, the base station (BS) equipped with a large-scale array transmits
signals to a legitimate user, while mitigating information leakage to an
eavesdropper. We formulate an optimization problem to maximize the secrecy rate
achieved at the legitimate user and solve it by designing the optimal beam
focusing and power allocation. Numerical results demonstrate the significant
performance improvement achieved by the proposed AN aided beam focusing scheme,
especially when the eavesdropper is located closer to the BS than the
legitimate user.
|
2502.08969
|
SkyRover: A Modular Simulator for Cross-Domain Pathfinding
|
cs.RO cs.AI cs.LG cs.MA
|
Unmanned Aerial Vehicles (UAVs) and Automated Guided Vehicles (AGVs)
increasingly collaborate in logistics, surveillance, and inspection tasks.
However, existing simulators often focus on a single domain, limiting
cross-domain study. This paper presents the SkyRover, a modular simulator for
UAV-AGV multi-agent pathfinding (MAPF). SkyRover supports realistic agent
dynamics, configurable 3D environments, and convenient APIs for external
solvers and learning methods. By unifying ground and aerial operations, it
facilitates cross-domain algorithm design, testing, and benchmarking.
Experiments highlight SkyRover's capacity for efficient pathfinding and
high-fidelity simulations in UAV-AGV coordination. The project is available at
https://sites.google.com/view/mapf3d/home.
|
2502.08972
|
Tuning-Free Personalized Alignment via Trial-Error-Explain In-Context
Learning
|
cs.CL cs.AI
|
Language models are aligned to the collective voice of many, resulting in
generic outputs that do not align with specific users' styles. In this work, we
present Trial-Error-Explain In-Context Learning (TICL), a tuning-free method
that personalizes language models for text generation tasks with fewer than 10
examples per user. TICL iteratively expands an in-context learning prompt via a
trial-error-explain process, adding model-generated negative samples and
explanations that provide fine-grained guidance towards a specific user's
style. TICL achieves favorable win rates on pairwise comparisons with
LLM-as-a-judge up to 91.5% against the previous state-of-the-art and
outperforms competitive tuning-free baselines for personalized alignment tasks
of writing emails, essays and news articles. Both lexical and qualitative
analyses show that the negative samples and explanations enable language models
to learn stylistic context more effectively and overcome the bias towards
structural and formal phrases observed in their zero-shot outputs. By
front-loading inference compute to create a user-specific in-context learning
prompt that does not require extra generation steps at test time, TICL presents
a novel yet simple approach for personalized alignment.
|
2502.08974
|
Topo2Seq: Enhanced Topology Reasoning via Topology Sequence Learning
|
cs.CV
|
Extracting lane topology from perspective views (PV) is crucial for planning
and control in autonomous driving. This approach extracts potential drivable
trajectories for self-driving vehicles without relying on high-definition (HD)
maps. However, the unordered nature and weak long-range perception of the
DETR-like framework can result in misaligned segment endpoints and limited
topological prediction capabilities. Inspired by the learning of contextual
relationships in language models, the connectivity relations in roads can be
characterized as explicit topology sequences. In this paper, we introduce
Topo2Seq, a novel approach for enhancing topology reasoning via topology
sequence learning. The core concept of Topo2Seq is randomized order
prompt-to-sequence learning between a lane segment decoder and a topology
sequence decoder. The dual-decoder branches simultaneously learn the lane topology
sequences extracted from the Directed Acyclic Graph (DAG) and the lane graph
containing geometric information. Randomized order prompt-to-sequence learning
extracts unordered key points from the lane graph predicted by the lane segment
decoder, which are then fed into the prompt design of the topology sequence
decoder to reconstruct an ordered and complete lane graph. In this way, the
lane segment decoder learns powerful long-range perception and accurate
topological reasoning from the topology sequence decoder. Notably, the
topology sequence decoder is introduced only during training and does not
affect the
inference efficiency. Experimental evaluations on the OpenLane-V2 dataset
demonstrate the state-of-the-art performance of Topo2Seq in topology reasoning.
|
2502.08975
|
Small Molecule Drug Discovery Through Deep Learning: Progress,
Challenges, and Opportunities
|
cs.LG q-bio.BM
|
Due to their excellent drug-like and pharmacokinetic properties, small
molecule drugs are widely used to treat various diseases, making them a
critical component of drug discovery. In recent years, with the rapid
development of deep learning (DL) techniques, DL-based small molecule drug
discovery methods have achieved excellent performance in prediction accuracy,
speed, and complex molecular relationship modeling compared to traditional
machine learning approaches. These advancements enhance drug screening
efficiency and optimization, and they provide more precise and effective
solutions for various drug discovery tasks. Contributing to this field's
development, this paper aims to systematically summarize the key tasks and
representative techniques in recent DL-based small molecule drug discovery.
Specifically, we provide an overview of the major
tasks in small molecule drug discovery and their interrelationships. Next, we
analyze the six core tasks, summarizing the related methods, commonly used
datasets, and technological development trends. Finally, we discuss key
challenges, such as interpretability and out-of-distribution generalization,
and offer our insights into future research directions for DL-assisted small
molecule drug discovery.
|
2502.08977
|
Text-driven 3D Human Generation via Contrastive Preference Optimization
|
cs.CV
|
Recent advances in Score Distillation Sampling (SDS) have improved 3D human
generation from textual descriptions. However, existing methods still face
challenges in accurately aligning 3D models with long and complex textual
inputs. To address this challenge, we propose a novel framework that introduces
contrastive preferences, where human-level preference models, guided by both
positive and negative prompts, assist SDS for improved alignment. Specifically,
we design a preference optimization module that integrates multiple models to
comprehensively capture the full range of textual features. Furthermore, we
introduce a negation preference module to mitigate over-optimization of
irrelevant details by leveraging static-dynamic negation prompts, effectively
preventing ``reward hacking''. Extensive experiments demonstrate that our method
achieves state-of-the-art results, significantly enhancing texture realism and
visual alignment with textual descriptions, particularly for long and complex
inputs.
|
2502.08978
|
What exactly has TabPFN learned to do?
|
cs.LG stat.ML
|
TabPFN [Hollmann et al., 2023], a Transformer model pretrained to perform
in-context learning on fresh tabular classification problems, was presented at
the last ICLR conference. To better understand its behavior, we treat it as a
black-box function approximator generator and observe its generated function
approximations on a varied selection of training datasets. Exploring its
learned inductive biases in this manner, we observe behavior that is at turns
either brilliant or baffling. We conclude this post with thoughts on how these
results might inform the development, evaluation, and application of prior-data
fitted networks (PFNs) in the future.
|
2502.08982
|
Outback: Fast and Communication-efficient Index for Key-Value Store on
Disaggregated Memory
|
cs.DB
|
Disaggregated memory systems achieve resource utilization efficiency and
system scalability by distributing computation and memory resources into
distinct pools of nodes. RDMA is an attractive solution to support
high-throughput communication between different disaggregated resource pools.
However, existing RDMA solutions face a dilemma: one-sided RDMA completely
bypasses computation at memory nodes, but its communication takes multiple
round trips; two-sided RDMA achieves one-round-trip communication but requires
non-trivial computation for index lookups at memory nodes, which violates the
principle of disaggregated memory. This work presents Outback, a novel indexing
solution for key-value stores with a one-round-trip RDMA-based network that
does not incur computation-heavy tasks at memory nodes. Outback is the first to
utilize dynamic minimal perfect hashing and separates its index into two
components: one memory-efficient and compute-heavy component at compute nodes
and the other memory-heavy and compute-efficient component at memory nodes. We
implement a prototype of Outback and evaluate its performance in a public
cloud. The experimental results show that Outback achieves higher throughput
than both the state-of-the-art one-sided RDMA and two-sided RDMA-based
in-memory KVS by 1.06-5.03x, due to the unique strength of applying a separated
perfect hashing index.
|
2502.08985
|
Few is More: Task-Efficient Skill-Discovery for Multi-Task Offline
Multi-Agent Reinforcement Learning
|
cs.LG cs.AI cs.MA
|
As a data-driven approach, offline MARL learns superior policies solely from
offline datasets, ideal for domains rich in historical data but with high
interaction costs and risks. However, most existing methods are task-specific,
requiring retraining for new tasks, leading to redundancy and inefficiency. To
address this issue, in this paper, we propose a task-efficient multi-task
offline MARL algorithm, Skill-Discovery Conservative Q-Learning (SD-CQL).
Unlike existing offline skill-discovery methods, SD-CQL discovers skills by
reconstructing the next observation. It then evaluates fixed and variable
actions separately and employs behavior-regularized conservative Q-learning to
execute the optimal action for each skill. This approach eliminates the need
for local-global alignment and enables strong multi-task generalization from
limited small-scale source tasks. Substantial experiments on StarCraft II
demonstrate the superior generalization performance and task-efficiency of
SD-CQL. It achieves the best performance on $\textbf{10}$ out of $14$ task
sets, with up to $\textbf{65\%}$ improvement on individual task sets, and is
within $4\%$ of the best baseline on the remaining four.
|
2502.08987
|
Neural Force Field: Learning Generalized Physical Representation from a
Few Examples
|
cs.LG cs.AI
|
Physical reasoning is a remarkable human ability that enables rapid learning
and generalization from limited experience. Current AI models, despite
extensive training, still struggle to achieve similar generalization,
especially in Out-of-distribution (OOD) settings. This limitation stems from
their inability to abstract core physical principles from observations. A key
challenge is developing representations that can efficiently learn and
generalize physical dynamics from minimal data. Here we present Neural Force
Field (NFF), a modeling framework built on Neural Ordinary Differential
Equations (NODEs) that learns interpretable force field representations which
can be efficiently integrated through an Ordinary Differential Equation (ODE) solver
to predict object trajectories. Unlike existing approaches that rely on
high-dimensional latent spaces, NFF captures fundamental physical concepts such
as gravity, support, and collision in an interpretable manner. Experiments on
two challenging physical reasoning tasks demonstrate that NFF, trained with
only a few examples, achieves strong generalization to unseen scenarios. This
physics-grounded representation enables efficient forward-backward planning and
rapid adaptation through interactive refinement. Our work suggests that
incorporating physics-inspired representations into learning systems can help
bridge the gap between artificial and human physical reasoning capabilities.
|
2502.08988
|
Latents of latents to delineate pixels: hybrid Matryoshka
autoencoder-to-U-Net pairing for segmenting large medical images in GPU-poor
and low-data regimes
|
cs.CV
|
Medical images are often high-resolution and lose important detail if
downsampled, making pixel-level methods such as semantic segmentation much less
efficient if performed on a low-dimensional image. We propose a low-rank
Matryoshka projection and a hybrid segmenting architecture that preserves
important information while retaining sufficient pixel geometry for pixel-level
tasks. We design the Matryoshka Autoencoder (MatAE-U-Net) which combines the
hierarchical encoding of the Matryoshka Autoencoder with the spatial
reconstruction capabilities of a U-Net decoder, leveraging multi-scale feature
extraction and skip connections to enhance accuracy and generalisation. We
apply it to the problem of segmenting the left ventricle (LV) in
echocardiographic images using the Stanford EchoNet-D dataset, including 1,000
standardised video-mask pairs of cardiac ultrasound videos resized to 112x112
pixels. The MatAE-UNet model achieves a Mean IoU of 77.68\%, Mean Pixel
Accuracy of 97.46\%, and Dice Coefficient of 86.91\%, outperforming the
baseline U-Net, which attains a Mean IoU of 74.70\%, Mean Pixel Accuracy of
97.31\%, and Dice Coefficient of 85.20\%. The results highlight the potential
of using the U-Net in the recursive Matryoshka latent space for low-contrast
imaging problems such as echocardiographic analysis.
|
2502.08989
|
RLSA-PFL: Robust Lightweight Secure Aggregation with Model Inconsistency
Detection in Privacy-Preserving Federated Learning
|
cs.CR cs.AI
|
Federated Learning (FL) allows users to collaboratively train a global
machine learning model by sharing local model only, without exposing their
private data to a central server. This distributed learning is particularly
appealing in scenarios where data privacy is crucial, and it has garnered
substantial attention from both industry and academia. However, studies have
revealed privacy vulnerabilities in FL, where adversaries can potentially infer
sensitive information from the shared model parameters. In this paper, we
present an efficient masking-based secure aggregation scheme utilizing
lightweight cryptographic primitives to mitigate privacy risks. Our scheme
offers several advantages over existing methods. First, it requires only a
single setup phase for the entire FL training session, significantly reducing
communication overhead. Second, it minimizes user-side overhead by eliminating
the need for user-to-user interactions, utilizing an intermediate server layer
and a lightweight key negotiation method. Third, the scheme is highly resilient
to user dropouts, and the users can join at any FL round. Fourth, it can detect
and defend against malicious server activities, including recently discovered
model inconsistency attacks. Finally, our scheme ensures security in both
semi-honest and malicious settings. We provide security analysis to formally
prove the robustness of our approach. Furthermore, we implemented an end-to-end
prototype of our scheme. We conducted comprehensive experiments and
comparisons, which show that it outperforms existing solutions in terms of
communication and computation overhead, functionality, and security.
|
2502.08991
|
Task Generalization With AutoRegressive Compositional Structure: Can
Learning From $d$ Tasks Generalize to $d^{T}$ Tasks?
|
cs.LG stat.ML
|
Large language models (LLMs) exhibit remarkable task generalization, solving
tasks they were never explicitly trained on with only a few demonstrations.
This raises a fundamental question: When can learning from a small set of tasks
generalize to a large task family? In this paper, we investigate task
generalization through the lens of AutoRegressive Compositional (ARC)
structure, where each task is a composition of $T$ operations, and each
operation is among a finite family of $d$ subtasks. This yields a total class
of size~\( d^{T} \). We first show that generalization to all \( d^{T} \)
tasks is theoretically achievable by training on only \( \tilde{O}(d) \)
tasks. Empirically, we demonstrate that Transformers achieve such exponential
task generalization on sparse parity functions via in-context learning (ICL)
and Chain-of-Thought (CoT) reasoning. We further demonstrate this
generalization in arithmetic and language translation, extending beyond parity
functions.
|
2502.08993
|
Off-Policy Evaluation for Recommendations with Missing-Not-At-Random
Rewards
|
stat.ML cs.LG
|
Unbiased recommender learning (URL) and off-policy evaluation/learning
(OPE/L) techniques are effective in addressing the data bias caused by display
position and logging policies, thereby consistently improving the performance
of recommendations. However, when both biases exist in the logged data, these
estimators may suffer from significant bias. In this study, we first analyze
the position bias of the OPE estimator when rewards are missing not at random.
To mitigate both biases, we propose a novel estimator that leverages two
probabilities of logging policies and reward observations as propensity scores.
Our experiments demonstrate that the proposed estimator achieves superior
performance compared to other estimators, even as the level of bias in reward
observations increases.
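The estimator described above, which weights by both the logging-policy propensity and the reward-observation propensity, can be sketched as a simple inverse-propensity average. The function and variable names below are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def dual_propensity_ips(rewards, observed, pi_e, pi_b, obs_prob):
    """Inverse-propensity estimator that corrects for two sources of bias:
    the logging policy (via pi_e / pi_b) and missing-not-at-random reward
    observation (via observed / obs_prob). Illustrative sketch only.

    rewards:  logged rewards (meaningful only where observed == 1)
    observed: 1 if the reward was observed, else 0
    pi_e, pi_b: evaluation/logging policy probabilities of the logged action
    obs_prob: probability the reward is observed (e.g. position bias)
    """
    w = (pi_e / pi_b) * (observed / obs_prob)
    return float(np.mean(w * rewards))
```

With both propensities correctly specified, the weighted average is unbiased for the evaluation policy's value even though half the rewards are missing.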
|
2502.08995
|
PixLift: Accelerating Web Browsing via AI Upscaling
|
cs.PF cs.AI
|
Accessing the internet in regions with expensive data plans and limited
connectivity poses significant challenges, restricting information access and
economic growth. Images, as a major contributor to webpage sizes, exacerbate
this issue, despite advances in compression formats like WebP and AVIF. The
continued growth of complex and curated web content, coupled with suboptimal
optimization practices in many regions, has prevented meaningful reductions in
web page sizes. This paper introduces PixLift, a novel solution to reduce
webpage sizes by downscaling their images during transmission and leveraging AI
models on user devices to upscale them. By trading computational resources for
bandwidth, PixLift enables more affordable and inclusive web access. We address
key challenges, including the feasibility of scaled image requests on popular
websites, the implementation of PixLift as a browser extension, and its impact
on user experience. Through the analysis of 71.4k webpages, evaluations of
three mainstream upscaling models, and a user study, we demonstrate PixLift's
ability to significantly reduce data usage without compromising image quality,
fostering a more equitable internet.
|
2502.08996
|
Masked Modulation: High-Throughput Half-Duplex ISAC Transmission
Waveform Design
|
cs.IT math.IT
|
Integrated sensing and communication (ISAC) enables numerous innovative
wireless applications. Communication-centric design is a practical choice for
the construction of the sixth generation (6G) ISAC networks.
Continuous-wave-based ISAC systems, with orthogonal frequency-division
multiplexing (OFDM) being a representative example, suffer from the
self-interference (SI) problem, and hence are less suitable for long-range
sensing. On the other hand, pulse-based half-duplex ISAC systems are free of
SI, but are also less favourable for high-throughput communication scenarios.
In this treatise, we propose MASked Modulation (MASM), a half-duplex ISAC
waveform design scheme, which minimises a range blindness metric, referred to
as "range glint", given a duty cycle (proportional to communication throughput)
constraint. In particular, MASM is capable of supporting high-throughput
communication (~50% duty cycle) under mild range glint. Moreover, MASM can be
flexibly adapted to frame-level waveform designs by operating on the slow-time
scale. In terms of optimal transmit mask design, a set of masks is shown to be
ideal in the sense of sidelobe level and range glint intensity.
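As a toy illustration of the quantities MASM trades off, the sketch below computes the duty cycle and peak autocorrelation sidelobe of a binary transmit mask. This is a simplified diagnostic under assumed definitions, not the paper's range-glint metric or mask design procedure.

```python
import numpy as np

def mask_metrics(mask):
    """Duty cycle and peak autocorrelation sidelobe of a binary transmit mask.

    The duty cycle is the fraction of 'on' slots (proportional to
    communication throughput); the peak sidelobe of the mask's
    autocorrelation is a rough proxy for range ambiguity.
    """
    mask = np.asarray(mask, dtype=float)
    duty = mask.mean()
    ac = np.correlate(mask, mask, mode="full")
    center = len(mask) - 1  # lag-0 index of the full correlation
    sidelobe = float(np.max(np.delete(ac, center))) if len(mask) > 1 else 0.0
    return duty, sidelobe
```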
|
2502.08997
|
Hierarchical Vision Transformer with Prototypes for Interpretable
Medical Image Classification
|
cs.CV
|
Explainability is a highly demanded requirement for applications in high-risk
areas such as medicine. Vision Transformers have mainly been limited to
attention extraction to provide insight into the model's reasoning. Our
approach combines the high performance of Vision Transformers with the
introduction of new explainability capabilities. We present HierViT, a Vision
Transformer that is inherently interpretable and adapts its reasoning to that
of humans. A hierarchical structure is used to process domain-specific features
for prediction. It is interpretable by design, as it derives the target output
with human-defined features that are visualized by exemplary images
(prototypes). By incorporating domain knowledge about these decisive features,
the reasoning is semantically similar to human reasoning and therefore
intuitive. Moreover, attention heatmaps visualize the crucial regions for
identifying each feature, thereby providing HierViT with a versatile tool for
validating predictions. Evaluated on two medical benchmark datasets, LIDC-IDRI
for lung nodule assessment and derm7pt for skin lesion classification, HierViT
achieves superior and comparable prediction accuracy, respectively, while
offering explanations that align with human reasoning.
|
2502.09000
|
Residual Transformer Fusion Network for Salt and Pepper Image Denoising
|
cs.CV cs.LG
|
Convolutional Neural Network (CNN) has been widely used in unstructured
datasets, one of which is image denoising. Image denoising is a noisy image
reconstruction process that aims to reduce additional noise that occurs from
the noisy image with various strategies. A key difficulty is that many image
denoising methods require prior knowledge about the noise. To overcome this
problem, a combined architecture of Convolutional
Vision Transformer (CvT) and Residual Networks (ResNet) is used which is called
the Residual Transformer Fusion Network (RTF-Net). In general, the process in
this architecture can be divided into two parts, Noise Suppression Network
(NSN) and Structure Enhancement Network (SEN). Residual Block is used in the
Noise Suppression Network and is used to learn the noise map in the image,
while the CvT is used in the Structure Enhancement Network and is used to learn
the details that need to be added to the image processed by the Noise
Suppression Network. The model was trained using the DIV2K Training Set
dataset, and validated on the DIV2K Validation Set. After training, the model
was tested on Lena, Bridge, Pepper, and BSD300 images with noise levels of 30%,
50%, and 70%, and the PSNR results were
compared with the DBA, NASNLM, PARIGI, NLSF, NLSF-MLP and NLSF-CNN methods. The
test results show that the proposed method is superior in all cases except for
the Pepper image at a noise level of 30%, where NLSF-CNN is superior with a
PSNR value of 32.99 dB, while the proposed method gets a PSNR value of 31.70
dB.
|
2502.09001
|
Privacy-Preserving Hybrid Ensemble Model for Network Anomaly Detection:
Balancing Security and Data Protection
|
cs.LG
|
Privacy-preserving network anomaly detection has become an essential area of
research due to growing concerns over the protection of sensitive data.
Traditional anomaly detection models often prioritize accuracy while neglecting
the critical aspect of privacy. In this work, we propose a hybrid ensemble
model that incorporates privacy-preserving techniques to address both detection
accuracy and data protection. Our model combines the strengths of several
machine learning algorithms, including K-Nearest Neighbors (KNN), Support
Vector Machines (SVM), XGBoost, and Artificial Neural Networks (ANN), to create
a robust system capable of identifying network anomalies while ensuring
privacy. The proposed approach integrates advanced preprocessing techniques
that enhance data quality and address the challenges of small sample sizes and
imbalanced datasets. By embedding privacy measures into the model design, our
solution offers a significant advancement over existing methods, ensuring both
enhanced detection performance and strong privacy safeguards.
|
2502.09002
|
End-to-End triplet loss based fine-tuning for network embedding in
effective PII detection
|
cs.LG
|
Many approaches in the mobile data ecosystem inspect network traffic
generated by applications running on a user's device to detect personal data
exfiltration. State-of-the-art methods rely on
features extracted from HTTP requests and in this context, machine learning
involves training classifiers on these features and making predictions using
labelled packet traces. However, most of these methods include external feature
selection before model training. Deep learning, on the other hand, typically
does not require such techniques, as it can autonomously learn and identify
patterns in the data without external feature extraction or selection
algorithms. In this article, we propose a novel deep learning based end-to-end
learning framework for prediction of exposure of personally identifiable
information (PII) in mobile packets. The framework employs a pre-trained large
language model (LLM) and an autoencoder to generate embedding of network
packets and then uses a triplet-loss based fine-tuning method to train the
model, increasing detection effectiveness using two real-world datasets. We
compare our proposed detection framework with other state-of-the-art works in
detecting PII leaks from the user's device.
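The triplet-loss fine-tuning step can be illustrated with a minimal margin loss over packet embeddings: an anchor is pulled toward a positive (same PII class) and pushed away from a negative. The margin value and batch layout here are assumptions, not the paper's configuration.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet margin loss over embedding batches (illustrative sketch).

    Encourages d(anchor, positive) + margin <= d(anchor, negative),
    so same-class packet embeddings cluster and differing ones separate.
    """
    d_pos = np.linalg.norm(anchor - positive, axis=-1)
    d_neg = np.linalg.norm(anchor - negative, axis=-1)
    return float(np.maximum(d_pos - d_neg + margin, 0.0).mean())
```

When the negative is already farther than the positive by at least the margin, the loss is zero and the triplet contributes no gradient.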
|
2502.09003
|
RoSTE: An Efficient Quantization-Aware Supervised Fine-Tuning Approach
for Large Language Models
|
cs.LG cs.AI
|
Supervised fine-tuning is a standard method for adapting pre-trained large
language models (LLMs) to downstream tasks. Quantization has been recently
studied as a post-training technique for efficient LLM deployment. To obtain
quantized fine-tuned LLMs, conventional pipelines would first fine-tune the
pre-trained models, followed by post-training quantization. This often yields
suboptimal performance as it fails to leverage the synergy between fine-tuning
and quantization. To effectively realize low-bit quantization of weights,
activations, and KV caches in LLMs, we propose an algorithm named Rotated
Straight-Through-Estimator (RoSTE), which combines quantization-aware
supervised fine-tuning (QA-SFT) with an adaptive rotation strategy that
identifies an effective rotation configuration to reduce activation outliers.
We provide theoretical insights on RoSTE by analyzing its prediction error when
applied to an overparameterized least square quantized training problem. Our
findings reveal that the prediction error is directly proportional to the
quantization error of the converged weights, which can be effectively managed
through an optimized rotation configuration. Experiments on Pythia and Llama
models of different sizes demonstrate the effectiveness of RoSTE. Compared to
existing post-SFT quantization baselines, our method consistently achieves
superior performances across various tasks and different LLM architectures.
|
2502.09004
|
Hope vs. Hate: Understanding User Interactions with LGBTQ+ News Content
in Mainstream US News Media through the Lens of Hope Speech
|
cs.CL cs.CY cs.LG
|
This paper makes three contributions. First, via a substantial corpus of
1,419,047 comments posted on 3,161 YouTube news videos of major US cable news
outlets, we analyze how users engage with LGBTQ+ news content. Our analyses
focus both on positive and negative content. In particular, we construct a
fine-grained hope speech classifier that detects positive (hope speech),
negative, neutral, and irrelevant content. Second, in consultation with a
public health expert specializing on LGBTQ+ health, we conduct an annotation
study with a balanced and diverse political representation and release a
dataset of 3,750 instances with fine-grained labels and detailed annotator
demographic information. Finally, beyond providing a vital resource for the
LGBTQ+ community, our annotation study and subsequent in-the-wild assessments
reveal (1) strong association between rater political beliefs and how they rate
content relevant to a marginalized community; (2) models trained on individual
political beliefs exhibit considerable in-the-wild disagreement; and (3)
zero-shot large language models (LLMs) align more with liberal raters.
|
2502.09010
|
Data-Driven Discovery of Population Balance Equations for the
Particulate Sciences
|
cs.CE
|
Understanding the behavior of particles in a dispersed phase system via
population balances holds fundamental importance in studies of particulate
sciences across various fields. Particle behavior, however, is sophisticated as
a single particle can undergo internal property changes (e.g., size, cell age,
and energy content) through various mechanisms. When confronted with an unknown
distributed particulate system, discovering the underlying population balance
equation (PBE) entails firstly learning the underlying particulate phenomena
followed by the associated phenomenological laws that govern the kinetics and
mechanisms of particle transformations in their local conditions. Conventional
inverse problem approaches reveal the shape of phenomenological functions for
predetermined forms of PBE (e.g., pure breakage/aggregation PBE, etc.).
However, these methods can be limited in their ability to uncover the
mechanisms which govern uncharacterized particulate systems from data.
Leveraging the increasing abundance of data, we devise a data-driven framework
based on sparse regression to learn PBEs as linear combinations of an extensive
pool of candidate terms. Thus, this approach enables effective and accurate
functional identification of PBEs without assuming the structure a priori,
hence mitigating any potential loss of details, while minimizing model
overfitting and providing a more interpretable representation of particulate
systems. We showcase the proficiency of our approach across a wide spectrum of
particulate systems, ranging from simple canonical pure breakage and pure
aggregation systems to complex systems with multiple particulate processes. Our
approach holds the potential to generalize the discovery of PBEs along with
their phenomenological laws from data, thus facilitating wider adoption of
population balances.
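One common way to realize sparse regression over a library of candidate terms is sequential thresholded least squares, as popularized by SINDy. The sketch below is an illustrative stand-in for the kind of solver such a framework uses, not the paper's exact algorithm.

```python
import numpy as np

def stlsq(theta, dndt, threshold=0.1, iters=10):
    """Sequential thresholded least squares for sparse term selection.

    theta: (m, p) library of candidate PBE terms evaluated on the data
    dndt:  (m,) observed time derivative of the number density
    Returns a sparse coefficient vector identifying the active terms.
    """
    xi = np.linalg.lstsq(theta, dndt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0          # prune negligible terms
        big = ~small
        if big.any():            # re-fit the surviving terms
            xi[big] = np.linalg.lstsq(theta[:, big], dndt, rcond=None)[0]
    return xi
```

On noise-free synthetic data the solver recovers exactly which candidate terms generated the dynamics, which is the sense in which the PBE structure need not be assumed a priori.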
|
2502.09017
|
Diversity Enhances an LLM's Performance in RAG and Long-context Task
|
cs.CL cs.LG
|
The rapid advancements in large language models (LLMs) have highlighted the
challenge of context window limitations, primarily due to the quadratic time
complexity of the self-attention mechanism (\(O(N^2)\), where \(N\) denotes the
context window length). This constraint impacts tasks such as
retrieval-augmented generation (RAG) in question answering (Q\&A) and long
context summarization. A common approach involves selecting content with the
highest similarity to the query; however, this often leads to redundancy and
the exclusion of diverse yet relevant information. Building on principles from
Maximal Marginal Relevance (MMR) and Farthest Point Sampling (FPS), we
integrate diversity into the content selection process. Our findings reveal
that incorporating diversity substantially increases the recall of selecting
relevant sentences or chunks before LLM-based Q\&A and summarization. These
results highlight the importance of maintaining diversity in future LLM
applications to further improve summarization and Q\&A outcomes.
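The MMR principle the abstract builds on can be sketched as a greedy trade-off between query relevance and redundancy with already-selected chunks. This is a generic illustration (not the paper's implementation); `lam` is the standard MMR weight between relevance and diversity:

```python
import numpy as np

def mmr_select(query_vec, doc_vecs, k, lam=0.5):
    """Greedy Maximal Marginal Relevance: each step picks the chunk that
    balances relevance to the query against redundancy with the chunks
    already selected."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    candidates = list(range(len(doc_vecs)))
    selected = []
    while candidates and len(selected) < k:
        def score(i):
            rel = cos(query_vec, doc_vecs[i])
            red = max((cos(doc_vecs[i], doc_vecs[j]) for j in selected),
                      default=0.0)
            return lam * rel - (1 - lam) * red
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Three duplicate chunks plus one distinct-but-relevant chunk: MMR avoids
# wasting the second slot on a duplicate.
q = np.array([1.0, 0.0])
docs = [np.array([0.95, 0.31]), np.array([0.95, 0.31]),
        np.array([0.95, 0.31]), np.array([0.95, -0.31])]
picked = mmr_select(q, docs, k=2)   # picks [0, 3], not two duplicates
```

Pure top-k similarity would return two copies of the same content here; the redundancy penalty is exactly what recovers the "diverse yet relevant" chunks the abstract argues for.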
|
2502.09018
|
Zero-shot Concept Bottleneck Models
|
cs.LG cs.AI cs.CV
|
Concept bottleneck models (CBMs) are inherently interpretable and
intervenable neural network models, which explain their final label prediction
by the intermediate prediction of high-level semantic concepts. However, they
require target task training to learn input-to-concept and concept-to-label
mappings, incurring the cost of target dataset collection and training. In this
paper, we present \textit{zero-shot concept bottleneck models} (Z-CBMs), which
predict concepts and labels in a fully zero-shot manner without training neural
networks. Z-CBMs utilize a large-scale concept bank, which is composed of
millions of vocabulary terms extracted from the web, to describe arbitrary inputs in
various domains. For the input-to-concept mapping, we introduce concept
retrieval, which dynamically finds input-related concepts by the cross-modal
search on the concept bank. In the concept-to-label inference, we apply concept
regression to select essential concepts from the retrieved concepts by sparse
linear regression. Through extensive experiments, we confirm that our Z-CBMs
provide interpretable and intervenable concepts without any additional
training. Code will be available at https://github.com/yshinya6/zcbm.
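The concept-regression step, selecting a few essential concepts from the retrieved set by sparse linear regression, can be illustrated with a generic L1-regularized solver (ISTA). This is a sketch under simplified assumptions; the paper's actual solver and embedding spaces may differ:

```python
import numpy as np

def concept_regression(image_feat, concept_feats, alpha=0.05, n_iters=300):
    """L1-regularized regression of an image embedding onto retrieved
    concept embeddings via ISTA; nonzero weights mark essential concepts."""
    A = concept_feats.T                        # (dim, n_concepts)
    w = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant
    for _ in range(n_iters):
        grad = A.T @ (A @ w - image_feat)      # least-squares gradient
        w = w - step * grad
        # soft-thresholding induces sparsity over the concept weights
        w = np.sign(w) * np.maximum(np.abs(w) - step * alpha, 0.0)
    return w

# Toy check with orthonormal concept embeddings: the sparse solution keeps
# exactly the two concepts that compose the image.
concepts = np.eye(4)                           # rows = concept embeddings
image = 0.9 * concepts[0] + 0.4 * concepts[2]
w = concept_regression(image, concepts)        # ~ [0.85, 0, 0.35, 0]
```

The sparsity is what makes the bottleneck readable: only the handful of concepts with nonzero weight need to be shown to a user as the explanation.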
|
2502.09020
|
EventSTR: A Benchmark Dataset and Baselines for Event Stream based Scene
Text Recognition
|
cs.CV cs.AI
|
Mainstream Scene Text Recognition (STR) algorithms are developed based on RGB
cameras which are sensitive to challenging factors such as low illumination,
motion blur, and cluttered backgrounds. In this paper, we propose to recognize
the scene text using bio-inspired event cameras by collecting and annotating a
large-scale benchmark dataset, termed EventSTR. It contains 9,928
high-definition (1280 * 720) event samples and involves both Chinese and
English characters. We also benchmark multiple STR algorithms as baselines for
future work to compare against. In addition, we propose a new event-based scene
text recognition framework, termed SimC-ESTR. It first extracts the event
features using a visual encoder and projects them into tokens using a Q-former
module. More importantly, we propose to augment the vision tokens based on a
memory mechanism before feeding into the large language models. A
similarity-based error correction mechanism is embedded within the large
language model to correct potential minor errors fundamentally based on
contextual information. Extensive experiments on the newly proposed EventSTR
dataset and two simulation STR datasets fully demonstrate the effectiveness of
our proposed model. We believe the dataset and model establish a new
event-based STR task and will accelerate the application of event cameras in
various industries. The source code and
pre-trained models will be released on https://github.com/Event-AHU/EventSTR
|
2502.09022
|
Mechanistic Unveiling of Transformer Circuits: Self-Influence as a Key
to Model Reasoning
|
cs.AI
|
Transformer-based language models have achieved significant success; however,
their internal mechanisms remain largely opaque due to the complexity of
non-linear interactions and high-dimensional operations. While previous studies
have demonstrated that these models implicitly embed reasoning trees, humans
typically employ various distinct logical reasoning mechanisms to complete the
same task. It is still unclear which multi-step reasoning mechanisms are used
by language models to solve such tasks. In this paper, we aim to address this
question by investigating the mechanistic interpretability of language models,
particularly in the context of multi-step reasoning tasks. Specifically, we
employ circuit analysis and self-influence functions to evaluate the changing
importance of each token throughout the reasoning process, allowing us to map
the reasoning paths adopted by the model. We apply this methodology to the
GPT-2 model on a prediction task (IOI) and demonstrate that the underlying
circuits reveal a human-interpretable reasoning process used by the model.
|
2502.09026
|
Billet Number Recognition Based on Test-Time Adaptation
|
cs.CV
|
During the steel billet production process, it is essential to recognize
machine-printed or manually written billet numbers on moving billets in
real-time. To address the issue of low recognition accuracy for existing scene
text recognition methods, caused by factors such as image distortions and
distribution differences between training and test data, we propose a billet
number recognition method that integrates test-time adaptation with prior
knowledge. First, we introduce a test-time adaptation method into a model that
uses the DB network for text detection and the SVTR network for text
recognition. By minimizing the model's entropy during the testing phase, the
model can adapt to the distribution of test data without the need for
supervised fine-tuning. Second, we leverage the billet number encoding rules as
prior knowledge to assess the validity of each recognition result. Invalid
results, which do not comply with the encoding rules, are replaced. Finally, we
introduce a validation mechanism into the CTC algorithm using prior knowledge
to address its limitations in recognizing damaged characters. Experimental
results on real datasets, including both machine-printed billet numbers and
handwritten billet numbers, show significant improvements in evaluation
metrics, validating the effectiveness of the proposed method.
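The prior-knowledge validation step, rejecting recognition results that violate the billet number encoding rules, can be sketched as follows. The concrete rule below is hypothetical (the plant-specific encoding in the paper is not published): two uppercase letters, then six digits whose last digit is a mod-10 checksum of the first five.

```python
import re

# Hypothetical encoding rule, for illustration only.
BILLET_RE = re.compile(r"^[A-Z]{2}\d{6}$")

def is_valid_billet_number(s):
    """Check format and checksum against the assumed encoding rule."""
    if not BILLET_RE.match(s):
        return False
    digits = [int(c) for c in s[2:]]
    return digits[5] == sum(digits[:5]) % 10

def keep_valid(candidates):
    """Discard recognition results that violate the encoding rules, so
    they can be replaced by the next-best valid hypothesis."""
    return [c for c in candidates if is_valid_billet_number(c)]
```

For example, "AB123455" passes (1+2+3+4+5 = 15, checksum digit 5) while "AB123456" is rejected, which is how rule-based priors catch recognition errors that entropy minimization alone cannot.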
|
2502.09027
|
A Contextual-Aware Position Encoding for Sequential Recommendation
|
cs.IR
|
Sequential recommendation (SR), which encodes user activity to predict the
next action, has emerged as a widely adopted strategy in developing commercial
personalized recommendation systems. A critical component of modern SR models
is the attention mechanism, which synthesizes users' historical activities.
This mechanism is typically order-invariant and generally relies on position
encoding (PE). Conventional SR models simply assign a learnable vector to each
position, resulting in only modest gains compared to traditional recommendation
models. Moreover, limited research has been conducted on position encoding
tailored for sequential recommendation, leaving a significant gap in addressing
its unique requirements. To bridge this gap, we propose a novel
Contextual-Aware Position Encoding method for sequential recommendation,
abbreviated as CAPE. To the best of our knowledge, CAPE is the first PE method
specifically designed for sequential recommendation. Comprehensive experiments
conducted on benchmark SR datasets demonstrate that CAPE consistently enhances
multiple mainstream backbone models and achieves state-of-the-art performance
across both small- and large-scale model sizes. Furthermore, we deployed CAPE in an
industrial setting on a real-world commercial platform, clearly showcasing the
effectiveness of our approach. Our source code is available at
https://github.com/yjdy/CAPE.
|
2502.09029
|
MTDP: Modulated Transformer Diffusion Policy Model
|
cs.RO
|
Recent research on robot manipulation based on Behavior Cloning (BC) has made
significant progress. By combining diffusion models with BC, the diffusion
policy has been proposed, enabling robots to quickly learn manipulation tasks
with high success rates. However, integrating diffusion policy with
high-capacity Transformers presents challenges: traditional Transformer
architectures struggle to integrate guiding conditions effectively, resulting
in poor performance on manipulation tasks. In this paper, we
investigate key architectural designs of Transformers and improve the
traditional Transformer architecture by proposing the Modulated Transformer
Diffusion Policy (MTDP) model for diffusion policy. The core of this model is
our proposed Modulated Attention module, which more effectively integrates
the guiding conditions with the main input, improving the generative model's
output quality and, consequently, increasing the robot's task success rate. In
six experimental tasks, MTDP outperformed existing Transformer model
architectures, particularly in the Toolhang experiment, where the success rate
increased by 12\%. To verify the generality of Modulated Attention, we applied
it to the UNet architecture to construct Modulated UNet Diffusion Policy model
(MUDP), which also achieved higher success rates than existing UNet
architectures across all six experiments. The Diffusion Policy uses Denoising
Diffusion Probabilistic Models (DDPM) as the diffusion model. Building on this,
we also explored Denoising Diffusion Implicit Models (DDIM) as the diffusion
model, constructing the MTDP-I and MUDP-I models, which nearly double the
generation speed while maintaining performance.
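The abstract does not spell out the Modulated Attention design, so as a heavily hedged sketch, here is one common way to inject a guiding condition into a Transformer block: AdaLN/FiLM-style per-channel scale and shift computed from the condition, applied before standard self-attention. The paper's actual module may differ in detail:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def modulated_self_attention(x, cond, Wq, Wk, Wv, W_scale, W_shift):
    """AdaLN/FiLM-style modulation sketch: the guiding condition produces
    per-channel scale and shift that modulate the tokens before a
    standard single-head self-attention."""
    scale = np.tanh(cond @ W_scale)            # (d,)
    shift = cond @ W_shift                     # (d,)
    h = x * (1.0 + scale) + shift              # condition-modulated tokens
    q, k, v = h @ Wq, h @ Wk, h @ Wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return attn @ v

rng = np.random.default_rng(0)
T, d = 5, 8                                    # tokens, channels
x, cond = rng.normal(size=(T, d)), rng.normal(size=d)
Ws = [0.1 * rng.normal(size=(d, d)) for _ in range(5)]
out = modulated_self_attention(x, cond, *Ws)   # shape (T, d)
```

The design point is that the condition reshapes every token's features before attention is computed, rather than being appended as an extra token, which is one way a guiding signal can steer generation more strongly.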
|
2502.09038
|
AoI-Sensitive Data Forwarding with Distributed Beamforming in
UAV-Assisted IoT
|
cs.AI
|
This paper proposes a UAV-assisted forwarding system based on distributed
beamforming to enhance age of information (AoI) in Internet of Things (IoT).
Specifically, UAVs collect and relay data between sensor nodes (SNs) and the
remote base station (BS). However, flight delays increase the AoI and degrade
the network performance. To mitigate this, we adopt distributed beamforming to
extend the communication range, reduce the flight frequency, and ensure the
continuous data relay and efficient energy utilization. Then, we formulate an
optimization problem to minimize AoI and UAV energy consumption, by jointly
optimizing the UAV trajectories and communication schedules. The problem is
non-convex and highly dynamic, so we propose a deep reinforcement learning
(DRL)-based algorithm to solve it, enhancing stability and accelerating
convergence. Simulation results show that the
proposed algorithm effectively addresses the problem and outperforms other
benchmark algorithms.
|
2502.09039
|
Large Images are Gaussians: High-Quality Large Image Representation with
Levels of 2D Gaussian Splatting
|
cs.CV cs.AI
|
While Implicit Neural Representations (INRs) have demonstrated significant
success in image representation, they are often hindered by large training
memory and slow decoding speed. Recently, Gaussian Splatting (GS) has emerged
as a promising solution in 3D reconstruction due to its high-quality novel view
synthesis and rapid rendering capabilities, positioning it as a valuable tool
for a broad spectrum of applications. In particular, a GS-based representation,
2DGS, has shown potential for image fitting. In our work, we present
\textbf{L}arge \textbf{I}mages are \textbf{G}aussians (\textbf{LIG}), which
delves deeper into the application of 2DGS for image representations,
addressing the challenge of fitting large images with 2DGS in the situation of
numerous Gaussian points, through two distinct modifications: 1) we adopt a
variant of representation and optimization strategy, facilitating the fitting
of a large number of Gaussian points; 2) we propose a Level-of-Gaussian
approach for reconstructing both coarse low-frequency initialization and fine
high-frequency details. Consequently, we successfully represent large images as
Gaussian points and achieve high-quality large image representation,
demonstrating its efficacy across various types of large images. Code is
available at
{\href{https://github.com/HKU-MedAI/LIG}{https://github.com/HKU-MedAI/LIG}}.
|
2502.09042
|
Typhoon T1: An Open Thai Reasoning Model
|
cs.CL cs.AI
|
This paper introduces Typhoon T1, an open effort to develop an open Thai
reasoning model. A reasoning model is a relatively new type of generative model
built on top of large language models (LLMs). A reasoning model generates a
long chain of thought before arriving at a final answer, an approach found to
improve performance on complex tasks. However, details on developing such a
model are limited, especially for reasoning models that can generate traces in
a low-resource language. Typhoon T1 presents an open effort that dives into the
details of developing a reasoning model in a more cost-effective way by
leveraging supervised fine-tuning using open datasets, instead of reinforcement
learning. This paper shares the details about synthetic data generation and
training, as well as our dataset and model weights. Additionally, we provide
insights gained from developing a reasoning model that generalizes across
domains and is capable of generating reasoning traces in a low-resource
language, using Thai as an example. We hope this open effort provides a
foundation for further research in this field.
|
2502.09045
|
Evolution of Data-driven Single- and Multi-Hazard Susceptibility Mapping
and Emergence of Deep Learning Methods
|
cs.CV
|
Data-driven susceptibility mapping of natural hazards has harnessed the
advances in classification methods used on heterogeneous sources represented as
raster images. Susceptibility mapping is an important step towards risk
assessment for any natural hazard. Increasingly, multiple hazards co-occur
spatially, temporally, or both, which calls for an in-depth study on
multi-hazard susceptibility mapping. In recent years, single-hazard
susceptibility mapping algorithms have become well-established and have been
extended to multi-hazard susceptibility mapping. Deep learning is also emerging
as a promising method for single-hazard susceptibility mapping. Here, we
discuss the evolution of methods for a single hazard, their extensions to
multi-hazard maps as a late fusion of decisions, and the use of deep learning
methods in susceptibility mapping. We finally propose a vision for adapting
data fusion strategies in multimodal deep learning to multi-hazard
susceptibility mapping. From the background study of susceptibility methods, we
demonstrate that deep learning models are promising, untapped methods for
multi-hazard susceptibility mapping. Data fusion strategies provide a larger
space of deep learning models applicable to multi-hazard susceptibility
mapping.
|
2502.09046
|
Criteria-Aware Graph Filtering: Extremely Fast Yet Accurate
Multi-Criteria Recommendation
|
cs.IR cs.AI cs.IT cs.LG cs.SI math.IT
|
Multi-criteria (MC) recommender systems, which utilize MC rating information
for recommendation, are increasingly widespread in various e-commerce domains.
However, MC recommendation using training-based collaborative filtering, which
must consider multiple ratings unlike its single-criterion
counterparts, often poses practical challenges in achieving state-of-the-art
performance along with scalable model training. To solve this problem, we
propose CA-GF, a training-free MC recommendation method, which is built upon
criteria-aware graph filtering for efficient yet accurate MC recommendations.
Specifically, first, we construct an item-item similarity graph using an MC
user-expansion graph. Next, we design CA-GF composed of the following key
components, including 1) criterion-specific graph filtering where the optimal
filter for each criterion is found using various types of polynomial low-pass
filters and 2) criteria preference-infused aggregation where the smoothed
signals from each criterion are aggregated. We demonstrate that CA-GF is (a)
efficient: running in under 0.2 seconds even on the largest benchmark dataset, (b)
accurate: outperforming benchmark MC recommendation methods, achieving
substantial accuracy gains up to 24% compared to the best competitor, and (c)
interpretable: providing interpretations for the contribution of each criterion
to the model prediction based on visualizations.
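The two key components, per-criterion polynomial low-pass filtering and preference-weighted aggregation, can be sketched in a few lines. This is a generic graph-filtering illustration with toy data, not the paper's exact normalization or filter family:

```python
import numpy as np

def polynomial_low_pass(R, coeffs):
    """Apply a polynomial graph filter sum_k c_k * R @ P^k, where P is a
    symmetrically normalized item-item similarity built from ratings R."""
    S = R.T @ R                                  # item-item co-occurrence
    d = np.maximum(S.sum(axis=1), 1e-12)
    P = S / np.sqrt(np.outer(d, d))              # symmetric normalization
    out, Rk = np.zeros_like(R, float), R.astype(float)
    for c in coeffs:
        out += c * Rk                            # accumulate c_k * R @ P^k
        Rk = Rk @ P
    return out

# Two criteria (hypothetical binary ratings), smoothed separately and then
# aggregated with per-criterion preference weights -- no training involved.
R_quality = np.array([[1, 1, 0], [0, 1, 1]], float)
R_price = np.array([[1, 0, 0], [0, 0, 1]], float)
pref = [0.7, 0.3]
scores = (pref[0] * polynomial_low_pass(R_quality, [0.0, 1.0])
          + pref[1] * polynomial_low_pass(R_price, [0.0, 1.0]))
```

Because everything reduces to a few sparse matrix products, the whole pipeline is training-free, which is the source of the sub-second runtimes the abstract reports.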
|
2502.09047
|
Optimal Algorithms in Linear Regression under Covariate Shift: On the
Importance of Precondition
|
stat.ML cs.LG
|
A common pursuit in modern statistical learning is to attain satisfactory
generalization outside the source data distribution (out-of-distribution, OOD). In theory, the
challenge remains unsolved even under the canonical setting of covariate shift
for the linear model. This paper studies the foundational (high-dimensional)
linear regression where the ground truth variables are confined to an
ellipse-shape constraint and addresses two fundamental questions in this
regime: (i) given the target covariate matrix, what is the min-max
\emph{optimal} algorithm under covariate shift? (ii) for what kinds of target
classes do the commonly used SGD-type algorithms achieve optimality? Our analysis
starts with establishing a tight lower generalization bound via a Bayesian
Cramer-Rao inequality. For (i), we prove that the optimal estimator can be
simply a certain linear transformation of the best estimator for the source
distribution. Given the source and target matrices, we show that the
transformation can be efficiently computed via a convex program. The min-max
optimality analysis for SGD leverages the idea of viewing both the accumulated
updates of the applied algorithms and the ideal transformation as
preconditioners on the learning variables. We provide sufficient conditions
under which SGD and its acceleration variants attain optimality.
|
2502.09050
|
Leveraging Member-Group Relations via Multi-View Graph Filtering for
Effective Group Recommendation
|
cs.IR cs.AI cs.IT cs.LG cs.SI math.IT
|
Group recommendation aims at providing optimized recommendations tailored to
diverse groups, enabling groups to enjoy appropriate items. On the other hand,
most existing group recommendation methods are built upon deep neural network
(DNN) architectures designed to capture the intricate relationships between
member-level and group-level interactions. While these DNN-based approaches
have proven their effectiveness, they require complex and expensive training
procedures to incorporate group-level interactions in addition to member-level
interactions. To overcome such limitations, we introduce Group-GF, a new
approach for extremely fast recommendations of items to each group via
multi-view graph filtering (GF) that offers a holistic view of complex
member-group dynamics, without the need for costly model training.
Specifically, in Group-GF, we first construct three item similarity graphs
manifesting different viewpoints for GF. Then, we discover a distinct
polynomial graph filter for each similarity graph and judiciously aggregate the
three graph filters. Extensive experiments demonstrate the effectiveness of
Group-GF in terms of significantly reducing runtime and achieving
state-of-the-art recommendation accuracy.
|
2502.09051
|
AIDE: Agentically Improve Visual Language Model with Domain Experts
|
cs.CV cs.AI cs.MA
|
The enhancement of Visual Language Models (VLMs) has traditionally relied on
knowledge distillation from larger, more capable models. This dependence
creates a fundamental bottleneck for improving state-of-the-art systems,
particularly when no superior models exist. We introduce AIDE (Agentic
Improvement through Domain Experts), a novel framework that enables VLMs to
autonomously enhance their capabilities by leveraging specialized domain expert
models. AIDE operates through a four-stage process: (1) identifying instances
for refinement, (2) engaging domain experts for targeted analysis, (3)
synthesizing expert outputs with existing data, and (4) integrating enhanced
instances into the training pipeline. Experiments on multiple benchmarks,
including MMMU, MME, MMBench, etc., demonstrate AIDE's ability to achieve
notable performance gains without relying on larger VLMs or human supervision.
Our framework provides a scalable, resource-efficient approach to continuous
VLM improvement, addressing critical limitations in current methodologies,
and is particularly valuable when larger models are unavailable.
|
2502.09053
|
Game Theory Meets Large Language Models: A Systematic Survey
|
cs.AI cs.GT cs.LG
|
Game theory establishes a fundamental framework for analyzing strategic
interactions among rational decision-makers. The rapid advancement of large
language models (LLMs) has sparked extensive research exploring the
intersection of these two fields. Specifically, game-theoretic methods are
being applied to evaluate and enhance LLM capabilities, while LLMs themselves
are reshaping classic game models. This paper presents a comprehensive survey
of the intersection of these fields, exploring a bidirectional relationship
from three perspectives: (1) Establishing standardized game-based benchmarks
for evaluating LLM behavior; (2) Leveraging game-theoretic methods to improve
LLM performance through algorithmic innovations; (3) Characterizing the
societal impacts of LLMs through game modeling. Among these three aspects, we
also highlight how the equilibrium analysis for traditional game models is
impacted by LLMs' advanced language understanding, which in turn extends the
study of game theory. Finally, we identify key challenges and future research
directions, assessing their feasibility based on the current state of the
field. By bridging theoretical rigor with emerging AI capabilities, this survey
aims to foster interdisciplinary collaboration and drive progress in this
evolving research area.
|
2502.09054
|
Cost-Saving LLM Cascades with Early Abstention
|
cs.AI
|
LLM cascades are based on the idea that processing all queries with the
largest and most expensive LLMs is inefficient. Instead, cascades deploy small
LLMs to answer the majority of queries, limiting the use of large and expensive
LLMs to only the most difficult queries. This approach can significantly reduce
costs without impacting performance. However, risk-sensitive domains such as
finance or medicine place an additional premium on avoiding model errors.
Recognizing that even the most expensive models may make mistakes, applications
in these domains benefit from allowing LLM systems to completely abstain from
answering a query when the chance of making a mistake is significant. However,
giving a cascade the ability to abstain poses an immediate design question for
LLM cascades: should abstention only be allowed at the final model or also at
earlier models? Since the error patterns of small and large models are
correlated, the latter strategy may further reduce inference costs by letting
inexpensive models anticipate abstention decisions by expensive models, thereby
obviating the need to run the expensive models. We investigate the benefits of
"early abstention" in LLM cascades and find that it reduces the overall test
loss by 2.2% on average across six benchmarks (GSM8K, MedMCQA, MMLU, TriviaQA,
TruthfulQA, and XSum). These gains result from a more effective use of
abstention, which trades a 4.1% average increase in the overall abstention rate
for a 13.0% reduction in cost and a 5.0% reduction in error rate. Our findings
demonstrate that it is possible to leverage correlations between the error
patterns of different language models to drive performance improvements for LLM
systems with abstention.
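The early-abstention routing logic described above can be sketched as a simple threshold cascade. The models and thresholds below are hypothetical stand-ins returning (answer, confidence) pairs:

```python
def cascaded_answer(query, models, answer_thr, abstain_thr):
    """Route a query through models ordered small to large. Each stage may
    answer (high confidence), abstain early (very low confidence, since
    small- and large-model errors correlate), or defer to the next model."""
    for model, t_ans, t_abs in zip(models, answer_thr, abstain_thr):
        answer, conf = model(query)
        if conf >= t_ans:
            return answer        # cheap model is confident enough
        if conf <= t_abs:
            return None          # abstain without running larger models
    return None                  # final model still unsure: abstain

# Hypothetical models returning (answer, confidence):
small = lambda q: ("small", 0.9 if "easy" in q
                   else 0.1 if "hopeless" in q else 0.5)
large = lambda q: ("large", 0.8)
route = lambda q: cascaded_answer(q, [small, large], [0.8, 0.7], [0.2, 0.3])
```

Here `route("easy q")` answers at the small model, `route("hopeless q")` abstains before the large model is ever invoked (the cost saving the abstract quantifies), and `route("medium q")` defers to the large model.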
|
2502.09055
|
Exploring the Needs of Practising Musicians in Co-Creative AI Through
Co-Design
|
cs.HC cs.AI
|
Recent advances in generative AI music have resulted in new technologies that
are being framed as co-creative tools for musicians with early work
demonstrating their potential to add to music practice. While the field has
seen many valuable contributions, work that involves practising musicians in
the design and development of these tools is limited, with the majority of work
including them only once a tool has been developed. In this paper, we present a
case study that explores the needs of practising musicians through the
co-design of a musical variation system, highlighting the importance of
involving a diverse range of musicians throughout the design process and
uncovering various design insights. This was achieved through two workshops and
a two-week ecological evaluation, in which musicians from different musical
backgrounds offered valuable insights not only on a musical system's design but
also on how a musical AI could be integrated into their musical practices.
|
2502.09056
|
Adapting Language-Specific LLMs to a Reasoning Model in One Day via
Model Merging -- An Open Recipe
|
cs.CL cs.AI
|
This paper investigates data selection and model merging methodologies aimed
at incorporating advanced reasoning capabilities such as those of DeepSeek R1
into language-specific large language models (LLMs), with a particular focus on
the Thai LLM. Our goal is to enhance the reasoning capabilities of
language-specific LLMs while maintaining their target language abilities.
DeepSeek R1 excels in reasoning but primarily benefits high-resource languages
such as English and Chinese. However, low-resource languages remain underserved
due to the dominance of English-centric training data and model optimizations,
which limit performance in these languages. This limitation results in
unreliable code-switching and diminished effectiveness on tasks in low-resource
languages. Meanwhile, local and regional LLM initiatives have attempted to
bridge this gap by developing language-specific LLMs that focus on improving
local linguistic fidelity. We demonstrate that, with only publicly available
datasets and a computational budget of $120, it is possible to enhance the
reasoning capabilities of language-specific LLMs to match the level of DeepSeek
R1, without compromising their performance on target language tasks.
|
2502.09057
|
Vision-Language In-Context Learning Driven Few-Shot Visual Inspection
Model
|
cs.CV
|
We propose a general visual inspection model using a Vision-Language
Model~(VLM) with few-shot images of non-defective or defective products, along
with explanatory texts that serve as inspection criteria. Although existing
VLMs exhibit high performance across various tasks, they are not trained on
specific tasks such as visual inspection. Thus, we construct a dataset
consisting of diverse images of non-defective and defective products collected
from the web, along with output text in a unified format, and fine-tune the
VLM. For new products,
our method employs In-Context Learning, which allows the model to perform
inspections with an example non-defective or defective image and the
corresponding explanatory texts with visual prompts. This approach eliminates
the need to collect a large number of training samples and re-train the model
for each product. The experimental results show that our method achieves high
performance, with MCC of 0.804 and F1-score of 0.950 on MVTec AD in a one-shot
manner. Our code is available
at~https://github.com/ia-gu/Vision-Language-In-Context-Learning-Driven-Few-Shot-Visual-Inspection-Model.
|
2502.09058
|
Unleashing the Power of Large Language Model for Denoising
Recommendation
|
cs.IR
|
Recommender systems are crucial for personalizing user experiences but often
depend on implicit feedback data, which can be noisy and misleading. Existing
denoising studies involve incorporating auxiliary information or learning
strategies from interaction data. However, they struggle with the inherent
limitations of external knowledge and interaction data, as well as the
non-universality of certain predefined assumptions, hindering accurate noise
identification. Recently, large language models (LLMs) have gained attention
for their extensive world knowledge and reasoning abilities, yet their
potential in enhancing denoising in recommendations remains underexplored. In
this paper, we introduce LLaRD, a framework leveraging LLMs to improve
denoising in recommender systems, thereby boosting overall recommendation
performance. Specifically, LLaRD generates denoising-related knowledge by first
enriching semantic insights from observational data via LLMs and inferring
user-item preference knowledge. It then employs a novel Chain-of-Thought (CoT)
technique over user-item interaction graphs to reveal relation knowledge for
denoising. Finally, it applies the Information Bottleneck (IB) principle to
align LLM-generated denoising knowledge with recommendation targets, filtering
out noise and irrelevant LLM knowledge. Empirical results demonstrate LLaRD's
effectiveness in enhancing denoising and recommendation accuracy.
|
2502.09060
|
Anchor Sponsor Firms in Open Source Software Ecosystems
|
cs.SE cs.CY cs.SI
|
Firms are intensifying their involvement with open source software (OSS),
going beyond contributing to individual projects and releasing their own core
technologies as OSS. These technologies, from web frameworks to programming
languages, are the foundations of large and growing ecosystems. Yet we know
little about how these anchor sponsors shape the behavior of OSS contributors.
We examine Mozilla Corporation's role as incubator and anchor sponsor in the
Rust programming language ecosystem, leveraging data on nearly 30,000
developers and 40,000 OSS projects from 2015 to 2022. When Mozilla abruptly
exited Rust in August 2020, event-study models estimate a negative impact on
ecosystem activity: a 9\% immediate drop in weekly commits and a 0.6 percentage
point decline in trend. We observe an asymmetry in the shock's effects: former
Mozilla developers and close collaborators continued contributing relatively
quickly, whereas more distant developers showed reduced or ceased activity even
six months later. An agent-based model of an OSS ecosystem with an anchor
sponsor replicates these patterns. We also find a marked slowdown in new
developers and projects entering Rust post-shock. Our results suggest that
Mozilla served as a critical signal of Rust's quality and stability. Once
withdrawn, newcomers and less-embedded developers were the most discouraged,
raising concerns about long-term ecosystem sustainability.
|
2502.09061
|
CRANE: Reasoning with constrained LLM generation
|
cs.PL cs.LG
|
Code generation, symbolic math reasoning, and other tasks require LLMs to
produce outputs that are both syntactically and semantically correct.
Constrained LLM generation is a promising direction to enforce adherence to
formal grammar, but prior works have empirically observed that strict
enforcement of formal constraints often diminishes the reasoning capabilities
of LLMs. In this work, we first provide a theoretical explanation for why
constraining LLM outputs to very restrictive grammars that only allow
syntactically valid final answers reduces the reasoning capabilities of the
model. Second, we demonstrate that by augmenting the output grammar with
carefully designed additional rules, it is always possible to preserve the
reasoning capabilities of the LLM while ensuring syntactic and semantic
correctness in its outputs. Building on these theoretical insights, we propose
a reasoning-augmented constrained decoding algorithm, CRANE, which effectively
balances the correctness of constrained generation with the flexibility of
unconstrained generation. Experiments on multiple open-source LLMs and
benchmarks show that CRANE significantly outperforms both state-of-the-art
constrained decoding strategies and standard unconstrained decoding, showing up
to a 10 percentage point accuracy improvement over baselines on the challenging
symbolic reasoning benchmarks GSM-Symbolic and FOLIO.
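The core mechanics of constrained decoding, and why an overly strict grammar hurts, can be shown in miniature. The vocabulary, logits, and allowed sets below are toy illustrations, not CRANE's actual grammar machinery:

```python
import math

def constrained_greedy_step(logits, vocab, allowed):
    """One constrained decoding step: tokens outside the grammar's current
    allowed set get -inf logits before the greedy pick."""
    masked = [l if t in allowed else -math.inf
              for t, l in zip(vocab, logits)]
    return vocab[max(range(len(vocab)), key=masked.__getitem__)]

# Toy position where the answer grammar admits only a digit. A strict
# grammar forces the digit immediately; a CRANE-style augmented grammar
# also admits a reasoning token so the model can think before answering.
vocab = ["<think>", "7", "cat"]
logits = [2.0, 1.5, 3.0]            # the model would prefer to reason
strict = constrained_greedy_step(logits, vocab, {"7"})
augmented = constrained_greedy_step(logits, vocab, {"7", "<think>"})
```

Under the strict grammar the model is forced straight to "7" with no room for intermediate reasoning; the augmented allowed set lets it emit the reasoning token first while still ruling out syntactically invalid outputs like "cat".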
|
2502.09064
|
StyleBlend: Enhancing Style-Specific Content Creation in Text-to-Image
Diffusion Models
|
cs.CV
|
Synthesizing visually impressive images that seamlessly align both text
prompts and specific artistic styles remains a significant challenge in
Text-to-Image (T2I) diffusion models. This paper introduces StyleBlend, a
method designed to learn and apply style representations from a limited set of
reference images, enabling the synthesis of content that is both text-aligned
and stylistically coherent. Our approach decomposes style into two
components, composition and texture, each learned through different strategies.
We then leverage two synthesis branches, each focusing on a corresponding style
component, to facilitate effective style blending through shared features
without affecting content generation. StyleBlend addresses the common issues of
text misalignment and weak style representation that previous methods have
struggled with. Extensive qualitative and quantitative comparisons demonstrate
the superiority of our approach.
|
2502.09065
|
Lowering the Error Floor of Error Correction Code Transformer
|
cs.IT math.IT
|
With the success of transformer architectures across diverse applications,
the error correction code transformer (ECCT) has gained significant attention
for its superior decoding performance. Despite these advantages, the error
floor phenomenon in ECCT decoding has remained unexplored. We present the first
investigation of the error floor issue in ECCT and propose a hybrid decoding
approach that integrates hard decision decoders as pre- and post-decoders with
ECCT to effectively lower the error floor. In particular, we introduce a novel
loss function for ECCT that considers the dynamics of the hybrid decoding
algorithm. Training ECCT with the proposed loss function enhances its ability
to correct specific error patterns by taking into account its interaction with
the auxiliary decoders. Simulation results demonstrate that the proposed hybrid
decoder with the novel loss function significantly outperforms the original
ECCT in both the waterfall and the error floor regions.
|
2502.09067
|
FlowAR: A Unified Platform for Human Activity Recognition from Binary Sensors
|
cs.LG
|
This demo showcases FlowAR, a platform for developing human activity
recognition (AR) systems, with a focus on recognizing daily activities from
sensor data such as binary sensors. Taking a data-driven approach, the
platform features a three-step pipeline (flow): data cleaning, segmentation,
and personalized classification. Its modular design makes it easy to swap in
methods and datasets and to ensure rigorous evaluations. A concrete use case
demonstrates its effectiveness.
|
2502.09073
|
Enhancing RAG with Active Learning on Conversation Records: Reject
Incapables and Answer Capables
|
cs.CL
|
Retrieval-augmented generation (RAG) is a key technique for leveraging
external knowledge and reducing hallucinations in large language models (LLMs).
However, RAG still struggles to fully prevent hallucinated responses. To
address this, it is essential to identify samples prone to hallucination or
guide LLMs toward correct responses, which experts then annotate to develop
high-quality datasets for refining LLMs. However, such datasets are scarce
and costly to create. This paper proposes using the vast
amount of conversations from widespread LLM usage to build these datasets,
training LLMs to avoid hallucination-prone questions while accurately
responding to manageable ones. Given the impracticality of expert-annotating
all conversation records, the paper introduces AL4RAG, which uses active
learning to select the most suitable conversation samples for annotation,
optimizing performance within an annotation budget. Additionally, recognizing
that traditional active learning methods are not fully compatible with RAG due
to unsuitable distance metrics, we develop a novel sample distance measurement
for RAG active learning. Extensive experiments show that our method
consistently outperforms baselines across multiple metrics.
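Budget-constrained sample selection of the kind AL4RAG performs can be sketched with a greedy k-center rule; the Euclidean distance here is a placeholder for the paper's RAG-specific metric, and the embeddings are made up:

```python
# Greedy k-center selection: repeatedly annotate the record farthest from
# everything chosen so far, until the annotation budget is spent. The
# distance function and toy embeddings are illustrative placeholders for
# the paper's RAG-specific sample distance.
import math

def dist(a, b):
    return math.dist(a, b)  # stand-in for a learned/RAG-aware metric

def select_for_annotation(embeddings, budget):
    chosen = [0]  # seed with the first record
    while len(chosen) < budget:
        best, best_d = None, -1.0
        for i in range(len(embeddings)):
            if i in chosen:
                continue
            # distance to the closest already-chosen record
            d = min(dist(embeddings[i], embeddings[j]) for j in chosen)
            if d > best_d:
                best, best_d = i, d
        chosen.append(best)
    return chosen

# toy 2D embeddings of five conversation records
records = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0), (0.0, 9.0)]
print(select_for_annotation(records, 3))  # picks one record per cluster
```

The near-duplicate records (indices 0/1 and 2/3) are never both selected, which is the diversity behavior an annotation budget is meant to buy.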
|
2502.09075
|
PTZ-Calib: Robust Pan-Tilt-Zoom Camera Calibration
|
cs.CV
|
In this paper, we present PTZ-Calib, a robust two-stage PTZ camera
calibration method that efficiently and accurately estimates camera parameters
for arbitrary viewpoints. Our method includes an offline and an online stage.
In the offline stage, we first uniformly select a set of reference images that
sufficiently overlap to encompass a complete 360{\deg} view. We then utilize
the novel PTZ-IBA (PTZ Incremental Bundle Adjustment) algorithm to
automatically calibrate the cameras within a local coordinate system.
Additionally, for practical application, we can further optimize camera
parameters and align them with the geographic coordinate system using extra
global reference 3D information. In the online stage, we formulate the
calibration of any new viewpoints as a relocalization problem. Our approach
balances the accuracy and computational efficiency to meet real-world demands.
Extensive evaluations demonstrate our robustness and superior performance over
state-of-the-art methods on various real and synthetic datasets. Datasets and
source code can be accessed online at https://github.com/gjgjh/PTZ-Calib
|
2502.09079
|
Quantifying Cryptocurrency Unpredictability: A Comprehensive Study of
Complexity and Forecasting
|
q-fin.ST cs.LG q-fin.CP
|
This paper offers a thorough examination of the univariate predictability in
cryptocurrency time-series. By exploiting a combination of complexity measures
and model predictions, we explore the cryptocurrency time-series forecasting
task, focusing on the USD exchange rates of Litecoin, Binance Coin, Bitcoin,
Ethereum, and XRP. On one hand, to assess the complexity and the randomness of
these time-series, a comparative analysis has been performed using Brownian and
colored noises as a benchmark. The results obtained from the Complexity-Entropy
causality plane and power density spectrum analysis reveal that cryptocurrency
time-series exhibit characteristics closely resembling those of Brownian noise
when analyzed in a univariate context. On the other hand, the application of a
wide range of statistical, machine and deep learning models for time-series
forecasting demonstrates the low predictability of cryptocurrencies. Notably,
our analysis reveals that simple approaches such as the naive forecast
consistently outperform more complex machine and deep learning models in terms of
forecasting accuracy across different forecast horizons and time windows. The
combined study of complexity and forecasting accuracies highlights the
difficulty of predicting the cryptocurrency market. These findings provide
valuable insights into the inherent characteristics of the cryptocurrency data
and highlight the need to reassess the challenges associated with predicting
cryptocurrency's price movements.
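The naive-baseline finding is easy to reproduce in miniature on a synthetic random walk, the regime the complexity analysis says crypto prices resemble; the series, seed, and window size here are illustrative, not the paper's data:

```python
# Minimal illustration of why naive baselines are hard to beat on
# Brownian-like series: on a synthetic random walk, the naive (last-value)
# forecast beats a rolling-mean forecast. Series and window are made up.
import random

random.seed(0)
price = [100.0]
for _ in range(500):
    price.append(price[-1] + random.gauss(0, 1))  # random walk

def mae(forecasts, actuals):
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)

window = 20
naive = price[window - 1:-1]                  # y_hat[t] = y[t-1]
rolling = [sum(price[t - window:t]) / window  # y_hat[t] = mean of last 20
           for t in range(window, len(price))]
actual = price[window:]

print(f"naive MAE:   {mae(naive, actual):.3f}")
print(f"rolling MAE: {mae(rolling, actual):.3f}")
```

For a random walk the one-step-ahead error of the naive forecast is just the innovation noise, while any smoothing over past values lags the series, which is the intuition behind the paper's low-predictability result.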
|
2502.09080
|
BevSplat: Resolving Height Ambiguity via Feature-Based Gaussian
Primitives for Weakly-Supervised Cross-View Localization
|
cs.CV
|
This paper addresses the problem of weakly supervised cross-view
localization, where the goal is to estimate the pose of a ground camera
relative to a satellite image with noisy ground truth annotations. A common
approach to bridge the cross-view domain gap for pose estimation is Bird's-Eye
View (BEV) synthesis. However, existing methods struggle with height ambiguity
due to the lack of depth information in ground images and satellite height
maps. Previous solutions either assume a flat ground plane or rely on complex
models, such as cross-view transformers. We propose BevSplat, a novel method
that resolves height ambiguity by using feature-based Gaussian primitives. Each
pixel in the ground image is represented by a 3D Gaussian with semantic and
spatial features, which are synthesized into a BEV feature map for relative
pose estimation. Additionally, to address challenges with panoramic query
images, we introduce an icosphere-based supervision strategy for the Gaussian
primitives. We validate our method on the widely used KITTI and VIGOR datasets,
which include both pinhole and panoramic query images. Experimental results
show that BevSplat significantly improves localization accuracy over prior
approaches.
|