| id | title | categories | abstract |
|---|---|---|---|
2501.14163
|
Reddit Rules and Rulers: Quantifying the Link Between Rules and
Perceptions of Governance across Thousands of Communities
|
cs.SI cs.CY cs.HC
|
Rules are a critical component of the functioning of nearly every online
community, yet it is challenging for community moderators to make data-driven
decisions about what rules to set for their communities. The connection between
a community's rules and how its membership feels about its governance is not
well understood. In this work, we conduct the largest-to-date analysis of rules
on Reddit, collecting a set of 67,545 unique rules across 5,225 communities
which collectively account for more than 67% of all content on Reddit. More
than just a point-in-time study, our work measures how communities change their
rules over a 5+ year period. We develop a method to classify these rules using
a taxonomy of 17 key attributes extended from previous work. We assess what
types of rules are most prevalent, how rules are phrased, and how they vary
across communities of different types. Using a dataset of communities'
discussions about their governance, we are the first to identify the rules most
strongly associated with positive community perceptions of governance: rules
addressing who participates, how content is formatted and tagged, and rules
about commercial activities. We conduct a longitudinal study to quantify the
impact of adding new rules to communities, finding that after a rule is added,
community perceptions of governance immediately improve, yet this effect
diminishes after six months. Our results have important implications for
platforms, moderators, and researchers. We make our classification model and
rules datasets public to support future research on this topic.
|
2501.14164
|
WaveMax: Radar Waveform Design via Convex Maximization of FrFT Phase
Retrieval
|
eess.SP cs.IT math.IT
|
The ambiguity function (AF) is a critical tool in radar waveform design,
representing the two-dimensional correlation between a transmitted signal and
its time-delayed, frequency-shifted version. Obtaining a radar signal to match
a specified AF magnitude is a bi-variate variant of the well-known phase
retrieval problem. Prior approaches to this problem were either limited to a
few classes of waveforms or lacked a computable procedure to estimate the
signal. Our recent work provided a framework for solving this problem for both
band- and time-limited signals using non-convex optimization. In this paper, we
introduce a novel approach WaveMax that formulates waveform recovery as a
convex optimization problem by relying on the fractional Fourier transform
(FrFT)-based AF. We exploit the fact that AF of the FrFT of the original signal
is equivalent to a rotation of the original AF. In particular, we reconstruct
the radar signal by solving a low-rank minimization problem, which approximates
the waveform using the leading eigenvector of a matrix derived from the AF. Our
theoretical analysis shows that unique waveform reconstruction is achievable
with a sample size of no more than three times the number of signal frequencies
or time samples. Numerical experiments validate the efficacy of WaveMax in recovering
signals from noiseless and noisy AF, including scenarios with randomly and
uniformly sampled sparse data.
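The eigenvector step can be sketched in isolation. The snippet below is our illustration, not WaveMax itself: a synthetic rank-one matrix `M` stands in for the matrix derived from the FrFT-based ambiguity function, and the waveform is recovered, up to a global phase, as the scaled leading eigenvector.

```python
import numpy as np

# Illustrative sketch only: in WaveMax the matrix is derived from the
# FrFT-based ambiguity function; here M is a synthetic rank-one stand-in.
rng = np.random.default_rng(0)
n = 8
x_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
M = np.outer(x_true, x_true.conj())        # rank-one "lifted" signal matrix

# Leading eigenpair of the Hermitian matrix (eigh sorts eigenvalues ascending).
vals, vecs = np.linalg.eigh(M)
x_hat = np.sqrt(vals[-1]) * vecs[:, -1]    # scale by sqrt of the top eigenvalue

# Phase retrieval recovers the signal only up to a global phase; align it.
phase = np.vdot(x_hat, x_true)
phase /= np.abs(phase)
print(np.allclose(x_hat * phase, x_true))  # → True
```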
|
2501.14165
|
LoCoML: A Framework for Real-World ML Inference Pipelines
|
cs.SE cs.AI
|
The widespread adoption of machine learning (ML) has brought forth diverse
models with varying architectures and data requirements, introducing new
challenges in integrating these systems into real-world applications.
Traditional solutions often struggle to manage the complexities of connecting
heterogeneous models, especially when dealing with varied technical
specifications. These limitations are amplified in large-scale, collaborative
projects where stakeholders contribute models with different technical
specifications. To address these challenges, we developed LoCoML, a low-code
framework designed to simplify the integration of diverse ML models within the
context of the \textit{Bhashini Project} - a large-scale initiative aimed at
integrating AI-driven language technologies such as automatic speech
recognition, machine translation, text-to-speech, and optical character
recognition to support seamless communication across more than 20 languages.
Initial evaluations show that LoCoML introduces only minimal computational
overhead, making it efficient and effective for large-scale ML integration. Our
experience indicates that a low-code approach is a viable solution for
connecting multiple ML models in a collaborative environment.
|
2501.14166
|
Enhancing Multimodal Entity Linking with Jaccard Distance-based
Conditional Contrastive Learning and Contextual Visual Augmentation
|
cs.CV cs.AI
|
Previous research on multimodal entity linking (MEL) has primarily employed
contrastive learning as the training objective. However, by using the rest of
the batch as negative samples without careful consideration, these studies risk
exploiting easy features and overlooking the essential details that make
entities unique. In this work, we propose JD-CCL (Jaccard Distance-based
Conditional Contrastive Learning), a novel approach designed to enhance the
matching ability of multimodal entity linking models. JD-CCL leverages
meta-information to select negative samples with similar attributes, making the
linking task more challenging and robust. Additionally, to address the
limitations caused by the variations within the visual modality among mentions
and entities, we introduce a novel method, CVaCPT (Contextual Visual-aid
Controllable Patch Transform). It enhances visual representations by
incorporating multi-view synthetic images and contextual textual
representations to scale and shift patch representations. Experimental results
on benchmark MEL datasets demonstrate the strong effectiveness of our approach.
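The negative-selection idea can be sketched with plain sets. The example below is hypothetical (entity names and attributes are invented, and the real method operates on learned representations): Jaccard distance over meta-attributes ranks candidates so that the most similar ones become the hard negatives.

```python
def jaccard_distance(a: set, b: set) -> float:
    """1 - |A ∩ B| / |A ∪ B|; 0 means identical attribute sets."""
    union = a | b
    return 1.0 - len(a & b) / len(union) if union else 0.0

def hard_negatives(anchor_attrs, candidates, k=2):
    """Pick the k candidates closest to the anchor (smallest Jaccard distance)."""
    ranked = sorted(candidates.items(),
                    key=lambda kv: jaccard_distance(anchor_attrs, kv[1]))
    return [name for name, _ in ranked[:k]]

# Invented toy entities: similar attribute sets make harder negatives.
anchor = {"person", "politician", "usa"}
pool = {
    "e1": {"person", "politician", "uk"},   # similar → hard negative
    "e2": {"person", "actor", "usa"},       # similar → hard negative
    "e3": {"building", "museum"},           # dissimilar → easy negative
}
print(hard_negatives(anchor, pool))  # → ['e1', 'e2']
```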
|
2501.14170
|
Argos: Agentic Time-Series Anomaly Detection with Autonomous Rule
Generation via Large Language Models
|
cs.LG cs.DC cs.MA
|
Observability in cloud infrastructure is critical for service providers,
driving the widespread adoption of anomaly detection systems for monitoring
metrics. However, existing systems often struggle to simultaneously achieve
explainability, reproducibility, and autonomy, which are three indispensable
properties for production use. We introduce Argos, an agentic system for
detecting time-series anomalies in cloud infrastructure by leveraging large
language models (LLMs). Argos proposes to use explainable and reproducible
anomaly rules as intermediate representation and employs LLMs to autonomously
generate such rules. Argos efficiently trains error-free, accuracy-guaranteed
anomaly rules through multiple collaborative agents and deploys them for
low-cost online anomaly detection. Our evaluation demonstrates that Argos
outperforms state-of-the-art
methods, increasing $F_1$ scores by up to $9.5\%$ and $28.3\%$ on public
anomaly detection datasets and an internal dataset collected from Microsoft,
respectively.
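As a toy illustration of what an explainable, reproducible anomaly rule can look like (the rule below is invented, not one generated by Argos), the intermediate representation is just a plain, auditable predicate over a metric window:

```python
def cpu_spike_rule(window):
    """Flag an anomaly when the latest value exceeds 3x the mean of the
    preceding values in the window."""
    prior = window[:-1]
    mean = sum(prior) / len(prior)
    return mean > 0 and window[-1] > 3 * mean

print(cpu_spike_rule([10, 12, 11, 90]))  # → True  (90 > 3 * 11)
print(cpu_spike_rule([10, 12, 11, 13]))  # → False
```

Because the rule is ordinary code, it is trivially reproducible, and its decision can be explained by inspecting the values it compares.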
|
2501.14171
|
Fully Guided Neural Schr\"odinger bridge for Brain MR image synthesis
|
eess.IV cs.CV
|
Multi-modal brain MRI provides essential complementary information for
clinical diagnosis. However, acquiring all modalities is often challenging due
to time and cost constraints. To address this, various methods have been
proposed to generate missing modalities from available ones. Traditional
approaches can be broadly categorized into two main types: paired and unpaired
methods. While paired methods offer superior performance, obtaining large-scale
paired datasets is challenging in real-world scenarios. Conversely, unpaired
methods facilitate large-scale data collection but struggle to preserve
critical image features, such as tumors. In this paper, we propose Fully Guided
Schr\"odinger Bridges (FGSB), a novel framework based on Neural Schr\"odinger
Bridges, to overcome these limitations. FGSB achieves stable, high-quality
generation of missing modalities using minimal paired data. Furthermore, when
provided with ground truth or a segmentation network for specific regions, FGSB
can generate missing modalities while preserving these critical areas with
reduced data requirements. Our proposed model consists of two consecutive
phases: 1) a generation phase, which fuses a generated image, a paired
reference image, and Gaussian noise, employing iterative refinement to mitigate
issues such as mode collapse and improve generation quality; and 2) a training
phase, which learns the mapping from the generated image to the target modality. Experiments
demonstrate that FGSB achieves comparable generation performance to methods
trained on large datasets, while using data from only two subjects. Moreover,
the utilization of lesion information with FGSB significantly enhances its
ability to preserve crucial lesion features.
|
2501.14172
|
UltraLightSqueezeNet: A Deep Learning Architecture for Malaria
Classification with up to 54x fewer trainable parameters for resource
constrained devices
|
cs.LG cs.AI cs.CV
|
Lightweight deep learning approaches for malaria detection have gained
attention for their potential to enhance diagnostics in resource constrained
environments. For our study, we selected SqueezeNet1.1 as it is one of the most
popular lightweight architectures. SqueezeNet1.1 is a later version of
SqueezeNet1.0 and is 2.4 times more computationally efficient than the original
model. We proposed and implemented three ultra-lightweight variants of the
SqueezeNet1.1 architecture, namely Variant 1 (one fire module), Variant 2 (two
fire modules), and Variant 3 (four fire modules), all of which are even more
compact than SqueezeNet1.1 itself (eight fire modules). These models were
implemented to evaluate the best performing variant that achieves superior
computational efficiency without sacrificing accuracy in malaria blood cell
classification. The models were trained and evaluated using the NIH Malaria
dataset. We assessed each model's performance based on metrics including
accuracy, recall, precision, F1-score, and Area Under the Curve (AUC). The
results show that the SqueezeNet1.1 model achieves the highest performance
across all metrics, with a classification accuracy of 97.12%. Variant 3 (four
fire modules) offers a competitive alternative, delivering almost identical
results (accuracy 96.55%) with a 6x reduction in computational overhead
compared to SqueezeNet1.1. Variant 2 and Variant 1 perform slightly lower than
Variant 3, with Variant 2 (two fire modules) reducing computational overhead by
28x, and Variant 1 (one fire module) achieving a 54x reduction in trainable
parameters compared to SqueezeNet1.1. These findings demonstrate that our
SqueezeNet1.1 architecture variants provide a flexible approach to malaria
detection, enabling the selection of a variant that balances resource
constraints and performance.
|
2501.14173
|
Constrained Fuel and Time Optimal 6DOF Powered Descent Guidance Using
Indirect Optimization
|
math.OC cs.SY eess.SY
|
Powered descent guidance (PDG) problems subject to six-degrees-of-freedom
(6DOF) dynamics allow for enforcement of practical attitude constraints.
However, numerical solutions to 6DOF PDG problems are challenging due to fast
rotational dynamics coupled with translational dynamics, and the presence of
highly nonlinear state/control path inequality constraints. In this work,
constrained fuel- and time-optimal 6DOF PDG problems are solved leveraging a
regularized indirect method, subject to inequality constraints on the thrust
magnitude, thruster gimbal angle, rocket tilt angle, glideslope angle, and
angular velocity magnitude. To overcome the challenges associated with solving
the resulting multipoint boundary-value problems (MPBVPs), the state-only path
inequality constraints (SOPICs) are enforced through an interior penalty
function method, which embeds the resulting MPBVPs into multi-parameter
smooth neighboring families of two-point BVPs. Extremal solutions are obtained
using an indirect multiple-shooting solution method with numerical
continuation. Moreover, an empirical relation is derived for the
directly-adjoined Lagrange multipliers associated with SOPICs. The fuel- and
time-optimal trajectories are compared against solutions of DIDO -- a capable
pseudospectral-based software for solving practical constrained optimal control
problems.
|
2501.14174
|
Dreamweaver: Learning Compositional World Representations from Pixels
|
cs.CV cs.AI cs.LG
|
Humans have an innate ability to decompose their perceptions of the world
into objects and their attributes, such as colors, shapes, and movement
patterns. This cognitive process enables us to imagine novel futures by
recombining familiar concepts. However, replicating this ability in artificial
intelligence systems has proven challenging, particularly when it comes to
modeling videos into compositional concepts and generating unseen, recomposed
futures without relying on auxiliary data, such as text, masks, or bounding
boxes. In this paper, we propose Dreamweaver, a neural architecture designed to
discover hierarchical and compositional representations from raw videos and
generate compositional future simulations. Our approach leverages a novel
Recurrent Block-Slot Unit (RBSU) to decompose videos into their constituent
objects and attributes. In addition, Dreamweaver uses a multi-future-frame
prediction objective to capture disentangled representations for dynamic
concepts more effectively as well as static concepts. In experiments, we
demonstrate our model outperforms current state-of-the-art baselines for world
modeling when evaluated under the DCI framework across multiple datasets.
Furthermore, we show how the modularized concept representations of our model
enable compositional imagination, allowing the generation of novel videos by
recombining attributes from different objects.
|
2501.14175
|
Cybersecurity Assessment of Smart Grid Exposure Using a Machine Learning
Based Approach
|
cs.LG cs.CR
|
Disturbances to the stable and normal operation of power systems have grown
phenomenally, particularly unauthorized access to confidential and critical
data, injection of malicious software, and exploitation of security
vulnerabilities in poorly patched software. Developing, as a countermeasure,
assessment solutions with machine learning capabilities that keep pace in real
time with the growth of these cyber-attacks is therefore not only critical to
the security, reliability, and safe operation of power systems, but also
germane to guaranteeing advanced monitoring and efficient threat detection.
Using the Mississippi State University and Oak Ridge National Laboratory
dataset, the study applied an XGB Classifier modeling approach to diagnose and
classify power system disturbances as Attack Events, Natural Events, or
No-Events. Test results show that, across all three sub-datasets, the model
performs well on all metrics, accurately identifying and classifying all three
power system event types.
|
2501.14176
|
RL + Transformer = A General-Purpose Problem Solver
|
cs.LG cs.AI
|
What if artificial intelligence could not only solve problems for which it
was trained but also learn to teach itself to solve new problems (i.e.,
meta-learn)? In this study, we demonstrate that a pre-trained transformer
fine-tuned with reinforcement learning over multiple episodes develops the
ability to solve problems that it has never encountered before - an emergent
ability called In-Context Reinforcement Learning (ICRL). This powerful
meta-learner not only excels in solving unseen in-distribution environments
with remarkable sample efficiency, but also shows strong performance in
out-of-distribution environments. In addition, we show that it exhibits
robustness to the quality of its training data, seamlessly stitches together
behaviors from its context, and adapts to non-stationary environments. These
behaviors demonstrate that an RL-trained transformer can iteratively improve
upon its own solutions, making it an excellent general-purpose problem solver.
|
2501.14182
|
Post-hoc Spurious Correlation Neutralization with Single-Weight
Fictitious Class Unlearning
|
cs.CV
|
Neural network training tends to exploit the simplest features as shortcuts
to greedily minimize training loss. However, some of these features might be
spuriously correlated with the target labels, leading to incorrect predictions
by the model. Several methods have been proposed to address this issue.
By focusing on suppressing spurious correlations during model training, these
methods not only incur additional training cost, but also have limited
practical utility, as model misbehavior due to spurious relations is usually
discovered only after deployment. It is also often overlooked that spuriousness
is a subjective notion. Hence, the precise questions to investigate are: to
what degree a feature is spurious, and how the model's attention can be
proportionally diverted from it for reliable prediction. To this end, we propose a
method that enables post-hoc neutralization of spurious feature impact,
controllable to an arbitrary degree. We conceptualize spurious features as
fictitious sub-classes within the original classes, which can be eliminated by
a class removal scheme. We then propose a unique precise class removal
technique that employs a single-weight modification, which entails negligible
performance compromise for the remaining classes. We perform extensive
experiments, demonstrating that by editing just a single weight in a post-hoc
manner, our method achieves performance that is highly competitive with, or
better than, state-of-the-art methods.
|
2501.14183
|
VarDrop: Enhancing Training Efficiency by Reducing Variate Redundancy in
Periodic Time Series Forecasting
|
cs.LG cs.AI
|
Variate tokenization, which independently embeds each variate as separate
tokens, has achieved remarkable improvements in multivariate time series
forecasting. However, employing self-attention with variate tokens incurs a
quadratic computational cost with respect to the number of variates, thus
limiting its training efficiency for large-scale applications. To address this
issue, we propose VarDrop, a simple yet efficient strategy that reduces the
token usage by omitting redundant variate tokens during training. VarDrop
adaptively excludes redundant tokens within a given batch, thereby reducing the
number of tokens used for dot-product attention while preserving essential
information. Specifically, we introduce k-dominant frequency hashing (k-DFH),
which utilizes the ranked dominant frequencies in the frequency domain as a
hash value to efficiently group variate tokens exhibiting similar periodic
behaviors. Then, only representative tokens in each group are sampled through
stratified sampling. By performing sparse attention with these selected tokens,
the computational cost of scaled dot-product attention is significantly
alleviated. Experiments conducted on public benchmark datasets demonstrate that
VarDrop outperforms existing efficient baselines.
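Our reading of the k-DFH step can be sketched as follows (illustrative only, not the authors' implementation): each variate is hashed by the indices of its k largest FFT magnitudes, and variates sharing a hash are grouped so that only representatives need attend.

```python
import numpy as np

def kdfh_groups(x: np.ndarray, k: int = 2):
    """x: (num_variates, seq_len). Group variate indices by the tuple of
    their k ranked dominant (non-DC) frequency bins."""
    spec = np.abs(np.fft.rfft(x, axis=1))[:, 1:]      # drop the DC bin
    topk = np.argsort(spec, axis=1)[:, -k:][:, ::-1]  # ranked dominant freqs
    groups = {}
    for i, key in enumerate(map(tuple, topk)):
        groups.setdefault(key, []).append(i)
    return groups

t = np.arange(64)
x = np.stack([
    np.sin(2 * np.pi * 4 * t / 64),       # 4 cycles
    2 * np.sin(2 * np.pi * 4 * t / 64),   # same period, different amplitude
    np.sin(2 * np.pi * 9 * t / 64),       # 9 cycles
])
groups = kdfh_groups(x, k=1)
print(groups)  # variates 0 and 1 hash together; variate 2 is separate
```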
|
2501.14184
|
Tight Sample Complexity Bounds for Parameter Estimation Under Quantum
Differential Privacy for Qubits
|
quant-ph cs.CR cs.IT math.IT
|
This short note provides tight upper and lower bounds for minimal number of
samples (copies of quantum states) required to attain a prescribed accuracy
(measured by error variance) for scalar parameters using unbiased estimators
under quantum local differential privacy for qubits. In the small privacy
budget $\epsilon$ regime, i.e., $\epsilon\ll 1$, the sample complexity scales
as $\Theta(\epsilon^{-2})$. This bound matches that of classical parameter
estimation under differential privacy. The lower bound loosens (converges to
zero) in the large privacy budget regime, i.e., $\epsilon\gg 1$, but that case
is not particularly interesting as tight bounds for parameter estimation in the
noiseless case are widely known. That being said, extensions to systems with
higher dimensions and tightening the bounds for the large privacy budget regime
are interesting avenues for future research.
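The stated small-budget scaling $\Theta(\epsilon^{-2})$ has a simple operational reading: halving the privacy budget quadruples the number of copies needed. A one-line numeric check, with an arbitrary constant chosen only for illustration:

```python
def samples_needed(eps: float, C: float = 1.0) -> float:
    """Small-eps sample-complexity scaling n = C / eps^2 (C is arbitrary)."""
    return C / eps**2

print(samples_needed(0.5), samples_needed(0.25))  # → 4.0 16.0
```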
|
2501.14186
|
GeoSim.AI: AI assistants for numerical simulations in geomechanics
|
cs.CE
|
The ability to accomplish tasks via natural language instructions is one of
the most efficient forms of interaction between humans and technology. This
efficiency has been translated into practical applications with generative AI
tools now allowing users to get things done through natural language queries.
The emergence of advanced Large Language Models (LLMs) marks a pivotal shift in
this direction. With ongoing advancements in the field of generative AI,
integrating natural language commands into sophisticated technical fields in
science and engineering is becoming increasingly feasible. This paper
introduces GeoSim.AI - a suite of AI assistants for numerical simulations in
geomechanics - thereby demonstrating the transformative potential of generative
AI in geotechnical engineering. We investigate how AI assistants powered by
LLMs can streamline the process of creating complex simulation inputs and
interpreting results by translating natural language instructions or image
inputs into precise technical commands and scripts. This approach aims to
bridge the gap between human intent and the intricate requirements of numerical
modeling tools, potentially revolutionizing how researchers and engineers
interact with simulation software. We present demonstrations involving AI
assistants for performing slope stability analyses in various software
packages. The demonstrations highlight the potential of this technology to
significantly enhance productivity and accessibility in computational
geomechanics. GeoSim.AI is under active development, continuously expanding the
suite of AI assistants for various numerical simulation problems in
geotechnical engineering.
|
2501.14189
|
Distributed Multi-Agent Coordination Using Multi-Modal Foundation Models
|
cs.AI cs.LG cs.MA
|
Distributed Constraint Optimization Problems (DCOPs) offer a powerful
framework for multi-agent coordination but often rely on labor-intensive,
manual problem construction. To address this, we introduce VL-DCOPs, a
framework that takes advantage of large multimodal foundation models (LFMs) to
automatically generate constraints from both visual and linguistic
instructions. We then introduce a spectrum of agent archetypes for solving
VL-DCOPs: from a neuro-symbolic agent that delegates some of the algorithmic
decisions to an LFM, to a fully neural agent that depends entirely on an LFM
for coordination. We evaluate these agent archetypes using state-of-the-art
LLMs (large language models) and VLMs (vision language models) on three novel
VL-DCOP tasks and compare their respective advantages and drawbacks. Lastly, we
discuss how this work extends to broader frontier challenges in the DCOP
literature.
|
2501.14190
|
High-Precision Fabric Defect Detection via Adaptive Shape Convolutions
and Large Kernel Spatial Modeling
|
cs.CV
|
Detecting fabric defects in the textile industry remains a challenging task
due to the diverse and complex nature of defect patterns. Traditional methods
often suffer from slow inference speeds, limited accuracy, and inadequate
recognition rates, particularly in scenarios involving intricate or subtle
defects. To overcome these limitations, we introduce Fab-ASLKS, an advanced
fabric defect detection framework built upon the YOLOv8s architecture.
Fab-ASLKS incorporates two key modules: (1) the Adaptive Shape Convolution
Module (ASCM), which leverages adaptive shape convolution within the Neck to
enhance feature fusion and improve efficiency by extending the capabilities of
the standard C2f structure, and (2) the Large Kernel Shift Convolution Module
(LKSCM), designed to emulate large kernel effects within the Backbone, enabling
superior spatial information extraction. These modules collaboratively optimize
feature extraction and information integration across the network. Extensive
experiments conducted on the Tianchi fabric defect detection dataset
demonstrate that Fab-ASLKS achieves a 5% improvement in mAP@50 over the
baseline, showcasing its capability to deliver high precision and efficiency.
|
2501.14193
|
Fabrication of Soft and Comfortable Pressure-Sensing Shoe Sole for
Intuitive Monitoring of Human Quality Gaits
|
eess.SY cs.SY
|
The study discusses the design and fabrication of flexible pressure sensors
using Ecoflex/Graphene composites. The fabricated sensor is applied to
intuitive monitoring of human gait quality, and a soft, comfortable shoe sole
for rehabilitating patients with foot disorders is also developed. The sensor
is fabricated using a molding and casting technique, sandwiching a thin-film
Ecoflex/Graphene composite between copper (Cu) electrodes with dimensions of
15 x 15 mm^2, and exhibits high sensitivity. Five pressure sensors are
integrated into the shoe sole: one at the forefoot, three at the midfoot, and
one at the heel. The sensor's behavior is negative piezoresistive: its
resistance decreases as the applied pressure increases. The sensors are
embedded in a soft and comfortable shoe sole and then integrated with a laptop
or mobile application to monitor and analyze human gait in real time.
Furthermore, a dedicated Graphical User Interface (GUI) is designed to read the
data. The pressure sensors are connected to an ESP32 microcontroller, which
wirelessly transmits data to the GUI and smartphones, enabling intuitive
monitoring and rehabilitation of patients with foot disorders or neuromotor
diseases.
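Since the abstract specifies negative piezoresistive behavior (resistance falls as pressure rises), readout firmware typically inverts a calibration curve. The sketch below is hypothetical, with invented calibration numbers rather than values from the paper:

```python
# Hypothetical calibration pairs (resistance in kOhm, pressure in kPa),
# listed in decreasing resistance: negative piezoresistive behavior.
CAL = [(120.0, 0.0), (90.0, 50.0), (60.0, 100.0), (40.0, 150.0)]

def pressure_from_resistance(r: float) -> float:
    """Map a measured resistance to pressure by piecewise-linear
    interpolation over the calibration table."""
    if r >= CAL[0][0]:
        return CAL[0][1]      # at or above rest resistance: no load
    if r <= CAL[-1][0]:
        return CAL[-1][1]     # clamp at the calibrated maximum
    for (r_hi, p_lo), (r_lo, p_hi) in zip(CAL, CAL[1:]):
        if r_lo <= r <= r_hi:
            frac = (r_hi - r) / (r_hi - r_lo)
            return p_lo + frac * (p_hi - p_lo)

print(pressure_from_resistance(75.0))  # → 75.0 (halfway between 90 and 60 kOhm)
```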
|
2501.14194
|
ENTER: Event Based Interpretable Reasoning for VideoQA
|
cs.CV cs.AI
|
In this paper, we present ENTER, an interpretable Video Question Answering
(VideoQA) system based on event graphs. Event graphs convert videos into
graphical representations, where video events form the nodes and event-event
relationships (temporal/causal/hierarchical) form the edges. This structured
representation offers many benefits: 1) Interpretable VideoQA via generated
code that parses event-graph; 2) Incorporation of contextual visual information
in the reasoning process (code generation) via event graphs; 3) Robust VideoQA
via Hierarchical Iterative Update of the event graphs. Existing interpretable
VideoQA systems are often top-down, disregarding low-level visual information
in the reasoning plan generation, and are brittle. While bottom-up approaches
produce responses from visual data, they lack interpretability. Experimental
results on NExT-QA, IntentQA, and EgoSchema demonstrate that our method not
only outperforms existing top-down approaches while obtaining competitive
performance against bottom-up approaches, but, more importantly, offers
superior interpretability and explainability in the reasoning process.
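The event-graph representation can be illustrated with a toy graph (the events below are invented for illustration; the paper extracts them from video). Generated code then answers questions by traversing typed edges:

```python
from collections import defaultdict

# Nodes are events; each edge carries a relation type
# (temporal / causal / hierarchical in the paper's taxonomy).
edges = defaultdict(list)
def add_edge(src, rel, dst):
    edges[src].append((rel, dst))

add_edge("person enters room", "temporal", "person opens fridge")
add_edge("person opens fridge", "temporal", "person pours milk")
add_edge("person opens fridge", "causal", "fridge light turns on")

def events_after(event):
    """Follow temporal edges to list later events, in order."""
    out = []
    cur = event
    while True:
        nxt = [d for r, d in edges[cur] if r == "temporal"]
        if not nxt:
            return out
        cur = nxt[0]
        out.append(cur)

print(events_after("person enters room"))
# → ['person opens fridge', 'person pours milk']
```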
|
2501.14195
|
VideoShield: Regulating Diffusion-based Video Generation Models via
Watermarking
|
cs.CV
|
Artificial Intelligence Generated Content (AIGC) has advanced significantly,
particularly with the development of video generation models such as
text-to-video (T2V) models and image-to-video (I2V) models. However, like other
AIGC types, video generation requires robust content control. A common approach
is to embed watermarks, but most research has focused on images, with limited
attention given to videos. Traditional methods, which embed watermarks
frame-by-frame in a post-processing manner, often degrade video quality. In
this paper, we propose VideoShield, a novel watermarking framework specifically
designed for popular diffusion-based video generation models. Unlike
post-processing methods, VideoShield embeds watermarks directly during video
generation, eliminating the need for additional training. To ensure video
integrity, we introduce a tamper localization feature that can detect changes
both temporally (across frames) and spatially (within individual frames). Our
method maps watermark bits to template bits, which are then used to generate
watermarked noise during the denoising process. Using DDIM Inversion, we can
reverse the video to its original watermarked noise, enabling straightforward
watermark extraction. Additionally, the template bits allow precise detection
of potential temporal and spatial modifications. Extensive experiments across
various video models (both T2V and I2V) demonstrate that our method
effectively extracts watermarks and detects tampering without compromising video
quality. Furthermore, we show that this approach is applicable to image
generation models, enabling tamper detection in generated images as well. Codes
and models are available at
\href{https://github.com/hurunyi/VideoShield}{https://github.com/hurunyi/VideoShield}.
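One generic way to realize a bit-to-template mapping of the kind the abstract mentions is repetition coding with majority-vote extraction; this is our simplified stand-in, not VideoShield's actual scheme (which operates on the diffusion noise):

```python
import numpy as np

def embed(bits, reps=5):
    """Map watermark bits to template bits by simple repetition."""
    return np.repeat(np.asarray(bits), reps)

def extract(template, reps=5):
    """Recover watermark bits from (possibly corrupted) template bits
    by majority vote within each repetition group."""
    groups = np.asarray(template).reshape(-1, reps)
    return (groups.mean(axis=1) > 0.5).astype(int)

wm = [1, 0, 1, 1]
tpl_noisy = embed(wm).copy()
tpl_noisy[::7] ^= 1                       # corrupt a few template bits
print(extract(tpl_noisy).tolist())        # → [1, 0, 1, 1]  (watermark survives)
```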
|
2501.14196
|
PASER: A Physics-Inspired Theory for Stimulated Growth and Real-Time
Optimization in On-Demand Platforms
|
physics.soc-ph cs.SI econ.TH
|
This paper introduces an innovative framework for understanding on-demand
platforms by quantifying positive network effects, trust, revenue dynamics, and
the influence of demand on platform operations at per-minute or even per-second
granularity. Drawing inspiration from physics, the framework provides both a
theoretical and pragmatic perspective, offering a pictorial and quantitative
representation of how on-demand platforms create value. It seeks to demystify
their nuanced operations by providing practical, tangible, and highly
applicable metrics, platform design templates, and real-time optimization tools
for strategic what-if scenario planning. Its model demonstrates strong
predictive power and is deeply rooted in raw data. The framework offers a
deterministic insight into the workings of diverse platforms like Uber, Airbnb,
and food delivery services. Furthermore, it generalizes to model all on-demand
service platforms with cyclical operations. It works synergistically with
machine learning, game theory, and agent-based models by providing a solid
quantitative core rooted in raw data, based on physical truths, and is capable
of delivering tangible predictions for real-time operational adjustments. The
framework's mathematical model was rigorously validated using highly detailed
historical data retrieved with near 100% certainty. Applying data-driven
induction, distinct qualities were identified in big data sets via an iterative
process. Through analogical thinking, a clear and highly intuitive mapping
between the elements, operational principles, and dynamic behaviors of a
well-known physical system was established to create a physics-inspired lens
for Uber. This novel quantitative framework was named PASER (Profit
Amplification by Stimulated Emission of Revenue), drawing an analogy to its
physical counterpart, the LASER (Light Amplification by Stimulated Emission of
Radiation).
|
2501.14197
|
Bi-directional Curriculum Learning for Graph Anomaly Detection: Dual
Focus on Homogeneity and Heterogeneity
|
cs.LG cs.SI stat.ML
|
Graph anomaly detection (GAD) aims to identify nodes from a graph that are
significantly different from normal patterns. Most previous studies are
model-driven, focusing on enhancing the detection effect by improving the model
structure. However, these approaches often treat all nodes equally, neglecting
the different contributions of various nodes to the training. Therefore, we
introduce graph curriculum learning as a simple and effective plug-and-play
module to optimize GAD methods. The existing graph curriculum learning mainly
focuses on the homogeneity of graphs and treats nodes with high homogeneity as
easy nodes. In fact, GAD models can handle not only graph homogeneity but also
heterogeneity, which leads to the unsuitability of these existing methods. To
address this problem, we propose an innovative Bi-directional Curriculum
Learning strategy (BCL), which considers nodes with higher and lower similarity
to neighbor nodes as simple nodes in the direction of focusing on homogeneity
and focusing on heterogeneity, respectively, and prioritizes their training.
Extensive experiments show that BCL can be quickly integrated into existing
detection processes and significantly improves the performance of ten GAD
anomaly detection models on seven commonly used datasets.
|
2501.14198
|
Sparse Mixture-of-Experts for Non-Uniform Noise Reduction in MRI Images
|
eess.IV cs.CV
|
Magnetic Resonance Imaging (MRI) is an essential diagnostic tool in clinical
settings, but its utility is often hindered by noise artifacts introduced
during the imaging process. Effective denoising is critical for enhancing image
quality while preserving anatomical structures. However, traditional denoising
methods, which often assume uniform noise distributions, struggle to handle the
non-uniform noise commonly present in MRI images. In this paper, we introduce a
novel approach leveraging a sparse mixture-of-experts framework for MRI image
denoising. Each expert is a specialized denoising convolutional neural network
fine-tuned to target specific noise characteristics associated with different
image regions. Our method demonstrates superior performance over
state-of-the-art denoising techniques on both synthetic and real-world brain
MRI datasets. Furthermore, we show that it generalizes effectively to unseen
datasets, highlighting its robustness and adaptability.
|
2501.14199
|
Coordinating Ride-Pooling with Public Transit using Reward-Guided
Conservative Q-Learning: An Offline Training and Online Fine-Tuning
Reinforcement Learning Framework
|
cs.LG cs.AI cs.ET
|
This paper introduces a novel reinforcement learning (RL) framework, termed
Reward-Guided Conservative Q-learning (RG-CQL), to enhance coordination between
ride-pooling and public transit within a multimodal transportation network. We
model each ride-pooling vehicle as an agent governed by a Markov Decision
Process (MDP) and propose an offline training and online fine-tuning RL
framework to learn the optimal operational decisions of the multimodal
transportation systems, including rider-vehicle matching, selection of drop-off
locations for passengers, and vehicle routing decisions, with improved data
efficiency. During the offline training phase, we develop a Conservative Double
Deep Q Network (CDDQN) as the action executor and a supervised learning-based
reward estimator, termed the Guider Network, to extract valuable insights into
action-reward relationships from data batches. In the online fine-tuning phase,
the Guider Network serves as an exploration guide, aiding CDDQN in effectively
and conservatively exploring unknown state-action pairs. The efficacy of our
algorithm is demonstrated through a realistic case study using real-world data
from Manhattan. We show that integrating ride-pooling with public transit
outperforms two benchmark cases, solo rides coordinated with transit and
ride-pooling without transit coordination, by 17% and 22% in the achieved system
rewards, respectively. Furthermore, our innovative offline training and online
fine-tuning framework offers a remarkable 81.3% improvement in data efficiency
compared to traditional online RL methods with adequate exploration budgets,
with a 4.3% increase in total rewards and a 5.6% reduction in overestimation
errors. Experimental results further demonstrate that RG-CQL effectively
addresses the challenges of transitioning from offline to online RL in
large-scale ride-pooling systems integrated with transit.
|
2501.14204
|
Dynamic Token Reduction during Generation for Vision Language Models
|
cs.CV cs.AI
|
Vision-Language Models (VLMs) have achieved notable success in multimodal
tasks but face practical limitations due to the quadratic complexity of decoder
attention mechanisms and autoregressive generation. Existing methods like FASTV
and VTW have achieved notable results in reducing redundant visual tokens, but
these approaches focus on pruning tokens in a single forward pass without
systematically analyzing the redundancy of visual tokens throughout the entire
generation process. In this paper, we introduce a dynamic pruning strategy
tailored for VLMs, named Dynamic Rate (DyRate), which progressively adjusts the
compression rate during generation. Our analysis of the distribution of
attention reveals that the importance of visual tokens decreases throughout the
generation process, inspiring us to adopt a more aggressive compression rate.
By integrating a lightweight predictor based on attention distribution, our
approach enables flexible adjustment of pruning rates based on the attention
distribution. Our experimental results demonstrate that our method not only
reduces computational demands but also maintains the quality of responses.
|
2501.14208
|
You Only Teach Once: Learn One-Shot Bimanual Robotic Manipulation from
Video Demonstrations
|
cs.RO cs.CV
|
Bimanual robotic manipulation is a long-standing challenge of embodied
intelligence due to its characteristics of dual-arm spatial-temporal
coordination and high-dimensional action spaces. Previous studies rely on
pre-defined action taxonomies or direct teleoperation to alleviate or
circumvent these issues, often making them lack simplicity, versatility and
scalability. Differently, we believe that the most effective and efficient way
for teaching bimanual manipulation is learning from human-demonstrated videos,
where rich features such as spatial-temporal positions, dynamic postures,
interaction states and dexterous transitions are available almost for free. In
this work, we propose the YOTO (You Only Teach Once), which can extract and
then inject patterns of bimanual actions from as few as a single binocular
observation of hand movements, and teach dual robot arms various complex tasks.
Furthermore, based on keyframes-based motion trajectories, we devise a subtle
solution for rapidly generating training demonstrations with diverse variations
of manipulated objects and their locations. These data can then be used to
learn a customized bimanual diffusion policy (BiDP) across diverse scenes. In
experiments, YOTO achieves impressive performance in mimicking 5 intricate
long-horizon bimanual tasks, possesses strong generalization under different
visual and spatial conditions, and outperforms existing visuomotor imitation
learning methods in accuracy and efficiency. Our project link is
https://hnuzhy.github.io/projects/YOTO.
|
2501.14210
|
PuzzleGPT: Emulating Human Puzzle-Solving Ability for Time and Location
Prediction
|
cs.CV cs.AI cs.LG
|
The task of predicting time and location from images is challenging and
requires complex human-like puzzle-solving ability over different clues. In
this work, we formalize this ability into core skills and implement them using
different modules in an expert pipeline called PuzzleGPT. PuzzleGPT consists of
a perceiver to identify visual clues, a reasoner to deduce prediction
candidates, a combiner to combinatorially combine information from different
clues, a web retriever to get external knowledge if the task can't be solved
locally, and a noise filter for robustness. This results in a zero-shot,
interpretable, and robust approach that records state-of-the-art performance on
two datasets -- TARA and WikiTilo. PuzzleGPT outperforms large VLMs such as
BLIP-2, InstructBLIP, LLaVA, and even GPT-4V, as well as automatically
generated reasoning pipelines like VisProg, by at least 32% and 38%,
respectively. It even rivals or surpasses finetuned models.
|
2501.14211
|
When GNNs meet symmetry in ILPs: an orbit-based feature augmentation
approach
|
cs.LG math.OC
|
A common characteristic in integer linear programs (ILPs) is symmetry,
allowing variables to be permuted without altering the underlying problem
structure. Recently, GNNs have emerged as a promising approach for solving
ILPs. However, a significant challenge arises when applying GNNs to ILPs with
symmetry: classic GNN architectures struggle to differentiate between symmetric
variables, which limits their predictive accuracy. In this work, we investigate
the properties of permutation equivariance and invariance in GNNs, particularly
in relation to the inherent symmetry of ILP formulations. We reveal that the
interaction between these two factors contributes to the difficulty of
distinguishing between symmetric variables. To address this challenge, we
explore the potential of feature augmentation and propose several guiding
principles for constructing augmented features. Building on these principles,
we develop an orbit-based augmentation scheme that first groups symmetric
variables and then samples augmented features for each group from a discrete
uniform distribution. Empirical results demonstrate that our proposed approach
significantly enhances both training efficiency and predictive performance.
|
2501.14216
|
TFG-Flow: Training-free Guidance in Multimodal Generative Flow
|
cs.LG cs.AI cs.CE
|
Given an unconditional generative model and a predictor for a target property
(e.g., a classifier), the goal of training-free guidance is to generate samples
with desirable target properties without additional training. As a highly
efficient technique for steering generative models toward flexible outcomes,
training-free guidance has gained increasing attention in diffusion models.
However, existing methods only handle data in continuous spaces, while many
scientific applications involve both continuous and discrete data (referred to
as multimodality). Another emerging trend is the growing use of the simple and
general flow matching framework in building generative foundation models, where
guided generation remains under-explored. To address this, we introduce
TFG-Flow, a novel training-free guidance method for multimodal generative flow.
TFG-Flow addresses the curse-of-dimensionality while maintaining the property
of unbiased sampling in guiding discrete variables. We validate TFG-Flow on
four molecular design tasks and show that TFG-Flow has great potential in drug
design by generating molecules with desired properties.
|
2501.14224
|
Top Ten Challenges Towards Agentic Neural Graph Databases
|
cs.AI cs.DB cs.LG
|
Graph databases (GDBs) like Neo4j and TigerGraph excel at handling
interconnected data but lack advanced inference capabilities. Neural Graph
Databases (NGDBs) address this by integrating Graph Neural Networks (GNNs) for
predictive analysis and reasoning over incomplete or noisy data. However, NGDBs
rely on predefined queries and lack autonomy and adaptability. This paper
introduces Agentic Neural Graph Databases (Agentic NGDBs), which extend NGDBs
with three core functionalities: autonomous query construction, neural query
execution, and continuous learning. We identify ten key challenges in
realizing Agentic NGDBs, including semantic unit representation, abductive
reasoning, scalable
query execution, and integration with foundation models like large language
models (LLMs). By addressing these challenges, Agentic NGDBs can enable
intelligent, self-improving systems for modern data-driven applications, paving
the way for adaptable and autonomous data management solutions.
|
2501.14225
|
Multi-agent KTO: Reinforcing Strategic Interactions of Large Language
Model in Language Game
|
cs.CL cs.AI cs.HC
|
Achieving Artificial General Intelligence (AGI) requires AI agents that can
not only make strategic decisions but also engage in flexible and meaningful
communication. Inspired by Wittgenstein's language game theory in Philosophical
Investigations, we propose that language agents can learn through in-context
interaction rather than traditional multi-stage frameworks that separate
decision-making from language expression. Using Werewolf, a social deduction
game that tests language understanding, strategic interaction, and
adaptability, we develop the Multi-agent Kahneman & Tversky's Optimization
(MaKTO). MaKTO engages diverse models in extensive gameplay to generate
unpaired desirable and unacceptable responses, then employs KTO to refine the
model's decision-making process. In 9-player Werewolf games, MaKTO achieves a
61% average win rate across various models, outperforming GPT-4o and two-stage
RL agents by relative improvements of 23.0% and 10.9%, respectively. Notably,
MaKTO also demonstrates human-like performance, winning 60% against expert
players and showing only 49% detectability in Turing-style blind tests. These
results showcase MaKTO's superior decision-making, strategic adaptation, and
natural language generation in complex social deduction games.
|
2501.14228
|
Detection and Classification of Acute Lymphoblastic Leukemia Utilizing
Deep Transfer Learning
|
cs.CV cs.AI
|
A mutation in the DNA of a single cell that compromises its function
initiates leukemia, leading to the overproduction of immature white blood
cells that encroach upon the space required for the generation of healthy
blood cells. Leukemia is treatable if identified in its initial stages.
However, its diagnosis is both arduous and time-consuming. This study
proposes a novel approach for diagnosing leukemia across four stages
(Benign, Early, Pre, and Pro) using deep learning techniques. We employed
two Convolutional Neural Network (CNN) models: MobileNetV2 with an altered
head and a custom model. The custom model consists of multiple convolutional
layers, each paired with corresponding max pooling layers. We utilized
MobileNetV2 with ImageNet weights, adjusting the head to integrate the final
results. The dataset used is the publicly available "Acute Lymphoblastic
Leukemia (ALL) Image Dataset", and we applied the Synthetic Minority
Oversampling Technique (SMOTE) to augment and balance the training dataset.
The custom model achieved an accuracy of 98.6%, while MobileNetV2 attained a
superior accuracy of 99.69%. The pretrained model showed promising results,
indicating an increased likelihood of real-world application.
|
2501.14230
|
GreedyPixel: Fine-Grained Black-Box Adversarial Attack Via Greedy
Algorithm
|
cs.CV cs.CR cs.LG
|
A critical requirement for deep learning models is ensuring their robustness
against adversarial attacks. These attacks commonly introduce noticeable
perturbations, compromising the visual fidelity of adversarial examples.
Another key challenge is that while white-box algorithms can generate effective
adversarial perturbations, they require access to the model gradients, limiting
their practicality in many real-world scenarios. Existing attack mechanisms
struggle to achieve similar efficacy without access to these gradients. In this
paper, we introduce GreedyPixel, a novel pixel-wise greedy algorithm designed
to generate high-quality adversarial examples using only query-based feedback
from the target model. GreedyPixel improves computational efficiency in what is
typically a brute-force process by perturbing individual pixels in sequence,
guided by a pixel-wise priority map. This priority map is constructed by
ranking gradients obtained from a surrogate model, providing a structured path
for perturbation. Our results demonstrate that GreedyPixel achieves attack
success rates comparable to white-box methods without the need for gradient
information, and surpasses existing algorithms in black-box settings, offering
higher success rates, reduced computational time, and imperceptible
perturbations. These findings underscore the advantages of GreedyPixel in terms
of attack efficacy, time efficiency, and visual quality.
|
2501.14231
|
Micro-macro Wavelet-based Gaussian Splatting for 3D Reconstruction from
Unconstrained Images
|
cs.CV
|
3D reconstruction from unconstrained image collections presents substantial
challenges due to varying appearances and transient occlusions. In this paper,
we introduce Micro-macro Wavelet-based Gaussian Splatting (MW-GS), a novel
approach designed to enhance 3D reconstruction by disentangling scene
representations into global, refined, and intrinsic components. The proposed
method features two key innovations: Micro-macro Projection, which allows
Gaussian points to capture details from feature maps across multiple scales
with enhanced diversity; and Wavelet-based Sampling, which leverages frequency
domain information to refine feature representations and significantly improve
the modeling of scene appearances. Additionally, we incorporate a Hierarchical
Residual Fusion Network to seamlessly integrate these features. Extensive
experiments demonstrate that MW-GS delivers state-of-the-art rendering
performance, surpassing existing methods.
|
2501.14232
|
Learning-Augmented Online Control for Decarbonizing Water
Infrastructures
|
eess.SY cs.SY
|
Water infrastructures are essential for drinking water supply, irrigation,
fire protection, and other critical applications. However, water pumping
systems, which are key to transporting water to the point of use, consume
significant amounts of energy and emit millions of tons of greenhouse gases
annually. With the wide deployment of digital water meters and sensors in these
infrastructures, Machine Learning (ML) has the potential to optimize water
supply control and reduce greenhouse gas emissions. Nevertheless, the inherent
vulnerability of ML methods in terms of worst-case performance raises safety
concerns when deployed in critical water infrastructures. To address this
challenge, we propose a learning-augmented online control algorithm, termed
LAOC, designed to dynamically schedule the activation and/or speed of water
pumps. To ensure safety, we introduce a novel design of safe action sets for
online control problems. By leveraging these safe action sets, LAOC can
provably guarantee safety constraints while utilizing ML predictions to reduce
energy and environmental costs. Our analysis reveals the tradeoff between
safety requirements and average energy/environmental cost performance.
Additionally, we conduct an experimental study on a building water supply
system to demonstrate the empirical performance of LAOC. The results indicate
that LAOC can effectively reduce environmental and energy costs while
guaranteeing safety constraints.
|
2501.14233
|
A Data-driven Dynamic Temporal Correlation Modeling Framework for
Renewable Energy Scenario Generation
|
cs.LG
|
Renewable energy power is influenced by the atmospheric system, which
exhibits nonlinear and time-varying features. To address this, a dynamic
temporal correlation modeling framework is proposed for renewable energy
scenario generation. A novel decoupled mapping path is employed for joint
probability distribution modeling, formulating regression tasks for both
marginal distributions and the correlation structure using proper scoring rules
to ensure the rationality of the modeling process. The scenario generation
process is divided into two stages. Firstly, the dynamic correlation network
models temporal correlations based on a dynamic covariance matrix, capturing
the time-varying features of renewable energy while enhancing the
interpretability of the black-box model. Secondly, the implicit quantile
network models the marginal quantile function in a nonparametric, continuous
manner, enabling scenario generation through marginal inverse sampling.
Experimental results demonstrate that the proposed dynamic correlation quantile
network outperforms state-of-the-art methods in quantifying uncertainty and
capturing dynamic correlation for short-term renewable energy scenario
generation.
|
2501.14234
|
STAR-RIS-Enabled Multi-Path Beam Routing with Passive Beam Splitting
|
eess.SP cs.IT math.IT
|
Reconfigurable intelligent surfaces (RISs) can be densely deployed in the
environment to create multi-reflection line-of-sight (LoS) links for signal
coverage enhancement. However, conventional reflection-only RISs can only
achieve half-space reflection, which limits the LoS path diversity. In
contrast, simultaneously transmitting and reflecting RISs (STAR-RISs) can
achieve full-space reflection and transmission, thereby creating more LoS
paths. Hence, in this paper, we study a new multi-STAR-RIS-aided communication
system, where a multi-antenna base station (BS) transmits to multiple
single-antenna users by exploiting the signal beam routing over a set of
cascaded LoS paths each formed by multiple STAR-RISs. To reveal essential
insights, we first consider a simplified single-user case, aiming to maximize
its received signal power by jointly optimizing the active beamforming at the
BS, the BS's power allocation over different paths, the number of selected
beam-routing paths, the selected STAR-RISs for each path, as well as their
amplitude and phase shifts for transmission/reflection. However, this problem
is difficult to solve optimally, as different paths may be intricately
coupled at their shared STAR-RISs. To tackle this difficulty, we first derive
the optimal solution to this problem in closed-form for a given set of paths.
The clique-based approach in graph theory is then applied to solve the
remaining multi-path selection problem efficiently. Next, we extend the
proposed clique-based method to the multi-user case to maximize the minimum
received signal power among all users, subject to additional constraints on the
disjointness of the selected paths for different users. Simulation results show
that our proposed STAR-RIS-enabled beam routing outperforms the conventional
beam routing with reflection-only RISs in both single- and multi-user cases.
|
2501.14238
|
Point-LN: A Lightweight Framework for Efficient Point Cloud
Classification Using Non-Parametric Positional Encoding
|
cs.CV cs.AI cs.LG cs.RO
|
We introduce Point-LN, a novel lightweight framework engineered for efficient
3D point cloud classification. Point-LN integrates essential non-parametric
components-such as Farthest Point Sampling (FPS), k-Nearest Neighbors (k-NN),
and non-learnable positional encoding-with a streamlined learnable classifier
that significantly enhances classification accuracy while maintaining a minimal
parameter footprint. This hybrid architecture ensures low computational costs
and rapid inference speeds, making Point-LN ideal for real-time and
resource-constrained applications. Comprehensive evaluations on benchmark
datasets, including ModelNet40 and ScanObjectNN, demonstrate that Point-LN
achieves competitive performance compared to state-of-the-art methods, all
while offering exceptional efficiency. These results establish Point-LN as a
robust and scalable solution for diverse point cloud classification tasks,
highlighting its potential for widespread adoption in various computer vision
applications.
|
2501.14246
|
Adaptive Progressive Attention Graph Neural Network for EEG Emotion
Recognition
|
eess.SP cs.LG
|
In recent years, numerous neuroscientific studies have shown that human
emotions are closely linked to specific brain regions, with these regions
exhibiting variability across individuals and emotional states. To fully
leverage these neural patterns, we propose an Adaptive Progressive Attention
Graph Neural Network (APAGNN), which dynamically captures the spatial
relationships among brain regions during emotional processing. The APAGNN
employs three specialized experts that progressively analyze brain topology.
The first expert captures global brain patterns, the second focuses on
region-specific features, and the third examines emotion-related channels. This
hierarchical approach enables increasingly refined analysis of neural activity.
Additionally, a weight generator integrates the outputs of all three experts,
balancing their contributions to produce the final predictive label. Extensive
experiments on three publicly available datasets (SEED, SEED-IV and MPED)
demonstrate that the proposed method enhances EEG emotion recognition
performance, achieving superior results compared to baseline methods.
|
2501.14249
|
Humanity's Last Exam
|
cs.LG cs.AI cs.CL
|
Benchmarks are important tools for tracking the rapid advancements in large
language model (LLM) capabilities. However, benchmarks are not keeping pace in
difficulty: LLMs now achieve over 90% accuracy on popular benchmarks like
MMLU, limiting informed measurement of state-of-the-art LLM capabilities. In
response, we introduce Humanity's Last Exam (HLE), a multi-modal benchmark at
the frontier of human knowledge, designed to be the final closed-ended academic
benchmark of its kind with broad subject coverage. HLE consists of 3,000
questions across dozens of subjects, including mathematics, humanities, and the
natural sciences. HLE is developed globally by subject-matter experts and
consists of multiple-choice and short-answer questions suitable for automated
grading. Each question has a known solution that is unambiguous and easily
verifiable, but cannot be quickly answered via internet retrieval.
State-of-the-art LLMs demonstrate low accuracy and calibration on HLE,
highlighting a significant gap between current LLM capabilities and the expert
human frontier on closed-ended academic questions. To inform research and
policymaking upon a clear understanding of model capabilities, we publicly
release HLE at https://lastexam.ai.
|
2501.14250
|
Siren: A Learning-Based Multi-Turn Attack Framework for Simulating
Real-World Human Jailbreak Behaviors
|
cs.CL cs.AI cs.CR
|
Large language models (LLMs) are widely used in real-world applications,
raising concerns about their safety and trustworthiness. While red-teaming with
jailbreak prompts exposes the vulnerabilities of LLMs, current efforts focus
primarily on single-turn attacks, overlooking the multi-turn strategies used by
real-world adversaries. Existing multi-turn methods rely on static patterns or
predefined logical chains, failing to account for the dynamic strategies during
attacks. We propose Siren, a learning-based multi-turn attack framework
designed to simulate real-world human jailbreak behaviors. Siren consists of
three stages: (1) training set construction utilizing Turn-Level LLM feedback
(Turn-MF), (2) post-training attackers with supervised fine-tuning (SFT) and
direct preference optimization (DPO), and (3) interactions between the
attacking and target LLMs. Experiments demonstrate that Siren achieves an
attack success rate (ASR) of 90% with LLaMA-3-8B as the attacker against
Gemini-1.5-Pro as the target model, and 70% with Mistral-7B against GPT-4o,
significantly outperforming single-turn baselines. Moreover, Siren with a
7B-scale model achieves performance comparable to a multi-turn baseline that
leverages GPT-4o as the attacker, while requiring fewer turns and employing
decomposition strategies that are better semantically aligned with attack
goals. We hope Siren inspires the development of stronger defenses against
advanced multi-turn jailbreak attacks under realistic scenarios. Code is
available at https://github.com/YiyiyiZhao/siren. Warning: This paper contains
potentially harmful text.
|
2501.14253
|
Distributionally Robust Coreset Selection under Covariate Shift
|
stat.ML cs.LG
|
Coreset selection, which involves selecting a small subset from an existing
training dataset, is an approach to reducing training data, and various
methods have been proposed for it. In practical situations where
these methods are employed, it is often the case that the data distributions
differ between the development phase and the deployment phase, with the latter
being unknown. Thus, it is challenging to select an effective subset of
training data that performs well across all deployment scenarios. We therefore
propose Distributionally Robust Coreset Selection (DRCS). DRCS theoretically
derives an estimate of the upper bound for the worst-case test error, assuming
that the future covariate distribution may deviate within a defined range from
the training distribution. Furthermore, by selecting instances in a way that
suppresses the estimate of the upper bound for the worst-case test error, DRCS
achieves distributionally robust training instance selection. This study is
primarily applicable to convex training computation, but we demonstrate that it
can also be applied to deep learning under appropriate approximations. In this
paper, we focus on covariate shift, a type of data distribution shift, and
demonstrate the effectiveness of DRCS through experiments.
|
2501.14256
|
Revisiting Applicable and Comprehensive Knowledge Tracing in Large-Scale
Data
|
cs.LG cs.IR
|
Knowledge Tracing (KT) is a fundamental component of Intelligent Tutoring
Systems (ITS), enabling the modeling of students' knowledge states to predict
future performance. The introduction of Deep Knowledge Tracing (DKT), the first
deep learning-based KT (DLKT) model, has brought significant advantages in
terms of applicability and comprehensiveness. However, recent DLKT models, such
as Attentive Knowledge Tracing (AKT), have often prioritized predictive
performance at the expense of these benefits. While deep sequential models like
DKT have shown potential, they face challenges related to parallel computing,
storage decision modification, and limited storage capacity. To address these
limitations, we propose DKT2, a novel KT model that leverages the recently
developed xLSTM architecture. DKT2 enhances input representation using the
Rasch model and incorporates Item Response Theory (IRT) for interpretability,
allowing for the decomposition of learned knowledge into familiar and
unfamiliar knowledge. By integrating this knowledge with predicted questions,
DKT2 generates comprehensive knowledge states. Extensive experiments conducted
across three large-scale datasets demonstrate that DKT2 consistently
outperforms 17 baseline models in various prediction tasks, underscoring its
potential for real-world educational applications. This work bridges the gap
between theoretical advancements and practical implementation in KT. Our code
and datasets will be available at https://github.com/codebase-2025/DKT2.
|
2501.14259
|
Optimal Investment under Mutual Strategy Influence among Agents
|
eess.SY cs.SY math.OC q-fin.MF q-fin.PM
|
In financial markets, agents often mutually influence each other's investment
strategies and adjust their strategies to align with others. However, there is
limited quantitative study of agents' investment strategies in such scenarios.
In this work, we formulate the optimal investment differential game problem to
study the mutual influence among agents. We derive the analytical solutions for
agents' optimal strategies and propose a fast algorithm to find approximate
solutions with low computational complexity. We theoretically analyze the
impact of mutual influence on agents' optimal strategies and terminal wealth.
When the mutual influence is strong and approaches infinity, we show that
agents' optimal strategies converge to the asymptotic strategy. Furthermore, in
general cases, we prove that agents' optimal strategies are linear combinations
of the asymptotic strategy and their rational strategies without others'
influence. We validate the performance of the fast algorithm and verify the
correctness of our analysis using numerical experiments. This work is crucial
to comprehend mutual influence among agents and design effective mechanisms to
guide their strategies in financial markets.
|
2501.14264
|
CDI: Blind Image Restoration Fidelity Evaluation based on Consistency
with Degraded Image
|
eess.IV cs.CV
|
Recent advancements in Blind Image Restoration (BIR) methods, based on
Generative Adversarial Networks and Diffusion Models, have significantly
improved visual quality. However, they present significant challenges for Image
Quality Assessment (IQA), as the existing Full-Reference IQA methods often rate
images with high perceptual quality poorly. In this paper, we reassess the
Solution Non-Uniqueness and Degradation Indeterminacy issues of BIR, and
propose constructing a specific BIR IQA system. Instead of directly comparing
a restored image with a reference image, the BIR IQA evaluates fidelity by
calculating the Consistency with Degraded Image (CDI). Specifically, we propose
a wavelet domain Reference Guided CDI algorithm, which can acquire the
consistency with a degraded image for various degradation types without
requiring knowledge of degradation parameters. The supported degradation types
include downsampling, blur, noise, JPEG compression, and complex combined
degradations. In addition,
we propose a Reference Agnostic CDI, enabling BIR fidelity evaluation without
reference images. Finally, in order to validate the rationality of CDI, we
create a new Degraded Images Switch Display Comparison Dataset (DISDCD) for
subjective evaluation of BIR fidelity. Experiments conducted on DISDCD verify
that CDI is markedly superior to common Full Reference IQA methods for BIR
fidelity evaluation. The source code and the DISDCD dataset will be publicly
available shortly.
|
2501.14265
|
Bayesian Neural Networks for One-to-Many Mapping in Image Enhancement
|
cs.CV
|
In image enhancement tasks, such as low-light and underwater image
enhancement, a degraded image can correspond to multiple plausible target
images due to dynamic photography conditions, such as variations in
illumination. This naturally results in a one-to-many mapping challenge. To
address this, we propose a Bayesian Enhancement Model (BEM) that incorporates
Bayesian Neural Networks (BNNs) to capture data uncertainty and produce diverse
outputs. To achieve real-time inference, we introduce a two-stage approach:
Stage I employs a BNN to model the one-to-many mappings in the low-dimensional
space, while Stage II refines fine-grained image details using a Deterministic
Neural Network (DNN). To accelerate BNN training and convergence, we introduce
a dynamic Momentum Prior. Extensive experiments on multiple low-light and
underwater image enhancement benchmarks demonstrate the superiority of our
method over deterministic models.
|
2501.14266
|
TrajFlow: A Generative Framework for Occupancy Density Estimation Using
Normalizing Flows
|
cs.LG
|
In transportation systems and autonomous vehicles, intelligent agents must
understand the future motion of traffic participants to effectively plan motion
trajectories. At the same time, the motion of traffic participants is
inherently uncertain. In this paper, we propose TrajFlow, a generative
framework for estimating the occupancy density of traffic participants. Our
framework utilizes a causal encoder to extract semantically meaningful
embeddings of the observed trajectory, as well as a normalizing flow to decode
these embeddings and determine the most likely future location of traffic
participants at some time point in the future. Our formulation differs from
existing approaches because we model the marginal distribution of spatial
locations instead of the joint distribution of unobserved trajectories. The
advantages of a marginal formulation are numerous. First, we demonstrate that
the marginal formulation produces higher accuracy on challenging trajectory
forecasting benchmarks. Second, the marginal formulation allows for a fully
continuous sampling of future locations. Finally, marginal densities are better
suited for downstream tasks as they allow for the computation of per-agent
motion trajectories and occupancy grids, the two most commonly used
representations for motion forecasting. We present a novel architecture based
entirely on neural differential equations as an implementation of this
framework and provide ablations to demonstrate the advantages of a continuous
implementation over a more traditional discrete neural network based approach.
The code is available at https://github.com/kosieram21/TrajFlow .
|
2501.14268
|
Pre-train and Fine-tune: Recommenders as Large Models
|
cs.IR cs.AI
|
In reality, users have different interests across periods, regions, scenes,
etc. Such changes in interest are so drastic that they are difficult for
recommenders to capture. Existing multi-domain learning can alleviate this
problem. However, industrial recommendation systems have complex structures,
huge amounts of data, and extremely high training costs, so it is difficult to
modify an industrial recommender's structure and re-train it. To fill this
gap, we consider recommenders as large pre-trained
models and fine-tune them. We first propose the theory of the information
bottleneck for fine-tuning and present an explanation for the fine-tuning
technique in recommenders. To tailor for recommendation, we design an
information-aware adaptive kernel (IAK) technique to fine-tune the pre-trained
recommender. Specifically, we define fine-tuning as two phases, knowledge
compression and knowledge matching, and let the training stage of IAK
explicitly approximate these two phases. Our proposed approach, designed from
the essence of fine-tuning, is well interpretable. Extensive online and offline experiments
show the superiority of our proposed method. Besides, we also share unique and
important lessons we learned when deploying the method in a large-scale online
platform. We also present the potential issues of fine-tuning techniques in
recommendation systems and the corresponding solutions. The recommender with
IAK technique has been deployed on the homepage of a billion-scale online food
platform for several months and has yielded considerable profits in our
business.
|
2501.14269
|
Hierarchical Time-Aware Mixture of Experts for Multi-Modal Sequential
Recommendation
|
cs.IR cs.AI
|
Multi-modal sequential recommendation (SR) leverages multi-modal data to
learn more comprehensive item features and user preferences than traditional SR
methods, which has become a critical topic in both academia and industry.
Existing methods typically focus on enhancing multi-modal information utility
through adaptive modality fusion to capture the evolution of user preferences
from user-item interaction sequences. However, most of them overlook the
interference caused by redundant interest-irrelevant information contained in
rich multi-modal data. Additionally, they primarily rely on implicit temporal
information based solely on chronological ordering, neglecting explicit
temporal signals that could more effectively represent dynamic user interest
over time. To address these limitations, we propose a Hierarchical time-aware
Mixture of experts for multi-modal Sequential Recommendation (HM4SR) with a
two-level Mixture of Experts (MoE) and a multi-task learning strategy.
Specifically, the first MoE, named Interactive MoE, extracts essential user
interest-related information from the multi-modal data of each item. Then, the
second MoE, termed Temporal MoE, captures user dynamic interests by introducing
explicit temporal embeddings from timestamps in modality encoding. To further
address data sparsity, we propose three auxiliary supervision tasks:
sequence-level category prediction (CP) for item feature understanding,
contrastive learning on ID (IDCL) to align sequence context with user
interests, and placeholder contrastive learning (PCL) to integrate temporal
information with modalities for dynamic interest modeling. Extensive
experiments on four public datasets verify the effectiveness of HM4SR compared
to several state-of-the-art approaches.
|
2501.14270
|
Max-Min Fairness for IRS-Assisted Secure Two-Way Communications
|
cs.IT math.IT
|
This paper investigates an intelligent reflective surface (IRS) assisted
secure multi-user two-way communication system. The aim of this paper is to
enhance the physical layer security by optimizing the minimum secrecy-rate
among all user-pairs in the presence of a malicious user. The optimization
problem is converted into an alternating optimization problem consisting of two
sub-problems. Transmit power optimization is handled using a fractional
programming method, whereas IRS phase shift optimization is handled with
semi-definite programming. The convergence of the proposed algorithm is
investigated numerically. The performance gain in minimum secrecy-rate is
quantified for four different user configurations in comparison to the baseline
scheme. Results indicate a 3.6-fold gain in minimum secrecy rate over the
baseline scheme when the IRS is positioned near a legitimate user, even when
the malicious user is located close to the same legitimate user.
|
2501.14271
|
TLXML: Task-Level Explanation of Meta-Learning via Influence Functions
|
cs.LG
|
The scheme of adaptation via meta-learning is seen as an ingredient for
solving the problem of data shortage or distribution shift in real-world
applications, but it also brings the new risk of inappropriate updates of the
model in the user environment, which increases the demand for explainability.
Among the various types of XAI methods, establishing a method of explanation
based on past experience in meta-learning requires special consideration due to
its bi-level structure of training, which has been left unexplored. In this
work, we propose influence functions for explaining meta-learning that measure
the sensitivities of training tasks to adaptation and inference. We also argue
that the approximation of the Hessian using the Gauss-Newton matrix resolves
computational barriers peculiar to meta-learning. We demonstrate the adequacy
of the method through experiments on task distinction and task distribution
distinction using image classification tasks with MAML and Prototypical
Network.
|
2501.14275
|
Leveraging Online Olympiad-Level Math Problems for LLMs Training and
Contamination-Resistant Evaluation
|
cs.CL cs.AI cs.LG
|
Advances in Large Language Models (LLMs) have sparked interest in their
ability to solve Olympiad-level math problems. However, the training and
evaluation of these models are constrained by the limited size and quality of
available datasets, as creating large-scale data for such advanced problems
requires extensive effort from human experts. In addition, current benchmarks
are prone to contamination, leading to unreliable evaluations. In this paper,
we present an automated pipeline that leverages the rich resources of the Art
of Problem Solving (AoPS) forum, which predominantly features Olympiad-level
problems and community-driven solutions. Using open-source LLMs, we develop a
method to extract question-answer pairs from the forum, resulting in
AoPS-Instruct, a dataset of more than 600,000 high-quality QA pairs. Our
experiments demonstrate that fine-tuning LLMs on AoPS-Instruct improves their
reasoning abilities across various benchmarks. Moreover, we build an automatic
pipeline that introduces LiveAoPSBench, an evolving evaluation set with
timestamps, derived from the latest forum data, providing a
contamination-resistant benchmark for assessing LLM performance. Notably, we
observe a significant decline in LLM performance over time, suggesting their
success on older examples may stem from pre-training exposure rather than true
reasoning ability. Our work presents a scalable approach to creating and
maintaining large-scale, high-quality datasets for advanced math reasoning,
offering valuable insights into the capabilities and limitations of LLMs in
this domain. Our benchmark and code are available at
https://github.com/DSL-Lab/aops
|
2501.14276
|
Global Semantic-Guided Sub-image Feature Weight Allocation in
High-Resolution Large Vision-Language Models
|
cs.CV cs.AI
|
As the demand for high-resolution image processing in Large Vision-Language
Models (LVLMs) grows, sub-image partitioning has become a popular approach for
mitigating visual information loss associated with fixed-resolution processing.
However, existing partitioning methods uniformly process sub-images, resulting
in suboptimal image understanding. In this work, we reveal that the sub-images
with higher semantic relevance to the entire image encapsulate richer visual
information for preserving the model's visual understanding ability. Therefore,
we propose the Global Semantic-guided Weight Allocator (GSWA) module, which
dynamically allocates weights to sub-images based on their relative information
density, emulating human visual attention mechanisms. This approach enables the
model to focus on more informative regions, overcoming the limitations of
uniform treatment. We integrate GSWA into the InternVL2-2B framework to create
SleighVL, a lightweight yet high-performing model. Extensive experiments
demonstrate that SleighVL outperforms models with comparable parameters and
remains competitive with larger models. Our work provides a promising direction
for more efficient and contextually aware high-resolution image processing in
LVLMs, advancing multimodal system development.
|
2501.14277
|
Dense-SfM: Structure from Motion with Dense Consistent Matching
|
cs.CV
|
We present Dense-SfM, a novel Structure from Motion (SfM) framework designed
for dense and accurate 3D reconstruction from multi-view images. Sparse
keypoint matching, which traditional SfM methods often rely on, limits both
accuracy and point density, especially in texture-less areas. Dense-SfM
addresses this limitation by integrating dense matching with a Gaussian
Splatting (GS) based track extension which gives more consistent, longer
feature tracks. To further improve reconstruction accuracy, Dense-SfM is
equipped with a multi-view kernelized matching module leveraging transformer
and Gaussian Process architectures, for robust track refinement across
multi-views. Evaluations on the ETH3D and Texture-Poor SfM datasets show that
Dense-SfM offers significant improvements in accuracy and density over
state-of-the-art methods.
|
2501.14278
|
Active Learning for Continual Learning: Keeping the Past Alive in the
Present
|
cs.LG cs.AI
|
Continual learning (CL) enables deep neural networks to adapt to
ever-changing data distributions. In practice, annotation may be costly,
motivating active continual learning (ACL), which applies active learning (AL)
to CL scenarios where reducing the labeling cost by selecting the most
informative subset is preferable. However,
conventional AL strategies are not suitable for ACL, as they focus solely on
learning the new knowledge, leading to catastrophic forgetting of previously
learned tasks. Therefore, ACL requires a new AL strategy that can balance the
prevention of catastrophic forgetting and the ability to quickly learn new
tasks. In this paper, we propose AccuACL, Accumulated informativeness-based
Active Continual Learning, by the novel use of the Fisher information matrix as
a criterion for sample selection, derived from a theoretical analysis of the
Fisher-optimality preservation properties within the framework of ACL, while
also addressing the scalability issue of Fisher information-based AL. Extensive
experiments demonstrate that AccuACL significantly outperforms AL baselines
across various CL algorithms, improving average accuracy and forgetting by
23.8% and 17.0%, respectively.
|
2501.14279
|
Deep Learning-Powered Classification of Thoracic Diseases in Chest
X-Rays
|
eess.IV cs.CV
|
Chest X-rays play a pivotal role in diagnosing respiratory diseases such as
pneumonia, tuberculosis, and COVID-19, which are prevalent and present unique
diagnostic challenges due to overlapping visual features and variability in
image quality. Severe class imbalance and the complexity of medical images
hinder automated analysis. This study leverages deep learning techniques,
including transfer learning on pre-trained models (AlexNet, ResNet, and
InceptionNet), to enhance disease detection and classification. By fine-tuning
these models and incorporating focal loss to address class imbalance,
significant performance improvements were achieved. Grad-CAM visualizations
further enhance model interpretability, providing insights into clinically
relevant regions influencing predictions. The InceptionV3 model, for instance,
achieved a 28% improvement in AUC and a 15% increase in F1-Score. These
findings highlight the potential of deep learning to improve diagnostic
workflows and support clinical decision-making.
|
2501.14280
|
Enhancing Robotic Precision in Construction: A Modular Factor
Graph-Based Framework to Deflection and Backlash Compensation Using
High-Accuracy Accelerometers
|
cs.RO
|
Accurate positioning is crucial in the construction industry, where labor
shortages highlight the need for automation. Robotic systems with long
kinematic chains are required to reach complex workspaces, including floors,
walls, and ceilings. These requirements significantly impact positioning
accuracy due to effects such as deflection and backlash in various parts along
the kinematic chain. In this work, we introduce a novel approach that
integrates deflection and backlash compensation models with high-accuracy
accelerometers, significantly enhancing position accuracy. Our method employs a
modular framework based on a factor graph formulation to estimate the state of
the kinematic chain, leveraging acceleration measurements to inform the model.
Extensive testing on publicly released datasets, reflecting real-world
construction disturbances, demonstrates the advantages of our approach. The
proposed method reduces the $95\%$ error threshold in the xy-plane by $50\%$
compared to the state-of-the-art Virtual Joint Method, and by $31\%$ when
incorporating base tilt compensation.
|
2501.14284
|
Feature-based Evolutionary Diversity Optimization of Discriminating
Instances for Chance-constrained Optimization Problems
|
cs.NE math.OC
|
Algorithm selection is crucial in the field of optimization, as no single
algorithm performs perfectly across all types of optimization problems. Finding
the best algorithm among a given set of algorithms for a given problem requires
a detailed analysis of the problem's features. To do so, it is important to
have a diverse set of benchmarking instances highlighting the difference in
algorithms' performance. In this paper, we evolve diverse benchmarking
instances for chance-constrained optimization problems that contain stochastic
components characterized by their expected values and variances. These
instances clearly differentiate the performance of two given algorithms,
meaning they are easy to solve by one algorithm and hard to solve by the other.
We introduce a $(\mu+1)~EA$ for feature-based diversity optimization to evolve
such differentiating instances. We study the chance-constrained maximum
coverage problem with stochastic weights on the vertices as an example of
chance-constrained optimization problems. The experimental results demonstrate
that our method successfully generates diverse instances based on different
features while effectively distinguishing the performance between a pair of
algorithms.
|
2501.14285
|
Cascaded Large-Scale TSP Solving with Unified Neural Guidance: Bridging
Local and Population-based Search
|
cs.NE
|
The traveling salesman problem (TSP) is a fundamental NP-hard optimization
problem. This work presents UNiCS, a novel unified neural-guided cascaded
solver for solving large-scale TSP instances. UNiCS comprises a local search
(LS) phase and a population-based search (PBS) phase, both guided by a learning
component called unified neural guidance (UNG). Specifically, UNG guides
solution generation across both phases and determines appropriate phase
transition timing to effectively combine the complementary strengths of LS and
PBS. While trained only on simple distributions with relatively small-scale TSP
instances, UNiCS generalizes effectively to challenging TSP benchmarks
containing much larger instances (10,000-71,009 nodes) with diverse node
distributions entirely unseen during training. Experimental results on the
large-scale TSP instances demonstrate that UNiCS consistently outperforms
state-of-the-art methods, with its advantage remaining consistent across
various runtime budgets.
|
2501.14287
|
Snapshot multi-spectral imaging through defocusing and a Fourier imager
network
|
physics.optics cs.CV cs.LG physics.app-ph
|
Multi-spectral imaging, which simultaneously captures the spatial and
spectral information of a scene, is widely used across diverse fields,
including remote sensing, biomedical imaging, and agricultural monitoring.
Here, we introduce a snapshot multi-spectral imaging approach employing a
standard monochrome image sensor with no additional spectral filters or
customized components. Our system leverages the inherent chromatic aberration
of wavelength-dependent defocusing as a natural source of physical encoding of
multi-spectral information; this encoded image information is rapidly decoded
via a deep learning-based multi-spectral Fourier Imager Network (mFIN). We
experimentally tested our method with six illumination bands and demonstrated
an overall accuracy of 92.98% for predicting the illumination channels at the
input and achieved a robust multi-spectral image reconstruction on various test
objects. This deep learning-powered framework achieves high-quality
multi-spectral image reconstruction using snapshot image acquisition with a
monochrome image sensor and could be useful for applications in biomedicine,
industrial quality control, and agriculture, among others.
|
2501.14288
|
A Comprehensive Framework for Semantic Similarity Analysis of Human and
AI-Generated Text Using Transformer Architectures and Ensemble Techniques
|
cs.CL cs.AI
|
The rapid advancement of large language models (LLMs) has made detecting
AI-generated text an increasingly critical challenge. Traditional methods often
fail to capture the nuanced semantic differences between human and
machine-generated content. We therefore propose a novel approach based on
semantic similarity analysis, leveraging a multi-layered architecture that
combines a pre-trained DeBERTa-v3-large model, Bi-directional LSTMs, and linear
attention pooling to capture both local and global semantic patterns. To
enhance performance, we employ advanced input and output augmentation
techniques such as sector-level context integration and wide output
configurations. These techniques enable the model to learn more discriminative
features and generalize across diverse domains. Experimental results show that
this approach outperforms traditional methods, demonstrating its usefulness
for AI-generated text detection and other text comparison tasks.
|
2501.14289
|
Higher-Order Meta Distribution Analysis of Wireless Systems with
Application to the Reliability of UWB THz Networks
|
eess.SY cs.SY
|
Communication reliability, as defined by 3GPP, refers to the probability of
providing a desired quality of service (QoS). This metric is typically
quantified for wireless networks by averaging the QoS success indicator over
spatial and temporal random variables. Recently, the meta distribution (MD) has
emerged as a two-level performance analysis tool for wireless networks,
offering a detailed examination of the outer level (i.e., system-level)
reliability assessment versus the inner level (i.e., link-level) reliability
thresholds. Most existing studies focus on first-order spatiotemporal MD
reliability analyses, and the benefits of leveraging MD reliability for
applications beyond this structure remain unexplored, a gap addressed in this
paper. We present wireless application examples that can benefit from
higher-order MD reliability analysis. Specifically, we provide the analysis and
numerical results for a second-order spatial-spectral-temporal MD reliability
of ultra-wideband THz communication. The results demonstrate the value of the
hierarchical representation of MD reliability across three domains and the
impact of the inner-layer target reliability on the overall MD reliability
measure.
|
2501.14291
|
Advances in Temporal Point Processes: Bayesian, Deep, and LLM Approaches
|
cs.LG stat.ML
|
Temporal point processes (TPPs) are stochastic process models used to
characterize event sequences occurring in continuous time. Traditional
statistical TPPs have a long-standing history, with numerous models proposed
and successfully applied across diverse domains. In recent years, advances in
deep learning have spurred the development of neural TPPs, enabling greater
flexibility and expressiveness in capturing complex temporal dynamics. The
emergence of large language models (LLMs) has further sparked excitement,
offering new possibilities for modeling and analyzing event sequences by
leveraging their rich contextual understanding. This survey presents a
comprehensive review of recent research on TPPs from three perspectives:
Bayesian, deep learning, and LLM approaches. We begin with a review of the
fundamental concepts of TPPs, followed by an in-depth discussion of model
design and parameter estimation techniques in these three frameworks. We also
revisit classic application areas of TPPs to highlight their practical
relevance. Finally, we outline challenges and promising directions for future
research.
|
2501.14294
|
Examining Alignment of Large Language Models through Representative
Heuristics: The Case of Political Stereotypes
|
cs.CL cs.AI
|
Examining the alignment of large language models (LLMs) has become
increasingly important, particularly when these systems fail to operate as
intended. This study explores the challenge of aligning LLMs with human
intentions and values, with specific focus on their political inclinations.
Previous research has highlighted LLMs' propensity to display political
leanings, and their ability to mimic certain political parties' stances on
various issues. However, the extent and conditions under which LLMs deviate
from empirical positions have not been thoroughly examined. To address this
gap, our study systematically investigates the factors contributing to LLMs'
deviations from empirical positions on political issues, aiming to quantify
these deviations and identify the conditions that cause them.
Drawing on cognitive science findings related to representativeness
heuristics -- where individuals readily recall the representative attribute of
a target group in a way that leads to exaggerated beliefs -- we scrutinize LLM
responses through this heuristics lens. We conduct experiments to determine how
LLMs exhibit stereotypes by inflating judgments in favor of specific political
parties. Our results indicate that while LLMs can mimic certain political
parties' positions, they often exaggerate these positions more than human
respondents do. Notably, LLMs tend to overemphasize representativeness to a
greater extent than humans. This study highlights the susceptibility of LLMs to
representativeness heuristics, suggesting potential vulnerabilities to
political stereotypes. We propose prompt-based mitigation strategies that
demonstrate effectiveness in reducing the influence of representativeness in
LLM responses.
|
2501.14296
|
Multi-stage Large Language Model Pipelines Can Outperform GPT-4o in
Relevance Assessment
|
cs.IR
|
The effectiveness of search systems is evaluated using relevance labels that
indicate the usefulness of documents for specific queries and users. While
obtaining these relevance labels from real users is ideal, scaling such data
collection is challenging. Consequently, third-party annotators are employed,
but their inconsistent accuracy demands costly auditing, training, and
monitoring. We propose an LLM-based modular classification pipeline that
divides the relevance assessment task into multiple stages, each utilising
different prompts and models of varying sizes and capabilities. Applied to TREC
Deep Learning (TREC-DL), one of our approaches showed an 18.4% Krippendorff's
$\alpha$ accuracy increase over OpenAI's GPT-4o mini while maintaining a cost
of about 0.2 USD per million input tokens, offering a more efficient and
scalable solution for relevance assessment. This approach beats the baseline
performance of GPT-4o (5 USD). With a pipeline approach, even the accuracy of
the GPT-4o flagship model, measured in $\alpha$, could be improved by 9.7%.
|
2501.14300
|
Fast Think-on-Graph: Wider, Deeper and Faster Reasoning of Large
Language Model on Knowledge Graph
|
cs.AI cs.CL cs.LG cs.SI
|
Graph Retrieval Augmented Generation (GRAG) is a novel paradigm that takes
the naive RAG system a step further by integrating graph information, such as
knowledge graphs (KGs), into large language models (LLMs) to mitigate
hallucination. However, existing GRAG approaches still encounter limitations:
1) simple paradigms usually fail on complex problems due to the narrow and
shallow correlations captured from KGs; 2) methods strongly coupled with KGs
tend to incur high computational cost and be time-consuming if the graph is
dense. In this paper, we propose Fast Think-on-Graph (FastToG), an innovative
paradigm that enables LLMs to think ``community by community" within KGs. To
this end, FastToG employs community detection for deeper correlation capture
and two-stage community pruning (coarse and fine) for faster retrieval.
Furthermore, we also develop two Community-to-Text methods to convert the graph
structure of communities into textual form for better understanding by LLMs.
Experimental results demonstrate the effectiveness of FastToG, showcasing
higher accuracy, faster reasoning, and better explainability compared to the
previous works.
|
2501.14302
|
TD-RD: A Top-Down Benchmark with Real-Time Framework for Road Damage
Detection
|
cs.CV
|
Object detection has witnessed remarkable advancements over the past decade,
largely driven by breakthroughs in deep learning and the proliferation of
large-scale datasets. However, the domain of road damage detection remains
relatively underexplored, despite its critical significance for applications
such as infrastructure maintenance and road safety. This paper addresses this
gap by introducing a novel top-down benchmark that offers a complementary
perspective to existing datasets, specifically tailored for road damage
detection. Our proposed Top-Down Road Damage Detection Dataset (TDRD) includes
three primary categories of road damage: cracks, potholes, and patches,
captured from a top-down viewpoint. The dataset consists of 7,088
high-resolution images, encompassing 12,882 annotated instances of road
damage. Additionally, we present a novel real-time object detection framework,
TDYOLOV10, designed to handle the unique challenges posed by the TDRD dataset.
Comparative studies with state-of-the-art models demonstrate competitive
baseline results. By
releasing TDRD, we aim to accelerate research in this crucial area. A sample of
the dataset will be made publicly available upon the paper's acceptance.
|
2501.14304
|
MASTER: A Multi-Agent System with LLM Specialized MCTS
|
cs.AI
|
Large Language Models (LLM) are increasingly being explored for
problem-solving tasks. However, their strategic planning capability is often
viewed with skepticism. Recent studies have incorporated the Monte Carlo Tree
Search (MCTS) algorithm to augment the planning capacity of LLM. Despite its
potential, MCTS relies on extensive sampling simulations to approximate the
true reward distribution, which leads to two primary issues. Firstly, MCTS is
effective for tasks like the Game of Go, where simulation results can yield
objective rewards (e.g., 1 for a win and 0 for a loss). However, for tasks such
as question answering, the result of a simulation is the answer to the
question, which cannot yield an objective reward without the ground truth.
Secondly, obtaining statistically significant reward estimations typically
requires a sample size exceeding 30 simulations, resulting in excessive token
usage and time consumption. To address these challenges, we present the
Multi-Agent System with Tactical Execution and Reasoning using LLM Specialized
MCTS (MASTER), a novel framework that coordinates agent recruitment and
communication through LLM specialized MCTS. This system autonomously adjusts
the number of agents based on task complexity and ensures focused communication
among them. Comprehensive experiments across various tasks demonstrate the
effectiveness of our proposed framework. It achieves 76% accuracy on HotpotQA
and 80% on WebShop, setting new state-of-the-art performance on these datasets.
|
2501.14305
|
A Zero-Shot LLM Framework for Automatic Assignment Grading in Higher
Education
|
cs.CY cs.AI
|
Automated grading has become an essential tool in education technology due to
its ability to efficiently assess large volumes of student work, provide
consistent and unbiased evaluations, and deliver immediate feedback to enhance
learning. However, current systems face significant limitations, including the
need for large datasets in few-shot learning methods, a lack of personalized
and actionable feedback, and an overemphasis on benchmark performance rather
than student experience. To address these challenges, we propose a Zero-Shot
Large Language Model (LLM)-Based Automated Assignment Grading (AAG) system.
This framework leverages prompt engineering to evaluate both computational and
explanatory student responses without requiring additional training or
fine-tuning. The AAG system delivers tailored feedback that highlights
individual strengths and areas for improvement, thereby enhancing student
learning outcomes. Our study demonstrates the system's effectiveness through
comprehensive evaluations, including survey responses from higher education
students that indicate significant improvements in motivation, understanding,
and preparedness compared to traditional grading methods. The results validate
the AAG system's potential to transform educational assessment by prioritizing
learning experiences and providing scalable, high-quality feedback.
|
2501.14306
|
Additive Manufacturing Processes Protocol Prediction by Artificial
Intelligence using X-ray Computed Tomography data
|
cs.CV physics.app-ph
|
The quality of a part fabricated by an Additive Manufacturing (AM)
process depends on the process parameters used, so optimization is required
to achieve the desired quality. A methodology is proposed to set these
parameters non-iteratively without human intervention. It utilizes Artificial
Intelligence (AI) to fully automate the process, with the capability to
self-train any suitable AI model by further assimilating the training data.
This study includes three commercially available 3D printers for soft-material
printing based on the Material Extrusion (MEX) AM process. The samples are 3D
printed for six different AM process parameter settings obtained by varying
layer height and nozzle speed. The novel part of the methodology is the
incorporation of an AI-based image segmentation step in the decision-making
stage that uses quality-inspected training data from the Non-Destructive
Testing (NDT) method. The performance of the trained AI model is compared with
two software tools based on the classical thresholding method. The AI-based
Artificial Neural Network (ANN) model is trained on NDT-assessed and
AI-segmented data to automate the selection of optimized process parameters.
The AI-based model is 99.3% accurate, while the best available commercial
classical image method is 83.44% accurate. The best overall R value for
training the ANN is 0.82. The MEX process gives a 22.06% porosity error
relative to the design. Two AI models trained on NDT data and integrated into
a series pipeline for selecting optimal process parameters are proposed and
verified by classical optimization and mechanical testing methods.
|
2501.14308
|
Learning Primitive Relations for Compositional Zero-Shot Learning
|
cs.CV cs.AI
|
Compositional Zero-Shot Learning (CZSL) aims to identify unseen state-object
compositions by leveraging knowledge learned from seen compositions. Existing
approaches often independently predict states and objects, overlooking their
relationships. In this paper, we propose a novel framework, learning primitive
relations (LPR), designed to probabilistically capture the relationships
between states and objects. By employing the cross-attention mechanism, LPR
considers the dependencies between states and objects, enabling the model to
infer the likelihood of unseen compositions. Experimental results demonstrate
that LPR outperforms state-of-the-art methods on all three CZSL benchmark
datasets in both closed-world and open-world settings. Through qualitative
analysis, we show that LPR leverages state-object relationships for unseen
composition prediction.
|
2501.14309
|
BrainGuard: Privacy-Preserving Multisubject Image Reconstructions from
Brain Activities
|
cs.CV
|
Reconstructing perceived images from human brain activity forms a crucial
link between human and machine learning through Brain-Computer Interfaces.
Early methods primarily focused on training separate models for each individual
to account for individual variability in brain activity, overlooking valuable
cross-subject commonalities. Recent advancements have explored multisubject
methods, but these approaches face significant challenges, particularly in data
privacy and effectively managing individual variability. To overcome these
challenges, we introduce BrainGuard, a privacy-preserving collaborative
training framework designed to enhance image reconstruction from multisubject
fMRI data while safeguarding individual privacy. BrainGuard employs a
collaborative global-local architecture where individual models are trained on
each subject's local data and operate in conjunction with a shared global model
that captures and leverages cross-subject patterns. This architecture
eliminates the need to aggregate fMRI data across subjects, thereby ensuring
privacy preservation. To tackle the complexity of fMRI data, BrainGuard
integrates a hybrid synchronization strategy, enabling individual models to
dynamically incorporate parameters from the global model. By establishing a
secure and collaborative training environment, BrainGuard not only protects
sensitive brain data but also improves image reconstruction accuracy.
Extensive experiments demonstrate that BrainGuard sets a new benchmark in both
high-level and low-level metrics, advancing the state-of-the-art in brain
decoding through its innovative design.
|
2501.14310
|
Permutation-based multi-objective evolutionary feature selection for
high-dimensional data
|
cs.LG cs.AI
|
Feature selection is a critical step in the analysis of high-dimensional
data, where the number of features often vastly exceeds the number of samples.
Effective feature selection not only improves model performance and
interpretability but also reduces computational costs and mitigates the risk of
overfitting. In this context, we propose a novel feature selection method for
high-dimensional data, based on the well-known permutation feature importance
approach, but extending it to evaluate subsets of attributes rather than
individual features. This extension more effectively captures how interactions
among features influence model performance. The proposed method employs a
multi-objective evolutionary algorithm to search for candidate feature subsets,
with the objectives of maximizing the degradation in model performance when the
selected features are shuffled, and minimizing the cardinality of the feature
subset. The effectiveness of our method has been validated on a set of 24
publicly available high-dimensional datasets for classification and regression
tasks, and compared against 9 well-established feature selection methods
designed for high-dimensional problems, including the conventional permutation
feature importance method. The results demonstrate the ability of our approach
to balance accuracy and computational efficiency, providing a powerful tool
for feature selection in complex, high-dimensional datasets.
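The core mechanic described above, jointly shuffling a candidate feature subset and measuring the resulting performance degradation, can be sketched in a few lines. The toy data, stand-in model, and helper name below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 samples, 6 features; only features 0 and 1 matter, jointly.
X = rng.normal(size=(100, 6))
y = X[:, 0] * X[:, 1]  # interaction target

def r2_score(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Stand-in "model": the true interaction, so the baseline fit is perfect.
def model_predict(X):
    return X[:, 0] * X[:, 1]

def subset_permutation_importance(X, y, subset, n_repeats=20):
    """Average drop in R^2 when the columns in `subset` are shuffled jointly."""
    baseline = r2_score(y, model_predict(X))
    drops = []
    for _ in range(n_repeats):
        Xp = X.copy()
        perm = rng.permutation(len(X))
        Xp[:, subset] = Xp[perm][:, subset]  # shuffle the subset's rows together
        drops.append(baseline - r2_score(y, model_predict(Xp)))
    return float(np.mean(drops))
```

In this toy setup, shuffling the interacting pair {0, 1} destroys the fit while shuffling the irrelevant pair {4, 5} leaves it untouched; that degradation is one of the two objectives the evolutionary search maximizes, alongside minimizing subset size.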
|
2501.14311
|
An Efficient Real Time DDoS Detection Model Using Machine Learning
Algorithms
|
cs.LG
|
Distributed Denial of Service (DDoS) attacks have become a significant threat
to industries and governments, leading to substantial financial losses. With
the growing reliance on internet services, DDoS attacks can disrupt services
by overwhelming servers with false traffic, causing downtime and data breaches.
Although various detection techniques exist, selecting an effective method
remains challenging due to trade-offs between time efficiency and accuracy.
This research focuses on developing an efficient real-time DDoS detection
system using machine learning algorithms leveraging the UNB CICDDoS2019 dataset
including various traffic features. The study aims to classify DDoS and
non-DDoS traffic through various ML classifiers, including Logistic Regression,
K-Nearest Neighbors, Random Forest, Support Vector Machine, and Naive Bayes. The
dataset is preprocessed through data cleaning, standardization and feature
selection techniques using Principal Component Analysis. The research explores
the performance of these algorithms in terms of precision, recall and F1-score
as well as time complexity to create a reliable system capable of real-time
detection and mitigation of DDoS attacks. The findings indicate that RF,
AdaBoost and XGBoost outperform other algorithms in accuracy and efficiency,
making them ideal candidates for real-time applications.
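The preprocessing-plus-classification pipeline described above can be sketched end to end. Since the CICDDoS2019 features are not reproduced here, this sketch uses synthetic two-class data and a nearest-centroid classifier as a lightweight stand-in for the evaluated models:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for flow features: two classes (benign vs DDoS).
n, d = 400, 10
X0 = rng.normal(loc=0.0, size=(n // 2, d))
X1 = rng.normal(loc=2.0, size=(n // 2, d))
X = np.vstack([X0, X1])
y = np.array([0] * (n // 2) + [1] * (n // 2))

# 1) Standardize (zero mean, unit variance per feature).
mu, sigma = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sigma

# 2) PCA via SVD: project onto the top-k principal components.
k = 3
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
Z = Xs @ Vt[:k].T

# 3) Nearest-centroid classifier as a lightweight stand-in for RF/KNN/SVM.
c0, c1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
pred = (np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)).astype(int)
accuracy = (pred == y).mean()
```

The real system would swap in the cleaned CICDDoS2019 feature matrix and the listed classifiers; the standardize-then-PCA-then-classify structure is the part the abstract describes.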
|
2501.14312
|
Locality-aware Fair Scheduling in LLM Serving
|
cs.DC cs.LG
|
Large language model (LLM) inference workloads dominate a wide variety of
modern AI applications, ranging from multi-turn conversation to document
analysis. Balancing fairness and efficiency is critical for managing diverse
client workloads with varying prefix patterns. Unfortunately, existing fair
scheduling algorithms for LLM serving, such as Virtual Token Counter (VTC),
fail to take prefix locality into consideration and thus suffer from poor
performance. On the other hand, locality-aware scheduling algorithms in
existing LLM serving frameworks tend to maximize the prefix cache hit rate
without considering fair sharing among clients.
This paper introduces the first locality-aware fair scheduling algorithm,
Deficit Longest Prefix Match (DLPM), which can maintain a high degree of prefix
locality with a fairness guarantee. We also introduce a novel algorithm, Double
Deficit LPM (D$^2$LPM), extending DLPM for the distributed setup that can find
a balance point among fairness, locality, and load-balancing. Our extensive
evaluation demonstrates the superior performance of DLPM and D$^2$LPM in
ensuring fairness while maintaining high throughput (up to 2.87$\times$ higher
than VTC) and low per-client latency (up to 7.18$\times$ lower than the
state-of-the-art distributed LLM serving system).
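As an illustration of how a deficit counter (fairness) can be combined with prefix matching (locality), here is a toy scheduler in that spirit. It is a loose reading of the DLPM name, not the paper's actual algorithm; the quantum, cost model, and replenishment policy are all invented for the sketch:

```python
from collections import deque

def longest_prefix_len(a, b):
    """Length of the shared prefix of two token sequences."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

class ToyDLPMScheduler:
    """Illustrative deficit + longest-prefix-match scheduler (not the paper's exact algorithm)."""

    def __init__(self, quantum=4):
        self.quantum = quantum
        self.queues = {}   # client -> deque of token-list requests
        self.deficit = {}  # client -> remaining service credit

    def submit(self, client, tokens):
        self.queues.setdefault(client, deque()).append(tokens)
        self.deficit.setdefault(client, 0)

    def next_request(self, cached_prefix):
        # Replenish credit for backlogged clients; among clients with enough
        # credit to cover their head request, pick the one sharing the longest
        # prefix with the current cache (locality); charge the cost (fairness).
        candidates = []
        for c, q in self.queues.items():
            if not q:
                continue
            self.deficit[c] += self.quantum
            if self.deficit[c] >= len(q[0]):
                candidates.append(c)
        if not candidates:
            return None
        best = max(candidates,
                   key=lambda c: longest_prefix_len(self.queues[c][0], cached_prefix))
        req = self.queues[best].popleft()
        self.deficit[best] -= len(req)
        return best, req
```

The deficit counter bounds how far any client can get ahead, while the prefix-length tiebreak keeps cache-friendly requests together, which is the tension DLPM is designed to balance.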
|
2501.14313
|
Between Close Enough to Reveal and Far Enough to Protect: a New Privacy
Region for Correlated Data
|
cs.IT math.IT
|
When users make personal privacy choices, correlation between their data means
that users who share their data can inadvertently leak information about users
who do not want to share theirs. As a solution, we consider local redaction
mechanisms. Since prior works proposed data-independent privatization
mechanisms, we study the family of data-independent local redaction mechanisms
and upper-bound their utility when data correlation is modeled by a stationary
Markov process. In contrast, we derive a novel data-dependent mechanism, which
improves the utility by leveraging a data-dependent leakage measure.
|
2501.14314
|
Graph Feedback Bandits on Similar Arms: With and Without Graph
Structures
|
cs.LG
|
In this paper, we study the stochastic multi-armed bandit problem with graph
feedback. Motivated by applications in clinical trials and recommendation
systems, we assume that two arms are connected if and only if they are similar
(i.e., their means are close to each other). We establish a regret lower bound
for this problem under the novel feedback structure and introduce two upper
confidence bound (UCB)-based algorithms: Double-UCB, which has
problem-independent regret upper bounds, and Conservative-UCB, which has
problem-dependent upper bounds. Leveraging the similarity structure, we also
explore a scenario where the number of arms increases over time (referred to as
the \emph{ballooning setting}). Practical applications of this scenario include
Q\&A platforms (e.g., Reddit, Stack Overflow, Quora) and product reviews on
platforms like Amazon and Flipkart, where answers (or reviews) continuously
appear, and the goal is to display the best ones at the top. We extend these
two UCB-based algorithms to the ballooning setting. Under mild assumptions, we
provide regret upper bounds for both algorithms and discuss their
sub-linearity. Furthermore, we propose new versions of the corresponding
algorithms that do not rely on prior knowledge of the graph's structural
information and provide regret upper bounds. Finally, we conduct experiments to
validate the theoretical results.
|
2501.14315
|
Clear Minds Think Alike: What Makes LLM Fine-tuning Robust? A Study of
Token Perplexity
|
cs.CL
|
Maintaining consistent model performance across domains is a fundamental
challenge in machine learning. While recent work has explored using
LLM-generated data for fine-tuning, its impact on cross-domain generalization
remains poorly understood. In this paper, we present a systematic analysis
revealing that fine-tuning with LLM-generated data not only improves target
task performance but also reduces out-of-domain (OOD) degradation compared to
fine-tuning with ground truth data. Through analyzing the data sequences in
tasks from various domains, we demonstrate that this enhanced OOD robustness
stems from a reduced prevalence of high-perplexity tokens in LLM-generated
sequences. Following this hypothesis, we show that masking high-perplexity
tokens in ground truth training data achieves OOD preservation comparable to
using LLM-generated data. Extensive experiments across diverse
model architectures and scales, including Gemma2-2B, Mistral-7B and Llama3-8B,
corroborate the consistency of our findings. To the best of our knowledge, this
work provides the first mechanistic explanation for the superior OOD robustness
conferred by LLM-generated training data, offering valuable insights for
developing more robust fine-tuning strategies.
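The masking experiment described above can be sketched as a simple filter over per-token log-probabilities. The threshold and the toy log-probs are illustrative assumptions, not values from the paper:

```python
import math

def perplexity_mask(logprobs, threshold=20.0):
    """Return a 0/1 loss mask that drops tokens whose per-token perplexity
    exp(-log p) exceeds `threshold` (threshold value is illustrative)."""
    mask = []
    for lp in logprobs:
        ppl = math.exp(-lp)
        mask.append(0 if ppl > threshold else 1)
    return mask
```

Applied to the training loss, masked positions simply contribute nothing, mimicking the lower prevalence of high-perplexity tokens in LLM-generated sequences.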
|
2501.14316
|
PAID: A Framework of Product-Centric Advertising Image Design
|
cs.CV
|
Creating visually appealing advertising images is often a labor-intensive and
time-consuming process. Is it possible to automatically generate such images
using only basic product information, namely a product foreground image,
taglines, and a target size? Existing methods mainly focus on parts of the
problem and fail to provide a comprehensive solution. To address this gap, we
propose a novel multistage framework called Product-Centric Advertising Image
Design (PAID). It consists of four sequential stages to highlight product
foregrounds and taglines while achieving overall image aesthetics: prompt
generation, layout generation, background image generation, and graphics
rendering. Different expert models are designed and trained for the first three
stages: First, we use a visual language model (VLM) to generate background
prompts that match the products. Next, a VLM-based layout generation model
arranges the placement of product foregrounds, graphic elements (taglines and
decorative underlays), and various nongraphic elements (objects from the
background prompt). Following this, we train an SDXL-based image generation
model that can simultaneously accept prompts, layouts, and foreground controls.
To support the PAID framework, we create corresponding datasets with over
50,000 labeled images. Extensive experimental results and online A/B tests
demonstrate that PAID can produce more visually appealing advertising images.
|
2501.14317
|
Nautilus: Locality-aware Autoencoder for Scalable Mesh Generation
|
cs.CV
|
Triangle meshes are fundamental to 3D applications, enabling efficient
modification and rasterization while maintaining compatibility with standard
rendering pipelines. However, current automatic mesh generation methods
typically rely on intermediate representations that lack the continuous surface
quality inherent to meshes. Converting these representations into meshes
produces dense, suboptimal outputs. Although recent autoregressive approaches
demonstrate promise in directly modeling mesh vertices and faces, they are
constrained by limitations in face count, scalability, and structural
fidelity. To address these challenges, we propose Nautilus, a locality-aware
autoencoder for artist-like mesh generation that leverages the local properties
of manifold meshes to achieve structural fidelity and efficient representation.
Our approach introduces a novel tokenization algorithm that preserves face
proximity relationships and compresses sequence length through locally shared
vertices and edges, enabling the generation of meshes with an unprecedented
scale of up to 5,000 faces. Furthermore, we develop a Dual-stream Point
Conditioner that provides multi-scale geometric guidance, ensuring global
consistency and local structural fidelity by capturing fine-grained geometric
features. Extensive experiments demonstrate that Nautilus significantly
outperforms state-of-the-art methods in both fidelity and scalability. The
project page is at https://nautilusmeshgen.github.io.
|
2501.14319
|
Scalable Benchmarking and Robust Learning for Noise-Free Ego-Motion and
3D Reconstruction from Noisy Video
|
cs.CV cs.RO
|
We aim to redefine robust ego-motion estimation and photorealistic 3D
reconstruction by addressing a critical limitation: the reliance on noise-free
data in existing models. While such sanitized conditions simplify evaluation,
they fail to capture the unpredictable, noisy complexities of real-world
environments. Dynamic motion, sensor imperfections, and synchronization
perturbations lead to sharp performance declines when these models are deployed
in practice, revealing an urgent need for frameworks that embrace and excel
under real-world noise. To bridge this gap, we tackle three core challenges:
scalable data generation, comprehensive benchmarking, and model robustness
enhancement. First, we introduce a scalable noisy data synthesis pipeline that
generates diverse datasets simulating complex motion, sensor imperfections, and
synchronization errors. Second, we leverage this pipeline to create
Robust-Ego3D, a benchmark rigorously designed to expose noise-induced
performance degradation, highlighting the limitations of current learning-based
methods in ego-motion accuracy and 3D reconstruction quality. Third, we propose
Correspondence-guided Gaussian Splatting (CorrGS), a novel test-time adaptation
method that progressively refines an internal clean 3D representation by
aligning noisy observations with rendered RGB-D frames from the clean 3D map,
enhancing geometric alignment and appearance restoration through visual
correspondence. Extensive experiments on synthetic and real-world data
demonstrate that CorrGS consistently outperforms prior state-of-the-art
methods, particularly in scenarios involving rapid motion and dynamic
illumination.
|
2501.14321
|
Domain Expansion: Parameter-Efficient Modules as Building Blocks for
Composite Domains
|
cs.LG
|
Parameter-Efficient Fine-Tuning (PEFT) is an efficient alternative to full
scale fine-tuning, gaining popularity recently. With pre-trained model sizes
growing exponentially, PEFT can be effectively utilized to fine-tune compact
modules, Parameter-Efficient Modules (PEMs), trained to be domain experts over
diverse domains. In this project, we explore composing such individually
fine-tuned PEMs for distribution generalization over the composite domain. To
compose PEMs, simple composing functions are used that operate purely on the
weight space of the individually fine-tuned PEMs, without requiring any
additional fine-tuning. The proposed method is applied to the task of
representing the 16 Myers-Briggs Type Indicator (MBTI) composite personalities
via 4 building-block dichotomies, comprising 8 individual traits which can
be merged (composed) to yield a unique personality. We evaluate the individual
trait PEMs and the composed personality PEMs via an online MBTI personality
quiz questionnaire, validating the efficacy of PEFT to fine-tune PEMs and
merging PEMs without further fine-tuning for domain composition.
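A minimal sketch of the kind of weight-space composing function described above, here plain elementwise interpolation over matching parameter tensors. The parameter names and the interpolation weight are illustrative, and the paper may use other composing functions:

```python
import numpy as np

def compose_pems(pem_a, pem_b, alpha=0.5):
    """Compose two parameter-efficient modules by elementwise weight-space
    interpolation; no additional fine-tuning is involved."""
    assert pem_a.keys() == pem_b.keys()
    return {k: alpha * pem_a[k] + (1 - alpha) * pem_b[k] for k in pem_a}

# Toy "trait" PEMs with matching parameter shapes (names are hypothetical).
introvert = {"lora.A": np.ones((2, 2)), "lora.B": np.zeros((2, 2))}
intuitive = {"lora.A": np.zeros((2, 2)), "lora.B": np.ones((2, 2))}
composed = compose_pems(introvert, intuitive)
```

The appeal of operating purely in weight space is that composing a new domain expert costs a dictionary merge rather than a training run.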
|
2501.14322
|
Relative Layer-Wise Relevance Propagation: a more Robust Neural Networks
eXplaination
|
cs.LG cs.AI
|
Machine learning methods solve a plethora of tasks very successfully, but
they have the disadvantage of not providing any information about their
decisions. Consequently, estimating the reasoning of the system provides
additional information. For this, Layer-Wise Relevance Propagation (LRP) is one
of the methods in eXplainable Machine Learning (XML). Its purpose is to provide
contributions of any neural network output in the domain of its input. The main
drawback of current methods is division by small values. To overcome this
problem, we provide a new definition called Relative LRP, where the classical
conservation law is satisfied up to a multiplicative factor but without
divisions by small values, except for ResNet skip connections. In this
article, we focus on image classification. This allows us to visualize the
contributions of a pixel to the predictions of a multi-layer neural network.
Pixel contributions provide a focus for further analysis of regions of
potential interest. R-LRP can be applied to any dense, CNN, or residual neural
network. Moreover, R-LRP does not require any hyperparameter tuning, unlike
other LRP methods. We then compare the R-LRP method on different datasets with
simple CNN, VGG16, VGG19, and ResNet50 networks.
|
2501.14323
|
Automatic detection and prediction of nAMD activity change in retinal
OCT using Siamese networks and Wasserstein Distance for ordinality
|
eess.IV cs.CV cs.LG
|
Neovascular age-related macular degeneration (nAMD) is a leading cause of
vision loss among older adults, where disease activity detection and
progression prediction are critical for nAMD management in terms of timely drug
administration and improving patient outcomes. Recent advancements in deep
learning offer a promising solution for predicting changes in AMD from optical
coherence tomography (OCT) retinal volumes. In this work, we propose deep
learning models for the two tasks of the public MARIO Challenge at MICCAI 2024,
designed to detect and forecast changes in nAMD severity with longitudinal
retinal OCT. For the first task, we employ a Vision Transformer (ViT) based
Siamese Network to detect changes in AMD severity by comparing scan embeddings
of a patient from different time points. To train a model to forecast the
change after 3 months, we exploit, for the first time, an Earth Mover
(Wasserstein) Distance-based loss to harness the ordinal relation within the
severity change classes. Both models ranked high on the preliminary
leaderboard, demonstrating that their predictive capabilities could facilitate
nAMD treatment management.
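For distributions over ordered classes, the Earth Mover's (Wasserstein) Distance reduces to a sum of absolute differences of cumulative distributions, which is what makes it a natural ordinal loss. A minimal numpy version follows; the challenge model's exact loss formulation is not reproduced here:

```python
import numpy as np

def emd_loss_1d(probs, target_class, n_classes):
    """Earth Mover's Distance between a predicted distribution over ordered
    severity classes and a one-hot target, via cumulative sums (1-D EMD)."""
    target = np.zeros(n_classes)
    target[target_class] = 1.0
    return float(np.abs(np.cumsum(probs) - np.cumsum(target)).sum())
```

A prediction one severity class away is penalized less than one three classes away, unlike plain cross-entropy, which treats all misclassifications identically.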
|
2501.14325
|
Joint Infrastructure Planning and Order Assignment for On-Demand
Food-Delivery Services with Coordinated Drones and Human Couriers
|
eess.SY cs.SY math.OC
|
This paper investigates the optimal infrastructure planning and order
assignment problem of an on-demand food-delivery platform with a mixed fleet of
drones and human couriers. The platform has two delivery modes: (a) ground
delivery and (b) drone-assisted delivery (i.e., air delivery). In ground
delivery, couriers directly collect and transport orders from restaurants to
destinations. For air delivery, the delivery process involves three legs:
initially, a human courier picks up the order from the restaurant and
transports it to a nearby launchpad, where personnel load the orders onto
drones and replace batteries as needed. The loaded drone then transports the
order from the launchpad to a kiosk, where another courier retrieves the order
from the kiosk for final delivery. The platform must determine the optimal
locations for launchpads and kiosks within a transportation network, and devise
an order assignment strategy that allocates food-delivery orders between ground
and air delivery considering the bundling probabilities of ground deliveries
and the waiting times at launchpads and kiosks. We formulate the platform's
problem as a mixed-integer nonlinear program and develop a novel neural
network-assisted optimization method to obtain high-quality solutions. A case
study in Hong Kong validates our model and algorithm, revealing that drone
delivery reduces operational costs, minimizes courier fleet size, and increases
order bundling opportunities. We also find that the expansion of air delivery
services may entail longer delivery times due to the trade-off between the
travel-time savings induced by faster air delivery and the associated detours
incurred by intermodal transfers and extra waiting times at launchpads and
kiosks; this trade-off crucially depends on the distance of the orders and the
sequence of activating long-distance air delivery routes versus short-distance
ones.
|
2501.14334
|
Exploring the sustainable scaling of AI dilemma: A projective study of
corporations' AI environmental impacts
|
cs.AI cs.CY cs.LG
|
The rapid growth of artificial intelligence (AI), particularly Large Language
Models (LLMs), has raised concerns regarding its global environmental impact
that extends beyond greenhouse gas emissions to include consideration of
hardware fabrication and end-of-life processes. The opacity of major
providers hinders companies' ability to evaluate their AI-related
environmental impacts and achieve net-zero targets.
In this paper, we propose a methodology to estimate the environmental impact
of a company's AI portfolio, providing actionable insights without
necessitating extensive AI and Life-Cycle Assessment (LCA) expertise. Results
confirm that large generative AI models consume up to 4600x more energy than
traditional models. Our modelling approach, which accounts for increased AI
usage, hardware computing efficiency, and changes in electricity mix in line
with IPCC scenarios, forecasts AI electricity use up to 2030. Under a high
adoption scenario, driven by widespread adoption of Generative AI and agents
together with increasingly complex models and frameworks, AI electricity use is
projected to rise by a factor of 24.4.
Mitigating the environmental impact of Generative AI by 2030 requires
coordinated efforts across the AI value chain. Isolated measures in hardware
efficiency, model efficiency, or grid improvements alone are insufficient. We
advocate for standardized environmental assessment frameworks, greater
transparency from all actors of the value chain, and the introduction of a
"Return on Environment" metric to align AI development with net-zero goals.
|
2501.14338
|
Correlation-Based Band Selection for Hyperspectral Image Classification
|
cs.CV eess.IV
|
Hyperspectral images offer extensive spectral information about ground
objects across multiple spectral bands. However, the large volume of data can
pose challenges during processing. Typically, adjacent bands in hyperspectral
data are highly correlated, leading to the use of only a few selected bands for
various applications. In this work, we present a correlation-based band
selection approach for hyperspectral image classification. Our approach
calculates the average correlation between bands using correlation coefficients
to identify the relationships among different bands. Afterward, we select a
subset of bands by analyzing the average correlation and applying a
threshold-based method. This allows us to isolate and retain bands that exhibit
lower inter-band dependencies, ensuring that the selected bands provide diverse
and non-redundant information. We evaluate our proposed approach on two
standard benchmark datasets: Pavia University (PA) and Salinas Valley (SA),
focusing on image classification tasks. The experimental results demonstrate
that our method performs competitively with other standard band selection
approaches.
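The described procedure, averaging each band's correlation with all others and keeping the low-correlation bands, can be sketched directly in numpy. The threshold value and the exact keep/drop rule are illustrative assumptions:

```python
import numpy as np

def select_bands(cube, threshold=0.5):
    """Correlation-based band selection: compute the band-by-band correlation
    matrix, average each band's absolute correlation with the other bands,
    and keep bands whose average falls below `threshold` (illustrative rule)."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b)                       # pixels x bands
    corr = np.corrcoef(flat, rowvar=False)           # b x b correlation matrix
    avg = (np.abs(corr).sum(axis=0) - 1) / (b - 1)   # exclude self-correlation
    return np.where(avg < threshold)[0]

# Toy cube: bands 0-2 nearly identical, band 3 independent noise.
rng = np.random.default_rng(0)
base = rng.normal(size=(8, 8))
cube = np.stack([base,
                 base + 0.01 * rng.normal(size=(8, 8)),
                 base + 0.01 * rng.normal(size=(8, 8)),
                 rng.normal(size=(8, 8))], axis=-1)
selected = select_bands(cube)
```

Highly redundant bands drag each other's average correlation up and are dropped together, leaving a subset with diverse, non-redundant spectral information.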
|
2501.14340
|
From Classical to Quantum: Explicit Classical Distributions Achieving
Maximal Quantum $f$-Divergence
|
quant-ph cs.IT math.IT
|
Explicit classical states achieving maximal $f$-divergence are given,
allowing for a simple proof of Matsumoto's Theorem, and the systematic
extension of any inequality between classical $f$-divergences to quantum
$f$-divergences. Our methodology is particularly simple as it does not require
any elaborate matrix analysis machinery but only basic linear algebra. It is
also effective, as illustrated by two examples improving existing bounds:
(i)~an improved quantum Pinsker inequality is derived between $\chi^2$ and
trace norm, and leveraged to improve a bound in decoherence theory; (ii)~a new
reverse quantum Pinsker inequality is derived for any quantum $f$-divergence,
and compared to previous (Audenaert-Eisert and Hirche-Tomamichel) bounds.
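For context, the classical Pinsker inequality that such quantum bounds generalize reads (divergences in nats):

$$ D(P \,\|\, Q) \;\ge\; \tfrac{1}{2}\,\|P - Q\|_1^2, $$

and a standard classical route to $\chi^2$-type bounds is $D(P\,\|\,Q) \le \log\bigl(1 + \chi^2(P\,\|\,Q)\bigr)$; these are background facts only, and the paper's improved quantum $\chi^2$-trace-norm inequality itself is not reproduced here.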
|
2501.14342
|
Chain-of-Retrieval Augmented Generation
|
cs.IR cs.CL
|
This paper introduces an approach for training o1-like RAG models that
retrieve and reason over relevant information step by step before generating
the final answer. Conventional RAG methods usually perform a single retrieval
step before the generation process, which limits their effectiveness in
addressing complex queries due to imperfect retrieval results. In contrast, our
proposed method, CoRAG (Chain-of-Retrieval Augmented Generation), allows the
model to dynamically reformulate the query based on the evolving state. To
train CoRAG effectively, we utilize rejection sampling to automatically
generate intermediate retrieval chains, thereby augmenting existing RAG
datasets that only provide the correct final answer. At test time, we propose
various decoding strategies to scale the model's test-time compute by
controlling the length and number of sampled retrieval chains. Experimental
results across multiple benchmarks validate the efficacy of CoRAG, particularly
in multi-hop question answering tasks, where we observe more than 10 points
improvement in EM score compared to strong baselines. On the KILT benchmark,
CoRAG establishes a new state-of-the-art performance across a diverse range of
knowledge-intensive tasks. Furthermore, we offer comprehensive analyses to
understand the scaling behavior of CoRAG, laying the groundwork for future
research aimed at developing factual and grounded foundation models.
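The retrieve-reformulate-answer loop can be sketched generically. The stub retriever, toy knowledge base, and stopping rule below are invented for illustration and are not CoRAG's actual components (which involve rejection-sampled retrieval chains and a trained model):

```python
def chain_of_retrieval(question, retrieve, reformulate, answer, max_steps=4):
    """Iteratively retrieve, accumulate state, and reformulate the query
    until an answer can be produced (illustrative CoRAG-style control flow)."""
    state, query = [], question
    for _ in range(max_steps):
        state.extend(retrieve(query))
        result = answer(question, state)
        if result is not None:
            return result
        query = reformulate(question, state)
    return None

# Toy two-hop example with a dictionary "retriever" (all names hypothetical).
kb = {
    "capital of France": ["Paris is the capital of France."],
    "Paris population": ["Paris has about 2.1 million inhabitants."],
}
retrieve = lambda q: kb.get(q, [])
answer = lambda q, docs: ("about 2.1 million"
                          if any("2.1 million" in d for d in docs) else None)
reformulate = lambda q, docs: ("Paris population"
                               if any("Paris" in d for d in docs)
                               else "capital of France")

result = chain_of_retrieval("What is the population of the capital of France?",
                            retrieve, reformulate, answer)
```

The first retrieval fails, the reformulated query finds the bridge fact, and the second reformulation reaches the answer, which is the multi-hop pattern where a single-retrieval RAG baseline breaks down.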
|
2501.14345
|
A Ground Truth Approach for Assessing Process Mining Techniques
|
cs.DB
|
The assessment of process mining techniques using real-life data is often
compromised by the lack of ground truth knowledge, the presence of
non-essential outliers in system behavior, and recording errors in event logs.
Using synthetically generated data makes ground truth available for better
evaluation. Existing log generation tools inject noise directly into the logs,
which does not capture many typical behavioral deviations. Furthermore, the
link between the model and the log, which is needed for later assessment,
becomes lost.
We propose a ground-truth approach for generating process data from either
existing or synthetic initial process models, whether automatically generated
or hand-made. This approach incorporates patterns of behavioral deviations and
recording errors to produce a synthetic yet realistic deviating model and
imperfect event log. These, together with the initial model, are required to
assess process mining techniques based on ground truth knowledge. We
demonstrate this approach to create datasets of synthetic process data for
three processes, one of which we used in a conformance checking use case,
focusing on the assessment of (relaxed) systemic alignments to expose and
explain deviations in modeled and recorded behavior. Our results show that this
approach, unlike traditional methods, provides detailed insights into the
strengths and weaknesses of process mining techniques, both quantitatively and
qualitatively.
|
2501.14346
|
HorNets: Learning from Discrete and Continuous Signals with Routing
Neural Networks
|
cs.LG cs.AI
|
Construction of neural network architectures suitable for learning from both
continuous and discrete tabular data is a challenging research endeavor.
Contemporary high-dimensional tabular data sets are often characterized by a
relatively small instance count, requiring data-efficient learning. We propose
HorNets (Horn Networks), a neural network architecture with state-of-the-art
performance on synthetic and real-life data sets from scarce-data tabular
domains. HorNets are based on a clipped polynomial-like activation function,
extended by a custom discrete-continuous routing mechanism that decides which
part of the neural network to optimize based on the input's cardinality. By
explicitly modeling parts of the feature combination space or combining the
whole space in a linear attention-like manner, HorNets dynamically decide which mode
of operation is the most suitable for a given piece of data with no explicit
supervision. This architecture is one of the few approaches that reliably
retrieves logical clauses (including noisy XNOR) and achieves state-of-the-art
classification performance on 14 real-life biomedical high-dimensional data
sets. HorNets are made freely available under a permissive license alongside a
synthetic generator of categorical benchmarks.
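As a rough illustration of the routing idea described above (a toy dispatcher, not the HorNets implementation; the column names and the cardinality threshold are hypothetical), a column's cardinality can decide which branch of the network processes it:

```python
# Toy sketch of cardinality-based routing: low-cardinality (discrete) columns
# are sent to an explicit feature-combination branch, high-cardinality
# (continuous) columns to a linear attention-like branch.

def route(column, max_discrete=10):
    """Pick a processing branch from the column's number of unique values."""
    return "discrete" if len(set(column)) <= max_discrete else "continuous"

binary_col = [0, 1, 1, 0, 1] * 20                 # 2 unique values
measurement_col = [i / 100.0 + 0.001 * (i % 7)    # 100 distinct readings
                   for i in range(100)]

branch_a = route(binary_col)       # -> "discrete"
branch_b = route(measurement_col)  # -> "continuous"
```

In the actual architecture this decision is made without explicit supervision; the threshold here simply stands in for that mechanism.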
|
2501.14349
|
Online Inverse Linear Optimization: Improved Regret Bound, Robustness to
Suboptimality, and Toward Tight Regret Analysis
|
cs.LG
|
We study an online learning problem where, over $T$ rounds, a learner
observes both time-varying sets of feasible actions and an agent's optimal
actions, selected by solving linear optimization over the feasible actions. The
learner sequentially makes predictions of the agent's underlying linear
objective function, and their quality is measured by the regret, the cumulative
gap between optimal objective values and those achieved by following the
learner's predictions. A seminal work by B\"armann et al. (ICML 2017) showed
that online learning methods can be applied to this problem to achieve regret
bounds of $O(\sqrt{T})$. Recently, Besbes et al. (COLT 2021, Oper. Res. 2023)
significantly improved the result by achieving an $O(n^4\ln T)$ regret bound,
where $n$ is the dimension of the ambient space of objective vectors. Their
method, based on the ellipsoid method, runs in polynomial time but is
inefficient for large $n$ and $T$. In this paper, we obtain an $O(n\ln T)$
regret bound, improving upon the previous bound of $O(n^4\ln T)$ by a factor of
$n^3$. Our method is simple and efficient: we apply the online Newton step
(ONS) to appropriate exp-concave loss functions. Moreover, for the case where
the agent's actions are possibly suboptimal, we establish an $O(n\ln
T+\sqrt{\Delta_Tn\ln T})$ regret bound, where $\Delta_T$ is the cumulative
suboptimality of the agent's actions. This bound is achieved by using MetaGrad,
which runs ONS with $\Theta(\ln T)$ different learning rates in parallel. We
also provide a simple instance that implies an $\Omega(n)$ lower bound, showing
that our $O(n\ln T)$ bound is tight up to an $O(\ln T)$ factor. This gives rise
to a natural question: can the $O(\ln T)$ factor in the upper bound be removed?
For the special case of $n=2$, we show that an $O(1)$ regret bound is possible,
while we delineate challenges in extending this result to higher dimensions.
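A minimal sketch of the core tool, the online Newton step, on a toy 2-D instance of the problem (the losses, parameters `gamma`/`eps`, and data are illustrative choices, not the paper's tuned construction): the learner predicts the agent's objective vector and refines it with Newton-style updates on exp-concave squared losses.

```python
# Online Newton Step (ONS) in 2-D: maintain A = eps*I + sum of gradient outer
# products, take Newton-style steps x <- x - (1/gamma) A^{-1} g, and project
# back onto a Euclidean ball.

def ons_2d(data, gamma=1.0, eps=1.0, radius=1.0):
    """Run ONS on losses f_t(x) = (a_t . x - y_t)^2; return the final iterate."""
    x = [0.0, 0.0]
    A = [[eps, 0.0], [0.0, eps]]          # eps * identity
    for a, y in data:
        pred = a[0] * x[0] + a[1] * x[1]
        g = [2 * (pred - y) * a[0], 2 * (pred - y) * a[1]]  # loss gradient
        A[0][0] += g[0] * g[0]
        A[0][1] += g[0] * g[1]
        A[1][0] += g[1] * g[0]
        A[1][1] += g[1] * g[1]
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        inv = [[A[1][1] / det, -A[0][1] / det],
               [-A[1][0] / det, A[0][0] / det]]
        x[0] -= (inv[0][0] * g[0] + inv[0][1] * g[1]) / gamma
        x[1] -= (inv[1][0] * g[0] + inv[1][1] * g[1]) / gamma
        norm = (x[0] ** 2 + x[1] ** 2) ** 0.5
        if norm > radius:                 # project onto the ball
            x = [x[0] * radius / norm, x[1] * radius / norm]
    return x

# Hidden objective c = (0.6, 0.8); the learner sees noiseless optimal values.
data = [((1.0, 0.0), 0.6), ((0.0, 1.0), 0.8), ((1.0, 1.0), 1.4)] * 20
est = ons_2d(data)   # converges toward (0.6, 0.8)
```

The $O(n \ln T)$ bound comes from running exactly this kind of update on appropriately chosen exp-concave losses; MetaGrad additionally runs several such learners with different learning rates in parallel.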
|
2501.14351
|
Facies Classification with Copula Entropy
|
cs.LG physics.geo-ph stat.AP
|
In this paper we propose to apply copula entropy (CE) to facies
classification. In our method, the correlations between geological variables
and facies classes are measured with CE and then the variables associated with
large negative CEs are selected for classification. We verified the proposed
method on a typical facies dataset for facies classification and the
experimental results show that the proposed method can select less geological
variables for facies classification without sacrificing classification
performance. The geological variables such selected are also interpretable to
geologists with geological meanings due to the rigorous definition of CE.
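As a rough sketch of the selection criterion (a toy histogram estimator on synthetic data, not the paper's CE estimator; variable names are hypothetical): copula entropy equals the negative mutual information between a variable and the facies class, so the most informative variables are those with the largest negative CE.

```python
# Approximate CE(x, labels) = -MI by rank-transforming x to the copula scale,
# binning it, and computing histogram mutual information with the class label.
import math
from collections import Counter

def copula_entropy(x, labels, bins=4):
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])
    u = [0.0] * n
    for rank, i in enumerate(order, start=1):
        u[i] = rank / n                      # empirical CDF (copula scale)
    bx = [min(int(v * bins), bins - 1) for v in u]
    joint, px, py = Counter(zip(bx, labels)), Counter(bx), Counter(labels)
    mi = 0.0
    for (a, b), c in joint.items():
        mi += (c / n) * math.log(c * n / (px[a] * py[b]))
    return -mi   # CE = -MI: informative variables get large negative values

# Toy "geological" variables: var1 tracks the facies label, var2 is noise-like.
labels = [0, 0, 0, 0, 1, 1, 1, 1] * 10
var1 = [0.1 * (i % 8) + (i % 8 >= 4) for i in range(80)]
var2 = [((i * 37) % 80) / 80.0 for i in range(80)]
ce1 = copula_entropy(var1, labels)   # strongly negative: keep this variable
ce2 = copula_entropy(var2, labels)   # near zero: drop this variable
```

Selecting the variables whose CE falls below a threshold then yields a compact, interpretable feature set for the classifier.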
|
2501.14356
|
Causal-Inspired Multitask Learning for Video-Based Human Pose Estimation
|
cs.CV
|
Video-based human pose estimation has long been a fundamental yet challenging
problem in computer vision. Previous studies focus on spatio-temporal modeling
through the enhancement of architecture design and optimization strategies.
However, they overlook the causal relationships among the joints, leading to
models that may be overly tailored to training conditions and thus generalize
poorly to challenging scenes. Therefore, adequate causal reasoning capability,
coupled with good model interpretability, is indispensable and a prerequisite
for achieving reliable results. In this paper, we pioneer a causal perspective on
pose estimation and introduce a causal-inspired multitask learning framework,
consisting of two stages. \textit{In the first stage}, we try to endow the
model with causal spatio-temporal modeling ability by introducing two
self-supervised auxiliary tasks. Specifically, these auxiliary tasks enable
the network to infer challenging keypoints based on observed keypoint
information, thereby imbuing causal reasoning capabilities into the model and
making it robust to challenging scenes. \textit{In the second stage}, we argue
that not all feature tokens contribute equally to pose estimation. Prioritizing
causal (keypoint-relevant) tokens is crucial to achieve reliable results, which
could improve the interpretability of the model. To this end, we propose a
Token Causal Importance Selection module to identify the causal tokens and
non-causal tokens (\textit{e.g.}, background and objects). Additionally,
non-causal tokens could provide potentially beneficial cues but may be
redundant. We further introduce a non-causal tokens clustering module to merge
the similar non-causal tokens. Extensive experiments show that our method
outperforms state-of-the-art methods on three large-scale benchmark datasets.
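The second-stage token handling can be sketched on toy scalar "tokens" (importance scores, the top-k rule, and the greedy 1-D merge below are illustrative stand-ins, not the paper's Token Causal Importance Selection module):

```python
# Keep the top-k tokens by importance as "causal"; greedily merge the
# remaining (non-causal) tokens that are closer than a threshold, so
# redundant background tokens collapse while potentially useful cues survive.

def select_and_merge(tokens, scores, k=2, merge_threshold=0.1):
    order = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)
    causal = [tokens[i] for i in order[:k]]
    rest = sorted(tokens[i] for i in order[k:])
    merged = []
    for t in rest:
        if merged and abs(t - merged[-1]) < merge_threshold:
            merged[-1] = (merged[-1] + t) / 2   # merge near-duplicates
        else:
            merged.append(t)
    return causal, merged

tokens = [0.90, 0.11, 0.10, 0.95, 0.50]
scores = [0.9, 0.1, 0.2, 0.8, 0.3]
causal, merged = select_and_merge(tokens, scores, k=2)
# causal -> [0.9, 0.95]; the two ~0.1 tokens merge, 0.5 stays separate
```

In the real model, tokens are feature vectors and the importance scores and clustering are learned, but the keep-then-merge structure is the same.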
|
2501.14358
|
CSI-Free Low-Complexity Remote State Estimation over Wireless MIMO
Fading Channels using Semantic Analog Aggregation
|
eess.SY cs.IT cs.SY eess.SP math.IT
|
In this work, we investigate low-complexity remote system state estimation
over wireless multiple-input-multiple-output (MIMO) channels without requiring
prior knowledge of channel state information (CSI). We start by reviewing the
conventional Kalman filtering-based state estimation algorithm, which typically
relies on perfect CSI and incurs considerable computational complexity. To
overcome the need for CSI, we introduce a novel semantic aggregation method, in
which sensors transmit semantic measurement discrepancies to the remote state
estimator through analog aggregation. To further reduce computational
complexity, we introduce a constant-gain-based filtering algorithm that can be
optimized offline using the constrained stochastic successive convex
approximation (CSSCA) method. We derive a closed-form sufficient condition for
the estimation stability of our proposed scheme via Lyapunov drift analysis.
Numerical results showcase significant performance gains using the proposed
scheme compared to several widely used methods.
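The appeal of a constant-gain filter can be seen on a toy scalar system (all parameters below are hypothetical; the paper optimizes the gain offline via CSSCA, which this sketch does not do): a fixed gain removes the per-step covariance/Riccati update that makes Kalman filtering costly, and stability only requires the error dynamics to be contractive.

```python
# Constant-gain state estimation for x_{k+1} = a x_k + w, y_k = x_k + v:
# xhat <- a*xhat + L*(y - a*xhat). No covariance matrices are propagated;
# stability needs |a*(1 - L)| < 1.
import random

def constant_gain_estimate(ys, a=0.9, L=0.5):
    xhat, estimates = 0.0, []
    for y in ys:
        pred = a * xhat                 # time update
        xhat = pred + L * (y - pred)    # measurement update, fixed gain
        estimates.append(xhat)
    return estimates

random.seed(0)
a, x = 0.9, 5.0
xs, ys = [], []
for _ in range(200):
    x = a * x + random.gauss(0.0, 0.1)       # process noise
    xs.append(x)
    ys.append(x + random.gauss(0.0, 0.5))    # measurement noise

est = constant_gain_estimate(ys, a=a, L=0.5)
mse_filter = sum((e - t) ** 2 for e, t in zip(est, xs)) / len(xs)
mse_raw = sum((y - t) ** 2 for y, t in zip(ys, xs)) / len(xs)
# the fixed-gain filter beats the raw measurements despite its simplicity
```

The Lyapunov-drift stability condition in the paper plays the role of the scalar contraction condition here, generalized to the MIMO, CSI-free setting.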
|
2501.14360
|
In System Alignments we Trust! Explainable Alignments via Projections
|
cs.AI cs.FL
|
Alignments are a well-known process mining technique for reconciling system
logs and normative process models. Evidence of certain behaviors in a real
system may only be present in one representation - either a log or a model -
but not in the other. In processes where multiple entities, such as objects
and resources, are involved in the activities, their interactions affect the
behavior and are therefore essential to take into account in the alignments.
Additionally, both logged and modeled representations of reality may be
imprecise and only partially represent some of these entities, but not all. In
this paper, we introduce the concept of "relaxations" through projections for
alignments to deal with partially correct models and logs. Relaxed alignments
help to distinguish between trustworthy and untrustworthy content of the two
representations (the log and the model) to achieve a better understanding of
the underlying process and expose quality issues.
|
2501.14369
|
Low-rank Prompt Interaction for Continual Vision-Language Retrieval
|
cs.CV
|
Research on continual learning in multi-modal tasks has been receiving
increasing attention. However, most existing work overlooks the explicit
cross-modal and cross-task interactions. In this paper, we innovatively propose
the Low-rank Prompt Interaction (LPI) to address this general problem of
multi-modal understanding, which considers both cross-modal and cross-task
interactions. Specifically, as for the former, we employ multi-modal
correlation modules for the corresponding Transformer layers. Considering that the
training parameters scale with the number of layers and tasks, we propose
low-rank interaction-augmented decomposition to avoid memory explosion while
enhancing the cross-modal association through sharing and separating
common-specific low-rank factors. In addition, due to the multi-modal semantic
differences carried by the low-rank initialization, we adopt hierarchical
low-rank contrastive learning to ensure training robustness. As for the latter,
we initially employ a visual analysis and identify that different tasks have
clear distinctions in proximity. Therefore, we introduce explicit task
contrastive constraints in the prompt learning process based on task semantic
distances. Experiments on two retrieval tasks show performance improvements
with the introduction of a minimal number of parameters, demonstrating the
effectiveness of our method. Code is available at
https://github.com/Kelvin-ywc/LPI.
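The parameter-saving argument behind the low-rank decomposition can be made concrete with a back-of-the-envelope count (the sizes below and the shared-vs-specific split are hypothetical, not the paper's exact factorization):

```python
# Replacing a full d x d interaction matrix per (layer, task) pair with a
# shared low-rank factor A (d x r) per layer and a task-specific factor
# B (r x d) keeps the parameter count from exploding with layers and tasks.

def full_params(d, layers, tasks):
    return d * d * layers * tasks

def low_rank_params(d, r, layers, tasks):
    shared = d * r * layers             # common factor, shared across tasks
    specific = r * d * layers * tasks   # task-specific factor
    return shared + specific

d, r, layers, tasks = 768, 8, 12, 5
full = full_params(d, layers, tasks)        # 35,389,440 parameters
lr = low_rank_params(d, r, layers, tasks)   # 442,368 parameters
ratio = lr / full                            # roughly 1.25% of the full count
```

Sharing the common factor across tasks while separating the task-specific one is also what lets the method strengthen cross-modal association rather than merely compress.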
|
2501.14371
|
DRESSing Up LLM: Efficient Stylized Question-Answering via Style
Subspace Editing
|
cs.CL cs.AI cs.LG
|
We introduce DRESS, a novel approach for generating stylized large language
model (LLM) responses through representation editing. Existing methods like
prompting and fine-tuning are either insufficient for complex style adaptation
or computationally expensive, particularly in tasks like NPC creation or
character role-playing. Our approach leverages the over-parameterized nature of
LLMs to disentangle a style-relevant subspace within the model's representation
space to conduct representation editing, ensuring a minimal impact on the
original semantics. By applying adaptive editing strengths, we dynamically
adjust the steering vectors in the style subspace to maintain both stylistic
fidelity and semantic integrity. We develop two stylized QA benchmark datasets
to validate the effectiveness of DRESS, and the results demonstrate significant
improvements compared to baseline methods such as prompting and ITI. In short,
DRESS is a lightweight, train-free solution for enhancing LLMs with flexible
and effective style control, making it particularly useful for developing
stylized conversational agents. Codes and benchmark datasets are available at
https://github.com/ArthurLeoM/DRESS-LLM.
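The representation-editing step can be sketched with toy vectors (the real style subspace and editing strengths are learned from data; the direction and cap below are hypothetical placeholders, not the DRESS procedure):

```python
# Nudge a hidden state along a style direction, with an adaptive strength
# that caps the shift so the edit cannot drift too far from the original
# semantics of the representation.

def edit_representation(h, style_dir, alpha=1.0, max_shift=0.5):
    shift = [alpha * v for v in style_dir]
    norm = sum(s * s for s in shift) ** 0.5
    if norm > max_shift:                       # adaptive strength: rescale
        shift = [s * max_shift / norm for s in shift]
    return [hi + si for hi, si in zip(h, shift)]

h = [0.2, -0.1, 0.4]            # toy hidden state
style = [1.0, 0.0, 0.0]         # hypothetical "style" direction
edited = edit_representation(h, style, alpha=2.0, max_shift=0.5)
# the requested shift of 2.0 is capped at 0.5, so only one coordinate moves
```

Because the edit is a cheap vector operation applied at inference time, no fine-tuning is needed, which is what makes the approach train-free.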
|
2501.14373
|
Fat-to-Thin Policy Optimization: Offline RL with Sparse Policies
|
cs.LG
|
Sparse continuous policies are distributions that can choose some actions at
random yet keep strictly zero probability for the other actions, which are
radically different from the Gaussian. They have important real-world
implications, e.g. in modeling safety-critical tasks like medicine. The
combination of offline reinforcement learning and sparse policies provides a
novel paradigm that enables learning a safety-aware sparse policy entirely
from logged datasets. However, sparse policies can cause difficulty for
existing offline algorithms, which require evaluating actions that fall
outside of the current support. In this paper, we propose the first offline
policy optimization algorithm that tackles this challenge: Fat-to-Thin Policy
Optimization (FtTPO). Specifically, we maintain a fat (heavy-tailed) proposal
policy that effectively learns from the dataset and injects knowledge to a thin
(sparse) policy, which is responsible for interacting with the environment. We
instantiate FtTPO with the general $q$-Gaussian family that encompasses both
heavy-tailed and sparse policies and verify that it performs favorably in a
safety-critical treatment simulation and the standard MuJoCo suite. Our code is
available at \url{https://github.com/lingweizhu/fat2thin}.
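The key property of the $q$-Gaussian family can be checked directly (a toy, unnormalized density evaluation, not the FtTPO algorithm): for $q < 1$ the density has compact support, so out-of-support actions receive strictly zero probability, whereas a Gaussian is positive everywhere.

```python
# Unnormalized q-Gaussian f(x) ~ [1 - (1-q) beta x^2]_+^{1/(1-q)};
# q -> 1 recovers the Gaussian exp(-beta x^2).
import math

def q_gaussian_unnormalized(x, q=0.0, beta=1.0):
    if abs(q - 1.0) < 1e-12:
        return math.exp(-beta * x * x)        # Gaussian limit
    base = 1.0 - (1.0 - q) * beta * x * x
    if base <= 0.0:
        return 0.0                            # sparse: zero outside support
    return base ** (1.0 / (1.0 - q))

inside = q_gaussian_unnormalized(0.5, q=0.0, beta=1.0)    # within support
outside = q_gaussian_unnormalized(2.0, q=0.0, beta=1.0)   # exactly 0.0
gaussian_tail = q_gaussian_unnormalized(2.0, q=1.0, beta=1.0)  # > 0
```

This zero-outside-support behavior is precisely what breaks offline methods that must evaluate out-of-support actions, and what the fat (heavy-tailed, $q > 1$) proposal policy is introduced to work around.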
|
2501.14377
|
Dream to Fly: Model-Based Reinforcement Learning for Vision-Based Drone
Flight
|
cs.RO
|
Autonomous drone racing has risen as a challenging robotic benchmark for
testing the limits of learning, perception, planning, and control. Expert human
pilots are able to agilely fly a drone through a race track by mapping the
real-time feed from a single onboard camera directly to control commands.
Recent works in autonomous drone racing attempting direct pixel-to-commands
control policies (without explicit state estimation) have either relied on
intermediate representations that simplify the observation space or performed
extensive bootstrapping using Imitation Learning (IL). This paper introduces an
approach that learns policies from scratch, allowing a quadrotor to
autonomously navigate a race track by directly mapping raw onboard camera
pixels to control commands, just as human pilots do. By leveraging model-based
reinforcement learning~(RL) - specifically DreamerV3 - we train visuomotor
policies capable of agile flight through a race track using only raw pixel
observations. While model-free RL methods such as PPO struggle to learn under
these conditions, DreamerV3 efficiently acquires complex visuomotor behaviors.
Moreover, because our policies learn directly from pixel inputs, the
perception-aware reward term employed in previous RL approaches to guide the
training process is no longer needed. Our experiments demonstrate in both
simulation and real-world flight how the proposed approach can be deployed on
agile quadrotors. This approach advances the frontier of vision-based
autonomous flight and shows that model-based RL is a promising direction for
real-world robotics.
|