id | title | categories | abstract
|---|---|---|---|
2502.10467
|
YNote: A Novel Music Notation for Fine-Tuning LLMs in Music Generation
|
cs.SD cs.AI eess.AS
|
The field of music generation using Large Language Models (LLMs) is evolving
rapidly, yet existing music notation systems, such as MIDI, ABC Notation, and
MusicXML, remain too complex for effective fine-tuning of LLMs. These formats
are difficult for both machines and humans to interpret due to their
variability and intricate structure. To address these challenges, we introduce
YNote, a simplified music notation system that uses only four characters to
represent a note and its pitch. YNote's fixed format ensures consistency,
making it easy to read and more suitable for fine-tuning LLMs. In our
experiments, we fine-tuned GPT-2 (124M) on a YNote-encoded dataset and achieved
BLEU and ROUGE scores of 0.883 and 0.766, respectively. With just two notes as
prompts, the model was able to generate coherent and stylistically relevant
music. We believe YNote offers a practical alternative to existing music
notations for machine learning applications and has the potential to
significantly enhance the quality of music generation using LLMs.
|
2502.10470
|
MetaDE: Evolving Differential Evolution by Differential Evolution
|
cs.NE cs.AI
|
As a cornerstone in the Evolutionary Computation (EC) domain, Differential
Evolution (DE) is known for its simplicity and effectiveness in handling
challenging black-box optimization problems. While the advantages of DE are
well-recognized, achieving peak performance heavily depends on its
hyperparameters such as the mutation factor, crossover probability, and the
selection of specific DE strategies. Traditional approaches to this
hyperparameter dilemma have leaned towards parameter tuning or adaptive
mechanisms. However, identifying the optimal settings tailored for specific
problems remains a persistent challenge. In response, we introduce MetaDE, an
approach that evolves DE's intrinsic hyperparameters and strategies using DE
itself at a meta-level. A pivotal aspect of MetaDE is a specialized
parameterization technique, which endows it with the capability to dynamically
modify DE's parameters and strategies throughout the evolutionary process. To
augment computational efficiency, MetaDE incorporates a design that leverages
parallel processing through a GPU-accelerated computing framework. Within such
a framework, DE is not just a solver but also an optimizer for its own
configurations, thus streamlining the process of hyperparameter optimization
and problem-solving into a cohesive and automated workflow. Extensive
evaluations on the CEC2022 benchmark suite demonstrate MetaDE's promising
performance. Moreover, MetaDE also performs competitively when applied to
robot control via evolutionary reinforcement learning. The
source code of MetaDE is publicly accessible at:
https://github.com/EMI-Group/metade.
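The core idea, DE tuning DE's own hyperparameters at a meta-level, can be sketched minimally. This is an illustration, not MetaDE itself: it omits strategy selection, the specialized parameterization, and GPU parallelism, and every function name below is made up.

```python
import random

def de(fitness, dim, bounds, F, CR, pop_size=20, gens=30, seed=0):
    """Minimal DE/rand/1/bin; returns the best objective value found."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [fitness(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            trial = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                     if rng.random() < CR else pop[i][k]
                     for k in range(dim)]
            f = fitness(trial)
            if f < fit[i]:
                pop[i], fit[i] = trial, f
    return min(fit)

def sphere(x):
    return sum(v * v for v in x)

# Meta-level: the fitness of a hyperparameter vector (F, CR) is the
# result of the inner DE run it configures.
def meta_fitness(params):
    F = min(max(params[0], 0.0), 1.0)   # clamp to a valid range
    CR = min(max(params[1], 0.0), 1.0)
    return de(sphere, dim=5, bounds=(-5.0, 5.0), F=F, CR=CR)

best = de(meta_fitness, dim=2, bounds=(0.0, 1.0), F=0.5, CR=0.9,
          pop_size=8, gens=5, seed=1)
print(best)
```

Each meta-individual is a candidate (F, CR) pair; the real MetaDE additionally evolves the choice of mutation/crossover strategies and evaluates the inner runs in parallel on GPU.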
|
2502.10473
|
Diverse Transformer Decoding for Offline Reinforcement Learning Using
Financial Algorithmic Approaches
|
cs.AI cs.LG
|
Offline Reinforcement Learning (RL) algorithms learn a policy using a fixed
training dataset, which is then deployed online to interact with the
environment and make decisions. Transformers, a standard choice for modeling
time-series data, are gaining popularity in offline RL. In this context, Beam
Search (BS), an approximate inference algorithm, is the go-to decoding method.
Offline RL eliminates the need for costly or risky online data collection.
However, the restricted dataset induces uncertainty as the agent may encounter
unfamiliar sequences of states and actions during execution that were not
covered in the training data. In this setting, BS lacks two important
properties essential for offline RL: It does not account for the aforementioned
uncertainty, and its greedy left-right search approach often results in
sequences with minimal variations, failing to explore potentially better
alternatives.
To address these limitations, we propose Portfolio Beam Search (PBS), a
simple-yet-effective alternative to BS that balances exploration and
exploitation within a Transformer model during decoding. We draw inspiration
from financial economics and apply these principles to develop an
uncertainty-aware diversification mechanism, which we integrate into a
sequential decoding algorithm at inference time. We empirically demonstrate the
effectiveness of PBS on the D4RL locomotion benchmark, where it achieves higher
returns and significantly reduces outcome variability.
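PBS itself is not specified in the abstract, but the failure mode it targets is easy to reproduce with plain beam search: on a toy model where one token is always slightly preferred, the surviving beams are near-duplicates. The toy model and all names below are illustrative.

```python
import math

def beam_search(step_logprobs, vocab, width, length):
    """Greedy left-to-right beam search over a toy sequence model.
    step_logprobs(prefix) returns {token: log-probability}."""
    beams = [((), 0.0)]
    for _ in range(length):
        candidates = []
        for prefix, score in beams:
            lp = step_logprobs(prefix)
            for tok in vocab:
                candidates.append((prefix + (tok,), score + lp[tok]))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:width]
    return beams

# Toy model: token "a" is always slightly preferred, regardless of prefix.
def toy_model(prefix):
    return {"a": math.log(0.6), "b": math.log(0.4)}

beams = beam_search(toy_model, ["a", "b"], width=3, length=4)
for seq, score in beams:
    print("".join(seq), round(score, 3))
```

All three surviving beams begin with "a": the search keeps minor variations of the single most likely sequence, which is exactly the lack of diversity PBS is designed to address.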
|
2502.10475
|
X-SG$^2$S: Safe and Generalizable Gaussian Splatting with X-dimensional
Watermarks
|
cs.CR cs.AI cs.CV
|
3D Gaussian Splatting (3DGS) has been widely used in 3D reconstruction and 3D
generation. Training a 3DGS scene often requires substantial time, resources,
and creative effort, and the growing volume of 3DGS digital assets poses
serious challenges for copyright protection. However, watermarking tailored to
3DGS remains largely unexplored. In this paper, we propose X-SG$^2$S, a new
framework that can simultaneously watermark 1D to 3D messages while keeping
the original 3DGS scene almost unchanged. In general, X-SG$^2$S consists of an
injector for adding multi-modal messages simultaneously and an extractor for
recovering them. Specifically, we first split the watermarks into message
patches in a fixed manner and sort the 3DGS points. A self-adaptive gate picks
out suitable locations for watermarking, and XD (multi-dimensional) injection
heads then add the multi-modal messages to the sorted 3DGS points. A learnable
gate recognizes the locations carrying extra messages, and XD extraction heads
restore the hidden messages from the locations it recommends. Extensive
experiments demonstrate that the proposed X-SG$^2$S can effectively conceal
multi-modal messages without changing the pretrained 3DGS pipeline or the
original form of the 3DGS parameters. Meanwhile, with a simple and efficient
model structure and high practicality, X-SG$^2$S performs well in hiding and
extracting multi-modal structured or unstructured messages. X-SG$^2$S is the
first unified 1D-to-3D watermarking model for 3DGS and the first framework to
add multi-modal watermarks simultaneously to a single 3DGS scene, paving the
way for later research.
|
2502.10476
|
Multi-Objective Planning with Contextual Lexicographic Reward
Preferences
|
cs.AI cs.RO cs.SY
|
Autonomous agents are often required to plan under multiple objectives whose
preference ordering varies based on context. The agent may encounter multiple
contexts during its course of operation, each imposing a distinct lexicographic
ordering over the objectives, with potentially different reward functions
associated with each context. Existing approaches to multi-objective planning
typically consider a single preference ordering over the objectives, across the
state space, and do not support planning under multiple objective orderings
within an environment. We present Contextual Lexicographic Markov Decision
Process (CLMDP), a framework that enables planning under varying lexicographic
objective orderings, depending on the context. In a CLMDP, both the objective
ordering at a state and the associated reward functions are determined by the
context. We employ a Bayesian approach to infer a state-context mapping from
expert trajectories. Our algorithm to solve a CLMDP first computes a policy for
each objective ordering and then combines them into a single context-aware
policy that is valid and cycle-free. The effectiveness of the proposed approach
is evaluated in simulation and using a mobile robot.
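The primitive underlying a CLMDP, a context-dependent lexicographic comparison of objective values, can be sketched as a small helper; the objective names and the tie-breaking slack below are illustrative, not the paper's algorithm.

```python
def lex_better(values_a, values_b, ordering, slack=1e-9):
    """True if policy A beats policy B under a lexicographic ordering:
    the first objective on which they differ (beyond `slack`) decides."""
    for obj in ordering:
        if values_a[obj] > values_b[obj] + slack:
            return True
        if values_b[obj] > values_a[obj] + slack:
            return False
    return False  # equal on every objective

# One context prioritizes safety over speed; another reverses the order.
a = {"safety": 0.9, "speed": 0.2}
b = {"safety": 0.9, "speed": 0.7}
print(lex_better(b, a, ["safety", "speed"]))  # True: tie on safety, b faster
print(lex_better(b, a, ["speed", "safety"]))  # True: b wins on speed outright
```

In a CLMDP the agent's current context selects which `ordering` (and which reward functions) apply at each state.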
|
2502.10477
|
Knowledge Integration Strategies in Autonomous Vehicle Prediction and
Planning: A Comprehensive Survey
|
cs.AI cs.LG
|
This comprehensive survey examines the integration of knowledge-based
approaches into autonomous driving systems, with a focus on trajectory
prediction and planning. We systematically review methodologies for
incorporating domain knowledge, traffic rules, and commonsense reasoning into
these systems, spanning purely symbolic representations to hybrid
neuro-symbolic architectures. In particular, we analyze recent advancements in
formal logic and differential logic programming, reinforcement learning
frameworks, and emerging techniques that leverage large foundation models and
diffusion models for knowledge representation. Organized under a unified
literature survey section, our discussion synthesizes the state-of-the-art into
a high-level overview, supported by a detailed comparative table that maps key
works to their respective methodological categories. This survey not only
highlights current trends -- including the growing emphasis on interpretable
AI, formal verification in safety-critical systems, and the increased use of
generative models in prediction and planning -- but also outlines the
challenges and opportunities for developing robust, knowledge-enhanced
autonomous driving systems.
|
2502.10478
|
SinSim: Sinkhorn-Regularized SimCLR
|
cs.LG cs.CV stat.ML
|
Self-supervised learning has revolutionized representation learning by
eliminating the need for labeled data. Contrastive learning methods, such as
SimCLR, maximize the agreement between augmented views of an image but lack
explicit regularization to enforce a globally structured latent space. This
limitation often leads to suboptimal generalization. We propose SinSim, a novel
extension of SimCLR that integrates Sinkhorn regularization from optimal
transport theory to enhance representation structure. The Sinkhorn loss, an
entropy-regularized Wasserstein distance, encourages a well-dispersed and
geometry-aware feature space, preserving discriminative power. Empirical
evaluations on various datasets demonstrate that SinSim outperforms SimCLR and
achieves competitive performance against prominent self-supervised methods such
as VICReg and Barlow Twins. UMAP visualizations further reveal improved class
separability and structured feature distributions. These results indicate that
integrating optimal transport regularization into contrastive learning provides
a principled and effective mechanism for learning robust, well-structured
representations. Our findings open new directions for applying transport-based
constraints in self-supervised learning frameworks.
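The Sinkhorn loss referenced above can be sketched with vanilla Sinkhorn iterations between uniform marginals. In SinSim the cost matrix would presumably come from distances between embeddings of augmented views; that, and all names below, are assumptions, since the abstract gives no formulas.

```python
import math

def sinkhorn(cost, reg=0.1, iters=200):
    """Entropy-regularized optimal transport between uniform marginals.

    cost: n x m cost matrix (lists). Returns the transport plan P and
    the transport cost <P, cost>, usable as a Sinkhorn-style loss term.
    """
    n, m = len(cost), len(cost[0])
    K = [[math.exp(-c / reg) for c in row] for row in cost]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        u = [(1.0 / n) / sum(K[i][j] * v[j] for j in range(m))
             for i in range(n)]
        v = [(1.0 / m) / sum(K[i][j] * u[i] for i in range(n))
             for j in range(m)]
    P = [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]
    loss = sum(P[i][j] * cost[i][j] for i in range(n) for j in range(m))
    return P, loss

# Two samples that match their own augmented views (cost 0) and mismatch
# the other's (cost 1): the plan concentrates on the diagonal.
P, loss = sinkhorn([[0.0, 1.0], [1.0, 0.0]])
print(loss)
```

Minimizing such a loss spreads mass according to the geometry of the cost, which is the "well-dispersed and geometry-aware" regularization effect the abstract describes.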
|
2502.10479
|
Lifetime Analysis of Circular $k$-out-of-$n$: G Balanced Systems in a
Shock Environment
|
eess.SY cs.PF cs.SY math.PR
|
This paper examines the lifetime distributions of circular $k$-out-of-$n$: G
balanced systems operating in a shock environment, providing a unified
framework for both discrete- and continuous-time perspectives. The system
remains functioning only if at least $k$ operating units satisfy a predefined
balance condition (BC). Building on this concept, we demonstrate that the shock
numbers to failure (SNTF) follow a discrete phase-type distribution by modeling
the system's stochastic dynamics with a finite Markov chain and applying
BC-based state space consolidation. Additionally, we develop a computationally
efficient method for directly computing multi-step transition probabilities of
the underlying Markov chain. Next, assuming the inter-arrival times between
shocks follow a phase-type distribution, we establish that the continuous-time
system lifetime, or the time to system failure (TTF), also follows a phase-type
distribution with different parameters. Extensive numerical studies illustrate
the impact of key parameters (the number of units, the minimum number of
operating units required, individual unit reliability, the choice of balance
condition, and the inter-shock time distribution) on the SNTF, the TTF, and
their variability.
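The discrete phase-type claim can be made concrete: with initial distribution alpha over transient states and sub-stochastic transition matrix T, the SNTF satisfies P(N = n) = alpha T^(n-1) t with t = (I - T)1. A minimal sketch follows; the one-state chain is purely illustrative, whereas a real balanced system's chain would be built from the BC-consolidated state space.

```python
def dph_pmf(alpha, T, n_max):
    """PMF of a discrete phase-type distribution:
    P(N = n) = alpha @ T^(n-1) @ t, where t = (I - T) @ 1.
    """
    k = len(alpha)
    t = [1.0 - sum(T[i]) for i in range(k)]  # per-state absorption prob.
    pmf, vec = [], list(alpha)
    for _ in range(n_max):
        pmf.append(sum(vec[i] * t[i] for i in range(k)))
        vec = [sum(vec[i] * T[i][j] for i in range(k)) for j in range(k)]
    return pmf

# One transient state that survives each shock with probability 0.7:
# the SNTF reduces to a geometric distribution.
pmf = dph_pmf([1.0], [[0.7]], 5)
print(pmf)  # ~ [0.3, 0.21, 0.147, 0.103, 0.072]
```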
|
2502.10480
|
Safe Multi-agent Satellite Servicing with Control Barrier Functions
|
eess.SY cs.RO cs.SY
|
The use of control barrier functions under uncertain pose information of
multiple small servicing agents is analyzed for a satellite servicing
application. The application consists of modular servicing agents deployed
towards a tumbling space object from a mothership. Relative position and
orientation of each agent is obtained via fusion of relative range and inertial
measurement sensors. The control barrier functions are utilized to avoid
collisions with other agents for the application of simultaneously relocating
servicing agents on a tumbling body. A differential collision detection and
avoidance framework using the polytopic hull of the tumbling space object is
utilized to safely guide the agents away from the tumbling object.
|
2502.10481
|
Chronic Diseases Prediction Using ML
|
cs.LG
|
The recent increase in morbidity is primarily due to chronic diseases,
including diabetes, heart disease, lung cancer, and brain tumours. Patient
outcomes can be improved, and the financial burden on the healthcare system
lessened, through the early detection and prevention of such disorders. In
this study, we built machine-learning models for predicting the presence of
numerous diseases, utilising datasets from various sources, including Kaggle,
Dataworld, and the UCI repository, relevant to each of the diseases we
intended to predict.
Following the acquisition of the datasets, we used feature engineering to
extract pertinent features from the information, after which the model was
trained on a training set and improved using a validation set. A test set was
then used to assess the accuracy of the final model. We provide an
easy-to-use interface where users may enter the parameters for the selected
ailment. Once the appropriate model has been run, it indicates whether the
user has that ailment and offers suggestions for how to treat or prevent it.
|
2502.10482
|
A Self-Supervised Reinforcement Learning Approach for Fine-Tuning Large
Language Models Using Cross-Attention Signals
|
cs.AI
|
We propose a novel reinforcement learning framework for post-training large
language models that does not rely on human-in-the-loop feedback. Instead, our
approach uses cross-attention signals within the model itself to derive a
self-supervised reward, thereby guiding iterative fine-tuning of the model
policy. By analyzing how the model attends to the input prompt during
generation, we construct measures of prompt coverage, focus, and coherence. We
then use these measures to rank or score candidate responses, providing a
reward signal that encourages the model to produce well-aligned, on-topic
text. In empirical comparisons against standard policy gradient methods and RL
fine-tuning with synthetic preference models, our method shows significant
gains in prompt relevance and consistency over a non-RL baseline. While it
does not yet match the performance of fully human-supervised RLHF systems, it
highlights an important direction for scaling alignment with minimal human
labeling. We provide a detailed analysis, discuss potential limitations, and
outline future work for combining cross-attention-based signals with smaller
amounts of human feedback.
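The abstract names prompt "coverage" and "focus" but not their formulas; the toy definitions below are illustrative guesses at how such measures could be read off a cross-attention matrix, not the paper's actual reward.

```python
def coverage_and_focus(attn):
    """Toy 'coverage' and 'focus' rewards from a cross-attention matrix.

    attn[t][i] is the attention that generation step t places on prompt
    token i (each row sums to 1). These definitions are hypothetical.
    """
    T, P = len(attn), len(attn[0])
    # Coverage: fraction of prompt tokens that some step attends to
    # more strongly than the uniform baseline 1/P.
    col_max = [max(attn[t][i] for t in range(T)) for i in range(P)]
    coverage = sum(1 for m in col_max if m > 1.0 / P) / P
    # Focus: mean peakedness (max weight) of each step's distribution.
    focus = sum(max(row) for row in attn) / T
    return coverage, focus

sharp = [[1.0, 0.0], [0.0, 1.0]]     # each step locks onto one token
diffuse = [[0.5, 0.5], [0.5, 0.5]]   # no step covers anything strongly
print(coverage_and_focus(sharp))     # (1.0, 1.0)
print(coverage_and_focus(diffuse))   # (0.0, 0.5)
```

Scores like these can then rank candidate responses, providing the reward signal the framework optimizes.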
|
2502.10485
|
Forecasting time series with constraints
|
stat.ML cs.AI cs.LG math.ST stat.AP stat.ME stat.TH
|
Time series forecasting presents unique challenges that limit the
effectiveness of traditional machine learning algorithms. To address these
limitations, various approaches have incorporated linear constraints into
learning algorithms, such as generalized additive models and hierarchical
forecasting. In this paper, we propose a unified framework for integrating and
combining linear constraints in time series forecasting. Within this framework,
we show that the exact minimizer of the constrained empirical risk can be
computed efficiently using linear algebra alone. This approach allows for
highly scalable implementations optimized for GPUs. We validate the proposed
methodology through extensive benchmarking on real-world tasks, including
electricity demand forecasting and tourism forecasting, achieving
state-of-the-art performance.
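For a single linear constraint, the exact constrained minimizer mentioned above reduces to a Euclidean projection computable with linear algebra alone. A minimal sketch using the standard hierarchical-coherence constraint (total equals sum of parts); the example is illustrative, not the paper's benchmark.

```python
def project_onto_constraint(forecast, a, b):
    """Project a forecast vector onto the hyperplane {y : a . y = b}.

    Closed form for one linear constraint: the minimizer of
    ||y - forecast||^2 subject to a . y = b is
        y* = forecast - a * (a . forecast - b) / (a . a).
    """
    dot_af = sum(ai * fi for ai, fi in zip(a, forecast))
    dot_aa = sum(ai * ai for ai in a)
    lam = (dot_af - b) / dot_aa
    return [fi - lam * ai for ai, fi in zip(a, forecast)]

# Hierarchical coherence: y_total - y_region1 - y_region2 = 0.
raw = [100.0, 45.0, 50.0]   # independent forecasts, incoherent (100 != 95)
a = [1.0, -1.0, -1.0]
coherent = project_onto_constraint(raw, a, 0.0)
print(coherent)
```

With several constraints the same idea generalizes to solving one small linear system, which is what makes GPU-scalable implementations straightforward.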
|
2502.10486
|
VLM-Guard: Safeguarding Vision-Language Models via Fulfilling Safety
Alignment Gap
|
cs.CR cs.AI cs.CV
|
The emergence of vision language models (VLMs) comes with increased safety
concerns, as the incorporation of multiple modalities heightens vulnerability
to attacks. Although VLMs can be built upon LLMs that have textual safety
alignment, that alignment is easily undermined when the vision modality is
integrated. We
attribute this safety challenge to the modality gap, a separation of image and
text in the shared representation space, which blurs the distinction between
harmful and harmless queries that is evident in LLMs but weakened in VLMs. To
avoid safety decay and fulfill the safety alignment gap, we propose VLM-Guard,
an inference-time intervention strategy that leverages the LLM component of a
VLM as supervision for the safety alignment of the VLM. VLM-Guard projects the
representations of VLM into the subspace that is orthogonal to the safety
steering direction that is extracted from the safety-aligned LLM. Experimental
results on three malicious instruction settings show the effectiveness of
VLM-Guard in safeguarding VLM and fulfilling the safety alignment gap between
VLM and its LLM component.
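The projection step described above, removing the component of a representation along a safety steering direction, is a one-line linear-algebra operation. A minimal sketch with a made-up direction:

```python
def remove_steering_component(h, v):
    """Project hidden state h onto the subspace orthogonal to a
    steering direction v:  h' = h - (h . v / v . v) * v."""
    hv = sum(x * y for x, y in zip(h, v))
    vv = sum(y * y for y in v)
    return [x - (hv / vv) * y for x, y in zip(h, v)]

h = [2.0, 1.0, 0.0]
v = [1.0, 0.0, 0.0]   # illustrative safety steering direction
h2 = remove_steering_component(h, v)
print(h2)  # [0.0, 1.0, 0.0] -- component along v removed
```

In VLM-Guard the direction is extracted from the safety-aligned LLM component rather than chosen by hand.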
|
2502.10487
|
Fast Proxies for LLM Robustness Evaluation
|
cs.CR cs.AI
|
Evaluating the robustness of LLMs to adversarial attacks is crucial for safe
deployment, yet current red-teaming methods are often prohibitively expensive.
We compare the ability of fast proxy metrics to predict the real-world
robustness of an LLM against a simulated attacker ensemble. This allows us to
estimate a model's robustness to computationally expensive attacks without
requiring runs of the attacks themselves. Specifically, we consider
gradient-descent-based embedding-space attacks, prefilling attacks, and direct
prompting. Even though direct prompting in particular does not achieve a high
attack success rate (ASR), we find that it and embedding-space attacks can
predict attack success
rates well, achieving $r_p=0.87$ (linear) and $r_s=0.94$ (Spearman rank)
correlations with the full attack ensemble while reducing computational cost by
three orders of magnitude.
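The proxy evaluation boils down to correlating cheap metrics with expensive ensemble ASRs; a minimal sketch of the linear ($r_p$) side, with made-up numbers:

```python
def pearson(x, y):
    """Sample Pearson correlation r_p between proxy scores and
    full-ensemble attack success rates."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

proxy = [0.1, 0.3, 0.5, 0.9]    # cheap proxy metric per model (made up)
asr = [0.15, 0.35, 0.55, 0.95]  # expensive ensemble ASR (made up)
print(pearson(proxy, asr))      # perfectly linear by construction
```

The Spearman variant ($r_s$) applies the same formula to the ranks of the two lists.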
|
2502.10489
|
LiveVal: Time-aware Data Valuation via Adaptive Reference Points
|
cs.LG cs.AI
|
Time-aware data valuation enhances training efficiency and model robustness,
as early detection of harmful samples can prevent months of wasted
computation. However, existing methods rely on model retraining or convergence
assumptions, or fail to capture long-term training dynamics.
We propose LiveVal, an efficient time-aware data valuation method with three
key designs: 1) seamless integration with SGD training for efficient data
contribution monitoring; 2) reference-based valuation with normalization for
reliable benchmark establishment; and 3) adaptive reference point selection
for real-time updating with optimized memory usage.
We establish theoretical guarantees for LiveVal's stability and prove that
its valuations are bounded and directionally aligned with optimization
progress. Extensive experiments demonstrate that LiveVal provides efficient
data valuation across different modalities and model scales, achieving a 180x
speedup over traditional methods while maintaining robust detection
performance.
|
2502.10490
|
A Robust Attack: Displacement Backdoor Attack
|
cs.CR cs.AI cs.CV
|
As artificial intelligence becomes more prevalent in our lives, people enjoy
the convenience it brings but also face hidden threats, such as data poisoning
and adversarial attacks. These threats can have disastrous consequences for AI
applications, especially those whose decisions take effect immediately, such
as autonomous driving and medicine. Among these threats, backdoor attacks
stand out for their concealment and ease of deployment, making them a threat
that cannot be ignored. However, once the backdoored model is deployed,
real-world factors such as jitter and brightness changes often degrade the
attack in practice. Motivated by this, we propose the Displacement Backdoor
Attack (DBA), a highly robust backdoor attack that shifts the target sample
and combines it with itself to form a backdoor sample. Experimental results
show that DBA resists data augmentations that simulate real-world
differences, such as rotation and cropping.
|
2502.10491
|
F-StrIPE: Fast Structure-Informed Positional Encoding for Symbolic Music
Generation
|
cs.SD cs.AI cs.LG eess.AS
|
While music remains a challenging domain for generative models like
Transformers, recent progress has been made by exploiting suitable
musically-informed priors. One technique to leverage information about musical
structure in Transformers is inserting such knowledge into the positional
encoding (PE) module. However, Transformers carry a quadratic cost in sequence
length. In this paper, we propose F-StrIPE, a structure-informed PE scheme that
works in linear complexity. Using existing kernel approximation techniques
based on random features, we show that F-StrIPE is a generalization of
Stochastic Positional Encoding (SPE). We illustrate the empirical merits of
F-StrIPE using melody harmonization for symbolic music.
|
2502.10492
|
Multi-view 3D surface reconstruction from SAR images by inverse
rendering
|
cs.CV eess.SP
|
3D reconstruction of a scene from Synthetic Aperture Radar (SAR) images
mainly relies on interferometric measurements, which involve strict constraints
on the acquisition process. In recent years, progress in deep learning has
significantly advanced 3D reconstruction from multiple views in optical
imaging, mainly through reconstruction-by-synthesis approaches pioneered by
Neural Radiance Fields. In this paper, we propose a new inverse rendering
method for 3D reconstruction from unconstrained SAR images, drawing inspiration
from optical approaches. First, we introduce a new simplified differentiable
SAR rendering model, able to synthesize images from a digital elevation model
and a radar backscattering coefficients map. Then, we introduce a
coarse-to-fine strategy to train a Multi-Layer Perceptron (MLP) to fit the
height and appearance of a given radar scene from a few SAR views. Finally, we
demonstrate the surface reconstruction capabilities of our method on synthetic
SAR images produced by ONERA's physically-based EMPRISE simulator. Our method
showcases the potential of exploiting geometric disparities in SAR images and
paves the way for multi-sensor data fusion.
|
2502.10495
|
SWA-LDM: Toward Stealthy Watermarks for Latent Diffusion Models
|
cs.CR cs.AI cs.CV cs.LG
|
In the rapidly evolving landscape of image generation, Latent Diffusion
Models (LDMs) have emerged as powerful tools, enabling the creation of highly
realistic images. However, this advancement raises significant concerns
regarding copyright infringement and the potential misuse of generated content.
Current watermarking techniques employed in LDMs often embed constant signals
into the generated images, which compromises their stealthiness and makes them
vulnerable to detection by malicious attackers. In this paper, we introduce
SWA-LDM, a novel approach that enhances watermarking by randomizing the
embedding process, effectively eliminating detectable patterns while preserving
image quality and robustness. Our proposed watermark presence attack reveals
the inherent vulnerabilities of existing latent-based watermarking methods,
demonstrating how easily these can be exposed. Through comprehensive
experiments, we validate that SWA-LDM not only fortifies watermark stealthiness
but also maintains competitive performance in watermark robustness and visual
fidelity. This work represents a pivotal step towards securing LDM-generated
images against unauthorized use, ensuring both copyright protection and content
integrity in an era where digital image authenticity is paramount.
|
2502.10497
|
Hallucinations and Truth: A Comprehensive Accuracy Evaluation of RAG,
LoRA and DoRA
|
cs.CL cs.AI
|
Recent advancements in Generative AI have significantly improved the
efficiency and adaptability of natural language processing (NLP) systems,
particularly through Retrieval-Augmented Generation (RAG), Low-Rank Adaptation
(LoRA), and Weight-Decomposed Low-Rank Adaptation (DoRA). RAG integrates
external knowledge to enhance factual consistency in generative outputs, while
LoRA enables parameter-efficient fine-tuning of large language models (LLMs).
DoRA further refines this process by optimizing fine-tuning through adaptive
parameter ranking and domain-aware weight adjustments, improving learning
efficiency while maintaining inference performance.
This paper presents a large-scale empirical evaluation of RAG, LoRA, and
DoRA, with model fine-tuning and generation performance assessed on 20,000
FAQ-based queries, while the knowledge base spans 400,000 entries. The study
analyzes key performance metrics such as accuracy, relevance, and inference
latency. Experimental results demonstrate that DoRA achieves the highest
accuracy (90.1%), relevance score (0.88), and lowest latency (110 ms per
query), outperforming both LoRA and RAG in real-world, domain-specific
generative AI applications.
Furthermore, this study examines the trade-offs between fine-tuning
efficiency, computational cost, and real-time adaptability across different
models. Findings highlight RAG's effectiveness in knowledge grounding, LoRA's
cost-efficient domain adaptation, and DoRA's ability to balance fine-tuning
efficiency with model precision. These insights provide practical guidance for
deploying AI-driven generative systems in accuracy-critical domains such as
healthcare, finance, and legal services, ensuring scalability, reliability, and
optimal performance in dynamic environments.
|
2502.10498
|
The Role of World Models in Shaping Autonomous Driving: A Comprehensive
Survey
|
cs.CV
|
Driving World Model (DWM), which focuses on predicting scene evolution during
the driving process, has emerged as a promising paradigm in pursuing autonomous
driving. These methods enable autonomous driving systems to better perceive,
understand, and interact with dynamic driving environments. In this survey, we
provide a comprehensive overview of the latest progress in DWM. We categorize
existing approaches based on the modalities of the predicted scenes and
summarize their specific contributions to autonomous driving. In addition,
high-impact datasets and various metrics tailored to different tasks within the
scope of DWM research are reviewed. Finally, we discuss the potential
limitations of current research and propose future directions. This survey
provides valuable insights into the development and application of DWM,
fostering its broader adoption in autonomous driving. The relevant papers are
collected at https://github.com/LMD0311/Awesome-World-Model.
|
2502.10505
|
Preference learning made easy: Everything should be understood through
win rate
|
cs.LG cs.CL stat.ML
|
Preference learning, or the task of aligning generative models to preference
comparison data, has yet to reach the conceptual maturity of classification,
density estimation, etc. To close this gap, this work presents a framework to
understand preference learning starting from the sampling distribution of
pairwise preference data. First, we prove that the only evaluation of a
generative model that respects both preferences and prevalences in the data
distribution is a form of win rate, justifying win rate as the focal point to
understand preference learning. We then analyze preference learning methods as
win rate optimization (WRO) or non-WRO. We present novel instances of WRO
beyond existing examples (RLHF, NLHF) and identify two key theoretical benefits
of all such methods. We prove that common non-WRO methods like DPO and SFT on
preferred samples lack these properties and suggest ways to mitigate such
theoretical limitations. We also show that WRO underperforms in practice due
to optimization difficulties and that optimization success predicts
performance
better than choices which affect the objective's solution. Our analysis
highlights best practices for existing methods and provides recommendations for
future research, guided by the principle that one should either align non-WRO
methods more closely with WRO or improve the optimization of WRO objectives.
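The focal quantity, win rate, is simple to compute empirically; a toy sketch with a made-up preference function:

```python
def win_rate(model_samples, ref_samples, prefers):
    """Empirical win rate: the fraction of paired comparisons in which
    the model's sample is preferred over the reference model's."""
    wins = sum(1 for m, r in zip(model_samples, ref_samples)
               if prefers(m, r))
    return wins / len(model_samples)

# Made-up preference: longer responses win (purely illustrative).
prefers = lambda m, r: len(m) > len(r)
model = ["a longer answer", "short", "an even longer answer"]
ref = ["ok", "a fuller response", "hm"]
print(win_rate(model, ref, prefers))  # 2 of 3 comparisons won
```

The paper's point is that, in expectation over the preference data distribution, this is the only evaluation respecting both preferences and prevalences, and that WRO methods optimize it directly.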
|
2502.10510
|
MixMin: Finding Data Mixtures via Convex Minimization
|
cs.LG stat.ML
|
Modern machine learning pipelines are increasingly combining and mixing data
from diverse and disparate sources, e.g., pre-training large language models.
Yet, finding the optimal data mixture is a challenging and open problem. We
formalize this data mixing problem as a bi-level objective: the best mixture is
the one that would lead to the best model for a downstream objective.
Unfortunately, this objective is generally intractable. In this paper, we make
the observation that the bi-level data mixing objective becomes convex as our
model class becomes larger. We develop and study a gradient-based approach for
optimizing this convex objective, which we call MixMin, and test it on language
modeling and chemistry tasks. MixMin was the only method that uniformly
improved the data mixture in all our experiments. With MixMin, we improved the
data mixture using less than 0.2% additional compute for a pythia-410M model
trained on 8.2B tokens, resulting in a 1-5% relative improvement in negative
log likelihood on PIQA, ARC Easy, SciQ, and OpenWebMath. Crucially, we found
that MixMin mixtures for smaller models improved training of larger models,
suggesting that MixMin mixtures may be scale-invariant. When mixing bioassay
data to train an XGBoost model, we saw improvements to average precision scores
of 0.03-0.15.
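The abstract does not spell out the objective or optimizer; the sketch below illustrates one convex surrogate of the kind described, a negative log-likelihood of a mixture that is convex in the weights, optimized by exponentiated gradient on the simplex. All specifics here are assumptions.

```python
import math

def mixmin(scores, steps=500, lr=0.5):
    """Exponentiated-gradient sketch of a convex data-mixing objective.

    scores[k][i]: score (e.g. probability) that source k's proxy model
    assigns to downstream example i. The surrogate
        L(w) = -(1/n) * sum_i log(sum_k w_k * scores[k][i])
    is convex in the mixture weights w on the simplex.
    """
    K, n = len(scores), len(scores[0])
    w = [1.0 / K] * K
    for _ in range(steps):
        grad = []
        for k in range(K):
            g = -sum(scores[k][i] /
                     sum(w[j] * scores[j][i] for j in range(K))
                     for i in range(n)) / n
            grad.append(g)
        w = [w[k] * math.exp(-lr * grad[k]) for k in range(K)]
        z = sum(w)
        w = [x / z for x in w]
    return w

# Source 0's proxy model fits the downstream data better, so the
# optimal mixture concentrates on it.
w = mixmin([[0.9, 0.8], [0.1, 0.2]])
print([round(x, 3) for x in w])
```

Convexity is what makes the mixture cheap to find relative to the bi-level problem it approximates.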
|
2502.10514
|
Applying Deep Learning to Ads Conversion Prediction in Last Mile
Delivery Marketplace
|
cs.LG
|
Deep neural networks (DNNs) have revolutionized web-scale ranking systems,
enabling breakthroughs in capturing complex user behaviors and driving
performance gains. At DoorDash, we first harnessed this transformative power by
transitioning our homepage Ads ranking system from traditional tree-based
models to cutting-edge multi-task DNNs. This evolution sparked advancements in
data foundations, model design, training efficiency, evaluation rigor, and
online serving, delivering substantial business impact and reshaping our
approach to machine learning. In this paper, we describe our problem-driven
journey, from identifying the right problems and crafting targeted solutions
to overcoming the complexity of developing and scaling a deep learning
recommendation system. Through our successes and lessons learned, we aim to
share insights and practical guidance with teams pursuing similar advancements
in machine learning systems.
|
2502.10517
|
KernelBench: Can LLMs Write Efficient GPU Kernels?
|
cs.LG cs.AI cs.PF cs.SE
|
Efficient GPU kernels are crucial for building performant machine learning
architectures, but writing them is a time-consuming challenge that requires
significant expertise; therefore, we explore using language models (LMs) to
automate kernel generation. We introduce KernelBench, an open-source framework
for evaluating LMs' ability to write fast and correct kernels on a suite of 250
carefully selected PyTorch ML workloads. KernelBench represents a real-world
engineering environment, and progress on the benchmark translates directly to
faster practical kernels. We introduce a new evaluation metric, fast_p, which
measures the percentage of generated kernels that are functionally correct and
offer a speedup greater than an adjustable threshold p over the baseline. Our
experiments across various state-of-the-art models and
test-time methods show that frontier reasoning models perform the best out of
the box but still fall short overall, matching the PyTorch baseline in less
than 20% of the cases. While we show that results can improve by leveraging
execution and profiling feedback during iterative refinement, KernelBench
remains a challenging benchmark, with its difficulty increasing as we raise the
speedup threshold p.
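The fast_p metric described above can be sketched in a few lines. This is an illustrative reading of the stated definition (a kernel counts only if it is functionally correct and its speedup over the baseline exceeds the threshold p), not the benchmark's own implementation; the function and variable names are assumptions.

```python
def fast_p(results, p):
    """Fraction of generated kernels that are functionally correct AND
    more than p-times faster than the baseline.

    `results` is a list of (correct, speedup) pairs, where `speedup` is
    baseline_time / kernel_time.
    """
    if not results:
        return 0.0
    hits = sum(1 for correct, speedup in results if correct and speedup > p)
    return hits / len(results)

# Three kernels; only the first is both correct and >1.5x faster.
sample = [(True, 2.0), (True, 1.2), (False, 3.0)]
print(fast_p(sample, p=1.5))  # 1 of 3 kernels qualifies
```

Raising p makes the criterion stricter, which is why the benchmark's difficulty grows with the threshold: at p=1.0 two of the three sample kernels qualify, at p=1.5 only one does.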
|
2502.10522
|
GraphiT: Efficient Node Classification on Text-Attributed Graphs with
Prompt Optimized LLMs
|
cs.AI cs.LG
|
The application of large language models (LLMs) to graph data has attracted a
lot of attention recently. LLMs allow us to use deep contextual embeddings from
pretrained models in text-attributed graphs, where shallow embeddings are often
used for the text attributes of nodes. However, it is still challenging to
efficiently encode the graph structure and features into a sequential form for
use by LLMs. In addition, the performance of an LLM alone is highly dependent
on the structure of the input prompt, which limits its effectiveness as a
reliable approach and often requires iterative manual adjustments that can be
slow, tedious, and difficult to replicate programmatically. In this paper, we
propose GraphiT (Graphs in Text), a framework for encoding graphs into a
textual format and optimizing LLM prompts for graph prediction tasks. Here we
focus on node classification for text-attributed graphs. We encode the graph
data for every node and its neighborhood into a concise text to enable LLMs to
better utilize the information in the graph. We then further programmatically
optimize the LLM prompts using the DSPy framework to automate this step and
make it more efficient and reproducible. GraphiT outperforms our LLM-based
baselines on three datasets and we show how the optimization step in GraphiT
leads to measurably better results without manual prompt tweaking. We also
demonstrate that our graph encoding approach is competitive with other graph
encoding methods while being less expensive, as it uses significantly fewer
tokens for the same task.
|
2502.10525
|
Towards Watermarking of Open-Source LLMs
|
cs.CR cs.LG
|
While watermarks for closed LLMs have matured and have been included in
large-scale deployments, these methods are not applicable to open-source
models, which allow users full control over the decoding process. This setting
is understudied yet critical, given the rising performance of open-source
models. In this work, we lay the foundation for systematic study of open-source
LLM watermarking. For the first time, we explicitly formulate key requirements,
including durability against common model modifications such as model merging,
quantization, or finetuning, and propose a concrete evaluation setup. Given the
prevalence of these modifications, durability is crucial for an open-source
watermark to be effective. We survey and evaluate existing methods, showing
that they are not durable. We also discuss potential ways to improve their
durability and highlight remaining challenges. We hope our work enables future
progress on this important problem.
|
2502.10526
|
Tempo: Helping Data Scientists and Domain Experts Collaboratively
Specify Predictive Modeling Tasks
|
cs.HC cs.AI
|
Temporal predictive models have the potential to improve decisions in health
care, public services, and other domains, yet they often fail to effectively
support decision-makers. Prior literature shows that many misalignments between
model behavior and decision-makers' expectations stem from issues of model
specification, namely how, when, and for whom predictions are made. However,
model specifications for predictive tasks are highly technical and difficult
for non-data-scientist stakeholders to interpret and critique. To address this
challenge we developed Tempo, an interactive system that helps data scientists
and domain experts collaboratively iterate on model specifications. Using
Tempo's simple yet precise temporal query language, data scientists can quickly
prototype specifications with greater transparency about pre-processing
choices. Moreover, domain experts can assess performance within data subgroups
to validate that models behave as expected. Through three case studies, we
demonstrate how Tempo helps multidisciplinary teams quickly prune infeasible
specifications and identify more promising directions to explore.
|
2502.10533
|
Expert-Agnostic Learning to Defer
|
cs.LG cs.HC
|
Learning to Defer (L2D) trains autonomous systems to independently manage
straightforward cases, while deferring uncertain cases to human experts. Recent
advancements in this field have introduced features enabling flexibility to
unseen experts at test-time, but we find these approaches have significant
limitations. To address these, we introduce EA-L2D: Expert-Agnostic Learning to
Defer, a novel L2D framework that leverages a Bayesian approach to model expert
behaviour in an expert-agnostic manner, facilitating optimal deferral
decisions. EA-L2D offers several critical improvements over prior methods,
including the ability to incorporate prior knowledge about experts, a reduced
reliance on expert-annotated data, and robust performance when deferring to
experts with expertise not seen during training. Evaluating on CIFAR-10,
HAM10000, German Traffic Lights, Breast Ultrasound, Axial Organ Slices, and
Blood Cell MNIST, we observe performance gains over the next state-of-the-art
of 1-16\% for seen experts and 4-28\% for unseen experts in settings with high
expert diversity.
|
2502.10536
|
PolyPath: Adapting a Large Multimodal Model for Multi-slide Pathology
Report Generation
|
cs.CV cs.AI cs.LG
|
The interpretation of histopathology cases underlies many important
diagnostic and treatment decisions in medicine. Notably, this process typically
requires pathologists to integrate and summarize findings across multiple
slides per case. Existing vision-language capabilities in computational
pathology have so far been largely limited to small regions of interest, larger
regions at low magnification, or single whole-slide images (WSIs). This limits
interpretation of findings that span multiple high-magnification regions across
multiple WSIs. By making use of Gemini 1.5 Flash, a large multimodal model
(LMM) with a 1-million token context window, we demonstrate the ability to
generate bottom-line diagnoses from up to 40,000 768x768 pixel image patches
from multiple WSIs at 10X magnification. This is the equivalent of up to 11
hours of video at 1 fps. Expert pathologist evaluations demonstrate that the
generated report text is clinically accurate and equivalent to or preferred
over the original reporting for 68% (95% CI: [60%, 76%]) of multi-slide
examples with up to 5 slides. While performance decreased for examples with 6
or more slides, this study demonstrates the promise of leveraging the
long-context capabilities of modern LMMs for the uniquely challenging task of
medical report generation where each case can contain thousands of image
patches.
|
2502.10538
|
Amortized Locally Decodable Codes
|
cs.IT cs.CR math.IT
|
Locally Decodable Codes (LDCs) are error correcting codes that admit
efficient decoding of individual message symbols without decoding the entire
message. Unfortunately, known LDC constructions offer a sub-optimal trade-off
between rate, error tolerance, and locality (the number of queries that the
decoder must make to the received codeword $\tilde{y}$ to recover a
particular symbol from the original message $x$), even in relaxed settings where
the encoder/decoder share randomness or where the channel is resource bounded.
We initiate the study of Amortized Locally Decodable Codes where the local
decoder wants to recover multiple symbols of the original message $x$ and the
total number of queries to the received codeword $\tilde{y}$ can be amortized
by the total number of message symbols recovered. We demonstrate that
amortization allows us to overcome prior barriers and impossibility results. We
first demonstrate that the Hadamard code achieves amortized locality below $2$
-- a result that is known to be impossible without amortization. Second, we
study amortized locally decodable codes in cryptographic settings where the
sender and receiver share a secret key or where the channel is resource-bounded
and where the decoder wants to recover a consecutive subset of message symbols
$[L,R]$. In these settings we show that it is possible to achieve a trifecta:
constant rate, error tolerance and constant amortized locality.
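For intuition about locality, the classical (non-amortized) 2-query local decoder for the Hadamard code can be sketched as follows; the amortized decoder achieving locality below 2 is the paper's contribution and is not reproduced here.

```python
import random

def hadamard_encode(x):
    # Codeword lists <x, a> mod 2 for every a in {0,1}^n, indexed by int(a).
    n = len(x)
    return [sum(xi & ((a >> i) & 1) for i, xi in enumerate(x)) % 2
            for a in range(2 ** n)]

def local_decode(y, i, n):
    # Classical 2-query decoder: by linearity of the inner product over GF(2),
    # x_i = <x, a> XOR <x, a XOR e_i> for any a, so two queries suffice.
    a = random.randrange(2 ** n)
    return y[a] ^ y[a ^ (1 << i)]

x = [1, 0, 1, 1]
y = hadamard_encode(x)
assert all(local_decode(y, i, len(x)) == x[i] for i in range(len(x)))
```

With a noiseless codeword this decoder is always correct; with corruptions, the random choice of `a` makes each of the two queried positions uniformly distributed, which is what bounds the error probability.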
|
2502.10540
|
From Deep Additive Kernel Learning to Last-Layer Bayesian Neural
Networks via Induced Prior Approximation
|
cs.LG stat.ML
|
With the strengths of both deep learning and kernel methods like Gaussian
Processes (GPs), Deep Kernel Learning (DKL) has gained considerable attention
in recent years. From the computational perspective, however, DKL becomes
challenging when the input dimension of the GP layer is high. To address this
challenge, we propose the Deep Additive Kernel (DAK) model, which incorporates
i) an additive structure for the last-layer GP; and ii) induced prior
approximation for each GP unit. This naturally leads to a last-layer Bayesian
neural network (BNN) architecture. The proposed method enjoys the
interpretability of DKL as well as the computational advantages of BNN.
Empirical results show that the proposed approach outperforms state-of-the-art
DKL methods in both regression and classification tasks.
|
2502.10546
|
Learning to be Smooth: An End-to-End Differentiable Particle Smoother
|
cs.LG cs.AI cs.RO
|
For challenging state estimation problems arising in domains like vision and
robotics, particle-based representations attractively enable temporal reasoning
about multiple posterior modes. Particle smoothers offer the potential for more
accurate offline data analysis by propagating information both forward and
backward in time, but have classically required human-engineered dynamics and
observation models. Extending recent advances in discriminative training of
particle filters, we develop a framework for low-variance propagation of
gradients across long time sequences when training particle smoothers. Our
"two-filter'' smoother integrates particle streams that are propagated forward
and backward in time, while incorporating stratification and importance weights
in the resampling step to provide low-variance gradient estimates for neural
network dynamics and observation models. The resulting mixture density particle
smoother is substantially more accurate than state-of-the-art particle filters,
as well as search-based baselines, for city-scale global vehicle localization
from real-world videos and maps.
|
2502.10547
|
A standardised platform for translational advances in fluidic soft
systems
|
cs.RO
|
Soft machines are poised to deliver significant real-world impact, with soft
robotics emerging as a key sub-discipline. This field integrates biological
inspiration, materials science, and embodied intelligence to create bio-robotic
hybrids, blurring the boundary between engineered systems and biology. Over the
past 15 years, research in fluidically controlled soft robots has led to
commercialised systems that leverage "softness" to improve human-machine
interaction or to handle delicate objects. However, translating laboratory
advancements into scalable applications remains challenging due to difficulties
in prototyping and manufacturing ultra-flexible materials, as well as the
absence of standardised design processes. Here we show that the Flex Printer,
an open-source, low-cost FDM platform, enables reliable printing of
ultra-flexible soft robots with embedded fluidic logic. By employing an
innovative upside-down print orientation, the system significantly expands the
range of printable geometries. We demonstrate how this approach allows robots
to autonomously walk off the print bed immediately after fabrication - a
milestone achievement in soft robotics. This work provides a foundation for
standardisation and scalable manufacturing, critical for accelerating the
field's impact. More broadly, by lowering barriers to entry, this platform has
the potential to democratise soft robotics research and facilitate the
development of new applications. We invite the community to contribute to the
shared development of this technology to drive the next wave of breakthroughs
in soft robotics.
|
2502.10550
|
Memory, Benchmark & Robots: A Benchmark for Solving Complex Tasks with
Reinforcement Learning
|
cs.LG cs.AI cs.RO
|
Memory is crucial for enabling agents to tackle complex tasks with temporal
and spatial dependencies. While many reinforcement learning (RL) algorithms
incorporate memory, the field lacks a universal benchmark to assess an agent's
memory capabilities across diverse scenarios. This gap is particularly evident
in tabletop robotic manipulation, where memory is essential for solving tasks
with partial observability and ensuring robust performance, yet no standardized
benchmarks exist. To address this, we introduce MIKASA (Memory-Intensive Skills
Assessment Suite for Agents), a comprehensive benchmark for memory RL, with
three key contributions: (1) we propose a comprehensive classification
framework for memory-intensive RL tasks, (2) we collect MIKASA-Base - a unified
benchmark that enables systematic evaluation of memory-enhanced agents across
diverse scenarios, and (3) we develop MIKASA-Robo - a novel benchmark of 32
carefully designed memory-intensive tasks that assess memory capabilities in
tabletop robotic manipulation. Our contributions establish a unified framework
for advancing memory RL research, driving the development of more reliable
systems for real-world applications. The code is available at
https://sites.google.com/view/memorybenchrobots/.
|
2502.10552
|
Synthesis of Dynamic Masks for Information-Theoretic Opacity in
Stochastic Systems
|
eess.SY cs.AI cs.RO cs.SY
|
In this work, we investigate the synthesis of dynamic information releasing
mechanisms, referred to as "masks", to minimize information leakage from a
stochastic system to an external observer. Specifically, for a stochastic
system, an observer aims to infer whether the final state of the system
trajectory belongs to a set of secret states. The dynamic mask seeks to
regulate sensor information in order to maximize the observer's uncertainty
about the final state, a property known as final-state opacity. While existing
supervisory control literature on dynamic masks primarily addresses qualitative
opacity, we propose quantifying opacity in stochastic systems by conditional
entropy, which is a measure of information leakage in information security. We
then formulate a constrained optimization problem to synthesize a dynamic mask
that maximizes final-state opacity under a total cost constraint on masking. To
solve this constrained optimal dynamic mask synthesis problem, we develop a
novel primal-dual policy gradient method. Additionally, we present a technique
for computing the gradient of conditional entropy with respect to the masking
policy parameters, leveraging observable operators in hidden Markov models. To
demonstrate the effectiveness of our approach, we apply our method to an
illustrative example and a stochastic grid world scenario, showing how our
algorithm optimally enforces final-state opacity under cost constraints.
|
2502.10554
|
Benchmarking the rationality of AI decision making using the
transitivity axiom
|
cs.AI
|
Fundamental choice axioms, such as transitivity of preference, provide
testable conditions for determining whether human decision making is rational,
i.e., consistent with a utility representation. Recent work has demonstrated
that AI systems trained on human data can exhibit similar reasoning biases as
humans and that AI can, in turn, bias human judgments through AI recommendation
systems. We evaluate the rationality of AI responses via a series of choice
experiments designed to evaluate transitivity of preference in humans. We
considered ten versions of Meta's Llama 2 and 3 LLM models. We applied Bayesian
model selection to evaluate whether these AI-generated choices violated two
prominent models of transitivity. We found that the Llama 2 and 3 models
generally satisfied transitivity, but when violations did occur, occurred only
in the Chat/Instruct versions of the LLMs. We argue that rationality axioms,
such as transitivity of preference, can be useful for evaluating and
benchmarking the quality of AI-generated responses and provide a foundation for
understanding computational rationality in AI systems more generally.
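As a deterministic simplification of the transitivity tests described above (the paper evaluates probabilistic models of transitivity via Bayesian model selection), a cycle-counting check over pairwise choices can be sketched as:

```python
from itertools import permutations

def transitivity_violations(prefers, items):
    """prefers(a, b) -> True if a is chosen over b.
    Returns ordered triples (a, b, c) with a > b and b > c but c > a,
    i.e., cycles that violate transitivity of preference."""
    bad = []
    for a, b, c in permutations(items, 3):
        if prefers(a, b) and prefers(b, c) and prefers(c, a):
            bad.append((a, b, c))
    return bad

# Deliberately cyclic preferences: rock > scissors > paper > rock.
beats = {("rock", "scissors"), ("scissors", "paper"), ("paper", "rock")}
viol = transitivity_violations(lambda a, b: (a, b) in beats,
                               ["rock", "scissors", "paper"])
print(len(viol))  # the single cycle appears under 3 rotations
```

A utility-consistent chooser (e.g., always prefer the option with the higher utility) produces zero such triples, which is the rationality property being benchmarked.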
|
2502.10556
|
Recent Advances in Malware Detection: Graph Learning and Explainability
|
cs.CR cs.LG
|
The rapid evolution of malware has necessitated the development of
sophisticated detection methods that go beyond traditional signature-based
approaches. Graph learning techniques have emerged as powerful tools for
modeling and analyzing the complex relationships inherent in malware behavior,
leveraging advancements in Graph Neural Networks (GNNs) and related methods.
This survey provides a comprehensive exploration of recent advances in malware
detection, focusing on the interplay between graph learning and explainability.
It begins by reviewing malware analysis techniques and datasets, emphasizing
their foundational role in understanding malware behavior and supporting
detection strategies. The survey then discusses feature engineering, graph
reduction, and graph embedding methods, highlighting their significance in
transforming raw data into actionable insights, while ensuring scalability and
efficiency. Furthermore, this survey focuses on explainability techniques and
their applications in malware detection, ensuring transparency and
trustworthiness. By integrating these components, this survey demonstrates how
graph learning and explainability contribute to building robust, interpretable,
and scalable malware detection systems. Future research directions are outlined
to address existing challenges and unlock new opportunities in this critical
area of cybersecurity.
|
2502.10557
|
Can Large Language Model Agents Balance Energy Systems?
|
eess.SY cs.SY
|
This paper presents a hybrid approach that integrates Large Language Models
(LLMs) with a multi-scenario Stochastic Unit Commitment (SUC) framework,
focusing on both efficiency and reliability under high wind generation
uncertainties. Numerical experiments on small-to-medium-sized test systems show
that while the traditional SUC approach yields a total cost of 99.05 million
USD with 3.04 GWh of load curtailment, the LLM-assisted SUC (LLM-SUC) reduces
costs to 98.87 million USD and lowers load curtailment to 2.32 GWh, an
improvement of nearly 24%. Both methods maintain zero wind curtailment,
confirming robust renewable integration. By employing an LLM agent that helps
balance the energy system more effectively, the proposed framework enhances
demand fulfillment at reduced costs, illustrating the potential of AI to inform
generator commitments in uncertain operating conditions. Further gains may be
realized by refining prompt design, incorporating historical operational data,
and extending this approach to higher-dimensional uncertainties and energy
storage systems, ultimately fostering greater resilience, efficiency, and
adaptability in next-generation power system operations.
|
2502.10559
|
SAMRI-2: A Memory-based Model for Cartilage and Meniscus Segmentation in
3D MRIs of the Knee Joint
|
eess.IV cs.AI cs.CV
|
Accurate morphometric assessment of cartilage (such as thickness and volume)
via MRI is essential for monitoring knee osteoarthritis. Segmenting cartilage
remains challenging and dependent on extensive expert-annotated datasets, which
are heavily subject to inter-reader variability. Recent advancements in
Visual Foundational Models (VFM), especially memory-based approaches, offer
opportunities for improving generalizability and robustness. This study
introduces a deep learning (DL) method for cartilage and meniscus segmentation
from 3D MRIs using interactive, memory-based VFMs. To improve spatial awareness
and convergence, we incorporated a Hybrid Shuffling Strategy (HSS) during
training and applied a segmentation mask propagation technique to enhance
annotation efficiency. We trained four AI models, namely a CNN-based 3D-VNet, two
automatic transformer-based models (SaMRI2D and SaMRI3D), and a
transformer-based promptable memory-based VFM (SAMRI-2), on 3D knee MRIs from
270 patients using public and internal datasets and evaluated on 57 external
cases, including multi-radiologist annotations and different data acquisitions.
Model performance was assessed against reference standards using Dice Score
(DSC) and Intersection over Union (IoU), with additional morphometric
evaluations to further quantify segmentation accuracy. The SAMRI-2 model, trained
with HSS, outperformed all other models, achieving an average DSC improvement
of 5 points, with a peak improvement of 12 points for tibial cartilage. It also
demonstrated the lowest cartilage thickness errors, reducing discrepancies by
up to threefold. Notably, SAMRI-2 maintained high performance with as few as
three user clicks per volume, reducing annotation effort while ensuring
anatomical precision. This memory-based VFM with spatial awareness offers a
novel approach for reliable AI-assisted knee MRI segmentation, advancing DL in
musculoskeletal imaging.
|
2502.10562
|
Detecting and Monitoring Bias for Subgroups in Breast Cancer Detection
AI
|
cs.CV cs.LG
|
Automated mammography screening plays an important role in early breast
cancer detection. However, current machine learning models, developed on some
training datasets, may exhibit performance degradation and bias when deployed
in real-world settings. In this paper, we analyze the performance of
high-performing AI models on two mammography datasets: the Emory Breast Imaging
Dataset (EMBED) and the RSNA 2022 challenge dataset. Specifically, we evaluate
how these models perform across different subgroups, defined by six attributes,
to detect potential biases using a range of classification metrics. Our
analysis identifies certain subgroups that demonstrate notable
underperformance, highlighting the need for ongoing monitoring of these
subgroups' performance. To address this, we adopt a monitoring method designed
to detect performance drifts over time. Upon identifying a drift, this method
issues an alert, which can enable timely interventions. This approach not only
provides a tool for tracking the performance but also helps ensure that AI
models continue to perform effectively across diverse populations.
|
2502.10563
|
Accelerating Unbiased LLM Evaluation via Synthetic Feedback
|
cs.LG cs.CL
|
When developing new large language models (LLMs), a key step is evaluating
their final performance, often by computing the win-rate against a reference
model based on external feedback. Human feedback is the gold standard,
particularly for capturing nuanced qualities like coherence, readability, and
alignment with human expectations. However, human evaluations are costly --
even for large tech companies -- and when conducted with active users, they may
negatively impact user experience. A promising alternative is synthetic
feedback, where evaluations are conducted by other large language models,
including reward models. While this eliminates the need for costly human
annotations, it introduces biases that may distort the evaluation process. In
this work, we propose a statistically principled framework that integrates
human and synthetic feedback to reduce reliance on human annotations while
maintaining unbiased win-rate calculations. Our experiments demonstrate a
reduction in human annotations by up to 12.2% with an off-the-shelf synthetic
evaluator and up to 24.8% with a finetuned variant. Apart from being
generalizable, scalable, and free of hyper-parameter tuning, our method offers
predictable annotation savings, which can be estimated based on data-dependent
characteristics.
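One standard way to combine cheap synthetic judgments with a small human-labeled subset while keeping the win-rate estimate unbiased is a difference (control-variate) estimator; this is an illustrative sketch of that general idea, not necessarily the paper's exact method, and all names are assumptions.

```python
def ppi_win_rate(synthetic_all, human_subset, synthetic_subset):
    """Estimate = mean(synthetic over all comparisons)
               + mean(human - synthetic over the paired human-labeled subset).
    The correction term cancels any systematic bias of the synthetic
    evaluator in expectation, so the combined estimate stays unbiased."""
    base = sum(synthetic_all) / len(synthetic_all)
    correction = sum(h - s for h, s in zip(human_subset, synthetic_subset)) \
        / len(human_subset)
    return base + correction

# Synthetic judge scores all comparisons; humans label only a paired subset.
est = ppi_win_rate([1, 1, 1, 0, 0],   # synthetic win/loss on all comparisons
                   [1, 0],            # human labels on the paired subset
                   [1, 1])            # synthetic labels on the same subset
print(est)  # 0.6 - 0.5 = 0.1 (up to float rounding)
```

The better the synthetic evaluator agrees with humans, the smaller the variance of the correction term, which is what allows fewer human annotations for the same confidence.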
|
2502.10564
|
Efficient Stabilization of Hybrid Coulomb Spacecraft Formations using
Control Lyapunov Functions
|
math.OC cs.SY eess.SY
|
A control allocation algorithm using control Lyapunov functions to determine
stabilizing charges and thrusts of hybrid Coulomb spacecraft formations (HCSFs)
is presented. The goal is to stabilize a desired configuration while minimizing
the thruster actuation and maximizing Coulomb actuation to minimize propellant
usage. A proportion of the decrease of the control Lyapunov function is
designated for Coulomb actuation and the rest is performed by thrusters.
Simulations show that an 85% reduction of propellant compared to using solely
thrusters is attainable using the proposed algorithm. It is shown that the best
role for thrusters in an HCSF is to provide small corrections that cannot be
provided by Coulomb actuation.
|
2502.10567
|
Efficient Hierarchical Contrastive Self-supervising Learning for Time
Series Classification via Importance-aware Resolution Selection
|
cs.LG cs.AI
|
Recently, there has been a significant advancement in designing
Self-Supervised Learning (SSL) frameworks for time series data to reduce the
dependency on data labels. Among these works, hierarchical contrastive
learning-based SSL frameworks, which learn representations by contrasting data
embeddings at multiple resolutions, have gained considerable attention. Due to
their ability to gather more information, they exhibit better generalization in
various downstream tasks. However, when the time series data is long, the
computational cost is often significantly higher than
that of other SSL frameworks. In this paper, to address this challenge, we
propose an efficient way to train hierarchical contrastive learning models.
Inspired by the fact that embeddings at different resolutions are highly
dependent, we introduce an importance-aware resolution-selection-based training
framework to reduce the computational cost. In our experiments, we demonstrate that the
proposed method significantly improves training time while preserving the
original model's integrity in extensive time series classification performance
evaluations. Our code can be found at https://github.com/KEEBVIN/IARS
|
2502.10568
|
Observer-Aware Probabilistic Planning Under Partial Observability
|
cs.AI
|
In this article, we are interested in planning problems where the agent is
aware of the presence of an observer, and where this observer is in a partial
observability situation. The agent has to choose its strategy so as to optimize
the information transmitted by observations. Building on observer-aware Markov
decision processes (OAMDPs), we propose a framework to handle this type of
problems and thus formalize properties such as legibility, explicability and
predictability. This extension of OAMDPs to partial observability not only
handles more realistic problems, but also permits considering dynamic hidden
variables of interest. These dynamic target variables allow, for instance,
working with predictability, or with legibility problems where the goal might
change during execution. We discuss theoretical properties of PO-OAMDPs and,
experimenting with benchmark problems, we analyze HSVI's convergence behavior
with dedicated initializations and study the resulting strategies.
|
2502.10569
|
HADL Framework for Noise Resilient Long-Term Time Series Forecasting
|
cs.LG cs.AI
|
Long-term time series forecasting is critical in domains such as finance,
economics, and energy, where accurate and reliable predictions over extended
horizons drive strategic decision-making. Despite the progress in machine
learning-based models, the impact of temporal noise in extended lookback
windows remains underexplored, often degrading model performance and
computational efficiency. In this paper, we propose a novel framework that
addresses these challenges by integrating the Discrete Wavelet Transform (DWT)
and Discrete Cosine Transform (DCT) to perform noise reduction and extract
robust long-term features. These transformations enable the separation of
meaningful temporal patterns from noise in both the time and frequency domains.
To complement this, we introduce a lightweight low-rank linear prediction layer
that not only reduces the influence of residual noise but also improves memory
efficiency. Our approach demonstrates competitive robustness to noisy input,
significantly reduces computational complexity, and achieves competitive or
state-of-the-art forecasting performance across diverse benchmark datasets.
Extensive experiments reveal that the proposed framework is particularly
effective in scenarios with high noise levels or irregular patterns, making it
well suited for real-world forecasting tasks. The code is available in
https://github.com/forgee-master/HADL.
|
2502.10570
|
Quantifying the Impact of Motion on 2D Gaze Estimation in Real-World
Mobile Interactions
|
cs.HC cs.CV
|
Mobile gaze tracking involves inferring a user's gaze point or direction on a
mobile device's screen from facial images captured by the device's front
camera. While this technology inspires an increasing number of gaze-interaction
applications, achieving consistent accuracy remains challenging due to dynamic
user-device spatial relationships and varied motion conditions inherent in
mobile contexts. This paper provides empirical evidence on how user mobility
and behaviour affect mobile gaze tracking accuracy. We conduct two user studies
collecting behaviour and gaze data under various motion conditions, from lying
down to maze navigation, and during different interaction tasks. Quantitative
analysis revealed behavioural regularities among daily tasks and identified
head distance, head pose, and device orientation as key factors affecting
accuracy, with errors increasing by up to 48.91% in dynamic conditions compared
to static ones. These findings highlight the need for more robust, adaptive
eye-tracking systems that account for head movements and device deflection to
maintain accuracy across diverse mobile contexts.
|
2502.10573
|
An Innovative Next Activity Prediction Approach Using Process Entropy
and DAW-Transformer
|
cs.LG cs.AI
|
Purpose - In Business Process Management (BPM), accurate prediction of the
next activities is vital for operational efficiency and decision-making.
Current Artificial Intelligence (AI)/Machine Learning (ML) models struggle with
the complexity and evolving nature of business process event logs, balancing
accuracy and interpretability. This paper proposes an entropy-driven model
selection approach and DAW-Transformer, which stands for Dynamic
Attribute-Aware Transformer, to integrate all attributes with a dynamic window
for better accuracy.
Design/methodology/approach - This paper introduces a novel next-activity
prediction approach that uses process entropy to assess the complexity of event
logs and dynamically select the most suitable ML model. A new transformer-based
architecture with multi-head attention and dynamic windowing mechanism,
DAW-Transformer, is proposed to capture long-range dependencies and utilize all
relevant event log attributes. Experiments were conducted on six public
datasets, and the performance was evaluated with process entropy.
Findings - The results demonstrate the effectiveness of the approach across
these publicly available datasets. DAW-Transformer achieved superior
performance, especially on high-entropy datasets such as Sepsis, exceeding
limited-window Multi-Transformers by 4.69% and a benchmark CNN-LSTM-SAtt model
by 3.07%. For low-entropy datasets like Road Traffic Fine, simpler, more
interpretable algorithms like Random Forest performed nearly as well as the
more complex DAW-Transformer and offered better handling of imbalanced data and
improved explainability.
Originality/value - This work's novelty lies in the proposed DAW-Transformer,
with its dynamic window and consideration of all relevant attributes.
Also, entropy-driven selection methods offer a robust, accurate, and
interpretable solution for next-activity prediction.
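The abstract does not define process entropy precisely; one natural reading, sketched below under that assumption, is the average Shannon entropy of the next-activity distribution conditioned on the current activity in the event log:

```python
import math
from collections import Counter, defaultdict

def process_entropy(traces):
    """Average Shannon entropy (bits) of the next-activity distribution
    conditioned on the current activity, over all observed activities.
    `traces` is a list of activity sequences from an event log."""
    transitions = defaultdict(Counter)
    for trace in traces:
        for cur, nxt in zip(trace, trace[1:]):
            transitions[cur][nxt] += 1
    entropies = []
    for counts in transitions.values():
        total = sum(counts.values())
        h = -sum((c / total) * math.log2(c / total) for c in counts.values())
        entropies.append(h)
    return sum(entropies) / len(entropies)

# A fully deterministic log has zero entropy; a log where "A" is followed
# by "B" or "C" with equal frequency has one bit of entropy for "A".
```

Under such a measure, low-entropy logs (near-deterministic control flow) would favor simpler models, while high-entropy logs would justify the heavier DAW-Transformer.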
|
2502.10574
|
Classifier-free Guidance with Adaptive Scaling
|
cs.CV
|
Classifier-free guidance (CFG) is an essential mechanism in contemporary
text-driven diffusion models. In practice, controlling the strength of
guidance involves a trade-off between the quality of the generated images and
their correspondence to the prompt. With strong guidance, generated images fit
the conditioning text closely but at the cost of their quality; conversely,
weak guidance yields high-quality results that do not match the prompt. In
this paper, we present $\beta$-CFG ($\beta$-adaptive scaling in
Classifier-Free Guidance), which controls the impact of guidance during
generation to resolve this trade-off. First, $\beta$-CFG stabilizes the effect
of guidance through gradient-based adaptive normalization. Second, $\beta$-CFG
uses a family of single-modal, time-dependent curves ($\beta$-distributions)
to dynamically adapt the trade-off between prompt matching and sample quality
during the diffusion denoising process. Our model obtains better FID scores
while maintaining text-to-image CLIP similarity scores at a level similar to
that of the reference CFG.
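The abstract leaves the exact normalization and schedule unspecified; the sketch below is an assumed illustration built on the standard CFG update (unconditional prediction plus a scaled guidance term), with the guidance term rescaled to unit norm and the scale following a Beta-shaped curve over the normalized timestep:

```python
from math import gamma

def beta_pdf(t, a, b):
    """Beta(a, b) density on [0, 1]."""
    norm = gamma(a) * gamma(b) / gamma(a + b)
    return t ** (a - 1) * (1 - t) ** (b - 1) / norm

def guided_eps(eps_uncond, eps_cond, t, w_max=7.5, a=2.0, b=2.0):
    """CFG step with an assumed Beta-shaped, normalized guidance term.
    t is the normalized timestep in (0, 1); a, b > 1 keep the curve
    single-modal. The exact beta-CFG normalization/schedule may differ."""
    g = [c - u for c, u in zip(eps_cond, eps_uncond)]
    norm = sum(x * x for x in g) ** 0.5 + 1e-8
    g = [x / norm for x in g]          # gradient-style adaptive normalization
    mode = (a - 1) / (a + b - 2)       # location of the curve's peak
    w = w_max * beta_pdf(t, a, b) / beta_pdf(mode, a, b)  # peaks at w_max
    return [u + w * x for u, x in zip(eps_uncond, g)]
```

The Beta curve makes guidance weak at the start and end of denoising and strongest in the middle (for symmetric a, b), one plausible way to trade prompt matching against sample quality over time.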
|
2502.10577
|
Man Made Language Models? Evaluating LLMs' Perpetuation of Masculine
Generics Bias
|
cs.CL cs.AI
|
Large language models (LLMs) have been shown to propagate and even amplify
gender bias, in English and other languages, in specific or constrained
contexts. However, no studies so far have focused on gender biases conveyed by
LLMs' responses to generic instructions, especially with regard to masculine
generics (MG). MG are a linguistic feature found in many gender-marked
languages, denoting the use of the masculine gender as a "default" or
supposedly neutral gender to refer to mixed groups of men and women, or to a
person whose gender is irrelevant or unknown. Numerous psycholinguistic
studies have shown that MG are not neutral and induce gender bias. This work
aims to analyze the use of MG by both proprietary and local LLMs in responses
to generic instructions and evaluate their MG bias rate. We focus on French and
create a human noun database from existing lexical resources. We filter
existing French instruction datasets to retrieve generic instructions and
analyze the responses of 6 different LLMs. Overall, we find that
$\approx$39.5\% of LLMs' responses to generic instructions are MG-biased
($\approx$73.1\% across responses with human nouns). Our findings also reveal
that LLMs are reluctant to use gender-fair language spontaneously.
|
2502.10581
|
Do We Need to Verify Step by Step? Rethinking Process Supervision from a
Theoretical Perspective
|
cs.LG cs.AI stat.ML
|
As large language models have evolved, it has become crucial to distinguish
between process supervision and outcome supervision -- two key reinforcement
learning approaches to complex reasoning tasks. While process supervision
offers intuitive advantages for long-term credit assignment, the precise
relationship between these paradigms has remained an open question.
Conventional wisdom suggests that outcome supervision is fundamentally more
challenging due to the trajectory-level coverage problem, leading to
significant investment in collecting fine-grained process supervision data.
In this paper, we take steps towards resolving this debate. Our main theorem
shows that, under standard data coverage assumptions, reinforcement learning
through outcome supervision is no more statistically difficult than through
process supervision, up to polynomial factors in horizon. At the core of this
result lies the novel Change of Trajectory Measure Lemma -- a technical tool
that bridges return-based trajectory measure and step-level distribution shift.
Furthermore, for settings with access to a verifier or a rollout capability, we
prove that any policy's advantage function can serve as an optimal process
reward model, providing a direct connection between outcome and process
supervision. These findings suggest that the empirically observed performance
gap -- if any -- between outcome and process supervision likely stems from
algorithmic limitations rather than inherent statistical difficulties,
potentially transforming how we approach data collection and algorithm design
for reinforcement learning.
|
2502.10582
|
Named entity recognition for Serbian legal documents: Design,
methodology and dataset development
|
cs.CL
|
Recent advancements in the field of natural language processing (NLP), and
especially in large language models (LLMs) and their numerous applications,
have brought research attention to the design of document processing tools and
to enhancements in document archiving, search, and retrieval. The domain of
official legal documents is especially interesting due to the vast amount of
data generated on a daily basis, as well as the significant community of
interested practitioners (lawyers, law offices, administrative workers, state
institutions, and citizens). Providing efficient ways to automate everyday
work involving legal documents is therefore expected to have significant
impact in different fields. In this work we present an LLM-based solution for
Named Entity Recognition (NER) in legal documents written in Serbian. It
leverages pre-trained bidirectional encoder representations from transformers
(BERT), carefully adapted to the specific task of identifying and classifying
data points in textual content. Besides the development of a novel dataset for
Serbian (involving public court rulings), the system design, and the applied
methodology, the paper also discusses the achieved performance metrics and
their implications for an objective assessment of the proposed solution.
Cross-validation tests on the manually labeled dataset, with a mean $F_1$
score of 0.96, and additional results on intentionally modified text inputs
confirm the applicability of the proposed system design and the robustness of
the developed NER solution.
|
2502.10585
|
Prediction uncertainty-aware planning using deep ensembles and
trajectory optimisation
|
cs.RO
|
Human motion is stochastic and ensuring safe robot navigation in a
pedestrian-rich environment requires proactive decision-making. Past research
relied on incorporating deterministic future states of surrounding pedestrians
which can be overconfident, leading to unsafe robot behaviour. The current paper
proposes a predictive uncertainty-aware planner that integrates neural network
based probabilistic trajectory prediction into planning. Our method uses a deep
ensemble based network for probabilistic forecasting of surrounding humans and
integrates the predictive uncertainty as constraints into the planner. We
compare numerous constraint satisfaction methods within the planner and
evaluate its performance on real-world pedestrian datasets. Further, offline
robot navigation was carried out on out-of-distribution pedestrian
trajectories inside a narrow corridor.
|
2502.10587
|
Towards Self-Supervised Covariance Estimation in Deep Heteroscedastic
Regression
|
cs.LG cs.AI stat.ML
|
Deep heteroscedastic regression models the mean and covariance of the target
distribution through neural networks. The challenge arises from
heteroscedasticity, which implies that the covariance is sample dependent and
is often unknown. Consequently, recent methods learn the covariance through
unsupervised frameworks, which unfortunately yield a trade-off between
computational complexity and accuracy. While this trade-off could be alleviated
through supervision, obtaining labels for the covariance is non-trivial. Here,
we study self-supervised covariance estimation in deep heteroscedastic
regression. We address two questions: (1) How should we supervise the
covariance assuming ground truth is available? (2) How can we obtain pseudo
labels in the absence of the ground-truth? We address (1) by analysing two
popular measures: the KL Divergence and the 2-Wasserstein distance.
Subsequently, we derive an upper bound on the 2-Wasserstein distance between
normal distributions with non-commutative covariances that is stable to
optimize. We address (2) through a simple neighborhood based heuristic
algorithm which results in surprisingly effective pseudo labels for the
covariance. Our experiments over a wide range of synthetic and real datasets
demonstrate that the proposed 2-Wasserstein bound coupled with pseudo label
annotations results in a computationally cheaper yet accurate deep
heteroscedastic regression.
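The neighborhood heuristic is not spelled out in the abstract; a minimal 1-D sketch, under the assumption that the pseudo label for a sample is the empirical variance of the targets of its k nearest inputs:

```python
def knn_pseudo_var(xs, ys, i, k=3):
    """Pseudo covariance label (1-D case: a variance) for sample i, taken as
    the empirical variance of the targets of its k nearest inputs (including
    the point itself). An assumed, simplified form of the neighborhood-based
    heuristic described in the abstract."""
    idx = sorted(range(len(xs)), key=lambda j: abs(xs[j] - xs[i]))[:k]
    mean = sum(ys[j] for j in idx) / k
    return sum((ys[j] - mean) ** 2 for j in idx) / k
```

Such pseudo labels would then supervise the covariance head directly, e.g. through the 2-Wasserstein bound derived in the paper.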
|
2502.10596
|
Post-training an LLM for RAG? Train on Self-Generated Demonstrations
|
cs.CL cs.AI cs.LG
|
Large language models (LLMs) often struggle with knowledge intensive NLP
tasks, such as answering "Who won the latest World Cup?" because the knowledge
they learn during training may be insufficient or outdated. Conditioning
generation on retrieved documents -- a technique known as retrieval augmented
generation (RAG) -- mitigates these shortcomings by allowing the model to
leverage in-context information. Practitioners can improve LLM RAG performance
by fine-tuning on retrieval-augmented instructions, but must beware that this
can cause undesirable model behaviors like hallucinations. We attribute this
degradation to the fact that the training data is likely to be
out-of-distribution for the model and may suffer from quality issues, such as
misalignment between retrievals and target responses (since retrievals are
frequently added post-hoc). We propose a recipe for training RAG-enabled LLMs
using self-generated demonstrations, thereby avoiding training on
out-of-distribution text and integrating retrievals into the LLM responses. We
evaluate our method on knowledge intensive question answering (QA) tasks and
show that our method teaches LLMs to properly handle in-context retrievals and
abstain from questions they will likely get wrong. Compared to conventional RA-IT
methods, our method prevents model degradation in non-RAG settings while
exhibiting superior QA performance.
|
2502.10597
|
BLI: A High-performance Bucket-based Learned Index with Concurrency
Support
|
cs.DB
|
Learned indexes are promising to replace traditional tree-based indexes. They
typically employ machine learning models to efficiently predict target
positions in strictly sorted linear arrays. However, the strict sorted order 1)
significantly increases insertion overhead, 2) makes it challenging to support
lock-free concurrency, and 3) harms in-node lookup/insertion efficiency due to
model inaccuracy.
In this paper, we introduce a \textit{Bucket-based Learned Index (BLI)},
which is an updatable in-memory learned index that adopts a "globally sorted,
locally unsorted" approach by replacing linear sorted arrays with
\textit{Buckets}. BLI optimizes the insertion throughput by only sorting
Buckets, not the key-value pairs within a Bucket. BLI strategically balances
four critical performance metrics: tree fanout, lookup/insert latency for
inner nodes, lookup/insert latency for leaf nodes, and memory consumption. To
minimize maintenance costs, BLI performs lightweight bulk loading, insert, node
scaling, node split, model retraining, and node merging adaptively. BLI
supports lock-free concurrency thanks to the unsorted design with Buckets. Our
results show that BLI achieves up to 2.21x better throughput than
state-of-the-art learned indexes, with up to 3.91x gains under multi-threaded
conditions.
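A minimal sketch of the "globally sorted, locally unsorted" idea (bucket boundaries kept sorted, key-value pairs within a bucket appended unsorted); BLI's learned models, lock-free concurrency, and adaptive maintenance are omitted:

```python
import bisect

class BucketIndex:
    """Toy bucket-based index: inserts append unsorted inside a bucket;
    only bucket lower bounds stay globally sorted. Splits sort one bucket."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.lows = [float("-inf")]   # sorted lower bounds of buckets
        self.buckets = [[]]           # unsorted (key, value) lists

    def _find(self, key):
        return bisect.bisect_right(self.lows, key) - 1

    def insert(self, key, value):
        b = self._find(key)
        self.buckets[b].append((key, value))   # no per-insert sorting
        if len(self.buckets[b]) > self.capacity:
            items = sorted(self.buckets[b])    # sort once, on split only
            mid = len(items) // 2
            self.buckets[b] = items[:mid]
            self.lows.insert(b + 1, items[mid][0])
            self.buckets.insert(b + 1, items[mid:])

    def lookup(self, key):
        for k, v in self.buckets[self._find(key)]:  # linear scan in bucket
            if k == key:
                return v
        return None
```

Deferring sorting to splits is what buys the insertion throughput the abstract describes, at the cost of a short linear scan within each bucket.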
|
2502.10599
|
Federated Learning-Driven Cybersecurity Framework for IoT Networks with
Privacy-Preserving and Real-Time Threat Detection Capabilities
|
cs.CR cs.LG cs.NI
|
The rapid expansion of the Internet of Things (IoT) ecosystem has transformed
various sectors but has also introduced significant cybersecurity challenges.
Traditional centralized security methods often struggle to balance privacy
preservation and real-time threat detection in IoT networks. To address these
issues, this study proposes a Federated Learning-Driven Cybersecurity Framework
designed specifically for IoT environments. The framework enables decentralized
data processing by training models locally on edge devices, ensuring data
privacy. Secure aggregation of these locally trained models is achieved using
homomorphic encryption, allowing collaborative learning without exposing
sensitive information.
The proposed framework utilizes recurrent neural networks (RNNs) for anomaly
detection, optimized for resource-constrained IoT networks. Experimental
results demonstrate that the system effectively detects complex cyber threats,
including distributed denial-of-service (DDoS) attacks, with over 98% accuracy.
Additionally, it improves energy efficiency by reducing resource consumption by
20% compared to centralized approaches.
This research addresses critical gaps in IoT cybersecurity by integrating
federated learning with advanced threat detection techniques. The framework
offers a scalable and privacy-preserving solution adaptable to various IoT
applications. Future work will explore the integration of blockchain for
transparent model aggregation and quantum-resistant cryptographic methods to
further enhance security in evolving technological landscapes.
|
2502.10600
|
Weighted quantization using MMD: From mean field to mean shift via
gradient flows
|
stat.ML cs.LG cs.NA math.NA
|
Approximating a probability distribution using a set of particles is a
fundamental problem in machine learning and statistics, with applications
including clustering and quantization. Formally, we seek a finite weighted
mixture of Dirac measures that best approximates the target distribution. While
much existing work relies on the Wasserstein distance to quantify approximation
errors, maximum mean discrepancy (MMD) has received comparatively less
attention, especially when allowing for variable particle weights. We study the
quantization problem from the perspective of minimizing MMD via gradient flow
in the Wasserstein-Fisher-Rao (WFR) geometry. This gradient flow yields an ODE
system from which we further derive a fixed-point algorithm called mean shift
interacting particles (MSIP). We show that MSIP extends the (non-interacting)
mean shift algorithm, widely used for identifying modes in kernel density
estimates. Moreover, we show that MSIP can be interpreted as preconditioned
gradient descent, and that it acts as a relaxation of Lloyd's algorithm for
clustering. Our numerical experiments demonstrate that MSIP and the WFR ODEs
outperform other algorithms for quantization of multi-modal and
high-dimensional targets.
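For reference, the squared MMD between a weighted Dirac mixture and an empirical target, which the described gradient flow minimizes, can be computed directly; this is a generic 1-D sketch with a Gaussian kernel, not the MSIP algorithm itself:

```python
from math import exp

def rbf(x, y, h=1.0):
    """Gaussian kernel on the real line with bandwidth h."""
    return exp(-((x - y) ** 2) / (2.0 * h * h))

def mmd_sq(particles, weights, samples, h=1.0):
    """Squared MMD between the weighted mixture sum_i w_i * delta_{x_i}
    and the empirical distribution of `samples` (1-D, Gaussian kernel)."""
    m = len(samples)
    pp = sum(wi * wj * rbf(xi, xj, h)
             for xi, wi in zip(particles, weights)
             for xj, wj in zip(particles, weights))
    pt = sum(wi * rbf(xi, s, h)
             for xi, wi in zip(particles, weights)
             for s in samples) / m
    tt = sum(rbf(s, u, h) for s in samples for u in samples) / (m * m)
    return pp - 2.0 * pt + tt
```

The WFR gradient flow moves both the particle locations and the weights along the descent directions of this objective.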
|
2502.10601
|
Data-driven Super-Resolution of Flood Inundation Maps using Synthetic
Simulations
|
cs.CV cs.LG
|
The frequency of extreme flood events is increasing throughout the world.
Daily, high-resolution (30m) Flood Inundation Maps (FIM) observed from space
play a key role in informing mitigation and preparedness efforts to counter
these extreme events. However, the temporal frequency of publicly available
high-resolution FIMs, e.g., from Landsat, is on the order of two weeks, thus
limiting the effective monitoring of flood inundation dynamics. Conversely,
global, low-resolution (~300m) Water Fraction Maps (WFM) are publicly available
from NOAA VIIRS daily. Motivated by the recent successes of deep learning
methods for single image super-resolution, we explore the effectiveness and
limitations of similar data-driven approaches to downscaling low-resolution
WFMs to high-resolution FIMs. To overcome the scarcity of high-resolution FIMs,
we train our models with high-quality synthetic data obtained through
physics-based simulations. We evaluate our models on real-world data from flood
events in the state of Iowa. The study indicates that data-driven approaches
exhibit superior reconstruction accuracy over non-data-driven alternatives and
that the use of synthetic data is a viable proxy for training purposes.
Additionally, we show that our trained models can exhibit superior zero-shot
performance when transferred to regions with hydroclimatological similarity to
the U.S. Midwest.
|
2502.10603
|
Adaptive Neural Networks for Intelligent Data-Driven Development
|
cs.CV
|
Advances in machine learning methods for computer vision tasks have led to
their consideration for safety-critical applications like autonomous driving.
However, effectively integrating these methods into the automotive development
lifecycle remains challenging. Since the performance of machine learning
algorithms relies heavily on the training data provided, the data and model
development lifecycle play a key role in successfully integrating these
components into the product development lifecycle. Existing models frequently
encounter difficulties recognizing or adapting to novel instances not present
in the original training dataset. This poses a significant risk for reliable
deployment in dynamic environments. To address this challenge, we propose an
adaptive neural network architecture and an iterative development framework
that enables users to efficiently incorporate previously unknown objects into
the current perception system. Our approach builds on continuous learning,
emphasizing the necessity of dynamic updates to reflect real-world deployment
conditions. Specifically, we introduce a pipeline with three key components:
(1) a scalable network extension strategy to integrate new classes while
preserving existing performance, (2) a dynamic OoD detection component that
requires no additional retraining for newly added classes, and (3) a
retrieval-based data augmentation process tailored for safety-critical
deployments. The integration of these components establishes a pragmatic and
adaptive pipeline for the continuous evolution of perception systems in the
context of autonomous driving.
|
2502.10605
|
Batch-Adaptive Annotations for Causal Inference with Complex-Embedded
Outcomes
|
stat.ML cs.LG
|
Estimating the causal effects of an intervention on outcomes is crucial. But
often in domains such as healthcare and social services, this critical
information about outcomes is documented by unstructured text, e.g. clinical
notes in healthcare or case notes in social services. For example, street
outreach to homeless populations is a common social services intervention, with
ambiguous and hard-to-measure outcomes. Outreach workers compile case note
records which are informative of outcomes. Although experts can succinctly
extract relevant information from such unstructured case notes, it is costly or
infeasible to do so for an entire corpus, which can span millions of notes.
Recent advances in large language models (LLMs) enable scalable but potentially
inaccurate annotation of unstructured text data. We optimize the decision of
which datapoints should receive expert annotation versus noisy imputation
under budget constraints, within a "design-based" estimator that combines
limited expert data and plentiful noisy imputations via \textit{causal
inference with missing outcomes}. We develop a two-stage adaptive algorithm
that optimizes the expert annotation probabilities and estimates the ATE with
optimal asymptotic variance.
We demonstrate how expert labels and LLM annotations can be combined
strategically, efficiently and responsibly in a causal estimator. We run
experiments on simulated data and two real-world datasets, including one on
street outreach, to show the versatility of our proposed method.
|
2502.10606
|
HIPPo: Harnessing Image-to-3D Priors for Model-free Zero-shot 6D Pose
Estimation
|
cs.CV cs.RO
|
This work focuses on model-free zero-shot 6D object pose estimation for
robotics applications. While existing methods can estimate the precise 6D pose
of objects, they heavily rely on curated CAD models or reference images, the
preparation of which is a time-consuming and labor-intensive process. Moreover,
in real-world scenarios, 3D models or reference images may not be available in
advance and instant robot reaction is desired. In this work, we propose a novel
framework named HIPPo, which eliminates the need for curated CAD models and
reference images by harnessing image-to-3D priors from Diffusion Models,
enabling model-free zero-shot 6D pose estimation. Specifically, we construct
HIPPo Dreamer, a rapid image-to-mesh model built on a multiview Diffusion Model
and a 3D reconstruction foundation model. Our HIPPo Dreamer can generate a 3D
mesh of any unseen object from a single glance in just a few seconds. Then, as
more observations are acquired, we propose to continuously refine the diffusion
prior mesh model by joint optimization of object geometry and appearance. This
is achieved by a measurement-guided scheme that gradually replaces the
plausible diffusion priors with more reliable online observations.
Consequently, HIPPo can instantly estimate and track the 6D pose of a novel
object and maintain a complete mesh for immediate robotic applications.
Thorough experiments on various benchmarks show that HIPPo outperforms
state-of-the-art methods in 6D object pose estimation when prior reference
images are limited.
|
2502.10608
|
Universal Lesion Segmentation Challenge 2023: A Comparative Research of
Different Algorithms
|
cs.CV cs.LG
|
In recent years, machine learning algorithms have achieved much success in
segmenting lesions across various tissues. There is, however, not one
satisfying model that works well on all tissue types universally. In response
to this need, we attempt to train a model that 1) works well on all tissue
types, and 2) is capable of still performing fast inferences. To this end, we
design our architectures, test multiple existing architectures, compare their
results, and settle upon SwinUnet. We document our rationales, successes, and
failures. Finally, we propose some further directions that we think are worth
exploring. codes: https://github.com/KWFredShi/ULS2023NGKD.git
|
2502.10610
|
Reachability-Aware Reinforcement Learning for Collision Avoidance in
Human-Machine Shared Control
|
cs.RO cs.SY eess.SY
|
Human-machine shared control in critical collision scenarios aims to aid
drivers' accident avoidance through intervening only when necessary. Existing
methods count on replanning collision-free trajectories and imposing
human-machine tracking, which usually interrupts the driver's intent and
increases the risk of conflict. Additionally, the lack of guaranteed trajectory
feasibility under extreme conditions can compromise safety and reliability.
This paper introduces a Reachability-Aware Reinforcement Learning framework for
shared control, guided by Hamilton-Jacobi (HJ) reachability analysis. Machine
intervention is activated only when the vehicle approaches the Collision
Avoidance Reachable Set (CARS), which represents states where collision is
unavoidable. First, we precompute the reachability distributions and the CARS
by solving the Bellman equation using offline data. To reduce human-machine
conflicts, we develop a driver model for sudden obstacles and propose an
authority allocation strategy considering key collision avoidance features.
Finally, we train a reinforcement learning agent to reduce human-machine
conflicts while enforcing the hard constraint of avoiding entry into the CARS.
The proposed method was tested on a real vehicle platform. Results show that
the controller intervenes effectively near CARS to prevent collisions while
maintaining improved original driving task performance. Robustness analysis
further supports its flexibility across different driver attributes.
|
2502.10611
|
Demonstration of a planar multimodal periodic filter at THz frequencies
|
physics.app-ph cs.SY eess.SY
|
This paper presents a planar multimodal periodic filter that is constructed
from alternating sections of coplanar stripline and the odd-mode of a
finite-ground plane coplanar waveguide constructed on a 1 um silicon nitride
substrate to facilitate operation at THz frequencies. The multimode
configuration differs from standard single-mode periodic filters and enables
flexible designs and the possibility for active control of the filter
characteristics. For this proof-of-concept, we present the relevant theory and
design procedures required to develop a band-stop filter that has a center
frequency of fc = 0.8 THz and a bandwidth of df = 0.07 THz. We find good
agreement between theory, simulation, and experiment.
|
2502.10614
|
Optimizing CNN Architectures for Advanced Thoracic Disease
Classification
|
cs.CV cs.AI cs.LG
|
Machine learning, particularly convolutional neural networks (CNNs), has
shown promise in medical image analysis, especially for thoracic disease
detection using chest X-ray images. In this study, we evaluate various CNN
architectures, including binary classification, multi-label classification, and
ResNet50 models, to address challenges like dataset imbalance, variations in
image quality, and hidden biases. We introduce advanced preprocessing
techniques such as principal component analysis (PCA) for image compression and
propose a novel class-weighted loss function to mitigate imbalance issues. Our
results highlight the potential of CNNs in medical imaging but emphasize that
issues like unbalanced datasets and variations in image acquisition methods
must be addressed for optimal model performance.
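The paper's exact class-weighted loss is not given in the abstract; one standard form, shown here as an assumed illustration, is a binary cross-entropy whose positive terms are up-weighted per class to counter imbalance:

```python
from math import log

def weighted_bce(y_true, y_prob, pos_weight):
    """Class-weighted binary cross-entropy for multi-label predictions.
    pos_weight[c] up-weights positive examples of class c (e.g. the
    negative-to-positive ratio in the training set); this weighting is one
    common choice, not necessarily the paper's exact loss."""
    eps = 1e-7
    total, n = 0.0, 0
    for yt, yp in zip(y_true, y_prob):            # iterate over samples
        for t, p, w in zip(yt, yp, pos_weight):   # iterate over classes
            p = min(max(p, eps), 1.0 - eps)       # clip for stability
            total += -(w * t * log(p) + (1 - t) * log(1.0 - p))
            n += 1
    return total / n
```

With pos_weight set to the negative-to-positive ratio per disease label, rare findings contribute as much gradient signal as common ones.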
|
2502.10615
|
Retrieval-augmented Encoders for Extreme Multi-label Text Classification
|
cs.CL
|
Extreme multi-label classification (XMC) seeks to find relevant labels from
an extremely large label collection for a given text input. To tackle such a
vast label space, current state-of-the-art methods fall into two categories.
The one-versus-all (OVA) method uses learnable label embeddings for each label,
excelling at memorization (i.e., capturing detailed training signals for
accurate head label prediction). In contrast, the dual-encoder (DE) model maps
input and label text into a shared embedding space for better generalization
(i.e., the capability of predicting tail labels with limited training data),
but may fall short at memorization. To achieve generalization and memorization,
existing XMC methods often combine DE and OVA models, which involves complex
training pipelines. Inspired by the success of retrieval-augmented language
models, we propose the Retrieval-augmented Encoders for XMC (RAEXMC), a novel
framework that equips a DE model with retrieval-augmented capability for
efficient memorization without additional trainable parameters. During training,
RAEXMC is optimized by the contrastive loss over a knowledge memory that
consists of both input instances and labels. During inference, given a test
input, RAEXMC retrieves the top-$K$ keys from the knowledge memory, and
aggregates the corresponding values as the prediction scores. We showcase the
effectiveness and efficiency of RAEXMC on four public LF-XMC benchmarks. RAEXMC
not only advances the state-of-the-art (SOTA) DE method DEXML, but also
achieves more than 10x speedup on the largest LF-AmazonTitles-1.3M dataset
under the same training environment with 8 A100 GPUs.
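The described inference step (retrieve the top-K keys from the knowledge memory, aggregate the corresponding values as prediction scores) can be sketched generically; the embeddings, dot-product similarity, and similarity-sum aggregation here are illustrative assumptions, not RAEXMC's exact formulation:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def retrieve_scores(query, memory, num_labels, k=2):
    """memory: list of (embedding, label_id) key/value pairs built from
    training instances and labels. Retrieve the top-k entries by dot-product
    similarity to the query embedding and aggregate the similarities per
    label as prediction scores."""
    top = sorted(memory, key=lambda entry: dot(query, entry[0]),
                 reverse=True)[:k]
    scores = [0.0] * num_labels
    for emb, label in top:
        scores[label] += dot(query, emb)
    return scores
```

Because the memory is a fixed lookup structure, this adds memorization capacity to a dual encoder without any extra trainable parameters.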
|
2502.10616
|
Learning semantical dynamics and spatiotemporal collaboration for human
pose estimation in video
|
cs.CV
|
Temporal modeling and spatio-temporal collaboration are pivotal techniques
for video-based human pose estimation. Most state-of-the-art methods adopt
optical flow or temporal difference, learning local visual content
correspondence across frames at the pixel level, to capture motion dynamics.
However, such a paradigm essentially relies on localized pixel-to-pixel
similarity, which neglects the semantical correlations among frames and is
vulnerable to image quality degradations (e.g. occlusions or blur). Moreover,
existing approaches often combine motion and spatial (appearance) features via
simple concatenation or summation, leading to practical challenges in fully
leveraging these distinct modalities. In this paper, we present a novel
framework that learns multi-level semantical dynamics and dense spatio-temporal
collaboration for multi-frame human pose estimation. Specifically, we first
design a Multi-Level Semantic Motion Encoder using a multi-masked context and
pose reconstruction strategy. This strategy stimulates the model to explore
multi-granularity spatiotemporal semantic relationships among frames by
progressively masking the features of (patch) cubes and frames. We further
introduce a Spatial-Motion Mutual Learning module which densely propagates and
consolidates context information from spatial and motion features to enhance
the capability of the model. Extensive experiments demonstrate that our
approach sets new state-of-the-art results on three benchmark datasets,
PoseTrack2017, PoseTrack2018, and PoseTrack21.
|
2502.10620
|
ProMRVL-CAD: Proactive Dialogue System with Multi-Round Vision-Language
Interactions for Computer-Aided Diagnosis
|
cs.AI
|
Recent advancements in large language models (LLMs) have demonstrated
extraordinary comprehension capabilities with remarkable breakthroughs on
various vision-language tasks. However, the application of LLMs in generating
reliable medical diagnostic reports remains in the early stages. Currently,
medical LLMs typically feature a passive interaction model where doctors
respond to patient queries with little or no involvement in analyzing medical
images. In contrast, some ChatBots simply respond to predefined queries based
on visual inputs, lacking interactive dialogue or consideration of medical
history. As such, there is a gap between LLM-generated patient-ChatBot
interactions and those occurring in actual patient-doctor consultations. To
bridge this gap, we develop an LLM-based dialogue system, namely proactive
multi-round vision-language interactions for computer-aided diagnosis
(ProMRVL-CAD), to generate patient-friendly disease diagnostic reports. The
proposed ProMRVL-CAD system allows proactive dialogue to provide patients with
constant and reliable medical access via an integration of knowledge graph into
a recommendation system. Specifically, we devise two generators: a Proactive
Question Generator (Pro-Q Gen) to generate proactive questions that guide the
diagnostic procedure and a Multi-Vision Patient-Text Diagnostic Report
Generator (MVP-DR Gen) to produce high-quality diagnostic reports. Evaluated
on two real-world publicly available datasets, MIMIC-CXR and IU-Xray, our
model generates medical reports of higher quality. We further demonstrate that
the performance of ProMRVL remains robust in scenarios with low image quality.
Moreover, we have created a synthetic medical dialogue dataset that simulates
proactive diagnostic interactions between patients and doctors, serving as a
valuable resource for training LLMs.
|
2502.10624
|
Network evasion detection with Bi-LSTM model
|
cs.CR cs.AI
|
Network evasion detection aims to determine whether network flows coming from
the link layer carry an evasion threat, i.e., an attempt to disguise data
traffic and deceive a detection system by obscuring its signature. Since
previous research works suffer from various shortcomings, we propose a deep
learning architecture to handle this problem. In this paper, we extract
critical information from data frames as key features, and we specifically
propose to use a bidirectional long short-term memory (Bi-LSTM) neural
network, which shows outstanding performance in tracing sequential
information, to encode both past and future traits of the network flows.
Furthermore, we place a softmax classifier on top of the Bi-LSTM to select the
correct class. All experimental results show that a deep Bi-LSTM achieves
significant performance in network evasion detection, with an average accuracy
of 96.1%.
|
2502.10626
|
K-Edit: Language Model Editing with Contextual Knowledge Awareness
|
cs.LG cs.AI
|
As the world changes, we need to be able to update our models and correct
false information without costly retraining. Knowledge-based model editing
enables precise modifications to the weights of large language models in order
to modify the information encoded within. Recent approaches have seen success
in enabling recall of edited information for thousands of edits at once.
However, these approaches fail to produce edits that account for associated
contextual information. We present K-Edit, an effective approach to generating
contextually consistent knowledge edits. By using knowledge graphs, which
maintain contextual consistency when an edge is edited, we are able to generate
additional \textit{contextual edits} that ensure consistency of related
information in the language model. Our experiments demonstrate significant
improvements in multi-hop question answering while maintaining the general
effectiveness and scalability of model edits.
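The core idea — an edge edit in a knowledge graph implying additional contextual edits — can be illustrated with a toy rule system. The entities, relations, and implication rules below are invented for illustration and are not K-Edit's actual machinery.

```python
# Toy knowledge graph: (subject, relation) -> object.
kg = {
    ("Acme", "ceo"): "Alice",
    ("Alice", "employer"): "Acme",
}

# Relations implied by an edited edge; purely illustrative rules.
implications = {
    "ceo": [lambda subj, new_obj: (new_obj, "employer", subj)],
}

def contextual_edits(subj, rel, new_obj):
    """Return the primary edit plus any edits implied by the graph rules,
    keeping related facts consistent with the change."""
    edits = [(subj, rel, new_obj)]
    for rule in implications.get(rel, []):
        edits.append(rule(subj, new_obj))
    return edits

edits = contextual_edits("Acme", "ceo", "Bob")
# Contains the primary edit plus the implied (Bob, employer, Acme).
```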
|
2502.10628
|
On Self-Adaptive Perception Loss Function for Sequential Lossy
Compression
|
cs.LG cs.IT math.IT
|
We consider causal, low-latency, sequential lossy compression, with mean
squared-error (MSE) as the distortion loss, and a perception loss function
(PLF) to enhance the realism of reconstructions. As the main contribution, we
propose and analyze a new PLF that considers the joint distribution between the
current source frame and the previous reconstructions. We establish the
theoretical rate-distortion-perception function for first-order Markov sources
and analyze the Gaussian model in detail. From a qualitative perspective, the
proposed metric can simultaneously avoid the error-permanence phenomenon and
also better exploit the temporal correlation between high-quality
reconstructions. The proposed metric is referred to as self-adaptive perception
loss function (PLF-SA), as its behavior adapts to the quality of reconstructed
frames. We provide a detailed comparison of the proposed perception loss
function with previous approaches through both information theoretic analysis
as well as experiments involving moving MNIST and UVG datasets.
|
2502.10631
|
ControllableGPT: A Ground-Up Designed Controllable GPT for Molecule
Optimization
|
cs.LG cs.AI q-bio.BM
|
Large Language Models (LLMs) employ three popular training approaches: Masked
Language Models (MLM), Causal Language Models (CLM), and Sequence-to-Sequence
Models (seq2seq). However, each approach has its strengths and limitations, and
faces challenges in addressing specific tasks that require controllable and
bidirectional generation, such as drug optimization. To address this challenge,
inspired by the biological processes of growth and evolution, which involve the
expansion, shrinking, and mutation of sequences, we introduce ControllableGPT.
This initiative represents the first effort to combine the advantages of MLM,
CLM, and seq2seq into a single unified, controllable GPT framework. It enables
the precise management of specific locations and ranges within a sequence,
allowing for expansion, reduction, or mutation over chosen or random lengths,
while maintaining the integrity of any specified positions or subsequences. In
this work, we designed ControllableGPT for drug optimization from the ground
up, which included proposing the Causally Masked Seq2seq (CMS) objective,
developing the training corpus, introducing a novel pre-training approach, and
devising a unique generation process. We demonstrate the effectiveness and
controllability of ControllableGPT by conducting experiments on drug
optimization tasks for both viral and cancer benchmarks, surpassing competing
baselines.
|
2502.10632
|
Code-Mixed Telugu-English Hate Speech Detection
|
cs.CL
|
Hate speech detection in low-resource languages like Telugu is a growing
challenge in NLP. This study investigates transformer-based models, including
TeluguHateBERT, HateBERT, DeBERTa, Muril, IndicBERT, Roberta, and
Hindi-Abusive-MuRIL, for classifying hate speech in Telugu. We fine-tune these
models using Low-Rank Adaptation (LoRA) to optimize efficiency and performance.
Additionally, we explore a multilingual approach by translating Telugu text
into English using Google Translate to assess its impact on classification
accuracy.
Our experiments reveal that most models show improved performance after
translation, with DeBERTa and Hindi-Abusive-MuRIL achieving higher accuracy and
F1 scores compared to training directly on Telugu text. Notably,
Hindi-Abusive-MuRIL outperforms all other models in both the original Telugu
dataset and the translated dataset, demonstrating its robustness across
different linguistic settings. This suggests that translation enables models to
leverage richer linguistic features available in English, leading to improved
classification performance. The results indicate that multilingual processing
can be an effective approach for hate speech detection in low-resource
languages. These findings demonstrate that transformer models, when fine-tuned
appropriately, can significantly improve hate speech detection in Telugu,
paving the way for more robust multilingual NLP applications.
|
2502.10634
|
Lost in the Passage: Passage-level In-context Learning Does Not
Necessarily Need a "Passage"
|
cs.CL
|
By simply incorporating demonstrations into the context, in-context learning
(ICL) enables large language models (LLMs) to achieve impressive performance on many
tasks. In this paper, we focus on passage-level long-context ICL for generation
tasks and find that LLMs cannot learn the intrinsic relationships between the
demonstration passage and the generation output. We conduct experiments with
different LLMs on two typical generation tasks including single-document QA and
distractor generation, demonstrating that even a completely meaningless
demonstration passage at one quarter of the original length achieves much
better performance than the full passage. Analysis of attention scores
reveals that LLMs pay little attention to passages compared to other
components in the prompt, and that little attention flows from the passage to
other parts of the demonstration, which
further confirms our finding. Additionally, experiments on context compression
indicate that compression approaches proven effective on other long-context
tasks are not suitable for passage-level ICL, since simply using shorter
meaningless demonstration passages has achieved competitive performance.
|
2502.10635
|
Privacy Preservation through Practical Machine Unlearning
|
cs.LG cs.CR
|
Machine Learning models thrive on vast datasets, continuously adapting to
provide accurate predictions and recommendations. However, in an era dominated
by privacy concerns, Machine Unlearning emerges as a transformative approach,
enabling the selective removal of data from trained models. This paper examines
methods such as Naive Retraining and Exact Unlearning via the SISA framework,
evaluating their Computational Costs, Consistency, and feasibility using the
$\texttt{HSpam14}$ dataset. We explore the potential of integrating unlearning
principles into Positive Unlabeled (PU) Learning to address challenges posed by
partially labeled datasets. Our findings highlight the promise of unlearning
frameworks like $\textit{DaRE}$ for ensuring privacy compliance while
maintaining model performance, albeit with significant computational
trade-offs. This study underscores the importance of Machine Unlearning in
achieving ethical AI and fostering trust in data-driven systems.
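The SISA idea referenced above — shard the training data, train one constituent model per shard, and unlearn a point by retraining only the shard that contained it — can be sketched minimally. The "model" here is just a shard mean, a stand-in for a real learner.

```python
import statistics

class SisaEnsemble:
    """Minimal SISA-style sketch: shard the data, fit one (trivial) model
    per shard, and unlearn a point by retraining only its shard."""

    def __init__(self, data, n_shards=3):
        self.shards = [data[i::n_shards] for i in range(n_shards)]
        self.models = [self._fit(s) for s in self.shards]

    @staticmethod
    def _fit(shard):
        # Stand-in "model": the shard mean. A real system trains a learner.
        return statistics.mean(shard) if shard else 0.0

    def predict(self):
        # Aggregate the constituent models (here: average their outputs).
        return statistics.mean(self.models)

    def unlearn(self, value):
        # Only the affected shard is retrained, not the whole ensemble --
        # this is the source of SISA's computational savings.
        for i, shard in enumerate(self.shards):
            if value in shard:
                shard.remove(value)
                self.models[i] = self._fit(shard)
                return
        raise KeyError(value)

ens = SisaEnsemble([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
ens.unlearn(6.0)   # retrains one shard; the other two are untouched
```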
|
2502.10636
|
USER-VLM 360: Personalized Vision Language Models with User-aware Tuning
for Social Human-Robot Interactions
|
cs.AI cs.HC cs.RO
|
The integration of vision-language models into robotic systems constitutes a
significant advancement in enabling machines to interact with their
surroundings in a more intuitive manner. While VLMs offer rich multimodal
reasoning, existing approaches lack user-specific adaptability, often relying
on generic interaction paradigms that fail to account for individual
behavioral, contextual, or socio-emotional nuances. When customization is
attempted, ethical concerns arise from unmitigated biases in user data, risking
exclusion or unfair treatment. To address these dual challenges, we propose
User-VLM 360{\deg}, a holistic framework integrating multimodal user modeling
with bias-aware optimization. Our approach features: (1) user-aware tuning that
adapts interactions in real time using visual-linguistic signals; (2) bias
mitigation via preference optimization; and (3) curated 360{\deg} socio-emotive
interaction datasets annotated with demographic, emotion, and relational
metadata. Evaluations across eight benchmarks demonstrate state-of-the-art
results: +35.3% F1 in personalized VQA, +47.5% F1 in facial features
understanding, 15% bias reduction, and 30X speedup over baselines. Ablation
studies confirm component efficacy, and deployment on the Pepper robot
validates real-time adaptability across diverse users. We open-source
parameter-efficient 3B/10B models and an ethical verification framework for
responsible adaptation.
|
2502.10637
|
Proof of Response
|
cs.DC cs.AI cs.CR
|
We present a mechanism for a network of participants that allows one
participant (Alice) to request some data from another participant (Bob) and
either receive a response from Bob within a known-in-advance, bounded time b,
receive a proof that at least one edge on the path to Bob was broken within
b, or receive a streaming payment proportional to the time elapsed beyond b
during which neither was received. This mechanism
allows for building downstream applications that require provable responses
from other participants, such as decentralized storage solutions, decentralized
AI agents, and more.
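The three mutually exclusive outcomes can be sketched as a settlement function; the payment rate and call signature are hypothetical illustrations, not part of the paper's protocol.

```python
def settle(elapsed, b, response=None, break_proof=None, rate=1.0):
    """Sketch of the three outcomes Alice can receive: a response within b,
    a proof of a broken edge within b, or a streaming payment that grows
    with the time elapsed past b (rate is a hypothetical parameter)."""
    if elapsed <= b:
        if response is not None:
            return ("response", response)
        if break_proof is not None:
            return ("proof", break_proof)
    # Neither arrived in time: payment accrues for the overrun only.
    payment = max(0.0, elapsed - b) * rate
    return ("payment", payment)

outcome = settle(15, b=10)   # 5 time units past the bound
```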
|
2502.10639
|
LSTM-based Selective Dense Text Retrieval Guided by Sparse Lexical
Retrieval
|
cs.IR
|
This paper studies fast fusion of dense retrieval and sparse lexical
retrieval, and proposes a cluster-based selective dense retrieval method called
CluSD guided by sparse lexical retrieval. CluSD takes a lightweight
cluster-based approach and exploits the overlap of sparse retrieval results and
embedding clusters in a two-stage selection process with an LSTM model to
quickly identify relevant clusters while incurring limited extra memory space
overhead. CluSD triggers partial dense retrieval and performs cluster-based
block disk I/O if needed. This paper evaluates CluSD and compares it with
several baselines for searching in-memory and on-disk MS MARCO and BEIR
datasets.
|
2502.10641
|
Toward Equitable Access: Leveraging Crowdsourced Reviews to Investigate
Public Perceptions of Health Resource Accessibility
|
cs.CL
|
Access to health resources is a critical determinant of public well-being and
societal resilience, particularly during public health crises when demand for
medical services and preventive care surges. However, disparities in
accessibility persist across demographic and geographic groups, raising
concerns about equity. Traditional survey methods often fall short due to
limitations in coverage, cost, and timeliness. This study leverages
crowdsourced data from Google Maps reviews, applying advanced natural language
processing techniques, specifically ModernBERT, to extract insights on public
perceptions of health resource accessibility in the United States during the
COVID-19 pandemic. Additionally, we employ Partial Least Squares regression to
examine the relationship between accessibility perceptions and key
socioeconomic and demographic factors including political affiliation, racial
composition, and educational attainment. Our findings reveal that public
perceptions of health resource accessibility varied significantly across the
U.S., with disparities peaking during the pandemic and slightly easing
post-crisis. Political affiliation, racial demographics, and education levels
emerged as key factors shaping these perceptions. These findings underscore the
need for targeted interventions and policy measures to address inequities,
fostering a more inclusive healthcare infrastructure that can better withstand
future public health challenges.
|
2502.10642
|
Demographic User Modeling for Social Robotics with Multimodal
Pre-trained Models
|
cs.AI cs.CV
|
This paper investigates the performance of multimodal pre-trained models in
user profiling tasks based on visual-linguistic demographic data. These models
are critical for adapting to the needs and preferences of human users in social
robotics, thereby providing personalized responses and enhancing interaction
quality. First, we introduce two datasets specifically curated to represent
demographic characteristics derived from user facial images. Next, we evaluate
the performance of a prominent contrastive multimodal pre-trained model, CLIP,
on these datasets, both in its out-of-the-box state and after fine-tuning.
Initial results indicate that CLIP performs suboptimally in matching images
to demographic descriptions without fine-tuning. Although fine-tuning
significantly enhances its predictive capacity, the model continues to exhibit
limitations in effectively generalizing subtle demographic nuances. To address
this, we propose adopting a masked image modeling strategy to improve
generalization and better capture subtle demographic attributes. This approach
offers a pathway for enhancing demographic sensitivity in multimodal user
modeling tasks.
|
2502.10645
|
BabyLM Turns 3: Call for papers for the 2025 BabyLM workshop
|
cs.CL
|
BabyLM aims to dissolve the boundaries between cognitive modeling and
language modeling. We call for both workshop papers and for researchers to join
the 3rd BabyLM competition. As in previous years, we call for participants in
the data-efficient pretraining challenge in the general track. This year, we
also offer a new track: INTERACTION. This new track encourages interactive
behavior, learning from a teacher, and adapting the teaching material to the
student. We also call for papers outside the competition in any relevant areas.
These include training efficiency, cognitively plausible research, weak model
evaluation, and more.
|
2502.10646
|
Dark Deceptions in DHCP: Dismantling Network Defenses
|
cs.CR cs.LG
|
This paper explores vulnerabilities in the Dynamic Host Configuration
Protocol (DHCP) and their implications on the Confidentiality, Integrity, and
Availability (CIA) triad. Through an analysis of various attacks, including
DHCP Starvation, Rogue DHCP Servers, Replay Attacks, and TunnelVision exploits,
the paper provides a taxonomic classification of threats, assesses risks, and
proposes appropriate controls. The discussion also highlights the dangers of
VPN decloaking through DHCP exploits and underscores the importance of
safeguarding network infrastructures. By bringing awareness to the TunnelVision
exploit, this paper aims to mitigate risks associated with these prevalent
vulnerabilities.
|
2502.10647
|
A Power Transform
|
cs.LG stat.ML stat.TH
|
Power transforms, such as the Box-Cox transform and Tukey's ladder of powers,
are a fundamental tool in mathematics and statistics. These transforms are
primarily used for normalizing and standardizing datasets, effectively by
raising values to a power. In this work I present a novel power transform, and
I show that it serves as a unifying framework for a wide family of loss
functions, kernel functions, probability distributions, bump functions, and
neural network activation functions.
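For context, the classical Box-Cox transform that this line of work builds on can be written in a few lines; the paper's own novel transform is not specified in the abstract, so this shows only the family being generalized.

```python
import math

def box_cox(x, lam):
    """Classic Box-Cox power transform for x > 0: (x^lam - 1) / lam,
    with the log limit at lam = 0."""
    if lam == 0:
        return math.log(x)
    return (x ** lam - 1.0) / lam

# lam = 1 is an affine shift of the identity; lam -> 0 approaches log.
assert abs(box_cox(5.0, 1.0) - 4.0) < 1e-12
assert box_cox(math.e, 0) == 1.0
```

The λ = 0 case is taken as the limit rather than the formula itself, since (x^λ − 1)/λ → ln x as λ → 0.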
|
2502.10648
|
LLM-Lasso: A Robust Framework for Domain-Informed Feature Selection and
Regularization
|
cs.LG stat.ML
|
We introduce LLM-Lasso, a novel framework that leverages large language
models (LLMs) to guide feature selection in Lasso $\ell_1$ regression. Unlike
traditional methods that rely solely on numerical data, LLM-Lasso incorporates
domain-specific knowledge extracted from natural language, enhanced through a
retrieval-augmented generation (RAG) pipeline, to seamlessly integrate
data-driven modeling with contextual insights. Specifically, the LLM generates
penalty factors for each feature, which are converted into weights for the
Lasso penalty using a simple, tunable model. Features identified as more
relevant by the LLM receive lower penalties, increasing their likelihood of
being retained in the final model, while less relevant features are assigned
higher penalties, reducing their influence. Importantly, LLM-Lasso has an
internal validation step that determines how much to trust the contextual
knowledge in our prediction pipeline. Hence it addresses key challenges in
robustness, making it suitable for mitigating potential inaccuracies or
hallucinations from the LLM. In various biomedical case studies, LLM-Lasso
outperforms standard Lasso and existing feature selection baselines, all while
ensuring the LLM operates without prior access to the datasets. To our
knowledge, this is the first approach to effectively integrate conventional
feature selection techniques directly with LLM-based domain-specific reasoning.
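The weighted-penalty step — LLM-derived penalty factors scaling each feature's Lasso shrinkage — has a closed form in the special case of an orthonormal design, which makes the mechanism easy to see. The weights below are hypothetical, not taken from the paper.

```python
import math

def soft_threshold(z, t):
    """Proximal operator of t * |.|: shrink z toward zero by t."""
    return math.copysign(max(abs(z) - t, 0.0), z)

def weighted_lasso_orthonormal(ols_coefs, penalty_factors, lam):
    """Weighted Lasso in the special case of an orthonormal design, where
    the solution is per-coefficient soft-thresholding of the OLS estimates.
    penalty_factors play the role of the LLM-derived weights: smaller
    factors mean less shrinkage, so the feature is more likely retained."""
    return [soft_threshold(b, lam * w)
            for b, w in zip(ols_coefs, penalty_factors)]

ols = [2.0, 0.5, -1.2]
weights = [0.1, 1.0, 1.0]   # hypothetical: LLM deems feature 0 most relevant
beta = weighted_lasso_orthonormal(ols, weights, lam=1.0)
# Feature 0 keeps most of its weight; feature 1 is zeroed out entirely.
```

For a general (non-orthonormal) design the same weighting can be folded into a standard Lasso solver by rescaling each column of X by its penalty factor.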
|
2502.10650
|
Generative Adversarial Networks for High-Dimensional Item Factor
Analysis: A Deep Adversarial Learning Algorithm
|
stat.ML cs.LG stat.AP stat.CO stat.ME
|
Advances in deep learning and representation learning have transformed item
factor analysis (IFA) in the item response theory (IRT) literature by enabling
more efficient and accurate parameter estimation. Variational Autoencoders
(VAEs) have been one of the most impactful techniques in modeling
high-dimensional latent variables in this context. However, the limited
expressiveness of the inference model based on traditional VAEs can still
hinder the estimation performance. This study introduces Adversarial
Variational Bayes (AVB) algorithms as an improvement to VAEs for IFA with
improved flexibility and accuracy. By bridging the strengths of VAEs and
Generative Adversarial Networks (GANs), AVB incorporates an auxiliary
discriminator network to reframe the estimation process as a two-player
adversarial game and removes the restrictive assumption of standard normal
distributions in the inference model. Theoretically, AVB can achieve similar or
higher likelihood compared to VAEs. A further enhanced algorithm,
Importance-weighted Adversarial Variational Bayes (IWAVB) is proposed and
compared with Importance-weighted Autoencoders (IWAE). In an exploratory
analysis of real empirical data, IWAVB demonstrated superior expressiveness by
achieving a higher likelihood compared to IWAE. In confirmatory studies with
simulated data, IWAVB achieved similar mean-square error results to IWAE while
consistently achieving higher likelihoods. Moreover, in simulations where
latent variables followed a multimodal distribution, IWAVB outperformed IWAE by
providing more accurate parameter estimates. With its innovative use of GANs,
IWAVB is shown to have the potential to extend IFA to handle large-scale data,
facilitating the potential integration of psychometrics and multimodal data
analysis.
|
2502.10652
|
Deep Learning for Wound Tissue Segmentation: A Comprehensive Evaluation
using A Novel Dataset
|
eess.IV cs.CV cs.LG
|
Deep learning (DL) techniques have emerged as promising solutions for medical
wound tissue segmentation. However, a notable limitation in this field is the
lack of publicly available labelled datasets and a standardised performance
evaluation of state-of-the-art DL models on such datasets. This study addresses
this gap by comprehensively evaluating various DL models for wound tissue
segmentation using a novel dataset. We have curated a dataset comprising 147
wound images exhibiting six tissue types: slough, granulation, maceration,
necrosis, bone, and tendon. The dataset was meticulously labelled for semantic
segmentation employing supervised machine learning techniques. Three distinct
labelling formats were developed -- full image, patch, and superpixel. Our
investigation encompassed a wide array of DL segmentation and classification
methodologies, ranging from conventional approaches like UNet, to generative
adversarial networks such as cGAN, and modified techniques like FPN+VGG16.
Also, we explored DL-based classification methods (e.g., ResNet50) and machine
learning-based classification leveraging DL features (e.g., AlexNet+RF). In
total, 82 wound tissue segmentation models were derived across the three
labelling formats. Our analysis yielded several notable findings, including
identifying optimal DL models for each labelling format based on weighted
average Dice or F1 scores. Notably, FPN+VGG16 emerged as the top-performing DL
model for wound tissue segmentation, achieving a Dice score of 82.25%. This
study provides a valuable benchmark for evaluating wound image segmentation and
classification models, offering insights to inform future research and clinical
practice in wound care. The labelled dataset created in this study is available
at https://github.com/akabircs/WoundTissue.
|
2502.10660
|
User Profile with Large Language Models: Construction, Updating, and
Benchmarking
|
cs.CL
|
User profile modeling plays a key role in personalized systems, as it
requires building accurate profiles and updating them with new information. In
this paper, we present two high-quality open-source user profile datasets: one
for profile construction and another for profile updating. These datasets offer
a strong basis for evaluating user profile modeling techniques in dynamic
settings. We also show a methodology that uses large language models (LLMs) to
tackle both profile construction and updating. Our method uses a probabilistic
framework to predict user profiles from input text, allowing for precise and
context-aware profile generation. Our experiments demonstrate that models like
Mistral-7b and Llama2-7b perform strongly in both tasks. LLMs improve the
precision and recall of the generated profiles, and high evaluation scores
confirm the effectiveness of our approach.
|
2502.10662
|
Towards Zero-Shot Task-Generalizable Learning on fMRI
|
eess.IV cs.LG
|
Functional MRI measuring BOLD signal is an increasingly important imaging
modality in studying brain functions and neurological disorders. It can be
acquired in either a resting-state or a task-based paradigm. Compared to
resting-state fMRI, task-based fMRI is acquired while the subject is performing
a specific task designed to enhance study-related brain activities.
Consequently, it generally has more informative task-dependent signals.
However, due to the variety of task designs, it is much more difficult than in
resting state to aggregate task-based fMRI acquired in different tasks to train
a generalizable model. To resolve this complication, we propose a supervised
task-aware network TA-GAT that jointly learns a general-purpose encoder and
task-specific contextual information. The encoder-generated embedding and the
learned contextual information are then combined as input to multiple modules
for performing downstream tasks. We believe that the proposed task-aware
architecture can be plugged into any neural network architecture to
incorporate the prior knowledge of fMRI tasks into capturing functional brain
patterns.
|
2502.10667
|
Automated Data Quality Validation in an End-to-End GNN Framework
|
cs.DB
|
Ensuring data quality is crucial in modern data ecosystems, especially for
training or testing datasets in machine learning. Existing validation
approaches rely on computing data quality metrics and/or using expert-defined
constraints. Although there are automated constraint generation methods, they
are often incomplete and may be too strict or too soft, causing false positives
or missed errors, thus requiring expert adjustment. These methods may also fail
to detect subtle data inconsistencies hidden by complex interdependencies
within the data. In this paper, we propose DQuag, an end-to-end data quality
validation and repair framework based on an improved Graph Neural Network (GNN)
and multi-task learning. The proposed method incorporates a dual-decoder
design: one for data quality validation and the other for data repair. Our
approach captures complex feature relationships within tabular datasets using a
multi-layer GNN architecture to automatically detect explicit and hidden data
errors. Unlike previous methods, our model does not require manual input for
constraint generation and learns the underlying feature dependencies, enabling
it to identify complex hidden errors that traditional systems often miss.
Moreover, it can recommend repair values, improving overall data quality.
Experimental results validate the effectiveness of our approach in identifying
and resolving data quality issues. The paper appeared in EDBT 2025.
|
2502.10669
|
Is Self-Supervised Pre-training on Satellite Imagery Better than
ImageNet? A Systematic Study with Sentinel-2
|
cs.CV
|
Self-supervised learning (SSL) has demonstrated significant potential in
pre-training robust models with limited labeled data, making it particularly
valuable for remote sensing (RS) tasks. A common assumption is that
pre-training on domain-aligned data provides maximal benefits on downstream
tasks, particularly when compared to ImageNet-pretraining (INP). In this work,
we investigate this assumption by collecting GeoNet, a large and diverse
dataset of global optical Sentinel-2 imagery, and pre-training SwAV and MAE on
both GeoNet and ImageNet. Evaluating these models on six downstream tasks in
the few-shot setting reveals that SSL pre-training on RS data offers modest
performance improvements over INP, and that it remains competitive in multiple
scenarios. This indicates that the presumed benefits of SSL pre-training on RS
data may be overstated, and the additional costs of data curation and
pre-training could be unjustified.
|
2502.10671
|
Evaluating Beam Sweeping for AoA Estimation with an RIS Prototype:
Indoor/Outdoor Field Trials
|
cs.IT cs.ET math.IT
|
Reconfigurable Intelligent Surfaces (RISs) have emerged as a promising
technology to enhance wireless communication systems by enabling dynamic
control over the propagation environment. However, practical experiments are
crucial towards the validation of the theoretical potential of RISs while
establishing their real-world applicability, especially since most studies rely
on simplified models and lack comprehensive field trials. In this paper, we
present an efficient method for configuring a $1$-bit RIS prototype at sub-$6$
GHz, resulting in a codebook oriented for beam sweeping; an essential protocol
for initial access and Angle of Arrival (AoA) estimation. The measured
radiation patterns of the RIS validate the theoretical model, demonstrating
consistency between the experimental results and the predicted beamforming
behavior. Furthermore, we experimentally prove that RIS can alter channel
properties and by harnessing the diversity it provides, we evaluate beam
sweeping as an AoA estimation technique. Finally, we investigate the frequency
selectivity of the RIS and propose an approach to address indoor challenges by
leveraging the geometry of the environment.
|
2502.10673
|
Dataset Protection via Watermarked Canaries in Retrieval-Augmented LLMs
|
cs.CR cs.CL
|
Retrieval-Augmented Generation (RAG) has become an effective method for
enhancing large language models (LLMs) with up-to-date knowledge. However, it
poses a significant risk of IP infringement, as IP datasets may be incorporated
into the knowledge database by malicious Retrieval-Augmented LLMs (RA-LLMs)
without authorization. To protect the rights of the dataset owner, an effective
dataset membership inference algorithm for RA-LLMs is needed. In this work, we
introduce a novel approach to safeguard the ownership of text datasets and
effectively detect unauthorized use by the RA-LLMs. Our approach preserves the
original data completely unchanged while protecting it by inserting
specifically designed canary documents into the IP dataset. These canary
documents are created with synthetic content and embedded watermarks to ensure
uniqueness, stealthiness, and statistical provability. During the detection
process, unauthorized usage is identified by querying the canary documents and
analyzing the responses of RA-LLMs for statistical evidence of the embedded
watermark. Our experimental results demonstrate high query efficiency,
detectability, and stealthiness, along with minimal perturbation to the
original dataset, all without compromising the performance of the RAG system.
|
2502.10674
|
Occlusion-aware Text-Image-Point Cloud Pretraining for Open-World 3D
Object Recognition
|
cs.CV
|
Recent open-world representation learning approaches have leveraged CLIP to
enable zero-shot 3D object recognition. However, performance on real point
clouds with occlusions still falls short due to the unrealistic pretraining
settings. Additionally, these methods incur high inference costs because they
rely on Transformer's attention modules. In this paper, we make two
contributions to address these limitations. First, we propose occlusion-aware
text-image-point cloud pretraining to reduce the training-testing domain gap.
From 52K synthetic 3D objects, our framework generates nearly 630K partial
point clouds for pretraining, consistently improving real-world recognition
performances of existing popular 3D networks. Second, to reduce computational
requirements, we introduce DuoMamba, a two-stream linear state space model
tailored for point clouds. By integrating two space-filling curves with 1D
convolutions, DuoMamba effectively models spatial dependencies between point
tokens, offering a powerful alternative to Transformer. When pretrained with
our framework, DuoMamba surpasses current state-of-the-art methods while
reducing latency and FLOPs, highlighting the potential of our approach for
real-world applications. We will release our data and code to facilitate future
research.
|
2502.10675
|
Hierarchically-Structured Open-Vocabulary Indoor Scene Synthesis with
Pre-trained Large Language Model
|
cs.CV
|
Indoor scene synthesis aims to automatically produce plausible, realistic and
diverse 3D indoor scenes, especially given arbitrary user requirements.
Recently, the promising generalization ability of pre-trained large language
models (LLMs) has assisted open-vocabulary indoor scene synthesis. However, the
challenge lies in converting the LLM-generated outputs into reasonable and
physically feasible scene layouts. In this paper, we propose to generate
hierarchically structured scene descriptions with LLM and then compute the
scene layouts. Specifically, we train a hierarchy-aware network to infer the
fine-grained relative positions between objects and design a divide-and-conquer
optimization to solve for scene layouts. The advantages of using hierarchically
structured scene representation are two-fold. First, the hierarchical structure
provides a rough grounding for object arrangement, which alleviates
contradictory placements with dense relations and enhances the generalization
ability of the network to infer fine-grained placements. Second, it naturally
supports the divide-and-conquer optimization, by first arranging the sub-scenes
and then the entire scene, to more effectively solve for a feasible layout. We
conduct extensive comparison experiments and ablation studies with both
qualitative and quantitative evaluations to validate the effectiveness of our
key designs with the hierarchically structured scene representation. Our
approach can generate more reasonable scene layouts that align better with
the user requirements and LLM descriptions. We also present open-vocabulary
scene synthesis and interactive scene design results to show the strength of
our approach in these applications.
|
2502.10677
|
FocalCount: Towards Class-Count Imbalance in Class-Agnostic Counting
|
cs.CV
|
In class-agnostic object counting, the goal is to estimate the total number
of object instances in an image without distinguishing between specific
categories. Existing methods often predict this count without considering
class-specific outputs, leading to inaccuracies when such outputs are required.
These inaccuracies stem from two key challenges: 1) the prevalence of
single-category images in datasets, which leads models to generalize specific
categories as representative of all objects, and 2) the use of mean squared
error loss during training, which applies uniform penalization. This uniform
penalty disregards errors in less frequent categories, particularly when these
errors contribute minimally to the overall loss. To address these issues, we
propose {FocalCount}, a novel approach that leverages diverse feature
attributes to estimate the number of object categories in an image. This
estimate serves as a weighted factor to correct class-count imbalances.
Additionally, we introduce {Focal-MSE}, a new loss function that integrates
binary cross-entropy to generate stronger error gradients, enhancing the
model's sensitivity to errors in underrepresented categories. Our approach
significantly improves the model's ability to distinguish between specific
classes and general counts, demonstrating superior performance and scalability
in both few-shot and zero-shot scenarios across three object counting datasets.
The code will be released soon.
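The abstract does not give the exact form of Focal-MSE; a hedged sketch of one plausible reading — MSE on the predicted density map plus a binary cross-entropy term on object presence to strengthen error gradients, scaled by a class-count weight — might look like this (the presence threshold, sigmoid squashing, and additive combination are all assumptions):

```python
import numpy as np

def focal_mse(pred_density, gt_density, count_weight=1.0, eps=1e-6):
    """Hypothetical Focal-MSE-style loss (assumed form, not the paper's):
    density-map MSE plus BCE on object presence, scaled by a weight that
    would correct class-count imbalance."""
    mse = np.mean((pred_density - gt_density) ** 2)
    presence = (gt_density > eps).astype(float)      # where objects exist
    p = 1.0 / (1.0 + np.exp(-pred_density))          # squash to (0, 1)
    p = np.clip(p, eps, 1.0 - eps)                   # avoid log(0)
    bce = -np.mean(presence * np.log(p) + (1 - presence) * np.log(1 - p))
    return count_weight * (mse + bce)
```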
|
2502.10678
|
GenComUI: Exploring Generative Visual Aids as Medium to Support
Task-Oriented Human-Robot Communication
|
cs.HC cs.AI cs.RO
|
This work investigates the integration of generative visual aids in
human-robot task communication. We developed GenComUI, a system powered by
large language models that dynamically generates contextual visual aids (such
as map annotations, path indicators, and animations) to support verbal task
communication and facilitate the generation of customized task programs for the
robot. This system was informed by a formative study that examined how humans
use external visual tools to assist verbal communication in spatial tasks. To
evaluate its effectiveness, we conducted a user experiment (n = 20) comparing
GenComUI with a voice-only baseline. Both qualitative and quantitative
analyses demonstrate that generative visual aids enhance verbal task
communication by providing continuous visual feedback, thus promoting natural
and effective human-robot communication. Additionally, the study offers
a set of design implications, emphasizing how dynamically generated visual aids
can serve as an effective communication medium in human-robot interaction.
These findings underscore the potential of generative visual aids to inform the
design of more intuitive and effective human-robot communication, particularly
for complex communication scenarios in human-robot interaction and LLM-based
end-user development.
|
2502.10682
|
Hybrid Deepfake Image Detection: A Comprehensive Dataset-Driven Approach
Integrating Convolutional and Attention Mechanisms with Frequency Domain
Features
|
cs.CV cs.LG eess.IV
|
Effective deepfake detection tools have become increasingly essential in
recent years due to the growing use of deepfakes in unethical
practices. There exists a diverse range of deepfake generation techniques,
which makes it challenging to develop an accurate universal detection
mechanism. The 2025 Signal Processing Cup (DFWild-Cup competition) provided a
diverse dataset of deepfake images, which are generated from multiple deepfake
image generators, for training machine learning model(s) to emphasize the
generalization of deepfake detection. To this end, we propose an
ensemble-based approach that employs three different neural network
architectures: a ResNet-34-based architecture, a data-efficient image
transformer (DeiT), and an XceptionNet with Wavelet Transform to capture both
local and global features of deepfakes. Using Grad-CAM, we visualize the
regions on which these models focus for classification, and empirically
demonstrate the effectiveness of these models in grouping real and fake images
into cohesive clusters using t-SNE plots. Individually, the ResNet-34
architecture has achieved 88.9% accuracy, whereas the Xception network and the
DeiT architecture have achieved 87.76% and 89.32% accuracy, respectively. With
these networks, our weighted ensemble model achieves an excellent accuracy of
93.23% on the validation dataset of the SP Cup 2025 competition. Finally, the
confusion matrix and an Area Under the ROC curve of 97.44% further confirm the
stability of our proposed method.
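A weighted ensemble over per-model class probabilities, as described above, can be sketched as follows. The weights here are illustrative (e.g., proportional to each model's validation accuracy), not the competition entry's actual values:

```python
import numpy as np

def weighted_ensemble(probs_list, weights):
    """Combine per-model class probabilities with a normalized weighted
    average. probs_list: list of (n_samples, n_classes) arrays, one per
    model. Returns the ensemble probabilities, same shape."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()              # normalize to sum to 1
    stacked = np.stack(probs_list)                 # (n_models, n_samples, n_classes)
    return np.tensordot(weights, stacked, axes=1)  # (n_samples, n_classes)
```

For instance, combining the three networks with weights proportional to their individual accuracies (88.9, 87.76, 89.32) keeps each sample's probabilities summing to one while favoring the stronger models.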
|
2502.10683
|
CLoCKDistill: Consistent Location-and-Context-aware Knowledge
Distillation for DETRs
|
cs.CV
|
Object detection has advanced significantly with Detection Transformers
(DETRs). However, these models are computationally demanding, posing challenges
for deployment in resource-constrained environments (e.g., self-driving cars).
Knowledge distillation (KD) is an effective compression method widely applied
to CNN detectors, but its application to DETR models has been limited. Most KD
methods for DETRs fail to distill transformer-specific global context. They
also blindly trust the teacher model, whose predictions can be misleading.
To bridge the gaps, this paper proposes Consistent Location-and-Context-aware
Knowledge Distillation (CLoCKDistill) for DETR detectors, which includes both
feature distillation and logit distillation components. For feature
distillation, instead of distilling backbone features like existing KD methods,
we distill the transformer encoder output (i.e., memory) that contains valuable
global context and long-range dependencies. Also, we enrich this memory with
object location details during feature distillation so that the student model
can prioritize relevant regions while effectively capturing the global context.
To facilitate logit distillation, we create target-aware queries based on the
ground truth, allowing both the student and teacher decoders to attend to
consistent and accurate parts of encoder memory. Experiments on the KITTI and
COCO datasets show our CLoCKDistill method's efficacy across various DETRs,
e.g., single-scale DAB-DETR, multi-scale deformable DETR, and denoising-based
DINO. Our method boosts student detector performance by 2.2% to 6.4%.
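The feature-distillation idea above — matching the transformer encoder memory while emphasizing object locations — might be sketched as below. The foreground mask and the simple up-weighting scheme are assumptions for illustration, not CLoCKDistill's exact formulation:

```python
import numpy as np

def memory_distill_loss(student_mem, teacher_mem, fg_mask, fg_weight=2.0):
    """Sketch of location-aware feature distillation on DETR encoder memory
    (assumed weighting, not the paper's): per-token MSE between student and
    teacher memory of shape (batch, tokens, dim), up-weighted at foreground
    (object) tokens given by fg_mask of shape (batch, tokens)."""
    per_token = np.mean((student_mem - teacher_mem) ** 2, axis=-1)  # (B, T)
    weights = 1.0 + (fg_weight - 1.0) * fg_mask.astype(float)
    return np.mean(weights * per_token)
```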
|
2502.10684
|
A Fast Quantum Image Compression Algorithm based on Taylor Expansion
|
quant-ph cs.CV
|
With the increasing demand for storing images, traditional image compression
methods face challenges in balancing the compressed size and image quality.
Hybrid quantum-classical models can overcome this weakness by exploiting
the advantages of qubits. In this study, we improve a quantum image compression
algorithm based on parameterized quantum circuits. Our approach encodes image
data as unitary operator parameters and applies the quantum compilation
algorithm to emulate the encryption process. By utilizing a first-order Taylor
expansion, we significantly reduce both the computational cost and the loss
compared with the previous version. Experimental results on benchmark images,
including Lenna and Cameraman, show that our method achieves up to an 86%
reduction in the number of iterations while maintaining a lower compression
loss, performing especially well on high-resolution images. The results confirm
that the proposed algorithm
provides an efficient and scalable image compression mechanism, making it a
promising candidate for future image processing applications.
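The first-order Taylor step is standard: for a parameterized unitary U(theta) = exp(-i * theta * H), the approximation U ≈ I - i * theta * H is accurate to O(theta^2), so it suits small rotation angles. A minimal sketch (the choice of Hamiltonian and angle is illustrative, not tied to the paper's circuits):

```python
import numpy as np

def taylor_unitary(theta, H):
    """First-order Taylor approximation of U(theta) = exp(-1j * theta * H):
    U ~ I - 1j * theta * H, with error O(theta**2). Note the result is only
    approximately unitary for small theta."""
    I = np.eye(H.shape[0], dtype=complex)
    return I - 1j * theta * H
```

For a Pauli-Z Hamiltonian the exact unitary is diag(e^{-i theta}, e^{i theta}), so the approximation error at theta = 0.01 is on the order of theta^2 / 2 per entry.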
|