Columns: id (string, 9–16 chars), title (string, 4–278 chars), categories (string, 5–104 chars), abstract (string, 6–4.09k chars).

| id | title | categories | abstract |
|---|---|---|---|
| 2501.15321 | Figurative-cum-Commonsense Knowledge Infusion for Multimodal Mental Health Meme Classification | cs.CL cs.SI | The expression of mental health symptoms through non-traditional means, such as memes, has gained remarkable attention over the past few years, with users often highlighting their mental health struggles through figurative intricacies within memes. While humans rely on commonsense knowledge to interpret these complex expressions, current Multimodal Language Models (MLMs) struggle to capture the figurative aspects inherent in memes. To address this gap, we introduce a novel dataset, AxiOM, derived from the GAD anxiety questionnaire, which categorizes memes into six fine-grained anxiety symptoms. We then propose a commonsense- and domain-enriched framework, M3H, to enhance MLMs' ability to interpret figurative language and commonsense knowledge. The overarching goal is to first understand and then classify the mental health symptoms expressed in memes. We benchmark M3H against six competitive baselines (with 20 variations), demonstrating improvements in both quantitative and qualitative metrics, including a detailed human evaluation, and observe clear gains of 4.20% and 4.66% on the weighted-F1 metric. To assess generalizability, we perform extensive experiments on a public dataset, RESTORE, for depressive-symptom identification, and present an extensive ablation study that highlights the contribution of each module on both datasets. Our findings reveal limitations in existing models and the advantage of employing commonsense knowledge to enhance figurative understanding. |
| 2501.15322 | Scaling laws for decoding images from brain activity | eess.IV cs.AI cs.LG q-bio.NC | Generative AI has recently propelled the decoding of images from brain activity. How do these approaches scale with the amount and type of neural recordings? Here, we systematically compare image decoding from four types of non-invasive devices: electroencephalography (EEG), magnetoencephalography (MEG), high-field functional Magnetic Resonance Imaging (3T fMRI), and ultra-high-field (7T) fMRI. For this, we evaluate decoding models on the largest benchmark to date, encompassing 8 public datasets, 84 volunteers, 498 hours of brain recording, and 2.3 million brain responses to natural images. Unlike previous work, we focus on single-trial decoding performance to simulate real-time settings. This systematic comparison reveals three main findings. First, the most precise neuroimaging devices tend to yield the best decoding performance when training-set sizes are similar; however, the gain enabled by deep learning, in comparison to linear models, is obtained with the noisiest devices. Second, we do not observe any plateau of decoding performance as the amount of training data increases; rather, decoding performance scales log-linearly with the amount of brain recording. Third, this scaling law primarily depends on the amount of data per subject, and little decoding gain is observed from increasing the number of subjects. Overall, these findings delineate the path most suitable to scale the decoding of images from non-invasive brain recordings. |
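The log-linear scaling law reported above can be illustrated with a short fit; the accuracy and hours values below are invented for illustration and do not come from the paper.

```python
import numpy as np

# Hypothetical single-trial decoding accuracies at increasing amounts of
# brain recording (hours); the values are illustrative, not from the paper.
hours = np.array([10.0, 30.0, 100.0, 300.0, 500.0])
accuracy = np.array([0.12, 0.19, 0.26, 0.33, 0.36])

# A log-linear scaling law: accuracy ~ a * log10(hours) + b.
a, b = np.polyfit(np.log10(hours), accuracy, deg=1)

def predict(h):
    """Extrapolate decoding accuracy to h hours of recording."""
    return a * np.log10(h) + b
```

A positive slope `a` with no saturating term encodes the paper's "no plateau" observation: each tenfold increase in recording time buys a constant accuracy increment.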
| 2501.15324 | A Game-Theoretic Framework for Distributed Load Balancing: Static and Dynamic Game Models | cs.GT cs.MA | Motivated by applications in job scheduling, queuing networks, and load balancing in cyber-physical systems, we develop and analyze a game-theoretic framework to balance the load among servers in both static and dynamic settings. In these applications, jobs/tasks are often held by selfish entities that do not want to coordinate with each other, yet the goal is to balance the load among servers in a distributed manner. First, we provide a static game formulation in which each player holds a job with a certain processing requirement and wants to schedule it fractionally among a set of heterogeneous servers to minimize its average processing time. We show that this static game is a potential game and admits a pure Nash equilibrium (NE). In particular, the best-response dynamics converge to such an NE after $n$ iterations, where $n$ is the number of players. We then extend our results to a dynamic game setting, where jobs arrive and get processed in the system, and players observe the load (state) on the servers to decide how to schedule their jobs among the servers in order to minimize their average cumulative processing time. In this setting, we show that if the players update their strategies using dynamic best-response strategies, the system eventually becomes fully load-balanced and the players' strategies converge to the pure NE of the static game. In particular, we show that the convergence time scales only polynomially with respect to the game parameters. Finally, we provide numerical results to evaluate the performance of our proposed algorithms under both static and dynamic settings. |
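The static game's best-response dynamics can be sketched under an assumed quadratic-in-load cost model; the cost function, server speeds, and job sizes below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Illustrative instance: each of n selfish players splits a divisible job of
# size d across heterogeneous servers with speeds c, and pays
# sum_s x_s * (load_of_others_s + x_s) / c_s  (an assumed quadratic cost).
rng = np.random.default_rng(0)
n, c, d = 4, np.array([1.0, 2.0, 3.0]), 1.0
x = rng.dirichlet(np.ones(3), size=n) * d           # random initial splits

def best_response(load_others):
    """Waterfilling: x_s = max(0, (lam*c_s - load_s)/2), with sum x_s = d."""
    lo, hi = 0.0, 1e6
    for _ in range(100):                             # bisection on lam
        lam = 0.5 * (lo + hi)
        xs = np.maximum(0.0, 0.5 * (lam * c - load_others))
        lo, hi = (lam, hi) if xs.sum() < d else (lo, lam)
    return xs

for _ in range(50):                                  # best-response dynamics
    for i in range(n):
        x[i] = best_response(x.sum(axis=0) - x[i])

load = x.sum(axis=0)                                 # at the NE, load_s is proportional to c_s
```

At the resulting equilibrium the per-unit-capacity loads equalize (load_s / c_s is the same on every server), which is the load-balanced outcome the potential-game analysis predicts.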
| 2501.15326 | Recognize Any Surgical Object: Unleashing the Power of Weakly-Supervised Data | cs.CV | We present RASO, a foundation model designed to Recognize Any Surgical Object, offering robust open-set recognition capabilities across a broad range of surgical procedures and object classes, in both surgical images and videos. RASO leverages a novel weakly-supervised learning framework that generates tag-image-text pairs automatically from large-scale unannotated surgical lecture videos, significantly reducing the need for manual annotations. Our scalable data generation pipeline covers 2,200 surgical procedures and produces 3.6 million tag annotations across 2,066 unique surgical tags. Our experiments show that RASO achieves improvements of 2.9 mAP, 4.5 mAP, 10.6 mAP, and 7.2 mAP on four standard surgical benchmarks respectively in zero-shot settings, and surpasses state-of-the-art models in supervised surgical action recognition tasks. We will open-source our code, model, and dataset to facilitate further research. |
| 2501.15328 | Physiologically-Informed Predictability of a Teammate's Future Actions Forecasts Team Performance | q-bio.NC cs.LG | In collaborative environments, a deep understanding of multi-human teaming dynamics is essential for optimizing performance. However, the relationship between individuals' behavioral and physiological markers and their combined influence on overall team performance remains poorly understood. To explore this, we designed a triadic human collaborative sensorimotor task in virtual reality (VR) and introduced a novel predictability metric to examine team dynamics and performance. Our findings reveal a strong connection between team performance and the predictability of a team member's future actions based on other team members' behavioral and physiological data. Contrary to conventional wisdom that high-performing teams are highly synchronized, our results suggest that physiological and behavioral synchronization among team members has a limited correlation with team performance. These insights provide a new quantitative framework for understanding multi-human teaming, paving the way for deeper insights into team dynamics and performance. |
| 2501.15337 | Finite Strain Robust Topology Optimization Considering Multiple Uncertainties | cs.CE math.OC | This paper presents a computational framework for the robust stiffness design of hyperelastic structures at finite deformations subject to various uncertain sources. In particular, loading, material-property, and geometry uncertainties are incorporated within the topology optimization framework and are modeled by random vectors or random fields. A stochastic perturbation method is adopted to quantify uncertainties, and analytical adjoint sensitivities are derived for efficient gradient-based optimization. Moreover, the mesh distortion of low-density elements under finite deformations is handled by an adaptive linear energy interpolation scheme. The proposed robust topology optimization framework is applied to several examples, and the effects of different uncertain sources on the optimized topologies are systematically investigated. As demonstrated, robust designs are less sensitive to the variation of target uncertain sources than deterministic designs. Finally, it is shown that incorporating symmetry-breaking uncertainties in the topology optimization framework promotes stable designs compared to the deterministic counterpart, which, when no stability constraint is included, can lead to unstable designs. |
| 2501.15338 | Fairness-aware Contextual Dynamic Pricing with Strategic Buyers | cs.GT cs.LG stat.ML | Contextual pricing strategies are prevalent in online retailing, where the seller adjusts prices based on products' attributes and buyers' characteristics. Although such strategies can enhance the seller's profits, they raise fairness concerns when significant price disparities emerge among specific groups, such as gender or race. These disparities can lead to adverse perceptions of fairness among buyers and may even violate laws and regulations. Moreover, price differences can incentivize disadvantaged buyers to strategically manipulate their group identity to obtain a lower price. In this paper, we investigate contextual dynamic pricing with fairness constraints, taking into account buyers' strategic behaviors when their group status is private and unobservable to the seller. We propose a dynamic pricing policy that simultaneously achieves price fairness and discourages strategic behavior. Our policy achieves a regret upper bound of $O(\sqrt{T}+H(T))$ over a horizon of $T$ periods, where the term $H(T)$ arises from buyers' assessment of the fairness of the pricing policy based on their learned price difference. When buyers are able to learn the fairness of the price policy, this upper bound reduces to $O(\sqrt{T})$. We also prove an $\Omega(\sqrt{T})$ regret lower bound for any pricing policy under our problem setting. We support our findings with extensive experimental evidence showcasing our policy's effectiveness. In our real-data analysis, we observe price discrimination against race in loan applications even after accounting for other contextual information. Our proposed pricing policy demonstrates a significant improvement, achieving a 35.06% reduction in regret compared to the benchmark policy. |
| 2501.15339 | DER Hosting capacity for distribution networks: definitions, attributes, use-cases and challenges | eess.SY cs.SY | The rapid adoption of distributed energy resources (DERs) has outpaced grid modernization, leading to capacity limitations that challenge their further integration. Hosting Capacity Assessment (HCA) is a critical tool for evaluating how much DER capacity a grid can handle without breaching operational limits. HCA serves multiple goals: enabling higher DER penetration, accelerating grid connection times, guiding infrastructure upgrades or flexible resource deployment, ensuring equitable policies, and improving grid flexibility while minimizing curtailment. HCA lacks a universal definition, varying by modelling approaches, uncertainty considerations, and objectives. This paper addresses five key questions to standardize and enhance HCA practices. First, it classifies HCA objectives associated with different stakeholders, such as system operators, consumers, and market operators. Second, it examines model attributes, including modelling sophistication, data requirements, and uncertainty handling, thus balancing complexity with computational efficiency. Third, it explores HCA applications, such as planning grid investments or operational decisions, and summarizes use cases associated with HCA. Fourth, it emphasizes the need for periodic updates to reflect dynamic grid conditions, evolving technologies, and new DER installations. Finally, it identifies challenges, such as ensuring data quality, managing computational demands, and aligning short-term and long-term goals. By addressing these aspects, this paper provides a structured approach to performing and applying HCA, offering insights for engineers, planners, and policymakers to manage DER integration effectively. |
| 2501.15343 | Development and Application of Self-Supervised Machine Learning for Smoke Plume and Active Fire Identification from the FIREX-AQ Datasets | cs.LG cs.AI cs.CV | Fire Influence on Regional to Global Environments and Air Quality (FIREX-AQ) was a field campaign aimed at better understanding the impact of wildfires and agricultural fires on air quality and climate. The FIREX-AQ campaign took place in August 2019 and involved two aircraft and multiple coordinated satellite observations. This study applied and evaluated a self-supervised machine learning (ML) method for active-fire and smoke-plume identification and tracking in the satellite and sub-orbital remote sensing datasets collected during the campaign. Our unique methodology combines remote sensing observations with different spatial and spectral resolutions. The demonstrated approach successfully differentiates fire pixels and smoke plumes from background imagery, enabling the generation of a per-instrument smoke and fire mask product, as well as smoke and fire masks created from the fusion of selected data from independent instruments. This ML approach has the potential to enhance operational wildfire monitoring systems and improve decision-making in air quality management through fast smoke-plume identification and tracking, and could improve climate impact studies through the fusion of data from independent instruments. |
| 2501.15348 | ReInc: Scaling Training of Dynamic Graph Neural Networks | cs.LG cs.DC | Dynamic Graph Neural Networks (DGNNs) have gained widespread attention due to their applicability in diverse domains such as traffic network prediction, epidemiological forecasting, and social network analysis. In this paper, we present ReInc, a system designed to enable efficient and scalable training of DGNNs on large-scale graphs. ReInc introduces key innovations that capitalize on the unique combination of Graph Neural Networks (GNNs) and Recurrent Neural Networks (RNNs) inherent in DGNNs. By reusing intermediate results and incrementally computing aggregations across consecutive graph snapshots, ReInc significantly enhances computational efficiency. To support these optimizations, ReInc incorporates a novel two-level caching mechanism with a specialized caching policy aligned to the DGNN execution workflow. Additionally, ReInc addresses the challenges of managing structural and temporal dependencies in dynamic graphs through a new distributed training strategy. This approach eliminates communication overheads associated with accessing remote features and redistributing intermediate results. Experimental results demonstrate that ReInc achieves up to an order of magnitude speedup compared to state-of-the-art frameworks, tested across various dynamic GNN architectures and real-world graph datasets. |
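The idea of reusing aggregations across consecutive snapshots can be sketched as follows; this toy sum-aggregation over an edge-list diff is an assumption for illustration, not ReInc's actual implementation.

```python
# Illustrative sketch: incrementally maintain per-node sum-aggregations of
# neighbor features across graph snapshots, touching only the endpoints of
# edges that changed instead of recomputing the whole graph.

def full_aggregate(edges, feat):
    """Recompute neighbor-feature sums for every node (the expensive path)."""
    agg = {v: 0.0 for v in feat}
    for u, v in edges:
        agg[u] += feat[v]
        agg[v] += feat[u]
    return agg

def incremental_aggregate(agg, added, removed, feat):
    """Patch the cached snapshot-t result with the snapshot-(t+1) edge diff."""
    agg = dict(agg)
    for u, v in added:
        agg[u] += feat[v]
        agg[v] += feat[u]
    for u, v in removed:
        agg[u] -= feat[v]
        agg[v] -= feat[u]
    return agg

feat = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}
snap_t = [(0, 1), (1, 2), (2, 3)]
agg_t = full_aggregate(snap_t, feat)
# Snapshot t+1: edge (0, 3) appears, edge (1, 2) disappears.
agg_t1 = incremental_aggregate(agg_t, added=[(0, 3)], removed=[(1, 2)], feat=feat)
```

The patched result is identical to a full recomputation on the new snapshot, but the work done is proportional to the size of the diff, not the size of the graph.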
| 2501.15351 | Fairness in LLM-Generated Surveys | cs.CY cs.LG | Large Language Models (LLMs) excel in text generation and understanding, especially in simulating socio-political and economic patterns, serving as an alternative to traditional surveys. However, their global applicability remains questionable due to unexplored biases across socio-demographic and geographic contexts. This study examines how LLMs perform across diverse populations by analyzing public surveys from Chile and the United States, focusing on predictive accuracy and fairness metrics. The results show performance disparities, with LLMs consistently performing better on U.S. datasets. This bias originates from U.S.-centric training data and remains evident after accounting for socio-demographic differences. In the U.S., political identity and race significantly influence prediction accuracy, while in Chile, gender, education, and religious affiliation play more pronounced roles. Our study presents a novel framework for measuring socio-demographic biases in LLMs, offering a path toward ensuring fairer and more equitable model performance across diverse socio-cultural contexts. |
| 2501.15355 | Large Language Models as Theory of Mind Aware Generative Agents with Counterfactual Reflection | cs.CL cs.AI | Recent studies have increasingly demonstrated that large language models (LLMs) possess significant theory of mind (ToM) capabilities, showing the potential for simulating the tracking of mental states in generative agents. In this study, we propose a novel paradigm called ToM-agent, designed to empower LLM-based generative agents to simulate ToM in open-domain conversational interactions. ToM-agent disentangles confidence from mental states, facilitating the emulation of an agent's perception of its counterpart's mental states, such as beliefs, desires, and intentions (BDIs). Using past conversation history and verbal reflections, ToM-agent can dynamically adjust counterparts' inferred BDIs, along with related confidence levels. We further put forth a counterfactual intervention method that reflects on the gap between the predicted responses of counterparts and their real utterances, thereby enhancing the efficiency of reflection. Leveraging empathetic and persuasion dialogue datasets, we assess the advantages of implementing ToM-agent with downstream tasks, as well as its performance in both first-order and second-order ToM. Our findings indicate that ToM-agent can grasp the underlying reasons for its counterpart's behaviors beyond mere semantic-emotional support or decision-making based on common sense, providing new insights for studying large-scale LLM-based simulation of human social behaviors. |
| 2501.15356 | Federated Class-Incremental Learning: A Hybrid Approach Using Latent Exemplars and Data-Free Techniques to Address Local and Global Forgetting | cs.LG | Federated Class-Incremental Learning (FCIL) refers to a scenario where a dynamically changing number of clients collaboratively learn an ever-increasing number of incoming tasks. FCIL is known to suffer from local forgetting due to class imbalance at each client and global forgetting due to class imbalance across clients. We develop a mathematical framework for FCIL that formulates local and global forgetting. Then, we propose an approach called Hybrid Rehearsal (HR), which utilizes latent exemplars and data-free techniques to address local and global forgetting, respectively. HR employs a customized autoencoder designed for both data classification and the generation of synthetic data. To determine the embeddings of new tasks for all clients in the latent space of the encoder, the server uses a Lennard-Jones potential formulation. Meanwhile, at the clients, the decoder decodes the stored low-dimensional latent exemplars back to the high-dimensional input space, which is used to address local forgetting. To overcome global forgetting, the decoder generates synthetic data. Furthermore, our mathematical framework proves that HR can, in principle, tackle both the local and global forgetting challenges. In practice, extensive experiments demonstrate that while preserving privacy, our proposed approach outperforms the state-of-the-art baselines on multiple FCIL benchmarks with low compute and memory footprints. |
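The server-side use of a Lennard-Jones potential to place task embeddings can be sketched, hypothetically, as gradient descent on the pairwise potential, so embeddings settle at the potential's equilibrium spacing; the latent dimension, parameters, and update rule below are all assumed for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical sketch: spread class embeddings in a latent space by gradient
# descent on a pairwise Lennard-Jones potential
#   V(r) = 4*eps*((sig/r)**12 - (sig/r)**6),
# whose minimum sits at r = 2**(1/6) * sig, giving uniform spacing.
eps, sig, lr = 1.0, 1.0, 0.005
z = np.array([[0.0, 0.0], [1.3, 0.0], [0.65, 1.1]])  # three class embeddings

def lj_grad(z):
    g = np.zeros_like(z)
    for i in range(len(z)):
        for j in range(len(z)):
            if i == j:
                continue
            d = z[i] - z[j]
            r = np.linalg.norm(d)
            # dV/dr = 4*eps*(-12*sig**12/r**13 + 6*sig**6/r**7)
            dv = 4 * eps * (-12 * sig**12 / r**13 + 6 * sig**6 / r**7)
            g[i] += dv * d / r
    return g

for _ in range(4000):
    z -= lr * lj_grad(z)

dists = [np.linalg.norm(z[i] - z[j]) for i in range(3) for j in range(i + 1, 3)]
```

At convergence the three embeddings form an equilateral triangle with side length 2**(1/6)*sig: the repulsive core keeps classes separated while the attractive tail keeps them from drifting apart.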
| 2501.15357 | Structural Symmetry, Multiplicity, and Differentiability of Eigenfrequencies | cs.CE math.OC | This work investigates the multiplicity and differentiability of eigenfrequencies in structures with various symmetries. In particular, the study explores how geometric and design-variable symmetries affect the distribution of eigenvalues, distinguishing between simple and multiple eigenvalues in 3-D trusses. Moreover, this article also examines the differentiability of multiple eigenvalues under various symmetry conditions, which is crucial for gradient-based optimization. The results presented in this study show that while full symmetry ensures the differentiability of all eigenvalues, increased symmetry in optimized designs, such as accidental symmetry, may lead to non-differentiable eigenvalues. Additionally, the study presents solutions using symmetric functions, demonstrating their effectiveness in ensuring differentiability in scenarios where multiple eigenvalues are non-differentiable. The study also highlights a critical insight into the differentiability criterion of symmetric functions, i.e., the completeness of eigen-clusters, which is necessary to ensure the differentiability of such functions. |
| 2501.15361 | Decentralized Low-Rank Fine-Tuning of Large Language Models | cs.LG | While parameter-efficient fine-tuning (PEFT) techniques like Low-Rank Adaptation (LoRA) offer computationally efficient adaptations of Large Language Models (LLMs), their practical deployment often assumes centralized data and training environments. However, real-world scenarios frequently involve distributed, privacy-sensitive datasets that require decentralized solutions. Federated learning (FL) addresses data privacy by coordinating model updates across clients, but it is typically based on centralized aggregation through a parameter server, which can introduce bottlenecks and communication constraints. Decentralized learning, in contrast, eliminates this dependency by enabling direct collaboration between clients, improving scalability and efficiency in distributed environments. Despite its advantages, decentralized LLM fine-tuning remains underexplored. In this work, we propose Dec-LoRA, an algorithm for decentralized fine-tuning of LLMs based on LoRA. Through extensive experiments on BERT and LLaMA-2 models, we show that Dec-LoRA maintains performance comparable to centralized LoRA across various conditions, including data heterogeneity and quantization constraints. This highlights its potential for scalable LLM fine-tuning in decentralized environments. |
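A server-free aggregation step of the kind decentralized LoRA fine-tuning relies on can be sketched as gossip averaging of per-client adapter factors over a ring; the topology, mixing weights, and matrix shapes here are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

# Hypothetical sketch: clients average their LoRA factors with ring neighbors
# using a doubly stochastic mixing matrix, replacing a parameter server.
rng = np.random.default_rng(1)
n_clients, d, r = 4, 8, 2
A = [rng.normal(size=(r, d)) for _ in range(n_clients)]  # per-client LoRA A
target = sum(A) / n_clients          # the centralized average, for reference

# Ring topology: each client averages itself with its two neighbors.
W = np.zeros((n_clients, n_clients))
for i in range(n_clients):
    for j in (i - 1, i, i + 1):
        W[i, j % n_clients] = 1.0 / 3.0

for _ in range(100):                 # repeated gossip rounds
    A = [sum(W[i, j] * A[j] for j in range(n_clients)) for i in range(n_clients)]
```

Because W is doubly stochastic, the network-wide average is preserved at every round, and repeated mixing drives every client's adapter to that average, i.e., to the same result a parameter server would have produced.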
| 2501.15363 | AI-Driven Secure Data Sharing: A Trustworthy and Privacy-Preserving Approach | cs.CR cs.CV | In the era of data-driven decision-making, ensuring the privacy and security of shared data is paramount across various domains. Applying existing deep neural networks (DNNs) to encrypted data is challenging and often compromises performance, security, and computational overhead. To address these limitations, this research introduces a secure framework consisting of a learnable encryption method based on the block-pixel operation to encrypt the data, which is subsequently integrated with the Vision Transformer (ViT). The proposed framework ensures data privacy and security by creating unique scrambling patterns per key, providing robust performance against adversarial attacks without compromising computational efficiency or data integrity. The framework was tested on sensitive medical datasets to validate its efficacy, proving its ability to handle highly confidential information securely. The suggested framework was validated with a 94% success rate after extensive testing on real-world datasets, such as MRI brain tumors and histological scans of lung and colon cancers. Additionally, the framework was tested against diverse adversarial attempts on secure data sharing and maintained optimum performance, demonstrating its effectiveness in various threat scenarios. These comprehensive analyses underscore its robustness, making it a trustworthy solution for secure data sharing in critical applications. |
| 2501.15365 | A Transfer Learning Framework for Anomaly Detection in Multivariate IoT Traffic Data | cs.LG cs.CR cs.NI | In recent years, rapid technological advancements and expanded Internet access have led to a significant rise in anomalies within network traffic and time-series data. Prompt detection of these irregularities is crucial for ensuring service quality, preventing financial losses, and maintaining robust security standards. While machine learning algorithms have shown promise in achieving high accuracy for anomaly detection, their performance is often constrained by the specific conditions of their training data. A persistent challenge in this domain is the scarcity of labeled data for anomaly detection in time-series datasets. This limitation hampers the training efficacy of both traditional machine learning and advanced deep learning models. To address this, unsupervised transfer learning emerges as a viable solution, leveraging unlabeled data from a source domain to identify anomalies in an unlabeled target domain. However, many existing approaches still depend on a small amount of labeled data from the target domain. To overcome these constraints, we propose a transfer learning-based model for anomaly detection in multivariate time-series datasets. Unlike conventional methods, our approach does not require labeled data in either the source or target domains. Empirical evaluations on novel intrusion detection datasets demonstrate that our model outperforms existing techniques in accurately identifying anomalies within an entirely unlabeled target domain. |
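A label-free transfer scheme in the spirit of the abstract above can be sketched with PCA: learn a normal-traffic subspace from unlabeled source data, transfer the projection and an error threshold to the target domain, and flag large reconstruction errors. The model choice and all data below are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Illustrative sketch: PCA on unlabeled source traffic defines "normal";
# target points far from that subspace are flagged as anomalies.
rng = np.random.default_rng(2)
mix = rng.normal(size=(3, 5))                        # latent-to-feature map
source = rng.normal(size=(500, 3)) @ mix + 0.1 * rng.normal(size=(500, 5))

mu = source.mean(axis=0)
_, _, Vt = np.linalg.svd(source - mu, full_matrices=False)
P = Vt[:3]                                           # top-3 principal directions

def recon_error(X):
    Z = (X - mu) @ P.T                               # coordinates in subspace
    return np.linalg.norm((X - mu) - Z @ P, axis=1)

threshold = np.quantile(recon_error(source), 0.99)   # calibrated on source only
target_ok = rng.normal(size=(20, 3)) @ mix           # same process as source
target_bad = target_ok + rng.normal(scale=5.0, size=(20, 5))
flags_ok = recon_error(target_ok) > threshold
flags_bad = recon_error(target_bad) > threshold
```

No labels are used in either domain: the subspace and the threshold both come from unlabeled source data, matching the fully unsupervised setting the abstract describes.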
| 2501.15368 | Baichuan-Omni-1.5 Technical Report | cs.CL cs.SD eess.AS | We introduce Baichuan-Omni-1.5, an omni-modal model that not only has omni-modal understanding capabilities but also provides end-to-end audio generation capabilities. To achieve fluent and high-quality interaction across modalities without compromising the capabilities of any modality, we prioritized optimizing three key aspects. First, we establish a comprehensive data cleaning and synthesis pipeline for multimodal data, obtaining about 500B high-quality samples (text, audio, and vision). Second, an audio tokenizer (Baichuan-Audio-Tokenizer) has been designed to capture both semantic and acoustic information from audio, enabling seamless integration and enhanced compatibility with MLLMs. Lastly, we designed a multi-stage training strategy that progressively integrates multimodal alignment and multitask fine-tuning, ensuring effective synergy across all modalities. Baichuan-Omni-1.5 leads contemporary models (including GPT4o-mini and MiniCPM-o 2.6) in terms of comprehensive omni-modal capabilities. Notably, it achieves results comparable to leading models such as Qwen2-VL-72B across various multimodal medical benchmarks. |
| 2501.15369 | iFormer: Integrating ConvNet and Transformer for Mobile Application | cs.CV cs.AI cs.LG | We present a new family of mobile hybrid vision networks, called iFormer, with a focus on optimizing latency and accuracy on mobile applications. iFormer effectively integrates the fast local representation capacity of convolution with the efficient global modeling ability of self-attention. The local interactions are derived from transforming a standard convolutional network, i.e., ConvNeXt, to design a more lightweight mobile network. Our newly introduced mobile modulation attention removes memory-intensive operations in MHA and employs an efficient modulation mechanism to boost dynamic global representational capacity. We conduct comprehensive experiments demonstrating that iFormer outperforms existing lightweight networks across various tasks. Notably, iFormer achieves an impressive Top-1 accuracy of 80.4% on ImageNet-1k with a latency of only 1.10 ms on an iPhone 13, surpassing the recently proposed MobileNetV4 under similar latency constraints. Additionally, our method shows significant improvements in downstream tasks, including COCO object detection, instance segmentation, and ADE20k semantic segmentation, while still maintaining low latency on mobile devices for high-resolution inputs in these scenarios. |
| 2501.15370 | Scaling Large Vision-Language Models for Enhanced Multimodal Comprehension In Biomedical Image Analysis | cs.CV cs.AI | Large language models (LLMs) have demonstrated immense capabilities in understanding textual data and are increasingly being adopted to help researchers accelerate scientific discovery through knowledge extraction (information retrieval), knowledge distillation (summarizing key findings and methodologies into concise forms), and knowledge synthesis (aggregating information from multiple scientific sources to address complex queries, generate hypotheses, and formulate experimental plans). However, scientific data often exists in both visual and textual modalities. Vision language models (VLMs) address this by incorporating a pretrained vision backbone for processing images and a cross-modal projector that adapts image tokens into the LLM dimensional space, thereby providing richer multimodal comprehension. Nevertheless, off-the-shelf VLMs show limited capabilities in handling domain-specific data and are prone to hallucinations. We developed intelligent assistants finetuned from LLaVA models to enhance multimodal understanding in low-dose radiation therapy (LDRT), a benign approach used in the treatment of cancer-related illnesses. Using multilingual data from 42,673 articles, we devise complex reasoning and detailed description tasks for visual question answering (VQA) benchmarks. Our assistants, trained on 50,882 image-text pairs, demonstrate superior performance over base models as evaluated using an LLM-as-a-judge approach, particularly in reducing hallucination and improving domain-specific comprehension. |
| 2501.15371 | Acquiring Submillimeter-Accurate Multi-Task Vision Datasets for Computer-Assisted Orthopedic Surgery | cs.CV | Advances in computer vision, particularly in optical image-based 3D reconstruction and feature matching, enable applications like marker-less surgical navigation and digitization of surgery. However, their development is hindered by a lack of suitable datasets with 3D ground truth. This work explores an approach to generating realistic and accurate ex vivo datasets tailored for 3D reconstruction and feature matching in open orthopedic surgery. A set of posed images and an accurately registered ground truth surface mesh of the scene are required to develop vision-based 3D reconstruction and matching methods suitable for surgery. We propose a framework consisting of three core steps, and compare different methods for each step: 3D scanning, calibration of viewpoints for a set of high-resolution RGB images, and an optical-based method for scene registration. We evaluate each step of this framework on an ex vivo scoliosis surgery using a pig spine, conducted under real operating room conditions. A mean 3D Euclidean error of 0.35 mm is achieved with respect to the 3D ground truth. The proposed method results in submillimeter-accurate 3D ground truths and surgical images with a spatial resolution of 0.1 mm. This opens the door to acquiring future surgical datasets for high-precision applications. |
2501.15373
|
Learning-Enhanced Safeguard Control for High-Relative-Degree Systems:
Robust Optimization under Disturbances and Faults
|
eess.SY cs.AI cs.LG cs.SY math.OC nlin.AO
|
Merely pursuing performance may adversely affect the safety, while a
conservative policy for safe exploration will degrade the performance. How to
balance the safety and performance in learning-based control problems is an
interesting yet challenging issue. This paper aims to enhance system
performance with safety guarantee in solving the reinforcement learning
(RL)-based optimal control problems of nonlinear systems subject to
high-relative-degree state constraints and unknown time-varying
disturbance/actuator faults. First, to combine control barrier functions (CBFs)
with RL, a new type of CBFs, termed high-order reciprocal control barrier
function (HO-RCBF) is proposed to deal with high-relative-degree constraints
during the learning process. Then, the concept of gradient similarity is
proposed to quantify the relationship between the gradient of safety and the
gradient of performance. Finally, gradient manipulation and adaptive mechanisms
are introduced in the safe RL framework to enhance the performance with a
safety guarantee. Two simulation examples illustrate that the proposed safe RL
framework can address high-relative-degree constraints, enhance safety
robustness, and improve system performance.
|
2501.15374
|
Evaluating the Effectiveness of XAI Techniques for Encoder-Based
Language Models
|
cs.CL cs.AI
|
The black-box nature of large language models (LLMs) necessitates the
development of eXplainable AI (XAI) techniques for transparency and
trustworthiness. However, evaluating these techniques remains a challenge. This
study presents a general evaluation framework using four key metrics:
Human-reasoning Agreement (HA), Robustness, Consistency, and Contrastivity. We
assess the effectiveness of six explainability techniques from five different
XAI categories: model simplification (LIME), perturbation-based methods (SHAP),
gradient-based approaches (InputXGradient, Grad-CAM), Layer-wise Relevance
Propagation (LRP), and attention mechanisms-based explainability methods
(Attention Mechanism Visualization, AMV) across five encoder-based language
models: TinyBERT, BERTbase, BERTlarge, XLM-R large, and DeBERTa-xlarge, using
the IMDB Movie Reviews and Tweet Sentiment Extraction (TSE) datasets. Our
findings show that the model simplification-based XAI method (LIME)
consistently outperforms across multiple metrics and models, significantly
excelling in HA with a score of 0.9685 on DeBERTa-xlarge, robustness, and
consistency as the complexity of large language models increases. AMV
demonstrates the best Robustness, with scores as low as 0.0020. It also excels
in Consistency, achieving near-perfect scores of 0.9999 across all models.
Regarding Contrastivity, LRP performs the best, particularly on more complex
models, with scores up to 0.9371.
|
2501.15377
|
Fine Tuning without Catastrophic Forgetting via Selective Low Rank
Adaptation
|
cs.CV
|
Adapting deep learning models to new domains often requires computationally
intensive retraining and risks catastrophic forgetting. While fine-tuning
enables domain-specific adaptation, it can reduce robustness to distribution
shifts, impacting out-of-distribution (OOD) performance. Pre-trained zero-shot
models like CLIP offer strong generalization but may suffer degraded robustness
after fine-tuning. Building on Task Adaptive Parameter Sharing (TAPS), we
propose a simple yet effective extension as a parameter-efficient fine-tuning
(PEFT) method, using an indicator function to selectively activate Low-Rank
Adaptation (LoRA) blocks. Our approach minimizes knowledge loss, retains its
generalization strengths under domain shifts, and significantly reduces
computational costs compared to traditional fine-tuning. We demonstrate that
effective fine-tuning can be achieved with as few as 5% of active blocks,
substantially improving efficiency. Evaluations on pre-trained models such as
CLIP and DINO-ViT demonstrate our method's broad applicability and
effectiveness in maintaining performance and knowledge retention.
|
2501.15378
|
How to Mitigate Information Loss in Knowledge Graphs for GraphRAG:
Leveraging Triple Context Restoration and Query-Driven Feedback
|
cs.AI cs.IR
|
Knowledge Graph (KG)-augmented Large Language Models (LLMs) have recently
propelled significant advances in complex reasoning tasks, thanks to their
broad domain knowledge and contextual awareness. Unfortunately, current methods
often assume KGs to be complete, which is impractical given the inherent
limitations of KG construction and the potential loss of contextual cues when
converting unstructured text into entity-relation triples. In response, this
paper proposes the Triple Context Restoration and Query-driven Feedback
(TCR-QF) framework, which reconstructs the textual context underlying each
triple to mitigate information loss, while dynamically refining the KG
structure by iteratively incorporating query-relevant missing knowledge.
Experiments on five benchmark question-answering datasets substantiate the
effectiveness of TCR-QF in KG and LLM integration, where it achieves a 29.1%
improvement in Exact Match and a 15.5% improvement in F1 over its
state-of-the-art GraphRAG competitors.
|
2501.15379
|
Zero-Shot Interactive Text-to-Image Retrieval via Diffusion-Augmented
Representations
|
cs.IR cs.AI cs.CV
|
Interactive Text-to-Image Retrieval (I-TIR) has emerged as a transformative
user-interactive tool for applications in domains such as e-commerce and
education. Yet, current methodologies predominantly depend on finetuned
Multimodal Large Language Models (MLLMs), which face two critical limitations:
(1) Finetuning imposes prohibitive computational overhead and long-term
maintenance costs. (2) Finetuning narrows the pretrained knowledge distribution
of MLLMs, reducing their adaptability to novel scenarios. These issues are
exacerbated by the inherently dynamic nature of real-world I-TIR systems, where
queries and image databases evolve in complexity and diversity, often deviating
from static training distributions. To overcome these constraints, we propose
Diffusion Augmented Retrieval (DAR), a paradigm-shifting framework that
bypasses MLLM finetuning entirely. DAR synergizes Large Language Model
(LLM)-guided query refinement with Diffusion Model (DM)-based visual synthesis
to create contextually enriched intermediate representations. This
dual-modality approach deciphers nuanced user intent more holistically,
enabling precise alignment between textual queries and visually relevant
images. Rigorous evaluations across four benchmarks reveal DAR's dual
strengths: (1) Matches state-of-the-art finetuned I-TIR models on
straightforward queries without task-specific training. (2) Scalable
Generalization: Surpasses finetuned baselines by 7.61% in Hits@10 (top-10
accuracy) under multi-turn conversational complexity, demonstrating robustness
to intricate, distributionally shifted interactions. By eliminating finetuning
dependencies and leveraging generative-augmented representations, DAR
establishes a new trajectory for efficient, adaptive, and scalable cross-modal
retrieval systems.
|
2501.15383
|
Qwen2.5-1M Technical Report
|
cs.CL
|
We introduce Qwen2.5-1M, a series of models that extend the context length to
1 million tokens. Compared to the previous 128K version, the Qwen2.5-1M series
have significantly enhanced long-context capabilities through long-context
pre-training and post-training. Key techniques such as long data synthesis,
progressive pre-training, and multi-stage supervised fine-tuning are employed
to effectively enhance long-context performance while reducing training costs.
To promote the use of long-context models among a broader user base, we
present and open-source our inference framework. This framework includes a
length extrapolation method that can expand the model context lengths by at
least four times, or even more, without additional training. To reduce
inference costs, we implement a sparse attention method along with chunked
prefill optimization for deployment scenarios and a sparsity refinement method
to improve precision. Additionally, we detail our optimizations in the
inference engine, including kernel optimization, pipeline parallelism, and
scheduling optimization, which significantly enhance overall inference
performance. By leveraging our inference framework, the Qwen2.5-1M models
achieve a remarkable 3x to 7x prefill speedup in scenarios with 1 million
tokens of context. This framework provides an efficient and powerful solution
for developing applications that require long-context processing using
open-source models.
The Qwen2.5-1M series currently includes the open-source models
Qwen2.5-7B-Instruct-1M and Qwen2.5-14B-Instruct-1M, as well as the API-accessed
model Qwen2.5-Turbo. Evaluations show that Qwen2.5-1M models have been greatly
improved in long-context tasks without compromising performance in
short-context scenarios. Specifically, the Qwen2.5-14B-Instruct-1M model
significantly outperforms GPT-4o-mini in long-context tasks and supports
contexts eight times longer.
|
2501.15384
|
MetaOcc: Surround-View 4D Radar and Camera Fusion Framework for 3D
Occupancy Prediction with Dual Training Strategies
|
cs.CV cs.AI
|
3D occupancy prediction is crucial for autonomous driving perception. Fusion
of 4D radar and camera offers a potentially robust, low-cost solution for
occupancy prediction in severe weather. However, achieving effective
multi-modal feature fusion while reducing annotation costs remains a
significant challenge. In this work, we propose MetaOcc, a novel multi-modal occupancy
prediction framework that fuses surround-view cameras and 4D radar for
comprehensive environmental perception. We first design a height self-attention
module for effective 3D feature extraction from sparse radar points. Then, a
local-global fusion mechanism is proposed to adaptively capture modality
contributions while handling spatio-temporal misalignments. A temporal alignment
and fusion module is employed to further aggregate historical features.
Furthermore, we develop a semi-supervised training procedure leveraging
open-set segmentor and geometric constraints for pseudo-label generation,
enabling robust perception with limited annotations. Extensive experiments on
OmniHD-Scenes dataset demonstrate that MetaOcc achieves state-of-the-art
performance, surpassing previous methods by significant margins. Notably, as
the first semi-supervised 4D radar and camera fusion-based occupancy prediction
approach, MetaOcc maintains 92.5% of the fully-supervised performance while
using only 50% of ground truth annotations, establishing a new benchmark for
multi-modal 3D occupancy prediction. Code and data are available at
https://github.com/LucasYang567/MetaOcc.
|
2501.15385
|
DDUNet: Dual Dynamic U-Net for Highly-Efficient Cloud Segmentation
|
cs.CV eess.IV
|
Cloud segmentation amounts to separating cloud pixels from non-cloud pixels
in an image. Current deep learning methods for cloud segmentation suffer from
three issues: (a) a constrained receptive field due to the fixed size of
the convolution kernel; (b) a lack of robustness towards different scenarios;
and (c) a large number of parameters, which limits real-time
implementation. To address these issues, we propose a Dual Dynamic U-Net
(DDUNet) for supervised cloud segmentation. The DDUNet adheres to a U-Net
architecture and integrates two crucial modules: the dynamic multi-scale
convolution (DMSC), which improves the merging of features under different receptive
fields, and the dynamic weights and bias generator (DWBG) in classification
layers to enhance generalization ability. More importantly, owing to the use of
depth-wise convolution, the DDUNet is a lightweight network that can achieve
95.3% accuracy on the SWINySEG dataset with only 0.33M parameters, and achieve
superior performance over three different configurations of the SWINySEG
dataset in both accuracy and efficiency.
|
2501.15388
|
Guaranteed Multidimensional Time Series Prediction via Deterministic
Tensor Completion Theory
|
cs.LG
|
In recent years, the prediction of multidimensional time series data has
become increasingly important due to its wide-ranging applications.
Tensor-based prediction methods have gained attention for their ability to
preserve the inherent structure of such data. However, existing approaches,
such as tensor autoregression and tensor decomposition, have consistently
failed to provide clear assertions regarding the number of samples that can be
exactly predicted. While matrix-based methods using nuclear norms address this
limitation, their reliance on matrices limits accuracy and increases
computational costs when handling multidimensional data. To overcome these
challenges, we reformulate multidimensional time series prediction as a
deterministic tensor completion problem and propose a novel theoretical
framework. Specifically, we develop a deterministic tensor completion theory
and introduce the Temporal Convolutional Tensor Nuclear Norm (TCTNN) model. By
convolving the multidimensional time series along the temporal dimension and
applying the tensor nuclear norm, our approach identifies the maximum forecast
horizon for exact predictions. Additionally, TCTNN achieves superior
performance in prediction accuracy and computational efficiency compared to
existing methods across diverse real-world datasets, including climate
temperature, network flow, and traffic ride data. Our implementation is
publicly available at https://github.com/HaoShu2000/TCTNN.
|
2501.15389
|
CP2M: Clustered-Patch-Mixed Mosaic Augmentation for Aerial Image
Segmentation
|
cs.CV
|
Remote sensing image segmentation is pivotal for earth observation,
underpinning applications such as environmental monitoring and urban planning.
Due to the limited annotation data available in remote sensing images, numerous
studies have focused on data augmentation as a means to alleviate overfitting
in deep learning networks. However, some existing data augmentation strategies
rely on simple transformations that may not sufficiently enhance data diversity
or model generalization capabilities. This paper proposes a novel augmentation
strategy, Clustered-Patch-Mixed Mosaic (CP2M), designed to address these
limitations. CP2M integrates a Mosaic augmentation phase with a clustered patch
mix phase. The former constructs a new sample from four random samples,
while the latter uses the connected component labeling algorithm to
ensure the augmented data maintains spatial coherence and avoids introducing
irrelevant semantics when pasting random patches. Our experiments on the ISPRS
Potsdam dataset demonstrate that CP2M substantially mitigates overfitting,
setting new benchmarks for segmentation accuracy and model robustness in remote
sensing tasks.
|
2501.15392
|
Faster Configuration Performance Bug Testing with Neural Dual-level
Prioritization
|
cs.SE cs.AI
|
As software systems become more complex and configurable, more performance
problems tend to arise from the configuration designs. This has caused some
configuration options to unexpectedly degrade performance, deviating from
the developers' original expectations. Such discrepancies,
namely configuration performance bugs (CPBugs), are devastating and can be
deeply hidden in the source code. Yet, efficiently testing CPBugs is difficult,
not only because the test oracle is hard to set, but also because
configuration measurement is expensive and there are simply too many possible
configurations to test. As such, existing testing tools suffer from lengthy
runtimes or are ineffective in detecting CPBugs when the budget is
limited, compounded by inaccurate test oracles. In this paper, we seek to
achieve significantly faster CPBug testing by neurally prioritizing the testing
at both the configuration option and value range levels with automated oracle
estimation. Our proposed tool, dubbed NDP, is a general framework that works
with different heuristic generators. The idea is to leverage two neural
language models: one to estimate the CPBug types that serve as the oracle
while, more vitally, the other to infer the probabilities of an option being
CPBug-related, based on which the options and the value ranges to be searched
can be prioritized. Experiments on several widely-used systems of different
versions reveal that NDP can, in general, better predict CPBug types in 87% of
cases and find more CPBugs with up to an 88.88x testing efficiency speedup over
the state-of-the-art tools.
|
2501.15393
|
Diffusion-based Hierarchical Negative Sampling for Multimodal Knowledge
Graph Completion
|
cs.AI cs.CL
|
Multimodal Knowledge Graph Completion (MMKGC) aims to address the critical
issue of missing knowledge in multimodal knowledge graphs (MMKGs) for their
better applications. However, both previous MMKGC and negative sampling
(NS) approaches ignore the employment of multimodal information to generate
diverse and high-quality negative triples from various semantic levels and
hardness levels, thereby limiting the effectiveness of training MMKGC models.
Thus, we propose a novel Diffusion-based Hierarchical Negative Sampling (DHNS)
scheme tailored for MMKGC tasks, which tackles the challenge of generating
high-quality negative triples by leveraging a Diffusion-based Hierarchical
Embedding Generation (DiffHEG) that progressively conditions on entities and
relations as well as multimodal semantics. Furthermore, we develop a Negative
Triple-Adaptive Training (NTAT) strategy that dynamically adjusts training
margins associated with the hardness level of the synthesized negative triples,
facilitating a more robust and effective learning procedure to distinguish
between positive and negative triples. Extensive experiments on three MMKGC
benchmark datasets demonstrate that our framework outperforms several
state-of-the-art MMKGC models and negative sampling techniques, illustrating
the effectiveness of our DHNS for training MMKGC models. The source codes and
datasets of this paper are available at https://github.com/ngl567/DHNS.
|
2501.15394
|
Doracamom: Joint 3D Detection and Occupancy Prediction with Multi-view
4D Radars and Cameras for Omnidirectional Perception
|
cs.CV
|
3D object detection and occupancy prediction are critical tasks in autonomous
driving, attracting significant attention. Despite the potential of recent
vision-based methods, they encounter challenges under adverse conditions. Thus,
integrating cameras with next-generation 4D imaging radar to achieve unified
multi-task perception is highly significant, though research in this domain
remains limited. In this paper, we propose Doracamom, the first framework that
fuses multi-view cameras and 4D radar for joint 3D object detection and
semantic occupancy prediction, enabling comprehensive environmental perception.
Specifically, we introduce a novel Coarse Voxel Queries Generator that
integrates geometric priors from 4D radar with semantic features from images to
initialize voxel queries, establishing a robust foundation for subsequent
Transformer-based refinement. To leverage temporal information, we design a
Dual-Branch Temporal Encoder that processes multi-modal temporal features in
parallel across BEV and voxel spaces, enabling comprehensive spatio-temporal
representation learning. Furthermore, we propose a Cross-Modal BEV-Voxel Fusion
module that adaptively fuses complementary features through attention
mechanisms while employing auxiliary tasks to enhance feature quality.
Extensive experiments on the OmniHD-Scenes, View-of-Delft (VoD), and TJ4DRadSet
datasets demonstrate that Doracamom achieves state-of-the-art performance in
both tasks, establishing new benchmarks for multi-modal 3D perception. Code and
models will be publicly available.
|
2501.15396
|
Foundations of a Knee Joint Digital Twin from qMRI Biomarkers for
Osteoarthritis and Knee Replacement
|
q-bio.QM cs.CV cs.LG eess.IV stat.AP
|
This study forms the basis of a digital twin system of the knee joint, using
advanced quantitative MRI (qMRI) and machine learning to advance precision
health in osteoarthritis (OA) management and knee replacement (KR) prediction.
We combined deep learning-based segmentation of knee joint structures with
dimensionality reduction to create an embedded feature space of imaging
biomarkers. Through cross-sectional cohort analysis and statistical modeling,
we identified specific biomarkers, including variations in cartilage thickness
and medial meniscus shape, that are significantly associated with OA incidence
and KR outcomes. Integrating these findings into a comprehensive framework
represents a considerable step toward personalized knee-joint digital twins,
which could enhance therapeutic strategies and inform clinical decision-making
in rheumatological care. This versatile and reliable infrastructure has the
potential to be extended to broader clinical applications in precision health.
|
2501.15398
|
How Green are Neural Language Models? Analyzing Energy Consumption in
Text Summarization Fine-tuning
|
cs.CL
|
Artificial intelligence systems significantly impact the environment,
particularly in natural language processing (NLP) tasks. These tasks often
require extensive computational resources to train deep neural networks,
including large-scale language models containing billions of parameters. This
study analyzes the trade-offs between energy consumption and performance across
three neural language models: two pre-trained models (T5-base and BART-base),
and one large language model (LLaMA 3-8B). These models were fine-tuned for the
text summarization task, focusing on generating research paper highlights that
encapsulate the core themes of each paper. A wide range of evaluation metrics,
including ROUGE, METEOR, MoverScore, BERTScore, and SciBERTScore, were employed
to assess their performance. Furthermore, the carbon footprint associated with
fine-tuning each model was measured, offering a comprehensive assessment of
their environmental impact. This research underscores the importance of
incorporating environmental considerations into the design and implementation
of neural language models and calls for the advancement of energy-efficient AI
methodologies.
|
2501.15403
|
Scaling of hardware-compatible perturbative training algorithms
|
cs.LG cs.NE math.OC
|
In this work, we explore the capabilities of multiplexed gradient descent
(MGD), a scalable and efficient perturbative zeroth-order training method that
estimates the gradient of a loss function in hardware and trains the network via
stochastic gradient descent. We extend the framework to include both weight and
node perturbation, and discuss the advantages and disadvantages of each
approach. We investigate the time to train networks using MGD as a function of
network size and task complexity. Previous research has suggested that
perturbative training methods do not scale well to large problems, since in
these methods the time to estimate the gradient scales linearly with the number
of network parameters. However, in this work we show that the time to reach a
target accuracy--that is, actually solve the problem of interest--does not
follow this undesirable linear scaling, and in fact often decreases with
network size. Furthermore, we demonstrate that MGD can be used to calculate a
drop-in replacement for the gradient in stochastic gradient descent, and
therefore optimization accelerators such as momentum can be used alongside MGD,
ensuring compatibility with existing machine learning practices. Our results
indicate that MGD can efficiently train large networks on hardware, achieving
accuracy comparable to backpropagation, thus presenting a practical solution
for future neuromorphic computing systems.
|
2501.15404
|
A Neurosymbolic Framework for Geometric Reduction of Binary Forms
|
cs.AI cs.LG
|
This paper compares Julia reduction and hyperbolic reduction with the aim of
finding equivalent binary forms with minimal coefficients. We demonstrate that
hyperbolic reduction generally outperforms Julia reduction, particularly in the
cases of sextics and decimics, though neither method guarantees achieving the
minimal form. We further propose an additional shift and scaling to approximate
the minimal form more closely. Finally, we introduce a machine learning
framework to identify optimal transformations that minimize the heights of
binary forms. This study provides new insights into the geometry and algebra of
binary forms and highlights the potential of AI in advancing symbolic
computation and reduction techniques. The findings, supported by extensive
computational experiments, lay the groundwork for hybrid approaches that
integrate traditional reduction methods with data-driven techniques.
|
2501.15405
|
Semantic Layered Embedding Diffusion in Large Language Models for
Multi-Contextual Consistency
|
cs.CL cs.AI
|
The Semantic Layered Embedding Diffusion (SLED) mechanism redefines the
representation of hierarchical semantics within transformer-based
architectures, enabling enhanced contextual consistency across a wide array of
linguistic tasks. By introducing a multi-layered diffusion process grounded in
spectral analysis, it achieves a complex balance between global and local
semantic coherence. Experimental results demonstrate significant improvements
in perplexity and BLEU scores, emphasizing the mechanism's ability to adapt
effectively across diverse domains, including multilingual and cross-domain
text generation. A rigorous mathematical framework underpins the embedding
diffusion process, incorporating weighted adjacency matrices, kernel-based
refinements, and dynamic layer-wise normalization. Error distribution analysis
reveals that SLED addresses challenges in semantic alignment and coherence,
outperforming baseline approaches across varied benchmarks. Scalability studies
illustrate that its performance gains are maintained consistently across
different model sizes, reflecting a practical balance between computational
efficiency and linguistic precision. The implementation also achieves energy
efficiency, reducing resource consumption during training and inference phases
without compromising accuracy. Qualitative case studies further validate its
adaptability to extended narratives and context-intensive scenarios,
highlighting the mechanism's potential for real-world applications. SLED offers
a different perspective on embedding design and its implications for advancing
language modeling.
|
2501.15406
|
A Token-FCM based risk assessment method for complex engineering designs
|
cs.CE
|
Engineering design risks could cause unaffordable losses, and thus risk
assessment plays a critical role in engineering design. On the other hand, the
high complexity of modern engineering designs makes it difficult to assess
risks effectively and accurately due to the complex two-way, dynamic
causal-effect risk relations in engineering designs. To address this problem,
this paper proposes a new risk assessment method called token fuzzy cognitive
map (Token-FCM). Its basic idea is to model the two-way causal-risk relations
with the FCM method, and then augment FCM with a token mechanism to model the
dynamics in causal-effect risk relations. Furthermore, the fuzzy sets and the
group decision-making method are introduced to initialize the Token-FCM method
so that comprehensive and accurate risk assessments can be attained. The
effectiveness of the proposed method has been demonstrated by a real example of
engine design for a horizontal directional drilling machine.
|
2501.15407
|
Turn That Frown Upside Down: FaceID Customization via Cross-Training
Data
|
cs.CV cs.AI cs.LG
|
Existing face identity (FaceID) customization methods perform well but are
limited to generating faces identical to the input, while in real-world
applications, users often desire images of the same person but with variations,
such as different expressions (e.g., smiling, angry) or angles (e.g., side
profile). This limitation arises from the lack of datasets with controlled
input-output facial variations, restricting models' ability to learn effective
modifications.
To address this issue, we propose CrossFaceID, the first large-scale,
high-quality, and publicly available dataset specifically designed to improve
the facial modification capabilities of FaceID customization models.
Specifically, CrossFaceID consists of 40,000 text-image pairs from
approximately 2,000 persons, with each person represented by around 20 images
showcasing diverse facial attributes such as poses, expressions, angles, and
adornments. During the training stage, a specific face of a person is used as
input, and the FaceID customization model is forced to generate another image
of the same person but with altered facial features. This allows the FaceID
customization model to acquire the ability to personalize and modify known
facial features during the inference stage. Experiments show that models
fine-tuned on the CrossFaceID dataset retain their performance in preserving
FaceID fidelity while significantly improving their face customization
capabilities.
To facilitate further advancements in the FaceID customization field, our
code, constructed datasets, and trained models are fully available to the
public.
|
2501.15409
|
TdAttenMix: Top-Down Attention Guided Mixup
|
cs.CV cs.AI
|
CutMix is a data augmentation strategy that cuts and pastes image patches to
mixup training data. Existing methods pick either random or salient areas which
are often inconsistent with labels, thus misguiding the training model. To the
best of our knowledge, we are the first to integrate human gaze to guide CutMix. Since
human attention is driven by both high-level recognition and low-level clues,
we propose a controllable Top-down Attention Guided Module to obtain a general
artificial attention which balances top-down and bottom-up attention. The
proposed TdAttenMix then picks patches and adjusts the label mixing ratio
to focus on regions relevant to the current label. Experimental results
demonstrate that our TdAttenMix outperforms existing state-of-the-art mixup
methods across eight different benchmarks. Additionally, we introduce a new
metric based on the human gaze and use this metric to investigate the issue of
image-label inconsistency. Project page:
https://github.com/morning12138/TdAttenMix
|
2501.15411
|
The Potential of Large Language Models in Supply Chain Management:
Advancing Decision-Making, Efficiency, and Innovation
|
cs.CY cs.CL
|
The integration of large language models (LLMs) into supply chain management
(SCM) is revolutionizing the industry by improving decision-making, predictive
analytics, and operational efficiency. This white paper explores the
transformative impact of LLMs on various SCM functions, including demand
forecasting, inventory management, supplier relationship management, and
logistics optimization. By leveraging advanced data analytics and real-time
insights, LLMs enable organizations to optimize resources, reduce costs, and
improve responsiveness to market changes. Key findings highlight the benefits
of integrating LLMs with emerging technologies such as IoT, blockchain, and
robotics, which together create smarter and more autonomous supply chains.
Ethical considerations, including bias mitigation and data protection, are
taken into account to ensure fair and transparent AI practices. In addition,
the paper discusses the need to educate the workforce on how to manage new
AI-driven processes and the long-term strategic benefits of adopting LLMs.
Strategic recommendations for SCM professionals include investing in
high-quality data management, promoting cross-functional collaboration, and
aligning LLM initiatives with overall business goals. The findings highlight
the potential of LLMs to drive innovation, sustainability, and competitive
advantage in the ever-changing supply chain management landscape.
|
2501.15415
|
OCSU: Optical Chemical Structure Understanding for Molecule-centric
Scientific Discovery
|
cs.CV
|
Understanding the chemical structure from a graphical representation of a
molecule is a challenging image caption task that would greatly benefit
molecule-centric scientific discovery. Variations in molecular images and
caption subtasks pose a significant challenge in both image representation
learning and task modeling. Yet, existing methods only focus on a specific
caption task that translates a molecular image into its graph structure, i.e.,
OCSR. In this paper, we propose the Optical Chemical Structure Understanding
(OCSU) task, which extends OCSR to molecular image caption from motif level to
molecule level and abstract level. We present two approaches for this task,
including an OCSR-based method and an end-to-end OCSR-free method. The proposed
Double-Check achieves SOTA OCSR performance on real-world patent and journal
article scenarios via attentive feature enhancement for local ambiguous atoms.
Cascaded with SMILES-based molecule understanding methods, it can leverage the
power of existing task-specific models for OCSU. Mol-VL, in contrast, is an
end-to-end optimized VLM-based model. An OCSU dataset, Vis-CheBI20, is built on the
widely used CheBI20 dataset for training and evaluation. Extensive experimental
results on Vis-CheBI20 demonstrate the effectiveness of the proposed
approaches. Improving OCSR capability leads to better OCSU performance for the
OCSR-based approach, and the SOTA performance of Mol-VL demonstrates the great
potential of the end-to-end approach.
|
2501.15417
|
AnyEnhance: A Unified Generative Model with Prompt-Guidance and
Self-Critic for Voice Enhancement
|
cs.SD cs.AI cs.LG eess.AS
|
We introduce AnyEnhance, a unified generative model for voice enhancement
that processes both speech and singing voices. Based on a masked generative
model, AnyEnhance is capable of handling both speech and singing voices,
supporting a wide range of enhancement tasks including denoising,
dereverberation, declipping, super-resolution, and target speaker extraction,
all simultaneously and without fine-tuning. AnyEnhance introduces a
prompt-guidance mechanism for in-context learning, which allows the model to
natively accept a reference speaker's timbre. In this way, it can boost
enhancement performance when reference audio is available and enable the
target speaker extraction task without altering the underlying architecture.
Moreover, we also introduce a self-critic mechanism into the generative process
for masked generative models, yielding higher-quality outputs through iterative
self-assessment and refinement. Extensive experiments on various enhancement
tasks demonstrate AnyEnhance outperforms existing methods in terms of both
objective metrics and subjective listening tests. Demo audios are publicly
available at https://amphionspace.github.io/anyenhance/.
|
2501.15418
|
Episodic Novelty Through Temporal Distance
|
cs.LG cs.AI
|
Exploration in sparse reward environments remains a significant challenge in
reinforcement learning, particularly in Contextual Markov Decision Processes
(CMDPs), where environments differ across episodes. Existing episodic intrinsic
motivation methods for CMDPs primarily rely on count-based approaches, which
are ineffective in large state spaces, or on similarity-based methods that lack
appropriate metrics for state comparison. To address these shortcomings, we
propose Episodic Novelty Through Temporal Distance (ETD), a novel approach that
introduces temporal distance as a robust metric for state similarity and
intrinsic reward computation. By employing contrastive learning, ETD accurately
estimates temporal distances and derives intrinsic rewards based on the novelty
of states within the current episode. Extensive experiments on various
benchmark tasks demonstrate that ETD significantly outperforms state-of-the-art
methods, highlighting its effectiveness in enhancing exploration in sparse
reward CMDPs.
|
2501.15420
|
Visual Generation Without Guidance
|
cs.CV cs.AI cs.LG
|
Classifier-Free Guidance (CFG) has been a default technique in various visual
generative models, yet it requires inference from both conditional and
unconditional models during sampling. We propose to build visual models that
are free from guided sampling. The resulting algorithm, Guidance-Free Training
(GFT), matches the performance of CFG while reducing sampling to a single
model, halving the computational cost. Unlike previous distillation-based
approaches that rely on pretrained CFG networks, GFT enables training directly
from scratch. GFT is simple to implement. It retains the same maximum
likelihood objective as CFG and differs mainly in the parameterization of
conditional models. Implementing GFT requires only minimal modifications to
existing codebases, as most design choices and hyperparameters are directly
inherited from CFG. Our extensive experiments across five distinct visual
models demonstrate the effectiveness and versatility of GFT. Across domains of
diffusion, autoregressive, and masked-prediction modeling, GFT consistently
achieves comparable or even lower FID scores, with similar diversity-fidelity
trade-offs compared with CFG baselines, all while being guidance-free. Code
will be available at https://github.com/thu-ml/GFT.
|
2501.15423
|
Stroke Lesion Segmentation using Multi-Stage Cross-Scale Attention
|
eess.IV cs.CV
|
Precise characterization of stroke lesions from MRI data has immense value in
prognosticating clinical and cognitive outcomes following a stroke. Manual
stroke lesion segmentation is time-consuming and requires the expertise of
neurologists and neuroradiologists. Often, lesions are grossly characterized
for their location and overall extent using bounding boxes without specific
delineation of their boundaries. While such characterization provides some
clinical value, to develop a precise mechanistic understanding of the impact of
lesions on post-stroke vascular contributions to cognitive impairments and
dementia (VCID), the stroke lesions need to be fully segmented with accurate
boundaries. This work introduces the Multi-Stage Cross-Scale Attention (MSCSA)
mechanism, applied to the U-Net family, to improve the mapping between brain
structural features and lesions of varying sizes. Using the Anatomical Tracings
of Lesions After Stroke (ATLAS) v2.0 dataset, MSCSA outperforms all baseline
methods in both Dice and F1 scores on a subset focusing on small lesions, while
maintaining competitive performance across the entire dataset. Notably, the
ensemble strategy incorporating MSCSA achieves the highest scores for Dice and
F1 on both the full dataset and the small lesion subset. These results
demonstrate the effectiveness of MSCSA in segmenting small lesions and
highlight its robustness across different training schemes for large stroke
lesions. Our code is available at: https://github.com/nadluru/StrokeLesSeg.
|
2501.15425
|
An Empirically-parametrized Spatio-Temporal Extended-SIR Model for
Combined Dilution and Vaccination Mitigation for Rabies Outbreaks in Wild
Jackals
|
cs.IR physics.soc-ph
|
The transmission of zoonotic diseases between animals and humans poses an
increasing threat. Rabies is a prominent example with various instances
globally, facilitated by a surplus of meso-predators (commonly, facultative
synanthropic species, e.g., golden jackals [Canis aureus, hereafter jackals])
thanks to the abundance of anthropogenic resources leading to dense populations
close to human establishments. To mitigate rabies outbreaks and prevent human
infections, authorities target the jackal, which is the main rabies vector in
many regions, through the dissemination of oral vaccines in known jackals'
activity centers, as well as opportunistic culling to reduce population
density. Because dilution (i.e., culling) is not selective towards sick or
un-vaccinated individuals, these two complementary epizootic intervention
policies (EIPs) can interfere with each other. Nonetheless, there is only
limited examination of the interactive effectiveness of these EIPs and their
potential influence on rabies epizootic spread dynamics, highlighting the need
to understand these measures and the spread of rabies in wild jackals. In this
study, we introduce a novel spatio-temporal extended-SIR
(susceptible-infected-recovered) model with a graph-based spatial framework for
evaluating mitigation efficiency. We implement the model in a case study using
a jackal population in northern Israel, and using spatial and movement data
collected by Advanced Tracking and Localization of Animals in real-life Systems
(ATLAS) telemetry. An agent-based simulation approach allows us to explore
various biologically-realistic scenarios, and assess the impact of different
EIPs configurations. Our model suggests that under biologically-realistic
underlying assumptions and scenarios, the effectiveness of both EIPs is largely
unaffected by jackal population size but is sensitive to jackal dispersal
between activity centers.
|
2501.15426
|
FAVbot: An Autonomous Target Tracking Micro-Robot with Frequency
Actuation Control
|
cs.RO cs.SY eess.SY
|
Robotic autonomy at centimeter scale requires compact and
miniaturization-friendly actuation integrated with a sensing and neural network
processing assembly within a tiny form factor. Applications of such systems
have witnessed significant advancements in recent years in fields such as
healthcare, manufacturing, and post-disaster rescue. The system design at this
scale puts stringent constraints on power consumption for both the sensory
front-end and actuation back-end, and on the weight of the electronic assembly, for
robust operation. In this paper, we introduce FAVbot, the first autonomous
mobile micro-robotic system integrated with a novel actuation mechanism and
convolutional neural network (CNN) based computer vision - all integrated
within a compact 3-cm form factor. The novel actuation mechanism utilizes the
mechanical resonance phenomenon to achieve frequency-controlled steering with a
single piezoelectric actuator. Experimental results demonstrate the
effectiveness of FAVbot's frequency-controlled actuation, which offers a
diverse selection of resonance modes with different motion characteristics. The
actuation system is complemented with the vision front-end where a camera along
with a microcontroller supports object detection for closed-loop control and
autonomous target tracking. This enables adaptive navigation in dynamic
environments. This work contributes to the evolving landscape of neural
network-enabled micro-robotic systems showing the smallest autonomous robot
built using a controllable multi-directional single-actuator mechanism.
|
2501.15427
|
OpenCharacter: Training Customizable Role-Playing LLMs with Large-Scale
Synthetic Personas
|
cs.CL
|
Customizable role-playing in large language models (LLMs), also known as
character generalization, is gaining increasing attention for its versatility
and cost-efficiency in developing and deploying role-playing dialogue agents.
This study explores a large-scale data synthesis approach to equip LLMs with
character generalization capabilities. We begin by synthesizing large-scale
character profiles using personas from Persona Hub and then explore two
strategies: response rewriting and response generation, to create
character-aligned instructional responses. To validate the effectiveness of our
synthetic instruction tuning data for character generalization, we perform
supervised fine-tuning (SFT) using the LLaMA-3 8B model. Our best-performing
model strengthens the original LLaMA-3 8B Instruct model and achieves
performance comparable to GPT-4o models on role-playing dialogue. We release
our synthetic characters and instruction-tuning dialogues to support public
research.
|
2501.15429
|
An Aspect Performance-aware Hypergraph Neural Network for Review-based
Recommendation
|
cs.IR
|
Online reviews allow consumers to provide detailed feedback on various
aspects of items. Existing methods utilize these aspects to model users'
fine-grained preferences for specific item features through graph neural
networks. We argue that the performance of items on different aspects is
important for making precise recommendations, which has not been taken into
account by existing approaches, due to a lack of data. In this paper, we propose
an aspect performance-aware hypergraph neural network (APH) for the
review-based recommendation, which learns the performance of items from the
conflicting sentiment polarity of user reviews. Specifically, APH
comprehensively models the relationships among users, items, aspects, and
sentiment polarity by systematically constructing an aspect hypergraph based on
user reviews. In addition, APH aggregates aspects representing users and items
by employing an aspect performance-aware hypergraph aggregation method. It
aggregates the sentiment polarities from multiple users by jointly considering
user preferences and the semantics of their sentiments, determining the weights
of sentiment polarities to infer the performance of items on various aspects.
Such performances are then used as weights to aggregate neighboring aspects.
Experiments on six real-world datasets demonstrate that APH improves MSE,
Precision@5, and Recall@5 by an average of 2.30%, 4.89%, and 1.60% over the
best baseline. The source code and data are available at
https://github.com/dianziliu/APH.
|
2501.15430
|
Evaluating Simple Debiasing Techniques in RoBERTa-based Hate Speech
Detection Models
|
cs.CL
|
The hate speech detection task is known to suffer from bias against African
American English (AAE) dialect text, due to the annotation bias present in the
underlying hate speech datasets used to train these models. This leads to a
disparity where normal AAE text is more likely to be misclassified as
abusive/hateful compared to non-AAE text. Simple debiasing techniques have been
developed in the past to counter this sort of disparity, and in this work, we
apply and evaluate these techniques in the scope of RoBERTa-based encoders.
Experimental results suggest that the success of these techniques depends
heavily on the methods used for training dataset construction, but with proper
consideration of representation bias, they can reduce the disparity seen among
dialect subgroups on the hate speech detection task.
|
2501.15431
|
Self-supervised Benchmark Lottery on ImageNet: Do Marginal Improvements
Translate to Improvements on Similar Datasets?
|
cs.CV cs.AI cs.LG
|
Machine learning (ML) research strongly relies on benchmarks in order to
determine the relative effectiveness of newly proposed models. Recently, several
prominent research efforts have argued that models that improve
the state-of-the-art by a small margin tend to do so by winning what they call
a "benchmark lottery". An important benchmark in the field of machine learning
and computer vision is the ImageNet where newly proposed models are often
showcased based on their performance on this dataset. Given the large number of
self-supervised learning (SSL) frameworks that have been proposed in the past
couple of years, each coming with marginal improvements on the ImageNet dataset,
in this work, we evaluate whether those marginal improvements on ImageNet
translate to improvements on similar datasets or not. To do so, we investigate
twelve popular SSL frameworks on five ImageNet variants and discover that
models that seem to perform well on ImageNet may experience significant
performance declines on similar datasets. Specifically, state-of-the-art
frameworks such as DINO and SwAV, which are praised for their performance,
exhibit substantial drops in performance, while MoCo and Barlow Twins display
comparatively good results. As a result, we argue that otherwise good and
desirable properties of models remain hidden when benchmarking is only
performed on the ImageNet validation set, prompting our call for more adequate
benchmarking. To avoid the "benchmark lottery" on ImageNet and to ensure a fair
benchmarking process, we investigate the usage of a unified metric that takes
into account the performance of models on other ImageNet variant datasets.
|
2501.15434
|
Mitigating Spurious Negative Pairs for Robust Industrial Anomaly
Detection
|
cs.CV
|
Despite significant progress in Anomaly Detection (AD), the robustness of
existing detection methods against adversarial attacks remains a challenge,
compromising their reliability in critical real-world applications such as
autonomous driving. This issue primarily arises from the AD setup, which
assumes that training data is limited to a group of unlabeled normal samples,
making the detectors vulnerable to adversarial anomaly samples during testing.
Additionally, implementing adversarial training as a safeguard encounters
difficulties, such as formulating an effective objective function without
access to labels. An ideal objective function for adversarial training in AD
should promote strong perturbations both within and between the normal and
anomaly groups to maximize the margin between the normal and anomaly distributions. To
address these issues, we first propose crafting a pseudo-anomaly group derived
from normal group samples. Then, we demonstrate that adversarial training with
contrastive loss could serve as an ideal objective function, as it creates both
inter- and intra-group perturbations. However, we notice that spurious negative
pairs compromise the ability of the conventional contrastive loss to achieve robust AD.
Spurious negative pairs are those that should be closely mapped but are
erroneously separated. These pairs introduce noise and misguide the direction
of inter-group adversarial perturbations. To overcome the effect of spurious
negative pairs, we define opposite pairs and adversarially pull them apart to
strengthen inter-group perturbations. Experimental results demonstrate our
superior performance in both clean and adversarial scenarios, with a 26.1%
improvement in robust detection across various challenging benchmark datasets.
The implementation of our work is available at:
https://github.com/rohban-lab/COBRA.
|
2501.15435
|
Making Sense Of Distributed Representations With Activation Spectroscopy
|
cs.LG cs.CV
|
In the study of neural network interpretability, there is growing evidence to
suggest that relevant features are encoded across many neurons in a distributed
fashion. Making sense of these distributed representations without knowledge of
the network's encoding strategy is a combinatorial task that is not guaranteed
to be tractable. This work explores one feasible path to both detecting and
tracing the joint influence of neurons in a distributed representation. We term
this approach Activation Spectroscopy (ActSpec), owing to its analysis of the
pseudo-Boolean Fourier spectrum defined over the activation patterns of a
network layer. The sub-network defined between a given layer and an output
logit is cast as a special class of pseudo-Boolean function. The contributions
of each subset of neurons in the specified layer can be quantified through the
function's Fourier coefficients. We propose a combinatorial optimization
procedure to search for Fourier coefficients that are simultaneously
high-valued, and non-redundant. This procedure can be viewed as an extension of
the Goldreich-Levin algorithm which incorporates additional problem-specific
constraints. The resulting coefficients specify a collection of subsets, which
are used to test the degree to which a representation is distributed. We verify
our approach in a number of synthetic settings and compare against existing
interpretability benchmarks. We conclude with a number of experimental
evaluations on an MNIST classifier, and a transformer-based network for
sentiment analysis.
|
2501.15438
|
Cross-Modal Transfer from Memes to Videos: Addressing Data Scarcity in
Hateful Video Detection
|
cs.CV cs.MM
|
Detecting hate speech in online content is essential to ensuring safer
digital spaces. While significant progress has been made in text and meme
modalities, video-based hate speech detection remains under-explored, hindered
by a lack of annotated datasets and the high cost of video annotation. This gap
is particularly problematic given the growing reliance on large models, which
demand substantial amounts of training data. To address this challenge, we
leverage meme datasets as both a substitution and an augmentation strategy for
training hateful video detection models. Our approach introduces a
human-assisted reannotation pipeline to align meme dataset labels with video
datasets, ensuring consistency with minimal labeling effort. Using two
state-of-the-art vision-language models, we demonstrate that meme data can
substitute for video data in resource-scarce scenarios and augment video
datasets to achieve further performance gains. Our results consistently
outperform state-of-the-art benchmarks, showcasing the potential of cross-modal
transfer learning for advancing hateful video detection. Dataset and code are
available at https://github.com/Social-AI-Studio/CrossModalTransferLearning.
|
2501.15440
|
Dfilled: Repurposing Edge-Enhancing Diffusion for Guided DSM Void
Filling
|
cs.CV
|
Digital Surface Models (DSMs) are essential for accurately representing
Earth's topography in geospatial analyses. DSMs capture detailed elevations of
natural and man-made features, crucial for applications like urban planning,
vegetation studies, and 3D reconstruction. However, DSMs derived from stereo
satellite imagery often contain voids or missing data due to occlusions,
shadows, and low-signal areas. Previous studies have primarily focused on void
filling for digital elevation models (DEMs) and Digital Terrain Models (DTMs),
employing methods such as inverse distance weighting (IDW), kriging, and spline
interpolation. While effective for simpler terrains, these approaches often
fail to handle the intricate structures present in DSMs. To overcome these
limitations, we introduce Dfilled, a guided DSM void filling method that
leverages optical remote sensing images through edge-enhancing diffusion.
Dfilled repurposes deep anisotropic diffusion models, which were originally
designed for super-resolution tasks, to inpaint DSMs. Additionally, we utilize Perlin
noise to create inpainting masks that mimic natural void patterns in DSMs.
Experimental evaluations demonstrate that Dfilled surpasses traditional
interpolation methods and deep learning approaches in DSM void filling tasks.
Both quantitative and qualitative assessments highlight the method's ability to
manage complex features and deliver accurate, visually coherent results.
|
2501.15442
|
Overview of the Amphion Toolkit (v0.2)
|
cs.SD cs.AI eess.AS
|
Amphion is an open-source toolkit for Audio, Music, and Speech Generation,
designed to lower the entry barrier for junior researchers and engineers in
these fields. It provides a versatile framework that supports a variety of
generation tasks and models. In this report, we introduce Amphion v0.2, the
second major release developed in 2024. This release features a 100K-hour
open-source multilingual dataset, a robust data preparation pipeline, and novel
models for tasks such as text-to-speech, audio coding, and voice conversion.
Furthermore, the report includes multiple tutorials that guide users through
the functionalities and usage of the newly released models.
|
2501.15443
|
InfoBFR: Real-World Blind Face Restoration via Information Bottleneck
|
cs.CV
|
Blind face restoration (BFR) is a highly challenging problem due to the
uncertainty of data degradation patterns. Current BFR methods achieve plausible
restorations but suffer from inherent neural degradations that limit
real-world generalization in complicated scenarios. In this paper, we propose a
plug-and-play framework InfoBFR to tackle neural degradations, e.g., prior
bias, topological distortion, textural distortion, and artifact residues, which
achieves high-generalization face restoration in diverse wild and heterogeneous
scenes. Specifically, based on the results from pre-trained BFR models, InfoBFR
considers information compression using manifold information bottleneck (MIB)
and information compensation with efficient diffusion LoRA to conduct
information optimization. InfoBFR effectively synthesizes high-fidelity faces
without attribute and identity distortions. Comprehensive experimental results
demonstrate the superiority of InfoBFR over state-of-the-art GAN-based and
diffusion-based BFR methods, with around 70ms consumption, 16M trainable
parameters, and nearly 85% BFR-boosting. InfoBFR is poised to become
the first plug-and-play restorer universally employed by diverse BFR models to
conquer neural degradations.
|
2501.15445
|
StochSync: Stochastic Diffusion Synchronization for Image Generation in
Arbitrary Spaces
|
cs.CV cs.AI
|
We propose a zero-shot method for generating images in arbitrary spaces
(e.g., a sphere for 360° panoramas and a mesh surface for texture) using a
pretrained image diffusion model. The zero-shot generation of various visual
content using a pretrained image diffusion model has been explored mainly in
two directions. First, Diffusion Synchronization, which performs reverse diffusion
processes jointly across different projected spaces while synchronizing them in
the target space, generates high-quality outputs when enough conditioning is
provided, but it struggles in its absence. Second, Score Distillation
Sampling, which gradually updates the target-space data through gradient
descent, results in better coherence but often lacks detail. In this paper, we
reveal for the first time the interconnection between these two methods while
highlighting their differences. To this end, we propose StochSync, a novel
approach that combines the strengths of both, enabling effective performance
with weak conditioning. Our experiments demonstrate that StochSync provides the
best performance in 360° panorama generation (where image conditioning is
not given), outperforming previous finetuning-based methods, and also delivers
results comparable to previous methods in 3D mesh texturing (where depth
conditioning is provided).
|
2501.15446
|
Token Democracy: The Architectural Limits of Alignment in
Transformer-Based Language Models
|
cs.CL cs.AI
|
Modern language models paradoxically combine unprecedented capability with
persistent vulnerability in that they can draft poetry yet cannot reliably
refuse harmful requests. We reveal this fragility stems not from inadequate
training, but from a fundamental architectural limitation: transformers process
all tokens as equals. Transformers operate as computational democracies,
granting equal voice to all tokens. This is a design tragically unsuited for
AGI, where we cannot risk adversarial "candidates" hijacking the system.
Through formal analysis, we demonstrate that safety instructions fundamentally
lack privileged status in transformer architectures: they compete with
adversarial inputs in the same computational arena, making robust alignment
through prompting or fine-tuning inherently limited. This "token democracy"
explains why jailbreaks bypass even extensively safety-trained models and why
positional shifts erode prompt effectiveness. Our work systematizes
practitioners' tacit knowledge into an architectural critique, showing current
alignment approaches create mere preferences, not constraints.
|
2501.15448
|
SQ-DM: Accelerating Diffusion Models with Aggressive Quantization and
Temporal Sparsity
|
cs.CV cs.AI cs.AR cs.LG
|
Diffusion models have gained significant popularity in image generation
tasks. However, generating high-quality content remains notably slow because it
requires running model inference over many time steps. To accelerate these
models, we propose to aggressively quantize both weights and activations, while
simultaneously promoting significant activation sparsity. We further observe
that this sparsity pattern varies among different channels and evolves
across time steps. To support this quantization and sparsity scheme, we present
a novel diffusion model accelerator featuring a heterogeneous mixed-precision
dense-sparse architecture, channel-last address mapping, and a time-step-aware
sparsity detector for efficient handling of the sparsity pattern. Our 4-bit
quantization technique demonstrates superior generation quality compared to
existing 4-bit methods. Our custom accelerator achieves 6.91x speed-up and
51.5% energy reduction compared to traditional dense accelerators.
|
2501.15449
|
Breaking the SSL-AL Barrier: A Synergistic Semi-Supervised Active
Learning Framework for 3D Object Detection
|
cs.CV
|
To address the annotation burden in LiDAR-based 3D object detection, active
learning (AL) methods offer a promising solution. However, traditional active
learning approaches solely rely on a small amount of labeled data to train an
initial model for data selection, overlooking the potential of leveraging the
abundance of unlabeled data. Recently, attempts to integrate semi-supervised
learning (SSL) into AL with the goal of leveraging unlabeled data have faced
challenges in effectively resolving the conflict between the two paradigms,
resulting in less satisfactory performance. To tackle this conflict, we propose
a Synergistic Semi-Supervised Active Learning framework, dubbed S-SSAL.
Specifically, from the perspective of SSL, we propose a Collaborative
PseudoScene Pre-training (CPSP) method that effectively learns from unlabeled
data without introducing adverse effects. From the perspective of AL, we design
a Collaborative Active Learning (CAL) method, which complements the uncertainty
and diversity methods by model cascading. This allows us to fully exploit the
potential of the CPSP pre-trained model. Extensive experiments conducted on
KITTI and Waymo demonstrate the effectiveness of our S-SSAL framework. Notably,
on the KITTI dataset, utilizing only 2% labeled data, S-SSAL can achieve
performance comparable to models trained on the full dataset.
|
2501.15450
|
FlatTrack: Eye-tracking with ultra-thin lensless cameras
|
eess.IV cs.CV
|
Existing eye trackers use cameras based on thick compound optical elements,
necessitating the cameras to be placed at a focusing distance from the eyes. This
results in the overall bulk of wearable eye trackers, especially for augmented
and virtual reality (AR/VR) headsets. We overcome this limitation by building a
compact flat eye gaze tracker using mask-based lensless cameras. These cameras,
in combination with a co-designed lightweight deep neural network algorithm, can
be placed in extreme close proximity to the eye, within the eyeglasses frame,
resulting in an ultra-flat and lightweight eye gaze tracker system. We collect a
large dataset of near-eye lensless camera measurements along with their
calibrated gaze directions for training the gaze tracking network. Through real
and simulation experiments, we show that the proposed gaze tracking system
performs on par with conventional lens-based trackers while maintaining a
significantly flatter and more compact form-factor. Moreover, our gaze
regressor boasts real-time (>125 fps) performance for gaze tracking.
|
2501.15451
|
STATE ToxiCN: A Benchmark for Span-level Target-Aware Toxicity
Extraction in Chinese Hate Speech Detection
|
cs.CL
|
The proliferation of hate speech has caused significant harm to society. The
intensity and directionality of hate are closely tied to the target and
argument it is associated with. However, research on hate speech detection in
Chinese has lagged behind, and existing datasets lack span-level fine-grained
annotations. Furthermore, the lack of research on Chinese hateful slang poses a
significant challenge. In this paper, we provide a solution for fine-grained
detection of Chinese hate speech. First, we construct a dataset containing
Target-Argument-Hateful-Group quadruples (STATE ToxiCN), which is the first
span-level Chinese hate speech dataset. Second, we evaluate the span-level
hate speech detection performance of existing models using STATE ToxiCN.
Finally, we conduct the first study on Chinese hateful slang and evaluate the
ability of LLMs to detect such expressions. Our work contributes valuable
resources and insights to advance span-level hate speech detection in Chinese.
|
2501.15452
|
Identifying Critical Tokens for Accurate Predictions in
Transformer-based Medical Imaging Models
|
cs.CV cs.AI
|
With the advancements in self-supervised learning (SSL), transformer-based
computer vision models have recently demonstrated superior results compared to
convolutional neural networks (CNNs) and are poised to dominate the field of
artificial intelligence (AI)-based medical imaging in the upcoming years.
Nevertheless, similar to CNNs, unveiling the decision-making process of
transformer-based models remains a challenge. In this work, we take a step
towards demystifying the decision-making process of transformer-based medical
imaging models and propose Token Insight, a novel method that identifies the
critical tokens that contribute to the prediction made by the model. Our method
relies on the principled approach of token discarding native to
transformer-based models, requires no additional module, and can be applied to
any transformer model. Using the proposed approach, we quantify the importance
of each token based on its contribution to the prediction and enable a more
nuanced understanding of the model's decisions. Our experimental results,
showcased on the problem of colonic polyp identification using both supervised
and self-supervised pretrained vision transformers, indicate that
Token Insight contributes to a more transparent and interpretable
transformer-based medical imaging model, fostering trust and facilitating
broader adoption in clinical settings.
|
2501.15453
|
Data-adaptive Safety Rules for Training Reward Models
|
cs.CL
|
Reinforcement Learning from Human Feedback (RLHF) is commonly employed to
tailor models to human preferences, especially to improve the safety of outputs
from large language models (LLMs). Traditionally, this method depends on
selecting preferred responses from pairs. However, due to the variability in
human opinions and the challenges in directly comparing two responses, there is
an increasing trend towards fine-grained annotation approaches that evaluate
responses using multiple targeted metrics or rules. The challenge lies in
efficiently choosing and applying these rules to handle the diverse range of
preference data. In this paper, we propose a dynamic method that adaptively
selects the most important rules for each response pair. We introduce a
mathematical framework that utilizes the maximum discrepancy across paired
responses and demonstrate theoretically that this approach maximizes the mutual
information between the rule-based annotations and the underlying true
preferences. We then train an 8B reward model using this adaptively labeled
preference dataset and assess its efficacy using RewardBench. As of January 25,
2025, our model achieved the highest safety performance on the leaderboard,
surpassing various larger models.
|
2501.15454
|
On the Discrimination and Consistency for Exemplar-Free Class
Incremental Learning
|
cs.CV
|
Exemplar-free class incremental learning (EF-CIL) is a nontrivial task that
requires continuously enriching model capability with new classes while
maintaining previously learned knowledge without storing and replaying any old
class exemplars. An emerging theory-guided framework for CIL trains
task-specific models for a shared network, shifting the pressure of forgetting
to task-id prediction. In EF-CIL, task-id prediction is more challenging due to
the lack of inter-task interaction (e.g., replays of exemplars). To address
this issue, we conduct a theoretical analysis of the importance and feasibility
of preserving a discriminative and consistent feature space, upon which we
propose a novel method termed DCNet. Concretely, it progressively maps class
representations into a hyperspherical space, in which different classes are
orthogonally distributed to achieve ample inter-class separation. Meanwhile, it
also introduces compensatory training to adaptively adjust supervision
intensity, thereby aligning the degree of intra-class aggregation. Extensive
experiments and theoretical analysis verified the superiority of the proposed
DCNet.
|
2501.15455
|
CD-Lamba: Boosting Remote Sensing Change Detection via a Cross-Temporal
Locally Adaptive State Space Model
|
cs.CV
|
Mamba, with its advantages of global perception and linear complexity, has
been widely applied to identify changes of the target regions within the remote
sensing (RS) images captured under complex scenarios and varied conditions.
However, existing remote sensing change detection (RSCD) approaches based on
Mamba frequently struggle to effectively perceive the inherent locality of
change regions as they directly flatten and scan RS images (i.e., the features of
the same region of changes are not distributed continuously within the sequence
but are mixed with features from other regions throughout the sequence). In
this paper, we propose a novel locally adaptive SSM-based approach, termed
CD-Lamba, which effectively enhances the locality of change detection while
maintaining global perception. Specifically, our CD-Lamba includes a Locally
Adaptive State-Space Scan (LASS) strategy for locality enhancement, a
Cross-Temporal State-Space Scan (CTSS) strategy for bi-temporal feature fusion,
and a Window Shifting and Perception (WSP) mechanism to enhance interactions
across segmented windows. These strategies are integrated into a multi-scale
Cross-Temporal Locally Adaptive State-Space Scan (CT-LASS) module to
effectively highlight changes and refine the generation of change
representations. CD-Lamba significantly enhances local-global spatio-temporal
interactions in bi-temporal images, offering improved performance in RSCD
tasks. Extensive experimental results show that CD-Lamba achieves
state-of-the-art performance on four benchmark datasets with a satisfactory
efficiency-accuracy trade-off. Our code is publicly available at
https://github.com/xwmaxwma/rschange.
|
2501.15458
|
Amortized Safe Active Learning for Real-Time Decision-Making: Pretrained
Neural Policies from Simulated Nonparametric Functions
|
cs.LG
|
Active Learning (AL) is a sequential learning approach aiming at selecting
the most informative data for model training. In many systems, safety
constraints appear during data evaluation, requiring the development of safe AL
methods. Key challenges of AL are the repeated model training and acquisition
optimization required for data selection, which become particularly restrictive
under safety constraints. This repeated effort often creates a bottleneck,
especially in physical systems requiring real-time decision-making. In this
paper, we propose a novel amortized safe AL framework. By leveraging a
pretrained neural network policy, our method eliminates the need for repeated
model training and acquisition optimization, achieving substantial speed
improvements while maintaining competitive learning outcomes and safety
awareness. The policy is trained entirely on synthetic data utilizing a novel
safe AL objective. The resulting policy is highly versatile and adapts to a
wide range of systems, as we demonstrate in our experiments. Furthermore, our
framework is modular and we empirically show that we also achieve superior
performance for unconstrained time-sensitive AL tasks if we omit the safety
requirement.
|
2501.15461
|
Mamba-Based Graph Convolutional Networks: Tackling Over-smoothing with
Selective State Space
|
cs.LG
|
Graph Neural Networks (GNNs) have shown great success in various graph-based
learning tasks. However, they often face the issue of over-smoothing as the
model depth increases, which causes all node representations to converge to a
single value and become indistinguishable. This issue stems from the inherent
limitations of GNNs, which struggle to distinguish the importance of
information from different neighborhoods. In this paper, we introduce MbaGCN, a
novel graph convolutional architecture that draws inspiration from the Mamba
paradigm-originally designed for sequence modeling. MbaGCN presents a new
backbone for GNNs, consisting of three key components: the Message Aggregation
Layer, the Selective State Space Transition Layer, and the Node State
Prediction Layer. These components work in tandem to adaptively aggregate
neighborhood information, providing greater flexibility and scalability for
deep GNN models. While MbaGCN may not consistently outperform all existing
methods on each dataset, it provides a foundational framework that demonstrates
the effective integration of the Mamba paradigm into graph representation
learning. Through extensive experiments on benchmark datasets, we demonstrate
that MbaGCN paves the way for future advancements in graph neural network
research.
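The over-smoothing phenomenon this abstract refers to can be illustrated with a toy example (this is not the paper's method; the graph, features, and iteration count below are invented for illustration): repeated mean aggregation over a small graph drives all node representations toward a common value.

```python
# Illustrative sketch of over-smoothing: repeatedly averaging each node's
# feature with its neighbours' makes all features nearly indistinguishable.

graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a 4-node path graph
feats = {0: 0.0, 1: 1.0, 2: 4.0, 3: 9.0}

def propagate(feats):
    # One message-aggregation step: mean over the node itself and its neighbours.
    return {v: (feats[v] + sum(feats[u] for u in nbrs)) / (1 + len(nbrs))
            for v, nbrs in graph.items()}

def spread(feats):
    vals = list(feats.values())
    return max(vals) - min(vals)

before = spread(feats)
for _ in range(50):
    feats = propagate(feats)
after = spread(feats)

# After 50 aggregation steps, the gap between node features has collapsed.
assert after < before / 1000
```

Selective state-space updates, as in the Mamba paradigm the abstract invokes, aim to let deep models avoid exactly this collapse by gating which neighbourhood information is retained.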
|
2501.15463
|
Mind the Value-Action Gap: Do LLMs Act in Alignment with Their Values?
|
cs.HC cs.AI cs.CL
|
Existing research primarily evaluates the values of LLMs by examining their
stated inclinations towards specific values. However, the "Value-Action Gap," a
phenomenon rooted in environmental and social psychology, reveals discrepancies
between individuals' stated values and their actions in real-world contexts. To
what extent do LLMs exhibit a similar gap between their stated values and their
actions informed by those values? This study introduces ValueActionLens, an
evaluation framework to assess the alignment between LLMs' stated values and
their value-informed actions. The framework encompasses the generation of a
dataset comprising 14.8k value-informed actions across twelve cultures and
eleven social topics, and two tasks to evaluate how well LLMs' stated value
inclinations and value-informed actions align across three different alignment
measures. Extensive experiments reveal that the alignment between LLMs' stated
values and actions is sub-optimal, varying significantly across scenarios and
models. Analysis of misaligned results identifies potential harms from certain
value-action gaps. To predict the value-action gaps, we also uncover that
leveraging reasoned explanations improves performance. These findings
underscore the risks of relying solely on the LLMs' stated values to predict
their behaviors and emphasize the importance of context-aware evaluations of
LLM values and value-action gaps.
|
2501.15464
|
TractoGPT: A GPT architecture for White Matter Segmentation
|
cs.CV cs.AI
|
White matter bundle segmentation is crucial for studying brain structural
connectivity, neurosurgical planning, and neurological disorders. White matter
segmentation remains challenging due to structural similarity of streamlines,
subject variability, symmetry between the two hemispheres, etc. To address
these challenges, we propose TractoGPT, a GPT-based architecture trained
separately on streamline, cluster, and fusion data representations. TractoGPT
is a fully automatic method that generalizes across datasets and retains shape
information of the white matter bundles. Experiments also show that TractoGPT
outperforms state-of-the-art methods on average DICE, Overlap, and Overreach
scores. We use the TractoInferno and 105HCP datasets and validate
generalization across datasets.
|
2501.15465
|
Geometry of symplectic group and optimal EAQECC codes
|
quant-ph cs.IT math.IT
|
A new type of link between geometry of symplectic group and
entanglement-assisted (EA) quantum error-correcting codes (EAQECCs) is
presented. Relations of symplectic subspaces and quaternary additive codes
concerning parameters of EAQECCs are described. Thus, parameters of EA
stabilizer codes are revealed in the nomenclature of additive codes. Our
techniques enable us to solve some open problems about optimal EAQECCs and
entanglement-assisted quantum maximum distance separable (EAQMDS) codes, and
are also useful for designing encoding and decoding quantum circuits of EA
stabilizer codes.
|
2501.15469
|
CISOL: An Open and Extensible Dataset for Table Structure Recognition in
the Construction Industry
|
cs.CV
|
Reproducibility and replicability are critical pillars of empirical research,
particularly in machine learning, where they depend not only on the
availability of models, but also on the datasets used to train and evaluate
those models. In this paper, we introduce the Construction Industry Steel
Ordering List (CISOL) dataset, which was developed with a focus on transparency
to ensure reproducibility, replicability, and extensibility. CISOL provides a
valuable new research resource and highlights the importance of having diverse
datasets, even in niche application domains such as table extraction in civil
engineering.
CISOL is unique in that it contains real-world civil engineering documents
from industry, making it a distinctive contribution to the field. The dataset
contains more than 120,000 annotated instances in over 800 document images,
positioning it as a medium-sized dataset that provides a robust foundation for
Table Structure Recognition (TSR) and Table Detection (TD) tasks.
Benchmarking results show that CISOL achieves 67.22 mAP@0.5:0.95:0.05 using
the YOLOv8 model, outperforming the TSR-specific TATR model. This highlights
the effectiveness of CISOL as a benchmark for advancing TSR, especially in
specialized domains.
|
2501.15470
|
Unveiling the Potential of Multimodal Retrieval Augmented Generation
with Planning
|
cs.IR cs.MA
|
Multimodal Retrieval Augmented Generation (MRAG) systems, while promising for
enhancing Multimodal Large Language Models (MLLMs), often rely on rigid,
single-step retrieval methods. This limitation hinders their ability to
effectively address real-world scenarios that demand adaptive information
acquisition and query refinement. To overcome this, we introduce the novel task
of Multimodal Retrieval Augmented Generation Planning (MRAG Planning), focusing
on optimizing MLLM performance while minimizing computational overhead. We
present CogPlanner, a versatile framework inspired by human cognitive
processes. CogPlanner iteratively refines queries and selects retrieval
strategies, enabling both parallel and sequential modeling approaches. To
rigorously evaluate MRAG Planning, we introduce CogBench, a new benchmark
specifically designed for this task. CogBench facilitates the integration of
lightweight CogPlanner with resource-efficient MLLMs. Our experimental findings
demonstrate that CogPlanner surpasses existing MRAG baselines, achieving
significant improvements in both accuracy and efficiency with minimal
computational overhead.
|
2501.15471
|
Dynamic Regressor Extension and Mixing-based Re-design of Adaptive
Observer for Affine Systems
|
eess.SY cs.SY
|
The dynamic regressor extension and mixing procedure is employed to redesign
a conventional adaptive observer algorithm for affine systems. A reduced-order
observer is designed without the construction of the state transition matrix.
The dynamics of the regressor are redesigned to incorporate feedback from its
extension, transforming the regressor dynamics into a perturbed damped
nonlinear oscillator form. This introduces some flexibility in reducing the
degradation of parameter convergence due to the lack of the transition matrix
and in enhancing the excitation property of the extension matrix.
|
2501.15478
|
LoRAGuard: An Effective Black-box Watermarking Approach for LoRAs
|
cs.CR cs.LG
|
LoRA (Low-Rank Adaptation) has achieved remarkable success in the
parameter-efficient fine-tuning of large models. The trained LoRA matrix can be
integrated with the base model through addition or negation operation to
improve performance on downstream tasks. However, the unauthorized use of LoRAs
to generate harmful content highlights the need for effective mechanisms to
trace their usage. A natural solution is to embed watermarks into LoRAs to
detect unauthorized misuse. However, existing methods struggle when multiple
LoRAs are combined or negation operation is applied, as these can significantly
degrade watermark performance. In this paper, we introduce LoRAGuard, a novel
black-box watermarking technique for detecting unauthorized misuse of LoRAs. To
support both addition and negation operations, we propose the Yin-Yang
watermark technique, where the Yin watermark is verified during negation
operation and the Yang watermark during addition operation. Additionally, we
propose a shadow-model-based watermark training approach that significantly
improves effectiveness in scenarios involving multiple integrated LoRAs.
Extensive experiments on both language and diffusion models show that LoRAGuard
achieves nearly 100% watermark verification success and demonstrates strong
effectiveness.
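For context on the addition and negation operations this abstract builds on, here is a minimal, hypothetical sketch (plain Python lists stand in for weight tensors; the shapes, values, and the scaling factor `alpha` are all invented for the example):

```python
# Hypothetical sketch of merging a LoRA into base weights by addition,
# and removing it again by negation.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def add(X, Y, sign=1.0):
    return [[x + sign * y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

# Base weight W (2x3) and low-rank factors B (2x1), A (1x3), i.e. rank r = 1.
W = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
B = [[0.5], [1.0]]
A = [[2.0, 0.0, -2.0]]
alpha = 0.1  # scaling applied to the low-rank update

delta = [[alpha * v for v in row] for row in matmul(B, A)]
W_add = add(W, delta)                       # "addition": apply the adaptation
W_restored = add(W_add, delta, sign=-1.0)   # "negation": remove it again

# Negating a previously added LoRA exactly recovers the base weights,
# which is why a watermark must survive both operations.
assert all(abs(a - b) < 1e-12
           for ra, rb in zip(W_restored, W) for a, b in zip(ra, rb))
```

The Yin-Yang scheme described above targets precisely this pair of operations: one watermark is designed to be verifiable after addition, the other after negation.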
|
2501.15481
|
Query-based versus resource-based cache strategies in tag-based browsing
systems
|
cs.CL
|
Tag-based browsing is a popular interaction model for navigating digital
libraries. According to this model, users select descriptive tags to filter
resources in the collections. Typical implementations of the model are based on
inverted indexes. However, these implementations can require a considerable
amount of set operations to update the browsing state. To palliate this
inconven-ience, it is possible to adopt suitable cache strategies. In this
paper we describe and compare two of these strategies: (i) a query-based
strategy, according to which previously computed browsing states are indexed by
sets of selected tags; and (ii) a resource-based strategy, according to which
browsing states are in-dexed by sets of filtered resources. Our comparison
focused on runtime perfor-mance, and was carried out empirically, using a
real-world web-based collec-tion in the field of digital humanities. The
results obtained show that the re-source-based strategy clearly outperforms the
query-based one.
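As a rough illustration of the query-based strategy only (the tag vocabulary, resources, and cache layout below are invented for the example; the resource-based variant would instead index states by the filtered resource set):

```python
# Hypothetical sketch of tag-based browsing over an inverted index, with a
# query-based cache keyed by the set of currently selected tags.

from functools import reduce

inverted_index = {            # tag -> set of resource ids
    "maps": {1, 2, 3},
    "manuscripts": {2, 3, 4},
    "17th-century": {3, 4, 5},
}

query_cache = {}              # frozenset of tags -> computed browsing state

def browse(selected_tags):
    key = frozenset(selected_tags)
    if key in query_cache:    # cache hit: no set intersections needed
        return query_cache[key]
    state = reduce(set.intersection,
                   (inverted_index[t] for t in key))
    query_cache[key] = state
    return state

assert browse({"maps", "manuscripts"}) == {2, 3}   # computed via intersection
assert browse({"maps", "manuscripts"}) == {2, 3}   # second call served from cache
```

The cost the abstract refers to is visible in `reduce(set.intersection, ...)`: without a cache, every state update repeats these set operations from scratch.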
|
2501.15484
|
PhoTorch: A robust and generalized biochemical photosynthesis model
fitting package based on PyTorch
|
cs.CE q-bio.QM
|
Advancements in artificial intelligence (AI) have greatly benefited plant
phenotyping and predictive modeling. However, unrealized opportunities exist in
leveraging AI advancements in model parameter optimization for parameter
fitting in complex biophysical models. This work developed novel software,
PhoTorch, for fitting parameters of the Farquhar, von Caemmerer, and Berry
(FvCB) biochemical photosynthesis model based on the parameter optimization
components of the popular AI framework PyTorch. The primary novelty of the
software lies in its computational efficiency, robustness of parameter
estimation, and flexibility in handling different types of response curves and
sub-model functional forms. PhoTorch can fit both steady-state and
non-steady-state gas exchange data with high efficiency and accuracy. Its
flexibility allows for optional fitting of temperature and light response
parameters, and can simultaneously fit light response curves and standard A/Ci
curves. These features are not available within presently available A/Ci curve
fitting packages. Results illustrated the robustness and efficiency of PhoTorch
in fitting A/Ci curves with high variability and some level of artifacts and
noise. PhoTorch is more than four times faster than benchmark software, which
may be relevant when processing many non-steady-state A/Ci curves with hundreds
of data points per curve. PhoTorch provides researchers from various fields
with a reliable and efficient tool for analyzing photosynthetic data. The
Python package is openly accessible from the repository:
https://github.com/GEMINI-Breeding/photorch.
|
2501.15485
|
Differentiable Low-computation Global Correlation Loss for Monotonicity
Evaluation in Quality Assessment
|
eess.IV cs.CV
|
In this paper, we propose a global monotonicity consistency training strategy
for quality assessment, which includes a differentiable, low-computation
monotonicity evaluation loss function and a global perception training
mechanism. Specifically, unlike conventional ranking loss and linear
programming approaches that indirectly implement the Spearman rank-order
correlation coefficient (SROCC) function, our method directly converts SROCC
into a loss function by making the sorting operation within SROCC
differentiable and functional. Furthermore, to mitigate the discrepancies
between batch optimization during network training and global evaluation of
SROCC, we introduce a memory bank mechanism. This mechanism stores
gradient-free predicted results from previous batches and uses them in the
current batch's training to prevent abrupt gradient changes. We evaluate the
performance of the proposed method on both image and point cloud quality
assessment tasks, demonstrating performance gains in both cases.
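One common way to make the sorting inside SROCC differentiable (not necessarily the authors' exact construction) is a pairwise-sigmoid soft rank. The sketch below uses plain Python for clarity; a real training loop would evaluate the same expressions on autograd tensors so gradients flow through the soft ranks. All names and the temperature `tau` are illustrative.

```python
# Hypothetical sketch: SROCC as a loss via smooth "soft ranks".
import math

def sigmoid(z):
    # Numerically stable logistic function.
    return 1.0 / (1.0 + math.exp(-z)) if z >= 0 else math.exp(z) / (1.0 + math.exp(z))

def soft_rank(xs, tau=0.01):
    # Smooth approximation of 1-based ranks: each pairwise comparison is a sigmoid;
    # as tau -> 0 this approaches the hard rank for distinct values.
    return [1.0 + sum(sigmoid((xi - xj) / tau)
                      for j, xj in enumerate(xs) if j != i)
            for i, xi in enumerate(xs)]

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def soft_srocc_loss(preds, targets, tau=0.01):
    # SROCC is Pearson correlation of ranks; soft ranks make it differentiable.
    return 1.0 - pearson(soft_rank(preds, tau), soft_rank(targets, tau))

preds, targets = [0.1, 0.4, 0.2, 0.9], [1.0, 3.0, 2.0, 4.0]
assert soft_srocc_loss(preds, targets) < 1e-6          # same ordering: loss near 0
assert soft_srocc_loss(preds[::-1], targets) > 1.99    # reversed ordering: loss near 2
```

The memory-bank mechanism described above would then concatenate gradient-free predictions from earlier batches with the current batch before computing this loss, so the rank correlation is evaluated over a more global sample.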
|
2501.15486
|
FedAlign: Federated Domain Generalization with Cross-Client Feature
Alignment
|
cs.LG cs.AI cs.CV cs.DC
|
Federated Learning (FL) offers a decentralized paradigm for collaborative
model training without direct data sharing, yet it poses unique challenges for
Domain Generalization (DG), including strict privacy constraints, non-i.i.d.
local data, and limited domain diversity. We introduce FedAlign, a lightweight,
privacy-preserving framework designed to enhance DG in federated settings by
simultaneously increasing feature diversity and promoting domain invariance.
First, a cross-client feature extension module broadens local domain
representations through domain-invariant feature perturbation and selective
cross-client feature transfer, allowing each client to safely access a richer
domain space. Second, a dual-stage alignment module refines global feature
learning by aligning both feature embeddings and predictions across clients,
thereby distilling robust, domain-invariant features. By integrating these
modules, our method achieves superior generalization to unseen domains while
maintaining data privacy and operating with minimal computational and
communication overhead.
|
2501.15487
|
Multilevel Browsing of Folksonomy-Based Digital Collections
|
cs.CL
|
This paper describes how to extend the usual one-level tag selection
navigation paradigm in folksonomy-based digital collections to a multilevel
browsing one, according to which it is possible to incrementally narrow down
the set of selected objects in a collection by sequentially adding more and
more filtering tags. For this purpose, we present a browsing strategy based on
finite automata. Also, we provide some experimental results concerning the
application of the approach in Clavy, a system for managing digital collections
with reconfigurable structures in digital humanities and educational settings.
|
2501.15488
|
Qualitative Mechanism Independence
|
cs.IT math.IT
|
We define what it means for a joint probability distribution to be compatible
with a set of independent causal mechanisms, at a qualitative level -- or, more
precisely, with a directed hypergraph ${\mathcal{A}}$, which is the qualitative
structure of a probabilistic dependency graph (PDG). When ${\mathcal{A}}$
represents a qualitative Bayesian network, QIM-compatibility with
${\mathcal{A}}$ reduces to satisfying the appropriate conditional
independencies. But giving semantics to hypergraphs using QIM-compatibility
lets us do much more. For one thing, we can capture functional dependencies.
For another, we can capture important aspects of causality using compatibility:
we can use compatibility to understand cyclic causal graphs, and to demonstrate
structural compatibility, we must essentially produce a causal model. Finally,
QIM-compatibility has deep connections to information theory. Applying our
notion to cyclic structures helps to clarify a longstanding conceptual issue in
information theory.
|
2501.15489
|
AI in Oncology: Transforming Cancer Detection through Machine Learning
and Deep Learning Applications
|
cs.AI eess.IV q-bio.QM
|
Artificial intelligence (AI) has the potential to revolutionize the field of
oncology by enhancing the precision of cancer diagnosis, optimizing treatment
strategies, and personalizing therapies for a variety of cancers. This review
examines the limitations of conventional diagnostic techniques and explores the
transformative role of AI in diagnosing and treating cancers such as lung,
breast, colorectal, liver, stomach, esophageal, cervical, thyroid, prostate,
and skin cancers. The primary objective of this paper is to highlight the
significant advancements that AI algorithms have brought to oncology within the
medical industry. By enabling early cancer detection, improving diagnostic
accuracy, and facilitating targeted treatment delivery, AI contributes to
substantial improvements in patient outcomes. The integration of AI in medical
imaging, genomic analysis, and pathology enhances diagnostic precision and
introduces a novel, less invasive approach to cancer screening. This not only
boosts the effectiveness of medical facilities but also reduces operational
costs. The study delves into the application of AI in radiomics for detailed
cancer characterization, predictive analytics for identifying associated risks,
and the development of algorithm-driven robots for immediate diagnosis.
Furthermore, it investigates the impact of AI on addressing healthcare
challenges, particularly in underserved and remote regions. The overarching
goal of this platform is to support the development of expert recommendations
and to provide universal, efficient diagnostic procedures. By reviewing
existing research and clinical studies, this paper underscores the pivotal role
of AI in improving the overall cancer care system. It emphasizes how AI-enabled
systems can enhance clinical decision-making and expand treatment options,
thereby underscoring the importance of AI in advancing precision oncology.
|
2501.15492
|
Color Flow Imaging Microscopy Improves Identification of Stress Sources
of Protein Aggregates in Biopharmaceuticals
|
cs.CV cs.AI
|
Protein-based therapeutics play a pivotal role in modern medicine targeting
various diseases. Despite their therapeutic importance, these products can
aggregate and form subvisible particles (SvPs), which can compromise their
efficacy and trigger immunological responses, emphasizing the critical need for
robust monitoring techniques. Flow Imaging Microscopy (FIM) has been a
significant advancement in detecting SvPs, evolving from monochrome to more
recently incorporating color imaging. Complementing SvP images obtained via
FIM, deep learning techniques have recently been employed successfully for
stress source identification of monochrome SvPs. In this study, we explore the
potential of color FIM to enhance the characterization of stress sources in
SvPs. To achieve this, we curate a new dataset comprising 16,000 SvPs from
eight commercial monoclonal antibodies subjected to heat and mechanical stress.
Using both supervised and self-supervised convolutional neural networks, as
well as vision transformers in large-scale experiments, we demonstrate that
deep learning with color FIM images consistently outperforms monochrome images,
highlighting the potential of color FIM for stress source classification
compared to its monochrome counterpart.
|
2501.15493
|
RLER-TTE: An Efficient and Effective Framework for En Route Travel Time
Estimation with Reinforcement Learning
|
cs.LG
|
En Route Travel Time Estimation (ER-TTE) aims to learn driving patterns from
traveled routes to achieve rapid and accurate real-time predictions. However,
existing methods ignore the complexity and dynamism of real-world traffic
systems, resulting in significant gaps in efficiency and accuracy in real-time
scenarios. Addressing this issue is a critical yet challenging task. This paper
proposes a novel framework that redefines the implementation path of ER-TTE to
achieve highly efficient and effective predictions. Firstly, we introduce a
novel pipeline consisting of a Decision Maker and a Predictor to rectify the
inefficient prediction strategies of current methods. The Decision Maker
performs efficient real-time decisions to determine whether the high-complexity
prediction model in the Predictor needs to be invoked, and the Predictor
recalculates the travel time or infers from historical prediction results based
on these decisions. Next, to tackle the dynamic and uncertain real-time
scenarios, we model the online decision-making problem as a Markov decision
process and design an intelligent agent based on reinforcement learning for
autonomous decision-making. Moreover, to fully exploit the spatio-temporal
correlation between online data and offline data, we meticulously design
feature representation and encoding techniques based on the attention
mechanism. Finally, to improve the flawed training and evaluation strategies of
existing methods, we propose an end-to-end training and evaluation approach,
incorporating curriculum learning strategies to manage spatio-temporal data for
more advanced training algorithms. Extensive evaluations on three real-world
datasets confirm that our method significantly outperforms state-of-the-art
solutions in both accuracy and efficiency.
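The Decision Maker / Predictor split can be caricatured as a cheap gate in front of an expensive model. Everything below is invented for illustration (the paper's Decision Maker is a learned reinforcement-learning agent, not the hand-set drift threshold used here):

```python
# Hypothetical sketch of the two-stage ER-TTE pipeline: a lightweight decision
# rule chooses between invoking a costly predictor and reusing the last result.

def heavy_predict(remaining_km, speed_kmh):
    # Stand-in for the high-complexity prediction model (here: naive physics).
    return remaining_km / max(speed_kmh, 1e-6) * 60.0  # ETA in minutes

def should_recompute(prev_state, state, threshold=0.2):
    # Recompute only when observed speed drifted enough since the last call.
    if prev_state is None:
        return True
    return abs(state["speed_kmh"] - prev_state["speed_kmh"]) / prev_state["speed_kmh"] > threshold

last_state, last_eta, calls = None, None, 0
route = [
    {"remaining_km": 45.0, "speed_kmh": 60.0},  # first step: must predict
    {"remaining_km": 40.0, "speed_kmh": 62.0},  # small drift: reuse last ETA
    {"remaining_km": 35.0, "speed_kmh": 30.0},  # congestion: recompute
]
for state in route:
    if should_recompute(last_state, state):
        last_eta = heavy_predict(state["remaining_km"], state["speed_kmh"])
        last_state, calls = state, calls + 1

# The heavy model ran on 2 of 3 steps; the middle step was inferred from history.
assert calls == 2
```

The efficiency argument in the abstract rests on exactly this asymmetry: the gate is cheap to evaluate at every step, while the full model runs only when the observed state has changed enough to matter.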
|
2501.15495
|
Expert-Free Online Transfer Learning in Multi-Agent Reinforcement
Learning
|
cs.AI cs.LG cs.MA
|
Reinforcement Learning (RL) enables an intelligent agent to optimise its
performance in a task by continuously taking actions from an observed state and
receiving feedback from the environment in the form of rewards. RL typically
uses tables or linear approximators to map state-action tuples to values that
maximise the reward. Combining RL with deep neural networks (DRL) significantly increases
its scalability and enables it to address more complex problems than before.
However, DRL also inherits downsides from both RL and deep learning. Although
DRL improves generalisation across similar state-action pairs when compared to
simpler RL policy representations like tabular methods, it still requires the
agent to adequately explore the state-action space. Additionally, deep methods
require more training data, with the volume of data escalating with the
complexity and size of the neural network. As a result, deep RL requires a long
time to collect enough agent-environment samples and to successfully learn the
underlying policy. Furthermore, often even a slight alteration to the task
invalidates any previously acquired knowledge. To address these shortcomings,
Transfer Learning (TL) has been introduced, which enables the use of external
knowledge from other tasks or agents to enhance a learning process. The goal of
TL is to reduce the learning complexity for an agent dealing with an unfamiliar
task by simplifying the exploration process. This is achieved by lowering the
amount of new information required by its learning model, resulting in a
reduced overall convergence time...
|
2501.15499
|
One Model to Forecast Them All and in Entity Distributions Bind Them
|
cs.LG cs.SY eess.SY
|
Probabilistic forecasting in power systems often involves multi-entity
datasets like households, feeders, and wind turbines, where generating reliable
entity-specific forecasts presents significant challenges. Traditional
approaches require training individual models for each entity, making them
inefficient and hard to scale. This study addresses this problem using
GUIDE-VAE, a conditional variational autoencoder that allows entity-specific
probabilistic forecasting using a single model. GUIDE-VAE provides flexible
outputs, ranging from interpretable point estimates to full probability
distributions, thanks to its advanced covariance composition structure. These
distributions capture uncertainty and temporal dependencies, offering richer
insights than traditional methods. To evaluate our GUIDE-VAE-based forecaster,
we use household electricity consumption data as a case study due to its
multi-entity and highly stochastic nature. Experimental results demonstrate
that GUIDE-VAE outperforms conventional quantile regression techniques across
key metrics while ensuring scalability and versatility. These features make
GUIDE-VAE a powerful and generalizable tool for probabilistic forecasting
tasks, with potential applications beyond household electricity consumption.
|
2501.15503
|
Domain Adaptation from Generated Multi-Weather Images for Unsupervised
Maritime Object Classification
|
cs.CV
|
The classification and recognition of maritime objects are crucial for
enhancing maritime safety, monitoring, and intelligent sea environment
prediction. However, existing unsupervised methods for maritime object
classification often struggle with the long-tail data distributions in both
object categories and weather conditions. In this paper, we construct a dataset
named AIMO produced by large-scale generative models with diverse weather
conditions and balanced object categories, and collect a dataset named RMO with
real-world images where the long-tail issue exists. We propose a novel domain
adaptation approach that leverages AIMO (source domain) to address the problem
of limited labeled data, unbalanced distribution and domain shift in RMO
(target domain), and enhance the generalization of source features with
Vision-Language Models such as CLIP. Experimental results show that the
proposed method significantly improves the classification accuracy,
particularly for samples within rare object categories and weather conditions.
Datasets and codes will be publicly available at
https://github.com/honoria0204/AIMO.
|
2501.15505
|
Unveiling the Potential of iMarkers: Invisible Fiducial Markers for
Advanced Robotics
|
cs.RO cs.CV
|
Fiducial markers are widely used in various robotics tasks, facilitating
enhanced navigation, object recognition, and scene understanding. Despite their
advantages for robots and Augmented Reality (AR) applications, they often
disrupt the visual aesthetics of environments because they are visible to
humans, making them unsuitable for non-intrusive use cases. To address this
gap, this paper presents "iMarkers": innovative, unobtrusive fiducial markers
detectable exclusively by robots equipped with specialized sensors. These
markers offer high flexibility in production, allowing customization of their
visibility range and encoding algorithms to suit various demands. The paper
also introduces the hardware designs and software algorithms developed for
detecting iMarkers, highlighting their adaptability and robustness in the
detection and recognition stages. Various evaluations have demonstrated the
effectiveness of iMarkers compared to conventional (printed) and blended
fiducial markers and confirmed their applicability in diverse robotics
scenarios.
|
2501.15509
|
FIT-Print: Towards False-claim-resistant Model Ownership Verification
via Targeted Fingerprint
|
cs.CR cs.AI cs.LG
|
Model fingerprinting is a widely adopted approach to safeguard the
intellectual property rights of open-source models by preventing their
unauthorized reuse. It is promising and convenient since it does not
necessitate modifying the protected model. In this paper, we revisit existing
fingerprinting methods and reveal that they are vulnerable to false claim
attacks where adversaries falsely assert ownership of any third-party model. We
demonstrate that this vulnerability mostly stems from their untargeted nature,
where they generally compare the outputs of given samples on different models
instead of the similarities to specific references. Motivated by these
findings, we propose a targeted fingerprinting paradigm (i.e., FIT-Print) to
counteract false claim attacks. Specifically, FIT-Print transforms the
fingerprint into a targeted signature via optimization. Building on the
principles of FIT-Print, we develop bit-wise and list-wise black-box model
fingerprinting methods, i.e., FIT-ModelDiff and FIT-LIME, which exploit the
distance between model outputs and the feature attribution of specific samples
as the fingerprint, respectively. Extensive experiments on benchmark models and
datasets verify the effectiveness, conferrability, and resistance to false
claim attacks of our FIT-Print.
|
2501.15510
|
Universal Image Restoration Pre-training via Degradation Classification
|
cs.CV
|
This paper proposes the Degradation Classification Pre-Training (DCPT), which
enables models to learn how to classify the degradation type of input images
for universal image restoration pre-training. Unlike the existing
self-supervised pre-training methods, DCPT utilizes the degradation type of the
input image as an extremely weak supervision signal, which can be effortlessly
obtained and is even intrinsic to all image restoration datasets. DCPT comprises two
primary stages. Initially, image features are extracted from the encoder.
Subsequently, a lightweight decoder, such as ResNet18, is leveraged to classify
the degradation type of the input image solely based on the features extracted
in the first stage, without utilizing the input image. The encoder
pre-trained with this straightforward yet potent DCPT is then used to address
universal image restoration and achieves outstanding performance. Following
DCPT, both convolutional neural networks (CNNs) and transformers demonstrate
performance improvements, with gains of up to 2.55 dB in the 10D all-in-one
restoration task and 6.53 dB in the mixed degradation scenarios. Moreover,
previous self-supervised pretraining methods, such as masked image modeling,
discard the decoder after pre-training, while our DCPT utilizes the pre-trained
parameters more effectively. This superiority arises from the degradation
classifier acquired during DCPT, which facilitates transfer learning between
models of identical architecture trained on diverse degradation types. Source
code and models are available at https://github.com/MILab-PKU/dcpt.
|
2501.15513
|
TinyLLaVA-Video: A Simple Framework of Small-scale Large Multimodal
Models for Video Understanding
|
cs.CV
|
We present the TinyLLaVA-Video, a video understanding model with parameters
not exceeding 4B that processes video sequences in a simple manner, without the
need for complex architectures, supporting both fps sampling and uniform frame
sampling. Our model is characterized by modularity and scalability, allowing
training and inference with limited computational resources and enabling users
to replace components based on their needs. We validate the effectiveness of
this framework through experiments, with the best model achieving performance
comparable to certain existing 7B models on multiple video understanding
benchmarks. The code and training recipes are fully open source, with all
components and training data publicly available. We hope this work can serve as
a baseline for practitioners exploring small-scale multimodal models for video
understanding. It is available at
\url{https://github.com/ZhangXJ199/TinyLLaVA-Video}.
|
2501.15519
|
Fuzzy-aware Loss for Source-free Domain Adaptation in Visual Emotion
Recognition
|
cs.CV cs.LG
|
Source-free domain adaptation in visual emotion recognition (SFDA-VER) is a
highly challenging task that requires adapting VER models to the target domain
without relying on source data, which is of great significance for data privacy
protection. However, due to the unignorable disparities between visual emotion
data and traditional image classification data, existing SFDA methods perform
poorly on this task. In this paper, we investigate the SFDA-VER task from a
fuzzy perspective and identify two key issues: fuzzy emotion labels and fuzzy
pseudo-labels. These issues arise from the inherent uncertainty of emotion
annotations and the potential mispredictions in pseudo-labels. To address these
issues, we propose a novel fuzzy-aware loss (FAL) to enable the VER model to
better learn and adapt to new domains under fuzzy labels. Specifically, FAL
modifies the standard cross entropy loss and focuses on adjusting the losses of
non-predicted categories, which prevents a large number of uncertain or
incorrect predictions from overwhelming the VER model during adaptation. In
addition, we provide a theoretical analysis of FAL and prove its robustness in
handling the noise in generated pseudo-labels. Extensive experiments on 26
domain adaptation sub-tasks across three benchmark datasets demonstrate the
effectiveness of our method.
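The abstract's core idea, adjusting the loss contribution of non-predicted categories so that uncertain or incorrect pseudo-labels do not overwhelm training, can be illustrated with a generic soft-target variant of cross entropy. This is a hypothetical sketch resembling label smoothing, not the paper's actual FAL formulation, which the abstract does not specify:

```python
import numpy as np

def cross_entropy(logits, label):
    """Standard cross entropy: -log p_y for the labeled class."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return -np.log(p[label])

def fuzzy_aware_loss(logits, label, alpha=0.3):
    """Hypothetical fuzzy-aware variant: spread a small mass `alpha` of the
    target over the non-predicted categories, softening the penalty a fuzzy
    or mislabeled sample imposes on them. With alpha=0 this reduces exactly
    to standard cross entropy."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    k = len(p)
    target = np.full(k, alpha / (k - 1))  # mass shared by non-labeled classes
    target[label] = 1.0 - alpha           # remaining mass on the fuzzy label
    return -(target * np.log(p)).sum()
```

The adjustable mass on non-labeled classes is the knob such a loss tunes; the paper's theoretical robustness analysis presumably characterizes how a choice like this bounds the effect of noisy pseudo-labels.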
|
2501.15520
|
Efficient Self-Supervised Grading of Prostate Cancer Pathology
|
cs.CV
|
Prostate cancer grading using the ISUP system (International Society of
Urological Pathology) for treatment decisions is highly subjective and requires
considerable expertise. Despite advances in computer-aided diagnosis systems,
few have handled efficient ISUP grading on Whole Slide Images (WSIs) of
prostate biopsies based only on slide-level labels. Some of the general
challenges include managing gigapixel WSIs, obtaining patch-level annotations,
and dealing with stain variability across centers. One of the main
task-specific challenges faced by deep learning in ISUP grading is the
learning of patch-level features of Gleason patterns (GPs) based only on their
slide labels. In this scenario, an efficient framework for ISUP grading is
developed.
The proposed TSOR is based on a novel Task-specific Self-supervised learning
(SSL) model, which is fine-tuned using Ordinal Regression. Since the diversity
of training samples plays a crucial role in SSL, a patch-level dataset is
created to be relatively balanced w.r.t. the Gleason grades (GGs). This
balanced dataset is used for pre-training, so that the model can effectively
learn stain-agnostic features of the GP for better generalization. In medical
image grading, it is desirable that misclassifications be as close as possible
to the actual grade. From this perspective, the model is then fine-tuned for
the task of ISUP grading using an ordinal regression-based approach.
Experimental results on the most extensive multicenter prostate biopsies
dataset (PANDA challenge), as well as the SICAP dataset, demonstrate the
effectiveness of this novel framework compared to state-of-the-art methods.
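The ordinal-regression idea, penalizing far-off grade errors more heavily than near misses, is commonly implemented by recoding each grade into cumulative binary targets. The following is a minimal sketch of that generic encoding (the abstract does not specify TSOR's exact formulation, so this is illustrative only):

```python
import numpy as np

def ordinal_targets(grade, num_grades):
    """Encode an ordinal grade g in {0..K-1} as K-1 binary targets:
    t[k] = 1 iff g > k. A prediction one grade away flips a single bit,
    so nearby misclassifications incur a smaller loss than distant ones."""
    return (np.arange(num_grades - 1) < grade).astype(float)

def decode(probs, threshold=0.5):
    """Predicted grade = number of cumulative thresholds exceeded."""
    return int((np.asarray(probs) > threshold).sum())
```

For 6 ISUP grades, grade 3 becomes the target vector [1, 1, 1, 0, 0]; a model trained with per-threshold binary losses on such targets is rewarded for keeping errors close to the true grade.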
|
2501.15522
|
Estimating Committor Functions via Deep Adaptive Sampling on Rare
Transition Paths
|
stat.ML cs.LG q-bio.QM
|
The committor functions are central to investigating rare but important
events in molecular simulations. It is known that computing the committor
function suffers from the curse of dimensionality. Recently, using neural
networks to estimate the committor function has gained attention due to its
potential for high-dimensional problems. Training neural networks to
approximate the committor function requires sampling transition data from
direct simulations of rare events, which is very inefficient. The
scarcity of transition data makes it challenging to approximate the committor
function. To address this problem, we propose an efficient framework to
generate data points in the transition state region that helps train neural
networks to approximate the committor function. We design a Deep Adaptive
Sampling method for TRansition paths (DASTR), where deep generative models are
employed to generate samples to capture the information of transitions
effectively. In particular, we treat a non-negative function in the integrand
of the loss functional as an unnormalized probability density function and
approximate it with the deep generative model. The new samples from the deep
generative model are concentrated in the transition state region, with fewer
samples in other regions. This distribution provides effective samples
for approximating the committor function and significantly improves the
accuracy. We demonstrate the effectiveness of the proposed method through both
simulations and realistic examples.
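The sampling idea, treating a non-negative factor of the loss integrand as an unnormalized density and drawing samples where its mass concentrates, is standard importance sampling. The toy sketch below uses a fixed Gaussian proposal in place of DASTR's learned deep generative model, to show why samples concentrated in the high-mass (transition-like) region make the estimate efficient:

```python
import numpy as np

rng = np.random.default_rng(0)

def target_weight(x, center=5.0, width=0.1):
    """Non-negative integrand concentrated in a narrow region, standing in
    for the rarely visited transition state region."""
    return np.exp(-(x - center) ** 2 / (2 * width ** 2))

def importance_estimate(n=50_000, center=5.0, scale=0.2):
    """Estimate the integral of target_weight by sampling from a proposal
    concentrated near the same region (a fixed Gaussian here; DASTR would
    learn this proposal with a deep generative model)."""
    x = rng.normal(center, scale, n)
    q = np.exp(-(x - center) ** 2 / (2 * scale ** 2)) / (scale * np.sqrt(2 * np.pi))
    return np.mean(target_weight(x) / q)
```

A uniform sampler over a wide domain would almost never land inside the narrow bump, whereas the matched proposal yields a low-variance estimate of the true value, width times the square root of 2 pi, with modest sample counts.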
|
2501.15529
|
UNIDOOR: A Universal Framework for Action-Level Backdoor Attacks in Deep
Reinforcement Learning
|
cs.LG cs.AI cs.CR
|
Deep reinforcement learning (DRL) is widely applied to safety-critical
decision-making scenarios. However, DRL is vulnerable to backdoor attacks,
especially action-level backdoors, which pose significant threats through
precise manipulation and flexible activation, risking outcomes like vehicle
collisions or drone crashes. The key distinction of action-level backdoors lies
in the utilization of the backdoor reward function to associate triggers with
target actions. Nevertheless, existing studies typically rely on backdoor
reward functions with fixed values or conditional flipping, which lack
universality across diverse DRL tasks and backdoor designs, resulting in
fluctuations or even failure in practice.
This paper proposes the first universal action-level backdoor attack
framework, called UNIDOOR, which enables adaptive exploration of backdoor
reward functions through performance monitoring, eliminating the reliance on
expert knowledge and grid search. We highlight that action tampering serves as
a crucial component of action-level backdoor attacks in continuous action
scenarios, as it addresses attack failures caused by low-frequency target
actions. Extensive evaluations demonstrate that UNIDOOR significantly enhances
the attack performance of action-level backdoors, showcasing its universality
across diverse attack scenarios, including single/multiple agents,
single/multiple backdoors, discrete/continuous action spaces, and sparse/dense
reward signals. Furthermore, visualization results encompassing state
distribution, neuron activation, and animations demonstrate the stealthiness of
UNIDOOR. The source code of UNIDOOR can be found at
https://github.com/maoubo/UNIDOOR.
|
2501.15536
|
Intelligent Surface Assisted Radar Stealth Against Unauthorized ISAC
|
cs.IT eess.SP math.IT
|
The integration of radar sensors and communication networks as envisioned for
the 6G wireless networks poses significant security risks, e.g., the user
position information can be released to an unauthorized dual-functional base
station (DFBS). To address this issue, we propose an intelligent surface
(IS)-assisted radar stealth technology that prevents adversarial sensing.
Specifically, we modify the wireless channels by tuning the phase shifts of IS
in order to protect the target user from unauthorized sensing without
jeopardizing the wireless communication link. In principle, we wish to maximize
the distortion between the estimated angle-of-arrival (AoA) by the DFBS and the
ground truth given the minimum signal-to-noise ratio (SNR) constraint for
communication. Toward this end, we propose characterizing the problem as a game
played by the DFBS and the IS, in which the DFBS aims to maximize a particular
utility while the IS aims to minimize the utility. Although the problem is
nonconvex, this paper shows that it can be optimally solved in closed form from
a geometric perspective. According to the simulations, the proposed closed-form
algorithm outperforms the baseline methods significantly in combating
unauthorized sensing while limiting the impacts on wireless communications.
|