| id | title | categories | abstract |
|---|---|---|---|
2501.09327
|
On Learning Informative Trajectory Embeddings for Imitation,
Classification and Regression
|
cs.LG cs.AI
|
In real-world sequential decision making tasks like autonomous driving,
robotics, and healthcare, learning from observed state-action trajectories is
critical for tasks like imitation, classification, and clustering. For example,
self-driving cars must replicate human driving behaviors, while robots and
healthcare systems benefit from modeling decision sequences, whether or not
they come from expert data. Existing trajectory encoding methods often focus on
specific tasks or rely on reward signals, limiting their ability to generalize
across domains and tasks. Inspired by the success of embedding models like CLIP
and BERT in static domains, we propose a novel method for embedding
state-action trajectories into a latent space that captures the skills and
competencies of the underlying dynamic decision-making processes. This method
operates without the need for reward labels, enabling better generalization
across diverse domains and tasks. Our contributions are threefold: (1) We
introduce a trajectory embedding approach that captures multiple abilities from
state-action data. (2) The learned embeddings exhibit strong representational
power across downstream tasks, including imitation, classification, clustering,
and regression. (3) The embeddings demonstrate unique properties, such as
controlling agent behaviors in IQ-Learn and an additive structure in the latent
space. Experimental results confirm that our method outperforms traditional
approaches, offering more flexible and powerful trajectory representations for
various applications. Our code is available at
https://github.com/Erasmo1015/vte.
|
2501.09328
|
Neural Honeytrace: A Robust Plug-and-Play Watermarking Framework against
Model Extraction Attacks
|
cs.CR cs.AI
|
Developing high-performance deep learning models is resource-intensive,
leading model owners to utilize Machine Learning as a Service (MLaaS) platforms
instead of publicly releasing their models. However, malicious users may
exploit query interfaces to execute model extraction attacks, reconstructing
the target model's functionality locally. While prior research has investigated
triggerable watermarking techniques for asserting ownership, existing methods
face significant challenges: (1) most approaches require additional training,
resulting in high overhead and limited flexibility, and (2) they often fail to
account for advanced attackers, leaving them vulnerable to adaptive attacks.
In this paper, we propose Neural Honeytrace, a robust plug-and-play
watermarking framework against model extraction attacks. We first formulate a
watermark transmission model from an information-theoretic perspective,
providing an interpretable account of the principles and limitations of
existing triggerable watermarking. Guided by the model, we further introduce:
(1) a similarity-based training-free watermarking method for plug-and-play and
flexible watermarking, and (2) a distribution-based multi-step watermark
information transmission strategy for robust watermarking. Comprehensive
experiments on four datasets demonstrate that Neural Honeytrace outperforms
previous methods in both efficiency and resistance to adaptive attacks. Neural
Honeytrace reduces the average number of samples required for a worst-case
t-Test-based copyright claim from $12,000$ to $200$ with zero training cost.
|
2501.09331
|
Identifying Information from Observations with Uncertainty and Novelty
|
cs.LG stat.ML
|
Machine learning from observations must encounter and process uncertainty and
novelty, especially when a learner is expected to maintain performance when
observing new information and to choose the hypothesis that best fits the
currently observed information. In this context, some key questions arise: what
is information, how much information did the observations provide, how much
information is required to identify the data-generating process, how many
observations remain to get that information, and how does a predictor determine
that it has observed novel information? This paper strengthens existing answers
to these questions by formalizing the notion of "identifiable information" that
arises from the language used to express the relationship between distinct
states. Model identifiability and sample complexity are defined via computation
of an indicator function over a set of hypotheses. Their properties and
asymptotic statistics are described for data-generating processes ranging from
deterministic processes to ergodic stationary stochastic processes. This
connects the notion of identifying information in finite steps with asymptotic
statistics and PAC-learning. The indicator function's computation naturally
formalizes novel information and its identification from observations with
respect to a hypothesis set. We also prove that a computable PAC-Bayes learner's
sample complexity distribution is determined by its moments in terms of the
prior probability distribution over a fixed finite hypothesis set.
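The indicator-function idea can be sketched as a toy hypothesis-elimination loop (an illustrative construction only, not the paper's formalism; the hypothesis set and processes below are invented for the example):

```python
# Toy sketch of identifying a data-generating process via an indicator
# function over a finite hypothesis set (hypotheses invented for illustration).

hypotheses = {
    "constant_a": lambda t: "a",
    "alternating": lambda t: "a" if t % 2 == 0 else "b",
    "constant_b": lambda t: "b",
}

def identified(observations, hypotheses):
    """Indicator: 1 iff the observations single out exactly one hypothesis."""
    consistent = {
        name for name, h in hypotheses.items()
        if all(h(t) == obs for t, obs in enumerate(observations))
    }
    return int(len(consistent) == 1), consistent

# One "a" is consistent with two hypotheses, so nothing is identified yet;
# a second "a" rules out "alternating" and identifies the process.
print(identified(["a"], hypotheses)[0])       # 0
print(identified(["a", "a"], hypotheses)[0])  # 1
```

Novelty is then naturally formalized as an observation inconsistent with every remaining hypothesis.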
|
2501.09333
|
Prompt-CAM: A Simpler Interpretable Transformer for Fine-Grained
Analysis
|
cs.CV cs.AI
|
We present a simple usage of pre-trained Vision Transformers (ViTs) for
fine-grained analysis, aiming to identify and localize the traits that
distinguish visually similar categories, such as different bird species or dog
breeds. Pre-trained ViTs such as DINO have shown remarkable capabilities to
extract localized, informative features. However, saliency maps like Grad-CAM
can hardly point out the traits: they often locate the whole object with a
blurred, coarse heatmap rather than the traits themselves. We propose a novel
approach, Prompt Class Attention Map (Prompt-CAM), to the rescue. Prompt-CAM
learns class-specific
prompts to a pre-trained ViT and uses the corresponding outputs for
classification. To classify an image correctly, the true-class prompt must
attend to the unique image patches not seen in other classes' images, i.e.,
traits. As such, the true class's multi-head attention maps reveal traits and
their locations. Implementation-wise, Prompt-CAM is almost a free lunch by
simply modifying the prediction head of Visual Prompt Tuning (VPT). This makes
Prompt-CAM fairly easy to train and apply, sharply contrasting other
interpretable methods that design specific models and training processes. It is
even simpler than the recently published INterpretable TRansformer (INTR),
whose encoder-decoder architecture prevents it from leveraging pre-trained
ViTs. Extensive empirical studies on a dozen datasets from various domains
(e.g., birds, fishes, insects, fungi, flowers, food, and cars) validate
Prompt-CAM's superior interpretation capability.
|
2501.09334
|
Jodes: Efficient Oblivious Join in the Distributed Setting
|
cs.CR cs.DB cs.DC
|
Trusted execution environment (TEE) has provided an isolated and secure
environment for building cloud-based analytic systems, but it still suffers
from access pattern leakages caused by side-channel attacks. To better secure
the data, computation inside TEE enclave should be made oblivious, which
introduces significant overhead and severely slows down the computation. A
natural way to speed up is to build the analytic system with multiple servers
in the distributed setting. However, this setting raises a new security concern
-- the volumes of the transmissions among these servers can leak sensitive
information to a network adversary. Existing works have designed specialized
algorithms to address this concern, but their support for equi-join, one of
the most important but non-trivial database operators, is either inefficient,
limited, or reliant on a weak security assumption.
In this paper, we present Jodes, an efficient oblivious join algorithm in the
distributed setting. Jodes prevents leakage on both the network and enclave
sides, supports general equi-join operations, and provides a high level of
security that publicizes only the input sizes and the output size.
Meanwhile, its communication and computation costs are both asymptotically
superior to those of existing algorithms. To demonstrate the practicality
of Jodes, we conduct experiments in the distributed setting comprising 16
servers. Empirical results show that Jodes achieves up to a sixfold performance
improvement over state-of-the-art join algorithms.
|
2501.09336
|
Estimating shared subspace with AJIVE: the power and limitation of
multiple data matrices
|
stat.ML cs.LG math.ST stat.TH
|
Integrative data analysis often requires disentangling joint and individual
variations across multiple datasets, a challenge commonly addressed by the
Joint and Individual Variation Explained (JIVE) model. While numerous methods
have been developed to estimate the shared subspace under JIVE, the theoretical
understanding of their performance remains limited, particularly in the context
of multiple matrices and varying degrees of subspace misalignment. This paper
bridges this gap by providing a systematic analysis of shared subspace
estimation in multi-matrix settings.
We focus on the Angle-based Joint and Individual Variation Explained (AJIVE)
method, a two-stage spectral approach, and establish new performance guarantees
that uncover its strengths and limitations. Specifically, we show that in high
signal-to-noise ratio (SNR) regimes, AJIVE's estimation error decreases with
the number of matrices, demonstrating the power of multi-matrix integration.
Conversely, in low-SNR settings, AJIVE exhibits a non-diminishing error,
highlighting fundamental limitations. To complement these results, we derive
minimax lower bounds, showing that AJIVE achieves optimal rates in high-SNR
regimes. Furthermore, we analyze an oracle-aided spectral estimator to
demonstrate that the non-diminishing error in low-SNR scenarios is a
fundamental barrier. Extensive numerical experiments corroborate our
theoretical findings, providing insights into the interplay between SNR, the
number of matrices, and subspace misalignment.
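A toy two-stage spectral sketch in the spirit of AJIVE (dimensions, ranks, and signal strength invented here; this is not the authors' implementation): estimate each matrix's leading feature subspace, then take a second SVD of the stacked bases, where singular values near sqrt(K) flag directions shared across all K matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, K, r = 200, 30, 4, 3  # samples per matrix, features, matrices, rank kept

# Each matrix carries the same one-dimensional signal direction plus noise.
shared = rng.standard_normal(p); shared /= np.linalg.norm(shared)
mats = [np.outer(rng.standard_normal(n), shared) * 5.0
        + rng.standard_normal((n, p)) for _ in range(K)]

# Stage 1: leading right-singular subspace (in feature space) per matrix.
bases = []
for X in mats:
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    bases.append(Vt[:r].T)               # p x r orthonormal basis

# Stage 2: SVD of the stacked bases; a singular value near sqrt(K) indicates
# a direction lying (almost) in every individual subspace.
stacked = np.hstack(bases)               # p x (K*r)
U, s, _ = np.linalg.svd(stacked, full_matrices=False)
shared_est = U[:, 0]

alignment = abs(shared_est @ shared)     # cosine of angle to the true direction
print(f"top singular value: {s[0]:.2f} (sqrt(K) = {np.sqrt(K):.2f})")
print(f"alignment with true shared direction: {alignment:.3f}")
```

In this high-SNR toy setting the estimate aligns closely with the true shared direction; the paper's analysis concerns exactly how this degrades as SNR drops and misalignment grows.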
|
2501.09338
|
Robust UAV Path Planning with Obstacle Avoidance for Emergency Rescue
|
cs.RO cs.SY eess.SY
|
Unmanned aerial vehicles (UAVs) are efficient tools for diverse tasks such as
electronic reconnaissance, agricultural operations, and disaster relief. In
complex three-dimensional (3D) environments, path planning with obstacle
avoidance is a significant issue for the safe operation of UAVs. In this paper,
we construct a comprehensive 3D scenario with obstacles and no-fly zones for
dynamic UAV trajectories. Moreover, a novel artificial potential field
algorithm coupled with simulated annealing (APF-SA) is proposed to tackle the
robust path planning problem. APF-SA modifies the attractive and repulsive
potential functions and leverages simulated annealing to escape local minima
and converge to globally optimal solutions. Simulation results demonstrate the
effectiveness of APF-SA, enabling efficient autonomous path planning for UAVs
with obstacle avoidance.
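The coupling of potential fields with simulated annealing can be sketched in 2D (a minimal illustration with invented gains and a single circular obstacle, not the paper's modified potentials or 3D scenario):

```python
import math, random

random.seed(1)
goal = (9.0, 9.0)
obstacle, d0 = (5.0, 5.0), 2.0     # circular obstacle centre, influence radius

def potential(x, y, k_att=0.5, eta=10.0):
    """Classic attractive quadratic well plus a short-range repulsive barrier."""
    u = 0.5 * k_att * ((x - goal[0])**2 + (y - goal[1])**2)
    d = math.hypot(x - obstacle[0], y - obstacle[1])
    if d < d0:
        u += 0.5 * eta * (1.0 / max(d, 1e-6) - 1.0 / d0)**2
    return u

def apf_sa_path(start, steps=400, step=0.3, T0=1.0, cooling=0.99):
    """Potential descent with SA-style acceptance to escape local minima."""
    x, y = start
    T, path = T0, [(x, y)]
    for _ in range(steps):
        nx = x + random.uniform(-step, step)
        ny = y + random.uniform(-step, step)
        dU = potential(nx, ny) - potential(x, y)
        # Always accept downhill moves; accept uphill with Boltzmann probability.
        if dU < 0 or random.random() < math.exp(-dU / max(T, 1e-9)):
            x, y = nx, ny
            path.append((x, y))
        T *= cooling                       # annealing schedule
        if math.hypot(x - goal[0], y - goal[1]) < 0.5:
            break
    return path

path = apf_sa_path((0.0, 0.0))
print("final point:", path[-1], "after", len(path), "waypoints")
```

The annealed acceptance of occasional uphill moves is what lets the planner slide past the repulsive barrier instead of stalling in a local minimum of the combined potential.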
|
2501.09341
|
SE-BSFV: Online Subspace Learning based Shadow Enhancement and
Background Suppression for ViSAR under Complex Background
|
cs.CV
|
Video synthetic aperture radar (ViSAR) has attracted substantial attention in
the moving target detection (MTD) field due to its ability to continuously
monitor changes in the target area. In ViSAR, the moving targets' shadows do
not shift or defocus, which is widely used as a feature for MTD. However, the
shadows are difficult to distinguish from low-scattering regions in the
background, which causes more missed detections and false alarms. Therefore, it is
worth investigating how to enhance the distinction between the shadows and
background. In this study, we propose the Shadow Enhancement and Background
Suppression for ViSAR (SE-BSFV) algorithm. The SE-BSFV algorithm is based on
low-rank representation (LRR) theory and adopts an online subspace learning
technique to enhance shadows and suppress the background in ViSAR images. Firstly,
we use a registration algorithm to register the ViSAR images and utilize
Gaussian mixture distribution (GMD) to model the ViSAR data. Secondly, the
knowledge learned from the previous frames is leveraged to estimate the GMD
parameters of the current frame, and the Expectation-maximization (EM)
algorithm is used to estimate the subspace parameters. Then, the foreground
matrix of the current frame can be obtained. Finally, the alternating direction
method of multipliers (ADMM) is used to eliminate strong scattering objects in
the foreground matrix to obtain the final results. The experimental results
indicate that the SE-BSFV algorithm significantly enhances the shadows'
saliency and greatly improves the detection performance while ensuring
efficiency compared with several other advanced pre-processing algorithms.
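The low-rank background / sparse foreground split underlying this pipeline can be sketched with a batch truncated SVD (a deliberately simplified stand-in for the paper's online subspace learning, GMD modeling, and ADMM steps; the synthetic data and threshold are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
frames, pixels = 30, 400

# Synthetic ViSAR-like stack: a rank-1 static background plus a dark
# moving-shadow pixel that shifts position from frame to frame.
background = np.outer(np.ones(frames), rng.random(pixels))
data = background.copy()
for t in range(frames):
    data[t, (10 * t) % pixels] -= 0.8      # low-scattering shadow in frame t

# Low-rank background estimate via truncated SVD; foreground = residual.
U, s, Vt = np.linalg.svd(data, full_matrices=False)
low_rank = s[0] * np.outer(U[:, 0], Vt[0])
foreground = data - low_rank

# Shadows appear as strongly negative residuals, separable by thresholding.
detected = np.argwhere(foreground < -0.4)
print("frames with a detected shadow:", len(set(detected[:, 0])))
```

Because the background is (near) low-rank across frames while the shadow is sparse and moving, the residual isolates the shadow and suppresses the clutter, which is the effect SE-BSFV pursues in its online, probabilistically modeled form.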
|
2501.09345
|
Rational Tuning of LLM Cascades via Probabilistic Modeling
|
cs.LG cs.AI stat.ML
|
Understanding the reliability of large language models (LLMs) has recently
garnered significant attention. Given LLMs' propensity to hallucinate, as well
as their high sensitivity to prompt design, it is already challenging to
predict the performance of an individual LLM. However, the problem becomes more
complex for compound LLM systems such as cascades, where in addition to each
model's standalone performance, we must understand how the error rates of
different models interact. In this paper, we present a probabilistic model for
the joint performance distribution of a sequence of LLMs, which enables a
framework for rationally tuning the confidence thresholds of an LLM cascade
using continuous optimization. Compared to selecting confidence thresholds
using grid search, our parametric Markov-copula model significantly improves
runtime scaling with respect to the length of the cascade and the desired
resolution of the cost-error curve, turning them from intractable into
low-order polynomial. In addition, the optimal thresholds computed using our
continuous optimization-based algorithm increasingly outperform those found via
grid search as cascade length grows, improving the area under the cost-error
curve by 1.9% on average for cascades consisting of at least three models.
Overall, our Markov-copula model provides a rational basis for tuning LLM
cascade performance and points to the potential of probabilistic methods in
analyzing LLM systems.
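The cascade mechanics being tuned can be illustrated with a toy two-model simulation (all numbers invented; the grid sweep below is the baseline that the paper's Markov-copula model replaces with continuous optimization):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Toy cascade: a cheap model answers when its confidence clears a threshold;
# otherwise the query is deferred to an expensive, more accurate model.
conf_small = rng.beta(5, 2, n)                  # small model's confidence
correct_small = rng.random(n) < conf_small      # calibrated toy small model
correct_large = rng.random(n) < 0.92            # large model's accuracy
cost_small, cost_large = 1.0, 10.0

def cascade(threshold):
    """Error rate and average cost when deferring below the threshold."""
    defer = conf_small < threshold
    correct = np.where(defer, correct_large, correct_small)
    cost = cost_small + defer * cost_large      # the small model always runs
    return 1.0 - correct.mean(), cost.mean()

# Sweeping thresholds traces out the cost-error curve.
for thr in (0.0, 0.5, 0.9):
    err, cost = cascade(thr)
    print(f"threshold {thr:.1f}: error {err:.3f}, avg cost {cost:.2f}")
```

With more models, the grid over joint thresholds grows combinatorially, which is why a parametric joint model of the confidence distributions pays off.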
|
2501.09347
|
UVRM: A Scalable 3D Reconstruction Model from Unposed Videos
|
cs.CV
|
Large Reconstruction Models (LRMs) have recently become a popular method for
creating 3D foundational models. Training 3D reconstruction models with 2D
visual data traditionally requires prior knowledge of camera poses for the
training samples, a process that is both time-consuming and prone to errors.
Consequently, 3D reconstruction training has been confined to either synthetic
3D datasets or small-scale datasets with annotated poses. In this study, we
investigate the feasibility of 3D reconstruction using unposed video data of
various objects. We introduce UVRM, a novel 3D reconstruction model capable of
being trained and evaluated on monocular videos without requiring any
information about the pose. UVRM uses a transformer network to implicitly
aggregate video frames into a pose-invariant latent feature space, which is
then decoded into a tri-plane 3D representation. To obviate the need for
ground-truth pose annotations during training, UVRM employs a combination of
the score distillation sampling (SDS) method and an analysis-by-synthesis
approach, progressively synthesizing pseudo novel-views using a pre-trained
diffusion model. We qualitatively and quantitatively evaluate UVRM's
performance on the G-Objaverse and CO3D datasets without relying on pose
information. Extensive experiments show that UVRM is capable of effectively and
efficiently reconstructing a wide range of 3D objects from unposed videos.
|
2501.09349
|
ChartInsighter: An Approach for Mitigating Hallucination in Time-series
Chart Summary Generation with A Benchmark Dataset
|
cs.CL cs.HC
|
Effective chart summaries can significantly reduce the time and effort
decision-makers spend interpreting charts, enabling precise and efficient communication
of data insights. Previous studies have faced challenges in generating accurate
and semantically rich summaries of time-series data charts. In this paper, we
identify summary elements and common hallucination types in the generation of
time-series chart summaries, which serve as our guidelines for automatic
generation. We introduce ChartInsighter, which automatically generates chart
summaries of time-series data, effectively reducing hallucinations in chart
summary generation. Specifically, we assign multiple agents to generate the
initial chart summary and collaborate iteratively, during which they invoke
external data analysis modules to extract insights and compile them into a
coherent summary. Additionally, we implement a self-consistency test method to
validate and correct our summary. We create a high-quality benchmark of charts
and summaries, with hallucination types annotated on a sentence-by-sentence
basis, facilitating the evaluation of the effectiveness of reducing
hallucinations. Our evaluations using our benchmark show that our method
surpasses state-of-the-art models, and that our summary hallucination rate is
the lowest, which effectively reduces various hallucinations and improves
summary quality. The benchmark is available at
https://github.com/wangfen01/ChartInsighter.
|
2501.09350
|
Making Your Dreams A Reality: Decoding the Dreams into a Coherent Video
Story from fMRI Signals
|
cs.CV
|
This paper studies a brave new idea for the multimedia community and proposes
a novel framework to convert dreams into coherent video narratives using fMRI
data. Essentially, dreams have intrigued humanity for centuries, offering
glimpses into our subconscious minds. Recent advancements in brain imaging,
particularly functional magnetic resonance imaging (fMRI), have provided new
ways to explore the neural basis of dreaming. By combining subjective dream
experiences with objective neurophysiological data, we aim to understand the
visual aspects of dreams and create complete video narratives. Our process
involves three main steps: reconstructing visual perception, decoding dream
imagery, and integrating dream stories. Using innovative techniques in fMRI
analysis and language modeling, we seek to push the boundaries of dream
research and gain deeper insights into visual experiences during sleep. This
technical report introduces a novel approach to visually decoding dreams using
fMRI signals and weaving dream visuals into narratives using language models.
We gather a dataset of dreams along with descriptions to assess the
effectiveness of our framework.
|
2501.09351
|
RIS-Aided Fluid Antenna Array-Mounted UAV Networks
|
cs.IT eess.SP math.IT
|
This paper investigates reconfigurable intelligent surface (RIS)-assisted
unmanned aerial vehicle (UAV) downlink networks with fluid antennas (FA), where
RIS enables non-line-of-sight (NLoS) transmissions. Moreover, the FA is
equipped on the UAV offering dynamic antenna position adjustment, enhancing
spatial diversity besides UAV deployment. We aim at total downlink rate
maximization while ensuring minimum user rate requirement. We consider joint
optimization of active UAV beamforming, passive RIS beamforming, UAV deployment
and FA position adjustment. To address this complex problem, we propose the
beamforming for RIS/UAV and FA-UAV deployment (BRAUD) scheme, employing
alternating optimization, successive convex approximation (SCA), and the
sequential rank-one constraint relaxation (SROCR) method for the decomposed
subproblems.
Simulation results demonstrate the effectiveness of RIS-FA-UAV, which achieves
the highest rate compared with existing architectures lacking FA/UAV/RIS
deployment or proper beamforming. Moreover, BRAUD achieves the highest rate
among benchmark schemes, including the drop-rank method, heuristic
optimizations, conventional zero-forcing beamforming, and a random method.
|
2501.09352
|
PAL: Prompting Analytic Learning with Missing Modality for Multi-Modal
Class-Incremental Learning
|
cs.LG cs.MM eess.IV
|
Multi-modal class-incremental learning (MMCIL) seeks to leverage multi-modal
data, such as audio-visual and image-text pairs, thereby enabling models to
learn continuously across a sequence of tasks while mitigating forgetting.
While existing studies primarily focus on the integration and utilization of
multi-modal information for MMCIL, a critical challenge remains: the issue of
missing modalities during incremental learning phases. This oversight can
exacerbate severe forgetting and significantly impair model performance. To
bridge this gap, we propose PAL, a novel exemplar-free framework tailored to
MMCIL under missing-modality scenarios. Concretely, we devise modality-specific
prompts to compensate for missing information, facilitating the model to
maintain a holistic representation of the data. On this foundation, we
reformulate the MMCIL problem into a Recursive Least-Squares task, delivering
an analytical linear solution. Building upon these, PAL not only alleviates the
inherent under-fitting limitation in analytic learning but also preserves the
holistic representation of missing-modality data, achieving superior
performance with less forgetting across various multi-modal incremental
scenarios. Extensive experiments demonstrate that PAL significantly outperforms
competitive methods across various datasets, including UPMC-Food101 and
N24News, showcasing its robustness towards modality absence and its
anti-forgetting ability to maintain high incremental accuracy.
|
2501.09354
|
Style4Rec: Enhancing Transformer-based E-commerce Recommendation Systems
with Style and Shopping Cart Information
|
cs.IR cs.AI
|
Understanding users' product preferences is essential to the efficacy of a
recommendation system. Precision marketing leverages users' historical data to
discern these preferences and recommends products that align with them.
However, recent browsing and purchase records might better reflect current
purchasing inclinations. Transformer-based recommendation systems have made
strides in sequential recommendation tasks, but they often fall short in
utilizing product image style information and shopping cart data effectively.
In light of this, we propose Style4Rec, a transformer-based e-commerce
recommendation system that harnesses style and shopping cart information to
enhance existing transformer-based sequential product recommendation systems.
We tested our model using an e-commerce dataset from our partnering company and
found that it exceeded established transformer-based sequential recommendation
benchmarks across various evaluation metrics, with notable improvements: HR@5
increased from 0.681 to 0.735, NDCG@5 increased from 0.594 to 0.674, and MRR@5
increased from 0.559 to 0.654. Thus, Style4Rec represents a significant step
forward in personalized e-commerce recommendation systems.
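The reported metrics can be computed per test interaction as follows (a standard sketch for the single-held-out-target setting; the item names are invented):

```python
import math

def rank_metrics(ranked_items, target, k=5):
    """HR@k, NDCG@k, and MRR@k for one interaction with one held-out target."""
    hit = target in ranked_items[:k]
    hr = 1.0 if hit else 0.0
    if hit:
        pos = ranked_items.index(target)   # 0-based rank of the target
        ndcg = 1.0 / math.log2(pos + 2)    # DCG of a single relevant item
        mrr = 1.0 / (pos + 1)
    else:
        ndcg = mrr = 0.0
    return hr, ndcg, mrr

# Target ranked 2nd among the top-5 recommendations:
# HR@5 = 1.0, NDCG@5 = 1/log2(3) ~ 0.631, MRR@5 = 0.5.
print(rank_metrics(["shoes", "bag", "hat", "belt", "scarf"], "bag"))
```

Dataset-level scores are simply these values averaged over all test interactions.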
|
2501.09355
|
YETI (YET to Intervene) Proactive Interventions by Multimodal AI Agents
in Augmented Reality Tasks
|
cs.AI cs.CV cs.ET cs.MA
|
Multimodal AI Agents are AI models that have the capability of interactively
and cooperatively assisting human users to solve day-to-day tasks. Augmented
Reality (AR) head-worn devices can uniquely improve the user experience of
solving procedural day-to-day tasks by providing egocentric multimodal (audio
and video) observational capabilities to AI Agents. Such AR capabilities can
help AI Agents see and listen to actions that users take which can relate to
multimodal capabilities of human users. Existing AI Agents, whether Large
Language Models (LLMs) or Multimodal Vision-Language Models (VLMs), are reactive
in nature, which means that models cannot take an action without reading or
listening to the human user's prompts. Proactivity of AI Agents on the other
hand can help the human user detect and correct any mistakes in agent observed
tasks, encourage users when they do tasks correctly or simply engage in
conversation with the user - akin to a human teaching or assisting a user. Our
proposed YET to Intervene (YETI) multimodal agent focuses on the research
question of identifying circumstances that may require the agent to intervene
proactively. This allows the agent to understand when it can intervene in a
conversation with human users that can help the user correct mistakes on tasks,
like cooking, using AR. Our YETI Agent learns scene understanding signals based
on interpretable notions of Structural Similarity (SSIM) on consecutive video
frames. We also define the alignment signal which the AI Agent can learn to
identify if the video frames corresponding to the user's actions on the task
are consistent with expected actions. These signals are used by our AI Agent to
determine when it should proactively intervene. We compare our results on the
instances of proactive intervention in the HoloAssist multimodal benchmark for
an expert agent guiding a user to complete procedural tasks.
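A frame-to-frame similarity signal of this kind can be sketched with a simplified global SSIM (a single-window variant over whole frames, not the standard 11x11 windowed SSIM; the frames below are synthetic stand-ins for egocentric video):

```python
import numpy as np

def global_ssim(a, b, L=1.0):
    """Simplified single-window SSIM over whole frames with values in [0, L]."""
    c1, c2 = (0.01 * L)**2, (0.03 * L)**2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a**2 + mu_b**2 + c1) * (va + vb + c2))

rng = np.random.default_rng(0)
frame = rng.random((64, 64))
still = frame + rng.normal(0, 0.01, frame.shape)   # nearly unchanged scene
moved = rng.random((64, 64))                        # scene changed entirely

# Low similarity between consecutive frames can flag a user action or scene
# change the agent may need to react to; high similarity suggests no change.
print(f"static pair SSIM:  {global_ssim(frame, still):.3f}")
print(f"changed pair SSIM: {global_ssim(frame, moved):.3f}")
```

Thresholding such a signal over time gives a cheap, interpretable trigger for deciding when a proactive intervention is even worth considering.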
|
2501.09357
|
Path Planning for a UAV Swarm Using Formation Teaching-Learning-Based
Optimization
|
cs.RO cs.SY eess.SY
|
This work addresses the path planning problem for a group of unmanned aerial
vehicles (UAVs) to maintain a desired formation during operation. Our approach
formulates the problem as an optimization task by defining a set of fitness
functions that not only ensure the formation but also include constraints for
optimal and safe UAV operation. To optimize the fitness function and obtain a
suboptimal path, we employ the teaching-learning-based optimization algorithm
and then further enhance it with mechanisms such as mutation, elite strategy,
and multi-subject combination. A number of simulations and experiments have
been conducted to evaluate the proposed method. The results demonstrate that
the algorithm successfully generates valid paths for the UAVs to fly in a
triangular formation for an inspection task.
|
2501.09359
|
A Multi-tiered Solution for Personalized Baggage Item Recommendations
using FastText and Association Rule Mining
|
cs.IR
|
This paper introduces an intelligent baggage item recommendation system to
optimize packing for air travelers by providing tailored suggestions based on
specific travel needs and destinations. Using FastText word embeddings and
Association Rule Mining (ARM), the system ensures efficient luggage space
utilization, compliance with weight limits, and an enhanced travel experience.
The methodology comprises four phases: (1) data collection and preprocessing
with pre-trained FastText embeddings for text representation and similarity
scoring; (2) a content-based recommendation system enriched by user search
history; (3) application of ARM to user interactions to uncover meaningful item
associations; and (4) integration of FastText and ARM for accurate, personalized
recommendations. Performance is evaluated using metrics such as coverage,
support, confidence, lift, leverage, and conviction. Results demonstrate the
system's effectiveness in providing relevant suggestions, improving customer
satisfaction, and simplifying the packing process. These insights advance
personalized recommendations, targeted marketing, and product optimization in
air travel and beyond.
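The rule-quality metrics listed above have standard definitions that can be computed directly from transactions (the baskets and the rule below are invented for illustration):

```python
def rule_metrics(transactions, antecedent, consequent):
    """Support, confidence, lift, leverage, and conviction for rule A -> B."""
    n = len(transactions)
    a = sum(antecedent <= t for t in transactions) / n           # P(A)
    b = sum(consequent <= t for t in transactions) / n           # P(B)
    ab = sum((antecedent | consequent) <= t for t in transactions) / n  # P(A,B)
    confidence = ab / a
    return {
        "support": ab,
        "confidence": confidence,
        "lift": confidence / b,
        "leverage": ab - a * b,
        "conviction": (1 - b) / (1 - confidence) if confidence < 1 else float("inf"),
    }

baskets = [
    {"sunscreen", "hat", "sandals"},
    {"sunscreen", "hat"},
    {"sunscreen", "swimsuit"},
    {"raincoat", "umbrella"},
]
# Hypothetical rule for a beach destination: {sunscreen} -> {hat}.
print(rule_metrics(baskets, {"sunscreen"}, {"hat"}))
```

Lift above 1 and positive leverage indicate the items co-occur more often than independence would predict, which is what makes a rule useful for packing suggestions.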
|
2501.09361
|
Strategic Base Representation Learning via Feature Augmentations for
Few-Shot Class Incremental Learning
|
cs.CV
|
Few-shot class incremental learning requires a model to learn new classes from
a small number of training instances while retaining knowledge of previously
learned classes. Existing frameworks typically freeze the parameters learned for
the previous classes when incorporating new classes. However, this approach
often results in suboptimal separation of the previously learned classes,
leading to overlap between old and new classes. Consequently, performance on the
old classes degrades as new classes are added. To address these
challenges, we propose a novel feature augmentation driven contrastive learning
framework designed to enhance the separation of previously learned classes to
accommodate new classes. Our approach involves augmenting feature vectors and
assigning proxy labels to these vectors. This strategy expands the feature
space, ensuring seamless integration of new classes within the expanded space.
Additionally, we employ a self-supervised contrastive loss to improve the
separation between previous classes. We validate our framework through
experiments on three FSCIL benchmark datasets: CIFAR100, miniImageNet, and
CUB200. The results demonstrate that our Feature Augmentation driven
Contrastive Learning framework significantly outperforms other approaches,
achieving state-of-the-art performance.
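The augment-and-proxy-label idea can be illustrated with a minimal numpy sketch (dimensions, noise scale, and the proxy-label assignment are invented here; this is not the authors' framework or its contrastive loss):

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, dim, n_per = 3, 16, 20

# Base-session features: one Gaussian cluster per previously learned class.
feats = np.concatenate([rng.normal(c * 4.0, 1.0, (n_per, dim))
                        for c in range(n_classes)])
labels = np.repeat(np.arange(n_classes), n_per)

# Augmentation: perturb each feature vector and assign it a *proxy* label,
# creating virtual classes that pre-reserve room in the feature space for
# classes arriving in later incremental sessions.
aug_feats = feats + rng.normal(0.0, 2.0, feats.shape)
aug_labels = labels + n_classes            # proxy labels 3, 4, 5

all_feats = np.concatenate([feats, aug_feats])
all_labels = np.concatenate([labels, aug_labels])
print("classes before augmentation:", n_classes,
      "| virtual classes after:", len(np.unique(all_labels)))
```

Training a contrastive objective over the enlarged label set then pushes the original class clusters apart, which is the separation effect the framework relies on.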
|
2501.09362
|
A Revisit to Rate-distortion Theory via Optimal Weak Transport
|
cs.IT math.IT
|
This paper revisits the rate-distortion theory from the perspective of
optimal weak transport, as recently introduced by Gozlan et al. While the
conditions for optimality and the existence of solutions are well-understood in
the case of discrete alphabets, the extension to abstract alphabets requires
more intricate analysis. Within the framework of weak transport problems, we
derive a parametric representation of the rate-distortion function, thereby
connecting the rate-distortion function with the Schr\"odinger bridge problem,
and establish necessary conditions for its optimality. As a byproduct of our
analysis, we concisely reproduce K. Rose's conclusions regarding the
achievability of the Shannon lower bound, without reliance on variational
calculus.
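For reference, the Shannon lower bound in question takes a well-known closed form for a continuous source under squared-error distortion (stated here as standard background, not taken from the paper):

```latex
% Shannon lower bound under squared-error distortion, for a continuous
% source X with differential entropy h(X):
R(D) \;\ge\; h(X) - \tfrac{1}{2}\log\!\left(2\pi e D\right)
```

Equality holds when the optimal backward channel is additive Gaussian, i.e., $X = \hat{X} + Z$ with $Z \sim \mathcal{N}(0, D)$ independent of $\hat{X}$.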
|
2501.09363
|
Identification of Traditional Medicinal Plant Leaves Using an effective
Deep Learning model and Self-Curated Dataset
|
cs.CV
|
Medicinal plants have been a key component in producing traditional and
modern medicines, especially in the field of Ayurveda, an ancient Indian
medical system. Producing these medicines and collecting and extracting the
right plant is a crucial step due to the visually similar nature of some
plants. Separating these plants from non-medicinal plants requires human
expert intervention. To solve the issue of accurate plant identification and
reduce the need for a human expert in the collection process, employing
computer vision methods will be efficient and beneficial. In this paper, we
have proposed a model that solves such issues. The proposed model is a custom
convolutional neural network (CNN) architecture with 6 convolution layers,
max-pooling layers, and dense layers. The model was tested on three different
datasets named Indian Medicinal Leaves Image Dataset, MED117 Medicinal Plant
Leaf Dataset, and the self-curated dataset by the authors. The proposed model
achieved respective accuracies of 99.5%, 98.4%, and 99.7% using various
optimizers including Adam, RMSprop, and SGD with momentum.
|
2501.09368
|
Aligning Instruction Tuning with Pre-training
|
cs.AI
|
Instruction tuning enhances large language models (LLMs) to follow human
instructions across diverse tasks, relying on high-quality datasets to guide
behavior. However, these datasets, whether manually curated or synthetically
generated, are often narrowly focused and misaligned with the broad
distributions captured during pre-training, limiting LLM generalization and
effective use of pre-trained knowledge. We propose Aligning Instruction Tuning
with Pre-training (AITP), a method that bridges this gap by identifying
coverage shortfalls in instruction-tuning datasets and rewriting
underrepresented pre-training data into high-quality instruction-response
pairs. This approach enriches dataset diversity while preserving task-specific
objectives. Evaluations on three fully open LLMs across eight benchmarks
demonstrate consistent performance improvements with AITP. Ablations highlight
the benefits of adaptive data selection, controlled rewriting, and balanced
integration, emphasizing the importance of aligning instruction tuning with
pre-training distributions to unlock the full potential of LLMs.
|
2501.09372
|
Image Segmentation with transformers: An Overview, Challenges and Future
|
cs.CV
|
Image segmentation, a key task in computer vision, has traditionally relied
on convolutional neural networks (CNNs), yet these models struggle to capture
complex spatial dependencies and contextual information, handle objects of
varying scales, and avoid manually crafted architecture components. This paper
explores the shortcomings of CNN-based models and the shift towards transformer
architectures to overcome those limitations. This work reviews
state-of-the-art transformer-based segmentation models, addressing
segmentation-specific challenges and their solutions. The paper discusses
current challenges in transformer-based segmentation and outlines promising
future trends, such as lightweight architectures and enhanced data efficiency.
This survey serves as a guide for understanding the impact of transformers in
advancing segmentation capabilities and overcoming the limitations of
traditional models.
|
2501.09384
|
Evaluating LLM Abilities to Understand Tabular Electronic Health
Records: A Comprehensive Study of Patient Data Extraction and Retrieval
|
cs.CL cs.IR
|
Electronic Health Record (EHR) tables pose unique challenges among which is
the presence of hidden contextual dependencies between medical features with a
high level of data dimensionality and sparsity. This study presents the first
investigation into the abilities of LLMs to comprehend EHRs for patient data
extraction and retrieval. We conduct extensive experiments using the MIMICSQL
dataset to explore the impact of prompt structure, instruction, context, and
demonstrations on the task performance of two backbone LLMs, Llama2 and
Meditron. Through quantitative and qualitative analyses, our findings show
that optimal feature selection and serialization methods can enhance task
performance by up to 26.79% compared to naive approaches. Similarly, in-context
learning setups with relevant example selection improve data extraction
performance by 5.95%. Based on our study findings, we propose guidelines that
we believe would help the design of LLM-based models to support health search.
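The serialization idea can be illustrated with a minimal sketch; the feature names, template, and helper below are hypothetical and are not taken from the paper or from MIMICSQL.

```python
# Illustrative sketch: serializing one tabular EHR row into prompt text for an
# LLM. Feature selection is modeled by an optional `features` list; the
# 'name is value' wording is an assumed template, not the paper's method.

def serialize_record(record, features=None):
    """Render selected features of a patient row as 'name is value' text."""
    selected = features if features is not None else list(record)
    parts = [f"{name} is {record[name]}" for name in selected if name in record]
    return "; ".join(parts)

row = {"age": 67, "gender": "F", "heart_rate": 88, "diagnosis": "sepsis"}
print(serialize_record(row, features=["age", "diagnosis"]))
# prints: age is 67; diagnosis is sepsis
```

Restricting `features` to an informative subset mimics the feature-selection step the abstract credits with large performance gains.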
|
2501.09393
|
SVIA: A Street View Image Anonymization Framework for Self-Driving
Applications
|
cs.CV
|
In recent years, there has been an increasing interest in image
anonymization, particularly focusing on the de-identification of faces and
individuals. However, for self-driving applications, merely de-identifying
faces and individuals might not provide sufficient privacy protection since
street views like vehicles and buildings can still disclose locations,
trajectories, and other sensitive information. Therefore, it remains crucial to
extend anonymization techniques to street view images to fully preserve the
privacy of users, pedestrians, and vehicles. In this paper, we propose a Street
View Image Anonymization (SVIA) framework for self-driving applications. The
SVIA framework consists of three integral components: a semantic segmenter to
segment an input image into functional regions, an inpainter to generate
alternatives to privacy-sensitive regions, and a harmonizer to seamlessly
stitch modified regions to guarantee visual coherence. Compared to existing
methods, SVIA achieves a much better trade-off between image generation quality
and privacy protection, as evidenced by experimental results for five common
metrics on two widely used public datasets.
|
2501.09394
|
Quantum-Enhanced Transformers for Robust Acoustic Scene Classification
in IoT Environments
|
eess.AS cs.AI cs.LG cs.PF cs.SD
|
The proliferation of Internet of Things (IoT) devices equipped with acoustic
sensors necessitates robust acoustic scene classification (ASC) capabilities,
even in noisy and data-limited environments. Traditional machine learning
methods often struggle to generalize effectively under such conditions. To
address this, we introduce Q-ASC, a novel Quantum-Inspired Acoustic Scene
Classifier that leverages the power of quantum-inspired transformers. By
integrating quantum concepts like superposition and entanglement, Q-ASC
achieves superior feature learning and enhanced noise resilience compared to
classical models. Furthermore, we introduce a Quantum Variational Autoencoder
(QVAE) based data augmentation technique to mitigate the challenge of limited
labeled data in IoT deployments. Extensive evaluations on the Tampere
University of Technology (TUT) Acoustic Scenes 2016 benchmark dataset
demonstrate that Q-ASC achieves remarkable accuracy between 68.3% and 88.5%
under challenging conditions, outperforming state-of-the-art methods by over 5%
in the best case. This research paves the way for deploying intelligent
acoustic sensing in IoT networks, with potential applications in smart homes,
industrial monitoring, and environmental surveillance, even in adverse acoustic
environments.
|
2501.09395
|
ELM-DeepONets: Backpropagation-Free Training of Deep Operator Networks
via Extreme Learning Machines
|
cs.LG cs.AI cs.NA math.NA
|
Deep Operator Networks (DeepONets) are among the most prominent frameworks
for operator learning, grounded in the universal approximation theorem for
operators. However, training DeepONets typically requires significant
computational resources. To address this limitation, we propose ELM-DeepONets,
an Extreme Learning Machine (ELM) framework for DeepONets that leverages the
backpropagation-free nature of ELM. By reformulating DeepONet training as a
least-squares problem for newly introduced parameters, the ELM-DeepONet
approach significantly reduces training complexity. Validation on benchmark
problems, including nonlinear ODEs and PDEs, demonstrates that the proposed
method not only achieves superior accuracy but also drastically reduces
computational costs. This work offers a scalable and efficient alternative for
operator learning in scientific computing.
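The core ELM mechanism, reducing training to a least-squares solve, can be sketched as follows; the single-network setup, sizes, and target function are illustrative assumptions, not the paper's branch/trunk DeepONet architecture.

```python
# Minimal ELM sketch (illustrative, not ELM-DeepONets itself): hidden-layer
# weights are drawn at random and frozen; only the linear output weights are
# fit, so training is one least-squares solve instead of backpropagation.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200)[:, None]   # inputs
y = np.sin(np.pi * x)                  # target function to fit

# Random, frozen hidden layer: 100 tanh features.
W = rng.normal(size=(1, 100))
b = rng.normal(size=(1, 100))
H = np.tanh(x @ W + b)                 # (200, 100) feature matrix

# Output weights via least squares -- no backpropagation needed.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)
err = float(np.max(np.abs(H @ beta - y)))
```

Because only `beta` is trained, the cost is a single linear solve, which is the source of the drastic reduction in training complexity the abstract describes.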
|
2501.09396
|
Joint Transmission and Deblurring: A Semantic Communication Approach
Using Events
|
eess.IV cs.CV
|
Deep learning-based joint source-channel coding (JSCC) is emerging as a
promising technology for effective image transmission. However, most existing
approaches focus on transmitting clear images, overlooking real-world
challenges such as motion blur caused by camera shaking or fast-moving objects.
Motion blur often degrades image quality, making transmission and
reconstruction more challenging. Event cameras, which asynchronously record
pixel intensity changes with extremely low latency, have shown great potential
for motion deblurring tasks. However, the efficient transmission of the
abundant data generated by event cameras remains a significant challenge. In
this work, we propose a novel JSCC framework for the joint transmission of
blurry images and events, aimed at achieving high-quality reconstructions under
limited channel bandwidth. This approach is designed as a deblurring
task-oriented JSCC system. Since RGB cameras and event cameras capture the same
scene through different modalities, their outputs contain both shared and
domain-specific information. To avoid transmitting the shared information
twice, we extract and transmit the shared and domain-specific information
separately. At the receiver, the received
signals are processed by a deblurring decoder to generate clear images.
Additionally, we introduce a multi-stage training strategy to train the
proposed model. Simulation results demonstrate that our method significantly
outperforms existing JSCC-based image transmission schemes, addressing motion
blur effectively.
|
2501.09399
|
Fast Searching of Extreme Operating Conditions for Relay Protection
Setting Calculation Based on Graph Neural Network and Reinforcement Learning
|
cs.LG
|
Searching for the Extreme Operating Conditions (EOCs) is one of the core
problems of power system relay protection setting calculation. The current
methods based on brute-force search, heuristic algorithms, and mathematical
programming can hardly meet the requirements of today's power systems in terms
of computation speed due to the drastic changes in operating conditions induced
by renewables and power electronics. This paper proposes an EOC fast search
method, named Graph Dueling Double Deep Q Network (Graph D3QN), which combines
graph neural network and deep reinforcement learning to address this challenge.
First, the EOC search problem is modeled as a Markov decision process, where
the information of the underlying power system is extracted using graph neural
networks, so that the EOC of the system can be found via deep reinforcement
learning. Then, a two-stage Guided Learning and Free Exploration (GLFE)
training framework is constructed to accelerate the convergence speed of
reinforcement learning. Finally, the proposed Graph D3QN method is validated
through case studies of searching maximum fault current for relay protection
setting calculation on the IEEE 39-bus and 118-bus systems. The experimental
results demonstrate that Graph D3QN can reduce the computation time by 10 to
1000 times while guaranteeing the accuracy of the selected EOCs.
|
2501.09400
|
Joint Antenna Selection and Beamforming Design for Active RIS-aided ISAC
Systems
|
cs.IT eess.SP math.IT
|
Active reconfigurable intelligent surface (A-RIS) aided integrated sensing
and communications (ISAC) system has been considered as a promising paradigm to
improve spectrum efficiency. However, massive energy-hungry radio frequency
(RF) chains hinder its large-scale deployment. To address this issue, an
A-RIS-aided ISAC system with antenna selection (AS) is proposed in this work,
where a target is sensed while multiple communication users are served with
specifically selected antennas. Specifically, a cuckoo search-based scheme is
first utilized to select the antennas associated with high-gain channels.
Subsequently, with the properly selected antennas, the weighted sum-rate (WSR)
of the system is optimized under constraints on the radar probing power level
and the power budgets of the A-RIS and the transmitter. To solve the highly non-convex
optimization problem, we develop an efficient algorithm based on weighted
minimum mean square error (WMMSE) and fractional programming (FP). Simulation
results show that the proposed AS scheme and algorithm are effective, reducing
the number of RF chains without significant performance degradation.
|
2501.09403
|
PISCO: Self-Supervised k-Space Regularization for Improved Neural
Implicit k-Space Representations of Dynamic MRI
|
eess.IV cs.CV cs.LG eess.SP physics.med-ph
|
Neural implicit k-space representations (NIK) have shown promising results
for dynamic magnetic resonance imaging (MRI) at high temporal resolutions. Yet,
reducing acquisition time, and thereby available training data, results in
severe performance drops due to overfitting. To address this, we introduce a
novel self-supervised k-space loss function $\mathcal{L}_\mathrm{PISCO}$,
applicable for regularization of NIK-based reconstructions. The proposed loss
function is based on the concept of parallel imaging-inspired self-consistency
(PISCO), enforcing a consistent global k-space neighborhood relationship
without requiring additional data. Quantitative and qualitative evaluations on
static and dynamic MR reconstructions show that integrating PISCO significantly
improves NIK representations. Particularly for high acceleration factors
(R$\geq$54), NIK with PISCO achieves superior spatio-temporal reconstruction
quality compared to state-of-the-art methods. Furthermore, an extensive
analysis of the loss assumptions and stability shows PISCO's potential as a
versatile self-supervised k-space loss function for further applications and
architectures. Code is available at:
https://github.com/compai-lab/2025-pisco-spieker
|
2501.09408
|
On the distribution of the statistical sum related to BSC
|
cs.IT math.IT math.PR
|
The distribution function of a sum of i.i.d. random variables of a special
form is considered. Such a sum describes message posterior probabilities for
random coding over the binary symmetric channel. Close non-asymptotic lower
and upper bounds for that function are derived.
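As a rough illustration only (the paper's specific variable form and analytic bounds are not reproduced here), the distribution function of such a sum of two-valued i.i.d. increments can be estimated by Monte Carlo:

```python
# Generic Monte Carlo sketch: empirical CDF of S_n = sum of n i.i.d.
# two-valued random variables, as arises for log-likelihood sums over a
# binary symmetric channel with crossover probability p. The increment
# values a and b are illustrative placeholders.
import numpy as np

def empirical_cdf_of_sum(n, p, a, b, t, trials=100_000, seed=0):
    """Estimate P(S_n <= t), each summand equal to a w.p. 1-p and b w.p. p."""
    rng = np.random.default_rng(seed)
    flips = rng.random((trials, n)) < p       # True -> increment b
    sums = np.where(flips, b, a).sum(axis=1)
    return float(np.mean(sums <= t))

# Example: P(S_20 <= 0) for +-1 increments with p = 0.3
prob = empirical_cdf_of_sum(20, 0.3, -1.0, 1.0, 0.0)
```

The paper's contribution is tight non-asymptotic analytic bounds on exactly this kind of distribution function, for which a simulation like the above can serve only as a sanity check.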
|
2501.09409
|
mGeNTE: A Multilingual Resource for Gender-Neutral Language and
Translation
|
cs.CL
|
Gender-neutral language reflects societal and linguistic shifts towards
greater inclusivity by avoiding the implication that one gender is the norm
over others. This is particularly relevant for grammatical gender languages,
which heavily encode the gender of terms for human referents and over-rely on
masculine forms, even when gender is unspecified or irrelevant. Language
technologies are known to mirror these inequalities, being affected by a male
bias and perpetuating stereotypical associations when translating into
languages with extensive gendered morphology. In such cases, gender-neutral
language can help avoid undue binary assumptions. However, despite its
importance for creating fairer multi- and cross-lingual technologies, inclusive
language research remains scarce and insufficiently supported in current
resources. To address this gap, we present the multilingual mGeNTE dataset.
Derived from the bilingual GeNTE (Piergentili et al., 2023), mGeNTE extends the
original corpus to include the English-Italian/German/Spanish language pairs.
Since each language pair is English-aligned with gendered and neutral sentences
in the target languages, mGeNTE enables research in both automatic
Gender-Neutral Translation (GNT) and language modelling for three grammatical
gender languages.
|
2501.09410
|
MoE$^2$: Optimizing Collaborative Inference for Edge Large Language
Models
|
cs.NI cs.AI cs.LG
|
Large language models (LLMs) have demonstrated remarkable capabilities across
a wide range of natural language processing tasks. Exploiting the heterogeneous
capabilities of edge LLMs is crucial for diverse emerging applications, as it
enables greater cost-effectiveness and reduced latency. In this work, we
introduce \textit{Mixture-of-Edge-Experts (MoE$^2$)}, a novel collaborative
inference framework for edge LLMs. We formulate the joint gating and expert
selection problem to optimize inference performance under energy and latency
constraints. Unlike conventional MoE problems, LLM expert selection is
significantly more challenging due to the combinatorial nature and the
heterogeneity of edge LLMs across various attributes. To this end, we propose a
two-level expert selection mechanism through which we uncover an
optimality-preserving property of gating parameters across expert selections.
This property enables the decomposition of the training and selection
processes, significantly reducing complexity. Furthermore, we leverage the
objective's monotonicity and design a discrete monotonic optimization algorithm
for optimal expert selection. We implement edge servers with NVIDIA Jetson AGX
Orins and NVIDIA RTX 4090 GPUs, and perform extensive experiments. Our results
validate the performance improvements of various LLMs and show that our
MoE$^2$ method can achieve optimal trade-offs among different delay and energy
budgets, and outperforms baselines under various system resource constraints.
|
2501.09411
|
Towards Robust and Realistic Human Pose Estimation via WiFi Signals
|
cs.CV
|
Robust WiFi-based human pose estimation is a challenging task that bridges
discrete and subtle WiFi signals to human skeletons. This paper revisits this
problem and reveals two critical yet overlooked issues: 1) the cross-domain
gap, i.e., significant variations between source and target domain pose
distributions; and 2) the structural fidelity gap, i.e., predicted skeletal poses
manifest distorted topology, usually with misplaced joints and disproportionate
bone lengths. This paper fills these gaps by reformulating the task into a
novel two-phase framework dubbed DT-Pose: Domain-consistent representation
learning and Topology-constrained Pose decoding. Concretely, we first propose a
temporal-consistent contrastive learning strategy with uniformity
regularization, coupled with self-supervised masking-reconstruction operations,
to enable robust learning of domain-consistent and motion-discriminative
WiFi-specific representations. Beyond this, we introduce a simple yet effective
pose decoder with task prompts, which integrates Graph Convolution Network
(GCN) and Transformer layers to constrain the topology structure of the
generated skeleton by exploring the adjacent-overarching relationships among
human joints. Extensive experiments conducted on various benchmark datasets
highlight the superior performance of our method in tackling these fundamental
challenges in both 2D/3D human pose estimation tasks.
|
2501.09412
|
FASP: Fast and Accurate Structured Pruning of Large Language Models
|
cs.LG
|
The rapid increase in the size of large language models (LLMs) has
significantly escalated their computational and memory demands, posing
challenges for efficient deployment, especially on resource-constrained
devices. Structured pruning has emerged as an effective model compression
method that can reduce these demands while preserving performance. In this
paper, we introduce FASP (Fast and Accurate Structured Pruning), a novel
structured pruning framework for LLMs that emphasizes both speed and accuracy.
FASP employs a distinctive pruning structure that interlinks sequential layers,
allowing for the removal of columns in one layer while simultaneously
eliminating corresponding rows in the preceding layer without incurring
additional performance loss. The pruning metric, inspired by Wanda, is
computationally efficient and effectively selects components to prune.
Additionally, we propose a restoration mechanism that enhances model fidelity
by adjusting the remaining weights post-pruning. We evaluate FASP on the OPT
and LLaMA model families, demonstrating superior performance in terms of
perplexity and accuracy on downstream tasks compared to state-of-the-art
methods. Our approach achieves significant speed-ups, pruning models such as
OPT-125M in 17 seconds and LLaMA-30B in 15 minutes on a single NVIDIA RTX 4090
GPU, making it a highly practical solution for optimizing LLMs.
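The interlinked pruning structure can be illustrated on a pair of purely linear layers (an illustrative sketch, not FASP's Wanda-inspired metric or its restoration mechanism): deleting a column in one weight matrix and the matching row in the next removes exactly one hidden unit's contribution, leaving the composition of the remaining units unchanged.

```python
# Sketch of interlinked structured pruning on two linear layers. Removing
# column j of W1 together with row j of W2 deletes hidden unit j exactly;
# real transformer blocks add nonlinearities, so this identity is the
# idealized linear case.
import numpy as np

rng = np.random.default_rng(1)
d, h, o, j = 4, 6, 3, 2
x = rng.normal(size=(1, d))
W1 = rng.normal(size=(d, h))   # layer producing hidden activations
W2 = rng.normal(size=(h, o))   # following layer consuming them

full = x @ W1 @ W2
W1p = np.delete(W1, j, axis=1)             # drop column j
W2p = np.delete(W2, j, axis=0)             # drop corresponding row j
pruned = x @ W1p @ W2p

# The difference is exactly hidden unit j's rank-one contribution.
unit_j = (x @ W1[:, j:j + 1]) @ W2[j:j + 1, :]
```

Because the removed rows and columns are paired, the pruned matrices stay shape-consistent and no extra reconstruction loss is introduced by the structure itself, which is the property the abstract highlights.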
|
2501.09420
|
Dynamic Neural Style Transfer for Artistic Image Generation using VGG19
|
cs.CV cs.AI cs.LG eess.IV
|
Throughout history, humans have created remarkable works of art, but
artificial intelligence has only recently started to make strides in generating
visually compelling art. Breakthroughs in the past few years have focused on
using convolutional neural networks (CNNs) to separate and manipulate the
content and style of images, applying texture synthesis techniques.
Nevertheless, a number of current techniques continue to encounter obstacles,
including lengthy processing times, restricted choices of style images, and the
inability to modify the weight ratio of styles. We propose a neural style
transfer system that adds various artistic styles to a desired image to
address these constraints, allowing flexible adjustments to style weight
ratios and reducing processing time. The system uses the VGG19 model for feature
extraction, ensuring high-quality, flexible stylization without compromising
content integrity.
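The Gram-matrix style representation commonly used with VGG19 features can be sketched as follows; this is the standard formulation from the style-transfer literature, not necessarily the proposed system's exact losses, and random arrays stand in for VGG19 feature maps.

```python
# Standard style-transfer building block (illustrative): a layer's style is
# summarized by the Gram matrix of its feature maps, and the style loss is
# the squared difference between Gram matrices. The 1/(C*H*W) normalization
# is one common convention.
import numpy as np

def gram_matrix(features):
    """features: (C, H, W) feature maps -> normalized (C, C) Gram matrix."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(target_feats, style_feats):
    """Mean squared difference between the two Gram matrices."""
    diff = gram_matrix(target_feats) - gram_matrix(style_feats)
    return float(np.mean(diff ** 2))
```

Weighting each style image's `style_loss` by a user-chosen coefficient is one natural way to realize the adjustable style weight ratios the abstract mentions.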
|
2501.09425
|
Vision-Language Models Do Not Understand Negation
|
cs.CV cs.CL
|
Many practical vision-language applications require models that understand
negation, e.g., when using natural language to retrieve images which contain
certain objects but not others. Despite advancements in vision-language models
(VLMs) through large-scale training, their ability to comprehend negation
remains underexplored. This study addresses the question: how well do current
VLMs understand negation? We introduce NegBench, a new benchmark designed to
evaluate negation understanding across 18 task variations and 79k examples
spanning image, video, and medical datasets. The benchmark consists of two core
tasks designed to evaluate negation understanding in diverse multimodal
settings: Retrieval with Negation and Multiple Choice Questions with Negated
Captions. Our evaluation reveals that modern VLMs struggle significantly with
negation, often performing at chance level. To address these shortcomings, we
explore a data-centric approach wherein we finetune CLIP models on large-scale
synthetic datasets containing millions of negated captions. We show that this
approach can result in a 10% increase in recall on negated queries and a 40%
boost in accuracy on multiple-choice questions with negated captions.
|
2501.09426
|
AutoCBT: An Autonomous Multi-agent Framework for Cognitive Behavioral
Therapy in Psychological Counseling
|
cs.CL
|
Traditional in-person psychological counseling remains primarily niche, often
chosen by individuals with psychological issues, while online automated
counseling offers a potential solution for those hesitant to seek help due to
feelings of shame. Cognitive Behavioral Therapy (CBT) is an essential and
widely used approach in psychological counseling. The advent of large language
models (LLMs) and agent technology enables automatic CBT diagnosis and
treatment. However, current LLM-based CBT systems use agents with a fixed
structure, which limits their self-optimization capabilities, or provide
hollow, unhelpful suggestions due to redundant response patterns. In this work, we
utilize Quora-like and YiXinLi single-round consultation models to build a
general agent framework that generates high-quality responses for single-turn
psychological consultation scenarios. We use a bilingual dataset to evaluate
the quality of single-response consultations generated by each framework. Then,
we incorporate dynamic routing and supervisory mechanisms inspired by real
psychological counseling to construct a CBT-oriented autonomous multi-agent
framework, demonstrating its general applicability. Experimental results
indicate that AutoCBT can provide higher-quality automated psychological
counseling services.
|
2501.09428
|
AugRefer: Advancing 3D Visual Grounding via Cross-Modal Augmentation and
Spatial Relation-based Referring
|
cs.CV
|
3D visual grounding (3DVG), which aims to correlate a natural language
description with the target object within a 3D scene, is a significant yet
challenging task. Despite recent advancements in this domain, existing
approaches commonly encounter a shortage: a limited amount and diversity of
text-3D pairs available for training. Moreover, they fall short in effectively
leveraging different contextual clues (e.g., rich spatial relations within the
3D visual space) for grounding. To address these limitations, we propose
AugRefer, a novel approach for advancing 3D visual grounding. AugRefer
introduces cross-modal augmentation designed to extensively generate diverse
text-3D pairs by placing objects into 3D scenes and creating accurate and
semantically rich descriptions using foundation models. Notably, the resulting
pairs can be utilized by any existing 3DVG methods for enriching their training
data. Additionally, AugRefer presents a language-spatial adaptive decoder that
effectively adapts the potential referring objects based on the language
description and various 3D spatial relations. Extensive experiments on three
benchmark datasets clearly validate the effectiveness of AugRefer.
|
2501.09429
|
ADAGE: A generic two-layer framework for adaptive agent based modelling
|
cs.MA cs.AI cs.LG econ.GN q-fin.CP q-fin.EC
|
Agent-based models (ABMs) are valuable for modelling complex, potentially
out-of-equilibria scenarios. However, ABMs have long suffered from the Lucas
critique, stating that agent behaviour should adapt to environmental changes.
Furthermore, the environment itself often adapts to these behavioural changes,
creating a complex bi-level adaptation problem. Recent progress integrating
multi-agent reinforcement learning into ABMs introduces adaptive agent
behaviour, beginning to address the first part of this critique. However,
these approaches remain relatively ad hoc, lacking a general formulation, and
do not tackle the second aspect: simultaneously adapting environment-level
characteristics in addition to the agent behaviours. In
this work, we develop a generic two-layer framework for ADaptive AGEnt based
modelling (ADAGE) for addressing these problems. This framework formalises the
bi-level problem as a Stackelberg game with conditional behavioural policies,
providing a consolidated framework for adaptive agent-based modelling based on
solving a coupled set of non-linear equations. We demonstrate how this generic
approach encapsulates several common (previously viewed as distinct) ABM tasks,
such as policy design, calibration, scenario generation, and robust behavioural
learning under one unified framework. We provide example simulations on
multiple complex economic and financial environments, showing the strength of
the novel framework under these canonical settings, addressing long-standing
critiques of traditional ABMs.
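The bi-level adaptation loop can be caricatured as a fixed point of coupled leader (environment) and follower (agent) updates; the update rules below are invented toy dynamics for illustration, not the paper's economic or financial environments.

```python
# Toy bi-level (Stackelberg-style) fixed-point iteration: an environment
# parameter theta and an agent response a(theta) are updated until the
# coupled equations are mutually consistent. Both update rules are assumed
# contractions chosen purely for the demo.

def follower_best_response(theta):
    # Agents adapt their behaviour to the current environment.
    return 0.5 * theta

def leader_update(a):
    # The environment adapts to the aggregate agent behaviour.
    return 1.0 - 0.5 * a

theta = 0.0
for _ in range(100):
    a = follower_best_response(theta)
    theta = leader_update(a)

# Fixed point of theta = 1 - 0.25 * theta  ->  theta = 0.8, a = 0.4
```

Solving such a coupled pair of equations to mutual consistency is, in miniature, the "bi-level problem as a Stackelberg game with conditional behavioural policies" that ADAGE formalises.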
|
2501.09430
|
HpC: A Calculus for Hybrid and Mobile Systems -- Full Version
|
cs.PL cs.LO cs.NI cs.SY eess.SY
|
Networked cybernetic and physical systems of the Internet of Things (IoT)
immerse civilian and industrial infrastructures into an interconnected and
dynamic web of hybrid and mobile devices. The key feature of such systems is
the hybrid and tight coupling of mobile and pervasive discrete communications
in a continuously evolving environment (discrete computations with predominant
continuous dynamics). In the aim of ensuring the correctness and reliability of
such heterogeneous infrastructures, we introduce the hybrid {\pi}-calculus
(HpC), to formally capture mobility, pervasiveness, and hybridisation in
infrastructures where the network topology and its communicating entities
evolve continuously in the physical world. The {\pi}-calculus proposed by Robin
Milner et al. is a process calculus that can model mobile communications and
computations in a very elegant manner. The HpC we propose is a conservative
extension of the classical {\pi}-calculus, i.e., the extension is ``minimal'',
and yet describes the mobility, time, and physics of systems, while allowing
all theoretical results (e.g. bisimulation) to be lifted to the context of
that extension.
We showcase the HpC by considering a realistic handover protocol among mobile
devices.
|
2501.09431
|
A Survey on Responsible LLMs: Inherent Risk, Malicious Use, and
Mitigation Strategy
|
cs.AI cs.CL cs.CR cs.CY
|
While large language models (LLMs) present significant potential for
supporting numerous real-world applications and delivering positive social
impacts, they still face significant challenges in terms of the inherent risk
of privacy leakage, hallucinated outputs, and value misalignment, and can be
maliciously used to generate toxic content for unethical purposes after being
jailbroken. Therefore, in this survey, we present a comprehensive review of
recent advancements aimed at mitigating these issues, organized across the four
phases of LLM development and usage: data collecting and pre-training,
fine-tuning and alignment, prompting and reasoning, and post-processing and
auditing. We elaborate on the recent advances for enhancing the performance of
LLMs in terms of privacy protection, hallucination reduction, value alignment,
toxicity elimination, and jailbreak defenses. In contrast to previous surveys
that focus on a single dimension of responsible LLMs, this survey presents a
unified framework that encompasses these diverse dimensions, providing a
comprehensive view of enhancing LLMs to better serve real-world applications.
|
2501.09433
|
CaPa: Carve-n-Paint Synthesis for Efficient 4K Textured Mesh Generation
|
cs.CV cs.GR
|
The synthesis of high-quality 3D assets from textual or visual inputs has
become a central objective in modern generative modeling. Despite the
proliferation of 3D generation algorithms, they frequently grapple with
challenges such as multi-view inconsistency, slow generation times, low
fidelity, and surface reconstruction problems. While some studies have
addressed some of these issues, a comprehensive solution remains elusive. In
this paper, we introduce \textbf{CaPa}, a carve-and-paint framework that
generates high-fidelity 3D assets efficiently. CaPa employs a two-stage
process, decoupling geometry generation from texture synthesis. Initially, a 3D
latent diffusion model generates geometry guided by multi-view inputs, ensuring
structural consistency across perspectives. Subsequently, leveraging a novel,
model-agnostic Spatially Decoupled Attention, the framework synthesizes
high-resolution textures (up to 4K) for a given geometry. Furthermore, we
propose a 3D-aware occlusion inpainting algorithm that fills untextured
regions, resulting in cohesive results across the entire model. This pipeline
generates high-quality 3D assets in less than 30 seconds, providing
ready-to-use outputs for commercial applications. Experimental results
demonstrate that CaPa excels in both texture fidelity and geometric stability,
establishing a new standard for practical, scalable 3D asset generation.
|
2501.09436
|
Scaling up self-supervised learning for improved surgical foundation
models
|
cs.CV
|
Foundation models have revolutionized computer vision by achieving vastly
superior performance across diverse tasks through large-scale pretraining on
extensive datasets. However, their application in surgical computer vision has
been limited. This study addresses this gap by introducing SurgeNetXL, a novel
surgical foundation model that sets a new benchmark in surgical computer
vision. Trained on the largest reported surgical dataset to date, comprising
over 4.7 million video frames, SurgeNetXL achieves consistent top-tier
performance across six datasets spanning four surgical procedures and three
tasks, including semantic segmentation, phase recognition, and critical view of
safety (CVS) classification. Compared with the best-performing surgical
foundation models, SurgeNetXL shows mean improvements of 2.4, 9.0, and 12.6
percent for semantic segmentation, phase recognition, and CVS classification,
respectively. Additionally, SurgeNetXL outperforms the best-performing
ImageNet-based variants by 14.4, 4.0, and 1.6 percent in the respective tasks.
In addition to advancing model performance, this study provides key insights
into scaling pretraining datasets, extending training durations, and optimizing
model architectures specifically for surgical computer vision. These findings
pave the way for improved generalizability and robustness in data-scarce
scenarios, offering a comprehensive framework for future research in this
domain. All models and a subset of the SurgeNetXL dataset, including over 2
million video frames, are publicly available at:
https://github.com/TimJaspers0801/SurgeNet.
|
2501.09444
|
Solving the Unsolvable: Translating Case Law in Hong Kong
|
cs.CL cs.AI cs.LG cs.MA
|
This paper addresses the challenges of translating case law under Hong Kong's
bilingual legal system. It highlights the initial success of translating all
written statutes into Chinese before the 1997 handover, a task mandated by the
Basic Law. The effort involved significant collaboration among legal,
linguistic, and translation experts, resulting in a comprehensive and
culturally appropriate bilingual legal system. However, translating case law
remains a significant challenge due to the sheer volume and continuous growth
of judicial decisions. The paper critiques the government's and judiciary's
sporadic and uncoordinated efforts to translate case law, contrasting them with
the thorough approach previously taken for statute translation. Although the
government acknowledges the importance of legal bilingualism, it lacks a
sustainable strategy for translating case law. The Judiciarys position that
translating all judgments is unnecessary, unrealistic, and not cost-effectiveis
analyzed and critiqued for its impact on legal transparency and public trust. A
proposed solution involves leveraging machine translation technology through a
human-machine interactive translation platform, which undergoes two major
transitions. Initially based on a neural model, the platform transitions to
using a large language model for improved translation accuracy. Furthermore, it
evolves from a single-agent system to a multi-agent system, incorporating
Translator, Annotator, and Proofreader agents. This multi-agent approach,
supported by a grant, aims to facilitate efficient, high-quality translation of
judicial judgments by integrating advanced artificial intelligence and
continuous feedback mechanisms, thus better meeting the needs of a bilingual
legal system.
|
2501.09446
|
Double Visual Defense: Adversarial Pre-training and Instruction Tuning
for Improving Vision-Language Model Robustness
|
cs.CV
|
This paper investigates the robustness of vision-language models against
adversarial visual perturbations and introduces a novel ``double visual
defense'' to enhance this robustness. Unlike previous approaches that resort to
lightweight adversarial fine-tuning of a pre-trained CLIP model, we perform
large-scale adversarial vision-language pre-training from scratch using
web-scale data. We then strengthen the defense by incorporating adversarial
visual instruction tuning. The resulting models from each stage, $\Delta$CLIP
and $\Delta^2$LLaVA, show substantially enhanced zero-shot robustness and set a
new state-of-the-art in adversarial defense for vision-language models. For
example, the adversarial robustness of $\Delta$CLIP surpasses that of the
previous best models on ImageNet-1k by ~20%. Similarly, compared to prior art,
$\Delta^2$LLaVA brings a ~30% robustness improvement to the image captioning
task and a ~20% robustness improvement to the visual question answering task.
Furthermore, our
models exhibit stronger zero-shot recognition capability, fewer hallucinations,
and superior reasoning performance compared to baselines. Our project page is
https://doublevisualdefense.github.io/.
|
2501.09450
|
Real-Time Generation of Near-Minimum-Energy Trajectories via
Constraint-Informed Residual Learning
|
cs.RO
|
Industrial robotics demands significant energy to operate, making
energy-reduction methodologies increasingly important. Strategies for planning
minimum-energy trajectories typically involve solving nonlinear optimal control
problems (OCPs), which rarely cope with real-time requirements. In this paper,
we propose a paradigm for generating near-minimum-energy trajectories for
manipulators by learning from optimal solutions. Our paradigm leverages a
residual learning approach, which embeds boundary conditions while focusing on
learning only the adjustments needed to steer a standard solution to an optimal
one. Compared to a computationally expensive OCP-based planner, our paradigm
achieves 87.3% of the performance near the training dataset and 50.8% far from
the dataset, while being two to three orders of magnitude faster.
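The boundary-condition-embedding idea in this residual scheme can be sketched as follows. This is an illustrative construction, not the paper's actual network or baseline: the function name `shaped_trajectory` and the `s*(1-s)` scaling are our assumptions.

```python
import numpy as np

def shaped_trajectory(q0, qT, t, residual):
    """Combine a boundary-satisfying baseline with a learned correction.

    q0, qT:   start / end joint positions (floats or arrays)
    t:        time samples, with t[0] = 0 and t[-1] = T
    residual: network output sampled at t (same shape as t)
    """
    s = t / t[-1]                      # normalized time in [0, 1]
    baseline = q0 + (qT - q0) * s      # straight-line motion hitting both boundaries
    # Scale the correction by s*(1-s) so it vanishes at the endpoints:
    # boundary conditions hold by construction, and the network only
    # learns the adjustment that steers the baseline toward optimality.
    return baseline + s * (1 - s) * residual
```

With this shaping, the residual model is free to deform the interior of the trajectory without ever violating the start and end states.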
|
2501.09451
|
Scaling Graph-Based Dependency Parsing with Arc Vectorization and
Attention-Based Refinement
|
cs.CL
|
We propose a novel architecture for graph-based dependency parsing that
explicitly constructs vectors from which both arcs and labels are scored. Our
method addresses key limitations of the standard two-pipeline approach by
unifying arc scoring and labeling into a single network, reducing scalability
issues caused by the information bottleneck and lack of parameter sharing.
Additionally, our architecture uses transformer layers to overcome limited arc
interactions, efficiently simulating higher-order dependencies.
Experiments on PTB and UD show that our model outperforms state-of-the-art
parsers in both accuracy and efficiency.
|
2501.09456
|
On the Relation between Optical Aperture and Automotive Object Detection
|
cs.CV
|
We explore the impact of aperture size and shape on automotive camera systems
for deep-learning-based tasks like traffic sign recognition and light state
detection. A method is proposed to simulate optical effects using the point
spread function (PSF), enhancing realism and reducing the domain gap between
synthetic and real-world images. Computer-generated scenes are refined with
this technique to model optical distortions and improve simulation accuracy.
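One common way to realize such PSF-based simulation is the Fraunhofer approximation, where the PSF is the squared magnitude of the Fourier transform of the aperture (pupil) mask and the scene is then convolved with it. The sketch below is our illustration of that standard optics idea, not the paper's actual pipeline; all function names are assumptions.

```python
import numpy as np

def psf_from_aperture(aperture):
    # Fraunhofer approximation: the PSF is the squared magnitude of the
    # Fourier transform of the aperture (pupil) function, normalized to sum 1.
    field = np.fft.fftshift(np.fft.fft2(aperture))
    psf = np.abs(field) ** 2
    return psf / psf.sum()

def apply_psf(image, psf):
    # Frequency-domain (circular) convolution of the scene with the PSF.
    H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * H))
```

Changing the aperture mask (e.g., a smaller disk, or a non-circular shape) directly changes the simulated blur, which is exactly the aperture-size/shape effect the study varies.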
|
2501.09459
|
Teaching Wav2Vec2 the Language of the Brain
|
cs.LG
|
The decoding of continuously spoken speech from neuronal activity has the
potential to become an important clinical solution for paralyzed patients. Deep
Learning Brain Computer Interfaces (BCIs) have recently successfully mapped
neuronal activity to text contents in subjects who attempted to formulate
speech. However, only small BCI datasets are available. In contrast, labeled
data and pre-trained models for the closely related task of speech recognition
from audio are widely available. One such model is Wav2Vec2 which has been
trained in a self-supervised fashion to create meaningful representations of
speech audio data. In this study, we show that patterns learned by Wav2Vec2 are
transferable to brain data. Specifically, we replace its audio feature
extractor with an untrained Brain Feature Extractor (BFE) model. We then
execute full fine-tuning with pre-trained weights for Wav2Vec2, training ``from
scratch'' without pre-trained weights, as well as freezing a pre-trained
Wav2Vec2 and training only the BFE, each for 45 different BFE architectures.
Across these experiments, the best run is from full fine-tuning with
pre-trained weights, achieving a Character Error Rate (CER) of 18.54\%,
outperforming the best training-from-scratch run by 20.46 and the frozen
Wav2Vec2 training by 15.92 percentage points. These results indicate that
knowledge transfer from audio speech recognition to brain decoding is possible
and significantly improves brain decoding performance for the same
architectures. Related source code is available at
https://github.com/tfiedlerdev/Wav2Vec2ForBrain.
|
2501.09460
|
Normal-NeRF: Ambiguity-Robust Normal Estimation for Highly Reflective
Scenes
|
cs.CV
|
Neural Radiance Fields (NeRF) often struggle with reconstructing and
rendering highly reflective scenes. Recent advancements have developed various
reflection-aware appearance models to enhance NeRF's capability to render
specular reflections. However, the robust reconstruction of highly reflective
scenes is still hindered by the inherent shape ambiguity on specular surfaces.
Existing methods typically rely on additional geometry priors to regularize the
shape prediction, but this can lead to oversmoothed geometry in complex scenes.
Observing the critical role of surface normals in parameterizing reflections,
we introduce a transmittance-gradient-based normal estimation technique that
remains robust even under ambiguous shape conditions. Furthermore, we propose a
dual activated densities module that effectively bridges the gap between smooth
surface normals and sharp object boundaries. Combined with a reflection-aware
appearance model, our proposed method achieves robust reconstruction and
high-fidelity rendering of scenes featuring both highly specular reflections
and intricate geometric structures. Extensive experiments demonstrate that our
method outperforms existing state-of-the-art methods on various datasets.
|
2501.09464
|
Pruning for Sparse Diffusion Models based on Gradient Flow
|
cs.LG
|
Diffusion Models (DMs) have impressive capabilities among generation models,
but are limited to slower inference speeds and higher computational costs.
Previous works utilize one-shot structure pruning to derive lightweight DMs
from pre-trained ones, but this approach often leads to a significant drop in
generation quality and may result in the removal of crucial weights. Thus, we
propose an iterative pruning method based on gradient flow, comprising a
gradient-flow pruning process and a gradient-flow pruning criterion. We
employ a progressive soft pruning strategy to maintain the continuity of the
mask matrix and guide it along the gradient flow of the energy function based
on the pruning criterion in sparse space, thereby avoiding the sudden
information loss typically caused by one-shot pruning. The gradient-flow-based
criterion prunes parameters whose removal increases the gradient norm of the
loss function, enabling fast convergence of the pruned model during the
iterative pruning stage. Our extensive experiments on widely used datasets demonstrate
that our method achieves superior performance in efficiency and consistency
with pre-trained models.
|
2501.09465
|
RE-POSE: Synergizing Reinforcement Learning-Based Partitioning and
Offloading for Edge Object Detection
|
cs.CV cs.AI cs.DC
|
Object detection plays a crucial role in smart video analysis, with
applications ranging from autonomous driving and security to smart cities.
However, achieving real-time object detection on edge devices presents
significant challenges due to their limited computational resources and the
high demands of deep neural network (DNN)-based detection models, particularly
when processing high-resolution video. Conventional strategies, such as input
down-sampling and network up-scaling, often compromise detection accuracy for
faster performance or lead to higher inference latency. To address these
issues, this paper introduces RE-POSE, a Reinforcement Learning (RL)-Driven
Partitioning and Edge Offloading framework designed to optimize the
accuracy-latency trade-off in resource-constrained edge environments. Our
approach features an RL-Based Dynamic Clustering Algorithm (RL-DCA) that
partitions video frames into non-uniform blocks based on object distribution
and the computational characteristics of DNNs. Furthermore, a parallel edge
offloading scheme is implemented to distribute these blocks across multiple
edge servers for concurrent processing. Experimental evaluations show that
RE-POSE significantly enhances detection accuracy and reduces inference
latency, surpassing existing methods.
|
2501.09466
|
DEFOM-Stereo: Depth Foundation Model Based Stereo Matching
|
cs.CV
|
Stereo matching is a key technique for metric depth estimation in computer
vision and robotics. Real-world challenges like occlusion and texture-less
regions hinder accurate disparity estimation from binocular matching cues. Recently,
monocular relative depth estimation has shown remarkable generalization using
vision foundation models. Thus, to facilitate robust stereo matching with
monocular depth cues, we incorporate a robust monocular relative depth model
into the recurrent stereo-matching framework, building a new framework for
depth foundation model-based stereo-matching, DEFOM-Stereo. In the feature
extraction stage, we construct the combined context and matching feature
encoder by integrating features from conventional CNNs and DEFOM. In the update
stage, we use the depth predicted by DEFOM to initialize the recurrent
disparity and introduce a scale update module to refine the disparity at the
correct scale. DEFOM-Stereo is verified to have performance comparable to
state-of-the-art (SOTA) methods on the Scene Flow dataset and notably shows much
stronger zero-shot generalization. Moreover, DEFOM-Stereo achieves SOTA
performance on the KITTI 2012, KITTI 2015, Middlebury, and ETH3D benchmarks,
ranking 1st on many metrics. In the joint evaluation under the robust vision
challenge, our model simultaneously outperforms previous models on the
individual benchmarks. Both results demonstrate the outstanding capabilities of
the proposed model.
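To illustrate why a monocular *relative* depth map needs scale alignment before it can initialize disparity, a common simple approach is a least-squares scale-and-shift fit against a few pixels with known disparity. This sketch is our illustration of that generic alignment idea, not the paper's scale update module; the function name and signature are assumptions.

```python
import numpy as np

def align_scale_shift(rel_depth, known_disp, mask):
    """Fit disparity ~ a * rel_depth + b over pixels where disparity is known.

    rel_depth:  monocular relative (inverse) depth map, shape (H, W)
    known_disp: disparity values, valid where mask is True
    mask:       boolean array selecting the pixels used for the fit
    """
    x = rel_depth[mask].ravel()
    y = known_disp[mask].ravel()
    A = np.stack([x, np.ones_like(x)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    # Broadcast the fitted scale/shift to the whole map as an initial disparity.
    return a * rel_depth + b
```

In a recurrent stereo framework, such an aligned map could serve as the initial disparity that the iterative updates then refine.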
|
2501.09468
|
Sensorimotor Control Strategies for Tactile Robotics
|
cs.RO
|
How are robots becoming smarter at interacting with their surroundings?
Recent advances have reshaped how robots use tactile sensing to perceive and
engage with the world. Tactile sensing is a game-changer, allowing robots to
embed sensorimotor control strategies to interact with complex environments and
skillfully handle heterogeneous objects. Such control frameworks plan
contact-driven motions while staying responsive to sudden changes. We review
the latest methods for building perception and control systems in tactile
robotics while offering practical guidelines for their design and
implementation. We also address key challenges to shape the future of
intelligent robots.
|
2501.09469
|
Predicting Air Temperature from Volumetric Urban Morphology with Machine
Learning
|
cs.LG cs.AI
|
In this study, we first introduce a method that converts CityGML data into
voxels efficiently and quickly at high resolution for large-scale datasets such
as cities, at the cost of some building detail. This overcomes the limitations
of previous voxelization methodologies, which have been computationally
intensive and inefficient at transforming large-scale urban areas into
high-resolution voxel representations. These voxelized 3D city
data from multiple cities and corresponding air temperature data are used to
develop a machine learning model. Before model training, Gaussian blurring
is applied to the input data to account for spatial relationships; as a result,
the correlation between air temperature and volumetric building morphology also
increases after blurring. After training, the prediction results are evaluated
not only with Mean Squared Error (MSE) but also with image similarity metrics
such as the Structural Similarity Index Measure (SSIM) and Learned Perceptual
Image Patch Similarity (LPIPS), which can capture spatial relations during
evaluation. This trained model is
capable of predicting the spatial distribution of air temperature using the
building volume information of the corresponding pixel as input. By doing so, this
research aims to assist urban planners in incorporating environmental
parameters into their planning strategies, thereby facilitating more
sustainable and inhabitable urban environments.
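The Gaussian-blurring preprocessing step can be sketched with a small separable filter. This is a numpy-only illustration; the study's actual kernel size and sigma are not specified here, so the values below are assumptions.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    # 1D Gaussian kernel, normalized to sum to 1
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def blur2d(grid, sigma=1.0, radius=2):
    """Separable Gaussian blur so each cell reflects its neighbourhood.

    grid: 2D array, e.g. per-pixel building volume.
    """
    k = gaussian_kernel(sigma, radius)
    padded = np.pad(grid, radius, mode="edge")
    # Convolve along rows, then along columns (Gaussian is separable).
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)
```

Feeding the blurred building-volume grid (instead of the raw one) to the regressor lets each input pixel encode information about its surroundings, which is the stated motivation for the blurring step.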
|
2501.09480
|
Utilizing AI Language Models to Identify Prognostic Factors for Coronary
Artery Disease: A Study in Mashhad Residents
|
cs.LG
|
Background: Understanding cardiovascular disease risk
factors, the leading global cause of mortality, is crucial for influencing its
etiology, prevalence, and treatment. This study aims to evaluate prognostic
markers for coronary artery disease in Mashhad using Naive Bayes, REP Tree,
J48, CART, and CHAID algorithms. Methods:
Using data from the 2009 MASHAD STUDY, prognostic factors for coronary artery
disease were determined with Naive Bayes, REP Tree, J48, CART, CHAID, and
Random Forest algorithms using R 3.5.3 and WEKA 3.9.4. Model efficiency was
compared by sensitivity, specificity, and accuracy. Cases were patients with
coronary artery disease; each had three controls (940 in total). Results:
Prognostic factors for coronary artery disease in Mashhad residents varied by
algorithm. CHAID identified age, myocardial infarction history, and
hypertension. CART included depression score and physical activity. REP added
education level and anxiety score. NB included diabetes and family history. J48
highlighted father's heart disease and weight loss. CHAID had the highest
accuracy (0.80).
Conclusion:
Key prognostic factors for coronary artery disease in CART and CHAID models
include age, myocardial infarction history, hypertension, depression score,
physical activity, and BMI. NB, REP Tree, and J48 identified numerous factors.
CHAID had the highest accuracy, sensitivity, and specificity. CART offers
simpler interpretation, aiding physicians and paramedics in model selection
based on specific needs. Keywords: RF, Na\"ive Bayes, REP, J48 algorithms,
Coronary Artery Disease (CAD).
|
2501.09481
|
MonoSOWA: Scalable monocular 3D Object detector Without human
Annotations
|
cs.CV cs.AI cs.LG
|
Detecting the three-dimensional position and orientation of objects using a
single RGB camera is a foundational task in computer vision with many important
applications. Traditionally, 3D object detection methods are trained in a
fully-supervised setup, requiring vast amounts of human annotations, which are
laborious, costly, and do not scale well with the ever-increasing amounts of
data being captured.
In this paper, we present the first method to train 3D object detectors for
monocular RGB cameras without domain-specific human annotations, thus making
orders of magnitude more data available for training. Thanks to newly proposed
Canonical Object Space, the method can not only exploit data across a variety
of datasets and camera setups to train a single 3D detector, but unlike
previous work it also works out of the box in previously unseen camera setups.
All this is crucial for practical applications, where the data and cameras are
extremely heterogeneous.
The method is evaluated on two standard autonomous driving datasets, where it
outperforms previous works, which, unlike our method, still rely on 2D human
annotations.
|
2501.09484
|
Exploring the Inquiry-Diagnosis Relationship with Advanced Patient
Simulators
|
cs.CL
|
Online medical consultation (OMC) restricts doctors to gathering patient
information solely through inquiries, making the already complex sequential
decision-making process of diagnosis even more challenging. Recently, the rapid
advancement of large language models has demonstrated a significant potential
to transform OMC. However, most studies have primarily focused on improving
diagnostic accuracy under conditions of relatively sufficient information,
while paying limited attention to the "inquiry" phase of the consultation
process. This lack of focus has left the relationship between "inquiry" and
"diagnosis" insufficiently explored. In this paper, we first extract real
patient interaction strategies from authentic doctor-patient conversations and
use these strategies to guide the training of a patient simulator that closely
mirrors real-world behavior. By inputting medical records into our patient
simulator to simulate patient responses, we conduct extensive experiments to
explore the relationship between "inquiry" and "diagnosis" in the consultation
process. Experimental results demonstrate that inquiry and diagnosis adhere to
Liebig's law: poor inquiry quality limits the effectiveness of diagnosis,
regardless of diagnostic capability, and vice versa. Furthermore, the
experiments reveal significant differences in the inquiry performance of
various models. To investigate this phenomenon, we categorize the inquiry
process into four types: (1) chief complaint inquiry; (2) specification of
known symptoms; (3) inquiry about accompanying symptoms; and (4) gathering
family or medical history. We analyze the distribution of inquiries across the
four types for different models to explore the reasons behind their significant
performance differences. We plan to open-source the weights and related code of
our patient simulator at https://github.com/LIO-H-ZEN/PatientSimulator.
|
2501.09485
|
The Devil is in the Details: Simple Remedies for Image-to-LiDAR
Representation Learning
|
cs.CV
|
LiDAR is a crucial sensor in autonomous driving, commonly used alongside
cameras. By exploiting this camera-LiDAR setup and recent advances in image
representation learning, prior studies have shown the promising potential of
image-to-LiDAR distillation. These prior arts focus on the designs of their own
losses to effectively distill the pre-trained 2D image representations into a
3D model. However, the other parts of the designs have been surprisingly
unexplored. We find that fundamental design elements, e.g., the LiDAR
coordinate system, quantization according to the existing input interface, and
data utilization, are more critical than developing loss functions, which have
been overlooked in prior works. In this work, we show that simple fixes to
these designs notably outperform existing methods by 16% in 3D semantic
segmentation on the nuScenes dataset and 13% in 3D object detection on the
KITTI dataset in downstream task performance. We focus on overlooked design
choices along the spatial and temporal axes. Spatially, prior work has used
cylindrical coordinates and voxel sizes without considering the side effects
they yield with the commonly deployed sparse convolution layer input interface,
leading to spatial quantization errors in 3D models. Temporally, existing work
has avoided cumbersome data curation by discarding unsynced data, limiting the
use to only the small portion of data that is temporally synced across sensors.
We analyze these effects and propose simple solutions for each overlooked
aspect.
|
2501.09490
|
Comparison of Various SLAM Systems for Mobile Robot in an Indoor
Environment
|
cs.RO cs.CV
|
This article presents a comparative analysis of mobile robot trajectories
computed by various ROS-based SLAM systems. To this end, we developed a
prototype of a mobile robot with common sensors: a 2D lidar, a monocular camera,
and a ZED stereo camera. We then conducted experiments in a typical office
environment
and collected data from all sensors, running all tested SLAM systems based on
the acquired dataset. We studied the following SLAM systems: (a) 2D
lidar-based: GMapping, Hector SLAM, Cartographer; (b) monocular camera-based:
Large Scale Direct monocular SLAM (LSD SLAM), ORB SLAM, Direct Sparse Odometry
(DSO); and (c) stereo camera-based: ZEDfu, Real-Time Appearance-Based Mapping
(RTAB map), ORB SLAM, Stereo Parallel Tracking and Mapping (S-PTAM). Since all
SLAM methods were tested on the same dataset we compared results for different
SLAM systems with appropriate metrics, demonstrating encouraging results for
lidar-based Cartographer SLAM, Monocular ORB SLAM and Stereo RTAB Map methods.
|
2501.09493
|
Large Language Models as Evaluators for Conversational Recommender
Systems: Benchmarking System Performance from a User-Centric Perspective
|
cs.IR
|
Conversational recommender systems (CRS) involve both recommendation and
dialogue tasks, which makes their evaluation a unique challenge. Although past
research has analyzed various factors that may affect user satisfaction with
CRS interactions from the perspective of user studies, few evaluation metrics
for CRS have been proposed. Recent studies have shown that LLMs can align with
human preferences, and several LLM-based text quality evaluation measures have
been introduced. However, the application of LLMs in CRS evaluation remains
relatively limited. To address this research gap and advance the development of
user-centric conversational recommender systems, this study proposes an
automated LLM-based CRS evaluation framework, building upon existing research
in human-computer interaction and psychology. The framework evaluates CRS from
four dimensions: dialogue behavior, language expression, recommendation items,
and response content. We use this framework to evaluate four different
conversational recommender systems.
|
2501.09499
|
VanGogh: A Unified Multimodal Diffusion-based Framework for Video
Colorization
|
cs.CV
|
Video colorization aims to transform grayscale videos into vivid color
representations while maintaining temporal consistency and structural
integrity. Existing video colorization methods often suffer from color bleeding
and lack comprehensive control, particularly under complex motion or diverse
semantic cues. To this end, we introduce VanGogh, a unified multimodal
diffusion-based framework for video colorization. VanGogh tackles these
challenges using a Dual Qformer to align and fuse features from multiple
modalities, complemented by a depth-guided generation process and an optical
flow loss, which help reduce color overflow. Additionally, a color injection
strategy and luma channel replacement are implemented to improve generalization
and mitigate flickering artifacts. Thanks to this design, users can exercise
both global and local control over the generation process, resulting in
higher-quality colorized videos. Extensive qualitative and quantitative
evaluations, and user studies, demonstrate that VanGogh achieves superior
temporal consistency and color fidelity. Project page:
https://becauseimbatman0.github.io/VanGogh.
|
2501.09502
|
Omni-Emotion: Extending Video MLLM with Detailed Face and Audio Modeling
for Multimodal Emotion Analysis
|
cs.CV
|
Understanding emotions accurately is essential for fields like human-computer
interaction. Due to the complexity of emotions and their multi-modal nature
(e.g., emotions are influenced by facial expressions and audio), researchers
have turned to using multi-modal models to understand human emotions rather
than single-modality. However, current video multi-modal large language models
(MLLMs) encounter difficulties in effectively integrating audio and identifying
subtle facial micro-expressions. Furthermore, the lack of detailed emotion
analysis datasets also limits the development of multimodal emotion analysis.
To address these issues, we introduce a self-reviewed dataset and a
human-reviewed dataset, comprising 24,137 coarse-grained samples and 3,500
manually annotated samples with detailed emotion annotations, respectively.
These datasets allow models to learn from diverse scenarios and better
generalize to real-world applications. Moreover, in addition to the audio
modeling, we propose to explicitly integrate facial encoding models into the
existing advanced Video MLLM, enabling the MLLM to effectively unify audio and
the subtle facial cues for emotion understanding. By aligning these features
within a unified space and employing instruction tuning in our proposed
datasets, our Omni-Emotion achieves state-of-the-art performance in both
emotion recognition and reasoning tasks.
|
2501.09503
|
AnyStory: Towards Unified Single and Multiple Subject Personalization in
Text-to-Image Generation
|
cs.CV
|
Recently, large-scale generative models have demonstrated outstanding
text-to-image generation capabilities. However, generating high-fidelity
personalized images with specific subjects still presents challenges,
especially in cases involving multiple subjects. In this paper, we propose
AnyStory, a unified approach for personalized subject generation. AnyStory not
only achieves high-fidelity personalization for single subjects, but also for
multiple subjects, without sacrificing subject fidelity. Specifically, AnyStory
models the subject personalization problem in an "encode-then-route" manner. In
the encoding step, AnyStory utilizes a universal and powerful image encoder,
i.e., ReferenceNet, in conjunction with CLIP vision encoder to achieve
high-fidelity encoding of subject features. In the routing step, AnyStory
utilizes a decoupled instance-aware subject router to accurately perceive and
predict the potential location of the corresponding subject in the latent
space, and guide the injection of subject conditions. Detailed experimental
results demonstrate the excellent performance of our method in retaining
subject details, aligning text descriptions, and personalizing for multiple
subjects. The project page is at https://aigcdesigngroup.github.io/AnyStory/ .
|
2501.09504
|
HydraMix: Multi-Image Feature Mixing for Small Data Image Classification
|
cs.CV
|
Training deep neural networks requires datasets with a large number of
annotated examples. The collection and annotation of these datasets is not only
extremely expensive but also faces legal and privacy problems. These factors
are a significant limitation for many real-world applications. To address this,
we introduce HydraMix, a novel architecture that generates new image
compositions by mixing multiple different images from the same class. HydraMix
learns the fusion of the content of various images guided by a
segmentation-based mixing mask in feature space and is optimized via a
combination of unsupervised and adversarial training. Our data augmentation
scheme allows the creation of models trained from scratch on very small
datasets. We conduct extensive experiments on ciFAIR-10, STL-10, and
ciFAIR-100. Additionally, we introduce a novel text-image metric to assess the
generality of the augmented datasets. Our results show that HydraMix
outperforms existing state-of-the-art methods for image classification on small
datasets.
|
2501.09506
|
Multimodal Marvels of Deep Learning in Medical Diagnosis: A
Comprehensive Review of COVID-19 Detection
|
cs.LG cs.SD eess.AS eess.IV
|
This study presents a comprehensive review of the potential of multimodal
deep learning (DL) in medical diagnosis, using COVID-19 as a case example.
Motivated by the success of artificial intelligence applications during the
COVID-19 pandemic, this research aims to uncover the capabilities of DL in
disease screening, prediction, and classification, and to derive insights that
enhance the resilience, sustainability, and inclusiveness of science,
technology, and innovation systems. Adopting a systematic approach, we
investigate the fundamental methodologies, data sources, preprocessing steps,
and challenges encountered in various studies and implementations. We explore
the architecture of deep learning models, emphasising their data-specific
structures and underlying algorithms. Subsequently, we compare different deep
learning strategies utilised in COVID-19 analysis, evaluating them based on
methodology, data, performance, and prerequisites for future research. By
examining diverse data types and diagnostic modalities, this research
contributes to scientific understanding and knowledge of the multimodal
application of DL and its effectiveness in diagnosis. We have implemented and
analysed 11 deep learning models using COVID-19 image, text, and speech (i.e.,
cough) data. Our analysis revealed that the MobileNet model achieved the
highest accuracy of 99.97% for COVID-19 image data and 93.73% for speech data
(i.e., cough). However, the BiGRU model demonstrated superior performance in
COVID-19 text classification with an accuracy of 99.89%. The broader
implications of this research suggest potential benefits for other domains and
disciplines that could leverage deep learning techniques for image, text, and
speech analysis.
|
2501.09509
|
Power-Efficient RAN Intelligent Controllers Through Optimized KPI
Monitoring
|
eess.SY cs.SY
|
The Open Radio Access Network (RAN) paradigm envisions a more flexible,
interoperable, and intelligent RAN ecosystem via new open interfaces and
elements like the RAN Intelligent Controller (RIC). However, the impact of
these elements on Open RAN's power consumption remains heavily unexplored. This
work for the first time evaluates the impact of Key Performance Indicator (KPI)
monitoring on RIC's power consumption using real traffic and power
measurements. By analyzing various RIC-RAN communication scenarios, we identify
that RIC's power consumption can become a scalability bottleneck, particularly
in large-scale deployments, even when RIC is limited to its core operational
functionalities and without incorporating application-specific processes. In
this context, also for the first time we explore potential power savings
through the elimination of redundant KPI transmissions, extending existing
techniques for identical subscription removal and KPI selection, achieving
significant power consumption gains exceeding 87\% of the overall RIC power
consumption.
|
2501.09512
|
PIER: A Novel Metric for Evaluating What Matters in Code-Switching
|
cs.CL cs.LG
|
Code-switching, the alternation of languages within a single discourse,
presents a significant challenge for Automatic Speech Recognition. Despite the
unique nature of the task, performance is commonly measured with established
metrics such as Word-Error-Rate (WER). However, in this paper, we question
whether these general metrics accurately assess performance on code-switching.
Specifically, using both Connectionist-Temporal-Classification and
Encoder-Decoder models, we show that fine-tuning on non-code-switched data from
both the matrix and embedded languages improves classical metrics on
code-switching test sets, although performance on actual code-switched words
worsens (as expected). Therefore, we
propose Point-of-Interest Error Rate (PIER), a variant of WER that focuses only
on specific words of interest. We instantiate PIER on code-switched utterances
and show that this more accurately describes code-switching performance,
revealing substantial room for improvement in future work. This focused evaluation
allows for a more precise assessment of model performance, particularly in
challenging aspects such as inter-word and intra-word code-switching.
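As an illustration of a PIER-style metric, the following sketch restricts a standard Levenshtein word alignment to a set of words of interest. The exact scoring rule (which edit operations count, and the normaliser) is an assumption for illustration, not the paper's definition:

```python
def edit_ops(ref, hyp):
    """Levenshtein alignment over token lists; returns (op, ref_tok, hyp_tok)
    tuples with op in {"ok", "sub", "del", "ins"}."""
    n, m = len(ref), len(hyp)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i
    for j in range(1, m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + cost)
    ops, i, j = [], n, m
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and
                d[i][j] == d[i - 1][j - 1] + (0 if ref[i - 1] == hyp[j - 1] else 1)):
            ops.append(("ok" if ref[i - 1] == hyp[j - 1] else "sub",
                        ref[i - 1], hyp[j - 1]))
            i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            ops.append(("del", ref[i - 1], None))
            i -= 1
        else:
            ops.append(("ins", None, hyp[j - 1]))
            j -= 1
    return list(reversed(ops))

def pier(ref, hyp, interest):
    """Error rate over words of interest only: substitutions/deletions of
    interest words in the reference, plus insertions of interest words,
    normalised by the number of interest words in the reference."""
    errors = 0
    for op, r, h in edit_ops(ref, hyp):
        if op in ("sub", "del") and r in interest:
            errors += 1
        elif op == "ins" and h in interest:
            errors += 1
    return errors / max(sum(1 for w in ref if w in interest), 1)
```

On a code-switched utterance, only errors on the embedded-language words affect the score, so a system that does well on matrix-language words but mangles the switched words is no longer rewarded.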
|
2501.09513
|
A Dataset Generation Toolbox for Dynamic Security Assessment: On the
Role of the Security Boundary
|
eess.SY cs.SY
|
Dynamic security assessment (DSA) is crucial for ensuring the reliable
operation of power systems. However, conventional DSA approaches are becoming
intractable for future power systems, driving interest in more computationally
efficient data-driven methods. Efficient dataset generation is a cornerstone of
these methods. While importance and generic sampling techniques often focus on
operating points near the system's security boundary, systematic methods for
sampling in this region remain scarce. Furthermore, the impact of sampling near
the security boundary on the performance of data-driven DSA methods has yet to
be established. This paper highlights the critical role of accurately capturing
security boundaries for effective security assessment. As such, we propose a
novel method for generating a high number of samples close to the security
boundary, considering both AC feasibility and small-signal stability. Case
studies on the PGLib-OPF 39-bus and PGLib-OPF 162-bus systems demonstrate the
importance of including boundary-adjacent operating points in training datasets
while maintaining a balanced distribution of secure and insecure points.
|
2501.09514
|
A Runtime Analysis of the Multi-Valued Compact Genetic Algorithm on
Generalized LeadingOnes
|
cs.NE
|
In the literature on runtime analyses of estimation of distribution
algorithms (EDAs), researchers have recently explored univariate EDAs for
multi-valued decision variables. Particularly, Jedidia et al. gave the first
runtime analysis of the multi-valued UMDA on the r-valued LeadingOnes
(r-LeadingOnes) functions and Adak et al. gave the first runtime analysis of
the multi-valued cGA (r-cGA) on the r-valued OneMax function. We utilize their
framework to conduct an analysis of the multi-valued cGA on the r-valued
LeadingOnes function. Even for the binary case, a runtime analysis of the
classical cGA on LeadingOnes was not yet available. In this work, we show that
the runtime of the r-cGA on r-LeadingOnes is O(n^2r^2 log^3 n log^2 r) with
high probability.
|
2501.09519
|
Multi-task deep-learning for sleep event detection and stage
classification
|
eess.SP cs.LG
|
Polysomnographic sleep analysis is the standard clinical method to accurately
diagnose and treat sleep disorders. It is an intricate process which involves
the manual identification, classification, and location of multiple sleep event
patterns. This process is complex: identifying different types of events
involves focusing on different subsets of signals, resulting in an iterative,
time-consuming process entailing several visual analysis passes. In this paper
we propose a multi-task deep-learning approach for the simultaneous detection
of sleep events and hypnogram construction in a single pass. Taking
as reference state-of-the-art methodology for object-detection in the field of
Computer Vision, we reformulate the problem for the analysis of multi-variate
time sequences, and more specifically for pattern detection in the sleep
analysis scenario. We investigate the performance of the resulting method in
identifying different assembly combinations of EEG arousals, respiratory events
(apneas and hypopneas) and sleep stages, also considering different input
signal montage configurations. Furthermore, we evaluate our approach using two
independent datasets, assessing true-generalization effects involving local and
external validation scenarios. Based on our results, we analyze and discuss our
method's capabilities and its potential wide-range applicability across
different settings and datasets.
|
2501.09520
|
RWZC: A Model-Driven Approach for Learning-based Robust Wyner-Ziv Coding
|
cs.IT math.IT
|
In this paper, a novel learning-based Wyner-Ziv coding framework is
considered under a distributed image transmission scenario, where the
correlated source is only available at the receiver. Unlike other learnable
frameworks, our approach demonstrates robustness to non-stationary source
correlation, where the overlapping information between image pairs varies.
Specifically, we first model the affine relationship between correlated images
and leverage this model for learnable mask generation and rate-adaptive joint
source-channel coding. Moreover, we also provide a warping-prediction network
to remove the distortion from channel interference and affine transform.
Intuitively, the observed performance improvement is largely due to focusing on
the simple geometric relationship, rather than the complex joint distribution
between the sources. Numerical results show that our framework achieves a 1.5
dB gain in PSNR and a 0.2 improvement in MS-SSIM, along with a significant
superiority in perceptual metrics, compared to state-of-the-art methods when
applied to real-world samples with non-stationary correlations.
|
2501.09521
|
Augmenting a Large Language Model with a Combination of Text and Visual
Data for Conversational Visualization of Global Geospatial Data
|
cs.HC cs.CL
|
We present a method for augmenting a Large Language Model (LLM) with a
combination of text and visual data to enable accurate question answering in
visualization of scientific data, making conversational visualization possible.
LLMs struggle with tasks like visual data interaction, as they lack contextual
visual information. We address this problem by merging a text description of a
visualization and dataset with snapshots of the visualization. We extract their
essential features into a structured text file that is highly compact yet
descriptive enough to appropriately augment the LLM with contextual information, without
any fine-tuning. This approach can be applied to any visualization that is
already finally rendered, as long as it is associated with some textual
description.
|
2501.09522
|
Merging Models on the Fly Without Retraining: A Sequential Approach to
Scalable Continual Model Merging
|
cs.LG
|
Deep model merging represents an emerging research direction that combines
multiple fine-tuned models to harness their specialized capabilities across
different tasks and domains. Current model merging techniques focus on merging
all available models simultaneously, with weight interpolation-based methods
being the predominant approaches. However, these conventional approaches are
not well-suited for scenarios where models become available sequentially, and
they often suffer from high memory requirements and potential interference
between tasks. In this study, we propose a training-free projection-based
continual merging method that processes models sequentially through orthogonal
projections of weight matrices and adaptive scaling mechanisms. Our method
operates by projecting new parameter updates onto subspaces orthogonal to
existing merged parameter updates while using an adaptive scaling mechanism to
maintain stable parameter distances, enabling efficient sequential integration
of task-specific knowledge. Our approach maintains constant memory complexity
with respect to the number of models, minimizes interference between tasks through
orthogonal projections, and retains the performance of previously merged models
through adaptive task vector scaling. Extensive experiments on CLIP-ViT models
demonstrate that our method achieves a 5-8% average accuracy improvement while
maintaining robust performance in different task orderings.
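The core projection idea can be sketched in a few lines. The single-direction projection against the accumulated update below, and the class and method names, are illustrative simplifications; the paper operates on weight matrices and adds an adaptive scaling mechanism that is omitted here:

```python
import numpy as np

class ContinualMerger:
    """Toy sketch of sequential model merging: each new task vector is
    projected orthogonal to the accumulated merged update before being
    added, so later models interfere less with earlier ones."""

    def __init__(self, base):
        self.base = np.asarray(base, dtype=float)
        self.delta = np.zeros_like(self.base)  # accumulated merged update

    def add(self, finetuned):
        update = np.asarray(finetuned, dtype=float) - self.base
        n = np.linalg.norm(self.delta)
        if n > 1e-12:
            q = self.delta / n
            update = update - (q @ update) * q  # keep orthogonal component only
        self.delta += update
        return self.base + self.delta
```

Because only `base` and `delta` are stored, memory stays constant as models arrive, matching the sequential, training-free setting the abstract describes.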
|
2501.09525
|
Class Incremental Fault Diagnosis under Limited Fault Data via
Supervised Contrastive Knowledge Distillation
|
cs.LG cs.AI
|
Class-incremental fault diagnosis requires a model to adapt to new fault
classes while retaining previous knowledge. However, limited research exists
for imbalanced and long-tailed data. Extracting discriminative features from
few-shot fault data is challenging, and adding new fault classes often demands
costly model retraining. Moreover, incremental training of existing methods
risks catastrophic forgetting, and severe class imbalance can bias the model's
decisions toward normal classes. To tackle these issues, we introduce a
Supervised Contrastive knowledge distiLlation for class Incremental Fault
Diagnosis (SCLIFD) framework proposing supervised contrastive knowledge
distillation for improved representation learning capability and less
forgetting, a novel prioritized exemplar selection method for sample replay to
alleviate catastrophic forgetting, and the Random Forest Classifier to address
the class imbalance. Extensive experimentation on simulated and real-world
industrial datasets across various imbalance ratios demonstrates the
superiority of SCLIFD over existing approaches. Our code can be found at
https://github.com/Zhang-Henry/SCLIFD_TII.
|
2501.09527
|
Confidence Estimation for Error Detection in Text-to-SQL Systems
|
cs.LG cs.CL
|
Text-to-SQL enables users to interact with databases through natural
language, simplifying the retrieval and synthesis of information. Despite the
success of large language models (LLMs) in converting natural language
questions into SQL queries, their broader adoption is limited by two main
challenges: achieving robust generalization across diverse queries and ensuring
interpretative confidence in their predictions. To tackle these issues, our
research investigates the integration of selective classifiers into Text-to-SQL
systems. We analyse the trade-off between coverage and risk using entropy based
confidence estimation with selective classifiers and assess its impact on the
overall performance of Text-to-SQL models. Additionally, we explore the models'
initial calibration and improve it with calibration techniques for better model
alignment between confidence and accuracy. Our experimental results show that
the encoder-decoder T5 is better calibrated than in-context-learning GPT-4 and
decoder-only Llama 3, and thus the designated external entropy-based selective
classifier performs better. The study also reveals that, in terms of error
detection, the selective classifier is more likely to detect errors associated
with irrelevant questions than with incorrect query generations.
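The entropy-based selective classification described above can be sketched as follows; the abstention rule, threshold, and function names are illustrative assumptions, not the paper's implementation:

```python
import math

def entropy(probs):
    """Shannon entropy of a discrete predictive distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def selective_predict(probs, threshold):
    """Return the argmax class, or None (abstain) when the predictive
    entropy exceeds the threshold."""
    if entropy(probs) > threshold:
        return None
    return max(range(len(probs)), key=probs.__getitem__)

def coverage_risk(prob_list, labels, threshold):
    """Coverage = fraction of inputs answered; risk = error rate on them."""
    preds = [selective_predict(p, threshold) for p in prob_list]
    kept = [(p, y) for p, y in zip(preds, labels) if p is not None]
    coverage = len(kept) / len(prob_list)
    risk = sum(p != y for p, y in kept) / max(len(kept), 1)
    return coverage, risk
```

Sweeping the threshold traces out the coverage-risk trade-off the abstract analyses: a lower threshold abstains more often but makes fewer mistakes on the queries it does answer.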
|
2501.09528
|
Comprehensive Survey of QML: From Data Analysis to Algorithmic
Advancements
|
quant-ph cs.IT math.IT
|
Quantum Machine Learning represents a paradigm shift at the intersection of
Quantum Computing and Machine Learning, leveraging quantum phenomena such as
superposition, entanglement, and quantum parallelism to address the limitations
of classical approaches in processing high-dimensional and large-scale
datasets. This survey provides a comprehensive analysis of Quantum Machine
Learning, detailing foundational concepts, algorithmic advancements, and their
applications across domains such as healthcare, finance, and quantum chemistry.
Key techniques, including Quantum Support Vector Machine, Quantum Neural
Network, Quantum Decision Trees, and hybrid quantum-classical models, are
explored with a focus on their theoretical foundations, computational benefits,
and comparative performance against classical counterparts. While the potential
for exponential speedups and enhanced efficiency is evident, the field faces
significant challenges, including hardware constraints, noise, and limited
qubit coherence in the current era of Noisy Intermediate-Scale Quantum devices.
Emerging solutions, such as error mitigation techniques, hybrid frameworks, and
advancements in quantum hardware, are discussed as critical enablers for
scalable and fault-tolerant Quantum Machine Learning systems. By synthesizing
state-of-the-art developments and identifying research gaps, this survey aims
to provide a foundational resource for advancing Quantum Machine Learning
toward practical, real-world applications in tackling computationally intensive
problems.
|
2501.09531
|
MOGNET: A Mux-residual quantized Network leveraging Online-Generated
weights
|
cs.LG cs.AR
|
This paper presents a compact model architecture called MOGNET, compatible
with a resource-limited hardware. MOGNET uses a streamlined Convolutional
factorization block based on a combination of 2 point-wise (1x1) convolutions
with a group-wise convolution in-between. To further limit the overall model
size and reduce the on-chip required memory, the second point-wise
convolution's parameters are on-line generated by a Cellular Automaton
structure. In addition, MOGNET enables the use of low-precision weights and
activations, by taking advantage of a Multiplexer mechanism with a proper
Bitshift rescaling for integrating residual paths without increasing the
hardware-related complexity. To efficiently train this model we also introduce
a novel weight ternarization method favoring the balance between quantized
levels. Experimental results show that, given a tiny memory budget (sub-2 Mb),
MOGNET can achieve higher accuracy, with a clear gap of up to 1%, at a similar
or even lower model size compared to recent state-of-the-art methods.
|
2501.09532
|
AdaFV: Rethinking of Visual-Language alignment for VLM acceleration
|
cs.CV
|
The success of VLMs often relies on the dynamic high-resolution schema that
adaptively augments the input images to multiple crops, so that the details of
the images can be retained. However, such approaches result in a large number
of redundant visual tokens, thus significantly reducing the efficiency of the
VLMs. To improve the VLMs' efficiency without introducing extra training costs,
many research works are proposed to reduce the visual tokens by filtering the
uninformative visual tokens or aggregating their information. Some approaches
propose to reduce the visual tokens according to the self-attention of VLMs,
which is biased and can result in inaccurate responses. Token reduction
approaches that rely solely on visual cues are text-agnostic and fail to focus
on the areas that are most relevant to the question, especially when the
queried objects are non-salient in the image. In this work, we first conduct
experiments to show that the original text embeddings are aligned with the
visual tokens, without bias on the tailed visual tokens. We then propose a
self-adaptive cross-modality attention mixture mechanism that dynamically
leverages the effectiveness of visual saliency and text-to-image similarity in
the pre-LLM layers to select the visual tokens that are informative. Extensive
experiments demonstrate that the proposed approach achieves state-of-the-art
training-free VLM acceleration performance, especially when the reduction rate
is sufficiently large.
|
2501.09534
|
AI in Support of Diversity and Inclusion
|
cs.AI
|
In this paper, we elaborate on how AI can support diversity and inclusion and
exemplify research projects conducted in that direction. We start by looking at
the challenges and progress in making large language models (LLMs) more
transparent, inclusive, and aware of social biases. Even though LLMs like
ChatGPT have impressive abilities, they struggle to understand different
cultural contexts and engage in meaningful, human-like conversations. A key
issue is that biases in language processing, especially in machine translation,
can reinforce inequality. Tackling these biases requires a multidisciplinary
approach to ensure AI promotes diversity, fairness, and inclusion. We also
highlight AI's role in identifying biased content in media, which is important
for improving representation. By detecting unequal portrayals of social groups,
AI can help challenge stereotypes and create more inclusive technologies.
Transparent AI algorithms, which clearly explain their decisions, are essential
for building trust and reducing bias in AI systems. We also stress AI systems
need diverse and inclusive training data. Projects like the Child Growth
Monitor show how using a wide range of data can help address real world
problems like malnutrition and poverty. We present a project that demonstrates
how AI can be applied to monitor the role of search engines in spreading
disinformation about the LGBTQ+ community. Moreover, we discuss the SignON
project as an example of how technology can bridge communication gaps between
hearing and deaf people, emphasizing the importance of collaboration and mutual
trust in developing inclusive AI. Overall, with this paper, we advocate for AI
systems that are not only effective but also socially responsible, promoting
fair and inclusive interactions between humans and machines.
|
2501.09538
|
Analyzing Continuous Semantic Shifts with Diachronic Word Similarity
Matrices
|
cs.CL
|
The meanings and relationships of words shift over time. This phenomenon is
referred to as semantic shift. Research focused on understanding how semantic
shifts occur over multiple time periods is essential for gaining a detailed
understanding of semantic shifts. However, detecting change points only between
adjacent time periods is insufficient for analyzing detailed semantic shifts,
and using BERT-based methods to examine word sense proportions incurs a high
computational cost. To address those issues, we propose a simple yet intuitive
framework for how semantic shifts occur over multiple time periods by
leveraging a similarity matrix between the embeddings of the same word through
time. We compute a diachronic word similarity matrix using fast and lightweight
word embeddings across arbitrary time periods, enabling deeper analysis of
continuous semantic shifts. Additionally, by clustering the similarity matrices
for different words, we can categorize words that exhibit similar behavior of
semantic shift in an unsupervised manner.
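The similarity-matrix computation at the heart of the framework can be sketched as follows; the function name and the (time periods x dimensions) input layout are assumptions for illustration:

```python
import numpy as np

def diachronic_similarity_matrix(embeddings):
    """Cosine similarity between a word's embedding at every pair of time
    periods; `embeddings` is a (T, d) array with one row per period."""
    E = np.asarray(embeddings, dtype=float)
    E = E / np.linalg.norm(E, axis=1, keepdims=True)  # unit-normalise rows
    return E @ E.T  # (T, T) similarity matrix
```

A word whose meaning is stable yields a matrix close to all-ones, while a semantic shift appears as a block structure; clustering these matrices across words then groups words with similar shift behaviour, as the abstract describes.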
|
2501.09551
|
Intra-day Solar and Power Forecast for Optimization of Intraday Market
Participation
|
cs.LG cs.SY eess.SP eess.SY
|
The prediction of solar irradiance enhances reliability in photovoltaic (PV)
solar plant generation and grid integration. In Colombia, PV plants face
penalties if energy production deviates beyond governmental thresholds from
intraday market offers. This research employs Long Short-Term Memory (LSTM) and
Bidirectional-LSTM (Bi-LSTM) models, utilizing meteorological data from a PV
plant in El Paso, Cesar, Colombia, to predict solar irradiance with a 6-hour
horizon and 10-minute resolution. While Bi-LSTM showed superior performance,
the LSTM model achieved comparable results with significantly reduced training
time (6 hours versus 18 hours), making it computationally advantageous. The
LSTM predictions were averaged to create an hourly resolution model, evaluated
using Mean Absolute Error, Root-Mean-Square Error, Normalized Root-Mean-Square
Error, and Mean Absolute Percentage Error metrics. Comparison with the Global
Forecast System (GFS) revealed similar performance, with both models
effectively capturing daily solar irradiance patterns. The forecast model
integrates with an Object-Oriented power production model, enabling accurate
energy offers in the intraday market while minimizing penalty costs.
|
2501.09552
|
Exploring AI-based System Design for Pixel-level Protected Health
Information Detection in Medical Images
|
cs.CV
|
Purpose: This study aims to evaluate different setups of an AI-based solution
to detect Protected Health Information (PHI) in medical images.
Materials and Methods: Text from eight PHI and eight non-PHI categories are
simulated and incorporated into a curated dataset comprising 1,000 medical
images across four modalities: CT, X-ray, bone scan, and MRI. The proposed PHI
detection pipeline comprises three key components: text localization,
extraction, and analysis. Three vision and language models, YOLOv11, EasyOCR,
and GPT-4o, are benchmarked in different setups corresponding to three key
components. The performance is evaluated with classification metrics, including
precision, recall, F1 score, and accuracy.
Results: All four setups demonstrate strong performance in detecting PHI
imprints, with all metrics exceeding 0.9. The setup that utilizes YOLOv11 for
text localization and GPT-4o for text extraction and analysis achieves the
highest performance in PHI detection. However, this setup incurs the highest
cost due to the increased number of generated tokens associated with the GPT-4o
model. Conversely, the setup using solely GPT-4o for the end-to-end pipeline
exhibits the lowest performance but showcases the feasibility of multi-modal
models in solving complex tasks.
Conclusion: For optimal text localization and extraction, it is recommended
to fine-tune an object detection model and utilize built-in Optical Character
Recognition (OCR) software. Large language models like GPT-4o can be
effectively leveraged to reason about and semantically analyze the PHI content.
Although the vision capability of GPT-4o is promising for reading image crops,
it remains limited for end-to-end pipeline applications with whole images.
|
2501.09555
|
Text-driven Adaptation of Foundation Models for Few-shot Surgical
Workflow Analysis
|
cs.CV cs.AI
|
Purpose: Surgical workflow analysis is crucial for improving surgical
efficiency and safety. However, previous studies rely heavily on large-scale
annotated datasets, posing challenges in cost, scalability, and reliance on
expert annotations. To address this, we propose Surg-FTDA (Few-shot Text-driven
Adaptation), designed to handle various surgical workflow analysis tasks with
minimal paired image-label data.
Methods: Our approach has two key components. First, Few-shot selection-based
modality alignment selects a small subset of images and aligns their embeddings
with text embeddings from the downstream task, bridging the modality gap.
Second, Text-driven adaptation leverages only text data to train a decoder,
eliminating the need for paired image-text data. This decoder is then applied
to aligned image embeddings, enabling image-related tasks without explicit
image-text pairs.
Results: We evaluate our approach on generative tasks (image captioning) and
discriminative tasks (triplet recognition and phase recognition). Results show
that Surg-FTDA outperforms baselines and generalizes well across downstream
tasks.
Conclusion: We propose a text-driven adaptation approach that mitigates the
modality gap and handles multiple downstream tasks in surgical workflow
analysis, with minimal reliance on large annotated datasets. The code and
dataset will be released at https://github.com/CAMMA-public/Surg-FTDA
|
2501.09556
|
Overshoot: Taking advantage of future gradients in momentum-based
stochastic optimization
|
cs.LG
|
Overshoot is a novel, momentum-based stochastic gradient descent optimization
method designed to enhance performance beyond standard and Nesterov's momentum.
In conventional momentum methods, gradients from previous steps are aggregated
with the gradient at current model weights before taking a step and updating
the model. Rather than calculating gradient at the current model weights,
Overshoot calculates the gradient at model weights shifted in the direction of
the current momentum. This sacrifices the immediate benefit of using the
gradient w.r.t. the exact current model weights in favor of evaluating at a
point that will likely be more relevant for future updates. We show that
incorporating this principle into momentum-based optimizers (SGD with momentum
and Adam) results in faster convergence (saving on average at least 15% of
steps). Overshoot consistently outperforms both standard and Nesterov's
momentum across a wide range of tasks and integrates into popular
momentum-based optimizers with zero memory and small computational overhead.
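A minimal sketch of the Overshoot idea applied to SGD with momentum, assuming a simple look-ahead shift controlled by a hypothetical `gamma` parameter; this is an illustration of the principle, not the paper's exact formulation or API:

```python
import numpy as np

def overshoot_sgd(grad_fn, w, lr=0.1, beta=0.9, gamma=1.0, steps=100):
    """SGD with momentum where the gradient is evaluated at weights shifted
    along the current momentum direction (the 'overshoot' point) rather
    than at the current weights."""
    m = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w - lr * gamma * m)  # gradient at the look-ahead point
        m = beta * m + g                 # standard momentum accumulation
        w = w - lr * m                   # usual weight update
    return w
```

With `gamma = 0` this reduces to classical heavy-ball momentum; the look-ahead evaluation is what distinguishes Overshoot (and, in a related way, Nesterov momentum) from the standard method.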
|
2501.09561
|
Stylomech: Unveiling Authorship via Computational Stylometry in English
and Romanized Sinhala
|
cs.CL
|
With the advent of Web 2.0, the development in social technology coupled with
global communication systematically brought positive and negative impacts to
society. Copyright claims and Author identification are deemed crucial as there
has been a considerable amount of increase in content violation owing to the
lack of proper ethics in society. The Author's attribution in both English and
Romanized Sinhala became a major requirement in the last few decades. As an
area largely unexplored, particularly within the context of Romanized Sinhala,
the research contributes significantly to the field of computational
linguistics. The proposed author attribution system offers a unique approach,
allowing for the comparison of only two sets of text: suspect author and
anonymous text, a departure from traditional methodologies which often rely on
larger corpora. This work focuses on using numerical representations of
various pairs of texts by the same and different authors, allowing the model to
train on these representations rather than on raw text. This allows it to apply
to a multitude of authors and contexts, provided that the suspected author's
text and the anonymous text are of reasonable quality. By expanding the scope of
authorship attribution to encompass diverse linguistic contexts, the work
contributes to fostering trust and accountability in digital communication,
especially in Sri Lanka. This research presents a pioneering approach to author
attribution in both English and Romanized Sinhala, addressing a critical need
for content verification and intellectual property rights enforcement in the
digital age.
|
2501.09563
|
A Multi-agent System for Hybrid Optimization
|
math.OC cs.MA
|
Optimization problems in process engineering, including design and operation,
can often pose challenges to many solvers: multi-modal, non-smooth, and
discontinuous models often with large computational requirements. In such
cases, the optimization problem is often treated as a black box in which only
the value of the objective function is required, sometimes with some indication
of the measure of the violation of the constraints. Such problems have
traditionally been tackled through the use of direct search and meta-heuristic
methods. The challenge, then, is to determine which of these methods or
combination of methods should be considered to make most effective use of
finite computational resources.
This paper presents a multi-agent system for optimization which enables a set
of solvers to be applied simultaneously to an optimization problem, including
different instantiations of any solver. The evaluation of the optimization
problem model is controlled by a scheduler agent which facilitates cooperation
and competition between optimization methods. The architecture and
implementation of the agent system is described in detail, including the
solver, model evaluation, and scheduler agents. A suite of direct search and
meta-heuristic methods has been developed for use with this system. Case
studies from process systems engineering applications are presented and the
results show the potential benefits of automated cooperation between different
optimization solvers and motivate the implementation of competition between
solvers.
|
2501.09565
|
A New Teacher-Reviewer-Student Framework for Semi-supervised 2D Human
Pose Estimation
|
cs.CV
|
Conventional 2D human pose estimation methods typically require extensive
labeled annotations, which are both labor-intensive and expensive. In contrast,
semi-supervised 2D human pose estimation can alleviate the above problems by
leveraging a large amount of unlabeled data along with a small portion of
labeled data. Existing semi-supervised 2D human pose estimation methods update
the network through backpropagation, ignoring crucial historical information
from the previous training process. Therefore, we propose a novel
semi-supervised 2D human pose estimation method by utilizing a newly designed
Teacher-Reviewer-Student framework. Specifically, we first mimic the phenomenon
that human beings constantly review previous knowledge for consolidation to
design our framework, in which the teacher predicts results to guide the
student's learning and the reviewer stores important historical parameters to
provide additional supervision signals. Secondly, we introduce a Multi-level
Feature Learning strategy, which utilizes the outputs from different stages of
the backbone to estimate the heatmap to guide network training, enriching the
supervisory information while effectively capturing keypoint relationships.
Finally, we design a data augmentation strategy, i.e., Keypoint-Mix, to perturb
pose information by mixing different keypoints, thus enhancing the network's
ability to discern keypoints. Extensive experiments on publicly available
datasets, demonstrate our method achieves significant improvements compared to
the existing methods.
|
2501.09571
|
MatrixNet: Learning over symmetry groups using learned group
representations
|
cs.LG cs.AI math.RT
|
Group theory has been used in machine learning to provide a theoretically
grounded approach for incorporating known symmetry transformations in tasks
from robotics to protein modeling. In these applications, equivariant neural
networks use known symmetry groups with predefined representations to learn
over geometric input data. We propose MatrixNet, a neural network architecture
that learns matrix representations of group element inputs instead of using
predefined representations. MatrixNet achieves higher sample efficiency and
generalization over several standard baselines in prediction tasks over
several finite groups and the Artin braid group. We also show that MatrixNet
respects group relations, allowing generalization to group elements of greater
word length than in the training set.
|
2501.09572
|
Towards Spectral Convergence of Locally Linear Embedding on Manifolds
with Boundary
|
math.AP cs.LG
|
We study the eigenvalues and eigenfunctions of a differential operator that
governs the asymptotic behavior of the unsupervised learning algorithm known as
Locally Linear Embedding when a large data set is sampled from an interval or
disc. In particular, the differential operator is of second order, mixed-type,
and degenerates near the boundary. We show that a natural regularity condition
on the eigenfunctions imposes a consistent boundary condition and use the
Frobenius method to estimate pointwise behavior. We then determine the limiting
sequence of eigenvalues analytically and compare them to numerical predictions.
Finally, we propose a variational framework for determining eigenvalues on
other compact manifolds.
|
2501.09579
|
Sequential PatchCore: Anomaly Detection for Surface Inspection using
Synthetic Impurities
|
cs.CV cs.GR cs.LG
|
The appearance of surface impurities (e.g., water stains, fingerprints,
stickers) is an often-mentioned issue that causes degradation of automated
visual inspection systems. At the same time, synthetic data generation
techniques for visual surface inspection have focused primarily on generating
perfect examples and defects, disregarding impurities. This study highlights
the importance of considering impurities when generating synthetic data. We
introduce a procedural method to include photorealistic water stains in
synthetic data. The synthetic datasets are generated to correspond to real
datasets and are further used to train an anomaly detection model and
investigate the influence of water stains. The high-resolution images used for
surface inspection lead to memory bottlenecks during anomaly detection
training. To address this, we introduce Sequential PatchCore - a method to
build coresets sequentially and make training on large images using
consumer-grade hardware tractable. This allows us to perform transfer learning
using coresets pre-trained on different dataset versions. Our results show the
benefits of using synthetic data for pre-training an explicit coreset anomaly
model and the extended performance benefits of finetuning the coreset using
real data. We observed how impurities and labelling ambiguity lower model
performance, and we additionally report the defect-wise recall to provide an
industrially relevant perspective on model performance.
|
2501.09588
|
Atleus: Accelerating Transformers on the Edge Enabled by 3D
Heterogeneous Manycore Architectures
|
cs.AR cs.LG
|
Transformer architectures have become the standard neural network model for
various machine learning applications including natural language processing and
computer vision. However, the compute and memory requirements introduced by
transformer models make them challenging to adopt for edge applications.
Furthermore, fine-tuning pre-trained transformers (e.g., foundation models) is
a common task to enhance the model's predictive performance on specific
tasks/applications. Existing transformer accelerators are oblivious to
complexities introduced by fine-tuning. In this paper, we propose the design of
a three-dimensional (3D) heterogeneous architecture referred to as Atleus that
incorporates heterogeneous computing resources specifically optimized to
accelerate transformer models for the dual purposes of fine-tuning and
inference. Specifically, Atleus utilizes non-volatile memory and systolic array
for accelerating transformer computational kernels using an integrated 3D
platform. Moreover, we design a suitable NoC to achieve high performance and
energy efficiency. Finally, Atleus adopts an effective quantization scheme to
support model compression. Experimental results demonstrate that Atleus
outperforms the existing state-of-the-art by up to 56x and 64.5x in terms of
performance and energy efficiency, respectively.
|
2501.09591
|
Metrics for Inter-Dataset Similarity with Example Applications in
Synthetic Data and Feature Selection Evaluation -- Extended Version
|
cs.LG
|
Measuring inter-dataset similarity is an important task in machine learning
and data mining with various use cases and applications. Existing methods for
measuring inter-dataset similarity are computationally expensive, limited, or
sensitive to different entities and non-trivial choices for parameters. They
also lack a holistic perspective on the entire dataset. In this paper, we
propose two novel metrics for measuring inter-dataset similarity. We discuss
the mathematical foundation and the theoretical basis of our proposed metrics.
We demonstrate the effectiveness of the proposed metrics by investigating two
applications in the evaluation of synthetic data and in the evaluation of
feature selection methods. The theoretical and empirical studies conducted in
this paper illustrate the effectiveness of the proposed metrics.
|
2501.09595
|
IFRA: a machine learning-based Instrumented Fall Risk Assessment Scale
derived from Instrumented Timed Up and Go test in stroke patients
|
cs.LG cs.AI
|
Effective fall risk assessment is critical for post-stroke patients. The
present study proposes a novel, data-informed fall risk assessment method based
on the instrumented Timed Up and Go (ITUG) test data, bringing in many mobility
measures that traditional clinical scales fail to capture. IFRA, which stands
for Instrumented Fall Risk Assessment, has been developed using a two-step
process: first, the features with the highest predictive power among those
collected in an ITUG test were identified using machine learning
techniques; then, a strategy was proposed to stratify patients into low-,
medium-, or high-risk strata. The dataset used in our analysis consists of 142
participants, out of which 93 were used for training (15 synthetically
generated), 17 for validation and 32 to test the resulting IFRA scale (22
non-fallers and 10 fallers). Features considered in the IFRA scale include gait
speed, vertical acceleration during sit-to-walk transition, and turning angular
velocity, which align well with established literature on the risk of fall in
neurological patients. In a comparison with traditional clinical scales such as
the traditional Timed Up & Go and the Mini-BESTest, IFRA demonstrates
competitive performance, being the only scale to correctly assign more than
half of the fallers to the high-risk stratum (Fisher's Exact test p = 0.004).
Despite the dataset's limited size, this is the first proof-of-concept study to
pave the way for future evidence regarding the use of the IFRA tool for
continuous patient monitoring and fall prevention, both in clinical stroke
rehabilitation and at home post-discharge.
|
2501.09597
|
Reducing the Sensitivity of Neural Physics Simulators to Mesh Topology
via Pretraining
|
cs.LG cs.AI
|
Meshes are used to represent complex objects in high fidelity physics
simulators across a variety of domains, such as radar sensing and aerodynamics.
There is growing interest in using neural networks to accelerate physics
simulations, and also a growing body of work on applying neural networks
directly to irregular mesh data. Since multiple mesh topologies can represent
the same object, mesh augmentation is typically required to handle topological
variation when training neural networks. Due to the sensitivity of physics
simulators to small changes in mesh shape, it is challenging to use these
augmentations when training neural network-based physics simulators. In this
work, we show that variations in mesh topology can significantly reduce the
performance of neural network simulators. We evaluate whether pretraining can
be used to address this issue, and find that employing an established
autoencoder pretraining technique with graph embedding models reduces the
sensitivity of neural network simulators to variations in mesh topology.
Finally, we highlight future research directions that may further reduce neural
simulator sensitivity to mesh topology.
|
2501.09600
|
Mesh2SLAM in VR: A Fast Geometry-Based SLAM Framework for Rapid
Prototyping in Virtual Reality Applications
|
cs.RO cs.CV
|
SLAM is a foundational technique with broad applications in robotics and
AR/VR. SLAM simulations are used to evaluate new concepts, but testing on
resource-constrained devices, such as VR HMDs, faces challenges: high
computational cost and restricted access to sensor data. This work proposes a
sparse framework that uses mesh geometry projections as features, which improves
efficiency and circumvents direct sensor data access, advancing SLAM research
as we demonstrate in VR and through numerical evaluation.
|
2501.09604
|
From Scarcity to Capability: Empowering Fake News Detection in
Low-Resource Languages with LLMs
|
cs.CL
|
The rapid spread of fake news presents a significant global challenge,
particularly in low-resource languages like Bangla, which lack adequate
datasets and detection tools. Although manual fact-checking is accurate, it is
too expensive and slow to prevent the dissemination of fake news. Addressing this
gap, we introduce BanFakeNews-2.0, a robust dataset to enhance Bangla fake news
detection. This version includes 11,700 additional, meticulously curated fake
news articles validated from credible sources, creating a proportional dataset
of 47,000 authentic and 13,000 fake news items across 13 categories. In
addition, we created a manually curated independent test set of 460 fake and
540 authentic news items for rigorous evaluation. We invested effort in
collecting fake news from credible sources and manually verifying it while
preserving its linguistic richness. We develop a benchmark system utilizing
transformer-based architectures, including fine-tuned Bidirectional Encoder
Representations from Transformers variants (F1-87\%) and Large Language Models
with Quantized Low-Rank Approximation (F1-89\%), that significantly outperforms
traditional methods. BanFakeNews-2.0 offers a valuable resource to advance
research and application in fake news detection for low-resourced languages. We
publicly release our dataset and model on Github to foster research in this
direction.
|
2501.09605
|
Managed-Retention Memory: A New Class of Memory for the AI Era
|
cs.AR cs.AI cs.DC cs.ET
|
AI clusters today are one of the major uses of High Bandwidth Memory (HBM).
However, HBM is suboptimal for AI workloads for several reasons. Analysis shows
HBM is overprovisioned on write performance, but underprovisioned on density
and read bandwidth, and also has significant energy per bit overheads. It is
also expensive, with lower yield than DRAM due to manufacturing complexity. We
propose a new memory class: Managed-Retention Memory (MRM), which is more
optimized to store key data structures for AI inference workloads. We believe
that MRM may finally provide a path to viability for technologies that were
originally proposed to support Storage Class Memory (SCM). These technologies
traditionally offered long-term persistence (10+ years) but provided poor IO
performance and/or endurance. MRM makes different trade-offs, and by
understanding the workload IO patterns, MRM foregoes long-term data retention
and write performance for better potential performance on the metrics important
for these workloads.
|