| id | title | categories | abstract |
|---|---|---|---|
2501.02469 | LoRaConnect: Unlocking HTTP Potential on LoRa Backbones for Remote Areas
and Ad-Hoc Networks | cs.NI cs.CY cs.SY eess.SY | The minimal infrastructure requirements of LoRa make it suitable for
deployments in remote and disaster-stricken areas. Concomitantly, the modern
era is witnessing the proliferation of web applications in all aspects of human
life, including IoT and other network services. Contemporary IoT and network
solutions heavily rely on web applications to render services. However, despite
the recent research and development pivoted around LoRa, there is still a lack
of studies focusing on web application access over LoRa networks. Specifically,
technical challenges like payload size limitation, low data rate, and
contentions in multi-user setups limit the applicability of LoRa for web
applications. Hence, we propose LoRaWeb, which enables web access over LoRa
networks. The LoRaWeb hardware tethers a WiFi hotspot to which the client
devices connect and access web pages using a web browser. The LoRa backbone of
the network handles web page transmission between the requester and responder
devices. LoRaWeb implements a synchronization procedure to address the
aforementioned challenges for effective message exchange between requesters and
responders. The system implements a caching mechanism to reduce latency and
contention. Additionally, it implements a message-slicing mechanism in the
application layer to overcome the hardware limitations on the message length.
Results from an actual hardware-based implementation indicate seamless
deployment, with an average access time of ~$0.95$ s for a $1.5$ KB web page and
~$6$ s for a $10$ KB web page.
|
2501.02471 | Hengqin-RA-v1: Advanced Large Language Model for Diagnosis and Treatment
of Rheumatoid Arthritis with Dataset based Traditional Chinese Medicine | cs.CL cs.AI | Large language models (LLMs), primarily trained on English texts, often face
biases and inaccuracies in Chinese contexts. Their limitations are pronounced
in fields like Traditional Chinese Medicine (TCM), where cultural and clinical
subtleties are vital and progress is further hindered by a lack of
domain-specific data for conditions such as rheumatoid arthritis (RA). To
address these issues, this paper introduces
Hengqin-RA-v1, the first large language model specifically tailored for TCM
with a focus on diagnosing and treating RA. We also present HQ-GCM-RA-C1, a
comprehensive RA-specific dataset curated from ancient Chinese medical
literature, classical texts, and modern clinical studies. This dataset empowers
Hengqin-RA-v1 to deliver accurate and culturally informed responses,
effectively bridging the gaps left by general-purpose models. Extensive
experiments demonstrate that Hengqin-RA-v1 outperforms state-of-the-art models,
even surpassing the diagnostic accuracy of TCM practitioners in certain cases.
|
2501.02473 | IRIS: A Bayesian Approach for Image Reconstruction in Radio
Interferometry with expressive Score-Based priors | astro-ph.IM cs.LG eess.IV | Inferring sky surface brightness distributions from noisy interferometric
data in a principled statistical framework has been a key challenge in radio
astronomy. In this work, we introduce Imaging for Radio Interferometry with
Score-based models (IRIS). We use score-based models trained on optical images
of galaxies as an expressive prior in combination with a Gaussian likelihood in
the uv-space to infer images of protoplanetary disks from visibility data of
the DSHARP survey conducted by ALMA. We demonstrate the advantages of this
framework compared with traditional radio interferometry imaging algorithms,
showing that it produces plausible posterior samples despite the use of a
misspecified galaxy prior. Through coverage testing on simulations, we
empirically evaluate the accuracy of this approach to generate calibrated
posterior samples.
|
2501.02474 | Generalization-Enhanced Few-Shot Object Detection in Remote Sensing | cs.CV | Remote sensing object detection is particularly challenging due to the high
resolution, multi-scale features, and diverse ground object characteristics
inherent in satellite and UAV imagery. These challenges necessitate more
advanced approaches for effective object detection in such environments. While
deep learning methods have achieved remarkable success in remote sensing object
detection, they typically rely on large amounts of labeled data. Acquiring
sufficient labeled data, particularly for novel or rare objects, is both
challenging and time-consuming in remote sensing scenarios, limiting the
generalization capabilities of existing models. To address these challenges,
few-shot learning (FSL) has emerged as a promising approach, aiming to enable
models to learn new classes from limited labeled examples. Building on this
concept, few-shot object detection (FSOD) specifically targets object detection
challenges in data-limited conditions. However, the generalization capability
of FSOD models, particularly in remote sensing, is often constrained by the
complex and diverse characteristics of the objects present in such
environments. In this paper, we propose the Generalization-Enhanced Few-Shot
Object Detection (GE-FSOD) model to improve the generalization capability in
remote sensing FSOD tasks. Our model introduces three key innovations: the
Cross-Level Fusion Pyramid Attention Network (CFPAN) for enhanced multi-scale
feature representation, the Multi-Stage Refinement Region Proposal Network
(MRRPN) for more accurate region proposals, and the Generalized Classification
Loss (GCL) for improved classification performance in few-shot scenarios.
Extensive experiments on the DIOR and NWPU VHR-10 datasets show that our model
achieves state-of-the-art performance for few-shot object detection in remote
sensing.
|
2501.02476 | Noise-Tolerant Hybrid Prototypical Learning with Noisy Web Data | cs.CV cs.LG | We focus on the challenging problem of learning an unbiased classifier from a
large number of potentially relevant but noisily labeled web images given only
a few clean labeled images. This problem is particularly practical because it
reduces the expensive annotation costs by utilizing freely accessible web
images with noisy labels. Typically, prototypes are representative images or
features used to classify or identify other images. However, in scenarios with
few clean and many noisy images, the class prototype can be severely biased due to the
presence of irrelevant noisy images. The resulting prototypes are less compact
and discriminative, as previous methods do not take into account the diverse
range of images in noisy web image collections. Moreover, the
relation between noisy and clean images is not modeled end-to-end for class
prototype generation, which results in a suboptimal
class prototype. In this article, we introduce a similarity maximization loss
named SimNoiPro. Our SimNoiPro first generates noise-tolerant hybrid prototypes
composed of clean and noise-tolerant prototypes and then pulls them closer to
each other. Our approach considers the diversity of noisy images by explicit
division and overcomes the optimization discrepancy issue. This enables better
relation modeling between clean and noisy images and helps extract judicious
information from the noisy image set. The evaluation results on two extended
few-shot classification benchmarks confirm that our SimNoiPro outperforms prior
methods in measuring image relations and cleaning noisy data.
|
2501.02477 | A Deep Positive-Negative Prototype Approach to Integrated Prototypical
Discriminative Learning | cs.LG cs.CV | This paper proposes a novel Deep Positive-Negative Prototype (DPNP) model
that combines prototype-based learning (PbL) with discriminative methods to
improve class compactness and separability in deep neural networks. While PbL
traditionally emphasizes interpretability by classifying samples based on their
similarity to representative prototypes, it struggles with creating optimal
decision boundaries in complex scenarios. Conversely, discriminative methods
effectively separate classes but often lack intuitive interpretability. To
exploit the advantages of both approaches, the proposed DPNP model bridges
them by unifying class prototypes with weight vectors, thereby
establishing a structured latent space that enables accurate classification
using interpretable prototypes alongside a properly learned feature
representation. Based on this central idea of unified prototype-weight
representation, Deep Positive Prototype (DPP) is formed in the latent space as
a representative for each class using off-the-shelf deep networks as feature
extractors. Then, rival neighboring-class DPPs are treated as implicit negative
prototypes that exert a repulsive force in DPNP, pushing DPPs away from each other.
This helps to enhance inter-class separation without the need for any extra
parameters. Hence, through a novel loss function that integrates cross-entropy,
prototype alignment, and separation terms, DPNP achieves well-organized feature
space geometry, maximizing intra-class compactness and inter-class margins. We
show that DPNP can organize prototypes in nearly regular positions within
feature space, such that it is possible to achieve competitive classification
accuracy even in much lower-dimensional feature spaces. Experimental results on
several datasets demonstrate that DPNP outperforms state-of-the-art models,
while using smaller networks.
|
2501.02481 | The Meta-Representation Hypothesis | cs.LG cs.AI | Humans rely on high-level understandings of things, i.e.,
meta-representations, to engage in abstract reasoning. In complex cognitive
tasks, these meta-representations help individuals abstract general rules from
experience. However, constructing such meta-representations from
high-dimensional observations remains a longstanding challenge for
reinforcement learning (RL) agents. For instance, a well-trained agent often
fails to generalize to even minor variations of the same task, such as changes
in background color, which humans handle easily. In this paper, we
theoretically investigate how meta-representations contribute to the
generalization ability of RL agents, demonstrating that learning
meta-representations from high-dimensional observations enhances an agent's
ability to generalize across varied environments. We further hypothesize that
deep mutual learning (DML) among agents can help them learn the
meta-representations that capture the underlying essence of the task. Empirical
results provide strong support for both our theory and hypothesis. Overall,
this work provides a new perspective on the generalization of deep
reinforcement learning.
|
2501.02482 | Decoding News Bias: Multi Bias Detection in News Articles | cs.CL | News articles provide crucial information about various events happening in
society, but they unfortunately come with different kinds of biases. These
biases can significantly distort public opinion and trust in the media, making
it essential to develop techniques to detect and address them. Previous works
have largely focused on identifying biases in particular domains, e.g.,
political or gender biases. However, more comprehensive studies are needed to
detect biases across diverse domains. Large language models (LLMs) offer a
powerful way to analyze and understand natural language, making them ideal for
constructing datasets and detecting these biases. In this work, we have
explored various biases present in the news articles, built a dataset using
LLMs and present results obtained using multiple detection techniques. Our
approach highlights the importance of broad-spectrum bias detection and offers
new insights for improving the integrity of news articles.
|
2501.02486 | LLMPC: Large Language Model Predictive Control | cs.AI cs.CL | Recent advancements in prompting techniques for Large Language Models (LLMs)
have improved their reasoning, planning, and action abilities. This paper
examines these prompting techniques through the lens of model predictive
control (MPC). We show that LLMs act as implicit planning cost function
minimizers when planning prompts are used. Under our framework, we demonstrate
that LLM planning performance can be improved further by incorporating real
planning cost functions and evaluators.
|
2501.02487 | ACE++: Instruction-Based Image Creation and Editing via Context-Aware
Content Filling | cs.CV | We report ACE++, an instruction-based diffusion framework that tackles
various image generation and editing tasks. Inspired by the input format for
the inpainting task proposed by FLUX.1-Fill-dev, we improve the Long-context
Condition Unit (LCU) introduced in ACE and extend this input paradigm to any
editing and generation tasks. To take full advantage of image generative
priors, we develop a two-stage training scheme to minimize the efforts of
finetuning powerful text-to-image diffusion models like FLUX.1-dev. In the
first stage, we pre-train the model using task data with the 0-ref tasks from
the text-to-image model. There are many models in the community based on the
post-training of text-to-image foundational models that meet this training
paradigm of the first stage. For example, FLUX.1-Fill-dev deals primarily with
inpainting tasks and can be used as an initialization to accelerate the training
process. In the second stage, we finetune the above model to support the
general instructions using all tasks defined in ACE. To promote the widespread
application of ACE++ in different scenarios, we provide a comprehensive set of
models that cover both full finetuning and lightweight finetuning, while
considering general applicability and applicability in vertical scenarios. The
qualitative analysis showcases the superiority of ACE++ in terms of generating
image quality and prompt following ability. Code and models will be available
on the project page: https://ali-vilab.github.io/ACE_plus_page/.
|
2501.02491 | Rethinking IDE Customization for Enhanced HAX: A Hyperdimensional
Perspective | cs.SE cs.AI | As Integrated Development Environments (IDEs) increasingly integrate
Artificial Intelligence, Software Engineering faces both benefits like
productivity gains and challenges like mismatched user preferences. We propose
Hyper-Dimensional (HD) vector spaces to model Human-Computer Interaction,
focusing on user actions, stylistic preferences, and project context. These
contributions aim to inspire further research on applying HD computing in IDE
design.
|
2501.02493 | Predicting Vulnerability to Malware Using Machine Learning Models: A
Study on Microsoft Windows Machines | cs.CR cs.LG | In an era of escalating cyber threats, malware poses significant risks to
individuals and organizations, potentially leading to data breaches, system
failures, and substantial financial losses. This study addresses the urgent
need for effective malware detection strategies by leveraging Machine Learning
(ML) techniques on extensive datasets collected from Microsoft Windows
Defender. Our research aims to develop an advanced ML model that accurately
predicts malware vulnerabilities based on the specific conditions of individual
machines. Moving beyond traditional signature-based detection methods, we
incorporate historical data and innovative feature engineering to enhance
detection capabilities. This study makes several contributions: first, it
advances existing malware detection techniques by employing sophisticated ML
algorithms; second, it utilizes a large-scale, real-world dataset to ensure the
applicability of findings; third, it highlights the importance of feature
analysis in identifying key indicators of malware infections; and fourth, it
proposes models that can be adapted for enterprise environments, offering a
proactive approach to safeguarding extensive networks against emerging threats.
We aim to improve cybersecurity resilience, providing critical insights for
practitioners in the field and addressing the evolving challenges posed by
malware in a digital landscape. Finally, discussions on results, insights, and
conclusions are presented.
|
2501.02497 | Test-time Computing: from System-1 Thinking to System-2 Thinking | cs.AI cs.CL cs.LG | The remarkable performance of the o1 model in complex reasoning demonstrates
that test-time computing scaling can further unlock the model's potential,
enabling powerful System-2 thinking. However, there is still a lack of
comprehensive surveys for test-time computing scaling. We trace the concept of
test-time computing back to System-1 models. In System-1 models, test-time
computing addresses distribution shifts and improves robustness and
generalization through parameter updating, input modification, representation
editing, and output calibration. In System-2 models, it enhances the model's
reasoning ability to solve complex problems through repeated sampling,
self-correction, and tree search. We organize this survey according to the
trend of System-1 to System-2 thinking, highlighting the key role of test-time
computing in the transition from System-1 models to weak System-2 models, and
then to strong System-2 models. We also point out a few possible future
directions.
|
2501.02504 | Watch Video, Catch Keyword: Context-aware Keyword Attention for Moment
Retrieval and Highlight Detection | cs.CV cs.AI | The goal of video moment retrieval and highlight detection is to identify
specific segments and highlights based on a given text query. With the rapid
growth of video content and the overlap between these tasks, recent works have
addressed both simultaneously. However, they still struggle to fully capture
the overall video context, making it challenging to determine which words are
most relevant. In this paper, we present a novel Video Context-aware Keyword
Attention module that overcomes this limitation by capturing keyword variation
within the context of the entire video. To achieve this, we introduce a video
context clustering module that provides concise representations of the overall
video context, thereby enhancing the understanding of keyword dynamics.
Furthermore, we propose a keyword weight detection module with keyword-aware
contrastive learning that incorporates keyword information to enhance
fine-grained alignment between visual and textual features. Extensive
experiments on the QVHighlights, TVSum, and Charades-STA benchmarks demonstrate
that our proposed method significantly improves performance in moment retrieval
and highlight detection tasks compared to existing approaches. Our code is
available at: https://github.com/VisualAIKHU/Keyword-DETR
|
2501.02505 | Learning when to rank: Estimation of partial rankings from sparse, noisy
comparisons | physics.soc-ph cs.SI stat.ML | A common task arising in various domains is that of ranking items based on
the outcomes of pairwise comparisons, from ranking players and teams in sports
to ranking products or brands in marketing studies and recommendation systems.
Statistical inference-based methods such as the Bradley-Terry model, which
extract rankings based on an underlying generative model of the comparison
outcomes, have emerged as flexible and powerful tools to tackle the task of
ranking in empirical data. In situations with limited and/or noisy comparisons,
it is often challenging to confidently distinguish the performance of different
items based on the evidence available in the data. However, existing
inference-based ranking methods overwhelmingly choose to assign each item to a
unique rank or score, suggesting a meaningful distinction when there is none.
Here, we address this problem by developing a principled Bayesian methodology
for learning partial rankings -- rankings with ties -- that distinguishes among
the ranks of different items only when there is sufficient evidence available
in the data. Our framework is adaptable to any statistical ranking method in
which the outcomes of pairwise observations depend on the ranks or scores of
the items being compared. We develop a fast agglomerative algorithm to perform
Maximum A Posteriori (MAP) inference of partial rankings under our framework
and examine the performance of our method on a variety of real and synthetic
network datasets, finding that it frequently gives a more parsimonious summary
of the data than traditional ranking, particularly when observations are
sparse.
|
2501.02506 | ToolHop: A Query-Driven Benchmark for Evaluating Large Language Models
in Multi-Hop Tool Use | cs.CL | Effective evaluation of multi-hop tool use is critical for analyzing the
understanding, reasoning, and function-calling capabilities of large language
models (LLMs). However, progress has been hindered by a lack of reliable
evaluation datasets. To address this, we present ToolHop, a dataset comprising
995 user queries and 3,912 associated tools, specifically designed for rigorous
evaluation of multi-hop tool use. ToolHop ensures diverse queries, meaningful
interdependencies, locally executable tools, detailed feedback, and verifiable
answers through a novel query-driven data construction approach that includes
tool creation, document refinement, and code generation. We evaluate 14 LLMs
across five model families (i.e., LLaMA3.1, Qwen2.5, Gemini1.5, Claude3.5, and
GPT), uncovering significant challenges in handling multi-hop tool-use
scenarios. The leading model, GPT-4o, achieves an accuracy of 49.04%,
underscoring substantial room for improvement. Further analysis reveals
variations in tool-use strategies across model families, offering actionable
insights to guide the development of more effective approaches. Code and data
can be found at https://huggingface.co/datasets/bytedance-research/ToolHop.
|
2501.02508 | PTEENet: Post-Trained Early-Exit Neural Networks Augmentation for
Inference Cost Optimization | cs.LG cs.AI cs.CV | For many practical applications, a high computational cost of inference over
deep network architectures might be unacceptable. A small degradation in the
overall inference accuracy might be a reasonable price to pay for a significant
reduction in the required computational resources. In this work, we describe a
method for introducing "shortcuts" into the DNN feedforward inference process
by skipping costly feedforward computations whenever possible. The proposed
method is based on the previously described BranchyNet (Teerapittayanon et al.,
2016) and the EEnet (Demir, 2019) architectures that jointly train the main
network and early exit branches. We extend those methods by attaching branches
to pre-trained models and, thus, eliminating the need to alter the original
weights of the network. We also suggest a new branch architecture based on
convolutional building blocks to allow enough training capacity when applied on
large DNNs. The proposed architecture includes confidence heads that are used
for predicting the confidence level in the corresponding early exits. By
defining adjusted thresholds on these confidence extensions, we can control in
real-time the amount of data exiting from each branch and the overall tradeoff
between speed and accuracy of our model. In our experiments, we evaluate our
method using image datasets (SVHN and CIFAR10) and several DNN architectures
(ResNet, DenseNet, VGG) with varied depth. Our results demonstrate that the
proposed method enables us to reduce the average inference computational cost
and to further control the tradeoff between model accuracy and computation
cost.
|
2501.02509 | Facial Attractiveness Prediction in Live Streaming: A New Benchmark and
Multi-modal Method | cs.CV | Facial attractiveness prediction (FAP) has long been an important computer
vision task, which could be widely applied in live streaming for facial
retouching, content recommendation, etc. However, previous FAP datasets are
either small, closed-source, or lack diversity. Moreover, the corresponding FAP
models exhibit limited generalization and adaptation ability. To overcome these
limitations, in this paper we present LiveBeauty, the first large-scale
live-specific FAP dataset, in a more challenging application scenario, i.e.,
live streaming. 10,000 face images are collected from a live streaming platform
directly, with 200,000 corresponding attractiveness annotations obtained from a
well-devised subjective experiment, making LiveBeauty the largest open-access
FAP dataset in the challenging live scenario. Furthermore, a multi-modal FAP
method is proposed to measure the facial attractiveness in live streaming.
Specifically, we first extract holistic facial prior knowledge and multi-modal
aesthetic semantic features via a Personalized Attractiveness Prior Module
(PAPM) and a Multi-modal Attractiveness Encoder Module (MAEM), respectively,
then integrate the extracted features through a Cross-Modal Fusion Module
(CMFM). Extensive experiments conducted on both LiveBeauty and other
open-source FAP datasets demonstrate that our proposed method achieves
state-of-the-art performance. The dataset will be available soon.
|
2501.02511 | Can Impressions of Music be Extracted from Thumbnail Images? | cs.CL cs.CV cs.IR cs.SD eess.AS | In recent years, there has been a notable increase in research on machine
learning models for music retrieval and generation systems that are capable of
taking natural language sentences as inputs. However, there is a scarcity of
large-scale publicly available datasets, consisting of music data and their
corresponding natural language descriptions known as music captions. In
particular, non-musical information such as suitable situations for listening
to a track and the emotions elicited upon listening is crucial for describing
music. This type of information is underrepresented in existing music caption
datasets due to the challenges associated with extracting it directly from
music data. To address this issue, we propose a method for generating music
caption data that incorporates non-musical aspects inferred from music
thumbnail images, and validate the effectiveness of our approach through human
evaluations. Additionally, we created a dataset with approximately 360,000
captions containing non-musical aspects. Leveraging this dataset, we trained a
music retrieval model and demonstrated its effectiveness in music retrieval
tasks through evaluation.
|
2501.02518 | CHAIR -- Classifier of Hallucination as Improver | cs.CL | In this work, we introduce CHAIR (Classifier of Hallucination As ImproveR), a
supervised framework for detecting hallucinations by analyzing internal logits
from each layer of every token. Our method extracts a compact set of features
(maximum, minimum, mean, standard deviation, and slope) from the token
logits across all layers, enabling effective hallucination detection without
overfitting. Experiments on TruthfulQA and MMLU datasets demonstrate that CHAIR
significantly improves detection accuracy, particularly in zero-shot scenarios,
showcasing its robustness and generalizability. Beyond hallucination detection,
CHAIR highlights the potential of using internal representations for designing
advanced decoding strategies. By leveraging patterns in logits, we suggest that
more sophisticated models and adaptive decoding methods could further reduce
hallucinations and enhance text completion quality. CHAIR not only offers a
practical solution for detecting hallucinations but also lays the groundwork
for exploring richer representations in LLMs to improve their factuality and
coherence.
|
2501.02519 | Layout2Scene: 3D Semantic Layout Guided Scene Generation via Geometry
and Appearance Diffusion Priors | cs.CV | 3D scene generation conditioned on text prompts has significantly progressed
due to the development of 2D diffusion generation models. However, the textual
description of 3D scenes is inherently inaccurate and lacks fine-grained
control during training, leading to implausible scene generation. As an
intuitive and feasible solution, the 3D layout allows for precise specification
of object locations within the scene. To this end, we present a text-to-scene
generation method (namely, Layout2Scene) using additional semantic layout as
the prompt to inject precise control of 3D object positions. Specifically, we
first introduce a scene hybrid representation to decouple objects and
backgrounds, which is initialized via a pre-trained text-to-3D model. Then, we
propose a two-stage scheme to optimize the geometry and appearance of the
initialized scene separately. To fully leverage 2D diffusion priors in geometry
and appearance generation, we introduce a semantic-guided geometry diffusion
model and a semantic-geometry guided diffusion model which are finetuned on a
scene dataset. Extensive experiments demonstrate that our method can generate
more plausible and realistic scenes as compared to state-of-the-art approaches.
Furthermore, the generated scene allows for flexible yet precise editing,
thereby facilitating multiple downstream applications.
|
2501.02521 | Remote Inference over Dynamic Links via Adaptive Rate Deep Task-Oriented
Vector Quantization | eess.SP cs.AI | A broad range of technologies rely on remote inference, wherein acquired data
is conveyed over a communication channel for inference at a remote server.
Communication between the participating entities is often carried out over
rate-limited channels, necessitating data compression for reducing latency.
While deep learning facilitates joint design of the compression mapping along
with encoding and inference rules, existing learned compression mechanisms are
static, and struggle in adapting their resolution to changes in channel
conditions and to dynamic links. To address this, we propose Adaptive Rate
Task-Oriented Vector Quantization (ARTOVeQ), a learned compression mechanism
that is tailored for remote inference over dynamic links. ARTOVeQ is based on
designing nested codebooks along with a learning algorithm employing
progressive learning. We show that ARTOVeQ extends to support low-latency
inference that is gradually refined via successive refinement principles, and
that it enables the simultaneous usage of multiple resolutions when conveying
high-dimensional data. Numerical results demonstrate that the proposed scheme
yields remote deep inference that operates with multiple rates, supports a
broad range of bit budgets, and facilitates rapid inference that gradually
improves with more bits exchanged, while approaching the performance of
single-rate deep quantization methods.
|
2501.02523 | Face-MakeUp: Multimodal Facial Prompts for Text-to-Image Generation | cs.CV cs.AI | Facial images have extensive practical applications. Although the current
large-scale text-image diffusion models exhibit strong generation capabilities,
it is challenging to generate the desired facial images using only text prompt.
Image prompts are a logical choice. However, current methods of this type
generally focus on general domain. In this paper, we aim to optimize image
makeup techniques to generate the desired facial images. Specifically, (1) we
built a dataset of 4 million high-quality face image-text pairs
(FaceCaptionHQ-4M) based on LAION-Face to train our Face-MakeUp model; (2) to
maintain consistency with the reference facial image, we extract/learn
multi-scale content features and pose features for the facial image,
integrating these into the diffusion model to enhance the preservation of
facial identity features for diffusion models. Validation on two face-related
test datasets demonstrates that our Face-MakeUp can achieve the best
comprehensive performance. All code is available
at: https://github.com/ddw2AIGROUP2CQUPT/Face-MakeUp
|
2501.02526 | Unified Guidance for Geometry-Conditioned Molecular Generation | q-bio.BM cs.LG | Effectively designing molecular geometries is essential to advancing
pharmaceutical innovation, a domain that has attracted great attention
through the success of generative models and, in particular, diffusion models.
However, current molecular diffusion models are tailored towards a specific
downstream task and lack adaptability. We introduce UniGuide, a framework for
controlled geometric guidance of unconditional diffusion models that allows
flexible conditioning during inference without the requirement of extra
training or networks. We show how applications such as structure-based,
fragment-based, and ligand-based drug design are formulated in the UniGuide
framework and demonstrate on-par or superior performance compared to
specialised models. Offering a more versatile approach, UniGuide has the
potential to streamline the development of molecular generative models,
allowing them to be readily used in diverse application scenarios.
|
2501.02527 | Vision-Driven Prompt Optimization for Large Language Models in
Multimodal Generative Tasks | cs.CV | Vision generation remains a challenging frontier in artificial intelligence,
requiring seamless integration of visual understanding and generative
capabilities. In this paper, we propose a novel framework, Vision-Driven Prompt
Optimization (VDPO), that leverages Large Language Models (LLMs) to dynamically
generate textual prompts from visual inputs, guiding high-fidelity image
synthesis. VDPO combines a visual embedding prompt tuner, a textual instruction
generator, and a vision generation module to achieve state-of-the-art
performance in diverse vision generation tasks. Extensive experiments on
benchmarks such as COCO and Sketchy demonstrate that VDPO consistently
outperforms existing methods, achieving significant improvements in FID, LPIPS,
and BLEU/CIDEr scores. Additional analyses reveal the scalability, robustness,
and generalization capabilities of VDPO, making it a versatile solution for
in-domain and out-of-domain tasks. Human evaluations further validate the
practical superiority of VDPO in generating visually appealing and semantically
coherent outputs.
|
2501.02530 | UDMC: Unified Decision-Making and Control Framework for Urban Autonomous
Driving with Motion Prediction of Traffic Participants | cs.RO cs.DC cs.SY eess.SY | Current autonomous driving systems often struggle to balance decision-making
and motion control while ensuring safety and traffic rule compliance,
especially in complex urban environments. Existing methods may fall short due
to separate handling of these functionalities, leading to inefficiencies and
safety compromises. To address these challenges, we introduce UDMC, an
interpretable and unified Level 4 autonomous driving framework. UDMC integrates
decision-making and motion control into a single optimal control problem (OCP),
considering the dynamic interactions with surrounding vehicles, pedestrians,
road lanes, and traffic signals. By employing innovative potential functions to
model traffic participants and regulations, and incorporating a specialized
motion prediction module, our framework enhances on-road safety and rule
adherence. The integrated design allows for real-time execution of flexible
maneuvers suited to diverse driving scenarios. High-fidelity simulations
conducted in CARLA demonstrate the framework's computational efficiency,
robustness, and safety, resulting in superior driving performance when compared
against various baseline models. Our open-source project is available at
https://github.com/henryhcliu/udmc_carla.git.
|
2501.02531 | Towards New Benchmark for AI Alignment & Sentiment Analysis in Socially
Important Issues: A Comparative Study of Human and LLMs in the Context of AGI | cs.CY cs.CL | With the expansion of neural networks, such as large language models,
humanity is exponentially heading towards superintelligence. As various AI
systems are increasingly integrated into the fabric of societies-through
recommending values, devising creative solutions, and making decisions-it
becomes critical to assess how these AI systems impact humans in the long run.
This research aims to contribute towards establishing a benchmark for
evaluating the sentiment of various Large Language Models in socially important
issues. The methodology adopted was a Likert scale survey. Seven LLMs,
including GPT-4 and Bard, were analyzed and compared against sentiment data
from three independent human sample populations. Temporal variations in
sentiment were also evaluated over three consecutive days. The results
highlighted a diversity in sentiment scores among LLMs, ranging from 3.32 to
4.12 out of 5. GPT-4 recorded the most positive sentiment score towards AGI,
whereas Bard leaned towards neutral sentiment. The human samples, in contrast,
showed a lower average sentiment of 2.97. The temporal comparison revealed
differences in sentiment evolution between LLMs over the three days, ranging
from 1.03% to 8.21%. The study's analysis outlines the prospect of potential
conflicts of interest and possible biases in LLMs' sentiment formation.
Results indicate that LLMs, akin to human cognitive processes, could
potentially develop unique sentiments and subtly influence societies'
perceptions towards various opinions formed within the LLMs.
|
2501.02532 | Evaluating Large Language Models Against Human Annotators in Latent
Content Analysis: Sentiment, Political Leaning, Emotional Intensity, and
Sarcasm | cs.CL cs.AI cs.CY | In the era of rapid digital communication, vast amounts of textual data are
generated daily, demanding efficient methods for latent content analysis to
extract meaningful insights. Large Language Models (LLMs) offer potential for
automating this process, yet comprehensive assessments comparing their
performance to human annotators across multiple dimensions are lacking. This
study evaluates the reliability, consistency, and quality of seven
state-of-the-art LLMs, including variants of OpenAI's GPT-4, Gemini, Llama, and
Mixtral, relative to human annotators in analyzing sentiment, political
leaning, emotional intensity, and sarcasm detection. A total of 33 human
annotators and eight LLM variants assessed 100 curated textual items,
generating 3,300 human and 19,200 LLM annotations, with LLMs evaluated across
three time points to examine temporal consistency. Inter-rater reliability was
measured using Krippendorff's alpha, and intra-class correlation coefficients
assessed consistency over time. The results reveal that both humans and LLMs
exhibit high reliability in sentiment analysis and political leaning
assessments, with LLMs demonstrating higher internal consistency than humans.
In emotional intensity, LLMs displayed higher agreement compared to humans,
though humans rated emotional intensity significantly higher. Both groups
struggled with sarcasm detection, evidenced by low agreement. LLMs showed
excellent temporal consistency across all dimensions, indicating stable
performance over time. This research concludes that LLMs, especially GPT-4, can
effectively replicate human analysis in sentiment and political leaning,
although human expertise remains essential for emotional intensity
interpretation. The findings demonstrate the potential of LLMs for consistent
and high-quality performance in certain areas of latent content analysis.
|
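For readers unfamiliar with the reliability measure used in the study above, Krippendorff's alpha for nominal data can be computed from a coincidence matrix. The sketch below is a generic illustration of the statistic; the function name and structure are ours, not the study's code:

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data.
    units: list of lists; each inner list holds the labels one item received.
    Items with fewer than two labels contribute nothing (they are unpairable)."""
    o = Counter()  # coincidence matrix o[(c, k)]
    for ratings in units:
        m = len(ratings)
        if m < 2:
            continue
        for i, j in permutations(range(m), 2):  # ordered label pairs within the item
            o[(ratings[i], ratings[j])] += 1.0 / (m - 1)
    n = sum(o.values())                         # total pairable values
    n_c = Counter()                             # marginal totals per category
    for (c, _), w in o.items():
        n_c[c] += w
    d_o = sum(w for (c, k), w in o.items() if c != k) / n   # observed disagreement
    d_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n * (n - 1))
    return 1.0 - d_o / d_e                      # 1 = perfect agreement, ~0 = chance
```

Perfect agreement yields alpha = 1; values near 0 indicate agreement no better than chance.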
2501.02534 | Pixel-Wise Feature Selection for Perceptual Edge Detection without
post-processing | cs.CV | Although deep convolutional neural networks (CNNs) have significantly
enhanced performance in image edge detection (ED), current models remain highly
dependent on post-processing techniques such as non-maximum suppression (NMS),
and often fail to deliver satisfactory perceptual results; moreover, their
performance deteriorates significantly as the allowed error tolerance
distance decreases. These limitations arise from the uniform fusion of features
across all pixels, regardless of their specific characteristics, such as the
distinction between textural and edge areas. If the features extracted by the
ED models are selected more meticulously and encompass greater diversity, the
resulting predictions are expected to be more accurate and perceptually
meaningful. Motivated by this observation, this paper proposes a novel feature
selection paradigm for deep networks that facilitates the differential
selection of features and can be seamlessly integrated into existing ED models.
By incorporating this additional structure, the performance of conventional ED
models is substantially enhanced without post-processing, while simultaneously
enhancing the perceptual quality of the predictions. Extensive experimental
evaluations validate the effectiveness of the proposed model.
|
2501.02535 | A completely uniform transformer for parity | cs.LG cs.AI | We construct a 3-layer constant-dimension transformer recognizing the parity
language, in which neither the parameter matrices nor the positional encoding
depends on the input length. This improves upon a construction of Chiang and
Cholak, whose positional encoding depends on the input length (though their
construction has only 2 layers).
|
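As a toy illustration of the task in the record above (not the paper's 3-layer construction): the parity language itself, and the single quantity a uniform attention head can compute, from which any length-independent construction must recover whether the count of 1s is odd:

```python
def parity(bits):
    """Membership in the parity language: bit-strings with an odd number of 1s."""
    return sum(bits) % 2 == 1

def uniform_attention_value(bits):
    """A single uniform (averaging) attention head over {0, 1} token values
    computes the mean, i.e. k/n where k is the number of 1s and n the length.
    Length-independent constructions must turn this fraction into the parity
    of k without wiring n into the weights or the positional encoding."""
    return sum(bits) / len(bits)
```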
2501.02536 | Low RCS High-Gain Broadband Substrate Integrated Waveguide Antenna Based
on Elliptical Polarization Conversion Metasurface | eess.SY cs.SY | We designed an elliptical polarization conversion metasurface (PCM) for Ka-band
applications, alongside a high-gain substrate integrated waveguide (SIW)
antenna. The PCM elements are integrated into the antenna design in a
chessboard array configuration, with the goal of achieving effective reduction
in the antenna's radar cross section (RCS). Both the PCM elements and antenna
structure exhibit a simple design. The top layer of the metasurface (MS)
elements employs an elliptical pattern symmetric along the diagonal, enabling
efficient conversion of linearly polarized waves. The antenna component, on the
other hand, consists of a broadband dipole antenna fed by SIW slot coupling.
Verified through simulations, the polarization conversion bandwidth of this PCM
unit reaches 80.38%, with the polarization conversion ratio (PCR) exceeding 90%
(25.3-59.3 GHz), demonstrating exceptional conversion performance. When the
dipole antenna is combined with the PCM, its -10 dB impedance bandwidth reaches
15.09% (33.7-39.2 GHz), with a maximum realized gain of 9.1 dBi. Notably, the
antenna loaded with the chessboard PCM structure effectively disperses the
energy of scattered echoes around, significantly reducing the concentration of
scattered energy in the direction of the incident wave, thereby achieving an
effective reduction in RCS.
|
2501.02539 | AHMSA-Net: Adaptive Hierarchical Multi-Scale Attention Network for
Micro-Expression Recognition | cs.CV | Micro-expression recognition (MER) presents a significant challenge due to
the transient and subtle nature of the motion changes involved. In recent
years, deep learning methods based on attention mechanisms have made some
breakthroughs in MER. However, these methods still suffer from the limitations
of insufficient feature capture and poor dynamic adaptation when coping with
the instantaneous subtle movement changes of micro-expressions. Therefore, in
this paper, we design an Adaptive Hierarchical Multi-Scale Attention Network
(AHMSA-Net) for MER. Specifically, we first utilize the onset and apex frames
of the micro-expression sequence to extract three-dimensional (3D) optical flow
maps, including horizontal optical flow, vertical optical flow, and optical
flow strain. Subsequently, the optical flow feature maps are inputted into
AHMSA-Net, which consists of two parts: an adaptive hierarchical framework and
a multi-scale attention mechanism. Based on the adaptive downsampling
hierarchical attention framework, AHMSA-Net captures the subtle changes of
micro-expressions from different granularities (fine and coarse) by dynamically
adjusting the size of the optical flow feature map at each layer. Based on the
multi-scale attention mechanism, AHMSA-Net learns micro-expression action
information by fusing features from different scales (channel and spatial).
These two modules work together to comprehensively improve the accuracy of MER.
Additionally, rigorous experiments demonstrate that the proposed method
achieves competitive results on major micro-expression databases, with
AHMSA-Net achieving recognition accuracy of up to 78.21% on composite databases
(SMIC, SAMM, CASME II) and 77.08% on the CAS(ME)^3 database.
|
2501.02546 | TreeMatch: A Fully Unsupervised WSD System Using Dependency Knowledge on
a Specific Domain | cs.CL cs.AI | Word sense disambiguation (WSD) is one of the main challenges in
Computational Linguistics. TreeMatch is a WSD system originally developed using
data from SemEval 2007 Task 7 (Coarse-grained English All-words Task) that has
been adapted for use in SemEval 2010 Task 17 (All-words Word Sense
Disambiguation on a Specific Domain). The system is based on a fully
unsupervised method using dependency knowledge drawn from a domain-specific
knowledge base that was built for this task. When evaluated on the task, the
system's precision is above the Most Frequent Selection baseline.
|
2501.02547 | Transformers Simulate MLE for Sequence Generation in Bayesian Networks | stat.ML cs.LG | Transformers have achieved significant success in various fields, notably
excelling in tasks involving sequential data like natural language processing.
Despite these achievements, the theoretical understanding of transformers'
capabilities remains limited. In this paper, we investigate the theoretical
capabilities of transformers to autoregressively generate sequences in Bayesian
networks based on in-context maximum likelihood estimation (MLE). Specifically,
we consider a setting where a context is formed by a set of independent
sequences generated according to a Bayesian network. We demonstrate that there
exists a simple transformer model that can (i) estimate the conditional
probabilities of the Bayesian network according to the context, and (ii)
autoregressively generate a new sample according to the Bayesian network with
estimated conditional probabilities. We further demonstrate in extensive
experiments that such a transformer does not only exist in theory, but can also
be effectively obtained through training. Our analysis highlights the potential
of transformers to learn complex probabilistic models and contributes to a
better understanding of large language models as a powerful class of sequence
generators.
|
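The in-context estimator discussed in the record above can be illustrated directly for the special case of a chain-structured Bayesian network (a Markov chain). The counting-based MLE and autoregressive sampler below are our sketch of what the transformer is shown to implement, not the paper's model:

```python
import random
from collections import Counter

def in_context_mle_chain(context, vocab):
    """In-context MLE of transition probabilities P(x_t = b | x_{t-1} = a),
    assuming a chain-structured Bayesian network: count transitions across
    the independent context sequences and normalize."""
    pair, prev = Counter(), Counter()
    for seq in context:
        for a, b in zip(seq, seq[1:]):
            pair[(a, b)] += 1
            prev[a] += 1
    return {(a, b): pair[(a, b)] / prev[a] for a in prev for b in vocab}

def generate(probs, start, length, vocab, rng):
    """Autoregressively sample a new sequence from the estimated conditionals."""
    seq = [start]
    for _ in range(length - 1):
        weights = [probs.get((seq[-1], b), 0.0) for b in vocab]
        seq.append(rng.choices(vocab, weights=weights)[0])
    return seq
```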
2501.02548 | AMM: Adaptive Modularized Reinforcement Model for Multi-city Traffic
Signal Control | cs.LG cs.AI | Traffic signal control (TSC) is an important and widely studied direction.
Recently, reinforcement learning (RL) methods have been used to solve TSC
problems and achieve superior performance over conventional TSC methods.
However, applying RL methods to the real world is challenging due to the huge
cost of experiments in real-world traffic environments. One possible solution
is TSC domain adaptation, which adapts trained models to target environments
and reduces the number of interactions and the training cost. However, existing
TSC domain adaptation methods still face two major issues: the lack of
consideration for differences across cities and the low utilization of
multi-city data.
To solve the aforementioned issues, we propose an approach named Adaptive
Modularized Model (AMM). By modularizing TSC problems and network models, we
overcome the challenge of possible changes in environmental observations. We
also aggregate multi-city experience through meta-learning. We conduct
extensive experiments on different cities and show that AMM can achieve
excellent performance with limited interactions in target environments and
outperform existing methods. We also demonstrate the feasibility and
generalizability of our method.
|
2501.02549 | From Language To Vision: A Case Study of Text Animation | cs.CL | Information can be expressed in multiple formats including natural language,
images, and motions. Humans usually have little difficulty converting from one
format to another, which often reflects a true understanding of the encoded
information. Moreover, such conversions have broad applicability in many
real-world settings. In this paper, we present a text
visualization system that can visualize free text with animations. Our system
is illustrated by visualizing example sentences of elementary Physics laws.
|
2501.02552 | Multi-LLM Collaborative Caption Generation in Scientific Documents | cs.CL cs.CV | Scientific figure captioning is a complex task that requires generating
contextually appropriate descriptions of visual content. However, existing
methods often fall short by utilizing incomplete information, treating the task
solely as either an image-to-text or text summarization problem. This
limitation hinders the generation of high-quality captions that fully capture
the necessary details. Moreover, existing data sourced from arXiv papers
contain low-quality captions, posing significant challenges for training large
language models (LLMs). In this paper, we introduce a framework called
Multi-LLM Collaborative Figure Caption Generation (MLBCAP) to address these
challenges by leveraging specialized LLMs for distinct sub-tasks. Our approach
unfolds in three key modules: (Quality Assessment) We utilize multimodal LLMs
to assess the quality of training data, enabling the filtration of low-quality
captions. (Diverse Caption Generation) We then employ a strategy of
fine-tuning/prompting multiple LLMs on the captioning task to generate
candidate captions. (Judgment) Lastly, we prompt a prominent LLM to select the
highest quality caption from the candidates, followed by refining any remaining
inaccuracies. Human evaluations demonstrate that informative captions produced
by our approach rank better than human-written captions, highlighting its
effectiveness. Our code is available at https://github.com/teamreboott/MLBCAP
|
2501.02556 | Spatial Network Calculus: Toward Deterministic Wireless Networking | cs.NI cs.IT math.IT | This paper extends the classical network calculus to spatial scenarios,
focusing on wireless networks with heterogeneous traffic and varying transmit
power levels. Building on spatial network calculus, a prior extension of
network calculus to spatial settings, we propose a generalized framework by
introducing spatial regulations for stationary marked point processes. The
regulations correspond to two key constraints: the total transmit power within
a spatial region and the cumulative received power at a receiver. Then we prove
the equivalence of ball regulation and shot-noise regulation for stationary
marked point processes and establish a universal lower bound on the performance
of all network links under these constraints. This framework is applicable to
diverse network scenarios, as demonstrated by the analysis of performance
guarantees for networks with multi-class users. In addition, we propose an
SINR-based power control scheme adapted to user traffic, which ensures
differentiated quality of service (QoS) for different user classes. We derive
deterministic performance guarantees for all links in complex and heterogeneous
wireless networks.
|
2501.02558 | Neural Error Covariance Estimation for Precise LiDAR Localization | cs.RO cs.CV | Autonomous vehicles have gained significant attention due to technological
advancements and their potential to transform transportation. A critical
challenge in this domain is precise localization, particularly in LiDAR-based
map matching, which is prone to errors due to degeneracy in the data. Most
sensor fusion techniques, such as the Kalman filter, rely on accurate error
covariance estimates for each sensor to improve localization accuracy. However,
obtaining reliable covariance values for map matching remains a complex task.
To address this challenge, we propose a neural network-based framework for
predicting localization error covariance in LiDAR map matching. To achieve
this, we introduce a novel dataset generation method specifically designed for
error covariance estimation. In our evaluation using a Kalman filter, we
achieved a 2 cm improvement in localization accuracy, a significant enhancement
in this domain.
|
2501.02559 | KM-UNet KAN Mamba UNet for medical image segmentation | eess.IV cs.AI cs.CV | Medical image segmentation is a critical task in medical imaging analysis.
Traditional CNN-based methods struggle with modeling long-range dependencies,
while Transformer-based models, despite their success, suffer from quadratic
computational complexity. To address these limitations, we propose KM-UNet, a
novel U-shaped network architecture that combines the strengths of
Kolmogorov-Arnold Networks (KANs) and state-space models (SSMs). KM-UNet
leverages the Kolmogorov-Arnold representation theorem for efficient feature
representation and SSMs for scalable long-range modeling, achieving a balance
between accuracy and computational efficiency. We evaluate KM-UNet on five
benchmark datasets: ISIC17, ISIC18, CVC, BUSI, and GLAS. Experimental results
demonstrate that KM-UNet achieves competitive performance compared to
state-of-the-art methods in medical image segmentation tasks. To the best of
our knowledge, KM-UNet is the first medical image segmentation framework
integrating KANs and SSMs. This work provides a valuable baseline and new
insights for the development of more efficient and interpretable medical image
segmentation systems. The code is open source at
https://github.com/2760613195/KM_UNet
Keywords: KAN, Mamba, state-space models, UNet, medical image segmentation,
deep learning
|
2501.02564 | Balanced Multi-view Clustering | cs.CV cs.AI cs.LG | Multi-view clustering (MvC) aims to integrate information from different
views to enhance the capability of the model in capturing the underlying data
structures. The widely used joint training paradigm in MvC may not fully
leverage the multi-view information, owing to the imbalanced and
under-optimized view-specific features caused by the uniform learning objective
applied to all views. For instance, particular views with more discriminative
information could dominate the learning process in the joint training paradigm,
leading to other views being under-optimized. To alleviate this issue, we first
analyze the imbalanced phenomenon in the joint-training paradigm of multi-view
clustering from the perspective of gradient descent for each view-specific
feature extractor. Then, we propose a novel balanced multi-view clustering
(BMvC) method, which introduces a view-specific contrastive regularization
(VCR) to modulate the optimization of each view. Concretely, VCR preserves the
sample similarities captured from the joint features and view-specific ones
into the clustering distributions corresponding to view-specific features to
enhance the learning process of view-specific feature extractors. Additionally,
a theoretical analysis is provided to illustrate that VCR adaptively modulates
the magnitudes of gradients for updating the parameters of view-specific
feature extractors to achieve a balanced multi-view learning procedure. In such
a manner, BMvC achieves a better trade-off between the exploitation of
view-specific patterns and the exploration of view-invariant patterns to fully
learn the multi-view information for the clustering task. Finally, a set of
experiments are conducted to verify the superiority of the proposed method
compared with state-of-the-art approaches on eight benchmark MvC datasets.
|
2501.02565 | Efficient Graph Condensation via Gaussian Process | cs.LG | Graph condensation reduces the size of large graphs while preserving
performance, addressing the scalability challenges of Graph Neural Networks
caused by computational inefficiencies on large datasets. Existing methods
often rely on bi-level optimization, requiring extensive GNN training and
limiting their scalability. To address these issues, this paper proposes Graph
Condensation via Gaussian Process (GCGP), a novel and computationally efficient
approach to graph condensation. GCGP utilizes a Gaussian Process (GP), with the
condensed graph serving as observations, to estimate the posterior distribution
of predictions. This approach eliminates the need for the iterative and
resource-intensive training typically required by GNNs. To enhance the
capability of the GCGP in capturing dependencies between function values, we
derive a specialized covariance function that incorporates structural
information. This covariance function broadens the receptive field of input
nodes by local neighborhood aggregation, thereby facilitating the
representation of intricate dependencies within the nodes. To address the
challenge of optimizing binary structural information in condensed graphs,
Concrete random variables are utilized to approximate the binary adjacency
matrix in a continuous counterpart. This relaxation process allows the
adjacency matrix to be represented in a differentiable form, enabling the
application of gradient-based optimization techniques to discrete graph
structures. Experimental results show that the proposed GCGP method efficiently
condenses large-scale graph data while preserving predictive performance,
addressing the scalability and efficiency challenges. The implementation of our
method is publicly available at https://github.com/WANGLin0126/GCGP.
|
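The Concrete-variable relaxation mentioned in the record above can be sketched for a symmetric adjacency matrix as follows. This is a generic binary-Concrete sample; the names and simplifications are ours, not GCGP's implementation:

```python
import numpy as np

def binary_concrete_adjacency(logits, temperature, rng):
    """Sample a relaxed (differentiable) adjacency matrix via the binary
    Concrete distribution: add Logistic(0, 1) noise to the edge logits and
    squash with a tempered sigmoid, so entries lie in (0, 1) and approach
    {0, 1} as the temperature decreases. Symmetrized, zero diagonal."""
    u = rng.uniform(1e-8, 1.0 - 1e-8, size=logits.shape)
    logistic = np.log(u) - np.log1p(-u)                   # Logistic(0, 1) noise
    a = 1.0 / (1.0 + np.exp(-(logits + logistic) / temperature))
    upper = np.triu(a, 1)                                 # keep strict upper triangle
    return upper + upper.T
```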
2501.02569 | A review on reinforcement learning methods for mobility on demand
systems | cs.MA | Mobility on Demand (MoD) refers to mobility systems that operate on the basis
of immediate travel demand. Typically, such a system consists of a fleet of
vehicles that can be booked by customers when needed. The operation of these
services consists of two main tasks: deciding how vehicles are assigned to
requests (vehicle assignment); and deciding where vehicles move (including
charging stations) when they are not serving a request (rebalancing). A field
of research is emerging around the design of operation strategies for MoD
services, and an increasingly popular trend is the use of learning-based (most
often Reinforcement Learning) approaches. We review, in this work, the
literature on algorithms for operation strategies of MoD systems that use
approaches based on Reinforcement Learning with a focus on the types of
algorithms being used. The novelty of our review lies in three aspects:
First, the algorithmic details are discussed and the approaches classified in a
unified framework for sequential decision-making. Second, the use cases on
which approaches are tested and their features are taken into account. Finally,
validation methods that can be found across the literature are discussed. The
review aims at advancing the state of the art by identifying similarities and
differences between approaches and highlighting current research directions.
|
2501.02570 | Decoding fMRI Data into Captions using Prefix Language Modeling | cs.CV cs.AI cs.CL | With the advancements in Large Language and Latent Diffusion models, brain
decoding has achieved remarkable results in recent years. The works on the NSD
dataset, with stimuli images from the COCO dataset, leverage the embeddings
from the CLIP model for image reconstruction and GIT for captioning. However,
the current captioning approach introduces the challenge of potential data
contamination given that the GIT model was trained on the COCO dataset. In this
work, we present an alternative method for decoding brain signals into image
captions by predicting a DINOv2 model's embedding of an image from the
corresponding fMRI signal and then providing its [CLS] token as the prefix to
the GPT-2 language model, which decreases computational requirements
considerably. Additionally, instead of the commonly used linear regression, we
explore a 3D convolutional neural network mapping of fMRI signals to the image
embedding space to better account for the positional information of voxels.
|
2501.02572 | Energy Optimization of Multi-task DNN Inference in MEC-assisted XR
Devices: A Lyapunov-Guided Reinforcement Learning Approach | cs.NI cs.AI cs.SY eess.SY | Extended reality (XR), blending virtual and real worlds, is a key application
of future networks. While AI advancements enhance XR capabilities, they also
impose significant computational and energy challenges on lightweight XR
devices. In this paper, we developed a distributed queue model for multi-task
DNN inference, addressing issues of resource competition and queue coupling. In
response to the challenges posed by the high energy consumption and limited
resources of XR devices, we designed a dual time-scale joint optimization
strategy for model partitioning and resource allocation, formulated as a
bi-level optimization problem. This strategy aims to minimize the total energy
consumption of XR devices while ensuring queue stability and adhering to
computational and communication resource constraints. To tackle this problem,
we devised a Lyapunov-guided Proximal Policy Optimization algorithm, named
LyaPPO. Numerical results demonstrate that the LyaPPO algorithm outperforms the
baselines, achieving energy conservation of 24.79% to 46.14% under varying
resource capacities. Specifically, the proposed algorithm reduces the energy
consumption of XR devices by 24.29% to 56.62% compared to baseline algorithms.
|
2501.02573 | LeetDecoding: A PyTorch Library for Exponentially Decaying Causal Linear
Attention with CUDA Implementations | cs.LG cs.CL cs.MS | The machine learning and data science community has made significant, if
dispersed, progress in accelerating transformer-based large language models
(LLMs), and one promising approach is to replace the original causal attention
in a generative pre-trained transformer (GPT) with \emph{exponentially decaying
causal linear attention}. In this paper, we present LeetDecoding, which is the
first Python package that provides a large set of computation routines for this
fundamental operator. The launch of LeetDecoding was motivated by the current
lack of (1) a clear understanding of the complexity of this operator, (2) a
comprehensive collection of existing computation methods (usually spread
across seemingly unrelated fields), and (3) CUDA implementations for fast
inference on GPU. LeetDecoding's design is easy to integrate with existing
linear-attention LLMs, and allows researchers to benchmark and evaluate new computation
methods for exponentially decaying causal linear attention. The usage of
LeetDecoding does not require any knowledge of GPU programming and the
underlying complexity analysis, intentionally making LeetDecoding accessible to
LLM practitioners. The source code of LeetDecoding is provided at
\href{https://github.com/Computational-Machine-Intelligence/LeetDecoding}{this
GitHub repository}, and users can simply install LeetDecoding by the command
\texttt{pip install leet-decoding}.
|
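The operator that the package above targets admits a simple O(T) recurrence. The sketch below is our illustration of the mathematical operator, not the package's API:

```python
import numpy as np

def decaying_causal_linear_attention(q, k, v, lam):
    """y_t = sum_{s <= t} lam**(t - s) * (q_t . k_s) * v_s, computed in O(T)
    time via the recurrence S_t = lam * S_{t-1} + k_t v_t^T, y_t = q_t S_t."""
    out = np.empty_like(v)
    S = np.zeros((q.shape[1], v.shape[1]))   # running decayed summary of k v^T
    for t in range(q.shape[0]):
        S = lam * S + np.outer(k[t], v[t])
        out[t] = q[t] @ S
    return out
```

Checking the recurrence against the naive quadratic-time double sum confirms term-by-term equality.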
2501.02576 | DepthMaster: Taming Diffusion Models for Monocular Depth Estimation | cs.CV | Monocular depth estimation within the diffusion-denoising paradigm
demonstrates impressive generalization ability but suffers from low inference
speed. Recent methods adopt a single-step deterministic paradigm to improve
inference efficiency while maintaining comparable performance. However, they
overlook the gap between generative and discriminative features, leading to
suboptimal results. In this work, we propose DepthMaster, a single-step
diffusion model designed to adapt generative features for the discriminative
depth estimation task. First, to mitigate overfitting to texture details
introduced by generative features, we propose a Feature Alignment module, which
incorporates high-quality semantic features to enhance the denoising network's
representation capability. Second, to address the lack of fine-grained details
in the single-step deterministic framework, we propose a Fourier Enhancement
module to adaptively balance low-frequency structure and high-frequency
details. We adopt a two-stage training strategy to fully leverage the potential
of the two modules. In the first stage, we focus on learning the global scene
structure with the Feature Alignment module, while in the second stage, we
exploit the Fourier Enhancement module to improve the visual quality. Through
these efforts, our model achieves state-of-the-art performance in terms of
generalization and detail preservation, outperforming other diffusion-based
methods across various datasets. Our project page can be found at
https://indu1ge.github.io/DepthMaster_page.
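As rough intuition for the Fourier Enhancement idea, the 1-D toy below (our own function names; the paper's module operates on 2-D feature maps with learned weights) reweights the low- and high-frequency bands of a signal via a DFT mask and inverts the transform.

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def fourier_balance(x, cutoff, w_low, w_high):
    """Reweight low- vs high-frequency content of a 1-D signal.
    Frequencies with min(k, N-k) <= cutoff count as 'low'."""
    X = dft(x)
    N = len(X)
    Y = [(w_low if min(k, N - k) <= cutoff else w_high) * X[k] for k in range(N)]
    return idft(Y)
```

With w_low = w_high = 1 the signal is recovered exactly; raising w_high relative to w_low emphasizes fine detail, which is the trade-off the module learns to balance adaptively.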
|
2501.02580 | LP-ICP: General Localizability-Aware Point Cloud Registration for Robust
Localization in Extreme Unstructured Environments | cs.RO | The Iterative Closest Point (ICP) algorithm is a crucial component of
LiDAR-based SLAM algorithms. However, its performance can be negatively
affected in unstructured environments that lack features and geometric
structures, leading to low accuracy and poor robustness in localization and
mapping. It is known that degeneracy caused by the lack of geometric
constraints can lead to errors in 6-DOF pose estimation along ill-conditioned
directions. Therefore, there is a need for a broader and more fine-grained
degeneracy detection and handling method. This paper proposes a new point cloud
registration framework, LP-ICP, that combines point-to-line and point-to-plane
distance metrics in the ICP algorithm, with localizability detection and
handling. LP-ICP consists of a localizability detection module and an
optimization module. The localizability detection module performs
localizability analysis by utilizing the correspondences between edge points
(with low local smoothness) to lines and planar points (with high local
smoothness) to planes between the scan and the map. The localizability
contribution of individual correspondence constraints can be applied to a
broader range. The optimization module adds additional soft and hard
constraints to the optimization equations based on the localizability category.
This allows the pose to be constrained along ill-conditioned directions, with
updates either tending towards the constraint value or leaving the initial
estimate unchanged. This improves accuracy and reduces fluctuations. The
proposed method is extensively evaluated through experiments on both simulation
and real-world datasets, demonstrating higher or comparable accuracy than the
state-of-the-art methods. The dataset and code of this paper will also be
open-sourced at https://github.com/xuqingyuan2000/LP-ICP.
|
2501.02583 | Gaze Behavior During a Long-Term, In-Home, Social Robot Intervention for
Children with ASD | cs.RO cs.CV | Atypical gaze behavior is a diagnostic hallmark of Autism Spectrum Disorder
(ASD), playing a substantial role in the social and communicative challenges
that individuals with ASD face. This study explores the impacts of a
month-long, in-home intervention designed to promote triadic interactions
between a social robot, a child with ASD, and their caregiver. Our results
indicate that the intervention successfully promoted appropriate gaze behavior,
encouraging children with ASD to follow the robot's gaze, resulting in more
frequent and prolonged instances of spontaneous eye contact and joint attention
with their caregivers. Additionally, we observed specific timelines for
behavioral variability and novelty effects among users. Furthermore, diagnostic
measures for ASD emerged as strong predictors of gaze patterns for both
caregivers and children. These results deepen our understanding of ASD gaze
patterns and highlight the potential for clinical relevance of robot-assisted
interventions.
|
2501.02584 | Efficient Architectures for High Resolution Vision-Language Models | cs.CV cs.AI cs.CL cs.LG | Vision-Language Models (VLMs) have recently experienced significant
advancements. However, challenges persist in the accurate recognition of fine
details within high resolution images, which limits performance in multiple
tasks. This work introduces Pheye, a novel architecture that efficiently
processes high-resolution images while training fewer parameters than similarly
sized VLMs. Notably, Pheye achieves high efficiency while maintaining strong
performance, particularly in tasks that demand fine-grained image understanding
and/or the handling of scene-text.
|
2501.02593 | Evolving Skeletons: Motion Dynamics in Action Recognition | cs.CV cs.AI cs.LG | Skeleton-based action recognition has gained significant attention for its
ability to efficiently represent spatiotemporal information in a lightweight
format. Most existing approaches use graph-based models to process skeleton
sequences, where each pose is represented as a skeletal graph structured around
human physical connectivity. Among these, the Spatiotemporal Graph
Convolutional Network (ST-GCN) has become a widely used framework.
Alternatively, hypergraph-based models, such as the Hyperformer, capture
higher-order correlations, offering a more expressive representation of complex
joint interactions. A recent advancement, termed Taylor Videos, introduces
motion-enhanced skeleton sequences by embedding motion concepts, providing a
fresh perspective on interpreting human actions in skeleton-based action
recognition. In this paper, we conduct a comprehensive evaluation of both
traditional skeleton sequences and Taylor-transformed skeletons using ST-GCN
and Hyperformer models on the NTU-60 and NTU-120 datasets. We compare skeletal
graph and hypergraph representations, analyzing static poses against
motion-injected poses. Our findings highlight the strengths and limitations of
Taylor-transformed skeletons, demonstrating their potential to enhance motion
dynamics while exposing current challenges in fully realizing their benefits. This
study underscores the need for innovative skeletal modelling techniques to
effectively handle motion-rich data and advance the field of action
recognition.
|
2501.02595 | Rotatable Antenna Enabled Wireless Communication: Modeling and
Optimization | cs.IT eess.SP math.IT | Fluid antenna system (FAS) and movable antenna (MA) have recently emerged as
promising technologies to exploit new spatial degrees of freedom (DoFs), which
have attracted growing attention in wireless communication. In this paper, we
propose a new rotatable antenna (RA) model to improve the performance of
wireless communication systems. Different from conventional fixed antennas, the
proposed RA system can flexibly alter the three-dimensional (3D) boresight
direction of each antenna independently by adjusting its deflection angles to
achieve a desired array directional gain pattern. Specifically, we investigate
an RA-enabled uplink communication system, where the receive beamforming and
the deflection angles of all RAs at the base station (BS) are jointly optimized
to maximize the minimum signal-to-interference-plus-noise ratio (SINR) among
all the users. In the special single-user and free-space propagation setup, the
optimal deflection angles of RAs are derived in closed form with the
maximum-ratio combining (MRC) beamformer applied at the BS. Moreover, we
analyze the asymptotic performance with an infinite number of antennas based on
this solution, which theoretically proves that the RA system can achieve a
higher array gain as compared to the fixed-antenna system. In the general
multi-user and multi-path channel setup, we first propose an alternating
optimization (AO) algorithm to alternately optimize the receive beamforming and
the deflection angles of RAs in an iterative manner. Then, a two-stage
algorithm that solves the formulated problem without the need for iteration is
further proposed to reduce computational complexity. Simulation results are
provided to validate our analytical results and demonstrate that the proposed
RA system can significantly outperform other benchmark schemes.
|
2501.02598 | GIT-CXR: End-to-End Transformer for Chest X-Ray Report Generation | cs.CL cs.CV cs.LG | Medical imaging is crucial for diagnosing, monitoring, and treating medical
conditions. The medical reports of radiology images are the primary medium
through which medical professionals attest their findings, but their writing is
time-consuming and requires specialized clinical expertise. The automated
generation of radiography reports thus has the potential to improve and
standardize patient care and significantly reduce clinicians' workload. Through
our work, we have designed and evaluated an end-to-end transformer-based method
to generate accurate and factually complete radiology reports for X-ray images.
Additionally, we are the first to introduce curriculum learning for end-to-end
transformers in medical imaging and demonstrate its impact in obtaining
improved performance. The experiments have been conducted using the
MIMIC-CXR-JPG database, the largest available chest X-ray dataset. The results
obtained are comparable with the current state-of-the-art on the natural
language generation (NLG) metrics BLEU and ROUGE-L, while setting new
state-of-the-art results on F1 examples-averaged, F1-macro and F1-micro metrics
for clinical accuracy and on the METEOR metric widely used for NLG.
|
2501.02599 | Empowering Bengali Education with AI: Solving Bengali Math Word Problems
through Transformer Models | cs.CL cs.AI cs.CY cs.LG | Mathematical word problems (MWPs) involve the task of converting textual
descriptions into mathematical equations. This poses a significant challenge in
natural language processing, particularly for low-resource languages such as
Bengali. This paper addresses this challenge by developing an innovative
approach to solving Bengali MWPs using transformer-based models, including
Basic Transformer, mT5, BanglaT5, and mBART50. To support this effort, the
"PatiGonit" dataset was introduced, containing 10,000 Bengali math problems,
and these models were fine-tuned to translate the word problems into equations
accurately. The evaluation revealed that the mT5 model achieved the highest
accuracy of 97.30%, demonstrating the effectiveness of transformer models in
this domain. This research marks a significant step forward in Bengali natural
language processing, offering valuable methodologies and resources for
educational AI tools. By improving math education, it also supports the
development of advanced problem-solving skills for Bengali-speaking students.
|
2501.02600 | TAPAS: Thermal- and Power-Aware Scheduling for LLM Inference in Cloud
Platforms | cs.DC cs.AI | The rising demand for generative large language models (LLMs) poses
challenges for thermal and power management in cloud datacenters. Traditional
techniques are often inadequate for LLM inference due to the fine-grained,
millisecond-scale execution phases, each with distinct performance, thermal,
and power profiles. Additionally, LLM inference workloads are sensitive to
various configuration parameters (e.g., model parallelism, size, and
quantization) that involve trade-offs between performance, temperature, power,
and output quality. Moreover, clouds often co-locate SaaS and IaaS workloads,
each with different levels of visibility and flexibility. We propose TAPAS, a
thermal- and power-aware framework designed for LLM inference clusters in the
cloud. TAPAS enhances cooling and power oversubscription capabilities, reducing
the total cost of ownership (TCO) while effectively handling emergencies (e.g.,
cooling and power failures). The system leverages historical temperature and
power data, along with the adaptability of SaaS workloads, to: (1) efficiently
place new GPU workload VMs within cooling and power constraints, (2) route LLM
inference requests across SaaS VMs, and (3) reconfigure SaaS VMs to manage load
spikes and emergency situations. Our evaluation on a large GPU cluster
demonstrates significant reductions in thermal and power throttling events,
boosting system efficiency.
|
2501.02604 | Collision-resistant hash-shuffles on the reals | math.LO cs.CC cs.IT math.IT | Oneway real functions are effective maps on positive-measure sets of reals
that preserve randomness and have no effective probabilistic inversions. We
construct a oneway real function which is collision-resistant: the probability
of effectively producing distinct reals with the same image is zero, and each
real has uncountable inverse image.
|
2501.02612 | Chameleon2++: An Efficient Chameleon2 Clustering with Approximate
Nearest Neighbors | cs.LG cs.DS | Clustering algorithms are fundamental tools in data analysis, with
hierarchical methods being particularly valuable for their flexibility.
Chameleon is a widely used hierarchical clustering algorithm that excels at
identifying high-quality clusters of arbitrary shapes, sizes, and densities.
Chameleon2, the most recent variant, has demonstrated significant
improvements but suffers from critical failings and leaves room for
improvement.
The first failing we address is that the complexity of Chameleon2 is claimed
to be $O(n^2)$, while we demonstrate that it is actually $O(n^2\log{n})$, with
$n$ being the number of data points. Furthermore, we suggest improvements to
Chameleon2 that ensure that the complexity remains $O(n^2)$ with minimal to no
loss of performance. The second failing of Chameleon2 is that it lacks
transparency and it does not provide the fine-tuned algorithm parameters used
to obtain the claimed results. We meticulously provide all such parameter
values to enhance replicability.
The improvement we make to Chameleon2 is replacing the exact
$k$-NN search with an approximate $k$-NN search. This further reduces the
algorithmic complexity down to $O(n\log{n})$ without any performance loss.
Here, we primarily configure three approximate nearest neighbor search
algorithms (Annoy, FLANN and NMSLIB) to align with the overarching Chameleon2
clustering framework. Experimental evaluations on standard benchmark datasets
demonstrate that the proposed Chameleon2++ algorithm is more efficient, robust,
and computationally optimal.
|
2501.02613 | LWFNet: Coherent Doppler Wind Lidar-Based Network for Wind Field
Retrieval | physics.ao-ph cs.LG | Accurate detection of wind fields within the troposphere is essential for
atmospheric dynamics research and plays a crucial role in extreme weather
forecasting. Coherent Doppler wind lidar (CDWL) is widely regarded as the most
suitable technique for high spatial and temporal resolution wind field
detection. However, since coherent detection relies heavily on the
concentration of aerosol particles, which cause Mie scattering, the received
backscattering lidar signal exhibits significantly reduced intensity at high
altitudes. As a result, conventional methods, such as spectral centroid
estimation, often fail to produce credible and accurate wind retrieval results
in these regions. To address this issue, we propose LWFNet, the first
Lidar-based Wind Field (WF) retrieval neural Network, built upon Transformer
and the Kolmogorov-Arnold network. Our model is trained solely on targets
derived from the traditional wind retrieval algorithm and utilizes radiosonde
measurements as ground truth for evaluating the test results. Experimental
results demonstrate that LWFNet not only extends the maximum wind field
detection range but also produces more accurate results, exhibiting a level of
precision that surpasses the labeled targets. This phenomenon, which we refer
to as super-accuracy, is explored by investigating the potential underlying
factors that contribute to this intriguing occurrence. In addition, we compare
the performance of LWFNet with other state-of-the-art (SOTA) models,
highlighting its superior effectiveness and capability in high-resolution wind
retrieval. LWFNet demonstrates remarkable performance in lidar-based wind field
retrieval, setting a benchmark for future research and advancing the
development of deep learning models in this domain.
|
2501.02615 | Parsings of Stationary Processes, Stopping Times and the Fundamental
Pointwise Convergence Theorems of Ergodic Theory | math.DS cs.IT math.CO math.IT math.PR | The idea of a parsing of a stationary process according to a collection of
words is introduced, and the basic framework required for the asymptotic
analysis of these parsings is presented. We demonstrate how the pointwise
ergodic theorem and the Shannon-McMillan-Breiman theorem can be deduced from
their respective weaker convergence in probability versions combined with our
observations regarding parsings, where the parsings are done according to
collections that originate in stopping times tailored for that purpose.
|
2501.02616 | Multi-layer Radial Basis Function Networks for Out-of-distribution
Detection | cs.LG cs.CV | Existing methods for out-of-distribution (OOD) detection use various
techniques to produce a score, separate from classification, that determines
how ``OOD'' an input is. Our insight is that OOD detection can be simplified by
using a neural network architecture which can effectively merge classification
and OOD detection into a single step. Radial basis function networks (RBFNs)
inherently link classification confidence and OOD detection; however, these
networks have lost popularity due to the difficulty of training them in a
multi-layer fashion. In this work, we develop a multi-layer radial basis
function network (MLRBFN) which can be easily trained. To ensure that these
networks are also effective for OOD detection, we develop a novel depression
mechanism. We apply MLRBFNs as standalone classifiers and as heads on top of
pretrained feature extractors, and find that they are competitive with commonly
used methods for OOD detection. Our MLRBFN architecture demonstrates a
promising new direction for OOD detection methods.
|
2501.02618 | Identifying Surgical Instruments in Pedagogical Cataract Surgery Videos
through an Optimized Aggregation Network | cs.CV | Instructional cataract surgery videos are crucial for ophthalmologists and
trainees to observe surgical details repeatedly. This paper presents a deep
learning model for real-time identification of surgical instruments in these
videos, using a custom dataset scraped from open-access sources. Inspired by
the architecture of YOLOV9, the model employs a Programmable Gradient
Information (PGI) mechanism and a novel Generally-Optimized Efficient Layer
Aggregation Network (Go-ELAN) to address the information bottleneck problem,
enhancing mean Average Precision (mAP) at higher Non-Maximum Suppression
Intersection over Union (NMS IoU) scores. The Go-ELAN YOLOV9 model, evaluated
against YOLO v5, v7, v8, v9 vanilla, Laptool and DETR, achieves a superior mAP
of 73.74 at IoU 0.5 on a dataset of 615 images with 10 instrument classes,
demonstrating the effectiveness of the proposed model.
|
2501.02620 | Back to Base: Towards Hands-Off Learning via Safe Resets with
Reach-Avoid Safety Filters | eess.SY cs.RO cs.SY | Designing controllers that accomplish tasks while guaranteeing safety
constraints remains a significant challenge. We often want an agent to perform
well in a nominal task, such as environment exploration, while ensuring it can
avoid unsafe states and return to a desired target by a specific time. In
particular, we are motivated by the setting of safe, efficient, hands-off
training for reinforcement learning in the real world. By enabling a robot to
safely and autonomously reset to a desired region (e.g., charging stations)
without human intervention, we can enhance efficiency and facilitate training.
Safety filters, such as those based on control barrier functions, decouple
safety from nominal control objectives and rigorously guarantee safety. Despite
their success, constructing these functions for general nonlinear systems with
control constraints and system uncertainties remains an open problem. This
paper introduces a safety filter obtained from the value function associated
with the reach-avoid problem. The proposed safety filter minimally modifies the
nominal controller while avoiding unsafe regions and guiding the system back to
the desired target set. By preserving policy performance while allowing safe
resetting, we enable efficient hands-off reinforcement learning and advance the
feasibility of safe training for real world robots. We demonstrate our approach
using a modified version of soft actor-critic to safely train a swing-up task
on a modified cartpole stabilization problem.
|
2501.02621 | LLMs Help Alleviate the Cross-Subject Variability in Brain Signal and
Language Alignment | cs.NE cs.AI | Decoding human activity from EEG signals has long been a popular research
topic. While recent studies have increasingly shifted focus from single-subject
to cross-subject analysis, few have explored the model's ability to perform
zero-shot predictions on EEG signals from previously unseen subjects. This
research aims to investigate whether deep learning methods can capture
subject-independent semantic information inherent in human EEG signals. Such
insights are crucial for Brain-Computer Interfaces (BCI) because, on one hand,
they demonstrate the model's robustness against subject-specific temporal
biases, and on the other, they significantly enhance the generalizability of
downstream tasks. We employ Large Language Models (LLMs) as denoising agents to
extract subject-independent semantic features from noisy EEG signals.
Experimental results, including ablation studies, highlight the pivotal role of
LLMs in decoding subject-independent semantic information from noisy EEG data.
We hope our findings will contribute to advancing BCI research and assist both
academia and industry in applying EEG signals to a broader range of
applications.
|
2501.02625 | HALO: Hadamard-Assisted Lower-Precision Optimization for LLMs | cs.LG | Quantized training of Large Language Models (LLMs) remains an open challenge,
as maintaining accuracy while performing all matrix multiplications in low
precision has proven difficult. This is particularly the case when fine-tuning
pre-trained models, which can have large weight and activation outlier values
that make lower-precision optimization difficult. To address this, we present
HALO, a novel quantization-aware training approach for Transformers that
enables accurate and efficient low-precision training by combining 1) strategic
placement of Hadamard rotations in both forward and backward passes, which
mitigate outliers, 2) high-performance kernel support, and 3) FSDP integration
for low-precision communication. Our approach ensures that all large matrix
multiplications during the forward and backward passes are executed in lower
precision. Applied to LLAMA-family models, HALO achieves
near-full-precision-equivalent results during fine-tuning on various tasks,
while delivering up to 1.41x end-to-end speedup for full fine-tuning on RTX
4090 GPUs. HALO efficiently supports both standard and parameter-efficient
fine-tuning (PEFT). Our results demonstrate the first practical approach to
fully quantized LLM fine-tuning that maintains accuracy in 8-bit precision,
while delivering performance benefits. Code is available at
\url{https://github.com/IST-DASLab/HALO}.
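The intuition behind the Hadamard rotations can be sketched in a few lines (a toy illustration of the general technique, not HALO's kernels): an orthonormal Hadamard rotation spreads a single outlier coordinate across all coordinates, shrinking the dynamic range a low-precision quantizer must cover, while remaining exactly invertible.

```python
import math

def hadamard(m):
    """Sylvester construction of a (2^m x 2^m) Hadamard matrix,
    scaled to be orthonormal (and symmetric, hence self-inverse)."""
    H = [[1.0]]
    for _ in range(m):
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    s = 1.0 / math.sqrt(len(H))
    return [[x * s for x in row] for row in H]

def rotate(H, v):
    return [sum(H[i][j] * v[j] for j in range(len(v))) for i in range(len(H))]
```

For example, rotating v = [100, 1, -1, 2] flattens the outlier 100 to entries of magnitude about 50, with the norm preserved and the original vector recoverable by applying the same rotation again.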
|
2501.02626 | On the Independence Assumption in Quasi-Cyclic Code-Based Cryptography | cs.IT cs.CR math.IT | Cryptography based on the presumed hardness of decoding codes -- i.e.,
code-based cryptography -- has recently seen increased interest due to its
plausible security against quantum attackers. Notably, of the four proposals
for the NIST post-quantum standardization process that were advanced to their
fourth round for further review, two were code-based. The most efficient
proposals -- including HQC and BIKE, the NIST submissions alluded to above --
in fact rely on the presumed hardness of decoding structured codes. Of
particular relevance to our work, HQC is based on quasi-cyclic codes, which are
codes generated by matrices consisting of two cyclic blocks.
In particular, the security analysis of HQC requires a precise understanding
of the Decryption Failure Rate (DFR), whose analysis relies on the following
heuristic: given random ``sparse'' vectors $e_1,e_2$ (say, each coordinate is
i.i.d. Bernoulli) multiplied by fixed ``sparse'' quasi-cyclic matrices
$A_1,A_2$, the weight of the resulting vector $e_1A_1+e_2A_2$ is tightly concentrated
around its expectation. In the documentation, the authors model the
distribution of $e_1A_1+e_2A_2$ as a vector with independent coordinates (and
correct marginal distribution). However, we uncover cases where this modeling
fails. While this does not invalidate the (empirically verified) heuristic that
the weight of $e_1A_1+e_2A_2$ is concentrated, it does suggest that the
behavior of the noise is a bit more subtle than previously predicted. Lastly,
we also discuss implications of our result for potential worst-case to
average-case reductions for quasi-cyclic codes.
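The heuristic in question is easy to probe empirically (a toy Monte-Carlo sketch with our own function names, not the HQC specification): over GF(2), multiplying a vector by a cyclic block is a cyclic convolution, so one can sample sparse $e_1,e_2$ and inspect the weight of $e_1A_1+e_2A_2$ directly.

```python
import random

def cyc_mul(e, a):
    """Product of vector e with the cyclic matrix generated by first row a,
    over GF(2): a cyclic convolution."""
    n = len(e)
    return [sum(e[i] & a[(j - i) % n] for i in range(n)) % 2 for j in range(n)]

def sample_weight(a1, a2, p, rng):
    """Hamming weight of e1*A1 + e2*A2 for e1, e2 with i.i.d. Bernoulli(p) coords."""
    n = len(a1)
    e1 = [1 if rng.random() < p else 0 for _ in range(n)]
    e2 = [1 if rng.random() < p else 0 for _ in range(n)]
    s = cyc_mul(e1, a1)
    t = cyc_mul(e2, a2)
    return sum(x ^ y for x, y in zip(s, t))
```

Repeating `sample_weight` many times shows the weight clustering near its mean, the concentration the DFR analysis relies on; the paper's point is that the individual coordinates of $e_1A_1+e_2A_2$ are nonetheless not exactly independent, as the standard modeling assumes.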
|
2501.02628 | Cracks in The Stack: Hidden Vulnerabilities and Licensing Risks in LLM
Pre-Training Datasets | cs.SE cs.AI | A critical part of creating code suggestion systems is the pre-training of
Large Language Models on vast amounts of source code and natural language text,
often of questionable origin or quality. This may contribute to the presence of
bugs and vulnerabilities in code generated by LLMs. While efforts to identify
bugs at or after code generation exist, it is preferable to pre-train or
fine-tune LLMs on curated, high-quality, and compliant datasets. The need for
vast amounts of training data necessitates that such curation be automated,
minimizing human intervention.
We propose an automated source code autocuration technique that leverages the
complete version history of open-source software projects to improve the
quality of training data. It uses the version history of all
OSS projects to identify training data samples that have been modified or have
undergone changes in at least one OSS project, and pinpoint a subset of samples
that include fixes for bugs or vulnerabilities. We evaluate this method using
The Stack v2 dataset, and find that 17% of the code versions in the dataset
have newer versions, with 17% of those representing bug fixes, including 2.36%
addressing known CVEs. The deduplicated version of Stack v2 still includes
blobs vulnerable to 6,947 known CVEs. Furthermore, 58% of the blobs in the
dataset were never modified after creation, suggesting they likely represent
software with minimal or no use. Misidentified blob origins present an
additional challenge, as they lead to the inclusion of non-permissively
licensed code, raising serious compliance concerns.
By addressing these issues, the training of new models can avoid perpetuating
buggy code patterns or license violations. We expect our results to inspire
process improvements for automated data curation, with the potential to enhance
the reliability of outputs generated by AI tools.
|
2501.02629 | Layer-Level Self-Exposure and Patch: Affirmative Token Mitigation for
Jailbreak Attack Defense | cs.CR cs.AI cs.CL | As large language models (LLMs) are increasingly deployed in diverse
applications, including chatbot assistants and code generation, aligning their
behavior with safety and ethical standards has become paramount. However,
jailbreak attacks, which exploit vulnerabilities to elicit unintended or
harmful outputs, threaten LLMs' safety significantly. In this paper, we
introduce Layer-AdvPatcher, a novel methodology designed to defend against
jailbreak attacks by utilizing an unlearning strategy to patch specific layers
within LLMs through self-augmented datasets. Our insight is that certain
layer(s) tend to produce affirmative tokens when faced with harmful prompts.
By identifying these layers and adversarially exposing them to generate more
harmful data, one can understand their inherent and diverse vulnerabilities to
attacks. With these exposures, we then "unlearn" these issues, reducing the
impact of affirmative tokens and hence minimizing jailbreak risks while keeping
the model's responses to safe queries intact. We conduct extensive experiments
on two models, four benchmark datasets, and multiple state-of-the-art jailbreak
attacks to demonstrate the efficacy of our approach. Results indicate that our
framework reduces the harmfulness and attack success rate of jailbreak attacks
without compromising utility for benign queries compared to recent defense
methods. Our code is publicly available at:
https://github.com/oyy2000/LayerAdvPatcher
|
2501.02630 | Soft and Compliant Contact-Rich Hair Manipulation and Care | cs.RO | Hair care robots can help address labor shortages in elderly care while
enabling those with limited mobility to maintain their hair-related identity.
We present MOE-Hair, a soft robot system that performs three hair-care tasks:
head patting, finger combing, and hair grasping. The system features a
tendon-driven soft robot end-effector (MOE) with a wrist-mounted RGBD camera,
leveraging both mechanical compliance for safety and visual force sensing
through deformation. In testing with a force-sensorized mannequin head, MOE
achieved comparable hair-grasping effectiveness while applying significantly
less force than rigid grippers. Our novel force estimation method combines
visual deformation data and tendon tensions from actuators to infer applied
forces, reducing sensing errors by up to 60.1% and 20.3% compared to actuator
current load-only and depth image-only baselines, respectively. A user study
with 12 participants demonstrated statistically significant preferences for
MOE-Hair over a baseline system in terms of comfort, effectiveness, and
appropriate force application. These results demonstrate the unique advantages
of soft robots in contact-rich hair-care tasks, while highlighting the
importance of precise force control despite the inherent compliance of the
system.
|
2501.02631 | Prune or Retrain: Optimizing the Vocabulary of Multilingual Models for
Estonian | cs.CL | Adapting multilingual language models to specific languages can enhance both
their efficiency and performance. In this study, we explore how modifying the
vocabulary of a multilingual encoder model to better suit the Estonian language
affects its downstream performance on the Named Entity Recognition (NER) task.
The motivations for adjusting the vocabulary are twofold: practical benefits
affecting the computational cost, such as reducing the input sequence length
and the model size, and performance enhancements by tailoring the vocabulary to
the particular language. We evaluate the effectiveness of two vocabulary
adaptation approaches -- retraining the tokenizer and pruning unused tokens --
and assess their impact on the model's performance, particularly after
continual training. While retraining the tokenizer degraded the performance of
the NER task, suggesting that longer embedding tuning might be needed, we
observed no negative effects from pruning.
|
2501.02635 | Interactive Information Need Prediction with Intent and Context | cs.IR | The ability to predict a user's information need would have wide-ranging
implications, from saving time and effort to mitigating vocabulary gaps. We
study how to interactively predict a user's information need by letting them
select a pre-search context (e.g., a paragraph, sentence, or single word) and
specify an optional partial search intent (e.g., "how", "why", "applications",
etc.). We examine how various generative language models can explicitly make
this prediction by generating a question as well as how retrieval models can
implicitly make this prediction by retrieving an answer. We find that this
prediction process is possible in many cases and that user-provided partial
search intent can help mitigate large pre-search contexts. We conclude that
this framework is promising and suitable for real-world applications.
|
2501.02640 | Multispectral Pedestrian Detection with Sparsely Annotated Label | cs.CV | Although existing Sparsely Annotated Object Detection (SAOD) approaches have
made progress in handling sparsely annotated environments in the multispectral
domain, where only some pedestrians are annotated, they still have the
following limitations: (i) they lack considerations for improving the quality
of pseudo-labels for missing annotations, and (ii) they rely on fixed ground
truth annotations, which leads to learning only a limited range of pedestrian
visual appearances in the multispectral domain. To address these issues, we
propose a novel framework called Sparsely Annotated Multispectral Pedestrian
Detection (SAMPD). For limitation (i), we introduce the Multispectral
Pedestrian-aware Adaptive Weight (MPAW) and Positive Pseudo-label Enhancement
(PPE) modules. Utilizing multispectral knowledge, these modules ensure the
generation of high-quality pseudo-labels and enable effective learning by
increasing weights for high-quality pseudo-labels based on modality
characteristics. To address limitation (ii), we propose an Adaptive Pedestrian
Retrieval Augmentation (APRA) module, which adaptively incorporates pedestrian
patches from ground-truth and dynamically integrates high-quality pseudo-labels
with the ground-truth, facilitating a more diverse learning pool of
pedestrians. Extensive experimental results demonstrate that our SAMPD
significantly enhances performance in sparsely annotated environments within
the multispectral domain.
|
2501.02647 | Trust and Dependability in Blockchain & AI Based MedIoT Applications:
Research Challenges and Future Directions | cs.CR cs.AI cs.CY | This paper critically reviews the integration of Artificial Intelligence (AI)
and blockchain technologies in the context of Medical Internet of Things
(MedIoT) applications, where they collectively promise to revolutionize
healthcare delivery. By examining current research, we underscore AI's
potential in advancing diagnostics and patient care, alongside blockchain's
capacity to bolster data security and patient privacy. We focus particularly on
the imperative to cultivate trust and ensure reliability within these systems.
Our review highlights innovative solutions for managing healthcare data and
challenges such as ensuring scalability, maintaining privacy, and promoting
ethical practices within the MedIoT domain. We present a vision for integrating
AI-driven insights with blockchain security in healthcare, offering a
comprehensive review of current research and future directions. We conclude
with a set of identified research gaps and propose that addressing these is
crucial for achieving the dependable, secure, and patient-centric MedIoT
applications of tomorrow.
|
2501.02648 | Representation Learning of Lab Values via Masked AutoEncoder | cs.LG cs.AI | Accurate imputation of missing laboratory values in electronic health records
(EHRs) is critical to enable robust clinical predictions and reduce biases in
AI systems in healthcare. Existing methods, such as variational autoencoders
(VAEs) and decision tree-based approaches such as XGBoost, struggle to model
the complex temporal and contextual dependencies in EHR data, particularly in
underrepresented groups. In this work, we propose Lab-MAE, a novel
transformer-based masked autoencoder framework that leverages self-supervised
learning for the imputation of continuous sequential lab values. Lab-MAE
introduces a structured encoding scheme that jointly models laboratory test
values and their corresponding timestamps, enabling explicit capture of temporal
dependencies. Empirical evaluation on the MIMIC-IV dataset demonstrates that
Lab-MAE significantly outperforms the state-of-the-art baselines such as
XGBoost across multiple metrics, including root mean square error (RMSE),
R-squared (R2), and Wasserstein distance (WD). Notably, Lab-MAE achieves
equitable performance across demographic groups of patients, advancing fairness
in clinical predictions. We further investigate the role of follow-up
laboratory values as potential shortcut features, revealing Lab-MAE's
robustness in scenarios where such data is unavailable. The findings suggest
that our transformer-based architecture, adapted to the characteristics of the
EHR data, offers a foundation model for more accurate and fair clinical
imputation models. In addition, we measure and compare the carbon footprint of
Lab-MAE with the baseline XGBoost model, highlighting its environmental
requirements.
|
2501.02649 | Tighnari: Multi-modal Plant Species Prediction Based on Hierarchical
Cross-Attention Using Graph-Based and Vision Backbone-Extracted Features | cs.CV cs.AI | Predicting plant species composition in specific spatiotemporal contexts
plays an important role in biodiversity management and conservation, as well as
in improving species identification tools. Our work utilizes 88,987 plant
survey records conducted in specific spatiotemporal contexts across Europe. We
also use the corresponding satellite images, time series data, climate time
series, and other rasterized environmental data such as land cover, human
footprint, bioclimatic, and soil variables as training data to train the model
to predict the outcomes of 4,716 plant surveys. We propose a feature
construction and result correction method based on the graph structure. Through
comparative experiments, we select the best-performing backbone networks for
feature extraction in both temporal and image modalities. In this process, we
built a backbone network based on the Swin-Transformer Block for extracting
temporal cube features. We then design a hierarchical cross-attention
mechanism capable of robustly fusing features from multiple modalities. During
training, we adopt a 10-fold cross-fusion method based on fine-tuning and use a
Threshold Top-K method for post-processing. Ablation experiments demonstrate
the improvements in model performance brought by our proposed solution
pipeline.
|
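The abstract above names a "Threshold Top-K" post-processing step. One plausible reading (the paper's exact rule is not specified in the abstract, so this is an illustrative sketch with hypothetical names) is: predict every species whose probability clears a threshold, falling back to the top-k most probable when too few do:

```python
import numpy as np

def threshold_top_k(probs, threshold=0.5, k=25):
    """Predict class indices whose probability exceeds the threshold;
    if fewer than k pass, fall back to the k most probable classes."""
    above = np.flatnonzero(probs >= threshold)
    if len(above) >= k:
        return above
    return np.argsort(probs)[::-1][:k]  # descending by probability

probs = np.array([0.9, 0.2, 0.6, 0.1])
print(threshold_top_k(probs, threshold=0.5, k=2))  # [0 2]
print(threshold_top_k(probs, threshold=0.5, k=3))  # [0 2 1]
```

Such a rule guarantees a minimum prediction-set size for surveys where the model is uniformly uncertain, while letting confident surveys predict more species.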
2501.02652 | A New Interpretation of the Certainty-Equivalence Approach for PAC
Reinforcement Learning with a Generative Model | cs.LG stat.ML | Reinforcement learning (RL) enables an agent interacting with an unknown MDP
$M$ to optimise its behaviour by observing transitions sampled from $M$. A
natural entity that emerges in the agent's reasoning is $\widehat{M}$, the
maximum likelihood estimate of $M$ based on the observed transitions. The
well-known \textit{certainty-equivalence} method (CEM) dictates that the agent
update its behaviour to $\widehat{\pi}$, which is an optimal policy for
$\widehat{M}$. Not only is CEM intuitive, it has been shown to enjoy
minimax-optimal sample complexity in some regions of the parameter space for
PAC RL with a generative model~\citep{Agarwal2020GenModel}.
A seemingly unrelated algorithm is the ``trajectory tree method''
(TTM)~\citep{Kearns+MN:1999}, originally developed for efficient decision-time
planning in large POMDPs. This paper presents a theoretical investigation that
stems from the surprising finding that CEM may indeed be viewed as an
application of TTM. The qualitative benefits of this view are (1) new and
simple proofs of sample complexity upper bounds for CEM, in fact under a (2)
weaker assumption on the rewards than is prevalent in the current literature.
Our analysis applies to both non-stationary and stationary MDPs.
Quantitatively, we obtain (3) improvements in the sample-complexity upper
bounds for CEM both for non-stationary and stationary MDPs, in the regime that
the ``mistake probability'' $\delta$ is small. Additionally, we show (4) a
lower bound on the sample complexity for finite-horizon MDPs, which establishes
the minimax-optimality of our upper bound for non-stationary MDPs in the
small-$\delta$ regime.
|
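The certainty-equivalence method discussed above can be sketched in a few lines: estimate M-hat by maximum likelihood from transitions drawn from the generative model, then plan optimally in M-hat. The toy implementation below (illustrative only, with rewards assumed known for simplicity; it is not the paper's analysis) uses value iteration on a small discounted MDP:

```python
import numpy as np

def certainty_equivalence(sample, S, A, R, gamma=0.9, n=200, iters=500):
    """CEM sketch: build the MLE MDP from samples, then value-iterate.

    sample(s, a) -> next state, drawn from the unknown MDP's P(.|s, a).
    R: known (S, A) reward matrix.
    Returns pi-hat, an optimal policy for the estimated MDP M-hat.
    """
    counts = np.zeros((S, A, S))
    for s in range(S):
        for a in range(A):
            for _ in range(n):  # n calls to the generative model per (s, a)
                counts[s, a, sample(s, a)] += 1
    P_hat = counts / counts.sum(axis=2, keepdims=True)  # MLE of transitions

    V = np.zeros(S)
    for _ in range(iters):  # value iteration on M-hat
        Q = R + gamma * P_hat @ V
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

# two-state toy MDP: action 1 tends to reach the rewarding state 1
rng = np.random.default_rng(0)
P_true = np.array([[[0.9, 0.1], [0.2, 0.8]],
                   [[0.9, 0.1], [0.2, 0.8]]])
R = np.array([[0.0, 0.0], [1.0, 1.0]])  # reward 1 for being in state 1
policy = certainty_equivalence(lambda s, a: rng.choice(2, p=P_true[s, a]),
                               S=2, A=2, R=R)
print(policy)  # action 1 preferred in both states
```

The sample-complexity question the paper studies is precisely how small `n` can be while pi-hat remains near-optimal for the true MDP with probability at least 1 - delta.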
2501.02654 | Tougher Text, Smarter Models: Raising the Bar for Adversarial Defence
Benchmarks | cs.CL cs.AI | Recent advancements in natural language processing have highlighted the
vulnerability of deep learning models to adversarial attacks. While various
defence mechanisms have been proposed, there is a lack of comprehensive
benchmarks that evaluate these defences across diverse datasets, models, and
tasks. In this work, we address this gap by presenting an extensive benchmark
for textual adversarial defence that significantly expands upon previous work.
Our benchmark incorporates a wide range of datasets, evaluates state-of-the-art
defence mechanisms, and extends the assessment to include critical tasks such
as single-sentence classification, similarity and paraphrase identification,
natural language inference, and commonsense reasoning. This work not only
serves as a valuable resource for researchers and practitioners in the field of
adversarial robustness but also identifies key areas for future research in
textual adversarial defence. By establishing a new standard for benchmarking in
this domain, we aim to accelerate progress towards more robust and reliable
natural language processing systems.
|
2501.02662 | Incentive-Compatible Federated Learning with Stackelberg Game Modeling | cs.LG cs.DC | Federated Learning (FL) has gained prominence as a decentralized machine
learning paradigm, allowing clients to collaboratively train a global model
while preserving data privacy. Despite its potential, FL faces significant
challenges in heterogeneous environments, where varying client resources and
capabilities can undermine overall system performance. Existing approaches
primarily focus on maximizing global model accuracy, often at the expense of
unfairness among clients and suboptimal system efficiency, particularly in
non-IID (non-Independent and Identically Distributed) settings. In this paper,
we introduce FLamma, a novel Federated Learning framework based on an adaptive
gamma-based Stackelberg game, designed to address the aforementioned
limitations and promote fairness. Our approach allows the server to act as the
leader, dynamically adjusting a decay factor while clients, acting as
followers, optimally select their number of local epochs to maximize their
utility. Over time, the server incrementally balances client influence,
initially rewarding higher-contributing clients and gradually leveling their
impact, driving the system toward a Stackelberg Equilibrium. Extensive
simulations on both IID and non-IID datasets show that our method significantly
improves fairness in accuracy distribution without compromising overall model
performance or convergence speed, outperforming traditional FL baselines.
|
2501.02666 | Multi-Aggregator Time-Warping Heterogeneous Graph Neural Network for
Personalized Micro-Video Recommendation | cs.IR cs.AI | Micro-video recommendation is attracting global attention and becoming a
popular daily service for people of all ages. Recently, Graph Neural
Networks-based micro-video recommendation has displayed performance improvement
for many kinds of recommendation tasks. However, existing works fail to
fully consider the characteristics of micro-videos, such as the high timeliness
of news-related micro-videos and the sequential interactions of
frequently changing interests. In this paper, a novel Multi-aggregator
Time-warping Heterogeneous Graph Neural Network (MTHGNN) is proposed for
personalized news nature micro-video recommendation based on sequential
sessions, where characteristics of micro-videos are comprehensively studied,
users' preference is mined via multi-aggregator, the temporal and dynamic
changes of users' preference are captured, and timeliness is considered.
Through comparison with state-of-the-art methods, the experimental results
validate the superiority of our MTHGNN model.
|
2501.02667 | Markov Decision Processes for Satellite Maneuver Planning and Collision
Avoidance | cs.RO cs.SY eess.SY | This paper presents a decentralized, online planning approach for scalable
maneuver planning for large constellations. While decentralized, rule-based
strategies have facilitated efficient scaling, optimal decision-making
algorithms for satellite maneuvers remain underexplored. As commercial
satellite constellations grow, there are benefits of online maneuver planning,
such as using real-time trajectory predictions to improve state knowledge,
thereby reducing maneuver frequency and conserving fuel. We address this gap in
the research by treating the satellite maneuver planning problem as a Markov
decision process (MDP). This approach enables the generation of optimal
maneuver policies online with low computational cost. This formulation is
applied to the low Earth orbit collision avoidance problem, considering an
active spacecraft deciding whether to maneuver to avoid a
non-maneuverable object. We test the policies we generate in a simulated low
Earth orbit environment, and compare the results to traditional rule-based
collision avoidance techniques.
|
2501.02669 | Generalizing from SIMPLE to HARD Visual Reasoning: Can We Mitigate
Modality Imbalance in VLMs? | cs.CV cs.CL cs.LG | While Vision Language Models (VLMs) are impressive in tasks such as visual
question answering (VQA) and image captioning, their ability to apply
multi-step reasoning to images has lagged, giving rise to perceptions of
modality imbalance or brittleness. Towards a systematic study of such issues, we
introduce a synthetic framework for assessing the ability of VLMs to perform
algorithmic visual reasoning (AVR), comprising three tasks: Table Readout, Grid
Navigation, and Visual Analogy. Each has two levels of difficulty, SIMPLE and
HARD, and even the SIMPLE versions are difficult for frontier VLMs. We seek
strategies for training on the SIMPLE version of the tasks that improve
performance on the corresponding HARD task, i.e., S2H generalization. This
synthetic framework, where each task also has a text-only version, allows a
quantification of the modality imbalance, and how it is impacted by training
strategy. Ablations highlight the importance of explicit image-to-text
conversion in promoting S2H generalization when using auto-regressive training.
We also report results of mechanistic study of this phenomenon, including a
measure of gradient alignment that seems to identify training strategies that
promote better S2H generalization.
|
2501.02670 | Neural networks meet hyperelasticity: A monotonic approach | cs.CE | We apply physics-augmented neural network (PANN) constitutive models to
experimental uniaxial tensile data of rubber-like materials whose behavior
depends on manufacturing parameters. For this, we conduct experimental
investigations on a 3D printed digital material at different mix ratios and
consider several datasets from literature, including Ecoflex at different Shore
hardness and a photocured 3D printing material at different grayscale values.
We introduce a parametrized hyperelastic PANN model which can represent
material behavior at different manufacturing parameters. The proposed model
fulfills common mechanical conditions of hyperelasticity. In addition, the
hyperelastic potential of the proposed model is monotonic in isotropic
isochoric strain invariants of the right Cauchy-Green tensor. In incompressible
hyperelasticity, this is a relaxed version of the ellipticity (or rank-one
convexity) condition. Using this relaxed ellipticity condition, the PANN model
has enough flexibility to be applicable to a wide range of materials while
having enough structure for a stable extrapolation outside the calibration
data. The monotonic PANN yields excellent results for all materials studied and
can represent a wide range of largely varying qualitative and quantitative
stress behavior. Although calibrated on uniaxial tensile data only, it leads to
a stable numerical behavior of 3D finite element simulations. The findings of
our work suggest that monotonicity could play a key role in the formulation of
very general yet robust and stable constitutive models applicable to materials
with highly nonlinear and parametrized behavior.
|
2501.02671 | Quantum Cognition-Inspired EEG-based Recommendation via Graph Neural
Networks | cs.IR | Current recommendation systems recommend goods by considering users'
historical behaviors, social relations, ratings, and other multi-modals.
Although historical user information presents the trends of a user's interests,
no recommendation system can truly know a user's real-time thoughts. With
the development of brain-computer interfaces, it is time to explore
next-generation recommenders that reflect users' real-time thoughts without delay.
Electroencephalography (EEG) is a promising method of collecting brain signals
because of its convenience and mobility. Currently, there is only limited research
on EEG-based recommendation due to the complexity of learning human brain
activity. To explore the utility of EEG-based recommendation, we propose a
novel neural network model, QUARK, combining Quantum Cognition Theory and Graph
Convolutional Networks for accurate item recommendations. Compared with the
state-of-the-art recommendation models, the superiority of QUARK is confirmed
via extensive experiments.
|
2501.02672 | Re-examining Granger Causality from Causal Bayesian Networks Perspective | stat.ML cs.LG econ.EM stat.ME | Characterizing cause-effect relationships in complex systems could be
critical to understanding these systems. For many, Granger causality (GC)
remains a computational tool of choice to identify causal relations in time
series data. Like other causal discovery tools, GC has limitations and has been
criticized as a non-causal framework. Here, we addressed one of the recurring
criticisms of GC by endowing it with proper causal interpretation. This was
achieved by analyzing GC from Reichenbach's Common Cause Principles (RCCPs) and
causal Bayesian networks (CBNs) lenses. We showed theoretically and graphically
that this reformulation endowed GC with a proper causal interpretation under
certain assumptions and achieved satisfactory results in simulation.
|
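As a hedged illustration of the standard Granger-causality test the abstract builds on (plain GC, not the paper's causal-Bayesian-network reformulation), the sketch below compares the residual sum of squares of an autoregressive model of y with and without lagged x:

```python
import numpy as np

def granger_stat(x, y, lag=1):
    """Bivariate Granger statistic: does lagged x help predict y?

    Fits y_t ~ y_{t-1..t-lag} (restricted) versus
    y_t ~ y_{t-1..t-lag} + x_{t-1..t-lag} (unrestricted) by least squares
    and returns the relative drop in residual sum of squares.
    """
    T = len(y)
    Y = y[lag:]
    Xr = np.column_stack([np.ones(T - lag)] +
                         [y[lag - k:T - k] for k in range(1, lag + 1)])
    Xu = np.column_stack([Xr] +
                         [x[lag - k:T - k] for k in range(1, lag + 1)])
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    return (rss(Xr) - rss(Xu)) / rss(Xu)  # large => x Granger-causes y

# synthetic data where x drives y but not vice versa
rng = np.random.default_rng(1)
x = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
print(granger_stat(x, y), granger_stat(y, x))  # first is much larger
```

The criticism the paper addresses is visible here: the statistic measures predictive improvement, not intervention effects, so giving it a causal reading requires the additional assumptions the authors formalize via common-cause principles and CBNs.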
2501.02673 | Exploring the Impact of Dataset Statistical Effect Size on Model
Performance and Data Sample Size Sufficiency | cs.LG | Having a sufficient quantity of quality data is a critical enabler of
training effective machine learning models. Being able to effectively determine
the adequacy of a dataset prior to training and evaluating a model's
performance would be an essential tool for anyone engaged in experimental
design or data collection. However, despite the need for it, the ability to
prospectively assess data sufficiency remains an elusive capability. We report
here on two experiments undertaken in an attempt to better ascertain whether or
not basic descriptive statistical measures can be indicative of how effective a
dataset will be at training a resulting model. Leveraging the effect size of
our features, this work first explores whether or not a correlation exists
between effect size, and resulting model performance (theorizing that the
magnitude of the distinction between classes could correlate to a classifier's
resulting success). We then explore whether or not the magnitude of the effect
size impacts the model's rate of convergence (theorizing
again that a greater effect size may indicate that the model will converge more
rapidly, with a smaller sample size needed). Our results appear to indicate
that this is not an effective heuristic for determining adequate sample size or
projecting model performance, and therefore that additional work is still
needed to better prospectively assess adequacy of data.
|
2501.02675 | A Novel First-Principles Model of Injection-Locked Oscillator Phase
Noise | eess.SY cs.SY | The paper documents the development of a novel time-domain model of
injection-locked oscillator phase-noise response. The methodology follows a
first-principles approach and applies to all circuit topologies, coupling
configurations, parameter dependencies etc. The corresponding numerical
algorithm is readily integrated into all major commercial simulation software
suites. The model advances current state-of-the-art pertaining to analytical
modelling of this class of circuits. Using this novel analytical framework,
several important new insights are revealed which, in-turn, translate into
useful design rules for synthesis of injection-locked oscillator circuits with
optimal noise performance.
|
2501.02680 | From thermodynamics to protein design: Diffusion models for biomolecule
generation towards autonomous protein engineering | q-bio.QM cs.AI cs.LG | Protein design with desirable properties has been a significant challenge for
many decades. Generative artificial intelligence is a promising approach and
has achieved great success in various protein generation tasks. Notably,
diffusion models stand out for their robust mathematical foundations and
impressive generative capabilities, offering unique advantages in certain
applications such as protein design. In this review, we first give the
definition and characteristics of diffusion models and then focus on two
strategies: Denoising Diffusion Probabilistic Models and Score-based Generative
Models, where DDPM is the discrete form of SGM. Furthermore, we discuss their
applications in protein design, peptide generation, drug discovery, and
protein-ligand interaction. Finally, we outline the future perspectives of
diffusion models to advance autonomous protein design and engineering. The E(3)
group consists of all rotations, reflections, and translations in
three dimensions. Equivariance under the E(3) group can keep the physical
stability of the frame of each amino acid as much as possible, and we reflect
on how to keep the diffusion model E(3) equivariant for protein generation.
|
2501.02683 | From Superficial Patterns to Semantic Understanding: Fine-Tuning
Language Models on Contrast Sets | cs.CL cs.AI | Large-scale pre-trained language models have demonstrated high performance on
standard datasets for natural language inference (NLI) tasks. Unfortunately,
these evaluations can be misleading, as although the models can perform well on
in-distribution data, they perform poorly on out-of-distribution test sets,
such as contrast sets. Contrast sets consist of perturbed instances of data
that have very minor, but meaningful, changes to the input that alter the gold
label, revealing how models can learn superficial patterns in the training data
rather than learning more sophisticated language nuances. As an example, the
ELECTRA-small language model achieves nearly 90% accuracy on the SNLI dataset
but drops to 75% when tested on an out-of-distribution contrast set. The
research carried out in this study explores how the robustness of a language
model can be improved by exposing it to small amounts of more complex contrast
sets during training to help it better learn language patterns. With this
approach, the model recovers performance and achieves nearly 90% accuracy on
contrast sets, highlighting the importance of diverse and challenging training
data.
|
2501.02687 | Improving Quantum Machine Learning via Heat-Bath Algorithmic Cooling | quant-ph cs.LG | This work introduces an approach rooted in quantum thermodynamics to enhance
sampling efficiency in quantum machine learning (QML). We propose
conceptualizing quantum supervised learning as a thermodynamic cooling process.
Building on this concept, we develop a quantum refrigerator protocol that
enhances sample efficiency during training and prediction without the need for
Grover iterations or quantum phase estimation. Inspired by heat-bath
algorithmic cooling protocols, our method alternates entropy compression and
thermalization steps to decrease the entropy of qubits, increasing polarization
towards the dominant bias. This technique minimizes the computational overhead
associated with estimating classification scores and gradients, presenting a
practical and efficient solution for QML algorithms compatible with noisy
intermediate-scale quantum devices.
|
2501.02688 | Decoding specialised feature neurons in LLMs with the final projection
layer | cs.CL | Large Language Models (LLMs) typically have billions of parameters and are
thus often difficult to interpret in their operation. Such black-box models can
pose a significant risk to safety when trusted to make important decisions. The
lack of interpretability of LLMs is more related to their sheer size, rather
than the complexity of their individual components. The TARS method for
knowledge removal (Davies et al., 2024) provides strong evidence for the
hypothesis that linear-layer weights which act directly on the residual
stream may have high correlation with different concepts encoded in the
residual stream. Building upon this, we attempt to decode neuron weights
directly into token probabilities through the final projection layer of the
model (the LM-head). Firstly, we show that with Llama 3.1 8B we can utilise the
LM-head to decode specialised feature neurons that respond strongly to certain
concepts, with examples such as "dog" and "California". This is then confirmed
by demonstrating that these neurons can be clamped to affect the probability of
the concept in the output. This extends to the fine-tuned assistant Llama 3.1
8B instruct model, where we find that over 75% of neurons in the up-projection
layers have the same top associated token compared to the pretrained model.
Finally, we demonstrate that clamping the "dog" neuron leads the instruct model
to always discuss dogs when asked about its favourite animal. Through our
method, it is possible to map the entirety of Llama 3.1 8B's up-projection
neurons in less than 15 minutes with no parallelization.
|
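The core operation the abstract describes, decoding a neuron through the final projection layer, can be sketched as a single matrix-vector product. The toy below uses random stand-in weights rather than Llama 3.1 8B, and all names are hypothetical:

```python
import numpy as np

def top_tokens_for_neuron(w_neuron, W_lmhead, vocab, k=3):
    """Project one neuron's output direction through the LM head.

    w_neuron: (d_model,) the neuron's contribution to the residual stream.
    W_lmhead: (vocab_size, d_model) final projection (unembedding) matrix.
    Returns the k tokens whose logits the neuron pushes up the most.
    """
    logits = W_lmhead @ w_neuron
    return [vocab[i] for i in np.argsort(logits)[::-1][:k]]

# tiny stand-in: 3-token vocab, 3-dim residual stream, identity LM head;
# the neuron writes mostly along the "dog" unembedding direction
top = top_tokens_for_neuron(np.array([0.1, 0.9, 0.0]),
                            np.eye(3), ["cat", "dog", "car"], k=1)
print(top)  # ['dog']
```

Because this is just a linear map per neuron, scanning every up-projection neuron is one batched matrix multiply, consistent with the abstract's claim that the whole model can be mapped in minutes without parallelization.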
2501.02690 | GS-DiT: Advancing Video Generation with Pseudo 4D Gaussian Fields
through Efficient Dense 3D Point Tracking | cs.CV | 4D video control is essential in video generation as it enables the use of
sophisticated lens techniques, such as multi-camera shooting and dolly zoom,
which are currently unsupported by existing methods. Training a video Diffusion
Transformer (DiT) directly to control 4D content requires expensive multi-view
videos. Inspired by Monocular Dynamic novel View Synthesis (MDVS) that
optimizes a 4D representation and renders videos according to different 4D
elements, such as camera pose and object motion editing, we bring pseudo 4D
Gaussian fields to video generation. Specifically, we propose a novel framework
that constructs a pseudo 4D Gaussian field with dense 3D point tracking and
renders the Gaussian field for all video frames. Then we finetune a pretrained
DiT to generate videos following the guidance of the rendered video, dubbed as
GS-DiT. To boost the training of the GS-DiT, we also propose an efficient Dense
3D Point Tracking (D3D-PT) method for the pseudo 4D Gaussian field
construction. Our D3D-PT outperforms SpatialTracker, the state-of-the-art
sparse 3D point tracking method, in accuracy and accelerates the inference
speed by two orders of magnitude. During the inference stage, GS-DiT can
generate videos with the same dynamic content while adhering to different
camera parameters, addressing a significant limitation of current video
generation models. GS-DiT demonstrates strong generalization capabilities and
extends the 4D controllability of Gaussian splatting to video generation beyond
just camera poses. It supports advanced cinematic effects through the
manipulation of the Gaussian field and camera intrinsics, making it a powerful
tool for creative video production. Demos are available at
https://wkbian.github.io/Projects/GS-DiT/.
|
2501.02699 | EAGLE: Enhanced Visual Grounding Minimizes Hallucinations in
Instructional Multimodal Models | cs.CV cs.AI | Large language models and vision transformers have demonstrated impressive
zero-shot capabilities, enabling significant transferability in downstream
tasks. The fusion of these models has resulted in multi-modal architectures
with enhanced instructional capabilities. Despite incorporating vast image and
language pre-training, these multi-modal architectures often generate responses
that deviate from the ground truth in the image data. These failure cases are
known as hallucinations. Current methods for mitigating hallucinations
generally focus on regularizing the language component, improving the fusion
module, or ensembling multiple visual encoders to improve visual
representation. In this paper, we address the hallucination issue by directly
enhancing the capabilities of the visual component. Our approach, named EAGLE,
is fully agnostic to the LLM or fusion module and works as a post-pretraining
approach that improves the grounding and language alignment of the visual
encoder. We show that a straightforward reformulation of the original
contrastive pre-training task results in an improved visual encoder that can be
incorporated into the instructional multi-modal architecture without additional
instructional training. As a result, EAGLE achieves a significant reduction in
hallucinations across multiple challenging benchmarks and tasks.
|
2501.02701 | Underwater Image Restoration Through a Prior Guided Hybrid Sense
Approach and Extensive Benchmark Analysis | cs.CV | Underwater imaging grapples with challenges from light-water interactions,
leading to color distortions and reduced clarity. In response to these
challenges, we propose a novel Color Balance Prior \textbf{Guided}
\textbf{Hyb}rid \textbf{Sens}e \textbf{U}nderwater \textbf{I}mage
\textbf{R}estoration framework (\textbf{GuidedHybSensUIR}). This framework
operates on multiple scales, employing the proposed \textbf{Detail Restorer}
module to restore low-level detailed features at finer scales and utilizing the
proposed \textbf{Feature Contextualizer} module to capture long-range
contextual relations of high-level general features at a broader scale. The
hybridization of these different scales of sensing results effectively
addresses color casts and restores blurry details. In order to effectively
point out the evolutionary direction for the model, we propose a novel
\textbf{Color Balance Prior} as a strong guide in the feature contextualization
step and as a weak guide in the final decoding phase. We construct a
comprehensive benchmark using paired training data from three real-world
underwater datasets and evaluate on six test sets, including three paired and
three unpaired, sourced from four real-world underwater datasets. Subsequently,
we tested 14 traditional methods and retrained 23 existing deep learning underwater
image restoration methods on this benchmark, obtaining metric results for each
approach. This effort aims to furnish a valuable benchmark dataset as a
standard basis for comparison. The extensive experimental results demonstrate
that our method outperforms 37 other state-of-the-art methods overall on
various benchmark datasets and metrics, despite not achieving the best results
in certain individual cases. The code and dataset are available at
\href{https://github.com/CXH-Research/GuidedHybSensUIR}{https://github.com/CXH-Research/GuidedHybSensUIR}.
|
2501.02702 | QuIM-RAG: Advancing Retrieval-Augmented Generation with Inverted
Question Matching for Enhanced QA Performance | cs.CL cs.AI cs.LG | This work presents a novel architecture for building Retrieval-Augmented
Generation (RAG) systems to improve Question Answering (QA) tasks from a target
corpus. Large Language Models (LLMs) have revolutionized the analysis and
generation of human-like text. These models rely on pre-trained data and lack
real-time updates unless integrated with live data tools. RAG enhances LLMs by
integrating online resources and databases to generate contextually appropriate
responses. However, traditional RAG still encounters challenges like
information dilution and hallucinations when handling vast amounts of data. Our
approach addresses these challenges by converting corpora into a
domain-specific dataset, and a RAG architecture is constructed to generate
responses from the target document. We introduce QuIM-RAG (Question-to-question
Inverted Index Matching), a novel approach for the retrieval mechanism in our
system. This strategy generates potential questions from document chunks and
matches these with user queries to identify the most relevant text chunks for
generating accurate answers. We have implemented our RAG system on top of the
open-source Meta-LLaMA3-8B-instruct model from Meta, available on
Hugging Face. We constructed a custom corpus of 500+ pages from a high-traffic
website accessed thousands of times daily for answering complex questions,
along with manually prepared ground truth QA for evaluation. We compared our
approach with traditional RAG models using BERT-Score and RAGAS,
state-of-the-art metrics for evaluating LLM applications. Our evaluation
demonstrates that our approach outperforms traditional RAG architectures on
both metrics.
|
2501.02704 | Persistence of Backdoor-based Watermarks for Neural Networks: A
Comprehensive Evaluation | cs.LG cs.MM | Deep Neural Networks (DNNs) have gained considerable traction in recent years
due to the unparalleled results they have achieved. However, training such
sophisticated models is resource intensive, leading many to consider DNNs the
intellectual property (IP) of their owners. In this era of
cloud computing, high-performance DNNs are often deployed all over the internet
so that people can access them publicly. As such, DNN watermarking schemes,
especially backdoor-based watermarks, have been actively developed in recent
years to preserve proprietary rights. Nonetheless, much uncertainty remains
about the robustness of existing backdoor watermark schemes against both
adversarial attacks and unintended modifications such as fine-tuning of neural
network models. One reason is that no complete guarantee of robustness can be
given for backdoor-based watermarks. In this paper, we extensively evaluate
the persistence of recent backdoor-based watermarks within neural networks
under fine-tuning, and we propose a novel
data-driven idea to restore the watermark after fine-tuning without exposing the
trigger set. Our empirical results show that by solely introducing training
data after fine-tuning, the watermark can be restored if model parameters do
not shift dramatically during fine-tuning. Depending on the types of trigger
samples used, trigger accuracy can be reinstated to up to 100%. Our study
further explores how the restoration process works using loss landscape
visualization, as well as the idea of introducing training data in the
fine-tuning stage to alleviate watermark vanishing.
|
2501.02705 | Knowledge Distillation with Adapted Weight | cs.LG stat.AP | Although large models have shown a strong capacity to solve large-scale
problems in many areas including natural language and computer vision, their
voluminous parameters are hard to deploy in a real-time system due to
computational and energy constraints. Addressing this, knowledge distillation
through Teacher-Student architecture offers a sustainable pathway to compress
the knowledge of large models into more manageable sizes without significantly
compromising performance. To enhance the robustness and interpretability of
this framework, it is critical to understand how individual training data
impact model performance, which is an area that remains underexplored. We
propose the \textbf{Knowledge Distillation with Adaptive Influence Weight
(KD-AIF)} framework which leverages influence functions from robust statistics
to assign weights to training data, grounded in the four key SAFE principles:
Sustainability, Accuracy, Fairness, and Explainability. This novel approach not
only optimizes distillation but also increases transparency by revealing the
significance of different data. The exploration of various update mechanisms
within the KD-AIF framework further elucidates its potential to significantly
improve learning efficiency and generalization in student models, marking a
step toward more explainable and deployable Large Models. KD-AIF is effective
in knowledge distillation while also showing exceptional performance in
semi-supervised learning, outperforming existing baselines and methods on
multiple benchmarks (CIFAR-100, CIFAR-10-4k, SVHN-1k, and GLUE).
|
2501.02706 | Multilevel Semantic-Aware Model for AI-Generated Video Quality
Assessment | cs.CV | The rapid development of diffusion models has greatly advanced AI-generated
videos in terms of length and consistency recently, yet assessing AI-generated
videos still remains challenging. Previous approaches have often focused on
User-Generated Content (UGC), but few have targeted AI-Generated Video Quality
Assessment methods. In this work, we introduce MSA-VQA, a Multilevel
Semantic-Aware Model for AI-Generated Video Quality Assessment, which leverages
CLIP-based semantic supervision and cross-attention mechanisms. Our
hierarchical framework analyzes video content at three levels: frame, segment,
and video. We propose a Prompt Semantic Supervision Module using the text
encoder of CLIP to ensure semantic consistency between videos and conditional prompts.
Additionally, we propose the Semantic Mutation-aware Module to capture subtle
variations between frames. Extensive experiments demonstrate our method
achieves state-of-the-art results.
|
2501.02709 | Horizon Generalization in Reinforcement Learning | cs.LG cs.AI | We study goal-conditioned RL through the lens of generalization, but not in
the traditional sense of random augmentations and domain randomization. Rather,
we aim to learn goal-directed policies that generalize with respect to the
horizon: after training to reach nearby goals (which are easy to learn), these
policies should succeed in reaching distant goals (which are quite challenging
to learn). In the same way that invariance is closely linked with
generalization in other areas of machine learning (e.g., normalization layers
make a network invariant to scale, and therefore generalize to inputs of
varying scales), we show that this notion of horizon generalization is closely
linked with invariance to planning: a policy navigating towards a goal will
select the same actions as if it were navigating to a waypoint en route to that
goal. Thus, such a policy trained to reach nearby goals should succeed at
reaching arbitrarily-distant goals. Our theoretical analysis proves that both
horizon generalization and planning invariance are possible, under some
assumptions. We present new experimental results and recall findings from prior
work in support of our theoretical results. Taken together, our results open
the door to studying how techniques for invariance and generalization developed
in other areas of machine learning might be adapted to achieve this alluring
property.
|
2501.02711 | KG-CF: Knowledge Graph Completion with Context Filtering under the
Guidance of Large Language Models | cs.AI cs.CL | Large Language Models (LLMs) have shown impressive performance in various
tasks, including knowledge graph completion (KGC). However, current studies
mostly apply LLMs to classification tasks, like identifying missing triplets,
rather than ranking-based tasks, where the model ranks candidate entities based
on plausibility. This focus limits the practical use of LLMs in KGC, as
real-world applications prioritize highly plausible triplets. Additionally,
while graph paths can help infer the existence of missing triplets and improve
completion accuracy, they often contain redundant information. To address these
issues, we propose KG-CF, a framework tailored for ranking-based KGC tasks.
KG-CF leverages LLMs' reasoning abilities to filter out irrelevant contexts,
achieving superior results on real-world datasets. The code and datasets are
available at \url{https://anonymous.4open.science/r/KG-CF}.
|
2501.02715 | Improved Data Encoding for Emerging Computing Paradigms: From Stochastic
to Hyperdimensional Computing | cs.ET cs.AI cs.LG cs.NE | Data encoding is a fundamental step in emerging computing paradigms,
particularly in stochastic computing (SC) and hyperdimensional computing (HDC),
where it plays a crucial role in determining the overall system performance and
hardware cost efficiency. This study presents an advanced encoding strategy
that leverages a hardware-friendly class of low-discrepancy (LD) sequences,
specifically powers-of-2 bases of Van der Corput (VDC) sequences (VDC-2^n), as
sources for random number generation. Our approach significantly enhances the
accuracy and efficiency of SC and HDC systems by addressing challenges
associated with randomness. By employing LD sequences, we improve correlation
properties and reduce hardware complexity. Experimental results demonstrate
significant improvements in accuracy and energy savings for SC and HDC systems.
Our solution provides a robust framework for integrating SC and HDC in
resource-constrained environments, paving the way for efficient and scalable AI
implementations.
|
2501.02718 | Multi-Transmission Node DER Aggregation: Chance-Constrained Unit
Commitment with Bounded Hetero-Dimensional Mixture Model for Uncertain
Distribution Factors | eess.SY cs.SY | To facilitate the integration of distributed energy resources (DERs) into the
wholesale market while maintaining the tractability of associated market
operation tools such as unit commitment (UC), existing DER aggregation (DERA)
studies usually assume that each DERA is located at a single node of the
transmission network. Nevertheless, the increasing scale and geographical
distribution of DERs spur the emergence of DERAs covering multiple transmission
nodes, posing new challenges in modeling such multi-transmission-node DERAs
(M-DERAs). Indeed, assessing the aggregated impact of an M-DERA on power flows
is a non-trivial task, because the sensitivities of each transmission line to
DERs at different transmission nodes are not identical. Inspired by the
distribution factor (DF) based shift factor (SF) aggregation strategy in
industry practice, this paper proposes a novel DF-based chance-constrained UC
(CCUC) model to determine system optimal operation plans with M-DERAs. DFs,
treated as uncertain parameters to describe possible responses of DERs against
aggregated dispatch instructions from regional transmission organizations, are
modeled via a bounded hetero-dimensional mixture model (BHMM) by leveraging
historical DF records distributed on multiple hyperplanes in a bounded space.
With this, power flow limits are modeled as chance constraints in CCUC, which
is reformulated into a scenario-based stochastic form and solved by Benders
decomposition. The proposed method is tested on an IEEE 24-bus system to
illustrate its effectiveness in managing M-DERA integration while ensuring
operational economics and mitigating the overloading of transmission lines.
|