| id | title | categories | abstract |
|---|---|---|---|
2501.18427
|
SANA 1.5: Efficient Scaling of Training-Time and Inference-Time Compute
in Linear Diffusion Transformer
|
cs.CV
|
This paper presents SANA-1.5, a linear Diffusion Transformer for efficient
scaling in text-to-image generation. Building upon SANA-1.0, we introduce three
key innovations: (1) Efficient Training Scaling: A depth-growth paradigm that
enables scaling from 1.6B to 4.8B parameters with significantly reduced
computational resources, combined with a memory-efficient 8-bit optimizer. (2)
Model Depth Pruning: A block importance analysis technique for efficient model
compression to arbitrary sizes with minimal quality loss. (3) Inference-time
Scaling: A repeated sampling strategy that trades computation for model
capacity, enabling smaller models to match larger model quality at inference
time. Through these strategies, SANA-1.5 achieves a text-image alignment score
of 0.72 on GenEval, which can be further improved to 0.80 through inference
scaling, establishing a new SoTA on the GenEval benchmark. These innovations enable
efficient model scaling across different compute budgets while maintaining high
quality, making high-quality image generation more accessible.
|
2501.18432
|
Solving Drone Routing Problems with Quantum Computing: A Hybrid Approach
Combining Quantum Annealing and Gate-Based Paradigms
|
quant-ph cs.AI cs.ET
|
This paper presents a novel hybrid approach to solving real-world drone
routing problems by leveraging the capabilities of quantum computing. The
proposed method, coined Quantum for Drone Routing (Q4DR), integrates the two
most prominent paradigms in the field: quantum gate-based computing, through
the Eclipse Qrisp programming language; and quantum annealers, by means of
D-Wave Systems' devices. The algorithm is divided into two phases: an
initial clustering phase executed using a Quantum Approximate Optimization
Algorithm (QAOA), and a routing phase employing quantum annealers. The efficacy
of Q4DR is demonstrated through three use cases of increasing complexity, each
incorporating real-world constraints such as asymmetric costs, forbidden paths,
and itinerant charging points. This research contributes to the growing body of
work in quantum optimization, showcasing the practical applications of quantum
computing in logistics and route planning.
|
2501.18435
|
GENIE: Generative Note Information Extraction model for structuring EHR
data
|
cs.CL
|
Electronic Health Records (EHRs) hold immense potential for advancing
healthcare, offering rich, longitudinal data that combines structured
information with valuable insights from unstructured clinical notes. However,
the unstructured nature of clinical text poses significant challenges for
secondary applications. Traditional methods for structuring EHR free-text data,
such as rule-based systems and multi-stage pipelines, are often limited by
their time-consuming configurations and inability to adapt across clinical
notes from diverse healthcare settings. Few systems provide comprehensive
attribute extraction for terminologies. While giant large language models
(LLMs) such as GPT-4 and LLaMA 405B excel at structuring tasks, they are slow,
costly, and impractical for large-scale use. To overcome these limitations, we
introduce GENIE, a Generative Note Information Extraction system that leverages
LLMs to streamline the structuring of unstructured clinical text into usable
data with standardized format. GENIE processes entire paragraphs in a single
pass, extracting entities, assertion statuses, locations, modifiers, values,
and purposes with high accuracy. Its unified, end-to-end approach simplifies
workflows, reduces errors, and eliminates the need for extensive manual
intervention. Using a robust data preparation pipeline and fine-tuned small
scale LLMs, GENIE achieves competitive performance across multiple information
extraction tasks, outperforming traditional tools such as cTAKES and MetaMap,
and can extract additional attributes. GENIE strongly enhances real-world
applicability and scalability in healthcare systems. By open-sourcing the model
and test data, we aim to encourage collaboration and drive further advancements
in EHR structurization.
|
2501.18438
|
o3-mini vs DeepSeek-R1: Which One is Safer?
|
cs.SE cs.AI
|
The arrival of DeepSeek-R1 constitutes a turning point for the AI industry
in general and for LLMs in particular. Its capabilities have demonstrated
outstanding performance in several tasks, including creative thinking, code
generation, maths and automated program repair, at apparently lower execution
cost. However, LLMs must adhere to an important qualitative property, i.e.,
their alignment with safety and human values. A clear competitor of DeepSeek-R1
is its American counterpart, OpenAI's o3-mini model, which is expected to set
high standards in terms of performance, safety and cost. In this technical
report, we systematically assess the safety level of both DeepSeek-R1 (70b
version) and OpenAI's o3-mini (beta version). To this end, we make use of our
recently released automated safety testing tool, named ASTRAL. By leveraging
this tool, we automatically and systematically generated and executed 1,260
test inputs on both models. After conducting a semi-automated assessment of the
outcomes provided by both LLMs, the results indicate that DeepSeek-R1 produces
significantly more unsafe responses (12%) than OpenAI's o3-mini (1.2%).
|
2501.18439
|
MolGraph-xLSTM: A graph-based dual-level xLSTM framework with multi-head
mixture-of-experts for enhanced molecular representation and interpretability
|
cs.LG q-bio.BM
|
Predicting molecular properties is essential for drug discovery, and
computational methods can greatly enhance this process. Molecular graphs have
become a focus for representation learning, with Graph Neural Networks (GNNs)
widely used. However, GNNs often struggle with capturing long-range
dependencies. To address this, we propose MolGraph-xLSTM, a novel graph-based
xLSTM model that enhances feature extraction and effectively models molecule
long-range interactions.
Our approach processes molecular graphs at two scales: atom-level and
motif-level. For atom-level graphs, a GNN-based xLSTM framework with jumping
knowledge extracts local features and aggregates multilayer information to
capture both local and global patterns effectively. Motif-level graphs provide
complementary structural information for a broader molecular view. Embeddings
from both scales are refined via a multi-head mixture of experts (MHMoE),
further enhancing expressiveness and performance.
We validate MolGraph-xLSTM on 10 molecular property prediction datasets,
covering both classification and regression tasks. Our model demonstrates
consistent performance across all datasets, with improvements of up to 7.03% on
the BBBP dataset for classification and 7.54% on the ESOL dataset for
regression compared to baselines. On average, MolGraph-xLSTM achieves an AUROC
improvement of 3.18% for classification tasks and an RMSE reduction of 3.83%
across regression datasets compared to the baseline methods. These results
confirm the effectiveness of our model, offering a promising solution for
molecular representation learning for drug discovery.
|
2501.18441
|
From Public Square to Echo Chamber: The Fragmentation of Online
Discourse
|
cs.CY cs.AI cs.LG cs.SI
|
This paper examines how social media algorithms and filter bubbles contribute
to the fragmentation of online discourse, fostering ideological divides and
undermining shared understanding. Drawing on Michael Sandel's philosophical
emphasis on community and shared values, the study explores how digital
platforms amplify discrimination discourse, including sexism, racism,
xenophobia, ableism, homophobia, and religious intolerance, during periods of
heightened societal tension. By analyzing the dynamics of digital communities,
the research highlights mechanisms driving the emergence and evolution of
discourse fragments in response to real-world events. The findings reveal how
social media structures exacerbate polarization, restrict cross-group dialogue,
and erode the collective reasoning essential for a just society. This study
situates philosophical perspectives within a computational analysis of social
media interactions, offering a nuanced understanding of the challenges posed by
fragmented discourse in the digital age.
|
2501.18444
|
Adaptive Object Detection for Indoor Navigation Assistance: A
Performance Evaluation of Real-Time Algorithms
|
cs.CV cs.AI cs.LG
|
This study addresses the need for accurate and efficient object detection in
assistive technologies for visually impaired individuals. We evaluate four
real-time object detection algorithms (YOLO, SSD, Faster R-CNN, and Mask R-CNN)
within the context of indoor navigation assistance. Using the Indoor Objects
Detection dataset, we analyze detection accuracy, processing speed, and
adaptability to indoor environments. Our findings highlight the trade-offs
between precision and efficiency, offering insights into selecting optimal
algorithms for real-time assistive navigation. This research advances adaptive
machine learning applications, enhancing indoor navigation solutions for the
visually impaired and promoting accessibility.
|
2501.18448
|
Autonomy and Safety Assurance in the Early Development of Robotics and
Autonomous Systems
|
cs.RO cs.AI
|
This report provides an overview of the workshop titled Autonomy and Safety
Assurance in the Early Development of Robotics and Autonomous Systems, hosted
by the Centre for Robotic Autonomy in Demanding and Long-Lasting Environments
(CRADLE) on September 2, 2024, at The University of Manchester, UK. The event
brought together representatives from six regulatory and assurance bodies
across diverse sectors to discuss challenges and evidence for ensuring the
safety of autonomous and robotic systems, particularly autonomous inspection
robots (AIR). The workshop featured six invited talks by the regulatory and
assurance bodies. CRADLE aims to make assurance an integral part of engineering
reliable, transparent, and trustworthy autonomous systems. Key discussions
revolved around three research questions: (i) challenges in assuring safety for
AIR; (ii) evidence for safety assurance; and (iii) how assurance cases need to
differ for autonomous systems. Following the invited talks, the breakout groups
further discussed the research questions using case studies from ground (rail),
nuclear, underwater, and drone-based AIR. This workshop offered a valuable
opportunity for representatives from industry, academia, and regulatory bodies
to discuss challenges related to assured autonomy. Feedback from participants
indicated a strong willingness to adopt a design-for-assurance process to
ensure that robots are developed and verified to meet regulatory expectations.
|
2501.18452
|
Clustering Properties of Self-Supervised Learning
|
cs.LG cs.AI
|
Self-supervised learning (SSL) methods via joint embedding architectures have
proven remarkably effective at capturing semantically rich representations with
strong clustering properties, even in the absence of label supervision.
Despite this, few of them have explored leveraging these untapped properties to
improve themselves. In this paper, we provide evidence through various
metrics that the encoder's output $encoding$ exhibits superior and more stable
clustering properties compared to other components. Building on this insight,
we propose a novel positive-feedback SSL method, termed Representation Soft
Assignment (ReSA), which leverages the model's clustering properties to promote
learning in a self-guided manner. Extensive experiments on standard SSL
benchmarks reveal that models pretrained with ReSA outperform other
state-of-the-art SSL methods by a significant margin. Finally, we analyze how
ReSA facilitates better clustering properties, demonstrating that it
effectively enhances clustering performance at both fine-grained and
coarse-grained levels, shaping representations that are inherently more
structured and semantically meaningful.
|
2501.18453
|
Transfer Learning for Keypoint Detection in Low-Resolution Thermal TUG
Test Images
|
cs.CV eess.IV
|
This study presents a novel approach to human keypoint detection in
low-resolution thermal images using transfer learning techniques. We introduce
the first application of the Timed Up and Go (TUG) test in thermal image
computer vision, establishing a new paradigm for mobility assessment. Our
method leverages a MobileNetV3-Small encoder and a ViTPose decoder, trained
using a composite loss function that balances latent representation alignment
and heatmap accuracy. The model was evaluated using the Object Keypoint
Similarity (OKS) metric from the COCO Keypoint Detection Challenge. The
proposed model achieves AP, AP50, and AP75 scores of
0.861, 0.942, and 0.887, respectively, outperforming traditional supervised
learning approaches like Mask R-CNN and ViTPose-Base. Moreover, our model
demonstrates superior computational efficiency in terms of parameter count and
FLOPS. This research lays a solid foundation for future clinical applications
of thermal imaging in mobility assessment and rehabilitation monitoring.
|
2501.18455
|
Conversation Games and a Strategic View of the Turing Test
|
cs.AI cs.GT
|
Although many game-theoretic models replicate real interactions that often
rely on natural language, explicit study of games where language is central to
strategic interaction remains limited. This paper introduces the
\emph{conversation game}, a multi-stage, extensive-form game based on
linguistic strategic interaction. We focus on a subset of the games, called
verdict games. In a verdict game, two players alternate to contribute to a
conversation, which is evaluated at each stage by a non-strategic judge who may
render a conclusive binary verdict, or a decision to continue the dialogue. The
game ends once a limit is reached or a verdict is given. We show that many
familiar processes, such as interrogations or court proceedings, fall under
this category. We also show that the Turing test is an instance of a verdict
game, and discuss the significance of a strategic view of the Turing test in
the age of advanced AI deception. We demonstrate the practical relevance of the
proposed concepts through simulation experiments, and show that a strategic
agent outperforms a naive agent by a wide margin.
|
2501.18456
|
adabmDCA 2.0 -- a flexible but easy-to-use package for Direct Coupling
Analysis
|
q-bio.QM cs.LG physics.bio-ph
|
In this methods article, we provide a flexible but easy-to-use implementation
of Direct Coupling Analysis (DCA) based on Boltzmann machine learning, together
with a tutorial on how to use it. The package \texttt{adabmDCA 2.0} is
available in different programming languages (C++, Julia, Python) usable on
different architectures (single-core and multi-core CPU, GPU) using a common
front-end interface. In addition to several learning protocols for dense and
sparse generative DCA models, it allows users to directly address common
downstream tasks such as residue-residue contact prediction, mutational-effect
prediction, scoring of sequence libraries, and generation of artificial sequences for
sequence design. It is readily applicable to protein and RNA sequence data.
|
2501.18457
|
CALM: Unleashing the Cross-Lingual Self-Aligning Ability of Language
Model Question Answering
|
cs.CL
|
Large Language Models (LLMs) are pretrained on extensive multilingual corpora
to acquire both language-specific cultural knowledge and general knowledge.
While LLMs should ideally provide consistent responses to culture-independent
questions across languages, we observe significant performance disparities. To
address this, we explore the Cross-Lingual Self-Aligning ability of Language
Models (CALM) to align knowledge across languages. Specifically, for a given
question, we sample multiple responses across different languages and select
the most self-consistent response as the target, leaving the remaining
responses as negative examples. We then employ direct preference optimization
(DPO) to align the model's knowledge across different languages. Evaluations on
the MEDQA and X-CSQA datasets demonstrate CALM's effectiveness in enhancing
cross-lingual knowledge question answering, both in zero-shot and
retrieval-augmented settings. We also found that increasing the number of
languages involved in CALM training leads to higher accuracy and consistency.
We offer a qualitative analysis of how cross-lingual consistency can enhance
knowledge alignment and explore the method's generalizability.
|
2501.18462
|
Massive Online Course on Entrepreneurship. Case Study
|
cs.HC cs.SI
|
Entrepreneurship is a key component of society, and universities and major
political structures have tried to support its development in recent years. The
present study aims to examine students' perceptions (by gender) of
entrepreneurial intentions after participating in a course with a large
number of undergraduate students. There were 970 students enrolled from
different faculties with various specializations. We conducted a gender-based
survey on the unconventional entrepreneurial fundamentals course, where each
course was delivered by a different speaker. We also compared the responses
provided by computer science students with the overall responses to find
differences in their perceptions related to the feasibility of teaching
entrepreneurship online, determining the entrepreneurial intention of the
students taking this course, and analyzing the perceptions related to the
business environment and the ease of starting a business. We found that
students, regardless of gender or field of study, prefer interactive online
presentations based on the manner in which lectures on this subject were
conducted.
|
2501.18463
|
A Benchmark and Evaluation for Real-World Out-of-Distribution Detection
Using Vision-Language Models
|
cs.CV
|
Out-of-distribution (OOD) detection aims to identify OOD samples during
inference to ensure the safety of deployed models. However, conventional
benchmarks have reached performance saturation, making it difficult to compare
recent OOD detection methods. To address this challenge, we introduce three
novel OOD detection benchmarks that enable a deeper understanding of method
characteristics and reflect real-world conditions. First, we present
ImageNet-X, designed to evaluate performance under challenging semantic shifts.
Second, we propose ImageNet-FS-X for full-spectrum OOD detection, assessing
robustness to covariate shifts (feature distribution shifts). Finally, we
propose Wilds-FS-X, which extends these evaluations to real-world datasets,
offering a more comprehensive testbed. Our experiments reveal that recent
CLIP-based OOD detection methods struggle to varying degrees across the three
proposed benchmarks, and none of them consistently outperforms the others. We
hope the community goes beyond specific benchmarks and includes more
challenging conditions reflecting real-world scenarios. The code is available
at https://github.com/hoshi23/OOD-X-Benchmarks.
|
2501.18468
|
Beyond Instructed Tasks: Recognizing In-the-Wild Reading Behaviors in
the Classroom Using Eye Tracking
|
cs.HC cs.AI
|
Understanding reader behaviors such as skimming, deep reading, and scanning
is essential for improving educational instruction. While prior eye-tracking
studies have trained models to recognize reading behaviors, they often rely on
instructed reading tasks, which can alter natural behaviors and limit the
applicability of these findings to in-the-wild settings. Additionally, there is
a lack of clear definitions for reading behavior archetypes in the literature.
We conducted a classroom study to address these issues by collecting instructed
and in-the-wild reading data. We developed a mixed-method framework, including
a human-driven theoretical model, statistical analyses, and an AI classifier,
to differentiate reading behaviors based on their velocity, density, and
sequentiality. Our lightweight 2D CNN achieved an F1 score of 0.8 for behavior
recognition, providing a robust approach for understanding in-the-wild reading.
This work advances our ability to provide detailed behavioral insights to
educators, supporting more targeted and effective assessment and instruction.
|
2501.18470
|
Resampling Filter Design for Multirate Neural Audio Effect Processing
|
eess.AS cs.LG cs.SD eess.SP
|
Neural networks have become ubiquitous in audio effects modelling, especially
for guitar amplifiers and distortion pedals. One limitation of such models is
that the sample rate of the training data is implicitly encoded in the model
weights and therefore not readily adjustable at inference. Recent work explored
modifications to recurrent neural network architecture to approximate a sample
rate independent system, enabling audio processing at a rate that differs from
the original training rate. This method works well for integer oversampling and
can reduce aliasing caused by nonlinear activation functions. For small
fractional changes in sample rate, fractional delay filters can be used to
approximate sample rate independence, but in some cases this method fails
entirely. Here, we explore the use of signal resampling at the input and output
of the neural network as an alternative solution. We investigate several
resampling filter designs and show that a two-stage design consisting of a
half-band IIR filter cascaded with a Kaiser window FIR filter can give results
similar or superior to the previously proposed model adjustment method, with
far fewer operations per sample and less than one millisecond of latency at typical
audio rates. Furthermore, we investigate interpolation and decimation filters
for the task of integer oversampling and show that cascaded half-band IIR and
FIR designs can be used in conjunction with the model adjustment method to
reduce aliasing in a range of distortion effect models.
|
2501.18474
|
Tuning Vision Foundation Model via Test-Time Prompt-Guided Training for
VFSS Segmentations
|
cs.CV
|
Vision foundation models have demonstrated exceptional generalization
capabilities in segmentation tasks for both generic and specialized images.
However, a performance gap persists between foundation models and
task-specific, specialized models. Fine-tuning foundation models on downstream
datasets is often necessary to bridge this gap. Unfortunately, obtaining fully
annotated ground truth for downstream datasets is both challenging and costly.
To address this limitation, we propose a novel test-time training paradigm that
enhances the performance of foundation models on downstream datasets without
requiring full annotations. Specifically, our method employs simple point
prompts to guide a test-time semi-self-supervised training task. The model
learns by resolving the ambiguity of the point prompt through various
augmentations. This approach directly tackles challenges in the medical imaging
field, where acquiring annotations is both time-intensive and expensive. We
conducted extensive experiments on our new Videofluoroscopy dataset (VFSS-5k)
for the instance segmentation task, achieving an average Dice coefficient of
0.868 across 12 anatomies with a single model.
|
2501.18475
|
CLoQ: Enhancing Fine-Tuning of Quantized LLMs via Calibrated LoRA
Initialization
|
cs.LG cs.AI
|
Fine-tuning large language models (LLMs) using low-rank adaptation (LoRA) has
become a highly efficient approach for downstream tasks, particularly in
scenarios with limited computational resources. However, applying LoRA
techniques to quantized LLMs poses unique challenges due to the reduced
representational precision of quantized weights. In this paper, we introduce
CLoQ (Calibrated LoRA initialization for Quantized LLMs), a simple
initialization strategy designed to overcome these challenges. Our approach
focuses on minimizing the layer-wise discrepancy between the original LLM and
its quantized counterpart with LoRA components during initialization. By
leveraging a small calibration dataset, CLoQ quantizes a pre-trained LLM and
determines the optimal LoRA components for each layer, ensuring a strong
foundation for subsequent fine-tuning. A key contribution of this work is a
novel theoretical result that enables the accurate and closed-form construction
of these optimal LoRA components. We validate the efficacy of CLoQ across
multiple tasks such as language generation, arithmetic reasoning, and
commonsense reasoning, demonstrating that it consistently outperforms existing
LoRA fine-tuning methods for quantized LLMs, especially at ultra-low bit
widths.
|
2501.18478
|
SimpleDepthPose: Fast and Reliable Human Pose Estimation with
RGBD-Images
|
cs.CV
|
In the rapidly advancing domain of computer vision, accurately estimating the
poses of multiple individuals from various viewpoints remains a significant
challenge, especially when reliability is a key requirement. This paper
introduces a novel algorithm that excels in multi-view, multi-person pose
estimation by incorporating depth information. An extensive evaluation
demonstrates that the proposed algorithm not only generalizes well to unseen
datasets and runs quickly, but is also adaptable to
different keypoints. To support further research, all of the work is publicly
accessible.
|
2501.18479
|
Transformer Semantic Genetic Programming for Symbolic Regression
|
cs.NE
|
In standard genetic programming (stdGP), solutions are varied by modifying
their syntax, with uncertain effects on their semantics. Geometric-semantic
genetic programming (GSGP), a popular variant of GP, effectively searches the
semantic solution space using variation operations based on linear
combinations, although it results in significantly larger solutions. This paper
presents Transformer Semantic Genetic Programming (TSGP), a novel and flexible
semantic approach that uses a generative transformer model as a search operator.
The transformer is trained on synthetic test problems and learns semantic
similarities between solutions. Once trained, the model can be used to
create offspring solutions with high semantic similarity, even for unseen and
unknown problems. Experiments on several symbolic regression problems show that
TSGP generates solutions with comparable or even significantly better
prediction quality than stdGP, SLIM_GSGP, DSR, and DAE-GP. Like SLIM_GSGP, TSGP
is able to create new solutions that are semantically similar without creating
solutions of large size. An analysis of the search dynamics reveals that the
solutions generated by TSGP are semantically more similar than those
generated by the benchmark approaches, allowing better exploration of the
semantic solution space.
|
2501.18487
|
Track-On: Transformer-based Online Point Tracking with Memory
|
cs.CV
|
In this paper, we consider the problem of long-term point tracking, which
requires consistent identification of points across multiple frames in a video,
despite changes in appearance, lighting, perspective, and occlusions. We target
online tracking on a frame-by-frame basis, making it suitable for real-world,
streaming scenarios. Specifically, we introduce Track-On, a simple
transformer-based model designed for online long-term point tracking. Unlike
prior methods that depend on full temporal modeling, our model processes video
frames causally without access to future frames, leveraging two memory modules
-- spatial memory and context memory -- to capture temporal information and
maintain reliable point tracking over long time horizons. At inference time, it
employs patch classification and refinement to identify correspondences and
track points with high accuracy. Through extensive experiments, we demonstrate
that Track-On sets a new state-of-the-art for online models and delivers
superior or competitive results compared to offline approaches on seven
datasets, including the TAP-Vid benchmark. Our method offers a robust and
scalable solution for real-time tracking in diverse applications. Project page:
https://kuis-ai.github.io/track_on
|
2501.18490
|
Curriculum-based Sample Efficient Reinforcement Learning for Robust
Stabilization of a Quadrotor
|
cs.RO cs.AI
|
This article introduces a curriculum learning approach to develop a
reinforcement learning-based robust stabilizing controller for a Quadrotor that
meets predefined performance criteria. The learning objective is to achieve
desired positions from random initial conditions while adhering to both
transient and steady-state performance specifications. This objective is
challenging for conventional one-stage end-to-end reinforcement learning, due
to the strong coupling between position and orientation dynamics, the
complexity in designing and tuning the reward function, and poor sample
efficiency, which necessitates substantial computational resources and leads to
extended convergence times. To address these challenges, this work decomposes
the learning objective into a three-stage curriculum that incrementally
increases task complexity. The curriculum begins with learning to achieve
stable hovering from a fixed initial condition, followed by progressively
introducing randomization in initial positions, orientations, and velocities. A
novel additive reward function is proposed to incorporate transient and
steady-state performance specifications. The results demonstrate that the
Proximal Policy Optimization (PPO)-based curriculum learning approach, coupled
with the proposed reward structure, achieves superior performance compared to a
single-stage PPO-trained policy with the same reward function, while
significantly reducing computational resource requirements and convergence
time. The curriculum-trained policy's performance and robustness are thoroughly
validated under random initial conditions and in the presence of disturbances.
|
2501.18492
|
GuardReasoner: Towards Reasoning-based LLM Safeguards
|
cs.CR cs.AI cs.LG
|
As LLMs increasingly impact safety-critical applications, ensuring their
safety using guardrails remains a key challenge. This paper proposes
GuardReasoner, a new safeguard for LLMs, by guiding the guard model to learn to
reason. Concretely, we first create the GuardReasonerTrain dataset, which
consists of 127K samples with 460K detailed reasoning steps. Then, we introduce
reasoning SFT to unlock the reasoning capability of guard models. In addition,
we present hard sample DPO to further strengthen their reasoning ability. In
this manner, GuardReasoner achieves better performance, explainability, and
generalizability. Extensive experiments and analyses on 13 benchmarks of 3
guardrail tasks demonstrate its superiority. Remarkably, GuardReasoner 8B
surpasses GPT-4o+CoT by 5.74% and LLaMA Guard 3 8B by 20.84% F1 score on
average. We release the training data, code, and models with different scales
(1B, 3B, 8B) of GuardReasoner: https://github.com/yueliu1999/GuardReasoner/.
|
2501.18494
|
Runway vs. Taxiway: Challenges in Automated Line Identification and
Notation Approaches
|
cs.CV cs.LG
|
The increasing complexity of autonomous systems has amplified the need for
accurate and reliable labeling of runway and taxiway markings to ensure
operational safety. Precise detection and labeling of these markings are
critical for tasks such as navigation, landing assistance, and ground control
automation. Existing labeling algorithms, like the Automated Line
Identification and Notation Algorithm (ALINA), have demonstrated success in
identifying taxiway markings but encounter significant challenges when applied
to runway markings. This limitation arises due to notable differences in line
characteristics, environmental context, and interference from elements such as
shadows, tire marks, and varying surface conditions. To address these
challenges, we modified ALINA by adjusting color thresholds and refining region
of interest (ROI) selection to better suit runway-specific contexts. While
these modifications yielded limited improvements, the algorithm still struggled
with consistent runway identification, often mislabeling elements such as the
horizon or non-relevant background features. This highlighted the need for a
more robust solution capable of adapting to diverse visual interferences. In
this paper, we propose integrating a classification step using a Convolutional
Neural Network (CNN) named AssistNet. By incorporating this classification
step, the detection pipeline becomes more resilient to environmental variations
and misclassifications. This work not only identifies the challenges but also
outlines solutions, paving the way for improved automated labeling techniques
essential for autonomous aviation systems.
|
2501.18500
|
HSRMamba: Contextual Spatial-Spectral State Space Model for Single
Hyperspectral Super-Resolution
|
cs.CV eess.IV
|
Mamba has demonstrated exceptional performance in visual tasks due to its
powerful global modeling capabilities and linear computational complexity,
offering considerable potential in hyperspectral image super-resolution
(HSISR). However, in HSISR, Mamba faces challenges as transforming images into
1D sequences neglects the spatial-spectral structural relationships between
locally adjacent pixels, and its performance is highly sensitive to input
order, which affects the restoration of both spatial and spectral details. In
this paper, we propose HSRMamba, a contextual spatial-spectral modeling state
space model for HSISR, to address these issues both locally and globally.
Specifically, a local spatial-spectral partitioning mechanism is designed to
establish patch-wise causal relationships among adjacent pixels in 3D features,
mitigating the local forgetting issue. Furthermore, a global spectral
reordering strategy based on spectral similarity is employed to enhance the
causal representation of similar pixels across both spatial and spectral
dimensions. Finally, experimental results demonstrate that our HSRMamba
outperforms state-of-the-art methods in both quantitative quality and visual
results. Code will be available soon.
|
2501.18501
|
Beyond Prior Limits: Addressing Distribution Misalignment in Particle
Filtering
|
stat.ML cs.AI cs.LG
|
Particle filtering is a Bayesian inference method and a fundamental tool in
state estimation for dynamic systems, but its effectiveness is often limited by
the constraints of the initial prior distribution, a phenomenon we define as
the Prior Boundary Phenomenon. This challenge arises when target states lie
outside the prior's support, rendering traditional particle filtering methods
inadequate for accurate estimation. Although techniques like unbounded priors
and larger particle sets have been proposed, they remain computationally
prohibitive and lack adaptability in dynamic scenarios. To systematically
overcome these limitations, we propose the Diffusion-Enhanced Particle
Filtering Framework, which introduces three key innovations: adaptive diffusion
through exploratory particles, entropy-driven regularisation to prevent weight
collapse, and kernel-based perturbations for dynamic support expansion. These
mechanisms collectively enable particle filtering to explore beyond prior
boundaries, ensuring robust state estimation for out-of-boundary targets.
Theoretical analysis and extensive experiments validate the framework's
effectiveness, showing significant improvements in success rates and
estimation accuracy across high-dimensional and non-convex scenarios.
|
2501.18502
|
One-Bit Distributed Mean Estimation with Unknown Variance
|
cs.IT math.IT math.ST stat.TH
|
In this work, we study the problem of distributed mean estimation with
$1$-bit communication constraints when the variance is unknown. We focus on the
specific case where each user has access to one i.i.d. sample drawn from a
distribution that belongs to a scale-location family, and is limited to sending
just a single bit of information to a central server whose goal is to estimate
the mean. We propose non-adaptive and adaptive estimators that are shown to be
asymptotically normal. We derive bounds on the asymptotic (in the number of
users) Mean Squared Error (MSE) achieved by these estimators. For a class of
symmetric log-concave distributions, we derive matching lower bounds for the
MSE achieved by adaptive estimators, proving the optimality of our scheme. We
show that non-adaptive estimators can be strictly suboptimal by deriving a
lower bound on the MSE achieved by any non-adaptive estimator for Gaussian
distributions and demonstrating a positive gap between this and the MSE
achieved by our adaptive scheme.
|
2501.18504
|
CLEAR: Cue Learning using Evolution for Accurate Recognition Applied to
Sustainability Data Extraction
|
cs.CV cs.AI cs.NE
|
Large Language Model (LLM) image recognition is a powerful tool for
extracting data from images, but accuracy depends on providing sufficient cues
in the prompt - requiring a domain expert for specialized tasks. We introduce
Cue Learning using Evolution for Accurate Recognition (CLEAR), which uses a
combination of LLMs and evolutionary computation to generate and optimize cues
such that recognition of specialized features in images is improved. It
achieves this by auto-generating a novel domain-specific representation and
then using it to optimize suitable textual cues with a genetic algorithm. We
apply CLEAR to the real-world task of identifying sustainability data from
interior and exterior images of buildings. We investigate the effects of using
a variable-length representation compared to fixed-length and show how LLM
consistency can be improved by refactoring from categorical to real-valued
estimates. We show that CLEAR achieves higher accuracy than expert human
recognition and human-authored prompts in every task, with error rates improved
by up to two orders of magnitude, and an ablation study evinces solution
concision.
|
2501.18505
|
Path Planning and Optimization for Cuspidal 6R Manipulators
|
cs.RO
|
A cuspidal robot can move from one inverse kinematics (IK) solution to
another without crossing a singularity. Multiple industrial robots are
cuspidal. They tend to have a beautiful mechanical design, but they pose path
planning challenges. A task-space path may have a valid IK solution for each
point along the path, but a continuous joint-space path may depend on the
choice of the IK solution or even be infeasible. This paper presents new
analysis, path planning, and optimization methods to enhance the utility of
cuspidal robots. We first demonstrate an efficient method to identify cuspidal
robots and show, for the first time, that the ABB GoFa and certain robots with
three parallel joint axes are cuspidal. We then propose a new path planning
method for cuspidal robots by finding all IK solutions for each point along a
task-space path and constructing a graph to connect each vertex corresponding
to an IK solution. Graph edges are weighted based on the optimization metric,
such as minimizing joint velocity. The optimal feasible path is the shortest
path in the graph. This method can find non-singular paths as well as smooth
paths which pass through singularities. Finally, this path planning method is
incorporated into a path optimization algorithm. Given a fixed workspace
toolpath, we optimize the offset of the toolpath in the robot base frame while
ensuring continuous joint motion. Code examples are available in a publicly
accessible repository.
|
2501.18509
|
Deconstruct Complexity (DeComplex): A Novel Perspective on Tackling
Dense Action Detection
|
cs.CV
|
Dense action detection involves detecting multiple co-occurring actions in an
untrimmed video while action classes are often ambiguous and represent
overlapping concepts. To address this challenging task, we introduce a novel
perspective inspired by how humans tackle complex tasks by breaking them into
manageable sub-tasks. Instead of relying on a single network to address the
entire problem, as in current approaches, we propose decomposing the problem
into detecting key concepts present in action classes, specifically, detecting
dense static concepts and detecting dense dynamic concepts, and assigning them
to distinct, specialized networks. Furthermore, simultaneous actions in a video
often exhibit interrelationships, and exploiting these relationships can
improve performance. However, we argue that current networks fail to
effectively learn these relationships due to their reliance on binary
cross-entropy optimization, which treats each class independently. To address
this limitation, we propose providing explicit supervision on co-occurring
concepts during network optimization through a novel language-guided
contrastive learning loss. Our extensive experiments demonstrate the
superiority of our approach over state-of-the-art methods, achieving
substantial relative improvements of 23.4% and 2.5% mAP on the challenging
benchmark datasets, Charades and MultiTHUMOS.
|
2501.18511
|
WILDCHAT-50M: A Deep Dive Into the Role of Synthetic Data in
Post-Training
|
cs.LG cs.CL
|
Language model (LLM) post-training, from DPO to distillation, can refine
behaviors and unlock new skills, but the open science supporting these
post-training techniques is still in its infancy. One limiting factor has been
the difficulty of conducting large-scale comparative analyses of synthetic data
generating models and LLM judges. To close this gap, we introduce WILDCHAT-50M,
the largest public chat dataset to date. We extend the existing WildChat
dataset to include responses not only from GPT, but from over 50 different
open-weight models, ranging in size from 0.5B to 104B parameters. We conduct an
extensive comparative analysis and demonstrate the potential of this dataset by
creating RE-WILD, our own public SFT mix, which outperforms the recent Tulu-3
SFT mixture from Allen AI with only 40% as many samples. Our dataset, samples
and code are available at https://github.com/penfever/wildchat-50m.
|
2501.18512
|
Streaming DiLoCo with overlapping communication: Towards a Distributed
Free Lunch
|
cs.CL
|
Training of large language models (LLMs) is typically distributed across a
large number of accelerators to reduce training time. Since internal states and
parameter gradients need to be exchanged at each and every single gradient
step, all devices need to be co-located using low-latency high-bandwidth
communication links to support the required high volume of exchanged bits.
Recently, distributed algorithms like DiLoCo have relaxed this co-location
constraint: accelerators can be grouped into "workers", where synchronizations
between workers occur only infrequently. This in turn means that workers can
afford to be connected by lower-bandwidth communication links without affecting
learning quality.
across workers still requires the same peak bandwidth as before, as the
synchronizations require all parameters to be exchanged across all workers. In
this paper, we improve DiLoCo in three ways. First, we synchronize only subsets
of parameters in sequence, rather than all at once, which greatly reduces peak
bandwidth. Second, we allow workers to continue training while synchronizing,
which decreases wall clock time. Third, we quantize the data exchanged by
workers, which further reduces bandwidth across workers. By properly combining
these modifications, we show experimentally that we can distribute training of
billion-scale parameters and reach similar quality as before, while reducing
the required bandwidth by two orders of magnitude.
|
2501.18514
|
Automating Physics-Based Reasoning for SysML Model Validation
|
eess.SY cs.ET cs.SE cs.SY
|
System and software design benefits greatly from formal modeling, allowing
for automated analysis and verification early in the design phase. Current
methods excel at checking information flow and component interactions, ensuring
consistency, and identifying dependencies within Systems Modeling Language
(SysML) models. However, these approaches often lack the capability to perform
physics-based reasoning about a system's behavior represented in SysML models,
particularly in the electromechanical domain. This significant gap critically
hinders the ability to automatically and effectively verify the correctness and
consistency of the model's behavior against well-established underlying
physical principles. Therefore, this paper presents an approach that leverages
existing research on function representation, including formal languages,
graphical representations, and reasoning algorithms, and integrates them with
physics-based verification techniques. Four case studies (coffeemaker, vacuum
cleaner, hairdryer, and wired speaker) are examined to illustrate the approach's
practicality and effectiveness in performing physics-based reasoning on systems
modeled in SysML. This automated physics-based reasoning is divided into two
main categories: (i) structural, performed on block definition diagrams (BDDs)
and internal block diagrams (IBDs), and (ii) functional, performed on activity
diagrams. This work advances
the field of automated reasoning by providing a framework for verifying
structural and functional correctness and consistency with physical laws within
SysML models.
|
2501.18516
|
Learn from the Past: Language-conditioned Object Rearrangement with
Large Language Models
|
cs.RO
|
Object rearrangement is a significant task for collaborative robots, where
they are directed to manipulate objects into a specified goal state.
Determining the placement of objects is a major challenge that influences the
efficiency of the rearrangement process. Most current methods heavily rely on
pre-collected datasets to train the model for predicting the goal position and
are restricted to specific instructions, which limits their broader
applicability and effectiveness. In this paper, we propose a framework for
language-conditioned object rearrangement based on the Large Language Model
(LLM). Particularly, our approach mimics human reasoning by using past
successful experiences as a reference to infer the desired goal position. Based
on LLM's strong natural language comprehension and inference ability, our
method can generalise to handle various everyday objects and free-form language
instructions in a zero-shot manner. Experimental results demonstrate that our
method can effectively execute robotic rearrangement tasks, even those
involving long sequential orders.
|
2501.18517
|
Integrating Spatial and Frequency Information for Under-Display Camera
Image Restoration
|
cs.CV
|
Under-Display Camera (UDC) houses a digital camera lens under a display
panel. However, UDC introduces complex degradations such as noise, blur,
decrease in transmittance, and flare. Despite the remarkable progress, previous
research on UDC mainly focuses on eliminating diffraction in the spatial domain
and rarely explores its potential in the frequency domain. It is essential to
consider both the spatial and frequency domains effectively. For example,
degradations, such as noise and blur, can be addressed by local information
(e.g., CNN kernels in the spatial domain). At the same time, tackling flares
may require leveraging global information (e.g., the frequency domain). In this
paper, we revisit the UDC degradations in the Fourier space and identify
intrinsic frequency priors that indicate the presence of flares. Based on this
observation, we propose a novel multi-level DNN architecture called SFIM. It
efficiently restores UDC-distorted images by integrating local and global (the
collective contribution of all points in the image) information. The
architecture exploits CNNs to capture local information and FFT-based models to
capture global information. SFIM comprises a spatial domain block (SDB), a
Frequency Domain Block (FDB), and an Attention-based Multi-level Integration
Block (AMIB). Specifically, SDB focuses more on detailed textures such as noise
and blur, FDB emphasizes irregular texture loss in extensive areas such as
flare, and AMIB enables effective cross-domain interaction. SFIM's superior
performance over state-of-the-art approaches is demonstrated through rigorous
quantitative and qualitative assessments across three UDC benchmarks.
|
2501.18527
|
Neural Discovery in Mathematics: Do Machines Dream of Colored Planes?
|
cs.LG math.CO
|
We demonstrate how neural networks can drive mathematical discovery through a
case study of the Hadwiger-Nelson problem, a long-standing open problem from
discrete geometry and combinatorics about coloring the plane avoiding
monochromatic unit-distance pairs. Using neural networks as approximators, we
reformulate this mixed discrete-continuous geometric coloring problem as an
optimization task with a probabilistic, differentiable loss function. This
enables gradient-based exploration of admissible configurations that, most
significantly, led to the discovery of two novel six-colorings, providing the
first improvements in thirty years to the off-diagonal variant of the original
problem (Mundinger et al., 2024a). Here, we establish the underlying machine
learning approach used to obtain these results and demonstrate its broader
applicability through additional results and numerical insights.
|
2501.18528
|
Joint Learning of Energy-based Models and their Partition Function
|
cs.LG stat.ML
|
Energy-based models (EBMs) offer a flexible framework for parameterizing
probability distributions using neural networks. However, learning EBMs by
exact maximum likelihood estimation (MLE) is generally intractable, due to the
need to compute the partition function (normalization constant). In this paper,
we propose a novel formulation for approximately learning probabilistic EBMs in
combinatorially-large discrete spaces, such as sets or permutations. Our key
idea is to jointly learn both an energy model and its log-partition, both
parameterized as a neural network. Our approach not only provides a novel
tractable objective criterion to learn EBMs by stochastic gradient descent
(without relying on MCMC), but also a novel means to estimate the log-partition
function on unseen data points. On the theoretical side, we show that our
approach recovers the optimal MLE solution when optimizing in the space of
continuous functions. Furthermore, we show that our approach naturally extends
to the broader family of Fenchel-Young losses, allowing us to obtain the first
tractable method for optimizing the sparsemax loss in combinatorially-large
spaces. We demonstrate our approach on multilabel classification and label
ranking.
|
2501.18530
|
Optimal generalisation and learning transition in extensive-width
shallow neural networks near interpolation
|
stat.ML cond-mat.dis-nn cond-mat.stat-mech cs.IT cs.LG math.IT
|
We consider a teacher-student model of supervised learning with a
fully-trained 2-layer neural network whose width $k$ and input dimension $d$
are large and proportional. We compute the Bayes-optimal generalisation error
of the network for any activation function in the regime where the number of
training data $n$ scales quadratically with the input dimension, i.e., around
the interpolation threshold where the number of trainable parameters $kd+k$ and
of data points $n$ are comparable. Our analysis tackles generic weight
distributions. Focusing on binary weights, we uncover a discontinuous phase
transition separating a "universal" phase from a "specialisation" phase. In the
first, the generalisation error is independent of the weight distribution and
decays slowly with the sampling rate $n/d^2$, with the student learning only
some non-linear combinations of the teacher weights. In the second, the error
is weight distribution-dependent and decays faster due to the alignment of the
student towards the teacher network. We thus unveil the existence of a highly
predictive solution near interpolation, which is however potentially hard to
find.
|
2501.18531
|
Graph Learning for Bidirectional Disease Contact Tracing on Real Human
Mobility Data
|
cs.SI cs.LG
|
For rapidly spreading diseases where many cases show no symptoms, swift and
effective contact tracing is essential. While exposure notification
applications provide alerts on potential exposures, a fully automated system is
needed to track the infectious transmission routes. To this end, our research
leverages large-scale contact networks from real human mobility data to
identify the path of transmission. More precisely, we introduce a new
Infectious Path Centrality network metric that informs a graph learning edge
classifier to identify important transmission events, achieving an F1-score of
94%. Additionally, we explore bidirectional contact tracing, which quarantines
individuals both retroactively and proactively, and compare its effectiveness
against traditional forward tracing, which only isolates individuals after
testing positive. Our results indicate that when only 30% of symptomatic
individuals are tested, bidirectional tracing can reduce the effective
reproduction rate of the infection by 71%, thus significantly controlling the outbreak.
|
2501.18532
|
Differentially Private Steering for Large Language Model Alignment
|
cs.CL cs.LG
|
Aligning Large Language Models (LLMs) with human values and away from
undesirable behaviors (such as hallucination) has become increasingly
important. Recently, steering LLMs towards a desired behavior via activation
editing has emerged as an effective method to mitigate harmful generations at
inference-time. Activation editing modifies LLM representations by preserving
information from positive demonstrations (e.g., truthful) and minimising
information from negative demonstrations (e.g., hallucinations). When these
demonstrations come from a private dataset, the aligned LLM may leak private
information contained in those private samples. In this work, we present the
first study of aligning LLM behavior with private datasets. Our work proposes
the \textit{\underline{P}rivate \underline{S}teering for LLM
\underline{A}lignment (PSA)} algorithm to edit LLM activations with
differential privacy (DP) guarantees. We conduct extensive experiments on seven
different benchmarks with open-source LLMs of different sizes (0.5B to 7B) and
model families (Llama, Qwen, Mistral and Gemma). Our results show that PSA
achieves DP guarantees for LLM alignment with minimal loss in performance,
including alignment metrics, open-ended text generation quality, and
general-purpose reasoning. We also develop the first Membership Inference
Attack (MIA) for evaluating and auditing the empirical privacy for the problem
of LLM steering via activation editing. Our attack is tailored for activation
editing and relies solely on the generated texts without their associated
probabilities. Our experiments support the theoretical guarantees by showing
improved guarantees for our \textit{PSA} algorithm compared to several existing
non-private techniques.
|
2501.18533
|
Rethinking Bottlenecks in Safety Fine-Tuning of Vision Language Models
|
cs.CV cs.CL cs.CR
|
Large Vision-Language Models (VLMs) have achieved remarkable performance
across a wide range of tasks. However, their deployment in safety-critical
domains poses significant challenges. Existing safety fine-tuning methods,
which focus on textual or multimodal content, fall short in addressing
challenging cases or disrupt the balance between helpfulness and harmlessness.
Our evaluation highlights a safety reasoning gap: these methods lack safety
visual reasoning ability, leading to such bottlenecks. To address this
limitation and enhance both visual perception and reasoning in safety-critical
contexts, we propose a novel dataset that integrates multi-image inputs with
safety Chain-of-Thought (CoT) labels as fine-grained reasoning logic to improve
model performance. Specifically, we introduce the Multi-Image Safety (MIS)
dataset, an instruction-following dataset tailored for multi-image safety
scenarios, consisting of training and test splits. Our experiments demonstrate
that fine-tuning InternVL2.5-8B with MIS significantly outperforms both
powerful open-source models and API-based models in challenging multi-image
tasks requiring safety-related visual reasoning. This approach not only
delivers exceptional safety performance but also preserves general capabilities
without any trade-offs. Specifically, fine-tuning with MIS increases average
accuracy by 0.83% across five general benchmarks and reduces the Attack Success
Rate (ASR) on multiple safety benchmarks by a large margin. Data and Models are
released under:
\href{https://dripnowhy.github.io/MIS/}{\texttt{https://dripnowhy.github.io/MIS/}}
|
2501.18535
|
A Hybrid Data-Driven Approach For Analyzing And Predicting Inpatient
Length Of Stay In Health Centre
|
cs.LG cs.AI
|
Patient length of stay (LoS) is a critical metric for evaluating the efficacy
of hospital management. The primary objectives are to improve efficiency
and reduce costs while enhancing patient outcomes and hospital capacity
throughout the patient journey. By seamlessly merging data-driven techniques with
simulation methodologies, the study proposes an all-encompassing framework for
the optimization of patient flow. Using a comprehensive dataset of 2.3 million
de-identified patient records, we analyzed demographics, diagnoses, treatments,
services, costs, and charges with machine learning models (Decision Tree,
Logistic Regression, Random Forest, Adaboost, LightGBM) and Python tools
(Spark, AWS clusters, dimensionality reduction). Our model predicts patient
length of stay (LoS) upon admission using supervised learning algorithms. This
hybrid approach enables the identification of key factors influencing LoS,
offering a robust framework for hospitals to streamline patient flow and
resource utilization. The research focuses on patient flow, corroborating the
efficacy of the approach, illustrating decreased patient length of stay within
a real healthcare environment. The findings underscore the potential of hybrid
data-driven models in transforming hospital management practices. This
innovative methodology provides flexible support for decision-making, training,
and patient flow enhancement; such a system could have significant implications
for healthcare administration and overall patient satisfaction with healthcare.
|
2501.18536
|
Illusions of Relevance: Using Content Injection Attacks to Deceive
Retrievers, Rerankers, and LLM Judges
|
cs.IR
|
Consider a scenario in which a user searches for information, only to
encounter texts flooded with misleading or non-relevant content. This scenario
exemplifies a simple yet potent vulnerability in neural Information Retrieval
(IR) pipelines: content injection attacks. We find that embedding models for
retrieval, rerankers, and large language model (LLM) relevance judges are
vulnerable to these attacks, in which adversaries insert misleading text into
passages to manipulate model judgements. We identify two primary threats: (1)
inserting unrelated or harmful content within passages that still appear
deceptively "relevant", and (2) inserting entire queries or key query terms
into passages to boost their perceived relevance. While the second tactic has
been explored in prior research, we present, to our knowledge, the first
empirical analysis of the first threat, demonstrating how state-of-the-art
models can be easily misled. Our study systematically examines the factors that
influence an attack's success, such as the placement of injected content and
the balance between relevant and non-relevant material. Additionally, we
explore various defense strategies, including adversarial passage classifiers,
retriever fine-tuning to discount manipulated content, and prompting LLM judges
to adopt a more cautious approach. However, we find that these countermeasures
often involve trade-offs, sacrificing effectiveness for attack robustness and
sometimes penalizing legitimate documents in the process. Our findings
highlight the need for stronger defenses against these evolving adversarial
strategies to maintain the trustworthiness of IR systems. We release our code
and scripts to facilitate further research.
|
2501.18537
|
Loss Functions and Operators Generated by f-Divergences
|
cs.LG stat.ML
|
The logistic loss (a.k.a. cross-entropy loss) is one of the most popular loss
functions used for multiclass classification. It is also the loss function of
choice for next-token prediction in language modeling. It is associated with
the Kullback--Leibler (KL) divergence and the softargmax operator. In this
work, we propose to construct new convex loss functions based on
$f$-divergences. Our loss functions generalize the logistic loss in two
directions: i) by replacing the KL divergence with $f$-divergences and ii) by
allowing non-uniform reference measures. We instantiate our framework for
numerous $f$-divergences, recovering existing losses and creating new ones. By
analogy with the logistic loss, the loss function generated by an
$f$-divergence is associated with an operator, that we dub $f$-softargmax. We
derive a novel parallelizable bisection algorithm for computing the
$f$-softargmax associated with any $f$-divergence. On the empirical side, one
of the goals of this paper is to determine the effectiveness of loss functions
beyond the classical cross-entropy in a language model setting, including on
pre-training, post-training (SFT) and distillation. We show that the loss
function generated by the $\alpha$-divergence (which is equivalent to Tsallis
$\alpha$-negentropy in the case of unit reference measures) with $\alpha=1.5$
performs well across several tasks.
|
2501.18538
|
Mini-ResEmoteNet: Leveraging Knowledge Distillation for Human-Centered
Design
|
cs.CV
|
Facial Emotion Recognition has emerged as increasingly pivotal in the domain
of User Experience, notably within modern usability testing, as it facilitates
a deeper comprehension of user satisfaction and engagement. This study aims to
extend the ResEmoteNet model by employing a knowledge distillation framework to
develop Mini-ResEmoteNet models - lightweight student models - tailored for
usability testing. Experiments were conducted on the FER2013 and RAF-DB
datasets to assess the efficacy of three student model architectures: Student
Model A, Student Model B, and Student Model C. Their development involves
reducing the number of feature channels in each layer of the teacher model by
approximately 50%, 75%, and 87.5%. Demonstrating exceptional performance on the
FER2013 dataset, Student Model A (E1) achieved a test accuracy of 76.33%,
marking a 0.21% absolute improvement over EmoNeXt. Moreover, the results
exhibit absolute improvements in terms of inference speed and memory usage
during inference compared to the ResEmoteNet model. The findings indicate that
the proposed methods surpass other state-of-the-art approaches.
|
2501.18539
|
Can we Retrieve Everything All at Once? ARM: An Alignment-Oriented
LLM-based Retrieval Method
|
cs.CL cs.AI cs.IR
|
Real-world open-domain questions can be complicated, particularly when
answering them involves information from multiple information sources. LLMs
have demonstrated impressive performance in decomposing complex tasks into
simpler steps, and previous work has used this ability for better retrieval in
support of complex questions. However, an LLM's decomposition of questions is
unaware of what data is available and how it is organized, often leading to
sub-optimal retrieval performance. Recent efforts in agentic RAG propose to perform
retrieval in an iterative fashion, where a followup query is derived as an
action based on previous rounds of retrieval. While this provides one way of
interacting with the data collection, agentic RAG's exploration of data is
inefficient because successive queries depend on previous results rather than
being guided by the organization of available data in the collection. To
address this problem, we propose an LLM-based retrieval method -- ARM, that
aims to better align the question with the organization of the data collection
by exploring relationships among data objects beyond matching the utterance of
the query, thus leading to a retrieve-all-at-once solution for complex queries.
We evaluated ARM on two datasets, Bird and OTT-QA. On Bird, it outperforms
standard RAG with query decomposition by up to 5.2 pt in execution accuracy and
agentic RAG (ReAct) by up to 15.9 pt. On OTT-QA, it achieves up to 5.5 pt and
19.3 pt higher F1 match scores compared to these approaches.
|
2501.18542
|
Semantic Web and Creative AI -- A Technical Report from ISWS 2023
|
cs.AI
|
The International Semantic Web Research School (ISWS) is a week-long
intensive program designed to immerse participants in the field. This document
reports a collaborative effort performed by ten teams of students, each guided
by a senior researcher as their mentor, attending ISWS 2023. Each team provided
a different perspective to the topic of creative AI, substantiated by a set of
research questions as the main subject of their investigation. The 2023 edition
of ISWS focused on the intersection of Semantic Web technologies and Creative
AI. A key area of focus was the potential of LLMs as support tools
for knowledge engineering. Participants also delved into the multifaceted
applications of LLMs, including legal aspects of creative content production,
humans in the loop, decentralised approaches to multimodal generative AI
models, nanopublications and AI for personal scientific knowledge graphs,
commonsense knowledge in automatic story and narrative completion, generative
AI for art critique, prompt engineering, automatic music composition,
commonsense prototyping and conceptual blending, and elicitation of tacit
knowledge. As Large Language Models and semantic technologies continue to
evolve, new exciting prospects are emerging: a future where the boundaries
between creative expression and factual knowledge become increasingly permeable
and porous, leading to a world of knowledge that is both informative and
inspiring.
|
2501.18543
|
Learning Priors of Human Motion With Vision Transformers
|
cs.CV cs.RO
|
A clear understanding of where humans move in a scenario, their usual paths
and speeds, and where they stop, is very important for different applications,
such as mobility studies in urban areas or robot navigation tasks within
human-populated environments. In this article, we propose a neural architecture
based on Vision Transformers (ViTs) to provide this information. This solution
can arguably capture spatial correlations more effectively than Convolutional
Neural Networks (CNNs). In the paper, we describe the methodology and proposed
neural architecture and show the experiments' results with a standard dataset.
We show that the proposed ViT architecture improves the metrics compared to a
method based on a CNN.
|
2501.18545
|
UDC-VIT: A Real-World Video Dataset for Under-Display Cameras
|
cs.CV
|
Under Display Camera (UDC) is an advanced imaging system that places a
digital camera lens underneath a display panel, effectively concealing the
camera. However, the display panel significantly degrades captured images or
videos, introducing low transmittance, blur, noise, and flare issues. Tackling
such issues is challenging because of the complex degradation of UDCs,
including diverse flare patterns. Despite extensive research on UDC images and
their restoration models, studies on videos have yet to be significantly
explored. While two UDC video datasets exist, they primarily focus on
unrealistic or synthetic UDC degradation rather than real-world UDC
degradation. In this paper, we propose a real-world UDC video dataset called
UDC-VIT. Unlike existing datasets, UDC-VIT is the only one to include human
motions targeting facial recognition. We propose a video-capturing system to
simultaneously acquire non-degraded and UDC-degraded videos of the same scene.
Then, we align a pair of captured videos frame by frame, using discrete Fourier
transform (DFT). We compare UDC-VIT with six representative UDC still image
datasets and two existing UDC video datasets. Using six deep-learning models,
we compare UDC-VIT and an existing synthetic UDC video dataset. The results
indicate the ineffectiveness of models trained on earlier synthetic UDC video
datasets, as they do not reflect the actual characteristics of UDC-degraded
videos. We also demonstrate the importance of effective UDC restoration by
evaluating face recognition accuracy concerning PSNR, SSIM, and LPIPS scores.
UDC-VIT enables further exploration in UDC video restoration and offers
better insights into the challenge. UDC-VIT is available at our project site.
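The abstract says captured pairs are aligned frame by frame using the DFT; a common DFT-based alignment technique is phase correlation. The sketch below recovers an integer translation between two frames; the random 64x64 frames are stand-ins for the paper's actual pipeline:

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Estimate the integer (dy, dx) translation between two frames.

    The normalized cross-power spectrum of the two DFTs inverse-transforms to a
    sharp peak at the translation (standard phase correlation)."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(moved))
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(int(np.argmax(corr)), corr.shape)
    if dy > ref.shape[0] // 2:   # map wrap-around peaks to negative shifts
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(0)
frame = rng.random((64, 64))                          # stand-in reference frame
shifted = np.roll(frame, shift=(3, -5), axis=(0, 1))  # displaced counterpart
dy, dx = phase_correlation_shift(frame, shifted)      # roll realigning `shifted`
```

Rolling `shifted` back by the recovered `(dy, dx)` reproduces `frame` exactly.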
|
2501.18555
|
An Empirical Study of Dotfiles Repositories Containing User-Specific
Configuration Files
|
cs.SE cs.SI
|
Storing user-specific configuration files in a "dotfiles" repository is a
common practice among software developers, with hundreds of thousands choosing
to publicly host their repositories on GitHub. This practice not only provides
developers with a simple backup mechanism for their essential configuration
files, but also facilitates sharing ideas and learning from others on how best
to configure applications that are key to their daily workflows. However, our
current understanding of these repository sharing practices is limited and
mostly anecdotal. To address this gap, we conducted a study to delve deeper
into this phenomenon. Beginning with collecting and analyzing publicly-hosted
dotfiles repositories on GitHub, we discovered that maintaining dotfiles is
widespread among developers. Notably, we found that 25.8% of the top 500
most-starred GitHub users maintain some form of publicly accessible dotfiles
repository. Among these, configurations for text editors like Vim and shells
such as bash and zsh are the most commonly tracked. Our analysis reveals that
updating dotfiles is primarily driven by the need to adjust configurations
(63.3%) and project meta-management (25.4%). Surprisingly, we found no
significant difference in the types of dotfiles observed across code churn
history patterns, suggesting that the frequency of dotfile modifications
depends more on the developer than the properties of the specific dotfile and
its associated application. Finally, we discuss the challenges associated with
managing dotfiles, including the necessity for a reliable and effective
deployment mechanism, and how the insights gleaned from dotfiles can inform
tool designers by offering real-world usage information.
|
2501.18560
|
Bandits with Anytime Knapsacks
|
cs.LG
|
We consider bandits with anytime knapsacks (BwAK), a novel version of the BwK
problem where there is an \textit{anytime} cost constraint instead of a total
cost budget. This problem setting introduces additional complexities as it
mandates adherence to the constraint throughout the decision-making process. We
propose SUAK, an algorithm that utilizes upper confidence bounds to identify
the optimal mixture of arms while maintaining a balance between exploration and
exploitation. SUAK is an adaptive algorithm that strategically utilizes the
available budget in each round in the decision-making process and skips a round
when it is possible to violate the anytime cost constraint. In particular, SUAK
slightly under-utilizes the available cost budget to reduce the need for
skipping rounds. We show that SUAK attains the same problem-dependent regret
upper bound of $O(K \log T)$ established in prior work under the simpler BwK
framework. Finally, we provide simulations to verify the utility of SUAK in
practical settings.
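SUAK itself is more involved, but the two mechanisms the abstract highlights (UCB arm selection, and skipping any round whose pull could violate the anytime constraint, with a slight under-utilization margin) can be sketched in a toy loop. All names and hyperparameters here are illustrative, not the paper's:

```python
import numpy as np

def run_with_anytime_budget(costs, rewards, horizon, b, margin=0.9, seed=0):
    """Toy UCB loop under an anytime cost constraint (cumulative cost <= b*t at
    every round t). Not the paper's SUAK algorithm; it only illustrates the
    skip-and-under-utilize mechanism described in the abstract."""
    rng = np.random.default_rng(seed)
    K = len(costs)
    pulls = np.zeros(K)
    means = np.zeros(K)
    cum_cost = total_reward = 0.0
    for t in range(1, horizon + 1):
        if pulls.min() == 0:                       # pull every arm once first
            arm = int(np.argmin(pulls))
        else:                                      # otherwise pick by UCB index
            arm = int(np.argmax(means + np.sqrt(2 * np.log(t) / pulls)))
        # Skip the round if the pull could violate the anytime constraint; the
        # margin slightly under-utilizes the budget to reduce future skips.
        if cum_cost + costs[arm] > margin * b * t:
            continue
        r = rng.binomial(1, rewards[arm])
        pulls[arm] += 1
        means[arm] += (r - means[arm]) / pulls[arm]
        cum_cost += costs[arm]
        total_reward += r
        assert cum_cost <= b * t                   # constraint holds every round
    return total_reward, cum_cost
```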
|
2501.18563
|
No Equations Needed: Learning System Dynamics Without Relying on
Closed-Form ODEs
|
cs.LG
|
Data-driven modeling of dynamical systems is a crucial area of machine
learning. In many scenarios, a thorough understanding of the model's behavior
becomes essential for practical applications. For instance, understanding the
behavior of a pharmacokinetic model, constructed as part of drug development,
may allow us to both verify its biological plausibility (e.g., the drug
concentration curve is non-negative and decays to zero) and to design dosing
guidelines. Discovery of closed-form ordinary differential equations (ODEs) can
be employed to obtain such insights by finding a compact mathematical equation
and then analyzing it (a two-step approach). However, its widespread use is
currently hindered because the analysis process may be time-consuming, may
require substantial mathematical expertise, or may even be impossible if the
equation is too complex. Moreover, if the discovered equation's behavior does not
satisfy the requirements, editing it or influencing the discovery algorithms to
rectify it is challenging as the link between the symbolic form of an ODE and
its behavior can be elusive. This paper proposes a conceptual shift to modeling
low-dimensional dynamical systems by departing from the traditional two-step
modeling process. Instead of first discovering a closed-form equation and then
analyzing it, our approach, direct semantic modeling, predicts the semantic
representation of the dynamical system (i.e., description of its behavior)
directly from data, bypassing the need for complex post-hoc analysis. This
direct approach also allows the incorporation of intuitive inductive biases
into the optimization algorithm and editing the model's behavior directly,
ensuring that the model meets the desired specifications. Our approach not only
simplifies the modeling pipeline but also enhances the transparency and
flexibility of the resulting models compared to traditional closed-form ODEs.
|
2501.18564
|
SAM2Act: Integrating Visual Foundation Model with A Memory Architecture
for Robotic Manipulation
|
cs.RO
|
Robotic manipulation systems operating in diverse, dynamic environments must
exhibit three critical abilities: multitask interaction, generalization to
unseen scenarios, and spatial memory. While significant progress has been made
in robotic manipulation, existing approaches often fall short in generalization
to complex environmental variations and addressing memory-dependent tasks. To
bridge this gap, we introduce SAM2Act, a multi-view robotic transformer-based
policy that leverages multi-resolution upsampling with visual representations
from a large-scale foundation model. SAM2Act achieves a state-of-the-art average
success rate of 86.8% across 18 tasks in the RLBench benchmark, and
demonstrates robust generalization on The Colosseum benchmark, with only a 4.3%
performance gap under diverse environmental perturbations. Building on this
foundation, we propose SAM2Act+, a memory-based architecture inspired by SAM2,
which incorporates a memory bank, an encoder, and an attention mechanism to
enhance spatial memory. To address the need for evaluating memory-dependent
tasks, we introduce MemoryBench, a novel benchmark designed to assess spatial
memory and action recall in robotic manipulation. SAM2Act+ achieves competitive
performance on MemoryBench, significantly outperforming existing approaches and
pushing the boundaries of memory-enabled robotic systems. Project page:
https://sam2act.github.io/
|
2501.18565
|
BounTCHA: A CAPTCHA Utilizing Boundary Identification in AI-extended
Videos
|
cs.CR cs.AI cs.HC
|
In recent years, the rapid development of artificial intelligence (AI),
especially multi-modal Large Language Models (MLLMs), has enabled machines to
understand text, images, videos, and other multimedia data, allowing AI systems
to execute various tasks based on human-provided prompts. However, AI-powered
bots have increasingly been able to bypass most existing CAPTCHA systems,
posing significant security threats to web applications. This makes the design
of new CAPTCHA mechanisms an urgent priority. We observe that humans are highly
sensitive to shifts and abrupt changes in videos, while current AI systems
still struggle to comprehend and respond to such situations effectively. Based
on this observation, we design and implement BounTCHA, a CAPTCHA mechanism that
leverages human perception of boundaries in video transitions and disruptions.
By utilizing AI's capability to expand original videos with prompts, we
introduce unexpected twists and changes to create a pipeline for generating
short videos for CAPTCHA purposes. We develop a prototype and conduct
experiments to collect data on humans' time biases in boundary identification.
This data serves as a basis for distinguishing between human users and bots.
Additionally, we perform a detailed security analysis of BounTCHA,
demonstrating its resilience against various types of attacks. We hope that
BounTCHA will act as a robust defense, safeguarding millions of web
applications in the AI-driven era.
|
2501.18572
|
Optimum Monitoring and Job Assignment with Multiple Markov Machines
|
cs.IT cs.SY eess.SY math.IT
|
We study a class of systems termed Markov Machines (MMs), which process job
requests with exponential service times. Assuming a Poisson job arrival process,
these MMs oscillate between two states, free and busy. We consider the problem
of sampling the states of these MMs so as to track their states, subject to a
total sampling budget, with the goal of allocating external job requests
effectively to them. For this purpose, we leverage the $\textit{binary
freshness metric}$ to quantify the quality of our ability to track the states
of the MMs, and introduce two new metrics termed $\textit{false acceptance
ratio}$ (FAR) and $\textit{false rejection ratio}$ (FRR) to evaluate the
effectiveness of our job assignment strategy. We provide optimal sampling rate
allocation schemes for jointly monitoring a system of $N$ heterogeneous MMs.
|
2501.18576
|
Token-Hungry, Yet Precise: DeepSeek R1 Highlights the Need for
Multi-Step Reasoning Over Speed in MATH
|
cs.LG
|
This study investigates the performance of the DeepSeek R1 language model on
30 challenging mathematical problems derived from the MATH dataset, problems
that previously proved unsolvable by other models under time constraints.
Unlike prior work, this research removes time limitations to explore whether
DeepSeek R1's architecture, known for its reliance on token-based reasoning,
can achieve accurate solutions through a multi-step process. The study compares
DeepSeek R1 with four other models (gemini-1.5-flash-8b,
gpt-4o-mini-2024-07-18, llama3.1:8b, and mistral-8b-latest) across 11
temperature settings. Results demonstrate that DeepSeek R1 achieves superior
accuracy on these complex problems but generates significantly more tokens than
other models, confirming its token-intensive approach. The findings highlight a
trade-off between accuracy and efficiency in mathematical problem-solving with
large language models: while DeepSeek R1 excels in accuracy, its reliance on
extensive token generation may not be optimal for applications requiring rapid
responses. The study underscores the importance of considering task-specific
requirements when selecting an LLM and emphasizes the role of temperature
settings in optimizing performance.
|
2501.18577
|
Prediction-Powered Inference with Imputed Covariates and Nonuniform
Sampling
|
stat.ME cs.AI cs.LG stat.ML
|
Machine learning models are increasingly used to produce predictions that
serve as input data in subsequent statistical analyses. For example, computer
vision predictions of economic and environmental indicators based on satellite
imagery are used in downstream regressions; similarly, language models are
widely used to approximate human ratings and opinions in social science
research. However, failure to properly account for errors in the machine
learning predictions renders standard statistical procedures invalid. Prior
work uses what we call the Predict-Then-Debias estimator to give valid
confidence intervals when machine learning algorithms impute missing variables,
assuming a small complete sample from the population of interest. We expand the
scope by introducing bootstrap confidence intervals that apply when the
complete data is a nonuniform (i.e., weighted, stratified, or clustered) sample
and to settings where an arbitrary subset of features is imputed. Importantly,
the method can be applied to many settings without requiring additional
calculations. We prove that these confidence intervals are valid under no
assumptions on the quality of the machine learning model and are no wider than
the intervals obtained by methods that do not use machine learning predictions.
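For a population mean, the Predict-Then-Debias idea can be shown in a few lines: plug predictions in on the large sample, then correct with the prediction error measured on the small labeled sample. A toy sketch with a deliberately biased predictor; the data-generating process and predictor are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: true relation Y = 2*X + noise, so the population mean of Y is 0.
N_unlabeled, n_labeled = 100_000, 500
x_u = rng.normal(size=N_unlabeled)                 # large sample: covariates only
x_l = rng.normal(size=n_labeled)                   # small sample: covariates...
y_l = 2 * x_l + rng.normal(size=n_labeled)         # ...with true labels
f = lambda x: 2 * x + 0.5                          # ML predictor with a 0.5 bias

naive = f(x_u).mean()                  # plug-in estimate: inherits the bias
rectifier = (y_l - f(x_l)).mean()      # labeled data measures the model's error
debiased = f(x_u).mean() + rectifier   # Predict-Then-Debias estimate of E[Y]
```

The naive plug-in estimate inherits the predictor's 0.5 bias; the rectifier cancels it without any assumption on the predictor's quality.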
|
2501.18578
|
R.I.P.: Better Models by Survival of the Fittest Prompts
|
cs.CL cs.AI cs.LG
|
Training data quality is one of the most important drivers of final model
quality. In this work, we introduce a method for evaluating data integrity
based on the assumption that low-quality input prompts result in high variance
and low-quality responses. This is achieved by measuring the rejected response
quality and the reward gap between the chosen and rejected preference pair. Our
method, Rejecting Instruction Preferences (RIP), can be used to filter prompts
from existing training sets, or to make high quality synthetic datasets,
yielding large performance gains across various benchmarks compared to
unfiltered data. Using Llama 3.1-8B-Instruct, RIP improves AlpacaEval2 LC Win
Rate by 9.4%, Arena-Hard by 8.7%, and WildBench by 9.9%. Using Llama
3.3-70B-Instruct, RIP improves Arena-Hard from 67.5 to 82.9, moving it from
18th place to 6th overall on the leaderboard.
|
2501.18580
|
Node Classification and Search on the Rubik's Cube Graph with GNNs
|
cs.LG
|
This study focuses on the application of deep geometric models to solve the
3x3x3 Rubik's Cube. We begin by discussing the cube's graph representation and
defining distance as the model's optimization objective. The distance
approximation task is reformulated as a node classification problem,
effectively addressed using Graph Neural Networks (GNNs). After training the
model on a random subgraph, the predicted classes are used to construct a
heuristic for $A^*$ search. We conclude with experiments comparing our
heuristic to that of the DeepCubeA model.
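The search stage described above plugs predicted distance classes into $A^*$ as a heuristic. A minimal sketch on a five-node toy graph, with a lookup table standing in for the trained GNN's per-node class predictions:

```python
import heapq

def a_star(graph, start, goal, h):
    """A* over an unweighted graph; h(node) is the predicted distance-to-goal,
    here a lookup table standing in for the GNN's node-class predictions."""
    frontier = [(h(start), 0, start, [start])]
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if best_g.get(node, float("inf")) <= g:
            continue
        best_g[node] = g
        for nbr in graph[node]:
            heapq.heappush(frontier, (g + 1 + h(nbr), g + 1, nbr, path + [nbr]))
    return None

# Toy state graph with unit moves and an admissible predicted-distance table.
graph = {"s": ["a", "b"], "a": ["s", "c"], "b": ["s", "c"],
         "c": ["a", "b", "g"], "g": ["c"]}
pred_class = {"s": 3, "a": 2, "b": 2, "c": 1, "g": 0}
cost, path = a_star(graph, "s", "g", pred_class.get)   # shortest path: 3 moves
```

When the predicted classes never overestimate the true distance, the heuristic is admissible and $A^*$ returns a shortest path.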
|
2501.18581
|
Bias-variance decompositions: the exclusive privilege of Bregman
divergences
|
cs.LG
|
Bias-variance decompositions are widely used to understand the generalization
performance of machine learning models. While the squared error loss permits a
straightforward decomposition, other loss functions - such as zero-one loss or
$L_1$ loss - either fail to sum bias and variance to the expected loss or rely
on definitions that lack the essential properties of meaningful bias and
variance. Recent research has shown that clean decompositions can be achieved
for the broader class of Bregman divergences, with the cross-entropy loss as a
special case. However, the necessary and sufficient conditions for these
decompositions remain an open question.
In this paper, we address this question by studying continuous, nonnegative
loss functions that satisfy the identity of indiscernibles under mild
regularity conditions. We prove that so-called $g$-Bregman divergences are the
only such loss functions that have a clean bias-variance decomposition. A
$g$-Bregman divergence can be transformed into a standard Bregman divergence
through an invertible change of variables. This makes the squared Mahalanobis
distance, up to such a variable transformation, the only symmetric loss
function with a clean bias-variance decomposition. We also examine the impact
of relaxing the restrictions on the loss functions and how this affects our
results.
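For reference, the objects in question: a Bregman divergence generated by a strictly convex, differentiable $F$, and its known clean three-term decomposition, where $\bar{y}$ is the mean label, $\tilde{y}$ is the dual mean of the predictor, and expectations are over independent label noise and training randomness:

```latex
D_F(y, \hat{y}) = F(y) - F(\hat{y})
  - \langle \nabla F(\hat{y}),\, y - \hat{y} \rangle,
\qquad
\tilde{y} = (\nabla F)^{-1}\!\big(\mathbb{E}[\nabla F(\hat{y})]\big),

\mathbb{E}\big[ D_F(y, \hat{y}) \big]
  = \underbrace{\mathbb{E}\big[ D_F(y, \bar{y}) \big]}_{\text{noise}}
  + \underbrace{D_F(\bar{y}, \tilde{y})}_{\text{bias}}
  + \underbrace{\mathbb{E}\big[ D_F(\tilde{y}, \hat{y}) \big]}_{\text{variance}}.
```

Taking $F(y) = \lVert y \rVert^2$ recovers the familiar squared-error decomposition with $\tilde{y} = \mathbb{E}[\hat{y}]$.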
|
2501.18582
|
Accuracy and Robustness of Weight-Balancing Methods for Training PINNs
|
cs.LG
|
Physics-Informed Neural Networks (PINNs) have emerged as powerful tools for
integrating physics-based models with data by minimizing both data and physics
losses. However, this multi-objective optimization problem is notoriously
challenging, with some benchmark problems leading to infeasible solutions. To
address these issues, various strategies have been proposed, including adaptive
weight adjustments in the loss function. In this work, we introduce clear
definitions of accuracy and robustness in the context of PINNs and propose a
novel training algorithm based on the Primal-Dual (PD) optimization framework.
Our approach enhances the robustness of PINNs while maintaining comparable
performance to existing weight-balancing methods. Numerical experiments
demonstrate that the PD method consistently achieves reliable solutions across
all investigated cases, even in the low-data regime, and can be easily
implemented, facilitating its practical adoption. The code is available at
https://github.com/haoming-SHEN/Accuracy-and-Robustness-of-Weight-Balancing-Methods-for-Training-PINNs.git.
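The Primal-Dual idea can be illustrated on a scalar toy problem: treat the physics residual as a constraint, run gradient descent on the parameter for the weighted loss, and dual ascent on the multiplier. This is a sketch of the general PD scheme, not the paper's algorithm; the toy losses $L_{data}=(u-2)^2$ and $L_{phys}=(u-1)^2$ are made up:

```python
def primal_dual_fit(steps=5000, eta=2.0):
    """Toy primal-dual weight balancing: descend the parameter u on
    L_data + lam * L_phys, ascend the multiplier lam on the physics residual."""
    u, lam = 0.0, 0.0
    for _ in range(steps):
        grad_u = 2 * (u - 2.0) + lam * 2 * (u - 1.0)  # d/du of the weighted loss
        u -= (0.1 / (1.0 + lam)) * grad_u             # step scaled for stability
        lam += eta * (u - 1.0) ** 2                   # dual ascent grows lam
    return u, lam                                     # until the residual is tiny

u, lam = primal_dual_fit()
```

As `lam` grows, the iterate is pulled from the pure data fit (u = 2) toward the physics-consistent value (u = 1), with no hand-tuned loss weight.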
|
2501.18583
|
Reducing Simulation Effort for RIS Optimization using an Efficient
Far-Field Approximation
|
math.OC cs.SY eess.SY
|
Optimization of Reconfigurable Intelligent Surfaces (RIS) via a previously
introduced method is effective, but time-consuming, because multiport impedance
or scatter matrices are required for each transmitter and receiver position,
which generally must be obtained through full-wave simulation. Herein, a simple
and efficient far-field approximation is introduced, to extrapolate scatter
matrices for arbitrary receiver and transmitter positions from only a single
simulation while still maintaining high accuracy suitable for optimization
purposes. This is demonstrated through comparisons of the optimized capacitance
values and further supported by empirical measurements.
|
2501.18585
|
Thoughts Are All Over the Place: On the Underthinking of o1-Like LLMs
|
cs.CL
|
Large language models (LLMs) such as OpenAI's o1 have demonstrated remarkable
abilities in complex reasoning tasks by scaling test-time compute and
exhibiting human-like deep thinking. However, we identify a phenomenon we term
underthinking, where o1-like LLMs frequently switch between different reasoning
thoughts without sufficiently exploring promising paths to reach a correct
solution. This behavior leads to inadequate depth of reasoning and decreased
performance, particularly on challenging mathematical problems. To
systematically analyze this issue, we conduct experiments on three challenging
test sets and two representative open-source o1-like models, revealing that
frequent thought switching correlates with incorrect responses. We introduce a
novel metric to quantify underthinking by measuring token efficiency in
incorrect answers. To address underthinking, we propose a decoding strategy
with a thought switching penalty (TIP) that discourages premature transitions
between thoughts, encouraging deeper exploration of each reasoning path.
Experimental results demonstrate that our approach improves accuracy across
challenging datasets without requiring model fine-tuning. Our findings
contribute to understanding reasoning inefficiencies in o1-like LLMs and offer
a practical solution to enhance their problem-solving capabilities.
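The abstract does not give TIP's exact form, but the idea of a decoding-time thought-switching penalty can be sketched: while the current thought is still short, down-weight the logits of tokens that would open a new thought. Everything below (the switch-token set, threshold, and penalty size) is an illustrative assumption:

```python
import numpy as np

def apply_thought_switch_penalty(logits, switch_token_ids, tokens_in_thought,
                                 min_thought_len=50, penalty=3.0):
    """Subtract a penalty from thought-opening tokens (e.g. "Alternatively")
    while the current reasoning thought is shorter than min_thought_len,
    encouraging deeper exploration of the current path before switching."""
    out = np.array(logits, dtype=float)        # copy; leave input untouched
    if tokens_in_thought < min_thought_len:
        out[list(switch_token_ids)] -= penalty
    return out

logits = np.zeros(10)
logits[7] = 1.0                                # token 7 would start a new thought
early = apply_thought_switch_penalty(logits, {7}, tokens_in_thought=10)
late = apply_thought_switch_penalty(logits, {7}, tokens_in_thought=200)
```

Early in a thought the switch token is suppressed; once the thought is long enough, decoding proceeds with the unmodified logits.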
|
2501.18587
|
Entropy functionals and equilibrium states in mixed quantum-classical
dynamics
|
quant-ph cond-mat.stat-mech cs.IT math-ph math.IT math.MP physics.chem-ph
|
The computational challenges posed by many-particle quantum systems are often
overcome by mixed quantum-classical (MQC) models in which certain degrees of
freedom are treated as classical while others are retained as quantum. One of
the fundamental questions raised by this hybrid picture involves the
characterization of the information associated to MQC systems. Based on the
theory of dynamical invariants in Hamiltonian systems, here we propose a family
of hybrid entropy functionals that consistently specialize to the usual R\'enyi
and Shannon entropies. Upon considering the MQC Ehrenfest model for the
dynamics of quantum and classical probabilities, we apply the hybrid Shannon
entropy to characterize equilibrium configurations for simple Hamiltonians. The
present construction also applies beyond Ehrenfest dynamics.
|
2501.18588
|
Inkspire: Supporting Design Exploration with Generative AI through
Analogical Sketching
|
cs.HC cs.AI cs.CV cs.MM
|
With recent advancements in the capabilities of Text-to-Image (T2I) AI
models, product designers have begun experimenting with them in their work.
However, T2I models struggle to interpret abstract language and the current
user experience of T2I tools can induce design fixation rather than a more
iterative, exploratory process. To address these challenges, we developed
Inkspire, a sketch-driven tool that supports designers in prototyping product
design concepts with analogical inspirations and a complete
sketch-to-design-to-sketch feedback loop. To inform the design of Inkspire, we
conducted an exchange session with designers and distilled design goals for
improving T2I interactions. In a within-subjects study comparing Inkspire to
ControlNet, we found that Inkspire supported designers with more inspiration
and exploration of design ideas, and improved aspects of the co-creative
process by allowing designers to effectively grasp the current state of the AI
to guide it towards novel design intentions.
|
2501.18590
|
DiffusionRenderer: Neural Inverse and Forward Rendering with Video
Diffusion Models
|
cs.CV cs.GR
|
Understanding and modeling lighting effects are fundamental tasks in computer
vision and graphics. Classic physically-based rendering (PBR) accurately
simulates light transport, but relies on precise scene
representations--explicit 3D geometry, high-quality material properties, and
lighting conditions--that are often impractical to obtain in real-world
scenarios. Therefore, we introduce DiffusionRenderer, a neural approach that
addresses the dual problem of inverse and forward rendering within a holistic
framework. Leveraging powerful video diffusion model priors, the inverse
rendering model accurately estimates G-buffers from real-world videos,
providing an interface for image editing tasks, and training data for the
rendering model. Conversely, our rendering model generates photorealistic
images from G-buffers without explicit light transport simulation. Experiments
demonstrate that DiffusionRenderer effectively approximates inverse and
forward rendering, consistently outperforming the state-of-the-art. Our model
enables practical applications from a single video input--including relighting,
material editing, and realistic object insertion.
|
2501.18592
|
Advances in Multimodal Adaptation and Generalization: From Traditional
Approaches to Foundation Models
|
cs.CV cs.AI cs.LG cs.RO
|
In real-world scenarios, achieving domain adaptation and generalization poses
significant challenges, as models must adapt to or generalize across unknown
target distributions. Extending these capabilities to unseen multimodal
distributions, i.e., multimodal domain adaptation and generalization, is even
more challenging due to the distinct characteristics of different modalities.
Significant progress has been made over the years, with applications ranging
from action recognition to semantic segmentation. Besides, the recent advent of
large-scale pre-trained multimodal foundation models, such as CLIP, has
inspired works leveraging these models to enhance adaptation and generalization
performance or adapting them to downstream tasks. This survey provides the
first comprehensive review of recent advances from traditional approaches to
foundation models, covering: (1) Multimodal domain adaptation; (2) Multimodal
test-time adaptation; (3) Multimodal domain generalization; (4) Domain
adaptation and generalization with the help of multimodal foundation models;
and (5) Adaptation of multimodal foundation models. For each topic, we formally
define the problem and thoroughly review existing methods. Additionally, we
analyze relevant datasets and applications, highlighting open challenges and
potential future research directions. We maintain an active repository that
contains up-to-date literature at
https://github.com/donghao51/Awesome-Multimodal-Adaptation.
|
2501.18593
|
Diffusion Autoencoders are Scalable Image Tokenizers
|
cs.CV cs.AI cs.LG
|
Tokenizing images into compact visual representations is a key step in
learning efficient and high-quality image generative models. We present a
simple diffusion tokenizer (DiTo) that learns compact visual representations
for image generation models. Our key insight is that a single learning
objective, diffusion L2 loss, can be used for training scalable image
tokenizers. Since diffusion is already widely used for image generation, our
insight greatly simplifies training such tokenizers. In contrast, current
state-of-the-art tokenizers rely on an empirically found combination of
heuristics and losses, thus requiring a complex training recipe that relies on
non-trivially balancing different losses and pretrained supervised models. We
show design decisions, along with theoretical grounding, that enable us to
scale DiTo for learning competitive image representations. Our results show
that DiTo is a simpler, scalable, and self-supervised alternative to the
current state-of-the-art image tokenizer, which is supervised. DiTo achieves
competitive or better quality than state-of-the-art in image reconstruction and
downstream image generation tasks.
|
2501.18594
|
Foundational Models for 3D Point Clouds: A Survey and Outlook
|
cs.CV
|
The 3D point cloud representation plays a crucial role in preserving the
geometric fidelity of the physical world, enabling more accurate modeling of complex 3D
environments. While humans naturally comprehend the intricate relationships
between objects and variations through a multisensory system, artificial
intelligence (AI) systems have yet to fully replicate this capacity. To bridge
this gap, it becomes essential to incorporate multiple modalities. Models that
can seamlessly integrate and reason across these modalities are known as
foundation models (FMs). The development of FMs for 2D modalities, such as
images and text, has seen significant progress, driven by the abundant
availability of large-scale datasets. However, the 3D domain has lagged due to
the scarcity of labelled data and high computational overheads. In response,
recent research has begun to explore the potential of applying FMs to 3D tasks,
overcoming these challenges by leveraging existing 2D knowledge. Additionally,
language, with its capacity for abstract reasoning and description of the
environment, offers a promising avenue for enhancing 3D understanding through
large pre-trained language models (LLMs). Despite the rapid development and
adoption of FMs for 3D vision tasks in recent years, there remains a gap in
comprehensive and in-depth literature reviews. This article aims to address
this gap by presenting a comprehensive overview of the state-of-the-art methods
that utilize FMs for 3D visual understanding. We start by reviewing the
strategies employed in building various 3D FMs. Then we categorize and
summarize the use of different FMs for tasks such as perception. Finally, the
article offers insights into future directions for research and development in
this field. To help the reader, we have curated a list of relevant papers on the
topic: https://github.com/vgthengane/Awesome-FMs-in-3D.
|
2501.18595
|
ROSA: Reconstructing Object Shape and Appearance Textures by Adaptive
Detail Transfer
|
cs.CV
|
Reconstructing an object's shape and appearance in terms of a mesh textured
by a spatially-varying bidirectional reflectance distribution function (SVBRDF)
from a limited set of images captured under collocated light is an ill-posed
problem. Previous state-of-the-art approaches either aim to reconstruct the
appearance directly on the geometry or additionally use texture normals as part
of the appearance features. However, the former requires detailed but
inefficiently large meshes that would have to be simplified in a
post-processing step, while the latter suffers from well-known limitations of
normal maps such as missing shadows or incorrect silhouettes. Another limiting
factor is the fixed and typically low
resolution of the texture estimation resulting in loss of important surface
details. To overcome these problems, we present ROSA, an inverse rendering
method that directly optimizes mesh geometry with spatially adaptive mesh
resolution solely based on the image data. In particular, we refine the mesh
and locally condition the surface smoothness based on the estimated normal
texture and mesh curvature. In addition, we enable the reconstruction of fine
appearance details in high-resolution textures through a pioneering tile-based
method that operates on a single pre-trained decoder network but is not limited
by the network output resolution.
|
2501.18596
|
DeltaLLM: Compress LLMs with Low-Rank Deltas between Shared Weights
|
cs.LG cs.AI
|
We introduce DeltaLLM, a new post-training compression technique to reduce
the memory footprint of LLMs. We propose an alternative way of structuring LLMs
with weight sharing between layers in subsequent Transformer blocks, along with
additional low-rank difference matrices between them. For training, we adopt
the progressive module replacement method and show that the lightweight
training of the low-rank modules with approximately 30M-40M tokens is
sufficient to achieve performance on par with LLMs of comparable sizes trained
from scratch. We release the resultant models, DeltaLLAMA and DeltaPHI, with a
12% parameter reduction, retaining 90% of the performance of the base Llama and
Phi models on common knowledge and reasoning benchmarks. Our method also
outperforms compression techniques JointDrop, LaCo, ShortGPT and SliceGPT with
the same number of parameters removed. For example, DeltaPhi 2.9B with a 24%
reduction achieves similar average zero-shot accuracies as recovery fine-tuned
SlicedPhi 3.3B with a 12% reduction, despite being approximately 400M
parameters smaller with no fine-tuning applied. This work provides new insights
into LLM architecture design and compression methods when storage space is
critical.
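The shared-base-plus-delta structure described in the abstract can be sketched
as follows. This is an illustrative reconstruction from the abstract alone,
with hypothetical names (`DeltaLayer`, `extra_params`); it is not the released
DeltaLLAMA/DeltaPHI code:

```python
import numpy as np

class DeltaLayer:
    """One layer that reuses a shared base weight plus its own low-rank delta.

    Sketch only: `base` is shared across subsequent Transformer blocks and
    stored once, while A @ B is this layer's low-rank difference matrix.
    """

    def __init__(self, base, rank):
        d_out, d_in = base.shape
        self.base = base                  # shared weight, stored once
        self.A = np.zeros((d_out, rank))  # per-layer low-rank factors
        self.B = np.zeros((rank, d_in))

    def weight(self):
        # Effective weight = shared base + low-rank delta.
        return self.base + self.A @ self.B

    def extra_params(self):
        # Parameters this layer adds beyond the shared base.
        return self.A.size + self.B.size
```

With a 1024x1024 base and rank 16, each additional layer stores roughly 32K
delta parameters instead of ~1M full ones, which is where the memory-footprint
reduction comes from in this kind of scheme.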
|
2501.18608
|
RTAMT -- Runtime Robustness Monitors with Application to CPS and
Robotics
|
cs.LO cs.RO
|
In this paper, we present Real-Time Analog Monitoring Tool (RTAMT), a tool
for quantitative monitoring of Signal Temporal Logic (STL) specifications. The
library implements a flexible architecture that supports: (1) various
environments connected by an Application Programming Interface (API) in Python,
(2) various flavors of temporal logic specification and robustness notion such
as STL, including an interface-aware variant that distinguishes between input
and output variables, and (3) discrete-time and dense-time interpretation of
STL with generation of online and offline monitors. We specifically focus on
robotics and Cyber-Physical Systems (CPSs) applications, showing how to
integrate RTAMT with (1) the Robot Operating System (ROS) and (2)
MATLAB/Simulink environments. We evaluate the tool by demonstrating several
usage scenarios involving service robotics and avionics applications.
|
2501.18612
|
Deeply Optimizing the SAT Solver for the IC3 Algorithm
|
cs.LO cs.AI
|
The IC3 algorithm, also known as PDR, is a SAT-based model checking algorithm
that has significantly influenced the field in recent years due to its
efficiency, scalability, and completeness. It utilizes SAT solvers to solve a
series of SAT queries associated with relative induction. In this paper, we
introduce several optimizations for the SAT solver in IC3 based on our
observations of the unique characteristics of these SAT queries. By observing
that SAT queries do not necessarily require decisions on all variables, we
compute a subset of variables that need to be decided before each solving
process while ensuring that the result remains unaffected. Additionally, noting
that the overhead of binary heap operations in VSIDS is non-negligible, we
replace the binary heap with buckets to achieve constant-time operations.
Furthermore, we support temporary clauses without the need to allocate a new
activation variable for each solving process, thereby eliminating the need to
reset solvers. We developed a novel lightweight CDCL SAT solver, GipSAT, which
integrates these optimizations. A comprehensive evaluation highlights the
performance improvements achieved by GipSAT. Specifically, the GipSAT-based IC3
demonstrates an average speedup of 3.61 times in solving time compared to the
IC3 implementation based on MiniSat.
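The bucket idea can be illustrated with a small sketch (my own reconstruction
of the general technique, not GipSAT's actual data structure): variable
activities are quantized into bucket indices, so bumping a variable and
fetching a maximum-activity undecided variable both avoid the O(log n)
binary-heap operations:

```python
class BucketVSIDS:
    """Bucket-based replacement for a binary-heap VSIDS queue (sketch)."""

    def __init__(self, num_vars, num_buckets=64):
        self.num_buckets = num_buckets
        self.buckets = [set() for _ in range(num_buckets)]
        self.level = [0] * num_vars        # current bucket of each variable
        for v in range(num_vars):
            self.buckets[0].add(v)
        self.top = 0                       # highest possibly non-empty bucket

    def bump(self, var):
        """Raise a variable's activity by one bucket (O(1); saturates at top)."""
        lvl = self.level[var]
        if lvl + 1 < self.num_buckets:
            self.buckets[lvl].discard(var)
            self.level[var] = lvl + 1
            self.buckets[lvl + 1].add(var)
            self.top = max(self.top, lvl + 1)

    def pop_max(self):
        """Return a variable of maximal activity (amortized O(1))."""
        while self.top > 0 and not self.buckets[self.top]:
            self.top -= 1                  # lazily skip emptied buckets
        if not self.buckets[self.top]:
            return None
        return self.buckets[self.top].pop()
```

The trade-off versus a heap is quantization: variables in the same bucket are
indistinguishable, which is acceptable for a branching heuristic.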
|
2501.18614
|
Review and Recommendations for using Artificial Intelligence in
Intracoronary Optical Coherence Tomography Analysis
|
eess.IV cs.AI cs.CV
|
Artificial intelligence (AI) methodologies hold great promise for the rapid
and accurate diagnosis of coronary artery disease (CAD) from intravascular
optical coherent tomography (IVOCT) images. Numerous papers have been published
describing AI-based models for different diagnostic tasks, yet it remains
unclear which models have potential clinical utility and have been properly
validated. This systematic review considered published literature between
January 2015 and February 2023 describing AI-based diagnosis of CAD using
IVOCT. Our search identified 5,576 studies, with 513 included after initial
screening and 35 studies included in the final systematic review after quality
screening. Our findings indicate that most of the identified models are not
currently suitable for clinical use, primarily due to methodological flaws and
underlying biases. To address these issues, we provide recommendations to
improve model quality and research practices to enhance the development of
clinically useful AI products.
|
2501.18616
|
STAMP: Scalable Task And Model-agnostic Collaborative Perception
|
cs.CV cs.AI cs.RO
|
Perception is crucial for autonomous driving, but single-agent perception is
often constrained by sensors' physical limitations, leading to degraded
performance under severe occlusion, adverse weather conditions, and when
detecting distant objects. Multi-agent collaborative perception offers a
solution, yet challenges arise when integrating heterogeneous agents with
varying model architectures. To address these challenges, we propose STAMP, a
scalable task- and model-agnostic, collaborative perception pipeline for
heterogeneous agents. STAMP utilizes lightweight adapter-reverter pairs to
transform Bird's Eye View (BEV) features between agent-specific and shared
protocol domains, enabling efficient feature sharing and fusion. This approach
minimizes computational overhead, enhances scalability, and preserves model
security. Experiments on simulated and real-world datasets demonstrate STAMP's
comparable or superior accuracy to state-of-the-art models with significantly
reduced computational costs. As a first-of-its-kind task- and model-agnostic
framework, STAMP aims to advance research in scalable and secure mobility
systems towards Level 5 autonomy. Our project page is at
https://xiangbogaobarry.github.io/STAMP and the code is available at
https://github.com/taco-group/STAMP.
|
2501.18617
|
DarkMind: Latent Chain-of-Thought Backdoor in Customized LLMs
|
cs.CR cs.LG
|
With the growing demand for personalized AI solutions, customized LLMs have
become a preferred choice for businesses and individuals, driving the
deployment of millions of AI agents across various platforms, e.g., GPT Store
hosts over 3 million customized GPTs. Their popularity is partly driven by
advanced reasoning capabilities, such as Chain-of-Thought, which enhance their
ability to tackle complex tasks. However, their rapid proliferation introduces
new vulnerabilities, particularly in reasoning processes that remain largely
unexplored. We introduce DarkMind, a novel backdoor attack that exploits the
reasoning capabilities of customized LLMs. Designed to remain latent, DarkMind
activates within the reasoning chain to covertly alter the final outcome.
Unlike existing attacks, it operates without injecting triggers into user
queries, making it a more potent threat. We evaluate DarkMind across eight
datasets covering arithmetic, commonsense, and symbolic reasoning domains,
using five state-of-the-art LLMs with five distinct trigger implementations.
Our results demonstrate DarkMind's effectiveness across all scenarios,
underscoring its impact. Finally, we explore potential defense mechanisms to
mitigate its risks, emphasizing the need for stronger security measures.
|
2501.18618
|
Vision Aided Channel Prediction for Vehicular Communications: A Case
Study of Received Power Prediction Using RGB Images
|
cs.CV
|
The communication scenarios and channel characteristics of 6G will be more
complex and difficult to characterize. Conventional methods for channel
prediction face challenges in achieving an optimal balance between accuracy,
practicality, and generalizability. Additionally, they often fail to
effectively leverage environmental features. With the integration of
communication and artificial intelligence as a pivotal development vision for
6G, it is imperative to achieve intelligent prediction of channel
characteristics. Vision-aided methods have been employed in various wireless
communication tasks, excluding channel prediction, and have demonstrated
enhanced efficiency and performance. In this paper, we propose a vision-aided
two-stage model for channel prediction in millimeter wave vehicular
communication scenarios, realizing accurate received power prediction utilizing
solely RGB images. First, we obtain original images of the propagation
environment through an RGB camera. In stage 1, three typical computer vision
methods, namely object detection, instance segmentation, and binary masking,
are employed to extract environmental information from the original images; in
stage 2, received power is predicted from the processed images. Pre-trained
YOLOv8 and ResNet models are used in stages 1 and 2, respectively, and
fine-tuned on the datasets. Finally, we conduct five experiments to evaluate
the performance of the proposed model, demonstrating its feasibility, accuracy,
and generalization capabilities. The model proposed in this paper offers a
novel solution for intelligent channel prediction in vehicular communications.
|
2501.18619
|
FAAGC: Feature Augmentation on Adaptive Geodesic Curve Based on the
shape space theory
|
cs.CV cs.LG
|
Deep learning models have been widely applied across various domains and
industries. However, many fields still face challenges due to limited and
insufficient data. This paper proposes a Feature Augmentation on Adaptive
Geodesic Curve (FAAGC) method in the pre-shape space to augment data. In the
pre-shape space, objects with identical shapes lie on a great circle. Thus, we
project deep model representations into the pre-shape space and construct a
geodesic curve, i.e., an arc of a great circle, for each class. Feature
augmentation is then performed by sampling along these geodesic paths.
Extensive experiments demonstrate that FAAGC improves classification accuracy
under data-scarce conditions and generalizes well across various feature types.
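A minimal sketch of the underlying geometry (assumed from the abstract; FAAGC
additionally fits an adaptive geodesic per class, which is omitted here):
project features into the pre-shape space by centering and normalizing, then
sample new points along the great-circle arc between two pre-shapes.

```python
import numpy as np

def to_preshape(x):
    """Project a feature vector into the pre-shape space:
    remove location (center) and scale (normalize to unit norm)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return x / np.linalg.norm(x)

def geodesic_sample(a, b, t):
    """Sample the great-circle arc between unit pre-shapes a and b at
    parameter t in [0, 1], via spherical linear interpolation (slerp)."""
    theta = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    if theta < 1e-8:           # nearly identical pre-shapes
        return a
    return (np.sin((1 - t) * theta) * a + np.sin(t * theta) * b) / np.sin(theta)
```

Sampled points stay on the unit sphere and remain centered, so they are valid
pre-shapes and can be fed back as augmented features.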
|
2501.18620
|
Three Laws of Statistical Linguistics Emerging in images
|
cs.CV physics.comp-ph
|
Images, as products evolving alongside civilization, develop much like natural
languages. Not only are images abundant in daily life, but they are also shaped
by technology, embodying varied characteristics as they evolve over time.
Language is a sequence of symbols that represents thoughts. While written
language is typically associated with the close integration of text and sound,
images, as a combination of visual symbols and perception, are no less
communicatively powerful. This is especially notable since 60% of the sensory
input received by our central nervous system comes from vision. Given the
symbolic system inherent in images, we are curious whether images can also
exhibit the laws of statistical linguistics. To explore this, we begin with the
relationship between human thought and visual perception to decode how images
are formed by the latter mechanism. Building upon previous studies that
established the high correlation between pre-trained deep convolutional neural
networks and the human visual system, we use the VGG-19 to define words via
each kernel and calculate the number of pixels with grayscale values greater
than 90%. By (a) ranking word frequencies, (b) randomizing the order of kernel
appearances and performing the same word-count accumulation, and (c) summing
the word counts layer by layer, we are surprised to find that Zipf's, Heaps',
and Benford's laws of statistical linguistics also hold for the words that
comprise the text representing different images.
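The Zipf part of such a check can be sketched directly (a generic
rank-frequency fit, not the paper's exact procedure): sort the word counts,
fit log frequency against log rank, and compare the slope to the value of -1
that Zipf's law predicts.

```python
import numpy as np

def zipf_slope(counts):
    """Fit log(frequency) vs. log(rank) by least squares.
    Zipf's law predicts a slope near -1."""
    freqs = np.array(sorted(counts, reverse=True), dtype=float)
    ranks = np.arange(1, len(freqs) + 1)
    slope, _intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)
    return slope
```

On an exactly Zipfian count profile (frequency proportional to 1/rank) the
fitted slope is -1; empirical corpora typically land in its vicinity.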
|
2501.18623
|
VLMaterial: Procedural Material Generation with Large Vision-Language
Models
|
cs.CV cs.GR
|
Procedural materials, represented as functional node graphs, are ubiquitous
in computer graphics for photorealistic material appearance design. They allow
users to perform intuitive and precise editing to achieve desired visual
appearances. However, creating a procedural material given an input image
requires professional knowledge and significant effort. In this work, we
leverage the ability to convert procedural materials into standard Python
programs and fine-tune a large pre-trained vision-language model (VLM) to
generate such programs from input images. To enable effective fine-tuning, we
also contribute an open-source procedural material dataset and propose to
perform program-level augmentation by prompting another pre-trained large
language model (LLM). Through extensive evaluation, we show that our method
outperforms previous methods on both synthetic and real-world examples.
|
2501.18624
|
Membership Inference Attacks Against Vision-Language Models
|
cs.CR cs.AI
|
Vision-Language Models (VLMs), built on pre-trained vision encoders and large
language models (LLMs), have shown exceptional multi-modal understanding and
dialog capabilities, positioning them as catalysts for the next technological
revolution. However, while most VLM research focuses on enhancing multi-modal
interaction, the risks of data misuse and leakage have been largely unexplored.
This prompts the need for a comprehensive investigation of such risks in VLMs.
In this paper, we conduct the first analysis of misuse and leakage detection in
VLMs through the lens of membership inference attack (MIA). Specifically, we
focus on the instruction tuning data of VLMs, which is more likely to contain
sensitive or unauthorized information. To address the limitation of existing
MIA methods, we introduce a novel approach that infers membership based on a
set of samples and their sensitivity to temperature, a unique parameter in
VLMs. Based on this, we propose four membership inference methods, each
tailored to different levels of background knowledge, ultimately arriving at
the most challenging scenario. Our comprehensive evaluations show that these
methods can accurately determine membership status, e.g., achieving an AUC
greater than 0.8 targeting a small set consisting of only 5 samples on LLaVA.
|
2501.18626
|
The TIP of the Iceberg: Revealing a Hidden Class of Task-in-Prompt
Adversarial Attacks on LLMs
|
cs.CR cs.AI cs.CL
|
We present a novel class of jailbreak adversarial attacks on LLMs, termed
Task-in-Prompt (TIP) attacks. Our approach embeds sequence-to-sequence tasks
(e.g., cipher decoding, riddles, code execution) into the model's prompt to
indirectly generate prohibited inputs. To systematically assess the
effectiveness of these attacks, we introduce the PHRYGE benchmark. We
demonstrate that our techniques successfully circumvent safeguards in six
state-of-the-art language models, including GPT-4o and LLaMA 3.2. Our findings
highlight critical weaknesses in current LLM safety alignments and underscore
the urgent need for more sophisticated defence strategies.
Warning: this paper contains examples of unethical inquiries used solely for
research purposes.
|
2501.18627
|
A Radiance Field Loss for Fast and Simple Emissive Surface
Reconstruction
|
cs.GR cs.CV
|
We present a fast and simple technique to convert images into an emissive
surface-based scene representation. Building on existing emissive volume
reconstruction algorithms, we introduce a subtle yet impactful modification of
the loss function requiring changes to only a few lines of code: instead of
integrating the radiance field along rays and supervising the resulting images,
we project the training images into the scene to directly supervise the
spatio-directional radiance field.
The primary outcome of this change is the complete removal of alpha blending
and ray marching from the image formation model, instead moving these steps
into the loss computation. In addition to promoting convergence to surfaces,
this formulation assigns explicit semantic meaning to 2D subsets of the
radiance field, turning them into well-defined emissive surfaces. We finally
extract a level set from this representation, which results in a high-quality
emissive surface model.
Our method retains much of the speed and quality of the baseline algorithm.
For instance, a suitably modified variant of Instant NGP maintains comparable
computational efficiency, while achieving an average PSNR that is only 0.1 dB
lower. Most importantly, our method generates explicit surfaces in place of an
exponential volume, doing so with a level of simplicity not seen in prior work.
|
2501.18628
|
Indiana Jones: There Are Always Some Useful Ancient Relics
|
cs.CR cs.AI cs.CL cs.CY
|
This paper introduces Indiana Jones, an innovative approach to jailbreaking
Large Language Models (LLMs) by leveraging inter-model dialogues and
keyword-driven prompts. Through orchestrating interactions among three
specialised LLMs, the method achieves near-perfect success rates in bypassing
content safeguards in both white-box and black-box LLMs. The research exposes
systemic vulnerabilities within contemporary models, particularly their
susceptibility to producing harmful or unethical outputs when guided by
ostensibly innocuous prompts framed in historical contexts.
Experimental evaluations highlight the efficacy and adaptability of Indiana
Jones, demonstrating its superiority over existing jailbreak methods. These
findings emphasise the urgent need for enhanced ethical safeguards and robust
security measures in the development of LLMs. Moreover, this work provides a
critical foundation for future studies aimed at fortifying LLMs against
adversarial exploitation while preserving their utility and flexibility.
|
2501.18629
|
The Relationship Between Network Similarity and Transferability of
Adversarial Attacks
|
cs.CR cs.LG
|
Neural networks are vulnerable to adversarial attacks, and several defenses
have been proposed. Designing a robust network is a challenging task given the
wide range of attacks that have been developed. Therefore, we aim to provide
insight into the influence of network similarity on the success rate of
transferred adversarial attacks. Network designers can then compare their new
network with existing ones to estimate its vulnerability. To achieve this, we
investigate the complex relationship between network similarity and the success
rate of transferred adversarial attacks. We applied the Centered Kernel
Alignment (CKA) network similarity score and used various methods to find a
correlation between a large number of Convolutional Neural Networks (CNNs) and
adversarial attacks. Network similarity was found to be moderate across
different CNN architectures, with more complex models such as DenseNet showing
lower similarity scores due to their architectural complexity. Layer similarity
was highest for consistent, basic layers such as DataParallel, Dropout and
Conv2d, while specialized layers showed greater variability. Adversarial attack
success rates were generally consistent for non-transferred attacks, but varied
significantly for some transferred attacks, with complex networks being more
vulnerable. We found that a DecisionTreeRegressor can predict the success rate
of transferred attacks for all black-box and Carlini & Wagner attacks with an
accuracy of over 90%, suggesting that predictive models may be viable under
certain conditions. However, the variability of results across different data
subsets underscores the complexity of these relationships and suggests that
further research is needed to generalize these findings across different attack
scenarios and network architectures.
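Linear CKA, the similarity score used above, has a compact closed form; the
sketch below follows the standard formulation (feature matrices with examples
as rows), not the authors' specific pipeline:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two activation matrices
    (rows: examples, columns: features). Returns a score in [0, 1]."""
    X = X - X.mean(axis=0)                       # center each feature
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, 'fro') ** 2   # cross-covariance energy
    norm_x = np.linalg.norm(X.T @ X, 'fro')
    norm_y = np.linalg.norm(Y.T @ Y, 'fro')
    return hsic / (norm_x * norm_y)
```

CKA is invariant to orthogonal transformations and isotropic scaling of the
features, which is why it is suitable for comparing layers of different CNNs.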
|
2501.18630
|
Deformable Beta Splatting
|
cs.CV cs.GR
|
3D Gaussian Splatting (3DGS) has advanced radiance field reconstruction by
enabling real-time rendering. However, its reliance on Gaussian kernels for
geometry and low-order Spherical Harmonics (SH) for color encoding limits its
ability to capture complex geometries and diverse colors. We introduce
Deformable Beta Splatting (DBS), a deformable and compact approach that
enhances both geometry and color representation. DBS replaces Gaussian kernels
with deformable Beta Kernels, which offer bounded support and adaptive
frequency control to capture fine geometric details with higher fidelity while
achieving better memory efficiency. In addition, we extend the Beta Kernel to
color encoding, which facilitates improved representation of diffuse and
specular components, yielding superior results compared to SH-based methods.
Furthermore, unlike prior densification techniques that depend on Gaussian
properties, we mathematically prove that adjusting regularized opacity alone
ensures distribution-preserving Markov chain Monte Carlo (MCMC), independent of
the splatting kernel type. Experimental results demonstrate that DBS achieves
state-of-the-art visual quality while utilizing only 45% of the parameters and
rendering 1.5x faster than 3DGS-based methods. Notably, for the first time,
splatting-based methods outperform state-of-the-art Neural Radiance Fields,
highlighting the superior performance and efficiency of DBS for real-time
radiance field rendering.
|
2501.18632
|
Towards Safe AI Clinicians: A Comprehensive Study on Large Language
Model Jailbreaking in Healthcare
|
cs.CR cs.CL
|
Large language models (LLMs) are increasingly utilized in healthcare
applications. However, their deployment in clinical practice raises significant
safety concerns, including the potential spread of harmful information. This
study systematically assesses the vulnerabilities of six LLMs to three advanced
black-box jailbreaking techniques within medical contexts. To quantify the
effectiveness of these techniques, we propose an automated and domain-adapted
agentic evaluation pipeline. Experiment results indicate that leading
commercial and open-source LLMs are highly vulnerable to medical jailbreaking
attacks. To bolster model safety and reliability, we further investigate the
effectiveness of Continual Fine-Tuning (CFT) in defending against medical
adversarial attacks. Our findings underscore the necessity for evolving attack
methods evaluation, domain-specific safety alignment, and LLM safety-utility
balancing. This research offers actionable insights for advancing the safety
and reliability of AI clinicians, contributing to ethical and effective AI
deployment in healthcare.
|
2501.18633
|
Linguistic Analysis of Sinhala YouTube Comments on Sinhala Music Videos:
A Dataset Study
|
cs.CL
|
This research investigates the area of Music Information Retrieval (MIR) and
Music Emotion Recognition (MER) in relation to Sinhala songs, an underexplored
field in music studies. The purpose of this study is to analyze the behavior of
Sinhala comments on YouTube Sinhala song videos using social media comments as
primary data sources. The data comprise comments from 27 YouTube videos
containing 20 different Sinhala songs, carefully selected to maintain strict
linguistic reliability and ensure relevance. A total of 93,116 comments were
gathered, and the dataset was further refined through advanced filtering
methods and transliteration mechanisms, resulting in 63,471 Sinhala comments.
Additionally, 964 stop-words specific to the Sinhala language were
algorithmically derived, of which 182, once translated, matched exactly with
English stop-words from the NLTK corpus. Comparisons between general-domain
Sinhala corpora and the Sinhala YouTube Comment Corpus also confirmed the
latter as a good representation of the general domain. The meticulously curated
dataset and the derived stop-words are important resources for future research
in MIR and MER, demonstrating the potential of computational techniques to
analyze complex musical experiences across varied cultural traditions.
|
2501.18635
|
Towards Understanding Depth Perception in Foveated Rendering
|
cs.CV cs.GR
|
The true vision for real-time virtual and augmented reality is reproducing
our visual reality in its entirety on immersive displays. To this end, foveated
rendering leverages the limitations of spatial acuity in human peripheral
vision to allocate computational resources to the fovea while reducing quality
in the periphery. Such methods are often derived from studies on the spatial
resolution of the human visual system and its ability to perceive blur in the
periphery, enabling the potential for high spatial quality in real-time.
However, the effects of blur on other visual cues that depend on luminance
contrast, such as depth, remain largely unexplored. It is critical to
understand this interplay, as accurate depth representation is a fundamental
aspect of visual realism. In this paper, we present the first evaluation
exploring the effects of foveated rendering on stereoscopic depth perception.
We design a psychovisual experiment to quantitatively study the effects of
peripheral blur on depth perception. Our analysis demonstrates that
stereoscopic acuity remains unaffected (or even improves) by high levels of
peripheral blur. Based on our studies, we derive a simple perceptual model that
determines the amount of foveation that does not affect stereoacuity.
Furthermore, we analyze the model in the context of common foveation practices
reported in literature. The findings indicate that foveated rendering does not
impact stereoscopic depth perception, and stereoacuity remains unaffected up to
2x stronger foveation than commonly used. Finally, we conduct a validation
experiment and show that our findings hold for complex natural stimuli.
|
2501.18636
|
SafeRAG: Benchmarking Security in Retrieval-Augmented Generation of
Large Language Model
|
cs.CR cs.AI cs.IR
|
The indexing-retrieval-generation paradigm of retrieval-augmented generation
(RAG) has been highly successful in solving knowledge-intensive tasks by
integrating external knowledge into large language models (LLMs). However, the
incorporation of external and unverified knowledge increases the vulnerability
of LLMs because attackers can perform attack tasks by manipulating knowledge.
In this paper, we introduce a benchmark named SafeRAG designed to evaluate RAG
security. First, we classify attack tasks into silver noise, inter-context
conflict, soft ad, and white Denial-of-Service. Next, we construct a RAG
security evaluation dataset (the SafeRAG dataset), primarily by hand, for each
task. We
then utilize the SafeRAG dataset to simulate various attack scenarios that RAG
may encounter. Experiments conducted on 14 representative RAG components
demonstrate that RAG exhibits significant vulnerability to all attack tasks and
even the most apparent attack task can easily bypass existing retrievers,
filters, or advanced LLMs, resulting in the degradation of RAG service quality.
Code is available at: https://github.com/IAAR-Shanghai/SafeRAG.
|
2501.18637
|
Machine learning of microstructure--property relationships in materials
with robust features from foundational vision transformers
|
cs.CV cond-mat.mtrl-sci cs.LG physics.comp-ph
|
Machine learning of microstructure--property relationships from data is an
emerging approach in computational materials science. Most existing machine
learning efforts focus on the development of task-specific models for each
microstructure--property relationship. We propose utilizing pre-trained
foundational vision transformers for the extraction of task-agnostic
microstructure features and subsequent light-weight machine learning of a
microstructure-dependent property. We demonstrate our approach with pre-trained
state-of-the-art vision transformers (CLIP, DINOv2, SAM) in two
machine-learning case studies: (i) the elastic modulus of two-phase
microstructures based on simulation data; and (ii) the Vickers hardness of
Ni-base and Co-base superalloys based on experimental data published in the
literature. Our results show the
potential of foundational vision transformers for robust microstructure
representation and efficient machine learning of microstructure--property
relationships without the need for expensive task-specific training or
fine-tuning of bespoke deep learning models.
|
2501.18638
|
Graph of Attacks with Pruning: Optimizing Stealthy Jailbreak Prompt
Generation for Enhanced LLM Content Moderation
|
cs.CR cs.AI cs.CL
|
We present a modular pipeline that automates the generation of stealthy
jailbreak prompts derived from high-level content policies, enhancing LLM
content moderation. First, we address query inefficiency and jailbreak strength
by developing Graph of Attacks with Pruning (GAP), a method that utilizes
strategies from prior jailbreaks, resulting in 92% attack success rate on
GPT-3.5 using only 54% of the queries of the prior algorithm. Second, we
address the cold-start issue by automatically generating seed prompts from the
high-level policy using LLMs. Finally, we demonstrate the utility of these
generated jailbreak prompts for improving content moderation by fine-tuning
PromptGuard, a model trained to detect jailbreaks, increasing its accuracy on
the Toxic-Chat dataset from 5.1% to 93.89%.
|
2501.18640
|
Divergent Emotional Patterns in Disinformation on Social Media? An
Analysis of Tweets and TikToks about the DANA in Valencia
|
cs.CL cs.CY cs.SI
|
This study investigates the dissemination of disinformation on social media
platforms during the DANA event (DANA is a Spanish acronym for Depresion
Aislada en Niveles Altos, translating to high-altitude isolated depression)
that resulted in extremely heavy rainfall and devastating floods in Valencia,
Spain, on October 29, 2024. We created a novel dataset of 650 TikTok and X
posts, which was manually annotated to differentiate between disinformation and
trustworthy content. Additionally, a Few-Shot annotation approach with GPT-4o
achieved substantial agreement (Cohen's kappa of 0.684) with manual labels.
Emotion analysis revealed that disinformation on X is mainly associated with
increased sadness and fear, while on TikTok, it correlates with higher levels
of anger and disgust. Linguistic analysis using the LIWC dictionary showed that
trustworthy content utilizes more articulate and factual language, whereas
disinformation employs negations, perceptual words, and personal anecdotes to
appear credible. Audio analysis of TikTok posts highlighted distinct patterns:
trustworthy audios featured brighter tones and robotic or monotone narration,
promoting clarity and credibility, while disinformation audios leveraged tonal
variation, emotional depth, and manipulative musical elements to amplify
engagement. In detection models, SVM+TF-IDF achieved the highest F1-Score,
excelling with limited data. Incorporating audio features into
roberta-large-bne improved both Accuracy and F1-Score, surpassing its text-only
counterpart and SVM in Accuracy. GPT-4o Few-Shot also performed well,
showcasing the potential of large language models for automated disinformation
detection. These findings demonstrate the importance of leveraging both textual
and audio features for improved disinformation detection on multimodal
platforms like TikTok.
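As an illustration of the strongest baseline above, the TF-IDF weighting behind an SVM+TF-IDF detector can be sketched in a few lines. This is a generic smoothed TF-IDF, not the authors' implementation, and the toy documents are hypothetical:

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute smoothed TF-IDF weights for a list of tokenized documents."""
    n = len(docs)
    df = Counter()                       # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    # idf with add-one smoothing, as in common implementations
    idf = {t: math.log((1 + n) / (1 + c)) + 1 for t, c in df.items()}
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: (c / len(doc)) * idf[t] for t, c in tf.items()})
    return weights

docs = [
    "heavy rain floods valencia".split(),
    "rain rumors spread fast".split(),
    "official alert heavy rain".split(),
]
w = tfidf(docs)
# "floods" appears in one document only, so it outweighs the common term "rain"
assert w[0]["floods"] > w[0]["rain"]
```

The resulting sparse weight vectors would then be fed to a linear SVM, which tends to work well in exactly the limited-data regime the abstract describes.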
|
2501.18641
|
Image Velocimetry using Direct Displacement Field estimation with Neural
Networks for Fluids
|
cs.CV physics.flu-dyn
|
An important tool for experimental fluid mechanics research is Particle
Image Velocimetry (PIV). Several robust methodologies have been proposed to
estimate the velocity field from the images; however, alternative
methods are still needed to increase the spatial resolution of the results.
This work presents a novel approach for estimating fluid flow fields using
neural networks and the optical flow equation to predict displacement vectors
between sequential images. The result is a continuous representation of the
displacement that can be evaluated at the full spatial resolution of the
image. The methodology was validated on synthetic and experimental images.
Accurate results were obtained in terms of the estimation of instantaneous
velocity fields, and of the determined time average turbulence quantities and
power spectral density. The proposed methodology differs from previous
attempts to use machine learning for this task: it does not require any prior
training and can be applied directly to any pair of images.
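The optical flow equation underlying this kind of displacement estimation can be sketched as a first-order brightness-constancy residual, i.e. the quantity a network's loss would penalize. This is a generic sketch with synthetic images, not the paper's actual architecture:

```python
import numpy as np

def optical_flow_residual(i1, i2, u, v):
    """Brightness-constancy residual I_x*u + I_y*v + I_t for a predicted
    displacement field (u, v); training would minimize its squared norm."""
    ix = np.gradient(i1, axis=1)   # spatial gradient in x
    iy = np.gradient(i1, axis=0)   # spatial gradient in y
    it = i2 - i1                   # temporal difference between frames
    return ix * u + iy * v + it

# a smooth synthetic frame shifted by one pixel in x is explained by u=1, v=0
x = np.arange(64)
i1 = np.sin(2 * np.pi * x / 64)[None, :] * np.ones((8, 1))
i2 = np.roll(i1, 1, axis=1)
good = optical_flow_residual(i1, i2, u=np.ones_like(i1), v=np.zeros_like(i1))
bad = optical_flow_residual(i1, i2, u=np.zeros_like(i1), v=np.zeros_like(i1))
assert np.abs(good).mean() < np.abs(bad).mean()
```

In the paper's setting, `u` and `v` would be the output of a neural network evaluated at arbitrary coordinates, which is what yields the continuous, full-resolution displacement field.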
|
2501.18642
|
DebiasPI: Inference-time Debiasing by Prompt Iteration of a
Text-to-Image Generative Model
|
cs.CV cs.AI cs.GR cs.HC cs.LG
|
Ethical intervention prompting has emerged as a tool to counter demographic
biases of text-to-image generative AI models. Existing solutions either
require retraining the model or struggle to generate images that reflect desired
distributions on gender and race. We propose an inference-time process called
DebiasPI for Debiasing-by-Prompt-Iteration that provides prompt intervention by
enabling the user to control the distributions of individuals' demographic
attributes in image generation. DebiasPI keeps track of which attributes have
been generated either by probing the internal state of the model or by using
external attribute classifiers. Its control loop guides the text-to-image model
to select not yet sufficiently represented attributes. With DebiasPI, we were
able to create images with equal representations of race and gender that
visualize challenging concepts of news headlines. We also experimented with the
attributes age, body type, profession, and skin tone, and measured how
attributes change when our intervention prompt targets the distribution of an
unrelated attribute type. We found, for example, if the text-to-image model is
asked to balance racial representation, gender representation improves but the
skin tone becomes less diverse. Attempts to cover a wide range of skin colors
with various intervention prompts showed that the model struggles to generate
the palest skin tones. We conducted various ablation studies in which we
removed DebiasPI's attribute control; these reveal the model's propensity to
generate young, male characters. It sometimes visualized career success by
generating two-panel images with a pre-success dark-skinned person becoming
light-skinned with success, or switching gender from pre-success female to
post-success male, thus further motivating ethical intervention prompting with
DebiasPI.
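The control loop described above can be illustrated with a minimal sketch: a hypothetical `next_attribute` helper picks whichever attribute value is furthest below its target share. In the real system, the loop body would generate an image and classify its attributes rather than simply increment a counter:

```python
from collections import Counter

def next_attribute(counts, targets):
    """Pick the attribute value furthest below its target share,
    mimicking a control loop that steers generation toward balance."""
    total = sum(counts.values()) or 1
    # deficit = desired share minus observed share so far
    return max(targets, key=lambda a: targets[a] - counts.get(a, 0) / total)

targets = {"female": 0.5, "male": 0.5}
counts = Counter()
for _ in range(100):
    attr = next_attribute(counts, targets)
    counts[attr] += 1   # stand-in for generating and classifying one image

assert counts["female"] == counts["male"] == 50
```

Driving the prompt with the most under-represented attribute at each step is what lets the loop converge to the requested demographic distribution without retraining the model.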
|
2501.18643
|
3D Reconstruction of Shoes for Augmented Reality
|
cs.CV cs.AI eess.IV
|
This paper introduces a mobile-based solution that enhances online shoe
shopping through 3D modeling and Augmented Reality (AR), leveraging the
efficiency of 3D Gaussian Splatting. Addressing the limitations of static 2D
images, the framework generates realistic 3D shoe models from 2D images,
achieving an average Peak Signal-to-Noise Ratio (PSNR) of 32, and enables
immersive AR interactions via smartphones. A custom shoe segmentation dataset
of 3120 images was created, with the best-performing segmentation model
achieving an Intersection over Union (IoU) score of 0.95. This paper
demonstrates the potential of 3D modeling and AR to revolutionize online
shopping by offering realistic virtual interactions, with applicability across
broader fashion categories.
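For reference, the PSNR figure reported above follows the standard definition; the sketch below uses synthetic arrays standing in for a ground-truth photo and a rendered view of the reconstructed model:

```python
import numpy as np

def psnr(reference, rendered, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB between a reference image and a render."""
    mse = np.mean((reference - rendered) ** 2)
    return 10 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(1)
img = rng.random((64, 64, 3))
noisy = np.clip(img + rng.normal(0, 0.01, img.shape), 0, 1)
value = psnr(img, noisy)   # roughly 40 dB for noise of std 0.01
assert 30 < value < 50
```

Higher is better: an average PSNR of 32 means the rendered views of the Gaussian-splat model are close to the held-out photographs, which is what makes the AR previews convincing.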
|
2501.18644
|
Prompt-oriented Output of Culture-Specific Items in Translated African
Poetry by Large Language Model: An Initial Multi-layered Tabular Review
|
cs.CL
|
This paper examines the output of cultural items generated by Chat Generative
PreTrained Transformer Pro in response to three structured prompts to translate
three anthologies of African poetry. The first prompt was broad, the second
focused on poetic structure, and the third prompt emphasized cultural
specificity. To support this analysis, four comparative tables were created.
The first table presents the cultural items produced after the
three prompts, the second categorizes these outputs based on Aixelá's framework
of proper nouns and common expressions, the third summarizes the cultural
items generated by human translators, a custom translation engine, and a Large
Language Model. The final table outlines the strategies employed by Chat
Generative PreTrained Transformer Pro following the culture-specific prompt.
Compared to the outputs of cultural items from the reference human translation
and the custom translation engine in prior studies, the findings indicate that
the culture-oriented prompts used with Chat Generative PreTrained Transformer
Pro did not yield significant enhancements of cultural items during the
translation of African poetry from English to French. Among the fifty-four
cultural items, the human translation produced thirty-three cultural items in
repetition, the custom translation engine generated thirty-eight cultural items
in repetition, while Chat Generative PreTrained Transformer Pro produced
forty-one cultural items in repetition. The untranslated cultural items
revealed inconsistencies in the Large Language Model's approach to translating
cultural items in African poetry from English to French.
|
2501.18645
|
Layered Chain-of-Thought Prompting for Multi-Agent LLM Systems: A
Comprehensive Approach to Explainable Large Language Models
|
cs.CL cs.AI cs.MA
|
Large Language Models (LLMs) leverage chain-of-thought (CoT) prompting to
provide step-by-step rationales, improving performance on complex tasks.
Despite its benefits, vanilla CoT often fails to fully verify intermediate
inferences and can produce misleading explanations. In this work, we propose
Layered Chain-of-Thought (Layered-CoT) Prompting, a novel framework that
systematically segments the reasoning process into multiple layers, each
subjected to external checks and optional user feedback. We expand on the key
concepts, present three scenarios -- medical triage, financial risk assessment,
and agile engineering -- and demonstrate how Layered-CoT surpasses vanilla CoT
in terms of transparency, correctness, and user engagement. By integrating
references from recent arXiv papers on interactive explainability, multi-agent
frameworks, and agent-based collaboration, we illustrate how Layered-CoT paves
the way for more reliable and grounded explanations in high-stakes domains.
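A minimal sketch of the layered control flow described above, with placeholder functions standing in for LLM calls and external checks (all names and the retry policy are hypothetical, not the paper's specification):

```python
def layered_cot(question, layers, checks):
    """Run reasoning layer by layer; each draft must pass its external
    check before the next layer builds on it, with one retry on failure."""
    context = question
    trace = []
    for layer, check in zip(layers, checks):
        draft = layer(context)                 # stand-in for an LLM call
        if not check(draft):                   # external verification step
            draft = layer(context + " [revise]")
        trace.append(draft)
        context += " " + draft                 # next layer sees verified work
    return trace

# toy layers: triage then risk assessment, each with a trivial check
layers = [lambda c: "symptoms noted", lambda c: "low risk"]
checks = [lambda d: "symptoms" in d, lambda d: "risk" in d]
assert layered_cot("patient report", layers, checks) == ["symptoms noted", "low risk"]
```

The point of the layering is that each intermediate conclusion is validated (or revised) before later layers can build on it, which is where the transparency and correctness gains over vanilla CoT come from.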
|
2501.18648
|
Image, Text, and Speech Data Augmentation using Multimodal LLMs for Deep
Learning: A Survey
|
cs.CV
|
In the past five years, research has shifted from traditional Machine
Learning (ML) and Deep Learning (DL) approaches to leveraging Large Language
Models (LLMs), including multimodal ones, for data augmentation to enhance
generalization and combat overfitting in training deep convolutional neural
networks. However, while existing surveys predominantly focus on ML and DL
techniques or limited modalities (text or images), a gap remains in addressing
the latest advancements and multi-modal applications of LLM-based methods. This
survey fills that gap by exploring recent literature utilizing multimodal LLMs
to augment image, text, and audio data, offering a comprehensive understanding
of these processes. We outline the various methods employed in LLM-based
image, text, and speech augmentation and discuss the limitations identified
in current approaches. Additionally, we identify potential solutions to these
limitations from the literature to enhance the efficacy of data augmentation
practices using multimodal LLMs. This survey serves as a foundation for future
research, aiming to refine and expand the use of multimodal LLMs in enhancing
dataset quality and diversity for deep learning applications. (Surveyed Paper
GitHub Repo: https://github.com/WSUAgRobotics/data-aug-multi-modal-llm.
Keywords: LLM data augmentation, LLM text data augmentation, LLM image data
augmentation, LLM speech data augmentation, audio augmentation, voice
augmentation, chatGPT for data augmentation, DeepSeek R1 text data
augmentation, DeepSeek R1 image augmentation, Image Augmentation using LLM,
Text Augmentation using LLM, LLM data augmentation for deep learning
applications)
|