| id | title | categories | abstract |
|---|---|---|---|
2501.09608
|
Metric Learning with Progressive Self-Distillation for Audio-Visual
Embedding Learning
|
cs.SD cs.AI cs.CV cs.IR cs.MM eess.AS
|
Metric learning projects samples into an embedded space, where similarities
and dissimilarities are quantified based on their learned representations.
However, existing methods often rely on label-guided representation learning,
where representations of different modalities, such as audio and visual data,
are aligned based on annotated labels. This approach tends to underutilize the
complex latent features and potential relationships inherent in the
distributions of audio and visual data that are not directly tied to the
labels, resulting in suboptimal performance in audio-visual embedding learning.
To address this issue, we propose a novel architecture that integrates
cross-modal triplet loss with progressive self-distillation. Our method
enhances representation learning by leveraging inherent distributions and
dynamically refining soft audio-visual alignments -- probabilistic alignments
between audio and visual data that capture the inherent relationships beyond
explicit labels. Specifically, the model distills audio-visual
distribution-based knowledge from annotated labels in a subset of each batch.
This self-distilled knowledge is used t
|
2501.09609
|
Adversarial-Ensemble Kolmogorov Arnold Networks for Enhancing Indoor
Wi-Fi Positioning: A Defensive Approach Against Spoofing and Signal
Manipulation Attacks
|
cs.LG
|
This paper presents a study on enhancing the robustness of Wi-Fi-based
indoor positioning systems against adversarial attacks. The goal is to improve
the positioning accuracy and resilience of these systems under two attack
scenarios: Wi-Fi Spoofing and Signal Strength Manipulation. Three models are
developed and evaluated: a baseline model (M_Base), an adversarially trained
robust model (M_Rob), and an ensemble model (M_Ens). All models utilize a
Kolmogorov-Arnold Network (KAN) architecture. The robust model is trained with
adversarially perturbed data, while the ensemble model combines predictions
from both the base and robust models. Experimental results show that the robust
model reduces positioning error by approximately 10% compared to the baseline,
achieving 2.03 meters error under Wi-Fi spoofing and 2.00 meters under signal
strength manipulation. The ensemble model outperforms both, with errors of
2.01 meters and 1.975 meters for the respective attack types. This analysis
highlights the effectiveness of adversarial training techniques in mitigating
attack impacts. The findings underscore the importance of considering
adversarial scenarios in developing indoor positioning systems, as improved
resilience can significantly enhance the accuracy and reliability of such
systems in mission-critical environments.
|
2501.09611
|
EVaDE : Event-Based Variational Thompson Sampling for Model-Based
Reinforcement Learning
|
cs.LG
|
Posterior Sampling for Reinforcement Learning (PSRL) is a well-known
algorithm that augments model-based reinforcement learning (MBRL) algorithms
with Thompson sampling. PSRL maintains posterior distributions of the
environment transition dynamics and the reward function, which are intractable
for tasks with high-dimensional state and action spaces. Recent works show that
dropout, used in conjunction with neural networks, induces variational
distributions that can approximate these posteriors. In this paper, we propose
Event-based Variational Distributions for Exploration (EVaDE), which are
variational distributions that are useful for MBRL, especially when the
underlying domain is object-based. We leverage the general domain knowledge of
object-based domains to design three types of event-based convolutional layers
to direct exploration. These layers rely on Gaussian dropouts and are inserted
between the layers of the deep neural network model to help facilitate
variational Thompson sampling. We empirically show the effectiveness of
EVaDE-equipped Simulated Policy Learning (EVaDE-SimPLe) on the 100K Atari game
suite.
|
2501.09616
|
ARMAX identification of low rank graphical models
|
cs.LG
|
In large-scale systems, complex internal relationships are often present.
Such interconnected systems can be effectively described by low rank stochastic
processes. When identifying a predictive model of low rank processes from
sampling data, the rank-deficient property of spectral densities is often
obscured by the inevitable measurement noise in practice. However, existing low
rank identification approaches often do not take noise into explicit
consideration, leading to non-negligible inaccuracies even under weak noise. In
this paper, we address the identification issue of low rank processes under
measurement noise. We find that the noisy measurement model admits a sparse
plus low rank structure in latent-variable graphical models. Specifically, we
first decompose the problem into a maximum entropy covariance extension
problem, and a low rank graphical estimation problem based on an autoregressive
moving-average with exogenous input (ARMAX) model. To identify the ARMAX low
rank graphical models, we propose an estimation approach based on maximum
likelihood. The identifiability and consistency of this approach are proven
under certain conditions. Simulation results confirm the reliable performance
of the entire algorithm in both the parameter estimation and noisy data
filtering.
|
2501.09617
|
WMamba: Wavelet-based Mamba for Face Forgery Detection
|
cs.CV
|
With the rapid advancement of deepfake generation technologies, the demand
for robust and accurate face forgery detection algorithms has become
increasingly critical. Recent studies have demonstrated that wavelet analysis
can uncover subtle forgery artifacts that remain imperceptible in the spatial
domain. Wavelets effectively capture important facial contours, which are often
slender, fine-grained, and global in nature. However, existing wavelet-based
approaches fail to fully leverage these unique characteristics, resulting in
sub-optimal feature extraction and limited generalizability. To address this
challenge, we introduce WMamba, a novel wavelet-based feature extractor built
upon the Mamba architecture. WMamba maximizes the utility of wavelet
information through two key innovations. First, we propose Dynamic Contour
Convolution (DCConv), which employs specially crafted deformable kernels to
adaptively model slender facial contours. Second, by leveraging the Mamba
architecture, our method captures long-range spatial relationships with linear
computational complexity. This efficiency allows for the extraction of
fine-grained, global forgery artifacts from small image patches. Extensive
experimental results show that WMamba achieves state-of-the-art (SOTA)
performance, highlighting its effectiveness and superiority in face forgery
detection.
|
2501.09620
|
Beyond Reward Hacking: Causal Rewards for Large Language Model Alignment
|
cs.LG cs.AI
|
Recent advances in large language models (LLMs) have demonstrated significant
progress in performing complex tasks. While Reinforcement Learning from Human
Feedback (RLHF) has been effective in aligning LLMs with human preferences, it
is susceptible to spurious correlations in reward modeling. Consequently, it
often introduces biases, such as length bias, sycophancy, conceptual bias, and
discrimination, that hinder the model's ability to capture true causal
relationships. To address this, we propose a novel causal reward modeling
approach that integrates causal inference to mitigate these spurious
correlations. Our method enforces counterfactual invariance, ensuring reward
predictions remain consistent when irrelevant variables are altered. Through
experiments on both synthetic and real-world datasets, we show that our
approach mitigates various types of spurious correlations effectively,
resulting in more reliable and fair alignment of LLMs with human preferences.
As a drop-in enhancement to the existing RLHF workflow, our causal reward
modeling provides a practical way to improve the trustworthiness and fairness
of LLM finetuning.
|
2501.09621
|
Weight for Robustness: A Comprehensive Approach towards Optimal
Fault-Tolerant Asynchronous ML
|
cs.LG
|
We address the challenges of Byzantine-robust training in asynchronous
distributed machine learning systems, aiming to enhance efficiency amid massive
parallelization and heterogeneous computing resources. Asynchronous systems,
marked by independently operating workers and intermittent updates, uniquely
struggle with maintaining integrity against Byzantine failures, which encompass
malicious or erroneous actions that disrupt learning. The inherent delays in
such settings not only introduce additional bias to the system but also obscure
the disruptions caused by Byzantine faults. To tackle these issues, we adapt
the Byzantine framework to asynchronous dynamics by introducing a novel
weighted robust aggregation framework. This allows for the extension of robust
aggregators and a recent meta-aggregator to their weighted versions, mitigating
the effects of delayed updates. By further incorporating a recent
variance-reduction technique, we achieve an optimal convergence rate for the
first time in an asynchronous Byzantine environment. Our methodology is
rigorously validated through empirical and theoretical analysis, demonstrating
its effectiveness in enhancing fault tolerance and optimizing performance in
asynchronous ML systems.
|
2501.09622
|
Optimizing hypergraph product codes with random walks, simulated
annealing and reinforcement learning
|
quant-ph cs.IT math.IT
|
Hypergraph products are quantum low-density parity-check (LDPC) codes
constructed from two classical LDPC codes. Although their dimension and
distance depend only on the parameters of the underlying classical codes,
optimizing their performance against various noise channels remains
challenging. This difficulty partly stems from the complexity of decoding in
the quantum setting. The standard, ad hoc approach typically involves selecting
classical LDPC codes with large girth. In this work, we focus on optimizing
performance against the quantum erasure channel. A key advantage of this
channel is the existence of an efficient maximum-likelihood decoder, which
enables us to employ optimization techniques based on sampling random codes,
such as Reinforcement Learning (RL) and Simulated Annealing (SA). Our results
indicate that these techniques improve performance relative to the
state-of-the-art.
|
2501.09628
|
Artificial Intelligence-Driven Clinical Decision Support Systems
|
cs.AI
|
As artificial intelligence (AI) becomes increasingly embedded in healthcare
delivery, this chapter explores the critical aspects of developing reliable and
ethical Clinical Decision Support Systems (CDSS). Beginning with the
fundamental transition from traditional statistical models to sophisticated
machine learning approaches, this work examines rigorous validation strategies
and performance assessment methods, including the crucial role of model
calibration and decision curve analysis. The chapter emphasizes that creating
trustworthy AI systems in healthcare requires more than just technical
accuracy; it demands careful consideration of fairness, explainability, and
privacy. The challenge of ensuring equitable healthcare delivery through AI is
stressed, discussing methods to identify and mitigate bias in clinical
predictive models. The chapter then delves into explainability as a cornerstone
of human-centered CDSS. This focus reflects the understanding that healthcare
professionals must not only trust AI recommendations but also comprehend their
underlying reasoning. The discussion then advances to an analysis of privacy
vulnerabilities in medical AI systems, from data leakage in deep learning
models to sophisticated attacks against model explanations. The text explores
privacy-preservation strategies such as differential privacy and federated
learning, while acknowledging the inherent trade-offs between privacy
protection and model performance. This progression, from technical validation
to ethical considerations, reflects the multifaceted challenges of developing
AI systems that can be seamlessly and reliably integrated into daily clinical
practice while maintaining the highest standards of patient care and data
protection.
|
2501.09631
|
Empowering Large Language Models in Wireless Communication: A Novel
Dataset and Fine-Tuning Framework
|
cs.LG
|
In this work, we develop a specialized dataset aimed at enhancing the
evaluation and fine-tuning of large language models (LLMs) specifically for
wireless communication applications. The dataset includes a diverse set of
multi-hop questions, including true/false and multiple-choice types, spanning
varying difficulty levels from easy to hard. By utilizing advanced language
models for entity extraction and question generation, rigorous data curation
processes are employed to maintain high quality and relevance. Additionally, we
introduce a Pointwise V-Information (PVI) based fine-tuning method, providing a
detailed theoretical analysis and justification for its use in quantifying the
information content of training data; it yields performance boosts of 2.24% and
1.31% over the baselines for two different models. To demonstrate the
effectiveness of the fine-tuned models with the proposed methodologies on
practical tasks, we also consider different tasks, including summarizing
optimization problems from technical papers and solving the mathematical
problems related to non-orthogonal multiple access (NOMA), which are generated
by using the proposed multi-agent framework. Simulation results show a
significant performance gain in summarization tasks, with a 20.9% improvement in
the ROUGE-L metric. We also study the scaling laws of fine-tuning LLMs and the challenges
LLMs face in the field of wireless communications, offering insights into their
adaptation to wireless communication tasks. This dataset and fine-tuning
methodology aim to enhance the training and evaluation of LLMs, contributing to
advancements in LLMs for wireless communication research and applications.
|
2501.09632
|
Platform-Aware Mission Planning
|
cs.AI
|
Planning for autonomous systems typically requires reasoning with models at
different levels of abstraction, and the harmonization of two competing sets of
objectives: high-level mission goals that refer to an interaction of the system
with the external environment, and low-level platform constraints that aim to
preserve the integrity and the correct interaction of the subsystems. The
complicated interplay between these two models makes it very hard to reason on
the system as a whole, especially when the objective is to find plans with
robustness guarantees, considering the non-deterministic behavior of the lower
layers of the system.
In this paper, we introduce the problem of Platform-Aware Mission Planning
(PAMP), addressing it in the setting of temporal durative actions. The PAMP
problem differs from standard temporal planning in its exists-forall nature:
the high-level plan dealing with mission goals is required to satisfy safety
and executability constraints, for all the possible non-deterministic
executions of the low-level model of the platform and the environment. We
propose two approaches for solving PAMP. The first baseline approach
amalgamates the mission and platform levels, while the second is based on an
abstraction-refinement loop that leverages the combination of a planner and a
verification engine. We prove the soundness and completeness of the proposed
approaches and validate them experimentally, demonstrating the importance of
heterogeneous modeling and the superiority of the technique based on
abstraction-refinement.
|
2501.09635
|
Unified Face Matching and Physical-Digital Spoofing Attack Detection
|
cs.CV
|
Face recognition technology has dramatically transformed the landscape of
security, surveillance, and authentication systems, offering a user-friendly
and non-invasive biometric solution. However, despite its significant
advantages, face recognition systems face increasing threats from physical and
digital spoofing attacks. Current research typically treats face recognition
and attack detection as distinct classification challenges. This approach
necessitates the implementation of separate models for each task, leading to
considerable computational complexity, particularly on devices with limited
resources. Such inefficiencies can stifle scalability and hinder performance.
In response to these challenges, this paper introduces an innovative unified
model designed for face recognition and detection of physical and digital
attacks. By leveraging the advanced Swin Transformer backbone and incorporating
HiLo attention in a convolutional neural network framework, we address unified
face recognition and spoof attack detection more effectively. Moreover, we
introduce augmentation techniques that replicate the traits of physical and
digital spoofing cues, significantly enhancing our model's robustness. Through
comprehensive experimental evaluation across various datasets, we showcase the
effectiveness of our model in unified face recognition and spoof detection.
Additionally, we confirm its resilience against unseen physical and digital
spoofing attacks, underscoring its potential for real-world applications.
|
2501.09636
|
LLM-Based Routing in Mixture of Experts: A Novel Framework for Trading
|
cs.LG q-fin.TR
|
Recent advances in deep learning and large language models (LLMs) have
facilitated the deployment of the mixture-of-experts (MoE) mechanism in the
stock investment domain. While these models have demonstrated promising trading
performance, they are often unimodal, neglecting the wealth of information
available in other modalities, such as textual data. Moreover, the traditional
neural network-based router selection mechanism fails to consider contextual
and real-world nuances, resulting in suboptimal expert selection. To address
these limitations, we propose LLMoE, a novel framework that employs LLMs as the
router within the MoE architecture. Specifically, we replace the conventional
neural network-based router with LLMs, leveraging their extensive world
knowledge and reasoning capabilities to select experts based on historical
price data and stock news. This approach provides a more effective and
interpretable selection mechanism. Our experiments on multimodal real-world
stock datasets demonstrate that LLMoE outperforms state-of-the-art MoE models
and other deep neural network approaches. Additionally, the flexible
architecture of LLMoE allows for easy adaptation to various downstream tasks.
|
2501.09640
|
Electronic Health Records: Towards Digital Twins in Healthcare
|
cs.AI
|
The pivotal shift from traditional paper-based records to sophisticated
Electronic Health Records (EHR) enabled the systematic collection and analysis of
patient data through descriptive statistics, providing insight into patterns
and trends across patient populations. This evolution continued toward
predictive analytics, allowing healthcare providers to anticipate patient
outcomes and potential complications before they occur. This progression from
basic digital record-keeping to sophisticated predictive modelling and digital
twins reflects healthcare's broader evolution toward more integrated,
patient-centred approaches that combine data-driven insights with personalized
care delivery. This chapter explores the evolution and significance of
healthcare information systems, beginning with an examination of the
implementation of EHR in the UK and the USA. It provides a comprehensive
overview of the International Classification of Diseases (ICD) system, tracing
its development from ICD-9 to ICD-10. Central to this discussion is the
MIMIC-III database, a landmark achievement in healthcare data sharing and
arguably the most comprehensive critical care database freely available to
researchers worldwide. MIMIC-III has democratized access to high-quality
healthcare data, enabling unprecedented opportunities for research and
analysis. The chapter examines its structure, clinical outcome analysis
capabilities, and practical applications through case studies, with a
particular focus on mortality and length of stay metrics, vital signs
extraction, and ICD coding. Through detailed entity-relationship diagrams and
practical examples, the text illustrates MIMIC's complex data structure and
demonstrates how different querying approaches can lead to subtly different
results, emphasizing the critical importance of understanding the database's
architecture for accurate data extraction.
|
2501.09645
|
CarMem: Enhancing Long-Term Memory in LLM Voice Assistants through
Category-Bounding
|
cs.AI cs.CL cs.HC
|
In today's assistant landscape, personalisation enhances interactions,
fosters long-term relationships, and deepens engagement. However, many systems
struggle with retaining user preferences, leading to repetitive user requests
and disengagement. Furthermore, the unregulated and opaque extraction of user
preferences in industry applications raises significant concerns about privacy
and trust, especially in regions with stringent regulations like Europe. In
response to these challenges, we propose a long-term memory system for voice
assistants, structured around predefined categories. This approach leverages
Large Language Models to efficiently extract, store, and retrieve preferences
within these categories, ensuring both personalisation and transparency. We
also introduce a synthetic multi-turn, multi-session conversation dataset
(CarMem), grounded in real industry data, tailored to an in-car voice assistant
setting. Benchmarked on the dataset, our system achieves an F1-score of .78 to
.95 in preference extraction, depending on category granularity. Our
maintenance strategy reduces redundant preferences by 95% and contradictory
ones by 92%, while the accuracy of optimal retrieval is .87. Collectively,
the results demonstrate the system's suitability for industrial applications.
|
2501.09646
|
NS-Gym: Open-Source Simulation Environments and Benchmarks for
Non-Stationary Markov Decision Processes
|
cs.AI
|
In many real-world applications, agents must make sequential decisions in
environments where conditions are subject to change due to various exogenous
factors. These non-stationary environments pose significant challenges to
traditional decision-making models, which typically assume stationary dynamics.
Non-stationary Markov decision processes (NS-MDPs) offer a framework to model
and solve decision problems under such changing conditions. However, the lack
of standardized benchmarks and simulation tools has hindered systematic
evaluation and advancement in this field. We present NS-Gym, the first simulation
toolkit designed explicitly for NS-MDPs, integrated within the popular
Gymnasium framework. In NS-Gym, we segregate the evolution of the environmental
parameters that characterize non-stationarity from the agent's decision-making
module, allowing for modular and flexible adaptations to dynamic environments.
We review prior work in this domain and present a toolkit encapsulating key
problem characteristics and types in NS-MDPs. This toolkit is the first effort
to develop a set of standardized interfaces and benchmark problems to enable
consistent and reproducible evaluation of algorithms under non-stationary
conditions. We also benchmark six algorithmic approaches from prior work on
NS-MDPs using NS-Gym. Our vision is that NS-Gym will enable researchers to
assess the adaptability and robustness of their decision-making algorithms to
non-stationary conditions.
|
2501.09649
|
Monte Carlo Tree Search with Velocity Obstacles for safe and efficient
motion planning in dynamic environments
|
cs.AI cs.RO
|
Online motion planning is a challenging problem for intelligent robots moving
in dense environments with dynamic obstacles, e.g., crowds. In this work, we
propose a novel approach for optimal and safe online motion planning with
minimal information about dynamic obstacles. Specifically, our approach
requires only the current position of the obstacles and their maximum speed,
but it does not need any information about their exact trajectories or dynamic
model. The proposed methodology combines Monte Carlo Tree Search (MCTS), for
online optimal planning via model simulations, with Velocity Obstacles (VO),
for obstacle avoidance. We perform experiments in a cluttered simulated
environment with walls, and up to 40 dynamic obstacles moving with random
velocities and directions. With an ablation study, we show the key contribution
of VO in scaling up the efficiency of MCTS, selecting the safest and most
rewarding actions in the tree of simulations. Moreover, we show the superiority
of our methodology with respect to state-of-the-art planners, including
Non-linear Model Predictive Control (NMPC), in terms of collision rate,
computational cost, and task performance.
|
2501.09653
|
The Heap: A Contamination-Free Multilingual Code Dataset for Evaluating
Large Language Models
|
cs.CL cs.AI
|
The recent rise in the popularity of large language models has spurred the
development of extensive code datasets needed to train them. This has left
limited code available for collection and use in the downstream investigation
of specific behaviors, or evaluation of large language models without suffering
from data contamination. To address this problem, we release The Heap, a large
multilingual dataset covering 57 programming languages that has been
deduplicated with respect to other open datasets of code, enabling researchers
to conduct fair evaluations of large language models without significant data
cleaning overhead.
|
2501.09655
|
A Survey of Research in Large Language Models for Electronic Design
Automation
|
cs.LG
|
Within the rapidly evolving domain of Electronic Design Automation (EDA),
Large Language Models (LLMs) have emerged as transformative technologies,
offering unprecedented capabilities for optimizing and automating various
aspects of electronic design. This survey provides a comprehensive exploration
of LLM applications in EDA, focusing on advancements in model architectures,
the implications of varying model sizes, and innovative customization
techniques that enable tailored analytical insights. By examining the
intersection of LLM capabilities and EDA requirements, the paper highlights the
significant impact these models have on extracting nuanced understandings from
complex datasets. Furthermore, it addresses the challenges and opportunities in
integrating LLMs into EDA workflows, paving the way for future research and
application in this dynamic field. Through this detailed analysis, the survey
aims to offer valuable insights to professionals in the EDA industry, AI
researchers, and anyone interested in the convergence of advanced AI
technologies and electronic design.
|
2501.09659
|
Fokker-Planck to Callan-Symanzik: evolution of weight matrices under
training
|
cs.LG
|
The dynamical evolution of a neural network during training has been a
fascinating subject of study. First-principles derivation of the generic
evolution of variables in statistical physics systems has proved useful for
describing training dynamics conceptually, which in practice means numerically
solving equations such as the Fokker-Planck equation. Simulating entire
networks inevitably runs into the curse of dimensionality. In this paper, we
utilize the Fokker-Planck equation to simulate the probability density evolution of
individual weight matrices in the bottleneck layers of a simple
2-bottleneck-layered auto-encoder and compare the theoretical evolutions
against the empirical ones by examining the output data distributions. We also
derive physically relevant partial differential equations such as
Callan-Symanzik and Kardar-Parisi-Zhang equations from the dynamical equation
we have.
|
2501.09665
|
Design-Agnostic Distributed Timing Fault Injection Monitor With
End-to-End Design Automation
|
eess.SY cs.SY
|
Fault injection attacks (FIAs) induce hardware failures in circuits and exploit
these faults to compromise the security of the system. It has been demonstrated
that FIAs can bypass system security mechanisms, cause faulty outputs, and gain
access to secret information. Certain types of FIAs can be mounted with little
effort by tampering with clock signals and/or the chip operating conditions. To
mitigate such low cost, yet powerful attacks, we propose a fully synthesizable
and distributable in situ fault injection monitor that employs a delay locked
loop to track the pulsewidth of the clock. We further develop a fully automated
design framework to optimize and implement the FIA monitors at any process
node. Our design is fabricated and verified in 65 nm CMOS technology with a
small footprint of 1500 µm². It can lock to clock frequencies from 2 MHz to
1.26 GHz while detecting all 12 types of possible clock glitches, as well as
timing FIA injections via the supply voltage, electromagnetic signals, and chip
temperature.
|
2501.09668
|
Model Predictive Path Integral Docking of Fully Actuated Surface Vessel
|
cs.RO
|
Autonomous docking remains one of the most challenging maneuvers in marine
robotics, requiring precise control and robust perception in confined spaces.
This paper presents a novel approach integrating Model Predictive Path
Integral (MPPI) control with real-time LiDAR-based dock detection for autonomous
surface vessel docking. Our framework uniquely combines probabilistic
trajectory optimization with a multiobjective cost function that simultaneously
considers docking precision, safety constraints, and motion efficiency. The
MPPI controller generates optimal trajectories by intelligently sampling
control sequences and evaluating their costs based on dynamic clearance
requirements, orientation alignment, and target position objectives. We
introduce an adaptive dock detection pipeline that processes LiDAR point clouds
to extract critical geometric features, enabling real-time updates of docking
parameters. The proposed method is extensively validated in a physics-based
simulation environment that incorporates realistic sensor noise, vessel
dynamics, and environmental constraints. Results demonstrate successful docking
from various initial positions while maintaining safe clearances and smooth
motion characteristics.
|
2501.09672
|
Robin: a Suite of Multi-Scale Vision-Language Models and the CHIRP
Evaluation Benchmark
|
cs.CV cs.AI
|
The proliferation of Vision-Language Models (VLMs) in the past several years
calls for rigorous and comprehensive evaluation methods and benchmarks. This
work analyzes existing VLM evaluation techniques, including automated metrics,
AI-based assessments, and human evaluations across diverse tasks. We first
introduce Robin - a novel suite of VLMs that we built by combining Large
Language Models (LLMs) and Vision Encoders (VEs) at multiple scales, and use
Robin to identify shortcomings of current evaluation approaches across scales.
Next, to overcome the identified limitations, we introduce CHIRP - a new long
form response benchmark we developed for more robust and complete VLM
evaluation. We provide open access to the Robin training code, model suite, and
CHIRP benchmark to promote reproducibility and advance VLM research.
|
2501.09674
|
Authenticated Delegation and Authorized AI Agents
|
cs.CY cs.AI cs.NI
|
The rapid deployment of autonomous AI agents creates urgent challenges around
authorization, accountability, and access control in digital spaces. New
standards are needed to know whom AI agents act on behalf of and guide their
use appropriately, protecting online spaces while unlocking the value of task
delegation to autonomous agents. We introduce a novel framework for
authenticated, authorized, and auditable delegation of authority to AI agents,
where human users can securely delegate and restrict the permissions and scope
of agents while maintaining clear chains of accountability. This framework
builds on existing identification and access management protocols, extending
OAuth 2.0 and OpenID Connect with agent-specific credentials and metadata,
maintaining compatibility with established authentication and web
infrastructure. Further, we propose a framework for translating flexible,
natural language permissions into auditable access control configurations,
enabling robust scoping of AI agent capabilities across diverse interaction
modalities. Taken together, this practical approach facilitates immediate
deployment of AI agents while addressing key security and accountability
concerns, working toward ensuring agentic AI systems perform only appropriate
actions and providing a tool for digital service providers to enable AI agent
interactions without risking harm from scalable interaction.
|
2501.09680
|
CoNav Chair: Design of a ROS-based Smart Wheelchair for Shared Control
Navigation in the Built Environment
|
cs.RO
|
With the number of people with disabilities (PWD) increasing worldwide each
year, the demand for mobility support to enable independent living and social
integration is also growing. Wheelchairs commonly support the mobility of PWD
in both indoor and outdoor environments. However, current powered wheelchairs
(PWC) often fail to meet the needs of PWD, who may find it difficult to operate
them. Furthermore, existing research on robotic wheelchairs typically focuses
either on full autonomy or enhanced manual control, which can lead to reduced
efficiency and user trust. To address these issues, this paper proposes a Robot
Operating System (ROS)-based smart wheelchair, called CoNav Chair, that
incorporates a shared control navigation algorithm and obstacle avoidance to
support PWD while fostering efficiency and trust between the robot and the
user. Our design consists of hardware and software components. Experimental
results conducted in a typical indoor social environment demonstrate the
performance and effectiveness of the smart wheelchair hardware and software
design. This integrated design promotes trust and autonomy, which are crucial
for the acceptance of assistive mobility technologies in the built environment.
|
2501.09682
|
Incorporating Quantum Advantage in Quantum Circuit Generation through
Genetic Programming
|
quant-ph cs.AI cs.ET cs.NE
|
Designing efficient quantum circuits that leverage quantum advantage compared
to classical computing has become increasingly critical. Genetic algorithms
have shown potential in generating such circuits through artificial evolution.
However, integrating quantum advantage into the fitness function of these
algorithms remains unexplored. In this paper, we aim to enhance the efficiency
of quantum circuit design by proposing two novel approaches for incorporating
quantum advantage metrics into the fitness function of genetic algorithms. We
evaluate our approaches based on the Bernstein-Vazirani Problem and the
Unstructured Database Search Problem as test cases. The results demonstrate
that our approaches not only improve the convergence speed of the genetic
algorithm but also produce circuits comparable to expert-designed solutions.
Our findings suggest that automated quantum circuit design using genetic
algorithms that incorporate a measure of quantum advantage is a promising
approach to accelerating the development of quantum algorithms.
|
2501.09683
|
Rough kernel hedging
|
math.FA cs.LG stat.ML
|
Building on the functional-analytic framework of operator-valued kernels and
un-truncated signature kernels, we propose a scalable, provably convergent
signature-based algorithm for a broad class of high-dimensional, path-dependent
hedging problems. We make minimal assumptions about market dynamics by
modelling them as general geometric rough paths, yielding a fully model-free
approach. Furthermore, through a representer theorem, we provide theoretical
guarantees on the existence and uniqueness of a global minimum for the
resulting optimization problem and derive an analytic solution under highly
general loss functions. Similar to the popular deep hedging approach, but in a
more rigorous fashion, our method can also incorporate additional features via
the underlying operator-valued kernel, such as trading signals, news analytics,
and past hedging decisions, closely aligning with true machine-learning
practice.
|
2501.09685
|
Inference-Time Alignment in Diffusion Models with Reward-Guided
Generation: Tutorial and Review
|
cs.AI cs.LG q-bio.QM stat.ML
|
This tutorial provides an in-depth guide on inference-time guidance and
alignment methods for optimizing downstream reward functions in diffusion
models. While diffusion models are renowned for their generative modeling
capabilities, practical applications in fields such as biology often require
sample generation that maximizes specific metrics (e.g., stability, affinity in
proteins, closeness to target structures). In these scenarios, diffusion models
can be adapted not only to generate realistic samples but also to explicitly
maximize desired measures at inference time without fine-tuning. This tutorial
explores the foundational aspects of such inference-time algorithms. We review
these methods from a unified perspective, demonstrating that current techniques
-- such as Sequential Monte Carlo (SMC)-based guidance, value-based sampling,
and classifier guidance -- aim to approximate soft optimal denoising processes
(a.k.a. policies in RL) that combine pre-trained denoising processes with value
functions serving as look-ahead functions that predict from intermediate states
to terminal rewards. Within this framework, we present several novel algorithms
not yet covered in the literature. Furthermore, we discuss (1) fine-tuning
methods combined with inference-time techniques, (2) inference-time algorithms
based on search algorithms such as Monte Carlo tree search, which have received
limited attention in current research, and (3) connections between
inference-time algorithms in language models and diffusion models. The code of
this tutorial on protein design is available at
https://github.com/masa-ue/AlignInversePro
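The SMC-style reweighting mentioned in this abstract can be sketched generically. This is not the tutorial's code: `value_fn` and the particle representation are placeholders, and the actual denoising transition is omitted.

```python
import numpy as np

def smc_reweight(particles, value_fn, temperature=1.0, rng=None):
    """One SMC-style resampling step (generic sketch): score intermediate
    states with a value function (the look-ahead estimate of terminal
    reward), exponentiate, normalize, and resample in proportion."""
    rng = rng or np.random.default_rng(0)
    values = np.array([value_fn(p) for p in particles], dtype=float)
    weights = np.exp((values - values.max()) / temperature)
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return [particles[i] for i in idx]
```

Interleaving such a resampling step between denoising steps biases the particle population toward high-reward terminal samples without fine-tuning the model.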
|
2501.09686
|
Towards Large Reasoning Models: A Survey of Reinforced Reasoning with
Large Language Models
|
cs.AI cs.CL
|
Language has long been conceived as an essential tool for human reasoning.
The breakthrough of Large Language Models (LLMs) has sparked significant
research interest in leveraging these models to tackle complex reasoning tasks.
Researchers have moved beyond simple autoregressive token generation by
introducing the concept of "thought" -- a sequence of tokens representing
intermediate steps in the reasoning process. This innovative paradigm enables
LLMs to mimic complex human reasoning processes, such as tree search and
reflective thinking. Recently, an emerging trend of learning to reason has
applied reinforcement learning (RL) to train LLMs to master reasoning
processes. This approach enables the automatic generation of high-quality
reasoning trajectories through trial-and-error search algorithms, significantly
expanding LLMs' reasoning capacity by providing substantially more training
data. Furthermore, recent studies demonstrate that encouraging LLMs to "think"
with more tokens during test-time inference can further significantly boost
reasoning accuracy. Therefore, train-time and test-time scaling combine to open a
new research frontier -- a path toward Large Reasoning Models. The
introduction of OpenAI's o1 series marks a significant milestone in this
research direction. In this survey, we present a comprehensive review of recent
progress in LLM reasoning. We begin by introducing the foundational background
of LLMs and then explore the key technical components driving the development
of large reasoning models, with a focus on automated data construction,
learning-to-reason techniques, and test-time scaling. We also analyze popular
open-source projects aimed at building large reasoning models, and conclude with open
challenges and future research directions.
|
2501.09687
|
U-Fair: Uncertainty-based Multimodal Multitask Learning for Fairer
Depression Detection
|
cs.LG
|
Machine learning bias in mental health is becoming an increasingly pertinent
challenge. Despite promising efforts indicating that multitask approaches often
work better than unitask approaches, minimal work has investigated the
impact of multitask learning on performance and fairness in depression
detection, or leveraged it to achieve fairer prediction outcomes. In this work,
we undertake a systematic investigation of using a multitask approach to
improve performance and fairness for depression detection. We propose a novel
gender-based task-reweighting method using uncertainty grounded in how the
PHQ-8 questionnaire is structured. Our results indicate that, although a
multitask approach improves performance and fairness compared to a unitask
approach, the results are not always consistent and we see evidence of negative
transfer and a reduction in the Pareto frontier, which is concerning given the
high-stake healthcare setting. Our proposed approach of gender-based
reweighting with uncertainty improves performance and fairness and alleviates
both challenges to a certain extent. Our findings on each PHQ-8 subitem task
difficulty are also in agreement with the largest study conducted on the PHQ-8
subitem discrimination capacity, thus providing the very first tangible
evidence linking ML findings with large-scale empirical population studies
conducted on the PHQ-8.
|
2501.09688
|
Fine-Grained Image-Text Correspondence with Cost Aggregation for
Open-Vocabulary Part Segmentation
|
cs.CV
|
Open-Vocabulary Part Segmentation (OVPS) is an emerging field for recognizing
fine-grained parts in unseen categories. We identify two primary challenges in
OVPS: (1) the difficulty in aligning part-level image-text correspondence, and
(2) the lack of structural understanding in segmenting object parts. To address
these issues, we propose PartCATSeg, a novel framework that integrates
object-aware part-level cost aggregation, compositional loss, and structural
guidance from DINO. Our approach employs a disentangled cost aggregation
strategy that handles object and part-level costs separately, enhancing the
precision of part-level segmentation. We also introduce a compositional loss to
better capture part-object relationships, compensating for the limited part
annotations. Additionally, structural guidance from DINO features improves
boundary delineation and inter-part understanding. Extensive experiments on
Pascal-Part-116, ADE20K-Part-234, and PartImageNet datasets demonstrate that
our method significantly outperforms state-of-the-art approaches, setting a new
baseline for robust generalization to unseen part categories.
|
2501.09691
|
A Near-optimal Algorithm for Learning Margin Halfspaces with Massart
Noise
|
cs.LG cs.DS math.ST stat.ML stat.TH
|
We study the problem of PAC learning $\gamma$-margin halfspaces in the
presence of Massart noise. Without computational considerations, the sample
complexity of this learning problem is known to be
$\widetilde{\Theta}(1/(\gamma^2 \epsilon))$. Prior computationally efficient
algorithms for the problem incur sample complexity $\tilde{O}(1/(\gamma^4
\epsilon^3))$ and achieve 0-1 error of $\eta+\epsilon$, where $\eta<1/2$ is the
upper bound on the noise rate. Recent work gave evidence of an
information-computation tradeoff, suggesting that a quadratic dependence on
$1/\epsilon$ is required for computationally efficient algorithms. Our main
result is a computationally efficient learner with sample complexity
$\widetilde{\Theta}(1/(\gamma^2 \epsilon^2))$, nearly matching this lower
bound. In addition, our algorithm is simple and practical, relying on online
SGD on a carefully selected sequence of convex losses.
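The abstract does not specify the loss sequence, so the following is only a generic illustration of online SGD on a convex surrogate. The LeakyReLU loss appears in related Massart-noise work; treat this as a sketch, not the paper's algorithm.

```python
import numpy as np

def leaky_grad(w, x, y, lam=0.1):
    """Gradient of the LeakyReLU surrogate loss LeakyReLU_lam(-y <w, x>),
    a convex loss used in related Massart-noise literature (assumed here
    for illustration; the paper's exact losses are not given)."""
    z = -y * np.dot(w, x)
    slope = (1.0 - lam) if z >= 0 else lam
    return slope * (-y) * x

def online_sgd(samples, dim, lr=0.1, lam=0.1):
    """Online SGD over a stream of (x, y) examples, with projection
    onto the unit ball after every step."""
    w = np.zeros(dim)
    for x, y in samples:
        w -= lr * leaky_grad(w, x, y, lam)
        norm = np.linalg.norm(w)
        if norm > 1.0:
            w /= norm
    return w
```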
|
2501.09695
|
Mitigating Hallucinations in Large Vision-Language Models via DPO:
On-Policy Data Hold the Key
|
cs.CV
|
Hallucination remains a major challenge for Large Vision-Language Models
(LVLMs). Direct Preference Optimization (DPO) has gained increasing attention
as a simple solution to hallucination issues. It directly learns from
constructed preference pairs that reflect the severity of hallucinations in
responses to the same prompt and image. Nonetheless, different data
construction methods in existing works bring notable performance variations. We
identify a crucial factor here: outcomes are largely contingent on whether the
constructed data aligns on-policy w.r.t. the initial (reference) policy of DPO.
Theoretical analysis suggests that learning from off-policy data is impeded by
the presence of KL-divergence between the updated policy and the reference
policy. From the perspective of dataset distribution, we systematically
summarize the inherent flaws in existing algorithms that employ DPO to address
hallucination issues. To alleviate the problems, we propose On-Policy Alignment
(OPA)-DPO framework, which uniquely leverages expert feedback to correct
hallucinated responses and aligns both the original and expert-revised
responses in an on-policy manner. Notably, with only 4.8k data, OPA-DPO
achieves an additional reduction in the hallucination rate of LLaVA-1.5-7B:
13.26% on the AMBER benchmark and 5.39% on the Object-Hal benchmark, compared
to the previous SOTA algorithm trained with 16k samples.
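For reference, the standard DPO objective on a single preference pair is shown below. This is the generic loss that OPA-DPO builds on; the paper's on-policy data construction and expert-feedback correction are not shown.

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO loss on one preference pair: -log sigmoid of the
    beta-scaled log-ratio margin between the chosen (less hallucinated)
    and rejected responses, measured relative to the reference policy."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

The KL term discussed in the abstract enters implicitly: the loss only rewards improvements over the reference policy, so off-policy pairs that the reference policy would never generate provide a weak learning signal.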
|
2501.09700
|
Cueless EEG imagined speech for subject identification: dataset and
benchmarks
|
cs.LG cs.AI
|
Electroencephalogram (EEG) signals have emerged as a promising modality for
biometric identification. While previous studies have explored the use of
imagined speech with semantically meaningful words for subject identification,
most have relied on additional visual or auditory cues. In this study, we
introduce a cueless EEG-based imagined speech paradigm, where subjects imagine
the pronunciation of semantically meaningful words without any external cues.
This innovative approach addresses the limitations of prior methods by
requiring subjects to select and imagine words from a predefined list
naturally. The dataset comprises over 4,350 trials from 11 subjects across five
sessions. We assess a variety of classification methods, including traditional
machine learning techniques such as Support Vector Machines (SVM) and XGBoost,
as well as time-series foundation models and deep learning architectures
specifically designed for EEG classification, such as EEG Conformer and Shallow
ConvNet. A session-based hold-out validation strategy was employed to ensure
reliable evaluation and prevent data leakage. Our results demonstrate
outstanding classification accuracy, reaching 97.93%. These findings highlight
the potential of cueless EEG paradigms for secure and reliable subject
identification in real-world applications, such as brain-computer interfaces
(BCIs).
|
2501.09705
|
Practical Continual Forgetting for Pre-trained Vision Models
|
cs.CV cs.AI cs.LG
|
For privacy and security concerns, the need to erase unwanted information
from pre-trained vision models is becoming evident nowadays. In real-world
scenarios, erasure requests originate at any time from both users and model
owners, and these requests usually form a sequence. Therefore, under such a
setting, selective information is expected to be continuously removed from a
pre-trained model while maintaining the rest. We define this problem as
continual forgetting and identify three key challenges. (i) For unwanted
knowledge, efficient and effective deleting is crucial. (ii) For remaining
knowledge, the impact brought by the forgetting procedure should be minimal.
(iii) In real-world scenarios, the training samples may be scarce or partially
missing during the process of forgetting. To address them, we first propose
Group Sparse LoRA (GS-LoRA). Specifically, towards (i), we introduce LoRA
modules to fine-tune the FFN layers in Transformer blocks for each forgetting
task independently, and towards (ii), a simple group sparse regularization is
adopted, enabling automatic selection of specific LoRA groups and zeroing out
the others. To further extend GS-LoRA to more practical scenarios, we
incorporate prototype information as additional supervision and introduce a
more practical approach, GS-LoRA++. For each forgotten class, we move the
logits away from its original prototype. For the remaining classes, we pull the
logits closer to their respective prototypes. We conduct extensive experiments
on face recognition, object detection and image classification and demonstrate
that our method manages to forget specific classes with minimal impact on other
classes. Codes have been released on https://github.com/bjzhb666/GS-LoRA.
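The group-sparse regularization described in (ii) can be sketched as a group-lasso penalty. This is a minimal NumPy illustration, not the released implementation.

```python
import numpy as np

def group_sparse_penalty(lora_groups, alpha=1e-3):
    """Group-lasso regularizer (sketch): the sum of per-group L2 norms.
    Penalizing each LoRA group's norm as a whole drives entire groups
    exactly to zero, which yields the automatic selection of specific
    LoRA groups described in the abstract."""
    return alpha * sum(np.linalg.norm(g) for g in lora_groups)
```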
|
2501.09706
|
Domain Adaptation of Foundation LLMs for e-Commerce
|
cs.CL
|
We present the e-Llama models: 8 billion and 70 billion parameter large
language models that are adapted towards the e-commerce domain. These models
are meant as foundation models with deep knowledge about e-commerce, that form
a base for instruction- and fine-tuning. The e-Llama models are obtained by
continually pretraining the Llama 3.1 base models on 1 trillion tokens of
domain-specific data.
We discuss our approach and motivate our choice of hyperparameters with a
series of ablation studies. To quantify how well the models have been adapted
to the e-commerce domain, we define and implement a set of multilingual,
e-commerce specific evaluation tasks.
We show that, when carefully choosing the training setup, the Llama 3.1
models can be adapted towards the new domain without sacrificing significant
performance on general domain tasks. We also explore the possibility of merging
the adapted model and the base model for a better control of the performance
trade-off between domains.
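Model merging of the kind mentioned above is often realized as per-parameter linear interpolation. The paper's exact merging scheme is not specified in the abstract; this is a common baseline, sketched over plain scalar "weights" for brevity.

```python
def merge_state_dicts(base, adapted, alpha=0.5):
    """Per-parameter linear interpolation between the base and the
    domain-adapted weights; alpha trades general-domain performance
    against e-commerce-domain performance."""
    return {k: (1.0 - alpha) * base[k] + alpha * adapted[k] for k in base}
```

Sweeping `alpha` then traces out the performance trade-off curve between the two domains.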
|
2501.09707
|
The Goofus & Gallant Story Corpus for Practical Value Alignment
|
cs.AI
|
Values or principles are key elements of human society that influence people
to behave and function according to an accepted standard set of social rules to
maintain social order. As AI systems are becoming ubiquitous in human society,
it is a major concern that they could violate these norms or values and
potentially cause harm. Thus, to prevent intentional or unintentional harm, AI
systems are expected to take actions that align with these principles. Training
systems to exhibit this type of behavior is difficult and often requires a
specialized dataset. This work presents a multi-modal dataset illustrating
normative and non-normative behavior in real-life situations described through
natural language and artistic images. This training set contains curated sets
of images that are designed to teach young children about social principles. We
argue that this makes it an ideal dataset for training socially normative
agents.
|
2501.09709
|
CyberMentor: AI Powered Learning Tool Platform to Address Diverse
Student Needs in Cybersecurity Education
|
cs.CY cs.AI
|
Many non-traditional students in cybersecurity programs often lack access to
advice from peers, family members and professors, which can hinder their
educational experiences. Additionally, these students may not fully benefit
from various LLM-powered AI assistants due to issues like content relevance,
locality of advice, minimum expertise, and timing. This paper addresses these
challenges by introducing an application designed to provide comprehensive
support by answering questions related to knowledge, skills, and career
preparation advice tailored to the needs of these students. We developed a
learning tool platform, CyberMentor, to address the diverse needs and pain
points of students majoring in cybersecurity. Powered by agentic workflow and
Generative Large Language Models (LLMs), the platform leverages
Retrieval-Augmented Generation (RAG) for accurate and contextually relevant
information retrieval to achieve accessibility and personalization. We
demonstrated its value in addressing knowledge requirements for cybersecurity
education and for career marketability, in tackling skill requirements for
analytical and programming assignments, and in delivering real-time, on-demand
learning support. Using three use scenarios, we showcased CyberMentor in
facilitating knowledge acquisition and career preparation and providing
seamless skill-based guidance and support. We also employed the LangChain
prompt-based evaluation methodology to evaluate the platform's impact,
confirming its strong performance in helpfulness, correctness, and
completeness. These results underscore the system's ability to support students
in developing practical cybersecurity skills while improving equity and
sustainability within higher education. Furthermore, CyberMentor's open-source
design allows for adaptation across other disciplines, fostering educational
innovation and broadening its potential impact.
|
2501.09710
|
On equidistant single-orbit cyclic and quasi-cyclic subspace codes
|
cs.IT math.IT
|
A code is said to be equidistant if the distance between any two distinct
codewords of the code is the same. In this paper, we have studied equidistant
single-orbit cyclic and quasi-cyclic subspace codes. The orbit code generated
by a subspace $U$ in $\mathbb{F}_{q^n}$ such that the dimension of $U$ over
$\mathbb{F}_q$ is $t$ or $n-t$,
$\mbox{where}~t=\dim_{\mathbb{F}_q}(\mbox{Stab}(U)\cup\{0\})$, is equidistant
and is termed a trivial equidistant orbit code. Using the concept of cyclic
difference sets, we have proved that only the trivial equidistant single-orbit
cyclic subspace codes exist. Further, we have explored equidistant single-orbit
quasi-cyclic subspace codes, focusing specifically on those which are
sunflowers.
|
2501.09712
|
Converse bounds for quantum hypothesis exclusion: A divergence-radius
approach
|
quant-ph cs.IT math.IT
|
Hypothesis exclusion is an information-theoretic task in which an
experimenter aims at ruling out a false hypothesis from a finite set of known
candidates, and an error occurs if and only if the hypothesis being ruled out
is the ground truth. For the tasks of quantum state exclusion and quantum
channel exclusion -- where hypotheses are represented by quantum states and
quantum channels, respectively -- efficiently computable upper bounds on the
asymptotic error exponents were established in a recent work of the current
authors [Ji et al., arXiv:2407.13728 (2024)], where the derivation was based on
nonasymptotic analysis. In this companion paper of our previous work, we
provide alternative proofs for the same upper bounds on the asymptotic error
exponents of quantum state and channel exclusion, but using a conceptually
different approach from the one adopted in the previous work. Specifically, we
apply strong converse results for asymmetric binary hypothesis testing to
distinguishing an arbitrary ``dummy'' hypothesis from each of the concerned
candidates. This leads to the desired upper bounds in terms of divergence radii
via a geometrically inspired argument.
|
2501.09716
|
Intelligent OLSR Routing Protocol Optimization for VANETs
|
cs.NE cs.NI
|
Recent advances in wireless technologies have given rise to the emergence of
vehicular ad hoc networks (VANETs). In such networks, the limited coverage of
WiFi and the high mobility of the nodes generate frequent topology changes and
network fragmentations. For these reasons, and taking into account that there
is no central manager entity, routing packets through the network is a
challenging task. Therefore, offering an efficient routing strategy is crucial
to the deployment of VANETs. This paper deals with the optimal parameter
setting of the optimized link state routing (OLSR), which is a well-known
mobile ad hoc network routing protocol, by defining an optimization problem.
This way, a series of representative metaheuristic algorithms (particle swarm
optimization, differential evolution, genetic algorithm, and simulated
annealing) are studied in this paper to find automatically optimal
configurations of this routing protocol. In addition, a set of realistic VANET
scenarios (based in the city of M\'alaga) have been defined to accurately
evaluate the performance of the network under our automatic OLSR. In the
experiments, our tuned OLSR configurations result in better quality of service
(QoS) than the standard request for comments (RFC 3626), as well as several
human experts, making it amenable for utilization in VANET configurations.
|
2501.09718
|
FLOL: Fast Baselines for Real-World Low-Light Enhancement
|
cs.CV cs.RO
|
Low-Light Image Enhancement (LLIE) is a key task in computational photography
and imaging. The problem of enhancing images captured during night or in dark
environments has been well-studied in the image signal processing literature.
However, current deep learning-based solutions struggle with efficiency and
robustness in real-world scenarios (e.g. scenes with noise, saturated pixels,
bad illumination). We propose a lightweight neural network that combines image
processing in the frequency and spatial domains. Our method, FLOL+, is one of
the fastest models for this task, achieving state-of-the-art results on popular
real-scene datasets such as LOL and LSRW. Moreover, we are able to process
1080p images in under 12 ms. Code and models at https://github.com/cidautai/FLOL
|
2501.09719
|
Comparative Insights from 12 Machine Learning Models in Extracting
Economic Ideology from Political Text
|
cs.CL
|
This study conducts a systematic assessment of the capabilities of 12 machine
learning models and model variations in detecting economic ideology. As an
evaluation benchmark, I use manifesto data spanning six elections in the United
Kingdom and pre-annotated by expert and crowd coders. The analysis assesses the
performance of several generative, fine-tuned, and zero-shot models at the
granular and aggregate levels. The results show that generative models such as
GPT-4o and Gemini 1.5 Flash consistently outperform other models against all
benchmarks. However, they pose issues of accessibility and resource
availability. Fine-tuning yielded competitive performance and offers a reliable
alternative through domain-specific optimization. But its dependency on
training data severely limits scalability. Zero-shot models consistently face
difficulties with identifying signals of economic ideology, often resulting in
negative associations with human coding. Using general knowledge for the
domain-specific task of ideology scaling proved to be unreliable. Other key
findings include considerable within-party variation, fine-tuning benefiting
from larger training data, and zero-shot's sensitivity to prompt content. The
assessments include the strengths and limitations of each model and derive
best-practices for automated analyses of political content.
|
2501.09720
|
A Simple Aerial Detection Baseline of Multimodal Language Models
|
cs.CV cs.AI
|
The multimodal language models (MLMs) based on generative pre-trained
Transformer are considered powerful candidates for unifying various domains and
tasks. MLMs developed for remote sensing (RS) have demonstrated outstanding
performance in multiple tasks, such as visual question answering and visual
grounding. In addition to visual grounding, which detects specific objects
corresponding to a given instruction, aerial detection, which detects all objects
of multiple categories, is also a valuable and challenging task for RS
foundation models. However, aerial detection has not been explored by existing
RS MLMs because the autoregressive prediction mechanism of MLMs differs
significantly from the structured outputs required by detection. In this paper, we present a simple
baseline for applying MLMs to aerial detection for the first time, named
LMMRotate. Specifically, we first introduce a normalization method to transform
detection outputs into textual outputs to be compatible with the MLM framework.
Then, we propose an evaluation method that ensures a fair comparison between
MLMs and conventional object detection models. We construct the baseline by
fine-tuning open-source general-purpose MLMs and achieve impressive detection
performance comparable to conventional detectors. We hope that this baseline
will serve as a reference for future MLM development, enabling more
comprehensive capabilities for understanding RS images. Code is available at
https://github.com/Li-Qingyun/mllm-mmrotate.
|
2501.09722
|
Attention based Bidirectional GRU hybrid model for inappropriate content
detection in Urdu language
|
cs.CL cs.LG
|
With the increased use of the internet and social networks for online
discussions, the spread of toxic and inappropriate content on social networking
sites has also increased. Several studies have been conducted in different
languages. However, there is less work done for South Asian languages for
inappropriate content identification using deep learning techniques. In the Urdu
language, spellings are not unique, and people write different common
spellings for the same word, while the mixing in of other languages, such as
English, makes the text more challenging to process, and limited research is
available on handling such language with well-suited algorithms. The use of an
attention layer with a deep learning model can help handle long-term dependencies
and increase its efficiency. To explore the effects of the attention layer, this
study proposes attention-based Bidirectional GRU hybrid model for identifying
inappropriate content in Urdu Unicode text language. Four different baseline
deep learning models; LSTM, Bi-LSTM, GRU, and TCN, are used to compare the
performance of the proposed model. The results of these models were compared
based on evaluation metrics, dataset size, and impact of the word embedding
layer. The pre-trained Urdu word2Vec embeddings were utilized in our case. Our
proposed model, BiGRU-A, outperformed all other baseline models, yielding 84\%
accuracy without using the pre-trained word2Vec layer. From our experiments, we
have established that the attention layer improves the model's efficiency, and
pre-trained word2Vec embedding does not work well with an inappropriate content
dataset.
|
2501.09725
|
Parallel multi-objective metaheuristics for smart communications in
vehicular networks
|
cs.NE cs.AI cs.NI
|
This article analyzes the use of two parallel multi-objective soft computing
algorithms to automatically search for high-quality settings of the Ad hoc
On-Demand Distance Vector (AODV) routing protocol for vehicular networks. These methods are based
on an evolutionary algorithm and on a swarm intelligence approach. The
experimental analysis demonstrates that the configurations computed by our
optimization algorithms outperform other state-of-the-art optimized ones. In
turn, the computational efficiency achieved by all the parallel versions is
greater than 87%. Therefore, the line of work presented in this article
represents an efficient framework to improve vehicular communications.
|
2501.09729
|
Generating particle physics Lagrangians with transformers
|
cs.LG cs.SC hep-ph hep-th
|
In physics, Lagrangians provide a systematic way to describe laws governing
physical systems. In the context of particle physics, they encode the
interactions and behavior of the fundamental building blocks of our universe.
By treating Lagrangians as complex, rule-based constructs similar to linguistic
expressions, we trained a transformer model -- proven to be effective in
natural language tasks -- to predict the Lagrangian corresponding to a given
list of particles. We report on the transformer's performance in constructing
Lagrangians respecting the Standard Model $\mathrm{SU}(3)\times
\mathrm{SU}(2)\times \mathrm{U}(1)$ gauge symmetries. The resulting model is
shown to achieve high accuracies (over 90\%) with Lagrangians up to six matter
fields, with the capacity to generalize beyond the training distribution,
albeit within architectural constraints. We show through an analysis of input
embeddings that the model has internalized concepts such as group
representations and conjugation operations as it learned to generate
Lagrangians. We make the model and training datasets available to the
community. An interactive demonstration can be found at:
\url{https://huggingface.co/spaces/JoseEliel/generate-lagrangians}.
|
2501.09731
|
Predictions as Surrogates: Revisiting Surrogate Outcomes in the Age of
AI
|
stat.ML cs.LG
|
We establish a formal connection between the decades-old surrogate outcome
model in biostatistics and economics and the emerging field of
prediction-powered inference (PPI). The connection treats predictions from
pre-trained models, prevalent in the age of AI, as cost-effective surrogates
for expensive outcomes. Building on the surrogate outcomes literature, we
develop recalibrated prediction-powered inference, a more efficient approach to
statistical inference than existing PPI proposals. Our method departs from the
existing proposals by using flexible machine learning techniques to learn the
optimal ``imputed loss'' through a step we call recalibration. Importantly, the
method always improves upon the estimator that relies solely on the data with
available true outcomes, even when the optimal imputed loss is estimated
imperfectly, and it achieves the smallest asymptotic variance among PPI
estimators if the estimate is consistent. Computationally, our optimization
objective is convex whenever the loss function that defines the target
parameter is convex. We further analyze the benefits of recalibration, both
theoretically and numerically, in several common scenarios where machine
learning predictions systematically deviate from the outcome of interest. We
demonstrate significant gains in effective sample size over existing PPI
proposals via three applications leveraging state-of-the-art machine
learning/AI models.
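The core prediction-powered idea can be sketched for mean estimation (an illustration only: the power-tuning coefficient below is in the spirit of PPI++, not the paper's learned "imputed loss" recalibration, and the predictor and data are invented):

```python
import numpy as np

def ppi_mean(y_lab, pred_lab, pred_unlab):
    """Prediction-powered estimate of E[Y]: cheap predictions on unlabeled
    data plus a bias-correcting rectifier fit on the small labeled set."""
    # Power-tuning coefficient (illustrative, PPI++-style).
    lam = np.cov(y_lab, pred_lab)[0, 1] / np.var(pred_unlab)
    rectifier = float(np.mean(y_lab - lam * pred_lab))  # debias with true outcomes
    return rectifier + lam * float(np.mean(pred_unlab))

rng = np.random.default_rng(0)
x_lab = rng.normal(1.0, 1.0, 200)          # 200 expensive labeled points
x_unlab = rng.normal(1.0, 1.0, 20000)      # 20k cheap unlabeled points
y_lab = x_lab + rng.normal(0.0, 0.1, 200)  # true mean of Y is 1.0
f = lambda x: 0.9 * x + 0.05               # biased pre-trained predictor
est = ppi_mean(y_lab, f(x_lab), f(x_unlab))
```

Even though the predictor is biased, the rectifier estimated on labeled data keeps the estimate valid while the unlabeled predictions shrink its variance.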
|
2501.09732
|
Inference-Time Scaling for Diffusion Models beyond Scaling Denoising
Steps
|
cs.CV
|
Generative models have made significant impacts across various domains,
largely due to their ability to scale during training by increasing data,
computational resources, and model size, a phenomenon characterized by the
scaling laws. Recent research has begun to explore inference-time scaling
behavior in Large Language Models (LLMs), revealing how performance can further
improve with additional computation during inference. Unlike LLMs, diffusion
models inherently possess the flexibility to adjust inference-time computation
via the number of denoising steps, although the performance gains typically
flatten after a few dozen steps. In this work, we explore the inference-time scaling
behavior of diffusion models beyond increasing denoising steps and investigate
how the generation performance can further improve with increased computation.
Specifically, we consider a search problem aimed at identifying better noises
for the diffusion sampling process. We structure the design space along two
axes: the verifiers used to provide feedback, and the algorithms used to find
better noise candidates. Through extensive experiments on class-conditioned and
text-conditioned image generation benchmarks, our findings reveal that
increasing inference-time compute leads to substantial improvements in the
quality of samples generated by diffusion models, and that, given the complex
nature of images, combinations of the components in the framework can be
chosen to suit different application scenarios.
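The simplest point in this design space, best-of-n random search over initial noises against a verifier, can be sketched with placeholder stand-ins for both the sampler and the verifier:

```python
import numpy as np

def sample(noise):
    """Placeholder diffusion sampler: maps an initial noise to a sample.
    A real sampler would run the full denoising chain."""
    return np.tanh(noise)

def verifier(img, target):
    """Placeholder verifier (higher is better); in practice this could be a
    CLIP score, an aesthetic scorer, or task-specific feedback."""
    return -float(np.mean((img - target) ** 2))

# Best-of-n search: draw candidate noises, sample from each, keep the noise
# whose sample the verifier scores highest.
rng = np.random.default_rng(0)
target = np.zeros(4)
noises = rng.normal(size=(64, 4))
scores = [verifier(sample(z), target) for z in noises]
s_first, s_best = scores[0], max(scores)
best_noise = noises[int(np.argmax(scores))]
```

Spending more inference compute here simply means scoring more candidate noises; the other axes in the framework swap in stronger verifiers and smarter search than this random draw.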
|
2501.09733
|
ComplexVAD: Detecting Interaction Anomalies in Video
|
cs.CV
|
Existing video anomaly detection datasets are inadequate for representing
complex anomalies that occur due to the interactions between objects. The
absence of complex anomalies in previous video anomaly detection datasets
affects research by shifting the focus onto simple anomalies. To address this
problem, we introduce a new large-scale dataset: ComplexVAD. In addition, we
propose a novel method to detect complex anomalies via modeling the
interactions between objects using a scene graph with spatio-temporal
attributes. With our proposed method and two other state-of-the-art video
anomaly detection methods, we obtain baseline scores on ComplexVAD and
demonstrate that our new method outperforms existing works.
|
2501.09734
|
Random Subspace Cubic-Regularization Methods, with Applications to
Low-Rank Functions
|
math.OC cs.LG cs.NA math.NA
|
We propose and analyze random subspace variants of the second-order Adaptive
Regularization using Cubics (ARC) algorithm. These methods iteratively restrict
the search space to some random subspace of the parameters, constructing and
minimizing a local model only within this subspace. Thus, our variants only
require access to (small-dimensional) projections of first- and second-order
problem derivatives and calculate a reduced step inexpensively. Under suitable
assumptions, the ensuing methods maintain the optimal first- and second-order
global rates of convergence of (full-dimensional) cubic
regularization, while showing improved scalability both theoretically and
numerically, particularly when applied to low-rank functions. When applied to
the latter, our adaptive variant naturally adapts the subspace size to the true
rank of the function, without knowing it a priori.
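A minimal numerical sketch of one such iteration (illustrative only: fixed regularization sigma, a generic BFGS inner solve, and a rank-2 quadratic test function; the paper's variants adapt sigma and the subspace size):

```python
import numpy as np
from scipy.optimize import minimize

def subspace_arc_step(x, grad, hess, sigma, k, rng):
    """One random-subspace ARC step: project derivatives onto a random
    k-dimensional subspace, minimize the local cubic model there, lift back."""
    d = x.size
    S = rng.normal(size=(d, k)) / np.sqrt(k)           # random sketch matrix
    gs, Hs = S.T @ grad(x), S.T @ hess(x) @ S          # reduced derivatives
    model = lambda u: gs @ u + 0.5 * u @ Hs @ u + sigma / 3 * np.linalg.norm(u) ** 3
    u = minimize(model, np.zeros(k)).x                 # cheap k-dimensional solve
    return x + S @ u

# Low-rank test function f(x) = 0.5 * ||A x||^2 (rank 2 in R^20).
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 20))
f = lambda x: 0.5 * float(np.sum((A @ x) ** 2))
grad = lambda x: A.T @ (A @ x)
hess = lambda x: A.T @ A
x = rng.normal(size=20)
f0 = f(x)
for _ in range(30):
    x = subspace_arc_step(x, grad, hess, sigma=1.0, k=4, rng=rng)
```

Because the test function has rank 2 < k, a random 4-dimensional subspace generically captures the full curvature, so the reduced steps make rapid progress despite only ever touching small projections of the derivatives.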
|
2501.09736
|
MultiGraphMatch: a subgraph matching algorithm for multigraphs
|
cs.DB
|
Subgraph matching is the problem of finding all the occurrences of a small
graph, called the query, in a larger graph, called the target. Although the
problem has been widely studied in simple graphs, few solutions have been
proposed for multigraphs, in which two nodes can be connected by multiple
edges, each denoting a possibly different type of relationship. In our new
algorithm MultiGraphMatch, nodes and edges can be associated with labels and
multiple properties. MultiGraphMatch introduces a novel data structure called
bit matrix to efficiently index both the query and the target and filter the
set of target edges that are matchable with each query edge. In addition, the
algorithm proposes a new technique for ordering the processing of query edges
based on the cardinalities of the sets of matchable edges. Using the Cypher
query language, MultiGraphMatch can perform queries with logical
conditions on node and edge labels. We compare MultiGraphMatch with SuMGra and
graph database systems Memgraph and Neo4J, showing comparable or better
performance in all queries on a wide variety of synthetic and real-world
graphs.
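The bit-matrix filtering idea can be illustrated with a toy bitmask index (the labels and graph below are invented for illustration; the actual data structure indexes both query and target):

```python
# Toy bit-matrix filter: the parallel edges between a node pair are summarized
# by a bitmask of their labels; a query edge matches a target pair only if the
# target mask contains every required label bit.
LABELS = {"friend": 1, "coworker": 2, "family": 4}

def bitmask(labels):
    mask = 0
    for lab in labels:
        mask |= LABELS[lab]
    return mask

# Target multigraph: (u, v) -> labels of the parallel edges between u and v.
target = {(0, 1): {"friend", "coworker"}, (1, 2): {"family"}, (0, 2): {"friend"}}
index = {pair: bitmask(ls) for pair, ls in target.items()}

def matchable(query_labels):
    """Return candidate target pairs whose mask covers the query's labels."""
    q = bitmask(query_labels)
    return [pair for pair, m in index.items() if m & q == q]

cands = matchable({"friend", "coworker"})
```

Bitwise containment tests make the per-edge filter a constant-time operation, which is what lets the candidate sets (and hence the ordering heuristic) be computed cheaply before matching starts.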
|
2501.09744
|
KU AIGEN ICL EDI@BC8 Track 3: Advancing Phenotype Named Entity
Recognition and Normalization for Dysmorphology Physical Examination Reports
|
cs.AI
|
The objective of BioCreative8 Track 3 is to extract phenotypic key medical
findings embedded within EHR texts and subsequently normalize these findings to
their Human Phenotype Ontology (HPO) terms. However, the presence of diverse
surface forms in phenotypic findings makes it challenging to accurately
normalize them to the correct HPO terms. To address this challenge, we explored
various models for named entity recognition and implemented data augmentation
techniques such as synonym marginalization to enhance the normalization step.
Our pipeline resulted in an exact extraction and normalization F1 score 2.6\%
higher than the mean score of all submissions received in response to the
challenge. Furthermore, in terms of the normalization F1 score, our approach
surpassed the average performance by 1.9\%. These findings contribute to the
advancement of automated medical data extraction and normalization techniques,
showcasing potential pathways for future research and application in the
biomedical domain.
|
2501.09745
|
Suggesting Code Edits in Interactive Machine Learning Notebooks Using
Large Language Models
|
cs.SE cs.CL cs.LG
|
Machine learning developers frequently use interactive computational
notebooks, such as Jupyter notebooks, to host code for data processing and
model training. Jupyter notebooks provide a convenient tool for writing machine
learning pipelines and interactively observing outputs; however, maintaining
Jupyter notebooks, e.g., to add new features or fix bugs, can be challenging
due to the length and complexity of the notebooks. Moreover, there is no
existing benchmark related to developer edits on Jupyter notebooks. To address
this, we present the first dataset of 48,398 Jupyter notebook edits derived
from 20,095 revisions of 792 machine learning repositories on GitHub, and
perform the first study of using LLMs to predict code edits in Jupyter
notebooks. Our dataset captures granular details of cell-level and line-level
modifications, offering a foundation for understanding real-world maintenance
patterns in machine learning workflows. We observed that the edits on Jupyter
notebooks are highly localized, with changes averaging only 166 lines of code
in repositories. While larger models outperform smaller counterparts in code
editing, all models have low accuracy on our dataset even after finetuning,
demonstrating the complexity of real-world machine learning maintenance tasks.
Our findings emphasize the critical role of contextual information in improving
model performance and point toward promising avenues for advancing large
language models' capabilities in engineering machine learning code.
|
2501.09747
|
FAST: Efficient Action Tokenization for Vision-Language-Action Models
|
cs.RO cs.LG
|
Autoregressive sequence models, such as Transformer-based vision-language
action (VLA) policies, can be tremendously effective for capturing complex and
generalizable robotic behaviors. However, such models require us to choose a
tokenization of our continuous action signals, which determines how the
discrete symbols predicted by the model map to continuous robot actions. We
find that current approaches for robot action tokenization, based on simple
per-dimension, per-timestep binning schemes, typically perform poorly when
learning dexterous skills from high-frequency robot data. To address this
challenge, we propose a new compression-based tokenization scheme for robot
actions, based on the discrete cosine transform. Our tokenization approach,
Frequency-space Action Sequence Tokenization (FAST), enables us to train
autoregressive VLAs for highly dexterous and high-frequency tasks where
standard discretization methods fail completely. Based on FAST, we release
FAST+, a universal robot action tokenizer, trained on 1M real robot action
trajectories. It can be used as a black-box tokenizer for a wide range of robot
action sequences, with diverse action spaces and control frequencies. Finally,
we show that, when combined with the pi0 VLA, our method can scale to training
on 10k hours of robot data and match the performance of diffusion VLAs, while
reducing training time by up to 5x.
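A toy version of the DCT step can be sketched as follows (illustrative only: the quantization scale and test trajectory are made up, and FAST additionally compresses the quantized coefficient stream with byte-pair encoding, which is omitted here):

```python
import numpy as np
from scipy.fft import dct, idct

def tokenize(actions, scale=100.0):
    """DCT-based tokenization sketch: transform each action dimension over
    time, then quantize the coefficients to integer tokens."""
    coeffs = dct(actions, axis=0, norm="ortho")        # time -> frequency
    return np.round(coeffs * scale).astype(int)

def detokenize(tokens, scale=100.0):
    return idct(tokens / scale, axis=0, norm="ortho")  # invert the quantized DCT

# A smooth 50-step, 2-DoF action chunk: high-frequency robot data is smooth,
# so most of the energy lands in a few low-frequency coefficients.
t = np.linspace(0, 1, 50)
actions = np.stack([np.sin(2 * np.pi * t), t ** 2], axis=1)
tokens = tokenize(actions)
recon = detokenize(tokens)
err = float(np.max(np.abs(recon - actions)))
```

Because the orthonormal DCT preserves the signal's energy, coarse quantization in frequency space reconstructs the original chunk with small error, unlike per-timestep binning at the same budget.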
|
2501.09749
|
Enhancing Lexicon-Based Text Embeddings with Large Language Models
|
cs.CL cs.IR
|
Recent large language models (LLMs) have demonstrated exceptional performance
on general-purpose text embedding tasks. While dense embeddings have dominated
related research, we introduce the first Lexicon-based EmbeddiNgS (LENS)
leveraging LLMs that achieve competitive performance on these tasks. To address
the inherent tokenization redundancy issue and unidirectional attention
limitations in traditional causal LLMs, LENS consolidates the vocabulary space
through token embedding clustering, and investigates bidirectional attention
and various pooling strategies. Specifically, LENS simplifies lexicon matching
by assigning each dimension to a specific token cluster, where semantically
similar tokens are grouped together, and unlocks the full potential of LLMs
through bidirectional attention. Extensive experiments demonstrate that LENS
outperforms dense embeddings on the Massive Text Embedding Benchmark (MTEB),
delivering compact feature representations that match the sizes of dense
counterparts. Notably, combining LENS with dense embeddings achieves
state-of-the-art performance on the retrieval subset of MTEB (i.e. BEIR).
|
2501.09751
|
OmniThink: Expanding Knowledge Boundaries in Machine Writing through
Thinking
|
cs.CL cs.AI cs.HC cs.IR cs.LG
|
Machine writing with large language models often relies on
retrieval-augmented generation. However, these approaches remain confined
within the boundaries of the model's predefined scope, limiting the generation
of content with rich information. Specifically, vanilla-retrieved information
tends to lack depth and novelty and suffers from redundancy, which negatively
impacts the quality of generated articles, leading to shallow, unoriginal, and
repetitive outputs. To address these issues, we propose OmniThink, a
slow-thinking machine writing framework that emulates the human-like process of
iterative expansion and reflection. The core idea behind OmniThink is to
simulate the cognitive behavior of learners as they slowly deepen their
knowledge of the topics. Experimental results demonstrate that OmniThink
improves the knowledge density of generated articles without compromising
metrics such as coherence and depth. Human evaluations and expert feedback
further highlight the potential of OmniThink to address real-world challenges
in the generation of long-form articles.
|
2501.09753
|
SRE-Conv: Symmetric Rotation Equivariant Convolution for Biomedical
Image Classification
|
cs.CV cs.LG eess.IV
|
Convolutional neural networks (CNNs) are essential tools for computer vision
tasks, but they lack traditionally desired properties of extracted features
that could further improve model performance, e.g., rotational equivariance.
Such properties are ubiquitous in biomedical images, which often lack explicit
orientation. While current work largely relies on data augmentation or explicit
modules to capture orientation information, this comes at the expense of
increased training costs or ineffective approximations of the desired
equivariance. To overcome these challenges, we propose a novel and efficient
implementation of the Symmetric Rotation-Equivariant (SRE) Convolution
(SRE-Conv) kernel, designed to learn rotation-invariant features while
simultaneously compressing the model size. The SRE-Conv kernel can easily be
incorporated into any CNN backbone. We validate the ability of a deep SRE-CNN
to capture equivariance to rotation using the public MedMNISTv2 dataset (16
total tasks). SRE-Conv-CNN demonstrated improved rotated image classification
performance accuracy on all 16 test datasets in both 2D and 3D images, all
while increasing efficiency with fewer parameters and reduced memory footprint.
The code is available at https://github.com/XYPB/SRE-Conv.
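A simplified take on the kernel construction (weights shared across rings of equal radius; this illustrates the symmetry and parameter-sharing idea, not the paper's exact kernel):

```python
import numpy as np

def sre_kernel(ring_weights, size):
    """Rotation-symmetric conv kernel: one learnable weight per ring of equal
    radius from the center, shared across all positions in that ring."""
    c = (size - 1) / 2
    yy, xx = np.mgrid[0:size, 0:size]
    r = np.sqrt((yy - c) ** 2 + (xx - c) ** 2)
    rings = np.minimum(np.round(r).astype(int), len(ring_weights) - 1)
    return np.asarray(ring_weights)[rings]

k = sre_kernel([1.0, 0.5, 0.1], size=5)   # 3 parameters instead of 25
```

One weight per ring instead of size-squared weights gives the compression, and radius-only dependence makes the kernel exactly invariant to 90-degree rotations (and approximately invariant to arbitrary rotations on the discrete grid).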
|
2501.09754
|
Lost in Translation, Found in Context: Sign Language Translation with
Contextual Cues
|
cs.CV
|
Our objective is to translate continuous sign language into spoken language
text. Inspired by the way human interpreters rely on context for accurate
translation, we incorporate additional contextual cues together with the
signing video, into a new translation framework. Specifically, besides visual
sign recognition features that encode the input video, we integrate
complementary textual information from (i) captions describing the background
show, (ii) translation of previous sentences, as well as (iii) pseudo-glosses
transcribing the signing. These are automatically extracted and inputted along
with the visual features to a pre-trained large language model (LLM), which we
fine-tune to generate spoken language translations in text form. Through
extensive ablation studies, we show the positive contribution of each input cue
to the translation performance. We train and evaluate our approach on BOBSL --
the largest British Sign Language dataset currently available. We show that our
contextual approach significantly enhances the quality of the translations
compared to previously reported results on BOBSL, and also to state-of-the-art
methods that we implement as baselines. Furthermore, we demonstrate the
generality of our approach by applying it also to How2Sign, an American Sign
Language dataset, and achieve competitive results.
|
2501.09755
|
Learnings from Scaling Visual Tokenizers for Reconstruction and
Generation
|
cs.CV cs.AI
|
Visual tokenization via auto-encoding empowers state-of-the-art image and
video generative models by compressing pixels into a latent space. Although
scaling Transformer-based generators has been central to recent advances, the
tokenizer component itself is rarely scaled, leaving open questions about how
auto-encoder design choices influence both its objective of reconstruction and
downstream generative performance. Our work explores scaling in auto-encoders
to fill this gap. To facilitate this exploration,
we replace the typical convolutional backbone with an enhanced Vision
Transformer architecture for Tokenization (ViTok). We train ViTok on
large-scale image and video datasets far exceeding ImageNet-1K, removing data
constraints on tokenizer scaling. We first study how scaling the auto-encoder
bottleneck affects both reconstruction and generation -- and find that while it
is highly correlated with reconstruction, its relationship with generation is
more complex. We next explore the effect of separately scaling the
auto-encoder's encoder and decoder on reconstruction and generation
performance. Crucially, we find that scaling the encoder yields minimal gains
for either reconstruction or generation, while scaling the decoder boosts
reconstruction but the benefits for generation are mixed. Building on our
exploration, we design ViTok as a lightweight auto-encoder that achieves
competitive performance with state-of-the-art auto-encoders on ImageNet-1K and
COCO reconstruction tasks (256p and 512p) while outperforming existing
auto-encoders on 16-frame 128p video reconstruction for UCF-101, all with 2-5x
fewer FLOPs. When integrated with Diffusion Transformers, ViTok demonstrates
competitive performance on image generation for ImageNet-1K and sets new
state-of-the-art benchmarks for class-conditional video generation on UCF-101.
|
2501.09756
|
SynthLight: Portrait Relighting with Diffusion Model by Learning to
Re-render Synthetic Faces
|
cs.CV cs.GR
|
We introduce SynthLight, a diffusion model for portrait relighting. Our
approach frames image relighting as a re-rendering problem, where pixels are
transformed in response to changes in environmental lighting conditions. Using
a physically-based rendering engine, we synthesize a dataset to simulate this
lighting-conditioned transformation with 3D head assets under varying lighting.
We propose two training and inference strategies to bridge the gap between the
synthetic and real image domains: (1) multi-task training that takes advantage
of real human portraits without lighting labels; (2) an inference time
diffusion sampling procedure based on classifier-free guidance that leverages
the input portrait to better preserve details. Our method generalizes to
diverse real photographs and produces realistic illumination effects, including
specular highlights and cast shadows, while preserving the subject's identity.
Our quantitative experiments on Light Stage data demonstrate results comparable
to state-of-the-art relighting methods. Our qualitative results on in-the-wild
images showcase rich and unprecedented illumination effects. Project Page:
\url{https://vrroom.github.io/synthlight/}
|
2501.09757
|
Distilling Multi-modal Large Language Models for Autonomous Driving
|
cs.CV cs.RO
|
Autonomous driving demands safe motion planning, especially in critical
"long-tail" scenarios. Recent end-to-end autonomous driving systems leverage
large language models (LLMs) as planners to improve generalizability to rare
events. However, using LLMs at test time introduces high computational costs.
To address this, we propose DiMA, an end-to-end autonomous driving system that
maintains the efficiency of an LLM-free (or vision-based) planner while
leveraging the world knowledge of an LLM. DiMA distills the information from a
multi-modal LLM to a vision-based end-to-end planner through a set of specially
designed surrogate tasks. Under a joint training strategy, a scene encoder
common to both networks produces structured representations that are
semantically grounded as well as aligned to the final planning objective.
Notably, the LLM is optional at inference, enabling robust planning without
compromising on efficiency. Training with DiMA results in a 37% reduction in
the L2 trajectory error and an 80% reduction in the collision rate of the
vision-based planner, as well as a 44% trajectory error reduction in long-tail
scenarios. DiMA also achieves state-of-the-art performance on the nuScenes
planning benchmark.
|
2501.09760
|
Boosting the Accuracy of Stock Market Prediction via Multi-Layer Hybrid
MTL Structure
|
q-fin.ST cs.LG
|
Accurate stock market prediction provides great opportunities for informed
decision-making, yet existing methods struggle with financial data's
non-linear, high-dimensional, and volatile characteristics. Advanced predictive
models are needed to effectively address these complexities. This paper
proposes a novel multi-layer hybrid multi-task learning (MTL) framework aimed
at achieving more efficient stock market predictions. It involves a Transformer
encoder to extract complex correspondences between various input features, a
Bidirectional Gated Recurrent Unit (BiGRU) to capture long-term temporal
relationships, and a Kolmogorov-Arnold Network (KAN) to enhance the learning
process. Experimental evaluations indicate that the proposed learning structure
achieves strong performance, with an MAE as low as 1.078, a MAPE as low as
0.012, and an R^2 as high as 0.98, when compared with other competitive
networks.
|
2501.09761
|
VERITAS: Verifying the Performance of AI-native Transceiver Actions in
Base-Stations
|
eess.SP cs.AI cs.LG
|
Artificial Intelligence (AI)-native receivers provide significant performance
improvements in high-noise regimes and can potentially reduce communication
overhead compared to traditional receivers. However, their performance
highly depends on the representativeness of the training dataset. A major issue
is the uncertainty of whether the training dataset covers all test environments
and waveform configurations, and thus, whether the trained model is robust in
practical deployment conditions. To this end, we propose a joint
measurement-recovery framework for AI-native transceivers post-deployment,
called VERITAS, that continuously looks for distribution shifts in the received
signals and triggers finite re-training spurts. VERITAS monitors the wireless
channel using 5G pilots fed to an auxiliary neural network that detects
out-of-distribution channel profile, transmitter speed, and delay spread. As
soon as such a change is detected, a traditional (reference) receiver is
activated, which runs for a period of time in parallel to the AI-native
receiver. Finally, VERITAS compares the bit probabilities of the AI-native and
the reference receivers for the same received data inputs, and decides whether
or not a retraining process needs to be initiated. Our evaluations reveal that
VERITAS can detect changes in the channel profile, transmitter speed, and delay
spread with 99%, 97%, and 69% accuracies, respectively, followed by timely
initiation of retraining for 86%, 93.3%, and 94.8% of inputs in channel
profile, transmitter speed, and delay spread test sets, respectively.
|
2501.09765
|
Enhancing the De-identification of Personally Identifiable Information
in Educational Data
|
cs.CL cs.AI
|
Protecting Personally Identifiable Information (PII), such as names, is a
critical requirement in learning technologies to safeguard student and teacher
privacy and maintain trust. Accurate PII detection is an essential step toward
anonymizing sensitive information while preserving the utility of educational
data. Motivated by recent advancements in artificial intelligence, our study
investigates the GPT-4o-mini model as a cost-effective and efficient solution
for PII detection tasks. We explore both prompting and fine-tuning approaches
and compare GPT-4o-mini's performance against established frameworks, including
Microsoft Presidio and Azure AI Language. Our evaluation on two public
datasets, CRAPII and TSCC, demonstrates that the fine-tuned GPT-4o-mini model
achieves superior performance, with a recall of 0.9589 on CRAPII. Additionally,
fine-tuned GPT-4o-mini significantly improves precision scores (a threefold
increase) while reducing computational costs to nearly one-tenth of those
associated with Azure AI Language. Furthermore, our bias analysis reveals that
the fine-tuned GPT-4o-mini model consistently delivers accurate results across
diverse cultural backgrounds and genders. The generalizability analysis using
the TSCC dataset further highlights its robustness, achieving a recall of
0.9895 with minimal additional training data from TSCC. These results emphasize
the potential of fine-tuned GPT-4o-mini as an accurate and cost-effective tool
for PII detection in educational data. It offers robust privacy protection
while preserving the data's utility for research and pedagogical analysis. Our
code is available on GitHub: https://github.com/AnonJD/PrivacyAI
|
2501.09766
|
iTool: Boosting Tool Use of Large Language Models via Iterative
Reinforced Fine-Tuning
|
cs.CL cs.AI cs.LG
|
Augmenting large language models (LLMs) with external tools is known as a
promising approach to enhancing their capabilities, especially for complex
tasks. Synthesizing tool-use data through real-world simulations is an
effective way to achieve it. Nevertheless, our investigation reveals that (1)
training gains significantly decay as synthetic data increases. The model
struggles to benefit from more synthetic data due to potential data diversity
issues, resulting in poor performance in complex scenarios. Moreover, we find
that (2) this challenge primarily manifests as minor discrepancies between the
model's output and the ground truth response (termed as deficiency), such as
errors in parameter values that require complex reasoning from the context to
resolve. To this end, we propose an iterative reinforced fine-tuning strategy
designed to alleviate these challenges. This strategy involves: (1) enhancing
the diversity of synthetic data through path exploration of Monte Carlo Tree
Search. (2) iteratively identifying deficiency-related data, constructing
fine-grained preference pairs to pinpoint deficiencies, and then applying
preference optimization to optimize these deficiencies. Our experiments show
that models trained using our method achieve about 3\% better performance than
models of the same size, outperforming larger open-source and closed-source models.
|
2501.09767
|
LeMo: Enabling LEss Token Involvement for MOre Context Fine-tuning
|
cs.CL cs.AI
|
The escalating demand for long-context applications has intensified the
necessity of extending the LLM context windows. Despite recent fine-tuning
approaches successfully expanding context lengths, their high memory
footprints, especially for activations, present a critical practical
limitation. Current parameter-efficient fine-tuning methods prioritize reducing
parameter update overhead over addressing activation memory constraints.
Similarly, existing sparsity mechanisms improve computational efficiency but
overlook activation memory optimization due to the phenomenon of Shadowy
Activation.
In this paper, we propose LeMo, the first LLM fine-tuning system that
explores and exploits a new token-level sparsity mechanism inherent in
long-context scenarios, termed Contextual Token Sparsity. LeMo minimizes
redundant token involvement by assessing the informativeness of token
embeddings while preserving model accuracy. Specifically, LeMo introduces three
key techniques: (1) Token Elimination, dynamically identifying and excluding
redundant tokens across varying inputs and layers. (2) Pattern Prediction,
utilizing well-trained predictors to approximate token sparsity patterns with
minimal overhead. (3) Kernel Optimization, employing permutation-free and
segment-based strategies to boost system performance. We implement LeMo as an
end-to-end fine-tuning system compatible with various LLM architectures and
other optimization techniques. Comprehensive evaluations demonstrate that LeMo
reduces memory consumption by up to 1.93x and achieves up to 1.36x speedups,
outperforming state-of-the-art fine-tuning systems.
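Token elimination alone can be sketched as follows (a crude norm-based score stands in for the informativeness assessment; it is an illustrative proxy, not LeMo's actual criterion):

```python
import numpy as np

def eliminate_tokens(h, keep_ratio):
    """Token-elimination sketch: score each token's hidden state by L2 norm
    and keep the top fraction, preserving the original sequence order."""
    k = max(1, int(h.shape[0] * keep_ratio))
    scores = np.linalg.norm(h, axis=1)
    keep = np.sort(np.argsort(scores)[-k:])   # top-k indices, in order
    return h[keep], keep

rng = np.random.default_rng(0)
h = rng.normal(size=(8, 4))                   # 8 tokens, 4-dim embeddings
h[3] *= 10.0                                  # one clearly "informative" token
pruned, keep = eliminate_tokens(h, keep_ratio=0.25)
```

Dropping low-scoring tokens shrinks the activations that dominate long-context fine-tuning memory; the system's pattern predictors and kernels then make this pruning cheap and layer-aware.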
|
2501.09768
|
Can Large Language Models Predict the Outcome of Judicial Decisions?
|
cs.CL cs.AI
|
Large Language Models (LLMs) have shown exceptional capabilities in Natural
Language Processing (NLP) across diverse domains. However, their application in
specialized tasks such as Legal Judgment Prediction (LJP) for low-resource
languages like Arabic remains underexplored. In this work, we address this gap
by developing an Arabic LJP dataset, collected and preprocessed from Saudi
commercial court judgments. We benchmark state-of-the-art open-source LLMs,
including LLaMA-3.2-3B and LLaMA-3.1-8B, under varying configurations such as
zero-shot, one-shot, and fine-tuning using QLoRA. Additionally, we use a
comprehensive evaluation framework combining quantitative metrics (BLEU and
ROUGE) and qualitative assessments (coherence, legal language, clarity). Our
results demonstrate that fine-tuned smaller models achieve comparable
performance to larger models in task-specific contexts while offering
significant resource efficiency. Furthermore, we investigate the effects of
prompt engineering and fine-tuning on model outputs, providing insights into
performance variability and instruction sensitivity. By making the dataset,
implementation code, and models publicly available, we establish a robust
foundation for future research in Arabic legal NLP.
|
2501.09770
|
EVAL: EigenVector-based Average-reward Learning
|
cs.LG cs.AI
|
In reinforcement learning, two objective functions have been developed
extensively in the literature: discounted and averaged rewards. The
generalization to an entropy-regularized setting has led to improved robustness
and exploration for both of these objectives. Recently, the entropy-regularized
average-reward problem was addressed using tools from large deviation theory in
the tabular setting. This method has the advantage of linearity, providing
access to both the optimal policy and average reward-rate through properties of
a single matrix. In this paper, we extend that framework to more general
settings by developing approaches based on function approximation by neural
networks. This formulation reveals new theoretical insights into the
relationship between different objectives used in RL. Additionally, we combine
our algorithm with a posterior policy iteration scheme, showing how our
approach can also solve the average-reward RL problem without
entropy-regularization. Using classic control benchmarks, we experimentally
find that our method compares favorably with other algorithms in terms of
stability and rate of convergence.
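A toy version of the tabular, single-matrix picture (a Todorov-style linearly solvable formulation is used here as an illustrative stand-in for the paper's large-deviation construction; the two-state chain is made up):

```python
import numpy as np

def eigen_average_reward(P, r):
    """Entropy-regularized average reward from one tilted matrix
    M[s, s'] = exp(r[s]) * P[s, s']: the log of the dominant eigenvalue is
    the reward-rate, and the Perron eigenvector reweights the passive
    dynamics into a policy."""
    M = np.exp(r)[:, None] * P
    lam, V = np.linalg.eig(M)
    i = int(np.argmax(lam.real))
    z = np.abs(V[:, i].real)                  # positive Perron vector
    rho = float(np.log(lam.real[i]))          # average reward-rate
    pi = P * z[None, :]
    pi /= pi.sum(axis=1, keepdims=True)       # desirability-weighted policy
    return rho, pi

# Two-state chain with uniform passive dynamics: state 0 pays reward 1.
P = np.array([[0.5, 0.5], [0.5, 0.5]])
r = np.array([1.0, 0.0])
rho, pi = eigen_average_reward(P, r)
```

The linearity is the draw: one eigenpair yields both the reward-rate and the policy, and the neural-network extension in the paper replaces this tabular eigenproblem with function approximation.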
|
2501.09775
|
Multiple Choice Questions: Reasoning Makes Large Language Models (LLMs)
More Self-Confident Even When They Are Wrong
|
cs.CL cs.AI
|
One of the most widely used methods to evaluate LLMs are Multiple Choice
Question (MCQ) tests. MCQ benchmarks enable the testing of LLM knowledge on
almost any topic at scale as the results can be processed automatically. To
help the LLM answer, a few examples called few shots can be included in the
prompt. Moreover, the LLM can be asked to answer the question directly with the
selected option or to first provide the reasoning and then the selected answer,
which is known as chain of thought. In addition to checking whether the
selected answer is correct, the evaluation can look at the LLM-estimated
probability of its response as an indication of the confidence of the LLM in
the response. In this paper, we study how the LLM confidence in its answer
depends on whether the model has been asked to answer directly or to provide
the reasoning before answering. The results of the evaluation of questions on a
wide range of topics in seven different models show that LLMs are more
confident in their answers when they provide reasoning before the answer. This
occurs regardless of whether the selected answer is correct. Our hypothesis is
that this behavior is due to the reasoning that modifies the probability of the
selected answer, as the LLM predicts the answer based on the input question and
the reasoning that supports the selection made. Therefore, LLM estimated
probabilities seem to have intrinsic limitations that should be understood in
order to use them in evaluation procedures. Interestingly, the same behavior
has been observed in humans, for whom explaining an answer increases confidence
in its correctness.
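The confidence measure discussed here, the LLM-estimated probability of the selected option, is typically read off a softmax over the per-option logits of the answer token. A minimal sketch with invented logit values illustrates how reasoning that sharpens the logits raises confidence in the same answer, regardless of correctness:

```python
import math

def option_confidence(logits):
    """Softmax over per-option logits; returns the chosen option and its probability."""
    m = max(logits.values())
    exps = {k: math.exp(v - m) for k, v in logits.items()}
    z = sum(exps.values())
    probs = {k: e / z for k, e in exps.items()}
    pick = max(probs, key=probs.get)
    return pick, probs[pick]

# Illustrative logits for answer tokens 'A'..'D' under the two prompting modes.
direct_logits = {"A": 2.1, "B": 1.3, "C": 0.2, "D": -0.5}
cot_logits    = {"A": 4.0, "B": 0.8, "C": -0.1, "D": -0.9}  # sharper after reasoning

pick_d, conf_d = option_confidence(direct_logits)
pick_c, conf_c = option_confidence(cot_logits)
```

Both modes select "A", but the post-reasoning distribution concentrates more mass on it, mirroring the confidence inflation the paper reports.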
|
2501.09776
|
Multi-Head Self-Attending Neural Tucker Factorization
|
cs.LG
|
Quality-of-service (QoS) data exhibit dynamic temporal patterns that are
crucial for accurately predicting missing values. These patterns arise from the
evolving interactions between users and services, making it essential to
capture the temporal dynamics inherent in such data for improved prediction
performance. As the size and complexity of QoS datasets increase, existing
models struggle to provide accurate predictions, highlighting the need for more
flexible and dynamic methods to better capture the underlying patterns in
large-scale QoS data. To address this issue, we introduce a neural
network-based tensor factorization approach tailored for learning
spatiotemporal representations of high-dimensional and incomplete (HDI)
tensors, namely the Multi-head Self-attending Neural Tucker Factorization
(MSNTucF). The model is elaborately designed for modeling intricate nonlinear
spatiotemporal feature interaction patterns hidden in real-world data with a
two-fold idea. It first employs a neural network structure to generalize the
traditional framework of Tucker factorization and then proposes to leverage a
multi-head self-attending module to enforce nonlinear latent interaction
learning. In empirical studies on two dynamic QoS datasets from real
applications, the proposed MSNTucF model demonstrates superior performance
compared to state-of-the-art benchmark models in estimating missing
observations. This highlights its ability to learn non-linear spatiotemporal
representations of HDI tensors.
|
2501.09777
|
Sentiment Analysis in Twitter Social Network Centered on
Cryptocurrencies Using Machine Learning
|
cs.CL
|
Cryptocurrency is a digital currency secured by encryption and built on
blockchain technology. Because these currencies are decentralized, they can
influence traditional monetary systems and the capital market of a society.
Given the importance of the issue, the need to understand and analyze public
opinion on this topic is growing. Social networks are a rich source of such
opinions, and the Twitter social network is one of the main platforms where
users discuss various topics, so community sentiment can be measured there in
the shortest time and at the lowest cost. Twitter Sentiment Analysis
(TSA) is a field that analyzes the sentiment expressed in tweets. Considering
that most of TSA's research efforts on cryptocurrencies are focused on English
language, the purpose of this paper is to investigate the opinions of Iranian
users on the Twitter social network about cryptocurrencies and provide the best
model for classifying tweets based on sentiment. With automatic analysis of
tweets, managers and officials in the field of economy can learn the general
public's point of view on this issue and use the information obtained to
manage this phenomenon properly. To build sentiment classification models,
natural language processing techniques such as bag of words (BOW) and FastText
were used for text vectorization, together with classical machine learning
algorithms including KNN, SVM, and AdaBoost and deep learning methods
including LSTM and BERT for classification; the BERT language model achieved
the best accuracy, at 83.50%.
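As a rough sketch of the classical BOW pipeline mentioned above (toy English tweets and a plain cosine-similarity KNN stand in for the paper's Persian data and tuned classifiers):

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words term counts for a whitespace-tokenized text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def knn_predict(query, train, k=3):
    """Majority label among the k training tweets most similar to the query."""
    q = bow(query)
    scored = sorted(train, key=lambda ex: cosine(q, bow(ex[0])), reverse=True)
    top = [label for _, label in scored[:k]]
    return max(set(top), key=top.count)

train = [
    ("bitcoin is going up great profit", "positive"),
    ("love this coin amazing gains", "positive"),
    ("lost money terrible crash", "negative"),
    ("scam coin awful losses", "negative"),
]
print(knn_predict("amazing profit on bitcoin", train))  # → positive
```

Real pipelines would replace raw counts with TF-IDF or FastText embeddings and the KNN vote with SVM, AdaBoost, or a fine-tuned BERT head, as the abstract describes.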
|
2501.09781
|
VideoWorld: Exploring Knowledge Learning from Unlabeled Videos
|
cs.CV
|
This work explores whether a deep generative model can learn complex
knowledge solely from visual input, in contrast to the prevalent focus on
text-based models like large language models (LLMs). We develop VideoWorld, an
auto-regressive video generation model trained on unlabeled video data, and
test its knowledge acquisition abilities in video-based Go and robotic control
tasks. Our experiments reveal two key findings: (1) video-only training
provides sufficient information for learning knowledge, including rules,
reasoning and planning capabilities, and (2) the representation of visual
change is crucial for knowledge acquisition. To improve both the efficiency and
efficacy of this process, we introduce the Latent Dynamics Model (LDM) as a key
component of VideoWorld. Remarkably, VideoWorld reaches a 5-dan professional
level in the Video-GoBench with just a 300-million-parameter model, without
relying on search algorithms or reward mechanisms typical in reinforcement
learning. In robotic tasks, VideoWorld effectively learns diverse control
operations and generalizes across environments, approaching the performance of
oracle models in CALVIN and RLBench. This study opens new avenues for knowledge
acquisition from visual data, with all code, data, and models open-sourced for
further research.
|
2501.09782
|
SMPLest-X: Ultimate Scaling for Expressive Human Pose and Shape
Estimation
|
cs.CV cs.GR cs.HC cs.MM cs.RO
|
Expressive human pose and shape estimation (EHPS) unifies body, hands, and
face motion capture with numerous applications. Despite encouraging progress,
current state-of-the-art methods focus on training innovative architectural
designs on confined datasets. In this work, we investigate the impact of
scaling up EHPS towards a family of generalist foundation models. 1) For data
scaling, we perform a systematic investigation on 40 EHPS datasets,
encompassing a wide range of scenarios that a model trained on any single
dataset cannot handle. More importantly, capitalizing on insights obtained from
the extensive benchmarking process, we optimize our training scheme and select
datasets that lead to a significant leap in EHPS capabilities. Ultimately, we
achieve diminishing returns at 10M training instances from diverse data
sources. 2) For model scaling, we take advantage of vision transformers (up to
ViT-Huge as the backbone) to study the scaling law of model sizes in EHPS. To
exclude the influence of algorithmic design, we base our experiments on two
minimalist architectures: SMPLer-X, which consists of an intermediate step for
hand and face localization, and SMPLest-X, an even simpler version that reduces
the network to its bare essentials and highlights significant advances in the
capture of articulated hands. With big data and the large model, the foundation
models exhibit strong performance across diverse test benchmarks and excellent
transferability to even unseen environments. Moreover, our finetuning strategy
turns the generalist into specialist models, allowing them to achieve further
performance boosts. Notably, our foundation models consistently deliver
state-of-the-art results on seven benchmarks such as AGORA, UBody, EgoBody, and
our proposed SynHand dataset for comprehensive hand evaluation. (Code is
available at: https://github.com/wqyin/SMPLest-X).
|
2501.09783
|
GeoManip: Geometric Constraints as General Interfaces for Robot
Manipulation
|
cs.RO
|
We present GeoManip, a framework to enable generalist robots to leverage
essential conditions derived from object and part relationships, as geometric
constraints, for robot manipulation. For example, cutting the carrot requires
adhering to a geometric constraint: the blade of the knife should be
perpendicular to the carrot's direction. By interpreting these constraints
through symbolic language representations and translating them into low-level
actions, GeoManip bridges the gap between natural language and robotic
execution, enabling greater generalizability across diverse, even unseen,
tasks, objects, and scenarios. Unlike vision-language-action models that
require extensive training, GeoManip operates training-free by utilizing large
foundation models: a constraint generation module that predicts stage-specific
geometric
constraints and a geometry parser that identifies object parts involved in
these constraints. A solver then optimizes trajectories to satisfy inferred
constraints from task descriptions and the scene. Furthermore, GeoManip learns
in-context and provides five appealing human-robot interaction features:
on-the-fly policy adaptation, learning from human demonstrations, learning from
failure cases, long-horizon action planning, and efficient data collection for
imitation learning. Extensive evaluations on both simulations and real-world
scenarios demonstrate GeoManip's state-of-the-art performance, with superior
out-of-distribution generalization while avoiding costly model training.
|
2501.09798
|
Computing Optimization-Based Prompt Injections Against Closed-Weights
Models By Misusing a Fine-Tuning API
|
cs.CR cs.CL
|
We surface a new threat to closed-weight Large Language Models (LLMs) that
enables an attacker to compute optimization-based prompt injections.
Specifically, we characterize how an attacker can leverage the loss-like
information returned from the remote fine-tuning interface to guide the search
for adversarial prompts. The fine-tuning interface is hosted by an LLM vendor
and allows developers to fine-tune LLMs for their tasks, thus providing
utility, but also exposes enough information for an attacker to compute
adversarial prompts. Through an experimental analysis, we characterize the
loss-like values returned by the Gemini fine-tuning API and demonstrate that
they provide a useful signal for discrete optimization of adversarial prompts
using a greedy search algorithm. Using the PurpleLlama prompt injection
benchmark, we demonstrate attack success rates between 65% and 82% on Google's
Gemini family of LLMs. These attacks exploit the classic utility-security
tradeoff - the fine-tuning interface provides a useful feature for developers
but also exposes the LLMs to powerful attacks.
|
2501.09801
|
Conversational Text Extraction with Large Language Models Using
Retrieval-Augmented Systems
|
cs.IR cs.CL
|
This study introduces a system leveraging Large Language Models (LLMs) to
extract text and enhance user interaction with PDF documents via a
conversational interface. Utilizing Retrieval-Augmented Generation (RAG), the
system provides informative responses to user inquiries while highlighting
relevant passages within the PDF. Upon user upload, the system processes the
PDF, employing sentence embeddings to create a document-specific vector store.
This vector store enables efficient retrieval of pertinent sections in response
to user queries. The LLM then engages in a conversational exchange, using the
retrieved information to extract text and generate comprehensive, contextually
aware answers. While our approach demonstrates competitive ROUGE values
compared to existing state-of-the-art techniques for text extraction and
summarization, we acknowledge that further qualitative evaluation is necessary
to fully assess its effectiveness in real-world applications. Nevertheless,
the system offers a valuable tool for researchers, students, and anyone
seeking to efficiently extract knowledge and gain insights from documents
through an intuitive question-answering interface.
|
2501.09803
|
Graph Neural Networks for Travel Distance Estimation and Route
Recommendation Under Probabilistic Hazards
|
cs.LG
|
Estimating the shortest travel time and providing route recommendation
between different locations in a city or region can quantitatively measure the
conditions of the transportation network during or after extreme events. One
common approach is to use Dijkstra's Algorithm, which produces the shortest
path as well as the shortest distance. However, this option is computationally
expensive when applied to large-scale networks. This paper proposes a novel
fast framework based on graph neural networks (GNNs) which approximate the
single-source shortest distance between pairs of locations, and predict the
single-source shortest path subsequently. We conduct multiple experiments on
synthetic graphs of different size to demonstrate the feasibility and
computational efficiency of the proposed model. In a real-world case study, we
also applied the proposed method to flood risk analysis of coastal urban areas,
calculating delays in evacuation to public shelters during hurricanes. The
results indicate the accuracy and computational efficiency of the GNN model,
and its potential for effective implementation in emergency planning and
management.
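The exact baseline that the GNN approximates, Dijkstra's algorithm for single-source shortest distances, can be sketched as follows (the toy road network is illustrative):

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest distances on a weighted graph.

    adj: {node: [(neighbor, weight), ...]} with non-negative weights.
    """
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

roads = {
    "A": [("B", 2.0), ("C", 5.0)],
    "B": [("A", 2.0), ("C", 1.0), ("D", 4.0)],
    "C": [("A", 5.0), ("B", 1.0), ("D", 1.0)],
    "D": [("B", 4.0), ("C", 1.0)],
}
print(dijkstra(roads, "A"))  # → {'A': 0.0, 'B': 2.0, 'C': 3.0, 'D': 4.0}
```

On large disrupted networks this exact computation is what becomes expensive, motivating the learned GNN approximation.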
|
2501.09804
|
Enhancing Generalization in Chain of Thought Reasoning for Smaller
Models
|
cs.LG cs.AI cs.CL
|
Chain-of-Thought (CoT) reasoning in smaller language models is a challenging
natural language processing problem, yet highly desirable in many real-life
applications. Existing CoT knowledge distillation methods often suffer from
overly conservative memorization in smaller LLMs, leading to low generalization
confidence. As fully preserving the CoT ability of the teacher model is
impossible, we hypothesize that adversarial CoT fine-tuning is crucial for
developing smaller LLMs with robust CoT generalization. To this end, we propose
\textit{PRompt-Assisted Domain-Adversarial fine-tuning} (PRADA), a principled
fine-tuning framework that integrates diverse CoT domains. Specifically, PRADA
pioneers two CoT improvements in smaller LLMs: (1) recovering domain-invariant
features, which are typically lost during distillation, via domain-adversarial
fine-tuning; (2) enhancing the domain adaptability of CoT
prompt engineering by employing domain-adversarial approaches. We theoretically
demonstrate the effectiveness of our approach and empirically show that it
significantly outperforms the state of the art in a wide range of tasks.
Moreover, our empirical findings reveal that the smaller LLM, when leveraging
PRADA, aligns closely with domain knowledge, thereby improving the
explainability of our approach.
|
2501.09805
|
Multiplex Nodal Modularity: A novel network metric for the regional
analysis of amnestic mild cognitive impairment during a working memory
binding task
|
q-bio.NC cs.SI physics.bio-ph
|
Modularity is a well-established concept for assessing community structures
in various single and multi-layer networks, including those in biological and
social domains. Biological networks, such as the brain, are known to exhibit
group structure at a variety of scales -- local, meso, and global.
Modularity, while useful in describing mesoscale brain organization, is limited
as a metric to a global scale describing the overall strength of community
structure. This approach, while valuable, overlooks important localized
variations in community structure at the node level. To address this
limitation, we extended modularity to individual nodes. This novel measure of
nodal modularity ($nQ$) captures both meso and local scale changes in
modularity. We hypothesized that $nQ$ illuminates granular changes in the brain
due to diseases such as Alzheimer's disease (AD), which are known to disrupt
the brain's modular structure. We explored $nQ$ in multiplex networks of a
visual short-term memory binding task in fMRI and DTI data in the early stages
of AD. Observed changes in $nQ$ in fMRI and DTI networks aligned with known
trajectories of AD and were linked to common biomarkers of the disease,
including amyloid-$\beta$ and tau. Additionally, $nQ$ clearly differentiated
MCI from MCI converters, indicating that $nQ$ may be a useful
diagnostic tool for characterizing disease stages. Our findings demonstrate the
utility of $nQ$ as a measure of localized group structure, providing novel
insights into temporal and disease related variability at the node level. Given
the widespread application of modularity as a global measure, $nQ$ represents a
significant advancement, providing a granular measure of network organization
applicable to a wide range of disciplines.
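For a single network layer, a natural nodal decomposition of Newman modularity assigns each node its share of the global sum; the sketch below shows that single-layer version (the paper's multiplex definition of $nQ$ may differ in detail):

```python
import numpy as np

def nodal_modularity(A, communities):
    """Per-node contribution to Newman modularity for one network layer.

    A: symmetric adjacency matrix; communities: community label per node.
    Summing nQ over all nodes recovers the global modularity Q.
    """
    k = A.sum(axis=1)                   # node degrees
    two_m = A.sum()                     # 2m for an undirected graph
    c = np.asarray(communities)
    same = c[:, None] == c[None, :]     # delta(c_i, c_j)
    B = A - np.outer(k, k) / two_m      # modularity matrix
    return (B * same).sum(axis=1) / two_m

# Two triangles joined by a single bridge edge: nodes 0-2 vs 3-5.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
nQ = nodal_modularity(A, [0, 0, 0, 1, 1, 1])
```

In this toy graph the bridge nodes (2 and 3) receive smaller $nQ$ than the purely intra-community nodes, the kind of localized signal a global $Q$ would average away.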
|
2501.09813
|
Qwen it detect machine-generated text?
|
cs.CL
|
This paper describes the approach of the Unibuc - NLP team in tackling the
Coling 2025 GenAI Workshop, Task 1: Binary Multilingual Machine-Generated Text
Detection. We explored both masked language models and causal models. For
Subtask A, our best model achieved first place out of 36 teams on F1 Micro
(Auxiliary Score) with 0.8333, and second place on F1 Macro (Main Score) with
0.8301.
|
2501.09815
|
Lossy Compression with Pretrained Diffusion Models
|
cs.CV eess.IV
|
We apply the DiffC algorithm (Theis et al. 2022) to Stable Diffusion 1.5,
2.1, XL, and Flux-dev, and demonstrate that these pretrained models are
remarkably capable lossy image compressors. A principled algorithm for lossy
compression using pretrained diffusion models has been understood since at
least Ho et al. 2020, but challenges in reverse-channel coding have prevented
such algorithms from ever being fully implemented. We introduce simple
workarounds that lead to the first complete implementation of DiffC, which is
capable of compressing and decompressing images using Stable Diffusion in under
10 seconds. Despite requiring no additional training, our method is competitive
with other state-of-the-art generative compression methods at low to ultra-low
bitrates.
|
2501.09817
|
Generalized Single-Image-Based Morphing Attack Detection Using Deep
Representations from Vision Transformer
|
cs.CV cs.AI
|
Face morphing attacks have posed severe threats to Face Recognition Systems
(FRS), which are operated in border control and passport issuance use cases.
Correspondingly, morphing attack detection algorithms (MAD) are needed to
defend against such attacks. MAD approaches must be robust enough to handle
unknown attacks in an open-set scenario where attacks can originate from
various morphing generation algorithms, post-processing and the diversity of
printers/scanners. The problem of generalization is further pronounced when the
detection has to be made on a single suspected image. In this paper, we propose
a generalized single-image-based MAD (S-MAD) algorithm by learning the encoding
from the Vision Transformer (ViT) architecture. Compared to CNN-based
architectures, the ViT model has the advantage of integrating local and global
information and is hence well suited to detecting morphing traces widely
distributed across the face region. Extensive experiments are carried out on
face morphing datasets generated using publicly available FRGC face datasets.
Several state-of-the-art (SOTA) MAD algorithms, including representative ones
that have been publicly evaluated, have been selected and benchmarked with our
ViT-based approach. Obtained results demonstrate the improved detection
performance of the proposed S-MAD method on inter-dataset testing (when
different data is used for training and testing) and comparable performance on
intra-dataset testing (when the same data is used for training and testing)
experimental protocol.
|
2501.09819
|
Torque Responsive Metamaterials Enable High Payload Soft Robot Arms
|
cs.RO
|
Soft robots have struggled to support large forces and moments while also
supporting their own weight against gravity. This limits their ability to reach
certain configurations necessary for tasks such as inspection and pushing
objects up. We have overcome this limitation by creating an electrically driven
metamaterial soft arm using handed shearing auxetics (HSA) and bendable
extendable torque resistant (BETR) shafts. These use the large force and torque
capacity of HSAs and the nestable torque transmission of BETRs to create a
strong soft arm. We found that the HSA arm was able to push 2.3 kg vertically
and lift more than 600 g when positioned horizontally, supporting 0.33 Nm of
torque at the base. The arm is able to move between waypoints while carrying
the large payload and demonstrates consistent movement with path variance below
5 mm. The HSA arm's ability to perform active grasping with HSA grippers was
also demonstrated, requiring 20 N of pull force to dislodge the object.
Finally, we test the arm in a pipe inspection task. The arm is able to locate
all the defects while sliding against the inner surface of the pipe,
demonstrating its compliance.
|
2501.09821
|
BN-Pool: a Bayesian Nonparametric Approach to Graph Pooling
|
cs.LG math.PR
|
We introduce BN-Pool, the first clustering-based pooling method for Graph
Neural Networks (GNNs) that adaptively determines the number of supernodes in a
coarsened graph. By leveraging a Bayesian non-parametric framework, BN-Pool
employs a generative model capable of partitioning graph nodes into an
unbounded number of clusters. During training, we learn the node-to-cluster
assignments by combining the supervised loss of the downstream task with an
unsupervised auxiliary term, which encourages the reconstruction of the
original graph topology while penalizing unnecessary proliferation of clusters.
This adaptive strategy allows BN-Pool to automatically discover an optimal
coarsening level, offering enhanced flexibility and removing the need to
specify sensitive pooling ratios. We show that BN-Pool achieves superior
performance across diverse benchmarks.
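The Bayesian nonparametric idea of an unbounded number of clusters can be illustrated with a Chinese Restaurant Process prior, a standard construction of this kind; this toy sampler is illustrative and not necessarily the paper's exact generative model:

```python
import random

def crp_partition(n, alpha, seed=0):
    """Sample a partition of n items from a Chinese Restaurant Process prior.

    With concentration alpha, item i joins an existing cluster with probability
    proportional to its current size, or opens a new cluster with probability
    proportional to alpha. The number of clusters is not fixed in advance.
    """
    rng = random.Random(seed)
    assignments, sizes = [], []
    for i in range(n):
        r = rng.random() * (i + alpha)      # total weight of seated items + alpha
        acc, choice = 0.0, len(sizes)
        for c, w in enumerate(sizes + [alpha]):
            acc += w
            if r < acc:
                choice = c
                break
        if choice == len(sizes):
            sizes.append(1)                 # open a new cluster
        else:
            sizes[choice] += 1
        assignments.append(choice)
    return assignments
```

Larger `alpha` tends to produce more clusters; in a pooling layer, a posterior over such partitions is what lets the number of supernodes adapt to each graph.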
|
2501.09822
|
pFedWN: A Personalized Federated Learning Framework for D2D Wireless
Networks with Heterogeneous Data
|
cs.LG cs.NI
|
Traditional Federated Learning (FL) approaches often struggle with data
heterogeneity across clients, leading to suboptimal model performance for
individual clients. To address this issue, Personalized Federated Learning
(PFL) emerges as a solution to the challenges posed by non-independent and
identically distributed (non-IID) and unbalanced data across clients.
Furthermore, in most existing decentralized machine learning works, a perfect
communication channel is considered for model parameter transmission between
clients and servers. However, decentralized PFL over wireless links introduces
new challenges, such as resource allocation and interference management. To
overcome these challenges, we formulate a joint optimization problem that
incorporates the underlying device-to-device (D2D) wireless channel conditions
into a server-free PFL approach. The proposed method, dubbed pFedWN, optimizes
the learning performance for each client while accounting for the variability
in D2D wireless channels. To tackle the formulated problem, we divide it into
two sub-problems: PFL neighbor selection and PFL weight assignment. The PFL
neighbor selection is addressed through a channel-aware selection scheme within
unlicensed spectrum bands such as the ISM bands. Next, to assign PFL weights, we
utilize the Expectation-Maximization (EM) method to evaluate the similarity
between clients' data and obtain optimal weight distribution among the chosen
PFL neighbors. Empirical results show that pFedWN provides efficient and
personalized learning performance with non-IID and unbalanced datasets.
Furthermore, it outperforms the existing FL and PFL methods in terms of
learning efficacy and robustness, particularly under dynamic and unpredictable
wireless channel conditions.
|
2501.09825
|
Bridging Language Barriers in Healthcare: A Study on Arabic LLMs
|
cs.CL cs.AI
|
This paper investigates the challenges of developing large language models
(LLMs) proficient in both multilingual understanding and medical knowledge. We
demonstrate that simply translating medical data does not guarantee strong
performance on clinical tasks in the target language. Our experiments reveal
that the optimal language mix in training data varies significantly across
different medical tasks. We find that larger models with carefully calibrated
language ratios achieve superior performance on native-language clinical tasks.
Furthermore, our results suggest that relying solely on fine-tuning may not be
the most effective approach for incorporating new language knowledge into LLMs.
Instead, data and computationally intensive pretraining methods may still be
necessary to achieve optimal performance in multilingual medical settings.
These findings provide valuable guidance for building effective and inclusive
medical AI systems for diverse linguistic communities.
|
2501.09826
|
PIXELS: Progressive Image Xemplar-based Editing with Latent Surgery
|
cs.CV
|
Recent advancements in language-guided diffusion models for image editing are
often bottle-necked by cumbersome prompt engineering to precisely articulate
desired changes. An intuitive alternative calls on guidance from in-the-wild
image exemplars to help users bring their imagined edits to life. Contemporary
exemplar-based editing methods shy away from leveraging the rich latent space
learnt by pre-existing large text-to-image (TTI) models and fall back on
training with curated objective functions to achieve the task. Though somewhat
effective, this demands significant computational resources and lacks
compatibility with diverse base models and arbitrary exemplar count. On further
investigation, we also find that these techniques restrict user control to only
applying uniform global changes over the entire edited region. In this paper,
we introduce a novel framework for progressive exemplar-driven editing with
off-the-shelf diffusion models, dubbed PIXELS, to enable customization by
providing granular control over edits, allowing adjustments at the pixel or
region level. Our method operates solely during inference to facilitate
imitative editing, enabling users to draw inspiration from a dynamic number of
reference images, or multimodal prompts, and progressively incorporate all the
desired changes without retraining or fine-tuning existing TTI models. This
capability of fine-grained control opens up a range of new possibilities,
including selective modification of individual objects and specifying gradual
spatial changes. We demonstrate that PIXELS delivers high-quality edits
efficiently, leading to a notable improvement in quantitative metrics as well
as human evaluation. By making high-quality image editing more accessible,
PIXELS has the potential to enable professional-grade edits to a wider audience
with the ease of using any open-source image generation model.
|
2501.09832
|
Crossover-BPSO Driven Multi-Agent Technology for Managing Local Energy
Systems
|
eess.SY cs.SY
|
This article presents a new hybrid algorithm, crossover binary particle swarm
optimization (crBPSO), for allocating resources in local energy systems via
multi-agent (MA) technology. Initially, a hierarchical MA-based architecture in
a grid-connected local energy setup is presented. In this architecture,
task-specific agents operate in a master-slave manner, where the master runs a
well-formulated optimization routine aiming at minimizing the costs of energy
procurement, battery degradation, and load-scheduling delay. The slaves update
the master on their current status and receive optimal action plans
accordingly. Simulation results demonstrate that the proposed algorithm
outperforms selected existing ones by 21\% in terms of average energy system
costs while satisfying customers' energy demand and maintaining the required
quality of service.
|
2501.09833
|
EraseBench: Understanding The Ripple Effects of Concept Erasure
Techniques
|
cs.CV
|
Concept erasure techniques have recently gained significant attention for
their potential to remove unwanted concepts from text-to-image models. While
these methods often demonstrate success in controlled scenarios, their
robustness in real-world applications and readiness for deployment remain
uncertain. In this work, we identify a critical gap in evaluating sanitized
models, particularly in terms of their performance across various concept
dimensions. We systematically investigate the failure modes of current concept
erasure techniques, with a focus on visually similar, binomial, and
semantically related concepts. We propose that these interconnected
relationships give rise to a phenomenon of concept entanglement resulting in
ripple effects and degradation in image quality. To facilitate more
comprehensive evaluation, we introduce EraseBENCH, a multi-dimensional
benchmark designed to assess concept erasure methods with greater depth. Our
dataset includes over 100 diverse concepts and more than 1,000 tailored
prompts, paired with a comprehensive suite of metrics that together offer a
holistic view of erasure efficacy. Our findings reveal that even
state-of-the-art techniques struggle with maintaining quality post-erasure,
indicating that these approaches are not yet ready for real-world deployment.
This highlights the gap in reliability of the concept erasure techniques.
|
2501.09837
|
Complex-Valued Neural Networks for Ultra-Reliable Massive MIMO
|
eess.SP cs.IT cs.NI math.IT
|
In the evolving landscape of 5G and 6G networks, the demands extend beyond
high data rates, ultra-low latency, and extensive coverage, increasingly
emphasizing the need for reliability. This paper proposes an ultra-reliable
multiple-input multiple-output (MIMO) scheme utilizing quasi-orthogonal
space-time block coding (QOSTBC) combined with singular value decomposition
(SVD) for channel state information (CSI) correction, significantly improving
performance over QOSTBC and traditional orthogonal STBC (OSTBC) when analyzing
spectral efficiency. Although QOSTBC enhances spectral efficiency, it also
increases computational complexity at the maximum likelihood (ML) decoder. To
address this, a neural network-based decoding scheme using phase-transmittance
radial basis function (PT-RBF) architecture is also introduced to manage
QOSTBC's complexity. Simulation results demonstrate improved system robustness
and performance, making this approach a potential candidate for ultra-reliable
communication in next-generation networks.
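The quasi-orthogonality that gives QOSTBC its spectral-efficiency versus decoding-complexity trade-off can be seen in a small numerical sketch. This uses the standard Jafarkhani construction from two Alamouti blocks, which may differ in detail from the exact code used in the paper:

```python
import numpy as np

def alamouti(x, y):
    """2x2 orthogonal Alamouti block."""
    return np.array([[x, y], [-np.conj(y), np.conj(x)]])

def qostbc(s1, s2, s3, s4):
    """4x4 quasi-orthogonal STBC (Jafarkhani construction).

    Rows are time slots, columns are transmit antennas.
    """
    A, B = alamouti(s1, s2), alamouti(s3, s4)
    return np.block([[A, B], [-np.conj(B), np.conj(A)]])

s = np.array([1 + 1j, 1 - 1j, -1 + 1j, 2 + 0j])  # illustrative symbols
C = qostbc(*s)
G = C.conj().T @ C           # Gram matrix of the code
P = np.sum(np.abs(s) ** 2)   # total symbol energy
```

The Gram matrix has the full symbol energy on its diagonal, but residual coupling remains between the symbol pairs (s1, s4) and (s2, s3); it is exactly this coupling, absent in OSTBC, that inflates ML decoding complexity and motivates the neural decoder.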
|
2501.09838
|
CrossModalityDiffusion: Multi-Modal Novel View Synthesis with Unified
Intermediate Representation
|
cs.CV cs.AI eess.IV
|
Geospatial imaging leverages data from diverse sensing modalities, such as EO,
SAR, and LiDAR, with viewpoints ranging from ground-level drones to satellite
views. These
heterogeneous inputs offer significant opportunities for scene understanding
but present challenges in interpreting geometry accurately, particularly in the
absence of precise ground truth data. To address this, we propose
CrossModalityDiffusion, a modular framework designed to generate images across
different modalities and viewpoints without prior knowledge of scene geometry.
CrossModalityDiffusion employs modality-specific encoders that take multiple
input images and produce geometry-aware feature volumes that encode scene
structure relative to their input camera positions. The space where the feature
volumes are placed acts as a common ground for unifying input modalities. These
feature volumes are overlapped and rendered into feature images from novel
perspectives using volumetric rendering techniques. The rendered feature images
are used as conditioning inputs for a modality-specific diffusion model,
enabling the synthesis of novel images for the desired output modality. In this
paper, we show that jointly training different modules ensures consistent
geometric understanding across all modalities within the framework. We validate
CrossModalityDiffusion's capabilities on the synthetic ShapeNet cars dataset,
demonstrating its effectiveness in generating accurate and consistent novel
views across multiple imaging modalities and perspectives.
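A toy sketch of the volumetric rendering step, compositing feature samples along a single ray with transmittance weights; all densities, feature vectors, and spacings below are made-up numbers, not the paper's learned feature volumes:

```python
import numpy as np

# Toy volumetric rendering of feature samples along one ray (made-up numbers).
densities = np.array([0.1, 0.8, 1.5, 0.3])                 # sigma per sample
features = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
deltas = np.full(4, 0.25)                                  # sample spacing

alpha = 1.0 - np.exp(-densities * deltas)                  # per-sample opacity
trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha)))[:-1]  # transmittance
weights = trans * alpha                                    # compositing weights

# One rendered "feature pixel": an opacity-weighted sum of the sampled features.
rendered = (weights[:, None] * features).sum(axis=0)
```

Repeating this over all rays of a novel camera yields the rendered feature image that conditions the modality-specific diffusion model.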
|
2501.09849
|
Coded Deep Learning: Framework and Algorithm
|
cs.LG
|
The success of deep learning (DL) is often achieved with large models and
high complexity during both training and post-training inferences, hindering
training in resource-limited settings. To alleviate these issues, this paper
introduces a new framework dubbed "coded deep learning" (CDL), which
integrates information-theoretic coding concepts into the inner workings of DL,
to significantly compress model weights and activations, reduce computational
complexity at both training and post-training inference stages, and enable
efficient model/data parallelism. Specifically, within CDL, (i) we first
propose a novel probabilistic method for quantizing both model weights and
activations, and its soft differentiable variant which offers an analytic
formula for gradient calculation during training; (ii) both the forward and
backward passes during training are executed over quantized weights and
activations, eliminating most floating-point operations and reducing training
complexity; (iii) during training, both weights and activations are entropy
constrained so that they are compressible in an information-theoretic sense
throughout training, thus reducing communication costs in model/data
parallelism; and (iv) the trained model in CDL is by default in a quantized
format with compressible quantized weights, reducing post-training inference
and storage complexity. Additionally, a variant of CDL, namely relaxed CDL
(R-CDL), is presented to further improve the trade-off between validation
accuracy and compression, though it requires full-precision training; the other
advantageous features of CDL remain intact. Extensive empirical results show that CDL
and R-CDL outperform the state-of-the-art algorithms in DNN compression in the
literature.
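A generic sketch of the kind of soft, differentiable quantization that contribution (i) describes; the softmax-over-levels form below is an illustrative stand-in, not the paper's exact probabilistic formulation:

```python
import numpy as np

def soft_quantize(w, levels, temperature=0.1):
    """Soft, differentiable quantization: each weight becomes a softmax-
    weighted average of the quantization levels (a generic illustrative
    form, not the paper's exact probabilistic method)."""
    d = -(w[:, None] - levels[None, :]) ** 2 / temperature   # level affinity
    p = np.exp(d - d.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)                        # soft assignment
    return p @ levels                                        # analytic in w

levels = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
w = np.array([0.07, -0.9, 0.48])
out = soft_quantize(w, levels)   # each value is pulled toward nearby levels
```

As the temperature shrinks, the soft assignment approaches hard rounding to the nearest level while remaining differentiable at any positive temperature, which is what makes an analytic gradient formula possible during training.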
|
2501.09851
|
Learning Noisy Halfspaces with a Margin: Massart is No Harder than
Random
|
cs.LG cs.DS
|
We study the problem of PAC learning $\gamma$-margin halfspaces with Massart
noise. We propose a simple proper learning algorithm, the Perspectron, that has
sample complexity $\widetilde{O}((\epsilon\gamma)^{-2})$ and achieves
classification error at most $\eta+\epsilon$ where $\eta$ is the Massart noise
rate. Prior works [DGT19,CKMY20] came with worse sample complexity guarantees
(in both $\epsilon$ and $\gamma$) or could only handle random classification
noise [DDK+23,KIT+23] -- a much milder noise assumption. We also show that our
results extend to the more challenging setting of learning generalized linear
models with a known link function under Massart noise, achieving a similar
sample complexity to the halfspace case. This significantly improves upon the
prior state-of-the-art in this setting due to [CKMY20], who introduced this
model.
|
2501.09853
|
Greening the Grid: Electricity Market Clearing with Consumer-Based
Carbon Cost
|
eess.SY cs.SY
|
To enhance decarbonization efforts in electric power systems, we propose a
novel electricity market clearing model that internalizes the allocation of
emissions from generations to loads and allows for consideration of
consumer-side carbon costs. Specifically, consumers can not only bid for power
but also assign a cost to the carbon emissions incurred by their electricity
use. These carbon costs provide consumers, ranging from carbon-agnostic to
carbon-sensitive, with a tool to actively manage their roles in carbon emission
mitigation. By incorporating carbon allocation and consumer-side carbon costs,
the market clearing is influenced not solely by production and demand dynamics
but also by the allocation of carbon emission responsibilities. To demonstrate
the effect of our proposed model, we conduct a case study comparing market
clearing outcomes across various percentages of carbon-sensitive consumers with
differing carbon costs.
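A deliberately tiny, single-bus illustration of the mechanism: once the consumer's carbon cost is internalized, the merit order can flip from a cheap high-emission generator to a cleaner one. All prices and emission rates below are hypothetical:

```python
import numpy as np

# Toy illustration (hypothetical numbers): internalizing a consumer-side
# carbon cost changes which generator clears the market.
cost = np.array([20.0, 35.0])      # $/MWh: dirty, clean generator
emis = np.array([0.9, 0.1])        # tCO2/MWh allocated to the consumer
demand = 100.0                     # MW

def clear(carbon_price):
    # Effective price the consumer faces includes its own carbon cost.
    effective = cost + carbon_price * emis
    g = int(np.argmin(effective))  # single-bus merit order: cheapest wins
    return g, effective[g] * demand

assert clear(0.0)[0] == 0          # carbon-agnostic: dirty generator clears
assert clear(50.0)[0] == 1         # carbon-sensitive: clean generator clears
```

The full model replaces this merit-order toy with a network-constrained clearing problem in which emission responsibilities are allocated endogenously, but the incentive effect is the same.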
|
2501.09856
|
Efficient Sampling of Temporal Networks with Preserved Causality
Structure
|
cs.SI cs.DS
|
In this paper, we extend the classical Color Refinement algorithm for static
networks to temporal (undirected and directed) networks. This enables us to
design an algorithm to sample synthetic networks that preserves the $d$-hop
neighborhood structure of a given temporal network. The larger $d$ is chosen,
the better the temporal neighborhood structure of the original network is
preserved. Specifically, we provide efficient algorithms that preserve
time-respecting ("causal") paths in the networks up to length $d$, and scale to
real-world network sizes. We validate our approach theoretically (for Degree
and Katz centrality) and experimentally (for edge persistence, causal
triangles, and burstiness). An experimental comparison shows that our method
retains these key temporal characteristics more effectively than existing
randomization methods.
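The static Color Refinement (1-WL) algorithm that the paper extends to temporal networks can be sketched as follows; this is only the classical base procedure, not the temporal extension:

```python
def color_refinement(adj, rounds=3):
    """Classic static Color Refinement (1-WL): iteratively refine node colors
    by the multiset of neighbor colors. The paper's contribution extends this
    to temporal networks; this sketch is only the static base algorithm."""
    colors = {v: 0 for v in adj}
    for _ in range(rounds):
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in adj}
        palette = {}                      # compact relabeling of signatures
        for sig in sorted(set(sigs.values())):
            palette[sig] = len(palette)
        colors = {v: palette[sigs[v]] for v in sigs}
    return colors

# Path graph 0-1-2: the two endpoints are structurally equivalent.
adj = {0: [1], 1: [0, 2], 2: [1]}
c = color_refinement(adj)
```

Nodes sharing a color after refinement are locally indistinguishable, which is the structure a sampler must preserve to keep $d$-hop neighborhoods (and, in the temporal version, causal paths) intact.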
|
2501.09857
|
Efficient Probabilistic Assessment of Power System Resilience Using the
Polynomial Chaos Expansion Method with Enhanced Stability
|
eess.SY cs.SY
|
The increasing frequency and intensity of extreme weather events motivate the
assessment of power system resilience. The random nature of these events and of
the resulting failures mandates probabilistic resilience assessment, but
state-of-the-art methods (e.g., Monte Carlo simulation) are computationally
inefficient. This paper leverages the polynomial chaos expansion (PCE) method
to efficiently quantify uncertainty in power system resilience. To address
repeatability issues arising from PCE computation with different sample sets,
we propose the integration of the Maximin-LHS experiment design method with the
PCE method. Numerical studies on the IEEE 39-bus system illustrate the improved
repeatability and convergence of the proposed method. The enhanced PCE method
is then used to assess the resilience of the system and propose adaptation
measures to improve it.
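A minimal one-dimensional PCE sketch (illustrative only: it uses a plain random design rather than Maximin-LHS, and a toy function rather than a power-system model), showing how Hermite-polynomial orthogonality yields output statistics directly from the expansion coefficients:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Fit probabilists' Hermite polynomials to y = f(xi) for standard-normal xi;
# orthogonality then gives the output mean and variance from the coefficients.
rng = np.random.default_rng(1)
xi = rng.standard_normal(400)            # plain random design (not LHS)
y = np.exp(0.3 * xi)                     # stand-in "resilience metric"

deg = 4
Phi = np.column_stack([He.hermeval(xi, np.eye(deg + 1)[k])
                       for k in range(deg + 1)])
c, *_ = np.linalg.lstsq(Phi, y, rcond=None)

pce_mean = c[0]                          # E[y] = c_0
pce_var = sum(math.factorial(k) * c[k] ** 2 for k in range(1, deg + 1))
```

The repeatability issue the paper addresses arises exactly here: with a different random `xi`, the least-squares coefficients change, which motivates replacing the random design with a Maximin-LHS experiment design.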
|
2501.09858
|
From Explainability to Interpretability: Interpretable Policies in
Reinforcement Learning Via Model Explanation
|
cs.LG cs.AI cs.SY eess.SY
|
Deep reinforcement learning (RL) has shown remarkable success in complex
domains; however, the inherent black-box nature of deep neural network policies
raises significant challenges in understanding and trusting the decision-making
processes. While existing explainable RL methods provide local insights, they
fail to deliver a global understanding of the model, particularly in
high-stakes applications. To overcome this limitation, we propose a novel
model-agnostic approach that bridges the gap between explainability and
interpretability by leveraging Shapley values to transform complex deep RL
policies into transparent representations. The proposed approach offers two key
contributions: a novel application of Shapley values to policy interpretation
that goes beyond local explanations, and a general framework applicable to
off-policy and on-policy algorithms. We evaluate our approach with three
existing deep RL algorithms and validate its performance in two classic control
environments. The results demonstrate that our approach not only preserves the
original models' performance but also generates more stable interpretable
policies.
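Exact Shapley values can be computed by enumerating coalitions when the number of features is small; the brute-force sketch below is a generic attribution example, not the paper's RL pipeline, and the feature names and toy value function are hypothetical:

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley values by coalition enumeration (tractable only for a
    handful of features; a generic attribution sketch)."""
    n = len(features)
    phi = {f: 0.0 for f in features}
    for f in features:
        others = [g for g in features if g != f]
        for r in range(n):
            for S in combinations(others, r):
                w = factorial(r) * factorial(n - r - 1) / factorial(n)
                phi[f] += w * (value_fn(set(S) | {f}) - value_fn(set(S)))
    return phi

# Hypothetical additive "policy output" over three named state features.
v = lambda S: 2.0 * ('speed' in S) + 1.0 * ('angle' in S)
phi = shapley_values(['speed', 'angle', 'noise'], v)
```

For an additive game like this toy one, each feature's Shapley value equals its individual contribution, and the irrelevant feature receives zero, which is the property that makes Shapley-based summaries a natural bridge from local explanations to a global interpretable policy.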
|
2501.09859
|
Empirical Evaluation of Embedding Models in the Context of Text
Classification in Document Review in Construction Delay Disputes
|
cs.IR
|
Text embeddings are numerical representations of text data, where words,
phrases, or entire documents are converted into vectors of real numbers. These
embeddings capture semantic meanings and relationships between text elements in
a continuous vector space. The primary goal of text embeddings is to enable the
processing of text data by machine learning models, which require numerical
input. Numerous embedding models have been developed for various applications.
This paper presents our work in evaluating different embeddings through a
comprehensive comparative analysis of four distinct models, focusing on their
text classification efficacy. We employ both K-Nearest Neighbors (KNN) and
Logistic Regression (LR) to perform binary classification tasks, specifically
determining whether a text snippet is associated with 'delay' or 'not delay'
within a labeled dataset. Our research explores the use of text snippet
embeddings for training supervised text classification models to identify
delay-related statements during the document review process of construction
delay disputes. The results of this study highlight the potential of embedding
models to enhance the efficiency and accuracy of document analysis in legal
contexts, paving the way for more informed decision-making in complex
investigative scenarios.
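A minimal cosine-similarity KNN over embedding vectors, in the spirit of the 'delay' vs. 'not delay' setup; the 3-D "embeddings" below are hypothetical toy vectors, not outputs of a real embedding model:

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Minimal cosine-similarity KNN over text-embedding vectors
    (a generic sketch of the classification setup, not the paper's models)."""
    sims = train_X @ query / (np.linalg.norm(train_X, axis=1)
                              * np.linalg.norm(query))
    top = np.argsort(sims)[-k:]            # indices of k nearest neighbors
    votes = train_y[top]
    return int(votes.sum() * 2 > k)        # majority vote: 1 = 'delay'

# Hypothetical 3-D "embeddings": one cluster labeled delay, one not delay.
X = np.array([[1.0, 0.0, 0.0], [0.9, 0.1, 0.0],
              [0.0, 1.0, 0.0], [0.1, 0.9, 0.0]])
y = np.array([1, 1, 0, 0])
assert knn_predict(X, y, np.array([0.95, 0.05, 0.0])) == 1
assert knn_predict(X, y, np.array([0.05, 0.95, 0.0])) == 0
```

Real embedding vectors have hundreds or thousands of dimensions, but the workflow is identical: embed each snippet once, then classify with KNN or logistic regression on the vectors.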
|
2501.09863
|
Detection of Vascular Leukoencephalopathy in CT Images
|
eess.IV cs.CV
|
Artificial intelligence (AI) has seen a significant surge in popularity,
particularly in its application to medicine. This study explores AI's role in
diagnosing leukoencephalopathy, a small vessel disease of the brain, and a
leading cause of vascular dementia and hemorrhagic strokes. We utilized a
dataset of approximately 1200 patients with axial brain CT scans to train
convolutional neural networks (CNNs) for binary disease classification.
Addressing the challenge of varying scan dimensions due to different patient
physiologies, we processed the data to a uniform size and applied three
preprocessing methods to improve model accuracy. We compared four neural
network architectures: ResNet50, ResNet50 3D, ConvNeXt, and DenseNet. The
ConvNeXt model achieved the highest accuracy of 98.5% without any
preprocessing, outperforming models with 3D convolutions. To gain insights into
model decision-making, we implemented Grad-CAM heatmaps, which highlighted the
focus areas of the models on the scans. Our results demonstrate that AI,
particularly the ConvNeXt architecture, can significantly enhance diagnostic
accuracy for leukoencephalopathy. This study underscores AI's potential in
advancing diagnostic methodologies for brain diseases and highlights the
effectiveness of CNNs in medical imaging applications.
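The uniform-size preprocessing step can be illustrated with a simple nearest-neighbor resampler; this is a minimal stand-in for that step, not the paper's actual pipeline:

```python
import numpy as np

def resize_volume(vol, shape):
    """Nearest-neighbor resampling of a CT volume to a uniform shape,
    a minimal stand-in for the uniform-size preprocessing step."""
    idx = [np.linspace(0, s - 1, t).round().astype(int)
           for s, t in zip(vol.shape, shape)]
    return vol[np.ix_(*idx)]

# Toy "scan" with 4 slices of 6x6 pixels, resampled to a fixed 8x32x32 grid.
vol = np.arange(4 * 6 * 6).reshape(4, 6, 6).astype(np.float32)
out = resize_volume(vol, (8, 32, 32))
```

Production pipelines typically use trilinear or spline interpolation and intensity windowing, but any such step serves the same purpose: giving every patient's scan the fixed input shape the CNN expects.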
|