| id | title | categories | abstract |
|---|---|---|---|
2501.19298
|
Synthetic User Behavior Sequence Generation with Large Language Models
for Smart Homes
|
cs.AI cs.LG cs.NI
|
In recent years, as smart home systems have become more widespread, security
within these environments has become a growing concern. Currently,
most smart home security solutions, such as anomaly detection and behavior
prediction models, are trained using fixed datasets that are precollected.
However, the process of dataset collection is time-consuming and lacks the
flexibility needed to adapt to the constantly evolving smart home environment.
Additionally, the collection of personal data raises significant privacy
concerns for users. Lately, large language models (LLMs) have emerged as a
powerful tool for a wide range of tasks across diverse application domains,
thanks to their strong capabilities in natural language processing, reasoning,
and problem-solving. In this paper, we propose IoTGen, an LLM-based synthetic
dataset generation framework, to enhance the generalization of downstream smart
home intelligent models. By generating new synthetic datasets that reflect
changes in the environment, smart home intelligent models can be retrained to
overcome the limitations of fixed and outdated data, allowing them to better
align with the dynamic nature of real-world home environments. Specifically, we
first propose a Structure Pattern Perception Compression (SPPC) method tailored
for IoT behavior data, which preserves the most informative content in the data
while significantly reducing token consumption. Then, we propose a systematic
approach to create prompts and implement data generation to automatically
generate IoT synthetic data with normative and reasonable properties, assisting
task models in adaptive training to improve generalization and real-world
performance.
|
2501.19300
|
Offline Learning for Combinatorial Multi-armed Bandits
|
cs.LG
|
The combinatorial multi-armed bandit (CMAB) is a fundamental sequential
decision-making framework, extensively studied over the past decade. However,
existing work primarily focuses on the online setting, overlooking the
substantial costs of online interactions and the readily available offline
datasets. To overcome these limitations, we introduce Off-CMAB, the first
offline learning framework for CMAB. Central to our framework is the
combinatorial lower confidence bound (CLCB) algorithm, which combines
pessimistic reward estimations with combinatorial solvers. To characterize the
quality of offline datasets, we propose two novel data coverage conditions and
prove that, under these conditions, CLCB achieves a near-optimal suboptimality
gap, matching the theoretical lower bound up to a logarithmic factor. We
validate Off-CMAB through practical applications, including learning to rank,
large language model (LLM) caching, and social influence maximization, showing
its ability to handle nonlinear reward functions, general feedback models, and
out-of-distribution action samples that exclude optimal or even feasible
actions. Extensive experiments on synthetic and real-world datasets further
highlight the superior performance of CLCB.
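The pessimism-plus-solver recipe at the heart of CLCB can be sketched in a few lines; the confidence radius and the greedy top-k "solver" below are illustrative stand-ins under simple additive-reward assumptions, not the paper's exact construction:

```python
import math

def clcb_select(counts, sums, n_total, k):
    """Pessimistic top-k selection in the spirit of CLCB (a sketch):
    replace each base arm's empirical mean with a lower confidence bound,
    then hand the pessimistic estimates to a combinatorial solver; here
    greedy top-k, which is exact when the super-arm reward is a sum over
    k base arms. The confidence radius is a hypothetical choice."""
    lcbs = []
    for c, s in zip(counts, sums):
        mean = s / c
        radius = math.sqrt(math.log(n_total) / (2 * c))
        lcbs.append(mean - radius)
    return sorted(range(len(lcbs)), key=lambda i: lcbs[i], reverse=True)[:k]

# Arm 0 has the best empirical mean but only 2 offline samples;
# pessimism demotes it in favor of well-covered arms.
counts = [2, 500, 500, 500]
sums = [1.9, 350.0, 300.0, 100.0]
print(clcb_select(counts, sums, sum(counts), k=2))
```

The key design choice is that under-observed arms are penalized in proportion to their data coverage, which is exactly what the data coverage conditions above are meant to control.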
|
2501.19301
|
Beyond checkmate: exploring the creative chokepoints in AI text
|
cs.CL cs.AI
|
Large Language Models (LLMs) have revolutionized Natural Language Processing
(NLP) and Artificial Intelligence (AI), unlocking unprecedented capabilities.
This rapid advancement has spurred research into various aspects of LLMs, their
text generation & reasoning capability, and potential misuse, fueling the
necessity for robust detection methods. While much prior research has
focused on detecting LLM-generated text (AI text) and thus checkmating it,
our study investigates a relatively unexplored territory: portraying the
nuanced distinctions between human and AI texts across text segments. Whether
LLMs struggle with or excel at incorporating linguistic ingenuity across
different text segments carries substantial implications for determining their
potential as effective creative assistants to humans. Through an analogy with
the structure of chess games (comprising the opening, middle, and end game), we
analyze text segments (introduction, body, and conclusion) to determine where
the most significant distinctions between human and AI texts exist. While AI
texts can approximate the body segment better due to its increased length, a
closer examination reveals a pronounced disparity, highlighting the importance
of this segment in AI text detection. Additionally, human texts exhibit higher
cross-segment differences compared to AI texts. Overall, our research can shed
light on the intricacies of human-AI text distinctions, offering novel insights
for text detection and understanding.
|
2501.19306
|
SETS: Leveraging Self-Verification and Self-Correction for Improved
Test-Time Scaling
|
cs.AI cs.CL
|
Recent advancements in Large Language Models (LLMs) have created new
opportunities to enhance performance on complex reasoning tasks by leveraging
test-time computation. However, conventional approaches, such as repeated
sampling with majority voting or reward model scoring, often face diminishing
returns as test-time compute scales, in addition to requiring costly
task-specific reward model training. In this paper, we present Self-Enhanced
Test-Time Scaling (SETS), a novel method that leverages the self-verification
and self-correction capabilities of recent advanced LLMs to overcome these
limitations. SETS integrates sampling, self-verification, and self-correction
into a unified framework, enabling efficient and scalable test-time computation
for improved capabilities at complex tasks. Through extensive experiments on
challenging planning and reasoning benchmarks, compared to the alternatives, we
demonstrate that SETS achieves significant performance improvements and more
favorable test-time scaling laws.
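The sampling, self-verification, and self-correction loop described above can be summarized as control flow; `sample`, `verify`, and `correct` below are hypothetical stand-ins for LLM calls, not the paper's prompts:

```python
from collections import Counter

def sets_answer(question, sample, verify, correct, n_samples=4, max_rounds=2):
    """Self-Enhanced Test-Time Scaling as a control-flow sketch: each
    sampled answer is self-verified and, if rejected, self-corrected up
    to max_rounds times; the final answers are aggregated by majority
    vote. The callables stand in for LLM calls:
    sample(q) -> answer, verify(q, a) -> bool, correct(q, a) -> answer."""
    finals = []
    for _ in range(n_samples):
        a = sample(question)
        for _ in range(max_rounds):
            if verify(question, a):
                break
            a = correct(question, a)
        finals.append(a)
    return Counter(finals).most_common(1)[0][0]

# Toy stand-ins: the verifier accepts "5" and the corrector always fixes to it.
answers = iter(["3", "5", "5", "4"])
print(sets_answer("2+3?", lambda q: next(answers),
                  lambda q, a: a == "5", lambda q, a: "5"))
```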
|
2501.19307
|
Quantum-Inspired Fidelity-based Divergence
|
cs.IT math.IT
|
Kullback--Leibler (KL) divergence is a fundamental measure of the
dissimilarity between two probability distributions, but it can become unstable
in high-dimensional settings due to its sensitivity to mismatches in
distributional support. To address robustness limitations, we propose a novel
Quantum-Inspired Fidelity-based Divergence (QIF), leveraging quantum
information principles yet efficiently computable on classical hardware.
Compared to KL divergence, QIF demonstrates improved numerical stability under
partial or near-disjoint support conditions, thereby reducing the need for
extensive regularization in specific scenarios. Moreover, QIF admits
well-defined theoretical bounds and continuous similarity measures. Building on
this, we introduce a novel regularization method, QR-Drop, which utilizes QIF
to improve generalization in machine learning models. Empirical results show
that QR-Drop effectively mitigates overfitting and outperforms state-of-the-art
methods.
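The abstract does not give QIF's exact formula, but the contrast with KL can be illustrated with the classical fidelity F(p, q) = (sum_i sqrt(p_i * q_i))^2 and a simple fidelity-based divergence, -log F, which stays finite under partial support mismatch where KL diverges; this is a sketch of the idea, not the paper's definition:

```python
import math

def fidelity(p, q):
    """Classical counterpart of quantum state fidelity for distributions:
    F(p, q) = (sum_i sqrt(p_i * q_i))^2, always in [0, 1]."""
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q)) ** 2

def fidelity_divergence(p, q, eps=1e-300):
    """A fidelity-based divergence sketch (not the paper's exact QIF):
    -log F(p, q), finite whenever the supports overlap at all."""
    return -math.log(max(fidelity(p, q), eps))

def kl(p, q):
    """KL(p || q), returning inf when q lacks support where p has mass."""
    total = 0.0
    for pi, qi in zip(p, q):
        if pi == 0:
            continue
        if qi == 0:
            return math.inf
        total += pi * math.log(pi / qi)
    return total

p = [0.5, 0.5, 0.0]
q = [0.5, 0.0, 0.5]  # partial support mismatch
print(kl(p, q), fidelity_divergence(p, q))  # KL blows up, fidelity does not
```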
|
2501.19308
|
Ontological analysis of proactive life event services
|
cs.AI
|
A life event service is a direct digital public service provided jointly by
several governmental institutions so that a person can fulfill all the
obligations and use all the rights that arise due to a particular event or
situation in personal life. A life event service consolidates several public
services related to the same life event into one service for the service
consumer. This paper presents an ontological analysis of life event services,
which is based on the works by Guarino, Guizzardi, Nardi, Wagner, and others.
The purpose of the ontological analysis is to understand the meanings of life
event, proactive public service based on life event, and other related notions.
This kind of ontological analysis is crucial because for implementing the
hardware and software architectures of e-government and digital public
services, it is essential to agree upon the precise meanings of the underlying
terms.
|
2501.19309
|
Judge Decoding: Faster Speculative Sampling Requires Going Beyond Model
Alignment
|
cs.LG cs.CL
|
The performance of large language models (LLMs) is closely linked to their
underlying size, leading to ever-growing networks and hence slower inference.
Speculative decoding has been proposed as a technique to accelerate
autoregressive generation, leveraging a fast draft model to propose candidate
tokens, which are then verified in parallel based on their likelihood under the
target model. While this approach guarantees to reproduce the target output, it
incurs a substantial penalty: many high-quality draft tokens are rejected, even
when they represent objectively valid continuations. Indeed, we show that even
powerful draft models such as GPT-4o, as well as human-written text, cannot achieve high
acceptance rates under the standard verification scheme. This severely limits
the speedup potential of current speculative decoding methods, as an early
rejection becomes overwhelmingly likely when solely relying on alignment of
draft and target.
We thus ask the following question: Can we adapt verification to recognize
correct, but non-aligned replies? To this end, we draw inspiration from the
LLM-as-a-judge framework, which demonstrated that LLMs are able to rate answers
in a versatile way. We carefully design a dataset to elicit the same capability
in the target model by training a compact module on top of the embeddings to
produce "judgements" of the current continuation. We showcase our strategy on
the Llama-3.1 family, where our 8B/405B-Judge achieves a speedup of 9x over
Llama-405B, while maintaining its quality on a large range of benchmarks. These
benefits remain present even in optimized inference frameworks, where our
method reaches up to 141 tokens/s for 8B/70B-Judge and 129 tokens/s for 8B/405B
on 2 and 8 H100s respectively.
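The shift from likelihood-based to judge-based acceptance is essentially a change in the verification rule; a minimal sketch, with a hypothetical `judge_score` callable standing in for the trained module on top of the embeddings:

```python
def judge_verify(draft_tokens, judge_score, threshold=0.5):
    """Judge-style verification sketch: rather than accepting a draft
    token only when the target model's likelihood favors it, a small
    judge head scores each candidate continuation, and draft tokens are
    kept up to the first one the judge rejects. judge_score is a
    hypothetical callable: judge_score(prefix, token) -> score in [0, 1]."""
    accepted = []
    for tok in draft_tokens:
        if judge_score(accepted, tok) >= threshold:
            accepted.append(tok)
        else:
            break  # resume generation from the target model here
    return accepted

# Toy judge that accepts any continuation from a fixed whitelist.
ok = {"The", "answer", "is"}
score = lambda prefix, tok: 1.0 if tok in ok else 0.0
print(judge_verify(["The", "answer", "is", "blue"], score))
```

The point of the rule is that correct but non-aligned draft tokens can still be accepted, which standard likelihood-ratio verification would reject.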
|
2501.19314
|
An Efficient Approach for Machine Translation on Low-resource Languages:
A Case Study in Vietnamese-Chinese
|
cs.CL
|
Despite the rise of recent neural networks in machine translation, those
networks do not work well if the training data is insufficient. In this paper,
we propose an approach for machine translation in low-resource language pairs
such as Vietnamese-Chinese. Our method leverages the power of the multilingual
pre-trained language model (mBART) and both Vietnamese and Chinese monolingual
corpora. First, we build an early-bird machine translation model using the
bilingual training dataset. Second, we use the TF-IDF technique to select the
sentences from the monolingual corpora that are most related to the domains of
the parallel dataset. Finally, the first model is used to synthesize augmented
training data from the selected monolingual sentences for the translation
model. Our proposed scheme outperforms the transformer model by 8%, and the
augmented dataset further improves model performance.
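The TF-IDF selection step can be sketched as ranking monolingual sentences by similarity to the parallel-corpus domain; the whitespace tokenization and the single domain document below are toy simplifications, not the paper's exact preprocessing:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Simple TF-IDF over whitespace tokens (an illustrative sketch)."""
    n = len(docs)
    df = Counter(w for d in docs for w in set(d.split()))
    idf = {w: math.log(n / df[w]) + 1.0 for w in df}
    return [{w: tf * idf[w] for w, tf in Counter(d.split()).items()}
            for d in docs]

def cosine(u, v):
    dot = sum(x * v.get(w, 0.0) for w, x in u.items())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def select_in_domain(monolingual, domain_text, k):
    """Rank monolingual sentences by TF-IDF similarity to the parallel
    corpus domain and keep the top k for synthetic data generation."""
    vecs = tfidf_vectors(monolingual + [domain_text])
    dvec = vecs[-1]
    ranked = sorted(zip(monolingual, vecs[:-1]),
                    key=lambda sv: -cosine(sv[1], dvec))
    return [s for s, _ in ranked[:k]]

mono = ["the stock price rose sharply", "cats sleep most of the day"]
print(select_in_domain(mono, "stock market price index", k=1))
```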
|
2501.19316
|
Reverse Probing: Evaluating Knowledge Transfer via Finetuned Task
Embeddings for Coreference Resolution
|
cs.CL
|
In this work, we reimagine classical probing to evaluate knowledge transfer
from simple source to more complex target tasks. Instead of probing frozen
representations from a complex source task on diverse simple target probing
tasks (as usually done in probing), we explore the effectiveness of embeddings
from multiple simple source tasks on a single target task. We select
coreference resolution, a linguistically complex problem requiring contextual
understanding, as the focus target task, and test the usefulness of embeddings
from comparably simpler tasks such as paraphrase detection, named entity
recognition, and relation extraction. Through systematic experiments, we
evaluate the impact of individual and combined task embeddings.
Our findings reveal that task embeddings vary significantly in utility for
coreference resolution, with semantic similarity tasks (e.g., paraphrase
detection) proving most beneficial. Additionally, representations from
intermediate layers of fine-tuned models often outperform those from final
layers. Combining embeddings from multiple tasks consistently improves
performance, with attention-based aggregation yielding substantial gains. These
insights shed light on relationships between task-specific representations and
their adaptability to complex downstream tasks, encouraging further exploration
of embedding-level task transfer.
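The attention-based aggregation of task embeddings can be sketched as a softmax-weighted sum; the plain-list vectors and the fixed query below are toy stand-ins for learned tensors:

```python
import math

def attention_aggregate(task_embeddings, query):
    """Attention-based aggregation sketch: score each source-task
    embedding against a query vector, softmax the scores, and return the
    weighted sum fed to the target (coreference) model. In practice the
    query and embeddings would be learned tensors."""
    scores = [sum(q * e for q, e in zip(query, emb))
              for emb in task_embeddings]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [w / z for w in exps]
    dim = len(task_embeddings[0])
    return [sum(w * emb[i] for w, emb in zip(weights, task_embeddings))
            for i in range(dim)]

# The query aligns with the first (e.g. paraphrase-detection) embedding,
# so the aggregate is dominated by it.
agg = attention_aggregate([[1.0, 0.0], [0.0, 1.0]], query=[10.0, 0.0])
print(agg)
```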
|
2501.19317
|
LLM-based Affective Text Generation Quality Based on Different
Quantization Values
|
cs.CL
|
Large language models exhibit a remarkable capacity in language generation
and comprehension. These advances enable AI systems to produce more human-like
and emotionally engaging text. However, these models rely on a large number of
parameters, requiring significant computational resources for training and
inference. In some scenarios, accessing these resources can be challenging
(e.g., budget or hardware limitations). Techniques like reducing precision bits
can make models more memory-efficient, reducing the computational resources
needed, at the cost of reduced accuracy. This paper addresses the trade-off
between different quantization values, GPU RAM utilization, and text quality in
affective text generation (e.g., "I really enjoy running in the snow-covered
forest"). To evaluate, we use an emotion classifier and ten seed prompts to
generate affective text. We test three setups of precision bits (8, 16, and 32)
across five open-weight language models from two different families. Our
findings demonstrate that bit reductions lead to memory savings, achieving a
reduction of 76%. However, this optimization comes with a trade-off, leading to
a decrease of up to 10 pp in F1 score for larger models and an increase of 10
pp for smaller models, along with roughly double the inference time. In terms
of text quality, larger models at lower quantization levels generally
outperform smaller, higher-precision models -- while requiring similar memory.
|
2501.19318
|
MINDSTORES: Memory-Informed Neural Decision Synthesis for Task-Oriented
Reinforcement in Embodied Systems
|
cs.AI
|
While large language models (LLMs) have shown promising capabilities as
zero-shot planners for embodied agents, their inability to learn from
experience and build persistent mental models limits their robustness in
complex open-world environments like Minecraft. We introduce MINDSTORES, an
experience-augmented planning framework that enables embodied agents to build
and leverage mental models through natural interaction with their environment.
Drawing inspiration from how humans construct and refine cognitive mental
models, our approach extends existing zero-shot LLM planning by maintaining a
database of past experiences that informs future planning iterations. The key
innovation is representing accumulated experiences as natural language
embeddings of (state, task, plan, outcome) tuples, which can then be
efficiently retrieved and reasoned over by an LLM planner to generate insights
and guide plan refinement for novel states and tasks. Through extensive
experiments in the MineDojo environment, a Minecraft-based simulation platform
that provides low-level agent controls, we find that
MINDSTORES learns and applies its knowledge significantly better than existing
memory-based LLM planners while maintaining the flexibility and generalization
benefits of zero-shot approaches, representing an important step toward more
capable embodied AI systems that can learn continuously through natural
experience.
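The experience store reduces to a similarity-indexed table of (state, task, plan, outcome) tuples; `embed` below is a toy bag-of-words stand-in for a real text-embedding model, so this is only a sketch of the retrieval mechanism:

```python
import math
from collections import Counter

def embed(text):
    """Hypothetical stand-in for a text-embedding model:
    a bag-of-words vector over the words in the text."""
    return Counter(text.lower().split())

def cosine(u, v):
    dot = sum(c * v.get(w, 0) for w, c in u.items())
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

class ExperienceDB:
    """MINDSTORES-style experience memory sketch: each entry stores a
    (state, task, plan, outcome) tuple keyed by an embedding of its
    natural-language description, retrieved by similarity at planning time."""
    def __init__(self):
        self.entries = []

    def add(self, state, task, plan, outcome):
        key = embed(state + " " + task)
        self.entries.append((key, (state, task, plan, outcome)))

    def retrieve(self, state, task, k=1):
        query = embed(state + " " + task)
        ranked = sorted(self.entries, key=lambda e: -cosine(query, e[0]))
        return [entry for _, entry in ranked[:k]]

db = ExperienceDB()
db.add("in forest", "gather wood", "punch tree", "success")
db.add("in cave", "mine iron", "craft pickaxe first", "failure: no pickaxe")
print(db.retrieve("in forest", "gather wood", k=1)[0][2])
```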
|
2501.19319
|
Advancing Dense Endoscopic Reconstruction with Gaussian Splatting-driven
Surface Normal-aware Tracking and Mapping
|
cs.CV cs.RO
|
Simultaneous Localization and Mapping (SLAM) is essential for precise
surgical interventions and robotic tasks in minimally invasive procedures.
While recent advancements in 3D Gaussian Splatting (3DGS) have improved SLAM
with high-quality novel view synthesis and fast rendering, these systems
struggle with accurate depth and surface reconstruction due to multi-view
inconsistencies. Naively combining SLAM with 3DGS leads to mismatches between
the reconstructed frames. In this work, we present Endo-2DTAM, a real-time
endoscopic SLAM system with 2D Gaussian Splatting (2DGS) to address these
challenges. Endo-2DTAM incorporates a surface normal-aware pipeline, which
consists of tracking, mapping, and bundle adjustment modules for geometrically
accurate reconstruction. Our robust tracking module combines point-to-point and
point-to-plane distance metrics, while the mapping module utilizes normal
consistency and depth distortion to enhance surface reconstruction quality. We
also introduce a pose-consistent strategy for efficient and geometrically
coherent keyframe sampling. Extensive experiments on public endoscopic datasets
demonstrate that Endo-2DTAM achieves an RMSE of $1.87\pm 0.63$ mm for depth
reconstruction of surgical scenes while maintaining computationally efficient
tracking, high-quality visual appearance, and real-time rendering. Our code
will be released at github.com/lastbasket/Endo-2DTAM.
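The two distance metrics combined by the tracking module can be written down directly; the mixing weight below is hypothetical, so this is only a sketch of the combined residual:

```python
def point_to_point(p, q):
    """Squared Euclidean distance between matched points p and q."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def point_to_plane(p, q, n):
    """Squared distance from p to the tangent plane through q with unit
    normal n; penalizes deviation along the surface normal only."""
    d = sum((a - b) * c for a, b, c in zip(p, q, n))
    return d * d

def tracking_residual(p, q, n, lam=0.5):
    """Combined tracking residual sketch: the paper combines
    point-to-point and point-to-plane metrics; the weight lam here
    is a hypothetical choice."""
    return lam * point_to_point(p, q) + (1.0 - lam) * point_to_plane(p, q, n)

# A point displaced tangentially to the surface: zero point-to-plane
# error, nonzero point-to-point error.
print(tracking_residual((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))
```

Mixing the two terms is a common ICP-style design: point-to-plane converges faster on smooth surfaces, while point-to-point stabilizes regions with unreliable normals.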
|
2501.19321
|
Language Bias in Self-Supervised Learning For Automatic Speech
Recognition
|
eess.AS cs.AI cs.CL cs.LG eess.SP
|
Self-supervised learning (SSL) is used in deep learning to train on large
datasets without the need for expensive labelling of the data. Recently, large
Automatic Speech Recognition (ASR) models such as XLS-R have utilised SSL to
train on over one hundred different languages simultaneously. However, deeper
investigation shows that the bulk of the training data for XLS-R comes from a
small number of languages. Biases learned through SSL have been shown to exist
in multiple domains, but language bias in multilingual SSL ASR has not been
thoroughly examined. In this paper, we utilise the Lottery Ticket Hypothesis
(LTH) to identify language-specific subnetworks within XLS-R and test the
performance of these subnetworks on a variety of different languages. We are
able to show that when fine-tuning, XLS-R bypasses traditional linguistic
knowledge and builds only on weights learned from the languages with the
largest data contribution to the pretraining data.
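LTH-style subnetwork identification rests on magnitude pruning; a minimal sketch of the mask construction over a flattened weight list, with an illustrative sparsity level:

```python
def lottery_mask(weights, sparsity):
    """Magnitude-pruning mask in the Lottery Ticket style (a sketch of
    how a language-specific subnetwork can be exposed): keep the
    largest-magnitude (1 - sparsity) fraction of weights, zero the rest."""
    k = int(round(len(weights) * (1.0 - sparsity)))
    keep = sorted(range(len(weights)), key=lambda i: -abs(weights[i]))[:k]
    mask = [0] * len(weights)
    for i in keep:
        mask[i] = 1
    return mask

w = [0.1, -2.0, 0.05, 1.5]
mask = lottery_mask(w, sparsity=0.5)
print(mask)                                  # which weights survive pruning
print([wi * mi for wi, mi in zip(w, mask)])  # the pruned subnetwork
```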
|
2501.19324
|
Reward-Guided Speculative Decoding for Efficient LLM Reasoning
|
cs.CL cs.AI
|
We introduce Reward-Guided Speculative Decoding (RSD), a novel framework
aimed at improving the efficiency of inference in large language models (LLMs).
RSD synergistically combines a lightweight draft model with a more powerful
target model, incorporating a controlled bias to prioritize high-reward
outputs, in contrast to existing speculative decoding methods that enforce
strict unbiasedness. RSD employs a process reward model to evaluate
intermediate decoding steps and dynamically decide whether to invoke the target
model, optimizing the trade-off between computational cost and output quality.
We theoretically demonstrate that a threshold-based mixture strategy achieves
an optimal balance between resource utilization and performance. Extensive
evaluations on challenging reasoning benchmarks, including Olympiad-level
tasks, show that RSD delivers significant efficiency gains against decoding
with the target model only (up to 4.4x fewer FLOPs), while achieving
significantly better accuracy than the parallel decoding method on average (up to
+3.5). These results highlight RSD as a robust and cost-effective approach for
deploying LLMs in resource-intensive scenarios. The code is available at
https://github.com/BaohaoLiao/RSD.
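The threshold rule at the heart of RSD is simple control flow; the model and reward callables below are hypothetical stand-ins, so this sketches only the dynamic draft/target switching:

```python
def rsd_decode(prompt, draft_step, target_step, reward, threshold, max_steps):
    """Reward-guided speculative decoding as control flow (a sketch).
    A process reward model scores each intermediate state; the cheap
    draft step is kept when its reward clears the threshold, otherwise
    the expensive target model is invoked for that step."""
    seq = list(prompt)
    target_calls = 0
    for _ in range(max_steps):
        chunk = draft_step(seq)
        if reward(seq + chunk) >= threshold:
            seq += chunk                # accept the draft continuation
        else:
            target_calls += 1
            seq += target_step(seq)     # fall back to the target model
    return seq, target_calls

# Toy stand-ins: the draft is fine early on but degrades later.
draft = lambda seq: ["d"]
target = lambda seq: ["T"]
score = lambda seq: 1.0 if len(seq) < 4 else 0.0
out, calls = rsd_decode([], draft, target, score, threshold=0.5, max_steps=4)
print(out, calls)
```

Raising the threshold trades more target-model calls for higher expected reward, which is the knob the paper's threshold-based mixture strategy optimizes.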
|
2501.19325
|
A Generic Hybrid Framework for 2D Visual Reconstruction
|
cs.CV
|
This paper presents a versatile hybrid framework for addressing 2D real-world
reconstruction tasks formulated as jigsaw puzzle problems (JPPs) with square,
non-overlapping pieces. Our approach integrates a deep learning (DL)-based
compatibility measure (CM) model that evaluates pairs of puzzle pieces
holistically, rather than focusing solely on their adjacent edges as
traditionally done. This DL-based CM is paired with an optimized genetic
algorithm (GA)-based solver, which iteratively searches for a global optimal
arrangement using the pairwise CM scores of the puzzle pieces. Extensive
experimental results highlight the framework's adaptability and robustness
across multiple real-world domains. Notably, our unique hybrid methodology
achieves state-of-the-art (SOTA) results in reconstructing Portuguese tile
panels and large degraded puzzles with eroded boundaries.
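The CM-plus-GA pairing can be sketched on a 1-D toy puzzle; the real system uses a learned holistic CM and an optimized 2-D solver, so the fitness, mutation, and score matrix below are only the skeleton of the idea:

```python
import random

def fitness(arrangement, cm):
    """Sum of pairwise compatibility scores of adjacent pieces in a 1-D
    arrangement (cm is a hypothetical score matrix; the real CM is a
    learned model over 2-D neighbors)."""
    return sum(cm[a][b] for a, b in zip(arrangement, arrangement[1:]))

def ga_solve(pieces, cm, pop_size=30, generations=200, seed=0):
    """Minimal GA over piece permutations: elitist selection plus swap
    mutation, a bare-bones sketch of the optimized GA-based solver."""
    rng = random.Random(seed)
    pop = [rng.sample(pieces, len(pieces)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: -fitness(ind, cm))
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.randrange(len(child)), rng.randrange(len(child))
            child[i], child[j] = child[j], child[i]  # swap mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda ind: fitness(ind, cm))

# Toy CM rewarding placement of piece a immediately left of piece a + 1.
pieces = [0, 1, 2, 3]
cm = [[1 if b == a + 1 else 0 for b in pieces] for a in pieces]
best = ga_solve(pieces, cm)
```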
|
2501.19328
|
Capturing Temporal Dynamics in Large-Scale Canopy Tree Height Estimation
|
cs.LG cs.AI cs.CV
|
With the rise in global greenhouse gas emissions, accurate large-scale tree
canopy height maps are essential for understanding forest structure, estimating
above-ground biomass, and monitoring ecological disruptions. To this end, we
present a novel approach to generate large-scale, high-resolution canopy height
maps over time. Our model accurately predicts canopy height over multiple years
given Sentinel-2 time series satellite data. Using GEDI LiDAR data as the
ground truth for training the model, we present the first 10m resolution
temporal canopy height map of the European continent for the period 2019-2022.
As part of this product, we also offer a detailed canopy height map for 2020,
providing more precise estimates than previous studies. Our pipeline and the
resulting temporal height map are publicly available, enabling comprehensive
large-scale monitoring of forests and, hence, facilitating future research and
ecological analyses. For an interactive viewer, see
https://europetreemap.projects.earthengine.app/view/temporalcanopyheight.
|
2501.19329
|
Let Human Sketches Help: Empowering Challenging Image Segmentation Task
with Freehand Sketches
|
cs.CV
|
Sketches, with their expressive potential, allow humans to convey the essence
of an object through even a rough contour. For the first time, we harness this
expressive potential to improve segmentation performance in challenging tasks
like camouflaged object detection (COD). Our approach introduces an innovative
sketch-guided interactive segmentation framework, allowing users to intuitively
annotate objects with freehand sketches (drawing a rough contour of the object)
instead of the traditional bounding boxes or points used in classic interactive
segmentation models like SAM. We demonstrate that sketch input can
significantly improve performance in existing iterative segmentation methods,
outperforming text or bounding box annotations. Additionally, we introduce key
modifications to network architectures and a novel sketch augmentation
technique to fully harness the power of sketch input and further boost
segmentation accuracy. Remarkably, our model's output can be directly used to
train other neural networks, achieving results comparable to pixel-by-pixel
annotations while reducing annotation time by up to 120 times, which shows
great potential in democratizing the annotation process and enabling model
training with less reliance on resource-intensive, laborious pixel-level
annotations. We also present KOSCamo+, the first freehand sketch dataset for
camouflaged object detection. The dataset, code, and the labeling tool will be
open sourced.
|
2501.19331
|
Consistent Video Colorization via Palette Guidance
|
cs.CV
|
Colorization is a traditional computer vision task and it plays an important
role in many time-consuming tasks, such as old film restoration. Existing
methods suffer from unsaturated colors and temporal inconsistency. In this
paper, we propose a novel pipeline to overcome the challenges. We regard the
colorization task as a generative task and introduce Stable Video Diffusion
(SVD) as our base model. We design a palette-based color guider to assist the
model in generating vivid and consistent colors. The color context introduced
by the palette not only provides guidance for color generation, but also
enhances the stability of the generated colors through a unified color context
across multiple sequences. Experiments demonstrate that the proposed method can
provide vivid and stable colors for videos, surpassing previous methods.
|
2501.19334
|
The Value of Prediction in Identifying the Worst-Off
|
cs.CY cs.LG stat.ML
|
Machine learning is increasingly used in government programs to identify and
support the most vulnerable individuals, prioritizing assistance for those at
greatest risk over optimizing aggregate outcomes. This paper examines the
welfare impacts of prediction in equity-driven contexts, and how they compare
to other policy levers, such as expanding bureaucratic capacity. Through
mathematical models and a real-world case study on long-term unemployment
amongst German residents, we develop a comprehensive understanding of the
relative effectiveness of prediction in surfacing the worst-off. Our findings
provide clear analytical frameworks and practical, data-driven tools that
empower policymakers to make principled decisions when designing these systems.
|
2501.19335
|
What is causal about causal models and representations?
|
stat.ML cs.AI cs.LG math.ST stat.TH
|
Causal Bayesian networks are 'causal' models since they make predictions
about interventional distributions. To connect such causal model predictions to
real-world outcomes, we must determine which actions in the world correspond to
which interventions in the model. For example, to interpret an action as an
intervention on a treatment variable, the action will presumably have to a)
change the distribution of treatment in a way that corresponds to the
intervention, and b) not change other aspects, such as how the outcome depends
on the treatment; while the marginal distributions of some variables may change
as an effect. We introduce a formal framework to make such requirements for
different interpretations of actions as interventions precise. We prove that
the seemingly natural interpretation of actions as interventions is circular:
Under this interpretation, every causal Bayesian network that correctly models
the observational distribution is trivially also interventionally valid, and no
action yields empirical data that could possibly falsify such a model. We prove
an impossibility result: No interpretation exists that is non-circular and
simultaneously satisfies a set of natural desiderata. Instead, we examine
non-circular interpretations that may violate some desiderata and show how this
may in turn enable the falsification of causal models. By rigorously examining
how a causal Bayesian network could be a 'causal' model of the world instead of
merely a mathematical object, our formal framework contributes to the
conceptual foundations of causal representation learning, causal discovery, and
causal abstraction, while also highlighting some limitations of existing
approaches.
|
2501.19337
|
Homogeneity Bias as Differential Sampling Uncertainty in Language Models
|
cs.CL cs.CV
|
Prior research shows that Large Language Models (LLMs) and Vision-Language
Models (VLMs) represent marginalized groups more homogeneously than dominant
groups. However, the mechanisms underlying this homogeneity bias remain
relatively unexplored. We propose that this bias emerges from systematic
differences in the probability distributions from which tokens are sampled at
inference-time. Analyzing three measures of uncertainty in token sampling
distributions (entropy, perplexity, and probability of differentiation), we find
that in some models, specifically GPT-4 Turbo and Llama-3.2, tokens are sampled
more deterministically when generating texts about marginalized groups (i.e.,
Black Americans and women) compared to their dominant group counterparts (i.e.,
White Americans and men). While these findings may help explain homogeneity
bias in certain models, the patterns did not replicate across all VLMs tested,
suggesting multiple mechanisms may contribute to homogeneity bias in AI.
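Entropy, the first of the three uncertainty measures, is straightforward to compute from a next-token sampling distribution:

```python
import math

def token_entropy(probs):
    """Shannon entropy (in nats) of a next-token sampling distribution;
    lower entropy means more deterministic sampling, the pattern the
    abstract associates with text about marginalized groups."""
    return -sum(p * math.log(p) for p in probs if p > 0)

uniform = [0.25, 0.25, 0.25, 0.25]
peaked = [0.97, 0.01, 0.01, 0.01]
print(token_entropy(uniform), token_entropy(peaked))
```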
|
2501.19338
|
Pathological MRI Segmentation by Synthetic Pathological Data Generation
in Fetuses and Neonates
|
eess.IV cs.AI cs.CV
|
Developing new methods for the automated analysis of clinical fetal and
neonatal MRI data is limited by the scarcity of annotated pathological datasets
and privacy concerns that often restrict data sharing, hindering the
effectiveness of deep learning models. We address this in two ways. First, we
introduce Fetal&Neonatal-DDPM, a novel diffusion model framework designed to
generate high-quality synthetic pathological fetal and neonatal MRIs from
semantic label images. Second, we enhance training data by modifying healthy
label images through morphological alterations to simulate conditions such as
ventriculomegaly, cerebellar and pontocerebellar hypoplasia, and microcephaly.
By leveraging Fetal&Neonatal-DDPM, we synthesize realistic pathological MRIs
from these modified pathological label images. Radiologists rated the synthetic
MRIs as significantly (p < 0.05) superior in quality and diagnostic value
compared to real MRIs, demonstrating features such as blood vessels and choroid
plexus, and improved alignment with label annotations. Synthetic pathological
data enhanced state-of-the-art nnUNet segmentation performance, particularly
for severe ventriculomegaly cases, with the greatest improvements achieved in
ventricle segmentation (Dice scores: 0.9253 vs. 0.7317). This study underscores
the potential of generative AI as a transformative tool for data augmentation,
offering improved segmentation performance in pathological cases. This
development represents a significant step towards improving analysis and
segmentation accuracy in prenatal imaging, and also offers new ways for data
anonymization through the generation of pathologic image data.
|
2501.19339
|
PixelWorld: Towards Perceiving Everything as Pixels
|
cs.CV cs.CL
|
Existing foundation models typically process visual input as pixels and
textual input as tokens, a paradigm that contrasts with human perception, where
both modalities are processed in a unified manner. With the rise of embodied
and agentic AI, where inputs primarily come from camera pixels, the need for a
unified perception framework becomes increasingly evident. In this paper, we
propose to unify all modalities (text, tables, code, diagrams, images, etc.) as
pixel inputs, i.e., "Perceive Everything as Pixels" (PEAP). We introduce
PixelWorld, a novel evaluation suite that unifies all the mentioned modalities
into pixel space to gauge existing models' performance. Our findings show
that (1) PEAP outperforms token-based input baselines on multimodal
datasets, benefiting from unified input for better disambiguation; (2) all
models show significant declines in reasoning and coding capabilities when
processing pixel-based input, underscoring the need to enhance foundation
models' perceptual abilities; (3) larger models can maintain strong
performance on non-reasoning tasks under PEAP, while smaller models like
Phi-3.5-V suffer significant performance degradation; (4) the attention
pattern of PEAP is highly aligned with that of text token input; and (5)
PEAP can be accelerated significantly by exploiting spatial sparsity. We
conclude that existing frontier models are competent at pixel perception;
however, there is still headroom for improvement. Our code and dataset will
be released upon acceptance.
|
2501.19340
|
Towards Adaptive Self-Improvement for Smarter Energy Systems
|
eess.SY cs.SY
|
This paper introduces a hierarchical framework for decision-making and
optimization, leveraging Large Language Models (LLMs) for adaptive code
generation. Instead of direct decision-making, LLMs generate and refine
executable control policies through a meta-policy that guides task generation
and a base policy for operational actions. Applied to a simplified microgrid
scenario, the approach achieves up to 15 percent cost savings by iteratively
improving battery control strategies. The proposed methodology lays a
foundation for integrating LLM-based tools into planning and control tasks,
offering adaptable and scalable solutions for complex systems while addressing
challenges of uncertainty and reproducibility.
|
2501.19342
|
Covering Multiple Objectives with a Small Set of Solutions Using
Bayesian Optimization
|
cs.LG
|
In multi-objective black-box optimization, the goal is typically to find
solutions that optimize a set of T black-box objective functions, $f_1$, ...,
$f_T$, simultaneously. Traditional approaches often seek a single
Pareto-optimal set that balances trade-offs among all objectives. In this work,
we introduce a novel problem setting that departs from this paradigm: finding a
smaller set of K solutions, where K < T, that collectively "covers" the T
objectives. A set of solutions is defined as "covering" if, for each objective
$f_1$, ..., $f_T$, there is at least one good solution. A motivating example
for this problem setting occurs in drug design. For example, we may have T
pathogens and aim to identify a set of K < T antibiotics such that at least one
antibiotic can be used to treat each pathogen. To address this problem, we
propose Multi-Objective Coverage Bayesian Optimization (MOCOBO), a principled
algorithm designed to efficiently find a covering set. We validate our approach
through extensive experiments on challenging high-dimensional tasks, including
applications in peptide and molecular design. Experiments demonstrate MOCOBO's
ability to find high-performing covering sets of solutions. Additionally, we
show that the small sets of K < T solutions found by MOCOBO can match or nearly
match the performance of T individually optimized solutions for the same
objectives. Our results highlight MOCOBO's potential to tackle complex
multi-objective problems in domains where finding at least one high-performing
solution for each objective is critical.
|
2501.19345
|
PUATE: Semiparametric Efficient Average Treatment Effect Estimation from
Treated (Positive) and Unlabeled Units
|
cs.LG econ.EM math.ST stat.ME stat.ML stat.TH
|
The estimation of average treatment effects (ATEs), defined as the difference
in expected outcomes between treatment and control groups, is a central topic
in causal inference. This study develops semiparametric efficient estimators
for ATE estimation in a setting where only a treatment group and an unknown
group, comprising units for which it is unclear whether they received the
treatment or the control, are observable. This scenario represents a variant of
learning from positive and unlabeled data (PU learning) and can be regarded as
a special case of ATE estimation with missing data. For this setting, we derive
semiparametric efficiency bounds, which provide lower bounds on the asymptotic
variance of regular estimators. We then propose semiparametric efficient ATE
estimators whose asymptotic variance aligns with these efficiency bounds. Our
findings contribute to causal inference with missing data and weakly supervised
learning.
|
2501.19347
|
An All-digital 65-nm Tsetlin Machine Image Classification Accelerator
with 8.6 nJ per MNIST Frame at 60.3k Frames per Second
|
cs.LG cs.AR
|
We present an all-digital programmable machine learning accelerator chip for
image classification, underpinning on the Tsetlin machine (TM) principles. The
TM is a machine learning algorithm founded on propositional logic, utilizing
sub-pattern recognition expressions called clauses. The accelerator implements
the coalesced TM version with convolution, and classifies booleanized images of
28$\times$28 pixels into 10 categories. A configuration with 128 clauses is
used in a highly parallel architecture. Fast clause evaluation is obtained by
keeping all clause weights and Tsetlin automata (TA) action signals in
registers. The chip is implemented in a 65 nm low-leakage CMOS technology, and
occupies an active area of 2.7mm$^2$. At a clock frequency of 27.8 MHz, the
accelerator achieves 60.3k classifications per second, and consumes 8.6 nJ per
classification. The latency for classifying a single image is 25.4 $\mu$s which
includes system timing overhead. The accelerator achieves 97.42%, 84.54% and
82.55% test accuracies for the datasets MNIST, Fashion-MNIST and
Kuzushiji-MNIST, respectively, matching the TM software models.
|
2501.19348
|
Characterizing User Behavior: The Interplay Between Mobility Patterns
and Mobile Traffic
|
cs.NI cs.IR
|
Mobile devices have become essential for capturing human activity, and
eXtended Data Records (XDRs) offer rich opportunities for detailed user
behavior modeling, which is useful for designing personalized digital services.
Previous studies have primarily focused on aggregated mobile traffic and
mobility analyses, often neglecting individual-level insights. This paper
introduces a novel approach that explores the dependency between traffic and
mobility behaviors at the user level. By analyzing 13 individual features that
encompass traffic patterns and various mobility aspects, we enhance the
understanding of how these behaviors interact. Our advanced user modeling
framework integrates traffic and mobility behaviors over time, allowing for
fine-grained dependencies while maintaining population heterogeneity through
user-specific signatures. Furthermore, we develop a Markov model that infers
traffic behavior from mobility and vice versa, prioritizing significant
dependencies while addressing privacy concerns. Using a week-long XDR dataset
from 1,337,719 users across several provinces in Chile, we validate our
approach, demonstrating its robustness and applicability in accurately
inferring user behavior and matching mobility and traffic profiles across
diverse urban contexts.
|
2501.19351
|
Neural Implicit Solution Formula for Efficiently Solving Hamilton-Jacobi
Equations
|
cs.LG
|
This paper presents an implicit solution formula for the Hamilton-Jacobi
partial differential equation (HJ PDE). The formula is derived using the method
of characteristics and is shown to coincide with the Hopf and Lax formulas in
the case where either the Hamiltonian or the initial function is convex. It
provides a simple and efficient numerical approach for computing the viscosity
solution of HJ PDEs, bypassing the need for the Legendre transform of the
Hamiltonian or the initial condition, and the explicit computation of
individual characteristic trajectories. A deep learning-based methodology is
proposed to learn this implicit solution formula, leveraging the mesh-free
nature of deep learning to ensure scalability for high-dimensional problems.
Building upon this framework, an algorithm is developed that approximates the
characteristic curves piecewise linearly for state-dependent Hamiltonians.
Extensive experimental results demonstrate that the proposed method delivers
highly accurate solutions, even for nonconvex Hamiltonians, and exhibits
remarkable scalability, achieving computational efficiency for problems up to
40 dimensions.
|
2501.19353
|
Do Large Multimodal Models Solve Caption Generation for Scientific
Figures? Lessons Learned from SciCap Challenge 2023
|
cs.CL cs.AI cs.CV
|
Since the SciCap dataset's launch in 2021, the research community has made
significant progress in generating captions for scientific figures in scholarly
articles. In 2023, the first SciCap Challenge took place, inviting global teams
to use an expanded SciCap dataset to develop models for captioning diverse
figure types across various academic fields. At the same time, text generation
models advanced quickly, with many powerful pre-trained large multimodal models
(LMMs) emerging that showed impressive capabilities in various
vision-and-language tasks. This paper presents an overview of the first SciCap
Challenge and details the performance of various models on its data, capturing
a snapshot of the field's state. We found that professional editors
overwhelmingly preferred figure captions generated by GPT-4V over those from
all other models and even the original captions written by authors. Following
this key finding, we conducted detailed analyses to answer this question: Have
advanced LMMs solved the task of generating captions for scientific figures?
|
2501.19358
|
The Energy Loss Phenomenon in RLHF: A New Perspective on Mitigating
Reward Hacking
|
cs.LG
|
This work identifies the Energy Loss Phenomenon in Reinforcement Learning
from Human Feedback (RLHF) and its connection to reward hacking. Specifically,
energy loss in the final layer of a Large Language Model (LLM) gradually
increases during the RL process, with an excessive increase in energy loss
characterizing reward hacking. Beyond empirical analysis, we further provide a
theoretical foundation by proving that, under mild conditions, the increased
energy loss reduces the upper bound of contextual relevance in LLMs, which is a
critical aspect of reward hacking as the reduced contextual relevance typically
indicates overfitting to reward model-favored patterns in RL. To address this
issue, we propose an Energy loss-aware PPO algorithm (EPPO) which penalizes the
increase in energy loss in the LLM's final layer during reward calculation to
prevent excessive energy loss, thereby mitigating reward hacking. We
theoretically show that EPPO can be conceptually interpreted as an
entropy-regularized RL algorithm, which provides deeper insights into its
effectiveness. Extensive experiments across various LLMs and tasks demonstrate
the commonality of the energy loss phenomenon, as well as the effectiveness of
EPPO in mitigating reward hacking and improving RLHF performance.
|
2501.19361
|
We're Different, We're the Same: Creative Homogeneity Across LLMs
|
cs.CY cs.AI cs.CL cs.LG
|
Numerous powerful large language models (LLMs) are now available for use as
writing support tools, idea generators, and beyond. Although these LLMs are
marketed as helpful creative assistants, several works have shown that using an
LLM as a creative partner results in a narrower set of creative outputs.
However, these studies only consider the effects of interacting with a single
LLM, raising the question of whether such narrowed creativity stems from using
a particular LLM -- which arguably has a limited range of outputs -- or from
using LLMs in general as creative assistants. To study this question, we elicit
creative responses from humans and a broad set of LLMs using standardized
creativity tests and compare the population-level diversity of responses. We
find that LLM responses are much more similar to other LLM responses than human
responses are to each other, even after controlling for response structure and
other key variables. This finding of significant homogeneity in creative
outputs across the LLMs we evaluate adds a new dimension to the ongoing
conversation about creativity and LLMs. If today's LLMs behave similarly, using
them as creative partners -- regardless of the model used -- may drive all
users towards a limited set of "creative" outputs.
|
2501.19364
|
CoSTI: Consistency Models for (a faster) Spatio-Temporal Imputation
|
cs.LG cs.AI
|
Multivariate Time Series Imputation (MTSI) is crucial for many applications,
such as healthcare monitoring and traffic management, where incomplete data can
compromise decision-making. Existing state-of-the-art methods, like Denoising
Diffusion Probabilistic Models (DDPMs), achieve high imputation accuracy;
however, they suffer from significant computational costs and are notably
time-consuming due to their iterative nature. In this work, we propose CoSTI,
an innovative adaptation of Consistency Models (CMs) for the MTSI domain. CoSTI
employs Consistency Training to achieve comparable imputation quality to DDPMs
while drastically reducing inference times, making it more suitable for
real-time applications. We evaluate CoSTI across multiple datasets and missing
data scenarios, demonstrating up to a 98% reduction in imputation time with
performance on par with diffusion-based models. This work bridges the gap
between efficiency and accuracy in generative imputation tasks, providing a
scalable solution for handling missing data in critical spatio-temporal
systems.
|
2501.19373
|
Beyond Fixed Horizons: A Theoretical Framework for Adaptive Denoising
Diffusions
|
stat.ML cs.LG
|
We introduce a new class of generative diffusion models that, unlike
conventional denoising diffusion models, achieve a time-homogeneous structure
for both the noising and denoising processes, allowing the number of steps to
adaptively adjust based on the noise level. This is accomplished by
conditioning the forward process using Doob's $h$-transform, which terminates
the process at a suitable sampling distribution at a random time. The model is
particularly well suited for generating data with lower intrinsic dimensions,
as the termination criterion simplifies to a first-hitting rule. A key feature
of the model is its adaptability to the target data, enabling a variety of
downstream tasks using a pre-trained unconditional generative model. These
tasks include natural conditioning through appropriate initialization of the
denoising process and classification of noisy data.
|
2501.19374
|
Fixing the Double Penalty in Data-Driven Weather Forecasting Through a
Modified Spherical Harmonic Loss Function
|
cs.LG physics.ao-ph
|
Recent advancements in data-driven weather forecasting models have delivered
deterministic models that outperform the leading operational forecast systems
based on traditional, physics-based models. However, these data-driven models
are typically trained with a mean squared error loss function, which causes
smoothing of fine scales through a "double penalty" effect. We develop a
simple, parameter-free modification to this loss function that avoids this
problem by separating the loss attributable to decorrelation from the loss
attributable to spectral amplitude errors. Fine-tuning the GraphCast model with
this new loss function results in sharp deterministic weather forecasts, an
increase of the model's effective resolution from 1,250km to 160km,
improvements to ensemble spread, and improvements to predictions of tropical
cyclone strength and surface wind extremes.
|
2501.19375
|
A topological theory for qLDPC: non-Clifford gates and magic state
fountain on homological product codes with constant rate and beyond the
$N^{1/3}$ distance barrier
|
quant-ph cond-mat.str-el cs.IT hep-th math.GT math.IT
|
We develop a unified theory for fault-tolerant quantum computation in quantum
low-density parity-check (qLDPC) and topological codes. We show that there
exist hidden simplicial complex structures encoding the topological data for
all qLDPC and CSS codes obtained from product construction by generalizing the
Freedman-Hastings code-to-manifold mapping. This is achieved by building
manifolds corresponding to high-dimensional topological expanders from the
Tanner graphs of the skeleton classical or quantum codes, which further form a
product manifold and an associated thickened product code defined on its
triangulation with only a constant qubit overhead. This suggests that qLDPC or
more generally CSS codes obtained from product constructions are topological,
and hence can admit cohomology operations such as cup products, physically
corresponding to higher symmetries in the underlying topological quantum field
theory. When applying this mapping to a 3D hypergraph product code obtained
from the product of 3 copies of good classical expander codes, we obtain the
first non-Clifford logical CCZ gates via constant depth circuits on a code with
constant stabilizer weight $w=O(1)$, constant rate $K=\Theta(N)$, and
polynomial distance $D=\Omega(N^{1/3})$. When applied to 3D homological product
codes consisting of the product of a pair of good quantum and classical LDPC
codes, we can further improve the distance to $D=\Omega(\sqrt{N})$ exceeding
the $N^{1/3}$ distance barrier implied by the Bravyi-K\"onig bound for
conventional topological codes. Our work suggests that it is feasible to apply
native logical non-Clifford gates on qLDPC codes or directly inject
high-fidelity magic states as resources (`magic state fountain') without the
distillation process. For the homological product construction, the fountain
can inject $\Theta(\sqrt{N})$ magic states in parallel in a single round.
|
2501.19377
|
SELMA: A Speech-Enabled Language Model for Virtual Assistant
Interactions
|
cs.SD cs.CL cs.LG eess.AS
|
In this work, we present and evaluate SELMA, a Speech-Enabled Language Model
for virtual Assistant interactions that integrates audio and text as inputs to
a Large Language Model (LLM). SELMA is designed to handle three primary and two
auxiliary tasks related to interactions with virtual assistants simultaneously
within a single end-to-end model. We employ low-rank adaptation modules for
parameter-efficient training of both the audio encoder and the LLM.
Additionally, we implement a feature pooling strategy enabling the system to
recognize global patterns and improve accuracy on tasks less reliant on
individual sequence elements. Experimental results on Voice Trigger (VT)
detection, Device-Directed Speech Detection (DDSD), and Automatic Speech
Recognition (ASR) demonstrate that our approach significantly simplifies the
typical input processing pipeline of virtual assistants while also improving
performance compared to dedicated models for each individual task. SELMA yields
relative Equal-Error Rate improvements of 64% on the VT detection task, and 22%
on DDSD, while also achieving word error rates close to the baseline.
|
2501.19378
|
TableMaster: A Recipe to Advance Table Understanding with Language
Models
|
cs.CL
|
Tables serve as a fundamental format for representing structured relational
data. While current language models (LMs) excel at many text-based tasks, they
still face challenges in table understanding due to the complex characteristics
of tabular data, such as their structured nature. In this paper, we aim to
enhance LMs for improved table understanding. We identify four key challenges:
1) difficulty in locating target data, 2) deficiency in table semantics, 3)
numerical inaccuracies in textual reasoning, and 4) semantic inflexibility in
symbolic reasoning. To address these issues, we propose TableMaster, a recipe
and comprehensive framework that integrates multiple solutions to overcome
these obstacles. TableMaster first extracts relevant table content and
verbalizes it with enriched semantic context. Additionally, we introduce
adaptive reasoning, a flexible approach that dynamically adjusts between
textual and symbolic reasoning, tailoring the reasoning process to each query.
Extensive analyses and experiments demonstrate our findings and the
effectiveness of TableMaster. On the WikiTQ dataset, TableMaster achieves an
accuracy of 78.13% using GPT-4o-mini, surpassing existing baselines.
|
2501.19381
|
Using gradient of Lagrangian function to compute efficient channels for
the ideal observer
|
eess.SP cs.CV cs.LG math.ST stat.CO stat.TH
|
It is widely accepted that the Bayesian ideal observer (IO) should be used to
guide the objective assessment and optimization of medical imaging systems. The
IO employs complete task-specific information to compute test statistics for
making inference decisions and performs optimally in signal detection tasks.
However, the IO test statistic typically depends non-linearly on the image data
and cannot be analytically determined. The ideal linear observer, known as the
Hotelling observer (HO), can sometimes be used as a surrogate for the IO.
However, when image data are high dimensional, HO computation can be difficult.
Efficient channels that can extract task-relevant features have been
investigated to reduce the dimensionality of image data to approximate IO and
HO performance. This work proposes a novel method for generating efficient
channels by use of the gradient of a Lagrangian-based loss function that was
designed to learn the HO. The generated channels are referred to as the
Lagrangian-gradient (L-grad) channels. Numerical studies are conducted that
consider binary signal detection tasks involving various backgrounds and
signals. It is demonstrated that channelized HO (CHO) using L-grad channels can
produce significantly better signal detection performance compared to the CHO
using PLS channels. Moreover, it is shown that the proposed L-grad method can
achieve significantly lower computation time compared to the PLS method.
|
2501.19382
|
LiDAR Loop Closure Detection using Semantic Graphs with Graph Attention
Networks
|
cs.CV cs.RO
|
In this paper, we propose a novel loop closure detection algorithm that uses
graph attention neural networks to encode semantic graphs to perform place
recognition and then uses semantic registration to estimate the 6 DoF relative
pose constraint. Our place recognition algorithm has two key modules, namely, a
semantic graph encoder module and a graph comparison module. The semantic graph
encoder employs graph attention networks to efficiently encode spatial,
semantic and geometric information from the semantic graph of the input point
cloud. We then use a self-attention mechanism in both the node-embedding and
graph-embedding steps to create distinctive graph vectors. The graph vectors of
the current scan and a keyframe scan are then compared in the graph comparison
module to identify a possible loop closure. Specifically, employing the
difference of the two graph vectors showed a significant improvement in
performance, as shown in ablation studies. Lastly, we implemented a semantic
registration algorithm that takes in loop closure candidate scans and estimates
the relative 6 DoF pose constraint for the LiDAR SLAM system. Extensive
evaluation on public datasets shows that our model is more accurate and robust,
achieving 13% improvement in maximum F1 score on the SemanticKITTI dataset,
when compared to the baseline semantic graph algorithm. For the benefit of the
community, we open-source the complete implementation of our proposed algorithm
and custom implementation of semantic registration at
https://github.com/crepuscularlight/SemanticLoopClosure
|
2501.19383
|
Decoding-based Regression
|
cs.LG cs.AI cs.CL stat.ML
|
Language models have recently been shown capable of performing regression
tasks wherein numeric predictions are represented as decoded strings. In this
work, we provide theoretical grounds for this capability and furthermore
investigate the utility of causal auto-regressive sequence models when they are
applied to any feature representation. We find that, despite being trained in
the usual way - for next-token prediction via cross-entropy loss -
decoding-based regression is as performant as traditional approaches for
tabular regression tasks, while being flexible enough to capture arbitrary
distributions, such as in the task of density estimation.
|
2501.19386
|
Multi-Frame Blind Manifold Deconvolution for Rotating Synthetic Aperture
Imaging
|
stat.ME cs.CV eess.SP
|
Rotating synthetic aperture (RSA) imaging system captures images of the
target scene at different rotation angles by rotating a rectangular aperture.
Deblurring acquired RSA images plays a critical role in reconstructing a latent
sharp image underlying the scene. In the past decade, the emergence of blind
deconvolution technology has revolutionised this field by its ability to model
complex features from acquired images. Most of the existing methods attempt to
solve the above ill-posed inverse problem through maximum a posteriori estimation.
Despite this progress, researchers have paid limited attention to exploring
low-dimensional manifold structures of the latent image within a
high-dimensional ambient space. Here, we propose a novel method to process RSA
images using manifold fitting and penalisation in the context of multi-frame
blind deconvolution. We develop fast algorithms for implementing the proposed
procedure. Simulation studies demonstrate that manifold-based deconvolution can
outperform conventional deconvolution algorithms in the sense that it can
generate a sharper estimate of the latent image in terms of estimating pixel
intensities and preserving structural details.
|
2501.19389
|
Federated Sketching LoRA: On-Device Collaborative Fine-Tuning of Large
Language Models
|
cs.LG
|
Fine-tuning large language models (LLMs) on devices is attracting increasing
interest. Recent works have fused low-rank adaptation (LoRA) techniques with
federated fine-tuning to mitigate challenges associated with device model sizes
and data scarcity. Still, the heterogeneity of computational resources remains
a critical bottleneck: while higher-rank modules generally enhance performance,
varying device capabilities constrain LoRA's feasible rank range. Existing
approaches attempting to resolve this issue either lack analytical
justification or impose additional computational overhead, leaving a wide gap
for an efficient and theoretically grounded solution. To address these
challenges, we propose federated sketching LoRA (FSLoRA), which leverages a
sketching mechanism to enable devices to selectively update submatrices of
global LoRA modules maintained by the server. By adjusting the sketching
ratios, which determine the ranks of the submatrices on the devices, FSLoRA
flexibly adapts to device-specific communication and computational constraints.
We provide a rigorous convergence analysis of FSLoRA that characterizes how the
sketching ratios affect the convergence rate. Through comprehensive experiments
on multiple datasets and LLM models, we demonstrate FSLoRA's superior
performance compared to various baselines.
|
2501.19390
|
From a Frequency-Domain Willems' Lemma to Data-Driven Predictive Control
|
math.OC cs.SY eess.SY
|
Willems' fundamental lemma has recently received an impressive amount of
attention from the (data-driven) control community. In this paper, we formulate
a version of this celebrated result based on frequency-domain data. In doing
so, we bridge the gap between recent developments in data-driven analysis and
control, and the readily-available techniques and extensive expertise for
non-parametric frequency-domain identification in academia and industry. In
addition, we generalize our results to allow multiple frequency-domain data
sets to be carefully combined to form a sufficiently rich data set. Building on
these results, we propose a data-driven predictive control scheme based on
measured frequency-domain data of the plant. This novel scheme provides a
frequency-domain counterpart of the well-known data-enabled predictive control
scheme DeePC based on time-domain data. We prove that, under appropriate
conditions, the new frequency-domain data-driven predictive control (FreePC)
scheme is equivalent to the corresponding DeePC scheme, and we demonstrate the
benefits of FreePC and the use of frequency-domain data in a numerical case
study. These benefits include the ability to collect data in closed loop with a
pre-stabilizing controller, dealing with noisy data, without increasing
computational complexity, and intuitively visualizing the uncertainty in the
frequency-domain data. In addition, we further showcase the potential of our
frequency-domain Willems' fundamental lemma in applications to data-driven
simulation, and the linear-quadratic regulator (LQR) problem. Finally, we show
that our results can be used to evaluate the transfer function of the system at
a desired frequency based on a finite amount of frequency-domain data.
|
2501.19391
|
Perceptive Mixed-Integer Footstep Control for Underactuated Bipedal
Walking on Rough Terrain
|
cs.RO
|
Traversing rough terrain requires dynamic bipeds to stabilize themselves
through foot placement without stepping in unsafe areas. Planning these
footsteps online is challenging given non-convexity of the safe terrain, and
imperfect perception and state estimation. This paper addresses these
challenges with a full-stack perception and control system for achieving
underactuated walking on discontinuous terrain. First, we develop
model-predictive footstep control (MPFC), a single mixed-integer quadratic
program which assumes a convex polygon terrain decomposition to optimize over
discrete foothold choice, footstep position, ankle torque, template dynamics,
and footstep timing at over 100 Hz. We then propose a novel approach for
generating convex polygon terrain decompositions online. Our perception stack
decouples safe-terrain classification from fitting planar polygons, generating
a temporally consistent terrain segmentation in real time using a single CPU
thread. We demonstrate the performance of our perception and control stack
through outdoor experiments with the underactuated biped Cassie, achieving
state-of-the-art perceptive bipedal walking on discontinuous terrain.
Supplemental Video: https://youtu.be/eCOD1bMi638
|
2501.19392
|
Cache Me If You Must: Adaptive Key-Value Quantization for Large Language
Models
|
cs.LG
|
Efficient real-world deployments of large language models (LLMs) rely on
Key-Value (KV) caching for processing and generating long outputs, reducing the
need for repetitive computation. For large contexts, Key-Value caches can take
up tens of gigabytes of device memory, as they store vector representations for
each token and layer. Recent work has shown that the cached vectors can be
compressed through quantization, pruning or merging, but these techniques often
compromise quality at higher compression rates. In this work, we aim to
improve Key & Value compression by exploiting two observations: 1) the inherent
dependencies between keys and values across different layers, and 2)
high-compression mechanisms for internal network states. We propose AQUA-KV, an
adaptive quantization for Key-Value caches that relies on compact adapters to
exploit existing dependencies between Keys and Values, and aims to "optimally"
compress the information that cannot be predicted. AQUA-KV significantly
improves compression rates, while maintaining high accuracy on state-of-the-art
LLM families. On Llama 3.2 LLMs, we achieve near-lossless inference at 2-2.5
bits per value with under $1\%$ relative error in perplexity and LongBench
scores. AQUA-KV is one-shot, simple, and efficient: it can be calibrated on a
single GPU within 1-6 hours, even for 70B models.
|
2501.19393
|
s1: Simple test-time scaling
|
cs.CL cs.AI cs.LG
|
Test-time scaling is a promising new approach to language modeling that uses
extra test-time compute to improve performance. Recently, OpenAI's o1 model
showed this capability but did not publicly share its methodology, leading to
many replication efforts. We seek the simplest approach to achieve test-time
scaling and strong reasoning performance. First, we curate a small dataset s1K
of 1,000 questions paired with reasoning traces relying on three criteria we
validate through ablations: difficulty, diversity, and quality. Second, we
develop budget forcing to control test-time compute by forcefully terminating
the model's thinking process or lengthening it by appending "Wait" multiple
times to the model's generation when it tries to end. This can lead the model
to double-check its answer, often fixing incorrect reasoning steps. After
supervised finetuning the Qwen2.5-32B-Instruct language model on s1K and
equipping it with budget forcing, our model s1-32B exceeds o1-preview on
competition math questions by up to 27% (MATH and AIME24). Further, scaling
s1-32B with budget forcing allows extrapolating beyond its performance without
test-time intervention: from 50% to 57% on AIME24. Our model, data, and code
are open-source at https://github.com/simplescaling/s1
|
2501.19395
|
Precision Harvesting in Cluttered Environments: Integrating End Effector
Design with Dual Camera Perception
|
cs.RO
|
Due to labor shortages in specialty crop industries, a need for robotic
automation to increase agricultural efficiency and productivity has arisen.
Previous manipulation systems perform well in harvesting in uncluttered and
structured environments. High tunnel environments are more compact and
cluttered in nature, requiring a rethinking of the large form factor systems
and grippers. We propose a novel codesigned framework incorporating a global
detection camera and a local eye-in-hand camera that demonstrates precise
localization of small fruits via closed-loop visual feedback and reliable error
handling. Field experiments in high tunnels show our system reaches an average
of 85.0\% of cherry tomato fruit, taking 10.98s on average.
|
2501.19398
|
Do LLMs Strategically Reveal, Conceal, and Infer Information? A
Theoretical and Empirical Analysis in The Chameleon Game
|
cs.AI cs.GT cs.LG
|
Large language model-based (LLM-based) agents have become common in settings
that include non-cooperative parties. In such settings, agents' decision-making
needs to conceal information from their adversaries, reveal information to
their cooperators, and infer information to identify the other agents'
characteristics. To investigate whether LLMs have these information control and
decision-making capabilities, we make LLM agents play the language-based
hidden-identity game, The Chameleon. In the game, a group of non-chameleon
agents who do not know each other aim to identify the chameleon agent without
revealing a secret. The game requires the aforementioned information control
capabilities both as a chameleon and a non-chameleon. The empirical results
show that while non-chameleon LLM agents identify the chameleon, they fail to
conceal the secret from the chameleon, and their winning probability is far
from the levels of even trivial strategies. To formally explain this behavior,
we give a theoretical analysis for a spectrum of strategies, from concealing to
revealing, and provide bounds on the non-chameleons' winning probability. Based
on the empirical results and theoretical analysis of different strategies, we
deduce that LLM-based non-chameleon agents reveal excessive information to
agents of unknown identities. Our results point to a weakness of contemporary
LLMs, including GPT-4, GPT-4o, Gemini 1.5, and Claude 3.5 Sonnet, in strategic
interactions.
|
2501.19399
|
Scalable-Softmax Is Superior for Attention
|
cs.CL cs.AI cs.LG
|
The maximum element of the vector output by the Softmax function approaches
zero as the input vector size increases. Transformer-based language models rely
on Softmax to compute attention scores, causing the attention distribution to
flatten as the context size grows. This reduces the model's ability to
prioritize key information effectively and potentially limits its length
generalization. To address this problem, we propose Scalable-Softmax (SSMax),
which replaces Softmax in scenarios where the input vector size varies. SSMax
can be seamlessly integrated into existing Transformer-based architectures.
Experimental results in language modeling show that models using SSMax not only
achieve faster loss reduction during pretraining but also significantly improve
performance in long contexts and key information retrieval. Furthermore, an
analysis of attention scores reveals that SSMax enables the model to focus
attention on key information even in long contexts. Additionally, although
models that use SSMax from the beginning of pretraining achieve better length
generalization, those that have already started pretraining can still gain some
of this ability by replacing Softmax in the attention layers with SSMax, either
during or after pretraining.
|
2501.19400
|
Vintix: Action Model via In-Context Reinforcement Learning
|
cs.LG cs.AI cs.RO
|
In-Context Reinforcement Learning (ICRL) represents a promising paradigm for
developing generalist agents that learn at inference time through
trial-and-error interactions, analogous to how large language models adapt
contextually, but with a focus on reward maximization. However, the scalability
of ICRL beyond toy tasks and single-domain settings remains an open challenge.
In this work, we present the first steps toward scaling ICRL by introducing a
fixed, cross-domain model capable of learning behaviors through in-context
reinforcement learning. Our results demonstrate that Algorithm Distillation, a
framework designed to facilitate ICRL, offers a compelling and competitive
alternative to expert distillation to construct versatile action models. These
findings highlight the potential of ICRL as a scalable approach for generalist
decision-making systems. Code to be released at
https://github.com/dunnolab/vintix
|
2501.19401
|
Detection Is All You Need: A Feasible Optimal Prior-Free Black-Box
Approach For Piecewise Stationary Bandits
|
cs.LG stat.ML
|
We study the problem of piecewise stationary bandits without prior knowledge
of the underlying non-stationarity. We propose the first $\textit{feasible}$
black-box algorithm applicable to most common parametric bandit variants. Our
procedure, termed Detection Augmented Bandit (DAB), is modular, accepting any
stationary bandit algorithm as input and augmenting it with a change detector.
DAB achieves optimal regret in the piecewise stationary setting under mild
assumptions. Specifically, we prove that DAB attains the order-optimal regret
bound of $\tilde{\mathcal{O}}(\sqrt{N_T T})$, where $N_T$ denotes the number of
changes over the horizon $T$, if its input stationary bandit algorithm has
order-optimal stationary regret guarantees. Applying DAB to different
parametric bandit settings, we recover recent state-of-the-art results.
Notably, for self-concordant bandits, DAB achieves optimal dynamic regret,
while previous works obtain suboptimal bounds and require knowledge on the
non-stationarity. In simulations on piecewise stationary environments, DAB
outperforms existing approaches across varying numbers of changes.
Interestingly, despite being theoretically designed for piecewise stationary
environments, DAB is also effective in simulations in drifting environments,
outperforming existing methods designed specifically for this scenario.
|
2501.19403
|
Redefining Machine Unlearning: A Conformal Prediction-Motivated Approach
|
cs.LG cs.AI
|
Machine unlearning seeks to systematically remove specified data from a
trained model, effectively achieving a state as though the data had never been
encountered during training. While metrics such as Unlearning Accuracy (UA) and
Membership Inference Attack (MIA) provide a baseline for assessing unlearning
performance, they fall short of evaluating the completeness and reliability of
forgetting. This is because the ground truth labels remain potential candidates
within the scope of uncertainty quantification, leaving gaps in the evaluation
of true forgetting. In this paper, we identify critical limitations in existing
unlearning metrics and propose enhanced evaluation metrics inspired by
conformal prediction. Our metrics can effectively capture the extent to which
ground truth labels are excluded from the prediction set. Furthermore, we
observe that many existing machine unlearning methods do not achieve
satisfactory forgetting performance when evaluated with our new metrics. To
address this, we propose an unlearning framework that integrates conformal
prediction insights into Carlini & Wagner adversarial attack loss. Extensive
experiments on the image classification task demonstrate that our enhanced
metrics offer deeper insights into unlearning effectiveness, and that our
unlearning framework significantly improves the forgetting quality of
unlearning methods.
|
2501.19405
|
Human-Precision Medicine Interaction: Public Perceptions of Polygenic
Risk Score for Genetic Health Prediction
|
cs.HC cs.CE cs.ET
|
Precision Medicine (PM) transforms the traditional "one-drug-fits-all"
paradigm by customising treatments based on individual characteristics, and is
an emerging topic for HCI research on digital health. A key element of PM, the
Polygenic Risk Score (PRS), uses genetic data to predict an individual's
disease risk. Despite its potential, PRS faces barriers to adoption, such as
data inclusivity, psychological impact, and public trust. We conducted a
mixed-methods study to explore how people perceive PRS, formed of surveys
(n=254) and interviews (n=11) with UK-based participants. The interviews were
supplemented by interactive storyboards with the ContraVision technique to
provoke deeper reflection and discussion. We identified ten key barriers and
five themes to PRS adoption and proposed design implications for a responsible
PRS framework. To address the complexities of PRS and enhance broader PM
practices, we introduce the term Human-Precision Medicine Interaction (HPMI),
which integrates, adapts, and extends HCI approaches to better meet these
challenges.
|
2501.19406
|
Low-Rank Adapting Models for Sparse Autoencoders
|
cs.LG
|
Sparse autoencoders (SAEs) decompose language model representations into a
sparse set of linear latent vectors. Recent works have improved SAEs using
language model gradients, but these techniques require many expensive backward
passes during training and still cause a significant increase in cross entropy
loss when SAE reconstructions are inserted into the model. In this work, we
improve on these limitations by taking a fundamentally different approach: we
use low-rank adaptation (LoRA) to finetune the language model itself around a
previously trained SAE. We analyze our method across SAE sparsity, SAE width,
language model size, LoRA rank, and model layer on the Gemma Scope family of
SAEs. In these settings, our method reduces the cross entropy loss gap by 30%
to 55% when SAEs are inserted during the forward pass. We also find that
compared to end-to-end (e2e) SAEs, our approach achieves the same downstream
cross entropy loss 3$\times$ to 20$\times$ faster on Gemma-2-2B and 2$\times$
to 10$\times$ faster on Llama-3.2-1B. We further show that our technique
improves downstream metrics and can adapt multiple SAEs at once. Our results
demonstrate that improving model interpretability is not limited to post-hoc
SAE training; Pareto improvements can also be achieved by directly optimizing
the model itself.
|
2501.19407
|
Algorithmic Inheritance: Surname Bias in AI Decisions Reinforces
Intergenerational Inequality
|
cs.CY cs.AI cs.LG econ.GN q-fin.EC
|
Surnames often convey implicit markers of social status, wealth, and lineage,
shaping perceptions in ways that can perpetuate systemic biases and
intergenerational inequality. This study is the first of its kind to
investigate whether and how surnames influence AI-driven decision-making,
focusing on their effects across key areas such as hiring recommendations,
leadership appointments, and loan approvals. Using 72,000 evaluations of 600
surnames from the United States and Thailand, two countries with distinct
sociohistorical contexts and surname conventions, we classify names into four
categories: Rich, Legacy, Normal, and phonetically similar Variant groups. Our
findings show that elite surnames consistently increase AI-generated
perceptions of power, intelligence, and wealth, which in turn influence
AI-driven decisions in high-stakes contexts. Mediation analysis reveals
perceived intelligence as a key mechanism through which surname biases
influence the AI decision-making process. While providing objective qualifications
alongside surnames mitigates most of these biases, it does not eliminate them
entirely, especially in contexts where candidate credentials are low. These
findings highlight the need for fairness-aware algorithms and robust policy
measures to prevent AI systems from reinforcing systemic inequalities tied to
surnames, an often-overlooked bias compared to more salient characteristics
such as race and gender. Our work calls for a critical reassessment of
algorithmic accountability and its broader societal impact, particularly in
systems designed to uphold meritocratic principles while counteracting the
perpetuation of intergenerational privilege.
|
2502.00003
|
Defending Compute Thresholds Against Legal Loopholes
|
cs.CY cs.AI
|
Existing legal frameworks on AI rely on training compute thresholds as a
proxy to identify potentially-dangerous AI models and trigger increased
regulatory attention. In the United States, Section 4.2(a) of Executive Order
14110 instructs the Secretary of Commerce to require extensive reporting from
developers of AI models above a certain training compute threshold. In the
European Union, Article 51 of the AI Act establishes a presumption that AI
models above a certain compute threshold have high impact capabilities and
hence pose systemic risk, thus subjecting their developers to several
obligations including capability evaluations, reporting, and incident
monitoring. In this paper, we examine some enhancement techniques that are
capable of decreasing training compute usage while preserving, or even
increasing, model capabilities. Since training compute thresholds rely on
training compute as a metric and trigger for increased regulatory attention,
these capability-enhancing and compute-saving techniques could constitute a
legal loophole to existing training compute thresholds. In particular, we
concentrate on four illustrative techniques (fine-tuning, model reuse, model
expansion, and above compute-optimal inference compute) with the goal of
furthering the conversation about their implications on training compute
thresholds as a legal mechanism and advancing policy recommendations that could
address the relevant legal loopholes.
|
2502.00005
|
A Study about Distribution and Acceptance of Conversational Agents for
Mental Health in Germany: Keep the Human in the Loop?
|
cs.HC cs.AI cs.CY
|
Good mental health enables individuals to cope with the normal stresses of
life. In Germany, approximately one-quarter of the adult population is affected
by mental illnesses. Teletherapy and digital health applications are available
to bridge gaps in care and relieve healthcare professionals. The acceptance of
these tools is a strongly influencing factor for their effectiveness, which
also needs to be evaluated for AI-based conversational agents (CAs) (e.g.,
ChatGPT, Siri) to assess the risks and potential for integration into
therapeutic practice. This study investigates the perspectives of both the
general population and healthcare professionals with the following questions:
1. How frequently are CAs used for mental health? 2. How high is the acceptance
of CAs in the field of mental health? 3. To what extent is the use of CAs in
counselling, diagnosis, and treatment acceptable? To address these questions,
two quantitative online surveys were conducted with 444 participants from the
general population and 351 healthcare professionals. Statistical analyses show
that 27% of the surveyed population already confide their concerns to CAs. Not
only experience with this technology but also experience with telemedicine
shows a higher acceptance among both groups for using CAs for mental health.
Additionally, participants from the general population were more likely to
support CAs as companions controlled by healthcare professionals rather than as
additional experts for the professionals. CAs have the potential to support
mental health, particularly in counselling. Future research should examine the
influence of different communication media and further possibilities of
augmented intelligence. With the right balance between technology and human
care, integration into patient-professional interaction can be achieved.
|
2502.00008
|
Zoning in American Cities: Are Reforms Making a Difference? An AI-based
Analysis
|
cs.CY cs.CL
|
Cities are at the forefront of addressing global sustainability challenges,
particularly those exacerbated by climate change. Traditional zoning codes,
which often segregate land uses, have been linked to increased vehicular
dependence, urban sprawl, and social disconnection, undermining broader social
and environmental sustainability objectives. This study investigates the
adoption and impact of form-based codes (FBCs), which aim to promote
sustainable, compact, and mixed-use urban forms as a solution to these issues.
Using Natural Language Processing (NLP) techniques, we analyzed zoning
documents from over 2000 U.S. census-designated places to identify linguistic
patterns indicative of FBC principles. Our findings reveal widespread adoption
of FBCs across the country, with notable variations within regions. FBCs are
associated with higher floor-to-area ratios, narrower and more consistent
street setbacks, and smaller plots. We also find that places with FBCs have
improved walkability, shorter commutes, and a higher share of multi-family
housing. Our findings highlight the utility of NLP for evaluating zoning codes
and underscore the potential benefits of form-based zoning reforms for
enhancing urban sustainability.
|
2502.00011
|
TOAST Framework: A Multidimensional Approach to Ethical and Sustainable
AI Integration in Organizations
|
cs.CY cs.AI cs.HC
|
Artificial Intelligence (AI) has emerged as a transformative technology with
the potential to revolutionize various sectors, from healthcare to finance,
education, and beyond. However, successfully implementing AI systems remains a
complex challenge, requiring a comprehensive and methodologically sound
framework. This paper contributes to this challenge by introducing the
Trustworthy, Optimized, Adaptable, and Socio-Technologically harmonious (TOAST)
framework. It draws on insights from various disciplines to align technical
strategy with ethical values, societal responsibilities, and innovation
aspirations. The TOAST framework is a novel approach designed to guide the
implementation of AI systems, focusing on reliability, accountability,
technical advancement, adaptability, and socio-technical harmony. By grounding
the TOAST framework in healthcare case studies, this paper provides a robust
evaluation of its practicality and theoretical soundness in addressing
operational, ethical, and regulatory challenges in high-stakes environments,
demonstrating how adaptable AI systems can enhance institutional efficiency,
mitigate risks like bias and data privacy, and offer a replicable model for
other sectors requiring ethically aligned and efficient AI integration.
|
2502.00013
|
Behavioural Analytics: Mathematics of the Mind
|
cs.CY cs.LG
|
Behavioural analytics provides insights into individual and crowd behaviour,
enabling analysis of what previously happened and predictions for how people
may be likely to act in the future. In defence and security, this analysis
allows organisations to achieve tactical and strategic advantage through
influence campaigns, a key counterpart to physical activities. Before action
can be taken, online and real-world behaviour must be analysed to determine the
level of threat. Huge data volumes mean that automated processes are required
to attain an accurate understanding of risk. We describe the mathematical basis
of technologies to analyse quotes in multiple languages. These include a
Bayesian network to understand behavioural factors, state estimation algorithms
for time series analysis, and machine learning algorithms for classification.
We present results from studies of quotes in English, French, and Arabic, from
anti-violence campaigners, politicians, extremists, and terrorists. The
algorithms correctly identify extreme statements; and analysis at individual,
group, and population levels detects both trends over time and sharp changes
attributed to major geopolitical events. Group analysis shows that additional
population characteristics can be determined, such as polarisation over
particular issues and large-scale shifts in attitude. Finally, MP voting
behaviour and statements from publicly-available records are analysed to
determine the level of correlation between what people say and what they do.
|
2502.00015
|
Ethical Concerns of Generative AI and Mitigation Strategies: A
Systematic Mapping Study
|
cs.CY cs.AI
|
[Context] Generative AI technologies, particularly Large Language Models
(LLMs), have transformed numerous domains by enhancing convenience and
efficiency in information retrieval, content generation, and decision-making
processes. However, deploying LLMs also presents diverse ethical challenges,
and their mitigation strategies remain complex and domain-dependent.
[Objective] This paper aims to identify and categorize the key ethical concerns
associated with using LLMs, examine existing mitigation strategies, and assess
the outstanding challenges in implementing these strategies across various
domains. [Method] We conducted a systematic mapping study, reviewing 39 studies
that discuss ethical concerns and mitigation strategies related to LLMs. We
analyzed these ethical concerns using five ethical dimensions that we extracted
based on various existing guidelines, frameworks, and an analysis of the
mitigation strategies and implementation challenges. [Results] Our findings
reveal that ethical concerns in LLMs are multi-dimensional and
context-dependent. While proposed mitigation strategies address some of these
concerns, significant challenges still remain. [Conclusion] Our results
highlight that ethical issues often hinder the practical implementation of the
mitigation strategies, particularly in high-stake areas like healthcare and
public governance; existing frameworks often lack adaptability, failing to
accommodate evolving societal expectations and diverse contexts.
|
2502.00017
|
A Frugal Model for Accurate Early Student Failure Prediction
|
cs.CY cs.LG
|
Predicting student success or failure is vital for timely interventions and
personalized support. Early failure prediction is particularly crucial, yet
limited data availability in the early stages poses challenges. One possible
solution is to use additional data from other contexts; however, this might
lead to overconsumption with no guarantee of better results. To address this,
we propose the Frugal Early Prediction (FEP) model, a
new hybrid model that selectively incorporates additional data, promoting data
frugality and efficient resource utilization. Experiments conducted on a public
dataset from a VLE demonstrate FEP's effectiveness in reducing data usage, a
primary goal of this research. Experiments showcase a remarkable 27% reduction
in data consumption, compared to a systematic use of additional data, aligning
with our commitment to data frugality and offering substantial benefits to
educational institutions seeking efficient data consumption. Additionally, FEP
also excels in enhancing prediction accuracy. Compared to traditional
approaches, FEP achieves an average accuracy gain of 7.3%. This not only
highlights the practicality and efficiency of FEP but also its superiority in
performance, while respecting resource constraints, providing beneficial
findings for educational institutions seeking data frugality.
|
2502.00018
|
An Expectation-Maximization Algorithm-based Autoregressive Model for the
Fuzzy Job Shop Scheduling Problem
|
cs.AI
|
The fuzzy job shop scheduling problem (FJSSP) emerges as an innovative
extension to the job shop scheduling problem (JSSP), incorporating a layer of
uncertainty that aligns the problem more closely with the complexities of
real-world manufacturing environments. This improvement increases the
computational complexity of deriving the solution while improving its
applicability. In the domain of deterministic scheduling, neural combinatorial
optimization (NCO) has recently demonstrated remarkable efficacy. However, its
application to the realm of fuzzy scheduling has been relatively unexplored.
This paper aims to bridge this gap by investigating the feasibility of
employing neural networks to assimilate and process fuzzy information for the
resolution of FJSSP, thereby leveraging the advancements in NCO to enhance
fuzzy scheduling methodologies. To achieve this, we approach the FJSSP as a
generative task and introduce an expectation-maximization algorithm-based
autoregressive model (EMARM) to address it. During training, our model
alternates between generating scheduling schemes from given instances (E-step)
and adjusting the autoregressive model weights based on these generated schemes
(M-step). This novel methodology effectively navigates around the substantial
hurdle of obtaining ground-truth labels, which is a prevalent issue in NCO
frameworks. In testing, the experimental results demonstrate the superior
capability of EMARM in addressing the FJSSP, showcasing its effectiveness and
potential for practical applications in fuzzy scheduling.
|
2502.00019
|
Growth Patterns of Inference
|
cs.AI
|
What properties of a first-order search space support/hinder inference? What
kinds of facts would be most effective to learn? Answering these questions is
essential for understanding the dynamics of deductive reasoning and creating
large-scale knowledge-based learning systems that support efficient inference.
We address these questions by developing a model of how the distribution of
ground facts affects inference performance in search spaces. Experiments
suggest that uniform search spaces are suitable for larger KBs whereas search
spaces with skewed degree distribution show better performance in smaller KBs.
A sharp transition in Q/A performance is seen in some cases, suggesting that
analysis of the structure of search spaces with existing knowledge should be
used to guide the acquisition of new ground facts in learning systems.
|
2502.00020
|
Temporal Reasoning in AI systems
|
cs.AI
|
Commonsense temporal reasoning at scale is a core problem for cognitive
systems. The correct inference of the duration for which fluents hold is
required by many tasks, including natural language understanding and planning.
Many AI systems have limited deductive closure because they cannot extrapolate
information correctly regarding existing fluents and events. In this study, we
discuss the knowledge representation and reasoning schemes required for robust
temporal projection in the Cyc Knowledge Base. We discuss how events can start
and end risk periods for fluents. We then use discrete survival functions,
which represent knowledge of the persistence of facts, to extrapolate a given
fluent. The extrapolated intervals can be truncated by temporal constraints and
other types of commonsense knowledge. Finally, we present the results of
experiments to demonstrate that these methods obtain significant improvements
in terms of Q/A performance.
|
2502.00021
|
PixelBrax: Learning Continuous Control from Pixels End-to-End on the GPU
|
cs.LG cs.PF
|
We present PixelBrax, a set of continuous control tasks with pixel
observations. We combine the Brax physics engine with a pure JAX renderer,
allowing reinforcement learning (RL) experiments to run end-to-end on the GPU.
PixelBrax can render observations over thousands of parallel environments and
can run two orders of magnitude faster than existing benchmarks that rely on
CPU-based rendering. Additionally, PixelBrax supports fully reproducible
experiments through its explicit handling of any stochasticity within the
environments and supports color and video distractors for benchmarking
generalization. We open-source PixelBrax alongside JAX implementations of
several RL algorithms at github.com/trevormcinroe/pixelbrax.
|
2502.00022
|
A Dynamic and High-Precision Method for Scenario-Based HRA Synthetic
Data Collection in Multi-Agent Collaborative Environments Driven by LLMs
|
cs.AI cs.HC
|
HRA (Human Reliability Analysis) data is crucial for advancing HRA
methodologies. However, existing data collection methods lack the necessary
granularity, and most approaches fail to capture dynamic features.
Additionally, many methods require expert knowledge as input, making them
time-consuming and labor-intensive. To address these challenges, we propose a
new paradigm for the automated collection of HRA data. Our approach focuses on
key indicators behind human error, specifically measuring workload in
collaborative settings. This study introduces a novel, scenario-driven method
for workload estimation, leveraging fine-tuned large language models (LLMs). By
training LLMs on real-world operational data from high-temperature gas-cooled
reactors (HTGRs), we simulate human behavior and cognitive load in real time
across various collaborative scenarios. The method dynamically adapts to
changes in operator workload, providing more accurate, flexible, and scalable
workload estimates. The results demonstrate that the proposed WELLA (Workload
Estimation with LLMs and Agents) outperforms existing commercial LLM-based
methods in terms of prediction accuracy.
|
2502.00023
|
Musical Agent Systems: MACAT and MACataRT
|
cs.MA cs.AI cs.HC cs.SD eess.AS
|
Our research explores the development and application of musical agents,
human-in-the-loop generative AI systems designed to support music performance
and improvisation within co-creative spaces. We introduce MACAT and MACataRT,
two distinct musical agent systems crafted to enhance interactive music-making
between human musicians and AI. MACAT is optimized for agent-led performance,
employing real-time synthesis and self-listening to shape its output
autonomously, while MACataRT provides a flexible environment for collaborative
improvisation through audio mosaicing and sequence-based learning. Both systems
emphasize training on personalized, small datasets, fostering ethical and
transparent AI engagement that respects artistic integrity. This research
highlights how interactive, artist-centred generative AI can expand creative
possibilities, empowering musicians to explore new forms of artistic expression
in real-time, performance-driven and music improvisation contexts.
|
2502.00024
|
Retail Market Analysis
|
q-fin.GN cs.LG
|
This project focuses on analyzing retail market trends using historical sales
data, search trends, and customer reviews. By identifying the patterns and
trending products, the analysis provides actionable insights for retailers to
optimize inventory management and marketing strategies, ultimately enhancing
customer satisfaction and maximizing revenue.
|
2502.00025
|
Leveraging Large Language Models to Enhance Machine Learning
Interpretability and Predictive Performance: A Case Study on Emergency
Department Returns for Mental Health Patients
|
cs.LG cs.AI cs.CY
|
Importance: Emergency department (ED) returns for mental health conditions
pose a major healthcare burden, with 24-27% of patients returning within 30
days. Traditional machine learning models for predicting these returns often
lack interpretability for clinical use.
Objective: To assess whether integrating large language models (LLMs) with
machine learning improves predictive accuracy and clinical interpretability of
ED mental health return risk models.
Methods: This retrospective cohort study analyzed 42,464 ED visits for 27,904
unique mental health patients at an academic medical center in the Deep South
from January 2018 to December 2022.
Main Outcomes and Measures: Two primary outcomes were evaluated: (1) 30-day
ED return prediction accuracy and (2) model interpretability using a novel
LLM-enhanced framework integrating SHAP (SHapley Additive exPlanations) values
with clinical knowledge.
Results: For chief complaint classification, LLaMA 3 (8B) with 10-shot
learning outperformed traditional models (accuracy: 0.882, F1-score: 0.86). In
SDoH classification, LLM-based models achieved 0.95 accuracy and 0.96 F1-score,
with Alcohol, Tobacco, and Substance Abuse performing best (F1: 0.96-0.89),
while Exercise and Home Environment showed lower performance (F1: 0.70-0.67).
The LLM-based interpretability framework achieved 99% accuracy in translating
model predictions into clinically relevant explanations. LLM-extracted features
improved XGBoost AUC from 0.74 to 0.76 and AUC-PR from 0.58 to 0.61.
Conclusions and Relevance: Integrating LLMs with machine learning models
yielded modest but consistent accuracy gains while significantly enhancing
interpretability through automated, clinically relevant explanations. This
approach provides a framework for translating predictive analytics into
actionable clinical insights.
|
2502.00026
|
Pushing the Limits of BFP on Narrow Precision LLM Inference
|
cs.AR cs.AI
|
The substantial computational and memory demands of Large Language Models
(LLMs) hinder their deployment. Block Floating Point (BFP) has proven effective
in accelerating linear operations, a cornerstone of LLM workloads. However, as
sequence lengths grow, nonlinear operations, such as Attention, increasingly
become performance bottlenecks due to their quadratic computational complexity.
These nonlinear operations are predominantly executed in inefficient
floating-point formats, which makes it difficult to optimize both software
efficiency and hardware overhead. In this paper, we delve into the
limitations and potential of applying BFP to nonlinear operations. Given our
findings, we introduce a hardware-software co-design framework (DB-Attn),
including: (i) DBFP, an advanced BFP version, overcomes nonlinear operation
challenges with a pivot-focus strategy for diverse data and an adaptive
grouping strategy for flexible exponent sharing. (ii) DH-LUT, a novel lookup
table algorithm dedicated to accelerating nonlinear operations with DBFP
format. (iii) An RTL-level DBFP-based engine is implemented to support DB-Attn,
applicable to FPGA and ASIC. Results show that DB-Attn provides significant
performance improvements with negligible accuracy loss, achieving a 74% GPU
speedup on the Softmax of LLaMA and a 10x low-overhead performance improvement
over SOTA designs.
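For context, the core Block Floating Point idea that this abstract builds on — values in a group share one exponent while mantissas are stored as small fixed-point integers — can be sketched as follows. This is a minimal illustration of generic BFP, not the paper's DBFP scheme; the group size and mantissa width are arbitrary choices.

```python
import numpy as np

def bfp_quantize(x, group_size=8, mantissa_bits=8):
    """Quantize a 1-D array into Block Floating Point: each group of
    values shares one exponent, and mantissas are fixed-point integers."""
    x = np.asarray(x, dtype=np.float64)
    pad = (-len(x)) % group_size
    groups = np.pad(x, (0, pad)).reshape(-1, group_size)
    # Shared exponent per group: large enough to cover the largest magnitude.
    max_abs = np.max(np.abs(groups), axis=1, keepdims=True)
    exp = np.where(max_abs > 0, np.ceil(np.log2(max_abs + 1e-300)), 0.0)
    scale = 2.0 ** exp / (2 ** (mantissa_bits - 1))
    # Round mantissas to signed integers and clip to the representable range.
    mant = np.clip(np.round(groups / scale),
                   -(2 ** (mantissa_bits - 1)), 2 ** (mantissa_bits - 1) - 1)
    return (mant * scale).reshape(-1)[:len(x)]
```

Hardware BFP designs choose the group shape to match the datapath; the adaptive grouping the abstract describes generalizes the fixed `group_size` used here.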
|
2502.00027
|
Analysis of a Memcapacitor-Based for Neural Network Accelerator
Framework
|
cs.AR cs.AI cs.NE
|
Data-intensive computing tasks, such as training neural networks, are crucial
for artificial intelligence applications but often come with high energy
demands. One promising solution is to develop specialized hardware that
directly maps neural networks, utilizing arrays of memristive devices to
perform parallel multiply-accumulate operations. In our research, we introduce
a novel CMOS-based memcapacitor circuit that is validated using the Cadence
tool. Additionally, we developed the device in Python to facilitate the design
of a memcapacitive-based accelerator. Our proposed framework employs a crossbar
array of memcapacitor devices to train a neural network capable of digit
classification and CIFAR dataset recognition. We tested the non-ideal
characteristics of the constructed memcapacitor-based neural network. The
system achieved an impressive 98.4% training accuracy in digit recognition and
94.4% training accuracy in CIFAR recognition, highlighting its effectiveness.
This study demonstrates the potential of memcapacitor-based neural network
systems in handling classification tasks and sets the stage for further
advancements in neuromorphic computing.
|
2502.00029
|
AlphaSharpe: LLM-Driven Discovery of Robust Risk-Adjusted Metrics
|
q-fin.PM cs.AI cs.CL cs.NE q-fin.RM
|
Financial metrics like the Sharpe ratio are pivotal in evaluating investment
performance by balancing risk and return. However, traditional metrics often
struggle with robustness and generalization, particularly in dynamic and
volatile market conditions. This paper introduces AlphaSharpe, a novel
framework that leverages large language models (LLMs) with iterative
crossover, mutation, and evaluation to evolve and optimize financial metrics,
discovering enhanced risk-return metrics that outperform traditional
approaches in robustness and in correlation with future performance.
Key contributions of this work include: (1) a novel use of LLMs to generate and
refine financial metrics with implicit domain-specific knowledge, (2) a scoring
mechanism to ensure that evolved metrics generalize effectively to unseen data,
and (3) an empirical demonstration of 3x predictive power for future
risk-returns, and 2x portfolio performance. Experimental results in a
real-world dataset highlight the superiority of discovered metrics, making them
highly relevant to portfolio managers and financial decision-makers. This
framework not only addresses the limitations of existing metrics but also
showcases the potential of LLMs in advancing financial analytics, paving the
way for informed and robust investment strategies.
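For reference, the baseline metric the framework evolves beyond — the annualized Sharpe ratio — is straightforward to compute. A standard textbook sketch, not the paper's discovered metric; the 252-trading-day annualization and the zero-variance convention are assumptions.

```python
import numpy as np

def sharpe_ratio(returns, risk_free=0.0, periods_per_year=252):
    """Annualized Sharpe ratio of a series of periodic returns:
    mean excess return divided by its standard deviation, annualized."""
    r = np.asarray(returns, dtype=float) - risk_free
    sd = r.std(ddof=1)
    if sd == 0:
        return 0.0  # convention: undefined ratio mapped to zero
    return float(np.sqrt(periods_per_year) * r.mean() / sd)
```

The robustness problems the abstract mentions are visible here: a single outlier return moves both the mean and the standard deviation, so the ratio can swing sharply on small samples.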
|
2502.00031
|
GNN-based Anchor Embedding for Exact Subgraph Matching
|
cs.SI cs.DB
|
Subgraph matching query is a classic problem in graph data management and has
a variety of real-world applications, such as discovering structures in
biological or chemical networks, finding communities in social network
analysis, explaining neural networks, and so on. To solve the subgraph
matching problem, several recent works have attempted to utilize
deep-learning-based techniques to handle the subgraph matching query. However,
most of these works only obtain approximate results for subgraph matching
without theoretical guarantees of accuracy. In this paper, we propose a novel
and effective graph neural network (GNN)-based anchor embedding framework
(GNN-AE), which allows exact subgraph matching. Unlike GNN-based approximate
subgraph matching approaches that only produce inexact results, in this paper,
we pioneer a series of concepts related to anchor (including anchor, anchor
graph/path, etc.) in subgraph matching and carefully devise the anchor (graph)
embedding technique based on GNN models. We transform the subgraph matching
problem into a search problem in the embedding space via the anchor (graph &
path) embedding techniques. With the proposed anchor matching mechanism, GNN-AE
can guarantee subgraph matching has no false dismissals. We design an efficient
matching growth algorithm, which can retrieve the locations of all exact
matches in parallel. We also propose a cost-model-based DFS query plan to
enhance the parallel matching growth algorithm. Through extensive experiments
on 6 real-world and 3 synthetic datasets, we confirm the effectiveness and
efficiency of our GNN-AE approach for exact subgraph matching.
|
2502.00032
|
Querying Databases with Function Calling
|
cs.DB cs.AI cs.IR
|
The capabilities of Large Language Models (LLMs) are rapidly accelerating
largely thanks to their integration with external tools. Querying databases is
among the most effective of these integrations, enabling LLMs to access private
or continually updating data. While Function Calling is the most common method
for interfacing external tools to LLMs, its application to database querying as
a tool has been underexplored. We propose a tool definition for database
querying that unifies accessing data with search queries, filters, or a
combination of both, as well as transforming results with aggregation and groupby
operators. To evaluate its effectiveness, we conduct a study with 8 LLMs
spanning 5 model families. We present a novel pipeline adapting the Gorilla LLM
framework to create synthetic database schemas and queries. We primarily
evaluate the models with the Exact Match of predicted and ground truth query
APIs. Among the models tested, Claude 3.5 Sonnet achieves the highest
performance with an Exact Match score of 74.3%, followed by GPT-4o mini at
73.7%, and GPT-4o at 71.8%. We further break down these results per API
component utilized and across synthetic use cases. We find that LLMs are highly
effective at utilizing operators on boolean properties, but struggle with text
property filters. Across use cases, we find robust results from
higher-performing models such as GPT-4o, but significant performance variance
from lower-performing models. We additionally conduct ablation
studies exploring the impact of parallel tool calling, adding a rationale as an
argument of the tool call, using a separate tool per database collection, and
tool calling with structured outputs. Our findings demonstrate the
effectiveness of enabling LLMs to query databases with Function Calling. We
have open-sourced our experimental code and results at
github.com/weaviate/gorilla.
|
2502.00034
|
Towards Efficient Multi-Objective Optimisation for Real-World Power Grid
Topology Control
|
cs.AI cs.LG cs.SY eess.SY
|
Power grid operators face increasing difficulties in the control room as
growing energy demand and the shift to renewable energy introduce new
complexities in managing congestion and maintaining a stable supply. Effective
grid topology control requires advanced tools capable of handling
multi-objective trade-offs. While Reinforcement Learning (RL) offers a
promising framework for tackling such challenges, existing Multi-Objective
Reinforcement Learning (MORL) approaches fail to scale to the large state and
action spaces inherent in real-world grid operations. Here we present a
two-phase, efficient and scalable Multi-Objective Optimisation (MOO) method
designed for grid topology control, combining an efficient RL learning phase
with a rapid planning phase to generate day-ahead plans for unseen scenarios.
We validate our approach using historical data from TenneT, a European
Transmission System Operator (TSO), demonstrating minimal deployment time,
generating day-ahead plans within 4-7 minutes with strong performance. These
results underline the potential of our scalable method to support real-world
power grid management, offering a practical, computationally efficient, and
time-effective tool for operational planning. Given current congestion costs
and inefficiencies in grid operations, adoption of our approach by TSOs could
potentially save millions of euros annually, providing a compelling economic
incentive for its integration in the control room.
|
2502.00036
|
Efficient Client Selection in Federated Learning
|
cs.LG cs.AI cs.DC
|
Federated Learning (FL) enables decentralized machine learning while
preserving data privacy. This paper proposes a novel client selection framework
that integrates differential privacy and fault tolerance. The adaptive client
selection adjusts the number of clients based on performance and system
constraints, with noise added to protect privacy. Evaluated on the UNSW-NB15
and ROAD datasets for network anomaly detection, the method improves accuracy
by 7% and reduces training time by 25% compared to baselines. Fault tolerance
enhances robustness with minimal performance trade-offs.
|
2502.00037
|
Super Quantum Mechanics
|
quant-ph cs.LG cs.NA math.NA
|
We introduce Super Quantum Mechanics (SQM) as a theory that considers states
in Hilbert space subject to multiple quadratic constraints. Traditional quantum
mechanics corresponds to a single quadratic constraint of wavefunction
normalization. In its simplest form, SQM considers states in the form of
unitary operators, where the quadratic constraints are conditions of unitarity.
In this case, the stationary SQM problem is a quantum inverse problem with
multiple applications in machine learning and artificial intelligence. The SQM
stationary problem is equivalent to a new algebraic problem that we address in
this paper. The SQM non-stationary problem considers the evolution of a quantum
system, distinct from the explicit time dependence of the Hamiltonian, $H(t)$.
Several options for the SQM dynamic equation are considered, and quantum
circuits of 2D type are introduced, which transform one quantum system into
another. Although no known physical process currently describes such dynamics,
this approach naturally bridges direct and inverse quantum mechanics problems,
allowing for the development of a new type of computer algorithm. Beyond
computer modeling, the developed theory could be directly applied if or when a
physical process capable of solving an inverse quantum problem in a single
measurement act (analogous to wavefunction measurement in traditional quantum
mechanics) is discovered in the future.
|
2502.00038
|
The Best Soules Basis for the Estimation of a Spectral Barycentre
Network
|
cs.SI cs.LG physics.data-an stat.ML
|
The main contribution of this work is a fast algorithm to compute the
barycentre of a set of networks based on a Laplacian spectral pseudo-distance.
The core engine for the reconstruction of the barycentre is an algorithm that
explores the large library of Soules bases, and returns a basis that yields a
sparse approximation of the sample mean adjacency matrix. We prove that when
the networks are random realizations of stochastic block models, then our
algorithm reconstructs the population mean adjacency matrix. In addition to the
theoretical analysis of the estimator of the barycentre network, we perform
Monte Carlo simulations to validate the theoretical properties of the
estimator. This work is significant because it opens the door to the design of
new spectral-based network synthesis methods that have theoretical guarantees.
|
2502.00039
|
Accurately Estimating Unreported Infections using Information Theory
|
cs.SI cs.IT math.IT physics.soc-ph
|
One of the most significant challenges in combating the spread of infectious
diseases is the difficulty of estimating the true magnitude of infections.
Unreported infections can drive up disease spread, making it very hard to
accurately estimate the infectivity of the pathogen, thereby hampering our
ability to react effectively. Despite the use of
surveillance-based methods such as serological studies, identifying the true
magnitude is still challenging. This paper proposes an information theoretic
approach for accurately estimating the number of total infections. Our approach
is built on top of Ordinary Differential Equations (ODE) based models, which
are commonly used in epidemiology and for estimating such infections. We show
how we can help such models to better compute the number of total infections
and identify the parametrization by which we need the fewest bits to describe
the observed dynamics of reported infections. Our experiments on COVID-19
spread show that our approach leads to not only substantially better estimates
of the number of total infections but also better forecasts of infections than
standard model-calibration-based methods. We additionally show how our learned
parametrization helps in modeling what-if scenarios with non-pharmaceutical
interventions more accurately. Our approach provides a general method for
improving epidemic modeling, which is broadly applicable.
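The "fewest bits to describe the observed dynamics" criterion is the classic two-part minimum description length (MDL) principle. A minimal sketch, assuming Gaussian residuals and simple polynomial trend models rather than the paper's ODE-based models; the function names and the degree grid are illustrative.

```python
import numpy as np

def description_length(y, y_hat, k):
    """Two-part MDL code length in bits: cost of k parameters plus the
    cost of encoding the residuals under a Gaussian noise model."""
    n = len(y)
    resid = np.asarray(y) - np.asarray(y_hat)
    sigma2 = max(resid.var(), 1e-12)  # floor to avoid log of zero
    model_bits = 0.5 * k * np.log2(n)
    data_bits = 0.5 * n * np.log2(2 * np.pi * np.e * sigma2)
    return model_bits + data_bits

def best_poly_degree(t, y, degrees=(1, 2, 3, 5)):
    """Pick the polynomial degree whose fit needs the fewest bits."""
    scores = {}
    for d in degrees:
        coef = np.polyfit(t, y, d)
        scores[d] = description_length(y, np.polyval(coef, t), d + 1)
    return min(scores, key=scores.get)
```

The same trade-off applies to epidemic models: a parametrization that fits reported infections well with few effective degrees of freedom compresses the observations best.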
|
2502.00040
|
Multi-Objective Reinforcement Learning for Power Grid Topology Control
|
cs.LG cs.AI cs.SY eess.SY
|
Transmission grid congestion increases as the electrification of various
sectors requires transmitting more power. Topology control, through substation
reconfiguration, can reduce congestion but its potential remains
under-exploited in operations. A challenge is modeling the topology control
problem to align well with the objectives and constraints of operators.
Addressing this challenge, this paper investigates the application of
multi-objective reinforcement learning (MORL) to integrate multiple conflicting
objectives for power grid topology control. We develop a MORL approach using
deep optimistic linear support (DOL) and multi-objective proximal policy
optimization (MOPPO) to generate a set of Pareto-optimal policies that balance
objectives such as minimizing line loading, topological deviation, and
switching frequency. Initial case studies show that the MORL approach can
provide valuable insights into objective trade-offs and improve Pareto front
approximation compared to a random search baseline. The generated
multi-objective RL policies are 30% more successful in preventing grid failure
under contingencies and 20% more effective when the training budget is
reduced, compared to the common single-objective RL policy.
|
2502.00041
|
MALT: Mechanistic Ablation of Lossy Translation in LLMs for a
Low-Resource Language: Urdu
|
cs.CL
|
LLMs are predominantly trained on English data, which leads to a significant
drop in performance on low-resource languages. Understanding how LLMs handle
these languages is crucial for improving their effectiveness. This study
focuses on Urdu as a use case for exploring the challenges faced by LLMs in
processing low-resource languages. LLMs primarily reason in English when
prompted in another language, with the final layers acting as translators to
convert the English response into the target language. This study finds that
even for low-resource languages, the internal latent response of LLMs in
English is quite coherent; however, the translation features are lossy and
result in poor translations, leading to reduced performance. By mechanistically
removing these translation features and using a separate translation model to
translate the internal latent response of the LLM, the performance of LLMs improves
significantly while also preserving the cultural nuances of the input in
low-resource languages.
|
2502.00042
|
LSU-Net: Lightweight Automatic Organs Segmentation Network For Medical
Images
|
eess.IV cs.CV
|
UNet and its variants have widespread applications in medical image
segmentation. However, the substantial number of parameters and computational
complexity of these models make them less suitable for use in clinical settings
with limited computational resources. To address this limitation, we propose a
novel Lightweight Shift U-Net (LSU-Net). We integrate the Light Conv Block and
the Tokenized Shift Block in a lightweight manner, combining them with a
dynamic weight multi-loss design for efficient dynamic weight allocation. The
Light Conv Block effectively captures features with a low parameter count by
combining standard convolutions with depthwise separable convolutions. The
Tokenized Shift Block optimizes feature representation by shifting and
capturing deep features through a combination of the Spatial Shift Block and
depthwise separable convolutions. Dynamic adjustment of the loss weights at
each layer steers training toward the optimal solution and enhances training
stability. We
validated LSU-Net on the UWMGI and MSD Colon datasets, and experimental results
demonstrate that LSU-Net outperforms most state-of-the-art segmentation
architectures.
|
2502.00043
|
A scalable adaptive deep Koopman predictive controller for real-time
optimization of mixed traffic flow
|
eess.SY cs.AI cs.SY
|
The use of connected automated vehicles (CAVs) is advocated to mitigate traffic
oscillations in mixed traffic flow consisting of CAVs and human-driven vehicles
(HDVs). This study proposes an adaptive deep Koopman predictive control
framework (AdapKoopPC) for regulating mixed traffic flow. Firstly, a Koopman
theory-based adaptive trajectory prediction deep network (AdapKoopnet) is
designed for modeling HDVs' car-following behavior. AdapKoopnet enables the
representation of HDV behavior by a linear model in a high-dimensional space.
Secondly, the model predictive control is employed to smooth the mixed traffic
flow, where the combination of the linear dynamic model of CAVs and linear
prediction blocks from AdapKoopnet is embedded as the predictive model into the
AdapKoopPC. Finally, the predictive performance of the proposed AdapKoopnet is
verified using the HighD naturalistic driving dataset. Furthermore, the control
performance of AdapKoopPC is validated by numerical simulations. Results
demonstrate that AdapKoopnet provides more accurate predicted trajectories for
HDVs than the baseline nonlinear models. Moreover, the proposed AdapKoopPC
exhibits more effective control performance with less computational cost than
the baselines in mitigating traffic oscillations, especially at low CAV
penetration rates. The code of the proposed AdapKoopPC is open source.
|
2502.00044
|
HoloGraphs: An Interactive Physicalization for Dynamic Graphs
|
cs.SI cs.HC
|
We present HoloGraphs, a novel approach for physically representing,
explaining, exploring, and interacting with dynamic networks. HoloGraphs
addresses the challenges of visualizing and understanding evolving network
structures by providing an engaging method of interacting and exploring dynamic
network structures using physicalization techniques. In contrast to traditional
digital interfaces, our approach leverages tangible artifacts made from
transparent materials to provide an intuitive way for people with low
visualization literacy to explore network data. The process involves printing
network embeddings on transparent media and assembling them to create a 3D
representation of dynamic networks, maintaining spatial perception and allowing
the examination of each timeslice individually. Interactivity is envisioned
using optional Focus+Context layers and overlays for node trajectories and
labels. Focus layers highlight nodes of interest, context layers provide an
overview of the network structure, and global overlays show node trajectories
over time. In this paper, we outline the design principles and implementation
of HoloGraphs and present how elementary digital interactions can be mapped to
physical interactions to manipulate the elements of a network and its temporal
dimension in an engaging manner. We demonstrate the capabilities of our concept
in a case study. Using a dynamic network of character interactions from a
popular book series, we showcase how it represents and supports understanding
complex concepts such as dynamic networks.
|
2502.00045
|
Restless Multi-armed Bandits under Frequency and Window Constraints for
Public Service Inspections
|
cs.LG cs.AI cs.CE cs.CY
|
Municipal inspections are an important part of maintaining the quality of
goods and services. In this paper, we approach the problem of intelligently
scheduling service inspections to maximize their impact, using food
establishment inspections in Chicago as a case study. The Chicago Department of
Public Health (CDPH) inspects thousands of establishments each year, with a
substantial fail rate (over 3,000 failed inspection reports in 2023). To
balance the objectives of ensuring adherence to guidelines, minimizing
disruption to establishments, and minimizing inspection costs, CDPH assigns
each establishment an inspection window every year and guarantees that they
will be inspected exactly once during that window. These constraints create a
challenge for a restless multi-armed bandit (RMAB) approach, for which there
are no existing methods. We develop an extension to Whittle index-based systems
for RMABs that can guarantee action window constraints and frequencies, and
furthermore can be leveraged to optimize action window assignments themselves.
Briefly, we combine MDP reformulation and integer programming-based lookahead
to maximize the impact of inspections subject to constraints. A neural
network-based supervised learning model is developed to model state transitions
of real Chicago establishments using public CDPH inspection records, which
demonstrates 10% AUC improvements compared with directly predicting
establishments' failures. Our experiments not only show up to 24% (in
simulation) or 33% (on real data) reward improvements resulting from our
approach but also give insight into the impact of scheduling constraints.
|
2502.00046
|
Optimization Strategies for Enhancing Resource Efficiency in
Transformers & Large Language Models
|
cs.LG cs.CL
|
Advancements in Natural Language Processing are heavily reliant on the
Transformer architecture, whose improvements come at substantial resource costs
due to ever-growing model sizes. This study explores optimization techniques,
including Quantization, Knowledge Distillation, and Pruning, focusing on energy
and computational efficiency while retaining performance. Among standalone
methods, 4-bit Quantization significantly reduces energy use with minimal
accuracy loss. Hybrid approaches, like NVIDIA's Minitron approach combining KD
and Structured Pruning, further demonstrate promising trade-offs between size
reduction and accuracy retention. A novel optimization equation is introduced,
offering a flexible framework for comparing various methods. Through the
investigation of these compression methods, we provide valuable insights for
developing more sustainable and efficient LLMs, shining a light on the
often-ignored concern of energy efficiency.
|
2502.00047
|
HadamRNN: Binary and Sparse Ternary Orthogonal RNNs
|
cs.LG cs.AI
|
Binary and sparse ternary weights in neural networks enable faster
computations and lighter representations, facilitating their use on edge
devices with limited computational power. Meanwhile, vanilla RNNs are highly
sensitive to changes in their recurrent weights, making the binarization and
ternarization of these weights inherently challenging. To date, no method has
successfully achieved binarization or ternarization of vanilla RNN weights. We
present a new approach leveraging the properties of Hadamard matrices to
parameterize a subset of binary and sparse ternary orthogonal matrices. This
method enables the training of orthogonal RNNs (ORNNs) with binary and sparse
ternary recurrent weights, effectively creating a specific class of binary and
sparse ternary vanilla RNNs. The resulting ORNNs, called HadamRNN and
lock-HadamRNN, are evaluated on benchmarks such as the copy task, permuted and
sequential MNIST tasks, and IMDB dataset. Despite binarization or sparse
ternarization, these RNNs maintain performance levels comparable to
state-of-the-art full-precision models, highlighting the effectiveness of our
approach. Notably, our approach is the first solution with binary recurrent
weights capable of tackling the copy task over 1000 timesteps.
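The parameterization at the heart of this abstract rests on a classical fact: Sylvester's construction yields ±1 matrices whose rescaling is orthogonal. A minimal sketch of that building block only; the training of binary/ternary recurrent weights is not shown.

```python
import numpy as np

def sylvester_hadamard(k):
    """Hadamard matrix of order 2**k via the Sylvester construction:
    H_{2n} = [[H_n, H_n], [H_n, -H_n]], starting from H_1 = [1]."""
    H = np.array([[1.0]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

H = sylvester_hadamard(3)        # 8 x 8, entries all in {-1, +1}
Q = H / np.sqrt(H.shape[0])      # rescaled to an orthogonal matrix
```

Because `Q` is orthogonal, using it (up to scaling) as a recurrent weight matrix preserves gradient norms, which is why orthogonal RNNs tolerate binary-valued weights of this form.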
|
2502.00048
|
Contextually Entangled Gradient Mapping for Optimized LLM Comprehension
|
cs.LG cs.AI cs.CL
|
Contextually Entangled Gradient Mapping (CEGM) introduces a new approach to
gradient optimization, redefining the relationship between contextual
embeddings and gradient updates to enhance semantic coherence and reasoning
capabilities in neural architectures. By treating gradients as dynamic carriers
of contextual dependencies rather than isolated numerical entities, the
proposed methodology bridges critical gaps in existing optimization strategies.
The integration of entangled gradient dynamics into a loss regularization
framework demonstrated significant improvements in tasks involving long-form
reasoning, contextual retention, and adaptability to unseen domains.
Experimental evaluations showed that the CEGM-enhanced model consistently
outperformed baseline approaches, achieving higher accuracy in token-level
predictions and greater resilience to noisy inputs. Practical implementations
involved modifications to training pipelines, introducing entanglement layers
and dynamic coefficient adjustments that seamlessly align with existing
architectures. Results further highlighted reductions in semantic drift during
sequential transformations and improvements in embedding coherence across
paraphrased sentences, showing the robustness and versatility of the proposed
methodology. The findings demonstrate the broader implications of gradient
entanglement for both theoretical advancements and practical applications in
optimization strategies.
|
2502.00050
|
DISC: Dataset for Analyzing Driving Styles In Simulated Crashes for
Mixed Autonomy
|
cs.RO cs.LG
|
Handling pre-crash scenarios is still a major challenge for self-driving cars
due to limited practical data and human-driving behavior datasets. We introduce
DISC (Driving Styles In Simulated Crashes), one of the first datasets designed
to capture various driving styles and behaviors in pre-crash scenarios for
mixed autonomy analysis. DISC includes over 8 classes of driving
styles/behaviors from hundreds of drivers navigating a simulated vehicle
through a virtual city, encountering rare-event traffic scenarios. This dataset
enables the classification of pre-crash human driving behaviors in unsafe
conditions, supporting individualized trajectory prediction based on observed
driving patterns. By utilizing a custom-designed VR-based in-house driving
simulator, TRAVERSE, data was collected through a driver-centric study
involving human drivers encountering twelve simulated accident scenarios. This
dataset fills a critical gap in human-centric driving data for rare events
involving interactions with autonomous vehicles. It enables autonomous systems
to better react to human drivers and optimize trajectory prediction in mixed
autonomy environments involving both human-driven and self-driving cars. In
addition, individual driving behaviors are classified through a set of
standardized questionnaires, carefully designed to identify and categorize
driving behavior traits. We correlate data features with driving behaviors,
showing that the simulated environment reflects real-world driving styles. DISC
is the first dataset to capture how various driving styles respond to accident
scenarios, offering significant potential to enhance autonomous vehicle safety
and driving behavior analysis in mixed autonomy environments.
|
2502.00051
|
A two-stage dual-task learning strategy for early prediction of
pathological complete response to neoadjuvant chemotherapy for breast cancer
using dynamic contrast-enhanced magnetic resonance images
|
cs.CV physics.med-ph
|
Rationale and Objectives: Early prediction of pathological complete response
(pCR) can facilitate personalized treatment for breast cancer patients. To
improve prediction accuracy at the early time point of neoadjuvant
chemotherapy, we proposed a two-stage dual-task learning strategy to train a
deep neural network for early prediction of pCR using early-treatment magnetic
resonance images. Methods: We developed and validated the two-stage dual-task
learning strategy using the dataset from the nationwide, multi-institutional
I-SPY2 clinical trial, which included dynamic contrast-enhanced magnetic
resonance images acquired at three time points: pretreatment (T0), after 3
weeks (T1), and after 12 weeks of treatment (T2). First, we trained a
convolutional long short-term memory network to predict pCR and extract the
latent space image features at T2. At the second stage, we trained a dual-task
network to simultaneously predict pCR and the image features at T2 using images
from T0 and T1. This allowed us to predict pCR earlier without using images
from T2. Results: The conventional single-stage single-task strategy gave an
area under the receiver operating characteristic curve (AUROC) of 0.799 for pCR
prediction using all the data at time points T0 and T1. By using the proposed
two-stage dual-task learning strategy, the AUROC was improved to 0.820.
Conclusions: The proposed two-stage dual-task learning strategy can improve
model performance significantly (p=0.0025) for predicting pCR at the early
stage (3rd week) of neoadjuvant chemotherapy. The early prediction model can
potentially help physicians to intervene early and develop personalized plans
at the early stage of chemotherapy.
|
2502.00052
|
Bridging Contrastive Learning and Domain Adaptation: Theoretical
Perspective and Practical Application
|
cs.LG cs.AI
|
This work studies the relationship between Contrastive Learning and Domain
Adaptation from a theoretical perspective. The two standard contrastive losses,
NT-Xent loss (Self-supervised) and Supervised Contrastive loss, are related to
the Class-wise Mean Maximum Discrepancy (CMMD), a dissimilarity measure widely
used for Domain Adaptation. Our work shows that minimizing the contrastive
losses decreases the CMMD and simultaneously improves class-separability,
laying the theoretical groundwork for the use of Contrastive Learning in the
context of Domain Adaptation. Due to the relevance of Domain Adaptation in
medical imaging, we focused the experiments on mammography images. Extensive
experiments on three mammography datasets - synthetic patches, clinical (real)
patches, and clinical (real) images - show improved Domain Adaptation,
class-separability, and classification performance, when minimizing the
Supervised Contrastive loss.
|
2502.00053
|
Differentiable Projection-based Learn to Optimize in Wireless
Network-Part I: Convex Constrained (Non-)Convex Programming
|
eess.SY cs.LG cs.SY
|
This paper addresses a class of (non-)convex optimization problems subject to
general convex constraints, which pose significant challenges for traditional
methods due to their inherent non-convexity and diversity. Conventional convex
optimization-based solvers often struggle to efficiently handle these problems
in their most general form. While neural network (NN)-based approaches offer a
promising alternative, ensuring the feasibility of NN-generated solutions and
effectively training the NN remain key hurdles, largely because finite-capacity
networks can produce infeasible outputs. To overcome these issues, we propose a
projection-based method that projects any infeasible NN output onto the
feasible domain, thus guaranteeing strict adherence to the constraints without
compromising the NN's optimization capability. Furthermore, we derive the
objective function values for both the raw NN outputs and their projected
counterparts, along with the gradients of these values with respect to the NN
parameters. This derivation enables label-free (unsupervised) training,
reducing reliance on labeled data and improving scalability. Experimental
results demonstrate that the proposed projection-based method consistently
ensures feasibility.
|
2502.00055
|
Towards Recommender Systems LLMs Playground (RecSysLLMsP): Exploring
Polarization and Engagement in Simulated Social Networks
|
cs.SI cs.AI cs.CY cs.HC cs.IR
|
Given the exponential advancement in AI technologies and the potential
escalation of harmful effects from recommendation systems, it is crucial to
simulate and evaluate these effects early on. Doing so can help prevent
possible damage to both societies and technology companies. This paper
introduces the Recommender Systems LLMs Playground (RecSysLLMsP), a novel
simulation framework leveraging Large Language Models (LLMs) to explore the
impacts of different content recommendation setups on user engagement and
polarization in social networks. By creating diverse AI agents (AgentPrompts)
with descriptive, static, and dynamic attributes, we assess their autonomous
behaviour across three scenarios: Plurality, Balanced, and Similarity. Our
findings reveal that the Similarity Scenario, which aligns content with user
preferences, maximizes engagement while potentially fostering echo chambers.
Conversely, the Plurality Scenario promotes diverse interactions but produces
mixed engagement results. Our study emphasizes the need for a careful balance
in recommender system designs to enhance user satisfaction while mitigating
societal polarization. It underscores the unique value and challenges of
incorporating LLMs into simulation environments. The benefits of RecSysLLMsP
lie in its potential to calculate polarization effects, which is crucial for
assessing societal impacts and determining user engagement levels with diverse
recommender system setups. This advantage is essential for developing and
maintaining a successful business model for social media companies. However,
the study's limitations revolve around accurately emulating reality. Future
efforts should validate the similarity in behaviour between real humans and
AgentPrompts and establish metrics for measuring polarization scores.
|
2502.00058
|
GitHub Stargazers | Building Graph- and Edge-level Prediction Algorithms
for Developer Social Networks
|
cs.SI
|
Analyzing social networks formed by developers provides valuable insights for
market segmentation, trend analysis, and community engagement. In this study,
we explore the GitHub Stargazers dataset to classify developer communities and
predict potential collaborations using graph neural networks (GNNs). By
modeling 12,725 developer networks, we segment communities based on their focus
on web development or machine learning repositories, leveraging graph
attributes and node embeddings. Furthermore, we propose an edge-level
recommendation algorithm that predicts new connections between developers using
similarity measures. Our experimental results demonstrate the effectiveness of
our approach in accurately segmenting communities and improving connection
predictions, offering valuable insights for understanding open-source developer
networks.
|
2502.00059
|
Large Language Models are Few-shot Multivariate Time Series Classifiers
|
cs.LG cs.AI
|
Large Language Models (LLMs) have been extensively applied in time series
analysis. Yet, their utility for few-shot classification of multivariate time
series, a crucial scenario given the limited training data available in
industrial applications, remains underexplored.
We aim to leverage the extensive pre-trained knowledge in LLMs to overcome the
data scarcity problem within multivariate time series. Specifically, we propose
LLMFew, an LLM-enhanced framework to investigate the feasibility and capacity
of LLMs for few-shot multivariate time series classification. This model
introduces a Patch-wise Temporal Convolution Encoder (PTCEnc) to align time
series data with the textual embedding input of LLMs. We further fine-tune the
pre-trained LLM decoder with Low-rank Adaptations (LoRA) to enhance its feature
representation learning ability in time series data. Experimental results show
that our model outperformed state-of-the-art baselines by a large margin,
achieving 125.2% and 50.2% improvement in classification accuracy on
Handwriting and EthanolConcentration datasets, respectively. Moreover, our
experimental results demonstrate that LLM-based methods perform well across a
variety of datasets in few-shot MTSC, delivering reliable results compared to
traditional models. This success paves the way for their deployment in
industrial environments where data are limited.
|
2502.00060
|
Israel-Hamas war through Telegram, Reddit and Twitter
|
cs.SI cs.AI cs.LG
|
The Israeli-Palestinian conflict that started on 7 October 2023 has thus far
resulted in over 48,000 people killed, including more than 17,000 children,
the majority from Gaza; more than 30,000 people injured; over 10,000 missing;
and over 1 million people displaced, fleeing conflict zones. The
infrastructure damage includes 87% of housing units, 80% of public buildings,
60% of cropland, 17 out of 36 hospitals, 68% of road networks, and 87% of
school buildings damaged. This conflict has also sparked an online discussion
across various social media platforms. Telegram was no exception, owing to its
encrypted communication and highly engaged audience. The current study covers
an analysis of the related discussion in relation to the different
participants in the conflict and the sentiment represented in those
discussions. To
this end, we prepared a dataset of 125K messages shared on channels in Telegram
spanning from 23 October 2023 until today. Additionally, we apply the same
analysis in two publicly available datasets from Twitter containing 2001 tweets
and from Reddit containing 2M opinions. We apply a volume analysis across the
three datasets, entity extraction and then proceed to BERT topic analysis in
order to extract common themes or topics. Next, we apply sentiment analysis to
analyze the emotional tone of the discussions. Our findings hint at polarized
narratives as the hallmark of how political factions and outsiders mold public
opinion. We also analyze the sentiment-topic prevalence relationship, detailing
the trends that may show manipulation and attempts of propaganda by the
involved parties. This will give a better understanding of the online discourse
on the Israel-Palestine conflict and contribute to the knowledge on the
dynamics of social media communication during geopolitical crises.
|
2502.00061
|
From Data to Action: Charting A Data-Driven Path to Combat Antimicrobial
Resistance
|
cs.LG cs.AI q-bio.PE
|
Antimicrobial-resistant (AMR) microbes are a growing challenge in healthcare,
rendering modern medicines ineffective. AMR arises from antibiotic production
and bacterial evolution, but quantifying its transmission remains difficult.
With increasing AMR-related data, data-driven methods offer promising insights
into its causes and treatments. This paper reviews AMR research from a data
analytics and machine learning perspective, summarizing the state-of-the-art
and exploring key areas such as surveillance, prediction, drug discovery,
stewardship, and driver analysis. It discusses data sources, methods, and
challenges, emphasizing standardization and interoperability. Additionally, it
surveys statistical and machine learning techniques for AMR analysis,
addressing issues like data noise and bias. Strategies for denoising and
debiasing are highlighted to enhance fairness and robustness in AMR research.
The paper underscores the importance of interdisciplinary collaboration and
awareness of data challenges in advancing AMR research, pointing to future
directions for innovation and improved methodologies.
|
2502.00063
|
A Multi-Layered Large Language Model Framework for Disease Prediction
|
cs.CL cs.AI
|
Social telehealth has revolutionized healthcare by enabling patients to share
symptoms and receive medical consultations remotely. Users frequently post
symptoms on social media and online health platforms, generating a vast
repository of medical data that can be leveraged for disease classification and
symptom severity assessment. Large language models (LLMs), such as LLAMA3,
GPT-3.5 Turbo, and BERT, process complex medical data to enhance disease
classification. This study explores three Arabic medical text preprocessing
techniques: text summarization, text refinement, and Named Entity Recognition
(NER). Evaluating CAMeL-BERT, AraBERT, and Asafaya-BERT with LoRA, the best
performance was achieved using CAMeL-BERT with NER-augmented text (83% type
classification, 69% severity assessment). Non-fine-tuned models performed
poorly (13%-20% type classification, 40%-49% severity assessment). Integrating
LLMs into social telehealth systems enhances diagnostic accuracy and treatment
outcomes.
|