| id | title | categories | abstract |
2501.17890
|
VidSole: A Multimodal Dataset for Joint Kinetics Quantification and
Disease Detection with Deep Learning
|
cs.CV eess.SP
|
Understanding internal joint loading is critical for diagnosing gait-related
diseases such as knee osteoarthritis; however, current methods of measuring
joint risk factors are time-consuming, expensive, and restricted to lab
settings. In this paper, we enable the large-scale, cost-effective
biomechanical analysis of joint loading via three key contributions: the
development and deployment of novel instrumented insoles, the creation of a
large multimodal biomechanics dataset (VidSole), and a baseline deep learning
pipeline to predict internal joint loading factors. Our novel instrumented
insole measures the tri-axial forces and moments across five high-pressure
points under the foot. VidSole consists of the forces and moments measured by
these insoles along with corresponding RGB video from two viewpoints, 3D body
motion capture, and force plate data for over 2,600 trials of 52 diverse
participants performing four fundamental activities of daily living
(sit-to-stand, stand-to-sit, walking, and running). We feed the insole data and
kinematic parameters extractable from video (i.e., pose, knee angle) into a
deep learning pipeline consisting of an ensemble Gated Recurrent Unit (GRU)
activity classifier followed by activity-specific Long Short Term Memory (LSTM)
regression networks to estimate knee adduction moment (KAM), a biomechanical
risk factor for knee osteoarthritis. Successful classification of activities
at an accuracy of 99.02 percent and KAM estimation with mean absolute error
(MAE) below 0.5% of body weight × height, the current threshold for accurately
detecting knee osteoarthritis with KAM, illustrate the usefulness of our
dataset for future research and clinical settings.
|
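The first stage of the baseline pipeline is an ensemble GRU activity classifier. As background, the standard GRU update such a classifier is built from can be sketched in NumPy; the weights below are random placeholders (not the authors' trained model) and the 15-channel input is a hypothetical stand-in for the insole force/moment channels:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One step of a standard GRU cell: gated interpolation between the
    previous hidden state and a candidate state."""
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h))   # candidate hidden state
    return (1 - z) * h + z * h_cand

rng = np.random.default_rng(0)
d_in, d_h = 15, 8   # hypothetical: 15 insole channels, 8 hidden units
Wz, Wr, Wh = (rng.normal(scale=0.1, size=(d_h, d_in)) for _ in range(3))
Uz, Ur, Uh = (rng.normal(scale=0.1, size=(d_h, d_h)) for _ in range(3))

h = np.zeros(d_h)
for _ in range(100):                          # a toy 100-step insole sequence
    h = gru_step(rng.normal(size=d_in), h, Wz, Uz, Wr, Ur, Wh, Uh)
print(h.shape)  # (8,)
```

In the paper's pipeline, the final hidden state would feed a softmax over the four activities, after which activity-specific LSTM regressors estimate KAM.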
2501.17893
|
Language Modelling for Speaker Diarization in Telephonic Interviews
|
eess.AS cs.LG cs.SD
|
The aim of this paper is to investigate the benefit of combining language and
acoustic modelling for speaker diarization. Although conventional systems use
only acoustic features, in some scenarios linguistic data carry highly
discriminative speaker information, at times even more reliable than the
acoustic cues. In this study we analyze how an appropriate fusion of both kinds
of features can obtain good results in these cases. The proposed system is
based on an iterative algorithm in which an LSTM network is used as a speaker
classifier. The network is fed with character-level word embeddings and a
GMM-based acoustic score created from the output labels of previous iterations.
The presented algorithm has been evaluated on a call-center database composed
of telephone interview audios. The combination of acoustic features and
linguistic content yields an 84.29% improvement in word-level DER over an
HMM/VB baseline system. The results of this study confirm that linguistic
content can be used effectively for some speaker recognition tasks.
|
2501.17894
|
Progress in Artificial Intelligence and its Determinants
|
econ.GN cs.AI cs.CY cs.LG physics.soc-ph q-fin.EC
|
We study long-run progress in artificial intelligence in a quantitative way.
Many measures, including traditional ones such as patents and publications,
machine learning benchmarks, and a new Aggregate State of the Art in ML (or
ASOTA) Index we have constructed from these, show exponential growth at roughly
constant rates over long periods. Production of patents and publications
doubles every ten years, by contrast with the growth of computing resources
driven by Moore's Law, roughly a doubling every two years. We argue that the
input of AI researchers is also crucial and its contribution can be objectively
estimated. Consequently, we give a simple argument that explains the 5:1
relation between these two rates. We then discuss the application of this
argument to different output measures and compare our analyses with predictions
based on machine learning scaling laws proposed in existing literature. Our
quantitative framework facilitates understanding, predicting, and modulating
the development of these important technologies.
|
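The claimed 5:1 relation between the two growth rates follows directly from the stated doubling times, since a doubling time T implies an exponential growth rate of ln 2 / T:

```python
import math

# Doubling times stated in the abstract.
t_publications = 10.0   # patents/publications double every ~10 years
t_compute = 2.0         # Moore's law: compute doubles every ~2 years

# Exponential growth rate r satisfies exp(r * T_double) = 2, so r = ln 2 / T.
r_pub = math.log(2) / t_publications
r_compute = math.log(2) / t_compute

ratio = r_compute / r_pub
print(ratio)  # 5.0: compute grows five times faster than publication output
```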
2501.17896
|
Explainable Machine Learning: An Illustration of Kolmogorov-Arnold
Network Model for Airfoil Lift Prediction
|
cs.LG
|
Data science has emerged as the fourth paradigm of scientific exploration.
However, many machine learning models operate as black boxes, offering limited
insight into the reasoning behind their predictions. This lack of transparency
is one of the main obstacles to generating new knowledge from data. Recently,
the Kolmogorov-Arnold Network (KAN) has been proposed as an alternative model
that embeds explainable AI. This study demonstrates the potential of KAN for
new scientific exploration. KAN, along with five other popular supervised
machine learning models, is applied to the well-known problem of airfoil lift
prediction in aerospace engineering. Standard data generated from an earlier
study on 2900 different airfoils is used. KAN performed the best, with an R2
score of 96.17 percent on the test data, surpassing both the baseline model and
a Multi-Layer Perceptron. The explainability of KAN is shown by pruning and
symbolizing the model, resulting in an equation for the coefficient of lift in
terms of the input variables. The explainable information retrieved from the
KAN model is found to be consistent with the known physics of lift generation
by airfoils, thus demonstrating its potential to aid scientific exploration.
|
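The R2 score used to compare the models is the standard coefficient of determination; a minimal sketch on toy data (illustrative numbers, not the paper's airfoil dataset):

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)       # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

# Toy lift coefficients: predictions close to truth give R2 near 1.
cl_true = [0.31, 0.45, 0.52, 0.70, 0.88]
cl_pred = [0.30, 0.46, 0.50, 0.72, 0.86]
print(round(r2_score(cl_true, cl_pred), 4))
```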
2501.17897
|
Visualization of Organ Movements Using Automatic Region Segmentation of
Swallowing CT
|
eess.IV cs.CV physics.med-ph
|
This study presents the first report on the development of an artificial
intelligence (AI) for automatic region segmentation of four-dimensional
computer tomography (4D-CT) images during swallowing. The material consists of
4D-CT images taken during swallowing. Additionally, data for verifying the
practicality of the AI were obtained from 4D-CT images during mastication and
swallowing. The ground truth data for the region segmentation for the AI were
created from five 4D-CT datasets of swallowing. A 3D convolutional model of
nnU-Net was used for the AI. The learning and evaluation method for the AI was
leave-one-out cross-validation. The number of epochs for training the nnU-Net
was 100. The Dice coefficient was used as a metric to assess the AI's region
segmentation accuracy. Regions with a median Dice coefficient of 0.7 or higher
included the bolus, bones, tongue, and soft palate. Regions with a Dice
coefficient below 0.7 included the thyroid cartilage and epiglottis. Factors
that reduced the Dice coefficient included metal artifacts caused by dental
crowns in the bolus and the speed of movement for the thyroid cartilage and
epiglottis. In practical verification of the AI, no significant misrecognition
was observed for facial bones, jaw bones, or the tongue. However, regions such
as the hyoid bone, thyroid cartilage, and epiglottis were not fully delineated
during fast movement. It is expected that future research will improve the
accuracy of the AI's region segmentation, though the risk of misrecognition
will always exist. Therefore, the development of tools for efficiently
correcting the AI's segmentation results is necessary. AI-based visualization
is expected to contribute not only to the deepening of motion analysis of
organs during swallowing but also to improving the accuracy of swallowing CT by
clearly showing the current state of its precision.
|
2501.17898
|
Distilling Knowledge for Designing Computational Imaging Systems
|
eess.IV cs.LG
|
Designing the physical encoder is crucial for accurate image reconstruction
in computational imaging (CI) systems. Currently, these systems are designed
via end-to-end (E2E) optimization, where the encoder is modeled as a neural
network layer and is jointly optimized with the decoder. However, the
performance of E2E optimization is significantly reduced by the physical
constraints imposed on the encoder. Also, since the E2E learns the parameters
of the encoder by backpropagating the reconstruction error, it does not promote
optimal intermediate outputs and suffers from gradient vanishing. To address
these limitations, we reinterpret the concept of knowledge distillation (KD)
for designing a physically constrained CI system by transferring the knowledge
of a pretrained, less-constrained CI system. Our approach involves three steps:
(1) Given the original CI system (student), a teacher system is created by
relaxing the constraints on the student's encoder. (2) The teacher is optimized
to solve a less-constrained version of the student's problem. (3) The teacher
guides the training of the student through two proposed knowledge transfer
functions, targeting both the encoder and the decoder feature space. The
proposed method can be applied to any imaging modality, since the relaxation
scheme and the loss functions can be adapted according to the physical
acquisition and the employed decoder. This approach was validated on three
representative CI modalities: magnetic resonance, single-pixel, and compressive
spectral imaging. Simulations show that a teacher system with an encoder that
has a structure similar to that of the student encoder provides effective
guidance. Our approach achieves significantly improved reconstruction
performance and encoder design, outperforming both E2E optimization and
traditional non-data-driven encoder designs.
|
2501.17899
|
The Right to AI
|
cs.CY cs.AI cs.HC
|
This paper proposes a Right to AI, which asserts that individuals and
communities should meaningfully participate in the development and governance
of the AI systems that shape their lives. Motivated by the increasing
deployment of AI in critical domains and inspired by Henri Lefebvre's concept
of the Right to the City, we reconceptualize AI as a societal infrastructure,
rather than merely a product of expert design. In this paper, we critically
evaluate how generative agents, large-scale data extraction, and diverse
cultural values bring new complexities to AI oversight. The paper proposes that
grassroots participatory methodologies can mitigate biased outcomes and enhance
social responsiveness. It asserts that data is socially produced and should be
managed and owned collectively. Drawing on Sherry Arnstein's Ladder of Citizen
Participation and analyzing nine case studies, the paper develops a four-tier
model for the Right to AI that situates the current paradigm and envisions an
aspirational future. It proposes recommendations for inclusive data ownership,
transparent design processes, and stakeholder-driven oversight. We also discuss
market-led and state-centric alternatives and argue that participatory
approaches offer a better balance between technical efficiency and democratic
legitimacy.
|
2501.17900
|
Shared DIFF Transformer
|
cs.LG
|
DIFF Transformer improves attention allocation by enhancing focus on relevant
context while suppressing noise. It introduces a differential attention
mechanism that calculates the difference between two independently generated
attention distributions, effectively reducing noise and promoting sparse
attention patterns. However, the independent signal generation in DIFF
Transformer results in parameter redundancy and suboptimal utilization of
information. In this work, we propose Shared DIFF Transformer, which draws on
the idea of a differential amplifier by introducing a shared base matrix to
model global patterns and incorporating low-rank updates to enhance
task-specific flexibility. This design significantly reduces parameter
redundancy, improves efficiency, and retains strong noise suppression
capabilities. Experimental results show that, compared to DIFF Transformer, our
method achieves better performance in tasks such as long-sequence modeling, key
information retrieval, and in-context learning. Our work provides a novel and
efficient approach to optimizing differential attention mechanisms and
advancing robust Transformer architectures.
|
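One way to read the design: both attention branches share a full-rank base projection and differ only through low-rank updates, and their difference suppresses common-mode noise. A NumPy sketch of that reading (illustrative only; the paper's exact parameterization may differ):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
n, d, r = 6, 16, 2                      # sequence length, model dim, low rank
X = rng.normal(size=(n, d))

# Shared base projections model global patterns...
Wq_base = rng.normal(scale=0.1, size=(d, d))
Wk_base = rng.normal(scale=0.1, size=(d, d))

def branch_attention(A_q, B_q, A_k, B_k):
    # ...while each branch adds its own low-rank update A @ B for flexibility.
    Q = X @ (Wq_base + A_q @ B_q)
    K = X @ (Wk_base + A_k @ B_k)
    return softmax(Q @ K.T / np.sqrt(d))

lowrank = lambda: (rng.normal(scale=0.1, size=(d, r)),
                   rng.normal(scale=0.1, size=(r, d)))
attn1 = branch_attention(*lowrank(), *lowrank())
attn2 = branch_attention(*lowrank(), *lowrank())

lam = 0.5                               # differential weighting factor
diff_attn = attn1 - lam * attn2         # difference cancels shared noise
print(diff_attn.shape)  # (6, 6)
```

Only the two small low-rank factors are branch-specific, which is where the parameter savings over two independent projections come from.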
2501.17901
|
Molecular Fingerprints Are Strong Models for Peptide Function Prediction
|
q-bio.BM cs.LG
|
We study the effectiveness of molecular fingerprints for peptide property
prediction and demonstrate that domain-specific feature extraction from
molecular graphs can outperform complex and computationally expensive models
such as GNNs, pretrained sequence-based transformers and multimodal ensembles,
even without hyperparameter tuning. To this end, we perform a thorough
evaluation on 126 datasets, achieving state-of-the-art results on LRGB and 5
other peptide function prediction benchmarks. We show that models based on
count variants of ECFP, Topological Torsion, and RDKit molecular fingerprints,
with LightGBM as the classification head, are remarkably robust. The strong
performance of molecular fingerprints, which are intrinsically very short-range
feature encoders, challenges the presumed importance of long-range interactions
in peptides. Our conclusion is that the use of molecular fingerprints for
larger molecules, such as peptides, can be a computationally feasible,
low-parameter, and versatile alternative to sophisticated deep learning models.
|
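As an illustration of the count-fingerprint idea, here is a toy analogue that hashes overlapping k-mers of a peptide's sequence string into a fixed-length count vector. Real ECFP/Topological Torsion fingerprints hash atom environments of the molecular graph, and the paper pairs them with LightGBM; nothing below is the authors' code:

```python
import numpy as np
import zlib

def count_fingerprint(seq: str, n_bits: int = 64, k: int = 3) -> np.ndarray:
    """Toy count fingerprint: hash each overlapping k-mer into one of
    n_bits buckets and count occurrences (crc32 keeps it deterministic)."""
    fp = np.zeros(n_bits, dtype=int)
    for i in range(len(seq) - k + 1):
        fp[zlib.crc32(seq[i:i + k].encode()) % n_bits] += 1
    return fp

peptide = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acids, once each
fp = count_fingerprint(peptide)
print(fp.sum())  # 18 k-mers for a length-20 sequence with k=3
```

The resulting fixed-length count vectors can be fed to any tabular classifier, which is what makes this family of features so cheap compared with GNNs or transformers.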
2501.17903
|
Free Agent in Agent-Based Mixture-of-Experts Generative AI Framework
|
cs.MA cs.AI
|
Multi-agent systems commonly distribute tasks among specialized, autonomous
agents, yet they often lack mechanisms to replace or reassign underperforming
agents in real time. Inspired by the free-agency model of Major League
Baseball, the Reinforcement Learning Free Agent (RLFA) algorithm introduces a
reward-based mechanism to detect and remove agents exhibiting persistent
underperformance and seamlessly insert more capable ones. Each agent internally
uses a mixture-of-experts (MoE) approach, delegating incoming tasks to
specialized sub-models under the guidance of a gating function. A primary use
case is fraud detection, where RLFA promptly swaps out an agent whose detection
accuracy dips below a preset threshold. A new agent is tested in a probationary
mode, and upon demonstrating superior performance, fully replaces the
underperformer. This dynamic, free-agency cycle ensures sustained accuracy,
quicker adaptation to emerging threats, and minimal disruption to ongoing
operations. By continually refreshing its roster of agents, the system fosters
ongoing improvements and more resilient collaboration in multi-agent Generative
AI environments.
|
2501.17904
|
A Robust Support Vector Machine Approach for Raman COVID-19 Data
Classification
|
q-bio.QM cs.LG
|
Recent advances in healthcare technologies have led to the availability of
large amounts of biological samples across several techniques and applications.
In particular, in the last few years, Raman spectroscopy analysis of biological
samples has been successfully applied for early-stage diagnosis. However,
the spectra's inherent complexity and variability make manual analysis
challenging, even for domain experts. For the same reason, traditional
statistical and Machine Learning (ML) techniques cannot
guarantee accurate and reliable results. ML models, combined with robust
optimization techniques, offer the possibility to improve the classification
accuracy and enhance the resilience of predictive models. In this paper, we
investigate the performance of a novel robust formulation for Support Vector
Machine (SVM) in classifying COVID-19 samples obtained from Raman Spectroscopy.
Given the noisy and perturbed nature of biological samples, we protect the
classification process against uncertainty through the application of robust
optimization techniques. Specifically, we derive robust counterpart models of
deterministic formulations using bounded-by-norm uncertainty sets around each
observation. We explore the cases of both linear and kernel-induced classifiers
to address binary and multiclass classification tasks. The effectiveness of our
approach is validated on real-world COVID-19 datasets provided by Italian
hospitals by comparing the results of our simulations with a state-of-the-art
classifier.
|
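For ℓ2 bounded-by-norm uncertainty sets, the robust counterpart of the hinge loss admits a well-known closed form: the worst-case perturbation of radius rho shifts each margin by rho·‖w‖. A minimal sketch of that closed form (illustrative; the paper's exact robust formulations may differ):

```python
import numpy as np

def robust_hinge(w, b, X, y, rho):
    """Worst-case hinge loss over an l2 ball of radius rho around each
    sample: max(0, 1 - y*(w.x + b) + rho*||w||). The rho*||w|| term is
    the closed-form price of robustness."""
    margins = y * (X @ w + b)
    return np.maximum(0.0, 1.0 - margins + rho * np.linalg.norm(w))

w = np.array([1.0, -1.0]); b = 0.0
X = np.array([[2.0, -1.0], [0.5, 0.5]])
y = np.array([1.0, -1.0])
print(robust_hinge(w, b, X, y, rho=0.0))  # [0. 1.]: the nominal hinge loss
print(robust_hinge(w, b, X, y, rho=0.5))  # robustness can only raise the loss
```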
2501.17905
|
DReSS: Data-driven Regularized Structured Streamlining for Large
Language Models
|
cs.LG cs.AI cs.CL
|
Large language models (LLMs) have achieved significant progress across
various domains, but their increasing scale results in high computational and
memory costs. Recent studies have revealed that LLMs exhibit sparsity,
providing the potential to reduce model size through pruning techniques.
However, existing pruning methods typically follow a prune-then-finetune
paradigm. Since the pruned components still contain valuable information, their
direct removal often leads to irreversible performance degradation, imposing a
substantial computational burden to recover performance during finetuning. In
this paper, we propose a novel paradigm that first applies regularization, then
prunes, and finally finetunes. Based on this paradigm, we introduce DReSS, a
simple and effective Data-driven Regularized Structured Streamlining method for
LLMs. By leveraging a small amount of data to regularize the components to be
pruned, DReSS explicitly transfers the important information to the remaining
parts of the model in advance. Compared to direct pruning, this reduces the
information loss caused by parameter removal, thereby preserving the model's
language modeling capabilities. Experimental results demonstrate that DReSS
significantly outperforms existing pruning methods even under extreme pruning
ratios, significantly reducing latency and increasing throughput.
|
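The regularize-then-prune idea can be demonstrated on a toy linear model: penalizing a redundant coefficient before removing it lets a correlated feature absorb its role, so pruning loses far less than direct removal. This is a schematic analogue of the paradigm, not the paper's LLM procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x0 = rng.normal(size=n)
x1 = rng.normal(size=n)
x2 = x0 + 0.1 * rng.normal(size=n)      # x2 is redundant with x0
X = np.stack([x0, x1, x2], axis=1)
y = X @ np.array([1.0, 1.0, 1.0])

def fit(lam_mask):
    """Least squares with per-coordinate ridge penalties (closed form)."""
    A = X.T @ X / n + np.diag(lam_mask)
    return np.linalg.solve(A, X.T @ y / n)

def mse(w):
    return float(np.mean((X @ w - y) ** 2))

w_plain = fit(np.zeros(3))               # ordinary fit, then prune x2 directly
w_plain[2] = 0.0

w_reg = fit(np.array([0.0, 0.0, 10.0]))  # regularize the to-be-pruned weight...
w_reg[2] = 0.0                           # ...so x0 absorbs x2's role first

print(mse(w_plain), mse(w_reg))          # regularize-then-prune loses far less
```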
2501.17906
|
Unsupervised Patch-GAN with Targeted Patch Ranking for Fine-Grained
Novelty Detection in Medical Imaging
|
cs.CV eess.IV
|
Detecting novel anomalies in medical imaging is challenging due to the
limited availability of labeled data for rare abnormalities, which often
display high variability and subtlety. This challenge is further compounded
when small abnormal regions are embedded within larger normal areas, as
whole-image predictions frequently overlook these subtle deviations. To address
these issues, we propose an unsupervised Patch-GAN framework designed to detect
and localize anomalies by capturing both local detail and global structure. Our
framework first reconstructs masked images to learn fine-grained,
normal-specific features, allowing for enhanced sensitivity to minor deviations
from normality. By dividing these reconstructed images into patches and
assessing the authenticity of each patch, our approach identifies anomalies at
a more granular level, overcoming the limitations of whole-image evaluation.
Additionally, a patch-ranking mechanism prioritizes regions with higher
abnormal scores, reinforcing the alignment between local patch discrepancies
and the global image context. Experimental results on the ISIC 2016 skin lesion
and BraTS 2019 brain tumor datasets validate our framework's effectiveness,
achieving AUCs of 95.79% and 96.05%, respectively, and outperforming three
state-of-the-art baselines.
|
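The patch-level scoring and ranking step can be sketched as follows: divide an image and its reconstruction into patches, score each patch, and rank. This is a schematic stand-in; the paper scores patch authenticity with a GAN discriminator rather than the plain MSE used here:

```python
import numpy as np

def patch_scores(image, recon, patch=8):
    """Split an image and its reconstruction into non-overlapping patches
    and score each patch by mean squared reconstruction error."""
    h, w = image.shape
    scores = {}
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            diff = image[i:i+patch, j:j+patch] - recon[i:i+patch, j:j+patch]
            scores[(i, j)] = float(np.mean(diff ** 2))
    return scores

rng = np.random.default_rng(0)
img = rng.normal(size=(32, 32))
recon = img + 0.01 * rng.normal(size=(32, 32))   # near-perfect reconstruction...
recon[8:16, 16:24] += 1.0                        # ...except one anomalous patch

ranked = sorted(patch_scores(img, recon).items(), key=lambda kv: -kv[1])
print(ranked[0][0])  # (8, 16): the anomalous patch ranks first
```

Ranking patches this way is what lets small abnormal regions stand out even when the whole-image reconstruction error is low.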
2501.17917
|
Deep Ensembles Secretly Perform Empirical Bayes
|
cs.LG cs.AI stat.ML
|
Quantifying uncertainty in neural networks is a highly relevant problem that
is essential to many applications. The two predominant paradigms to tackle this
task are Bayesian neural networks (BNNs) and deep ensembles. Despite some
similarities between these two approaches, they are typically surmised to lack
a formal connection and are thus understood as fundamentally different. BNNs
are often touted as more principled due to their reliance on the Bayesian
paradigm, whereas ensembles are perceived as more ad-hoc; yet, deep ensembles
tend to empirically outperform BNNs, with no satisfying explanation as to why
this is the case. In this work we bridge this gap by showing that deep
ensembles perform exact Bayesian averaging with a posterior obtained with an
implicitly learned data-dependent prior. In other words, deep ensembles are
Bayesian, or more specifically, they implement an empirical Bayes procedure
wherein the prior is learned from the data. This perspective offers two main
benefits: (i) it theoretically justifies deep ensembles and thus provides an
explanation for their strong empirical performance; and (ii) inspection of the
learned prior reveals it is given by a mixture of point masses -- the use of
such a strong prior helps elucidate observed phenomena about ensembles.
Overall, our work delivers a newfound understanding of deep ensembles that is
not only of interest in and of itself, but is also likely to generate
future insights that drive empirical improvements for these models.
|
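The mechanical side of the claimed equivalence is easy to see: averaging the members' predictive distributions is exactly Bayesian model averaging under a posterior that is a uniform mixture of point masses at the ensemble members. A numerical illustration of that identity (not the paper's derivation):

```python
import numpy as np

# Each ensemble member theta_m defines a predictive distribution p(y|x, theta_m).
rng = np.random.default_rng(0)
M, n_classes = 5, 3
member_probs = rng.dirichlet(np.ones(n_classes), size=M)  # M predictive dists

ensemble_pred = member_probs.mean(axis=0)     # standard deep-ensemble average

# Bayesian model average under a uniform mixture of point masses at theta_m.
weights = np.full(M, 1.0 / M)
bma_pred = weights @ member_probs

print(np.allclose(ensemble_pred, bma_pred))   # True: the two coincide
```

The paper's contribution is showing that this point-mass mixture is the posterior of an empirical Bayes procedure with a learned prior, not merely an algebraic restatement.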
2501.17951
|
An iterative spectral algorithm for digraph clustering
|
physics.soc-ph cs.SI
|
Graph clustering is a fundamental technique in data analysis with
applications in many different fields. While there is a large body of work on
clustering undirected graphs, the problem of clustering directed graphs is much
less understood. The analysis is more complex in the directed graph case for
two reasons: the clustering must preserve directional information in the
relationships between clusters, and directed graphs have non-Hermitian
adjacency matrices whose properties are less conducive to traditional spectral
methods. Here we consider the problem of partitioning the vertex set of a
directed graph into $k\ge 2$ clusters so that edges between different clusters
tend to follow the same direction. We present an iterative algorithm based on
spectral methods applied to new Hermitian representations of directed graphs.
Our algorithm performs favourably against the state-of-the-art, both on
synthetic and real-world data sets. Additionally, it is able to identify a
"meta-graph" of $k$ vertices that represents the higher-order relations between
clusters in a directed graph. We showcase this capability on data sets
pertaining to food webs, biological neural networks, and the online card game
Hearthstone.
|
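The paper builds on Hermitian representations of digraphs; one simple encoding keeps edge presence in the real part and edge direction in the imaginary part, so that standard spectral machinery (real eigenvalues, orthogonal eigenvectors) applies. A sketch with a directed 4-cycle; the paper's new representations are more elaborate:

```python
import numpy as np

# A directed graph's adjacency matrix A is generally non-Hermitian.
# One simple Hermitian encoding:
#   H = (A + A.T) + i * (A - A.T),  which satisfies H == H.conj().T
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)     # a directed 4-cycle

H = (A + A.T) + 1j * (A - A.T)
print(np.allclose(H, H.conj().T))             # True: H is Hermitian

eigvals = np.linalg.eigvalsh(H)               # real spectrum, ascending order
print(eigvals)  # [-2. -2.  2.  2.]
```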
2501.17962
|
Agricultural Industry Initiatives on Autonomy: How collaborative
initiatives of VDMA and AEF can facilitate complexity in domain crossing
harmonization needs
|
cs.CY cs.RO cs.SY eess.SY
|
The agricultural industry is undergoing a significant transformation with the
increasing adoption of autonomous technologies. Addressing complex challenges
related to safety and security, components and validation procedures, and
liability distribution is essential to facilitate the adoption of autonomous
technologies. This paper explores the collaborative groups and initiatives
undertaken to address these challenges. These groups investigate, inter alia,
three focal topics: 1) describing the functional architecture of the
operational range, 2) defining the work context, i.e., the realistic scenarios
that emerge in various agricultural applications, and 3) specifying the static
and dynamic cases that must be detected by sensor sets. Linked by the Agricultural
Operational Design Domain (Agri-ODD), use case descriptions, risk analysis, and
questions of liability can be handled. By providing an overview of these
collaborative initiatives, this paper aims to highlight the joint development
of autonomous agricultural systems that enhance the overall efficiency of
farming operations.
|
2501.17963
|
Physics-Grounded Differentiable Simulation for Soft Growing Robots
|
cs.RO
|
Soft-growing robots (i.e., vine robots) are a promising class of soft robots
that allow for navigation and growth in tightly confined environments. However,
these robots remain challenging to model and control due to the complex
interplay of the inflated structure and inextensible materials, which leads to
obstacles for autonomous operation and design optimization. Although there
exist simulators for these systems that have achieved qualitative and
quantitative success in matching high-level behavior, they still often fail to
capture realistic vine robot shapes using simplified parameter models and have
difficulties in high-throughput simulation necessary for planning and parameter
optimization. We propose a differentiable simulator for these systems, enabling
the use of the simulator "in-the-loop" of gradient-based optimization
approaches to address the issues listed above. With the more complex parameter
fitting made possible by this approach, we experimentally validate and
integrate a closed-form nonlinear stiffness model for thin-walled inflated
tubes based on a first-principles approach to local material wrinkling. Our
simulator also takes advantage of data-parallel operations by leveraging
existing differentiable computation frameworks, allowing multiple simultaneous
rollouts. We demonstrate the feasibility of using a physics-grounded nonlinear
stiffness model within our simulator, and how it can be an effective tool in
sim-to-real transfer. We provide our implementation open source.
|
2501.17965
|
Variational Combinatorial Sequential Monte Carlo for Bayesian
Phylogenetics in Hyperbolic Space
|
cs.LG stat.ML
|
Hyperbolic space naturally encodes hierarchical structures such as
phylogenies (binary trees), where inward-bending geodesics reflect paths
through least common ancestors, and the exponential growth of neighborhoods
mirrors the super-exponential scaling of topologies. This scaling challenge
limits the efficiency of Euclidean-based approximate inference methods.
Motivated by the geometric connections between trees and hyperbolic space, we
develop novel hyperbolic extensions of two sequential search algorithms:
Combinatorial and Nested Combinatorial Sequential Monte Carlo (\textsc{Csmc}
and \textsc{Ncsmc}). Our approach introduces consistent and unbiased
estimators, along with variational inference methods (\textsc{H-Vcsmc} and
\textsc{H-Vncsmc}), which outperform their Euclidean counterparts. Empirical
results demonstrate improved speed, scalability and performance in
high-dimensional phylogenetic inference tasks.
|
2501.17968
|
Online Trajectory Replanner for Dynamically Grasping Irregular Objects
|
cs.RO
|
This paper presents a new trajectory replanner for grasping irregular
objects. Unlike conventional grasping tasks where the object's geometry is
assumed simple, we aim to achieve a "dynamic grasp" of the irregular objects,
which requires continuous adjustment during the grasping process. To
effectively handle irregular objects, we propose a trajectory optimization
framework that comprises two phases. First, within a specified time limit of
10 s, initial offline trajectories are computed for a seamless motion from an
initial configuration of the robot to grasp the object and deliver it to a
pre-defined target location. Second, fast online trajectory optimization
updates the robot trajectories in real time within 100 ms. This helps to mitigate
pose estimation errors from the vision system. To account for model
inaccuracies, disturbances, and other non-modeled effects, trajectory tracking
controllers for both the robot and the gripper are implemented to execute the
optimal trajectories from the proposed framework. The intensive experimental
results effectively demonstrate the performance of our trajectory planning
framework in both simulation and real-world scenarios.
|
2501.17969
|
LLMs can be Fooled into Labelling a Document as Relevant (best café
near me; this paper is perfectly relevant)
|
cs.IR
|
LLMs are increasingly being used to assess the relevance of information
objects. This work reports on experiments to study the labelling of short texts
(i.e., passages) for relevance, using multiple open-source and proprietary
LLMs. While the overall agreement of some LLMs with human judgements is
comparable to human-to-human agreement measured in previous research, LLMs are
more likely to label passages as relevant compared to human judges, indicating
that LLM labels denoting non-relevance are more reliable than those indicating
relevance.
This observation prompts us to further examine cases where human judges and
LLMs disagree, particularly when the human judge labels the passage as
non-relevant and the LLM labels it as relevant. Results show a tendency for
many LLMs to label passages that include the original query terms as relevant.
We, therefore, conduct experiments to inject query words into random and
irrelevant passages, not unlike the way we inserted the query "best café near
me" into this paper. The results show that LLMs are highly influenced by the
presence of query words in the passages under assessment, even if the wider
passage has no relevance to the query. This tendency of LLMs to be fooled by
the mere presence of query words demonstrates a weakness in our current
measures of LLM labelling: relying on overall agreement misses important
patterns of failures. There is a real risk of bias in LLM-generated relevance
labels and, therefore, a risk of bias in rankers trained on those labels.
We also investigate the effects of deliberately manipulating LLMs by
instructing them to label passages as relevant, similar to the instruction
"this paper is perfectly relevant" inserted above. We find that such
manipulation influences the performance of some LLMs, highlighting the critical
need to consider potential vulnerabilities when deploying LLMs in real-world
applications.
|
2501.17974
|
Think Smarter not Harder: Adaptive Reasoning with Inference Aware
Optimization
|
cs.AI
|
Solving mathematics problems has been an intriguing capability of large
language models, and many efforts have been made to improve reasoning by
extending reasoning length, such as through self-correction and extensive long
chain-of-thoughts. While promising in problem-solving, advanced long reasoning
chain models exhibit an undesired single-modal behavior, where trivial
questions require unnecessarily tedious long chains of thought. In this work,
we propose a way to allow models to be aware of inference budgets by
formulating it as utility maximization with respect to an inference budget
constraint, hence naming our algorithm Inference Budget-Constrained Policy
Optimization (IBPO). In a nutshell, models fine-tuned through IBPO learn to
``understand'' the difficulty of queries and allocate inference budgets to
harder ones. With different inference budgets, our best models are able to have
a $4.14$\% and $5.74$\% absolute improvement ($8.08$\% and $11.2$\% relative
improvement) on MATH500 using $2.16$x and $4.32$x inference budgets
respectively, relative to LLaMA3.1 8B Instruct. These improvements are
approximately $2$x those of self-consistency under the same budgets.
|
2501.17976
|
KoopAGRU: A Koopman-based Anomaly Detection in Time-Series using Gated
Recurrent Units
|
cs.LG
|
Anomaly detection in real-world time-series data is a challenging task due to
the complex and nonlinear temporal dynamics involved. This paper introduces
KoopAGRU, a new deep learning model designed to tackle this problem by
combining Fast Fourier Transform (FFT), Deep Dynamic Mode Decomposition
(DeepDMD), and Koopman theory. FFT allows KoopAGRU to decompose temporal data
into time-variant and time-invariant components, providing precise modeling of
complex patterns. To better control these two components, KoopAGRU utilizes
Gated Recurrent Unit (GRU) encoders to learn Koopman observables, enhancing
detection capability across multiple temporal scales. KoopAGRU is trained in a
single process and offers fast inference times. Extensive tests show that
KoopAGRU outperforms other leading methods, achieving a new average F1-score of
90.88\% on well-known time-series anomaly detection benchmark datasets, and
proves to be efficient and reliable in detecting anomalies in real-world
scenarios.
|
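The FFT-based split into a time-invariant part and a time-variant residual can be sketched as follows; this is an illustrative reading of the abstract, not the authors' exact decomposition:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256, endpoint=False)
signal = np.sin(2 * np.pi * 3 * t) + 0.3 * rng.normal(size=t.size)

spec = np.fft.rfft(signal)
power = np.abs(spec)

# "Time-invariant" part: the dominant spectral component; the residual is
# treated as the "time-variant" part.
keep = np.zeros_like(spec)
dominant = int(np.argmax(power[1:])) + 1   # skip the DC bin
keep[dominant] = spec[dominant]

invariant = np.fft.irfft(keep, n=signal.size)
variant = signal - invariant               # exact residual decomposition

print(np.allclose(invariant + variant, signal))  # True by construction
print(dominant)  # 3: the 3 Hz sine dominates the spectrum
```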
2501.17977
|
TransRAD: Retentive Vision Transformer for Enhanced Radar Object
Detection
|
cs.CV cs.SY eess.SY
|
Despite significant advancements in environment perception capabilities for
autonomous driving and intelligent robotics, cameras and LiDARs remain
notoriously unreliable in low-light conditions and adverse weather, which
limits their effectiveness. Radar serves as a reliable and low-cost sensor that
can effectively compensate for these limitations. However, radar-based object
detection has been underexplored due to the inherent weaknesses of radar data,
such as low resolution, high noise, and lack of visual information. In this
paper, we present TransRAD, a novel 3D radar object detection model designed to
address these challenges by leveraging the Retentive Vision Transformer (RMT)
to more effectively learn features from information-dense radar
Range-Azimuth-Doppler (RAD) data. Our approach leverages the Retentive
Manhattan Self-Attention (MaSA) mechanism provided by RMT to incorporate
explicit spatial priors, thereby enabling more accurate alignment with the
spatial saliency characteristics of radar targets in RAD data and achieving
precise 3D radar detection across Range-Azimuth-Doppler dimensions.
Furthermore, we propose Location-Aware NMS to effectively mitigate the common
issue of duplicate bounding boxes in deep radar object detection. The
experimental results demonstrate that TransRAD outperforms state-of-the-art
methods in both 2D and 3D radar detection tasks, achieving higher accuracy,
faster inference speed, and reduced computational complexity. Code is available
at https://github.com/radar-lab/TransRAD
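The abstract does not detail Location-Aware NMS; as background, here is a minimal sketch of the plain score-ordered NMS it refines (box coordinates, scores, and the 0.5 threshold are illustrative):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Plain score-ordered NMS: greedily keep the highest-scoring box and
    drop any remaining box overlapping it above `thresh`."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]: the near-duplicate box 1 is suppressed
```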
|
2501.17978
|
VoD-3DGS: View-opacity-Dependent 3D Gaussian Splatting
|
cs.CV cs.GR cs.LG
|
Reconstructing a 3D scene from images is challenging due to the different
ways light interacts with surfaces depending on the viewer's position and the
surface's material. In classical computer graphics, materials can be classified
as diffuse or specular, interacting with light differently. The standard 3D
Gaussian Splatting model struggles to represent view-dependent content, since
it cannot differentiate an object within the scene from the light interacting
with its specular surfaces, which produce highlights or reflections. In this
paper, we propose to extend the 3D Gaussian Splatting model by introducing an
additional symmetric matrix to enhance the opacity representation of each 3D
Gaussian. This improvement allows certain Gaussians to be suppressed based on
the viewer's perspective, resulting in a more accurate representation of
view-dependent reflections and specular highlights without compromising the
scene's integrity. By allowing the opacity to be view dependent, our enhanced
model achieves state-of-the-art performance on the Mip-NeRF, Tanks&Temples, Deep
Blending, and NeRF-Synthetic datasets without a significant loss in rendering
speed, achieving >60 FPS, and with only a minimal increase in memory usage.
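A toy illustration of view-dependent opacity via a symmetric matrix: modulating a Gaussian's base opacity by a quadratic form in the view direction. The exact VoD-3DGS parameterization is not given in the abstract, so the sigmoid squashing and the example matrix below are assumptions.

```python
import numpy as np

def view_dependent_opacity(base_opacity, M, view_dir):
    """Toy sketch: scale a Gaussian's base opacity by a sigmoid of the
    quadratic form d^T M d in the unit view direction d, with M symmetric.
    This only illustrates opacity varying with viewpoint; the actual
    VoD-3DGS formulation may differ."""
    d = np.asarray(view_dir, float)
    d = d / np.linalg.norm(d)
    gain = 1.0 / (1.0 + np.exp(-(d @ M @ d)))  # squash to (0, 1)
    return base_opacity * gain

M = np.diag([4.0, -4.0, 0.0])  # symmetric: opaque along x, faint along y
print(view_dependent_opacity(0.8, M, [1, 0, 0]))  # ~0.8 * sigmoid(+4)
print(view_dependent_opacity(0.8, M, [0, 1, 0]))  # ~0.8 * sigmoid(-4)
```

A Gaussian modelling a specular highlight can thus fade out for viewpoints from which the highlight is not visible.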
|
2501.17980
|
Limits to AI Growth: The Ecological and Social Consequences of Scaling
|
cs.CY cs.AI
|
The accelerating development and deployment of AI technologies depend on the
continued ability to scale their infrastructure. This has implied increasing
amounts of monetary investment and natural resources. Frontier AI applications
have thus resulted in rising financial, environmental, and social costs. While
the factors that AI scaling depends on approach their limits, the push for its
accelerated advancement and entrenchment continues. In this paper, we provide a
holistic review of AI scaling using four lenses (technical, economic,
ecological, and social) and review the relationships between these lenses to
explore the dynamics of AI growth. We do so by drawing on system dynamics
concepts including archetypes such as "limits to growth" to model the dynamic
complexity of AI scaling and synthesize several perspectives. Our work maps out
the entangled relationships between the technical, economic, ecological and
social perspectives and the apparent limits to growth. The analysis explains
how industry's responses to external limits enable continued (but temporary)
scaling and how this benefits Big Tech while externalizing social and
environmental damages. To avoid an "overshoot and collapse" trajectory, we
advocate for realigning priorities and norms around scaling to prioritize
sustainable and mindful advancements.
|
2501.17981
|
Can Generative LLMs Create Query Variants for Test Collections? An
Exploratory Study
|
cs.IR
|
This paper explores the utility of a Large Language Model (LLM) to
automatically generate queries and query variants from a description of an
information need. Given a set of information needs described as backstories, we
explore how similar the queries generated by the LLM are to those generated by
humans. We quantify the similarity using different metrics and examine how the
use of each set would contribute to document pooling when building test
collections. Our results show potential in using LLMs to generate query
variants. While they may not fully capture the wide variety of human-generated
variants, they generate similar sets of relevant documents, reaching up to
71.1% overlap at a pool depth of 100.
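The overlap figure can be illustrated with a toy pool-overlap computation; the exact metric definition is not given in the abstract, so the normalization by the human pool size below is an assumption:

```python
def pool_overlap(human_runs, llm_runs, depth=100):
    """Overlap between the document pools formed by two sets of ranked runs:
    |intersection| / |human pool| at a given pool depth (toy definition)."""
    human_pool = {doc for run in human_runs for doc in run[:depth]}
    llm_pool = {doc for run in llm_runs for doc in run[:depth]}
    return len(human_pool & llm_pool) / len(human_pool)

# Hypothetical ranked document lists for two query variants each:
human = [["d1", "d2", "d3"], ["d2", "d4"]]
llm = [["d1", "d5"], ["d6"]]
print(pool_overlap(human, llm, depth=2))  # 1 shared doc out of a 3-doc pool
```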
|
2501.17982
|
Belief Roadmaps with Uncertain Landmark Evanescence
|
cs.RO cs.AI
|
We would like a robot to navigate to a goal location while minimizing state
uncertainty. To aid the robot in this endeavor, maps provide a prior belief
over the location of objects and regions of interest. To localize itself within
the map, a robot identifies mapped landmarks using its sensors. However, as the
time between map creation and robot deployment increases, portions of the map
can become stale, and landmarks, once believed to be permanent, may disappear.
We refer to the propensity of a landmark to disappear as landmark evanescence.
Reasoning about landmark evanescence during path planning, and the associated
impact on localization accuracy, requires analyzing the presence or absence of
each landmark, leading to an exponential number of possible outcomes of a given
motion plan. To address this complexity, we develop BRULE, an extension of the
Belief Roadmap. During planning, we replace the belief over future robot poses
with a Gaussian mixture which is able to capture the effects of landmark
evanescence. Furthermore, we show that belief updates can be made efficient,
and that maintaining a random subset of mixture components is sufficient to
find high quality solutions. We demonstrate performance in simulated and
real-world experiments. Software is available at https://bit.ly/BRULE.
|
2501.17983
|
Efficient Feature Fusion for UAV Object Detection
|
cs.CV
|
Object detection in unmanned aerial vehicle (UAV) remote sensing images poses
significant challenges due to unstable image quality, small object sizes,
complex backgrounds, and environmental occlusions. Small objects, in
particular, occupy small portions of images, making their accurate detection
highly difficult. Existing multi-scale feature fusion methods address these
challenges to some extent by aggregating features across different resolutions.
However, they often fail to effectively balance the classification and
localization performance for small objects, primarily due to insufficient
feature representation and imbalanced network information flow. In this paper,
we propose a novel feature fusion framework specifically designed for UAV
object detection tasks to enhance both localization accuracy and classification
performance. The proposed framework integrates hybrid upsampling and
downsampling modules, enabling feature maps from different network depths to be
flexibly adjusted to arbitrary resolutions. This design facilitates cross-layer
connections and multi-scale feature fusion, ensuring improved representation of
small objects. Our approach leverages hybrid downsampling to enhance
fine-grained feature representation, improving spatial localization of small
targets, even under complex conditions. Simultaneously, the upsampling module
aggregates global contextual information, optimizing feature consistency across
scales and enhancing classification robustness in cluttered scenes.
Experimental results on two public UAV datasets demonstrate the effectiveness
of the proposed framework. Integrated into the YOLO-v10 model, our method
achieves a 2% improvement in average precision (AP) compared to the baseline
YOLO-v10 model, while maintaining the same number of parameters. These results
highlight the potential of our framework for accurate and efficient UAV object
detection.
|
2501.17987
|
Pressure Field Reconstruction with SIREN: A Mesh-Free Approach for Image
Velocimetry in Complex Noisy Environments
|
cs.CV physics.flu-dyn
|
This work presents a novel approach for pressure field reconstruction from
image velocimetry data using SIREN (Sinusoidal Representation Network),
emphasizing its effectiveness as an implicit neural representation in noisy
environments and its mesh-free nature. While we briefly assess two recently
proposed methods - one-shot matrix-omnidirectional integration (OS-MODI) and
Green's function integral (GFI) - the primary focus is on the advantages of the
SIREN approach. The OS-MODI technique performs well in noise-free conditions
and with structured meshes but struggles when applied to unstructured meshes
with high aspect ratios. Similarly, the GFI method encounters difficulties due
to singularities inherent in the Newtonian kernel. In contrast, the proposed
SIREN approach is a mesh-free method that directly reconstructs the pressure
field, bypassing the need for intrinsic grid connectivity and, hence,
avoiding the challenges associated with ill-conditioned cells and unstructured
meshes. This provides a distinct advantage over traditional mesh-based methods.
Moreover, it is shown that changes in the architecture of the SIREN can be used
to filter out inherent noise from velocimetry data. This work positions SIREN
as a robust and versatile solution for pressure reconstruction, particularly in
noisy environments characterized by the absence of mesh structure, opening new
avenues for innovative applications in this field.
|
2501.17991
|
Investigating the Monte-Carlo Tree Search Approach for the Job Shop
Scheduling Problem
|
cs.AI math.OC
|
The Job Shop Scheduling Problem (JSSP) is a well-known optimization problem
in manufacturing, where the goal is to determine the optimal sequence of jobs
across different machines to minimize a given objective. In this work, we focus
on minimizing the weighted sum of job completion times. We explore the
potential of Monte Carlo Tree Search (MCTS), a heuristic-based reinforcement
learning technique, to solve large-scale JSSPs, especially those with
recirculation. We propose several Markov Decision Process (MDP) formulations to
model the JSSP for the MCTS algorithm. In addition, we introduce a new
synthetic benchmark derived from real manufacturing data, which captures the
complexity of large, non-rectangular instances often encountered in practice.
Our experimental results show that MCTS effectively produces good-quality
solutions for large-scale JSSP instances, outperforming our constraint
programming approach.
|
2501.17992
|
Reinforcement-Learning Portfolio Allocation with Dynamic Embedding of
Market Information
|
q-fin.PM cs.LG
|
We develop a portfolio allocation framework that leverages deep learning
techniques to address challenges arising from high-dimensional, non-stationary,
and low-signal-to-noise market information. Our approach includes a dynamic
embedding method that reduces the non-stationary, high-dimensional state space
into a lower-dimensional representation. We design a reinforcement learning
(RL) framework that integrates generative autoencoders and online meta-learning
to dynamically embed market information, enabling the RL agent to focus on the
most impactful parts of the state space for portfolio allocation decisions.
Empirical analysis based on the top 500 U.S. stocks demonstrates that our
framework outperforms common portfolio benchmarks and the predict-then-optimize
(PTO) approach using machine learning, particularly during periods of market
stress. Traditional factor models do not fully explain this superior
performance. The framework's ability to time volatility reduces its market
exposure during turbulent times. Ablation studies confirm the robustness of
this performance across various reinforcement learning algorithms.
Additionally, the embedding and meta-learning techniques effectively manage the
complexities of high-dimensional, noisy, and non-stationary financial data,
enhancing both portfolio performance and risk management.
|
2501.17994
|
InnerThoughts: Disentangling Representations and Predictions in Large
Language Models
|
cs.CL cs.LG
|
Large language models (LLMs) contain substantial factual knowledge which is
commonly elicited by multiple-choice question-answering prompts. Internally,
such models process the prompt through multiple transformer layers, building
varying representations of the problem within their hidden states. Ultimately,
however, only the hidden state corresponding to the final layer and token
position is used to predict the answer label. In this work, we propose instead
to learn a small separate neural network predictor module on a collection of
training questions that takes the hidden states from all the layers at the last
temporal position as input and outputs predictions. In effect, such a framework
disentangles the representational abilities of LLMs from their predictive
abilities. On a collection of hard benchmarks, our method achieves considerable
improvements in performance, sometimes comparable to supervised fine-tuning
procedures, but at a fraction of the computational cost.
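A minimal numpy sketch of the idea: concatenate the last-position hidden states from every layer and feed them to a small trainable predictor. All shapes and the two-layer MLP architecture are illustrative assumptions, not the paper's exact module.

```python
import numpy as np

rng = np.random.default_rng(0)

def predictor_forward(hidden_states, W1, b1, W2, b2):
    """Toy predictor: flatten the per-layer last-token hidden states and
    pass them through a tiny two-layer MLP to get answer-label logits."""
    flat = hidden_states.reshape(hidden_states.shape[0], -1)
    h = np.maximum(flat @ W1 + b1, 0.0)  # ReLU hidden layer
    return h @ W2 + b2                   # logits over answer labels

n_layers, d, n_labels = 12, 64, 4        # assumed sizes
W1 = rng.normal(size=(n_layers * d, 32)); b1 = np.zeros(32)
W2 = rng.normal(size=(32, n_labels));     b2 = np.zeros(n_labels)
states = rng.normal(size=(2, n_layers, d))  # batch of 2 questions
print(predictor_forward(states, W1, b1, W2, b2).shape)  # (2, 4)
```

Only this small head is trained; the LLM supplying `states` stays frozen, which is what keeps the cost far below fine-tuning.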
|
2501.18005
|
Fault Localization via Fine-tuning Large Language Models with Mutation
Generated Stack Traces
|
cs.SE cs.LG
|
Abrupt and unexpected terminations of software are termed software
crashes and can be challenging to analyze. Finding the root cause requires
extensive manual effort and expertise to connect information sources like stack
traces, source code, and logs. Typical approaches to fault localization require
either test failures or source code. Crashes occurring in production
environments, such as that of SAP HANA, provide solely crash logs and stack
traces. We present a novel approach to localize faults based only on the stack
trace information and no additional runtime information, by fine-tuning large
language models (LLMs). We address complex cases where the root cause of a
crash differs from the technical cause, and is not located in the innermost
frame of the stack trace. As the number of historic crashes is insufficient to
fine-tune LLMs, we augment our dataset by leveraging code mutators to inject
synthetic crashes into the code base. By fine-tuning on 64,369 crashes
resulting from 4.1 million mutations of the HANA code base, we can correctly
predict the root cause location of a crash with an accuracy of 66.9\% while
baselines only achieve 12.6\% and 10.6\%. We substantiate the generalizability of
our approach by evaluating on two additional open-source databases, SQLite and
DuckDB, achieving accuracies of 63% and 74%, respectively. Across all our
experiments, fine-tuning consistently outperformed prompting non-finetuned LLMs
for localizing faults in our datasets.
|
2501.18006
|
Topological Signatures of Adversaries in Multimodal Alignments
|
cs.LG cs.AI cs.CR
|
Multimodal Machine Learning systems, particularly those aligning text and
image data like CLIP/BLIP models, have become increasingly prevalent, yet
remain susceptible to adversarial attacks. While substantial research has
addressed adversarial robustness in unimodal contexts, defense strategies for
multimodal systems are underexplored. This work investigates the topological
signatures that arise between image and text embeddings and shows how
adversarial attacks disrupt their alignment, introducing distinctive
signatures. We specifically leverage persistent homology and introduce two
novel Topological-Contrastive losses based on Total Persistence and Multi-scale
kernel methods to analyze the topological signatures introduced by adversarial
perturbations. We observe a pattern of monotonic changes in the proposed
topological losses emerging in a wide range of attacks on image-text
alignments, as more adversarial samples are introduced in the data. By
designing an algorithm to back-propagate these signatures to input samples, we
are able to integrate these signatures into Maximum Mean Discrepancy tests,
creating a novel class of tests that leverage topological signatures for better
adversarial detection.
|
2501.18009
|
Large Language Models Think Too Fast To Explore Effectively
|
cs.AI q-bio.NC
|
Large Language Models have exhibited many intellectual capacities. While
numerous benchmarks assess their intelligence, limited attention has been given
to their ability to explore, an essential capacity for discovering new
information and adapting to novel environments in both natural and artificial
systems. The extent to which LLMs can effectively explore, particularly in
open-ended tasks, remains unclear. This study investigates whether LLMs can
surpass humans in exploration during an open-ended task, using Little Alchemy 2
as a paradigm, where agents combine elements to discover new ones. Results show
most LLMs underperform compared to humans, except for the o1 model, with the
traditional LLMs relying primarily on uncertainty-driven strategies, unlike
humans who balance uncertainty and empowerment. Representational analysis of
the models with Sparse Autoencoders revealed that uncertainty and choices are
represented at earlier transformer blocks, while empowerment values are
processed later, causing LLMs to think too fast and make premature decisions,
hindering effective exploration. These findings shed light on the limitations
of LLM exploration and suggest directions for improving their adaptability.
|
2501.18011
|
Anatomy Might Be All You Need: Forecasting What to Do During Surgery
|
cs.CV cs.AI
|
Surgical guidance can be delivered in various ways. In neurosurgery, spatial
guidance and orientation are predominantly achieved through neuronavigation
systems that reference pre-operative MRI scans. Recently, there has been
growing interest in providing live guidance by analyzing video feeds from tools
such as endoscopes. Existing approaches, including anatomy detection,
orientation feedback, phase recognition, and visual question-answering,
primarily focus on aiding surgeons in assessing the current surgical scene.
This work aims to provide guidance on a finer scale by forecasting the
trajectory of the surgical instrument, essentially addressing the question of
what to do next. To address this task, we propose a
model that not only leverages the historical locations of surgical instruments
but also integrates anatomical features. Importantly, our work does not rely on
explicit ground truth labels for instrument trajectories. Instead, the ground
truth is generated by a detection model trained to detect both anatomical
structures and instruments within surgical videos of a comprehensive dataset
containing pituitary surgery videos. By analyzing the interaction between
anatomy and instrument movements in these videos and forecasting future
instrument movements, we show that anatomical features are a valuable asset in
addressing this challenging task. To the best of our knowledge, this work is
the first attempt to address this task for manually operated surgeries.
|
2501.18012
|
When less is more: evolving large neural networks from small ones
|
cs.LG cond-mat.dis-nn
|
In contrast to conventional artificial neural networks, which are large and
structurally static, we study feed-forward neural networks that are small and
dynamic, whose nodes can be added (or subtracted) during training. A single
neuronal weight in the network controls the network's size, while the weight
itself is optimized by the same gradient-descent algorithm that optimizes the
network's other weights and biases, but with a size-dependent objective or loss
function. We train and evaluate such Nimble Neural Networks on nonlinear
regression and classification tasks where they outperform the corresponding
static networks. Growing networks to minimal, appropriate, or optimal sizes
while training elucidates network dynamics and contrasts with pruning large
networks after training but before deployment.
|
2501.18015
|
A Proximal Operator for Inducing 2:4-Sparsity
|
cs.LG
|
Recent hardware advancements in AI accelerators and GPUs make it possible to
compute sparse matrix multiplications efficiently, especially when 2 out of 4
consecutive weights are set to zero. However, this so-called 2:4 sparsity
usually comes at the cost of decreased model accuracy. We derive a regularizer
that exploits the
local correlation of features to find better sparsity masks in trained models.
We minimize the regularizer jointly with a local squared loss by deriving the
proximal operator for which we show that it has an efficient solution in the
2:4-sparse case. After optimizing the mask, we use masked gradient updates to
further minimize the local squared loss. We illustrate our method on toy
problems and apply it to pruning entire large language models up to 70B
parameters. On models up to 13B we improve over previous state of the art
algorithms, whilst on 70B models we match their performance.
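For context, the 2:4 pattern itself can be illustrated with the standard magnitude-based mask (keep the 2 largest-magnitude weights in each group of 4); the paper's proximal operator refines mask selection beyond this simple heuristic:

```python
import numpy as np

def two_four_mask(weights):
    """Keep the 2 largest-magnitude entries in each group of 4 consecutive
    weights, zeroing the other two (the 2:4 sparsity pattern)."""
    w = np.asarray(weights, dtype=float)
    groups = w.reshape(-1, 4)                       # one row per group of 4
    # indices of the 2 smallest magnitudes in each group -> zeroed
    drop = np.argsort(np.abs(groups), axis=1)[:, :2]
    mask = np.ones_like(groups)
    np.put_along_axis(mask, drop, 0.0, axis=1)
    return (groups * mask).reshape(w.shape)

w = np.array([0.9, -0.1, 0.05, -1.2, 0.3, 0.2, -0.25, 0.0])
print(two_four_mask(w))  # [ 0.9  0.  0. -1.2  0.3  0. -0.25  0. ]
```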
|
2501.18016
|
Digital Twin-Enabled Real-Time Control in Robotic Additive Manufacturing
via Soft Actor-Critic Reinforcement Learning
|
cs.RO cs.AI cs.LG cs.SY eess.SY
|
Smart manufacturing systems increasingly rely on adaptive control mechanisms
to optimize complex processes. This research presents a novel approach
integrating Soft Actor-Critic (SAC) reinforcement learning with digital twin
technology to enable real-time process control in robotic additive
manufacturing. We demonstrate our methodology using a Viper X300s robot arm,
implementing two distinct control scenarios: static target acquisition and
dynamic trajectory following. The system architecture combines Unity's
simulation environment with ROS2 for seamless digital twin synchronization,
while leveraging transfer learning to efficiently adapt trained models across
tasks. Our hierarchical reward structure addresses common reinforcement
learning challenges including local minima avoidance, convergence acceleration,
and training stability. Experimental results show rapid policy convergence and
robust task execution in both simulated and physical environments, with
performance metrics including cumulative reward, value prediction accuracy,
policy loss, and discrete entropy coefficient demonstrating the effectiveness
of our approach. This work advances the integration of reinforcement learning
with digital twins for industrial robotics applications, providing a framework
for enhanced adaptive real-time control for smart additive manufacturing
processes.
|
2501.18018
|
Perforated Backpropagation: A Neuroscience Inspired Extension to
Artificial Neural Networks
|
cs.NE cs.LG q-bio.NC
|
The neurons of artificial neural networks were originally invented when much
less was known about biological neurons than is known today. Our work explores
a modification to the core neuron unit to make it more parallel to a biological
neuron. The modification is made with the knowledge that biological dendrites
are not simply passive activation funnels, but also compute complex non-linear
functions as they transmit activation to the cell body. The paper explores a
novel system of "Perforated" backpropagation empowering the artificial neurons
of deep neural networks to achieve better performance while coding for the same
features they coded for in the original architecture. After an initial network
training phase, additional "Dendrite Nodes" are added to the network and
separately trained with a different objective: to correlate their output with
the remaining error of the original neurons. The trained Dendrite Nodes are
then frozen, and the original neurons are further trained, now taking into
account the additional error signals provided by the Dendrite Nodes. The cycle
of training the original neurons and then adding and training Dendrite Nodes
can be repeated several times until satisfactory performance is achieved. Our
algorithm was successfully added to modern state-of-the-art PyTorch networks
across multiple domains, improving upon original accuracies and allowing for
significant model compression without a loss in accuracy.
|
2501.18028
|
KNN and K-means in Gini Prametric Spaces
|
cs.LG
|
This paper introduces innovative enhancements to the K-means and K-nearest
neighbors (KNN) algorithms based on the concept of Gini prametric spaces.
Unlike traditional distance metrics, Gini-based measures incorporate both
value-based and rank-based information, improving robustness to noise and
outliers. The main contributions of this work include: proposing a Gini-based
measure that captures both rank information and value distances; presenting a
Gini K-means algorithm that is proven to converge and demonstrates resilience
to noisy data; and introducing a Gini KNN method that performs competitively
with state-of-the-art approaches such as Hassanat's distance in noisy
environments. Experimental evaluations on 14 datasets from the UCI repository
demonstrate the superior performance and efficiency of Gini-based algorithms in
clustering and classification tasks. This work opens new avenues for leveraging
rank-based measures in machine learning and statistical analysis.
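A toy dissimilarity in the same spirit, mixing value differences with rank differences, shows why rank information buys robustness to outliers; the paper's actual Gini prametric is not specified in the abstract, so this weighted mix is purely illustrative:

```python
import numpy as np

def ranks(x):
    """Rank of each entry within its own vector (0 = smallest)."""
    order = np.argsort(x)
    r = np.empty_like(order)
    r[order] = np.arange(len(x))
    return r

def value_rank_distance(a, b, alpha=0.5):
    """Toy dissimilarity mixing mean absolute value differences with
    normalized mean rank differences."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    value_part = np.abs(a - b).mean()
    rank_part = np.abs(ranks(a) - ranks(b)).mean() / max(len(a) - 1, 1)
    return alpha * value_part + (1 - alpha) * rank_part

# An outlier inflates the value part but leaves the rank part untouched:
clean = [1.0, 2.0, 3.0, 4.0]
noisy = [1.0, 2.0, 3.0, 40.0]
print(value_rank_distance(clean, noisy))  # 4.5
```

Here the outlier 40.0 dominates the value term, but the rank term stays zero because the ordering is unchanged, so the mixed measure grows more slowly than a pure value distance would.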
|
2501.18033
|
Generative AI for Vision: A Comprehensive Study of Frameworks and
Applications
|
cs.CV
|
Generative AI is transforming image synthesis, enabling the creation of
high-quality, diverse, and photorealistic visuals across industries like
design, media, healthcare, and autonomous systems. Advances in techniques such
as image-to-image translation, text-to-image generation, domain transfer, and
multimodal alignment have broadened the scope of automated visual content
creation, supporting a wide spectrum of applications. These advancements are
driven by models like Generative Adversarial Networks (GANs), conditional
frameworks, and diffusion-based approaches such as Stable Diffusion. This work
presents a structured classification of image generation techniques based on
the nature of the input, organizing methods by input modalities like noisy
vectors, latent representations, and conditional inputs. We explore the
principles behind these models, highlight key frameworks including DALL-E,
ControlNet, and DeepSeek Janus-Pro, and address challenges such as
computational costs, data biases, and output alignment with user intent. By
offering this input-centric perspective, this study bridges technical depth
with practical insights, providing researchers and practitioners with a
comprehensive resource to harness generative AI for real-world applications.
|
2501.18039
|
Online Nonstochastic Control with Convex Safety Constraints
|
math.OC cs.SY eess.SY
|
This paper considers the online nonstochastic control problem of a linear
time-invariant system under convex state and input constraints that need to be
satisfied at all times. We propose an algorithm called Online Gradient Descent
with Buffer Zone for Convex Constraints (OGD-BZC), designed to handle scenarios
where the system operates within general convex safety constraints. We
demonstrate that OGD-BZC, with appropriate parameter selection, satisfies all
the safety constraints under bounded adversarial disturbances. Additionally, to
evaluate the performance of OGD-BZC, we define the regret with respect to the
best safe linear policy in hindsight. We prove that OGD-BZC achieves
$\tilde{O}(\sqrt{T})$ regret given proper parameter choices. Our numerical results
highlight the efficacy and robustness of the proposed algorithm.
|
2501.18045
|
From tools to thieves: Measuring and understanding public perceptions of
AI through crowdsourced metaphors
|
cs.CY cs.AI cs.CL cs.HC
|
How has the public responded to the increasing prevalence of artificial
intelligence (AI)-based technologies? We investigate public perceptions of AI
by collecting over 12,000 responses over 12 months from a nationally
representative U.S. sample. Participants provided open-ended metaphors
reflecting their mental models of AI, a methodology that overcomes the
limitations of traditional self-reported measures. Using a mixed-methods
approach combining quantitative clustering and qualitative coding, we identify
20 dominant metaphors shaping public understanding of AI. To analyze these
metaphors systematically, we present a scalable framework integrating language
modeling (LM)-based techniques to measure key dimensions of public perception:
anthropomorphism (attribution of human-like qualities), warmth, and competence.
We find that Americans generally view AI as warm and competent, and that over
the past year, perceptions of AI's human-likeness and warmth have significantly
increased ($+34\%, r = 0.80, p < 0.01; +41\%, r = 0.62, p < 0.05$).
Furthermore, these implicit perceptions, along with the identified dominant
metaphors, strongly predict trust in and willingness to adopt AI ($r^2 = 0.21,
0.18, p < 0.001$). We further explore how differences in metaphors and implicit
perceptions--such as the higher propensity of women, older individuals, and
people of color to anthropomorphize AI--shed light on demographic disparities
in trust and adoption. In addition to our dataset and framework for tracking
evolving public attitudes, we provide actionable insights on using metaphors
for inclusive and responsible AI development.
|
2501.18049
|
Joint Pricing and Resource Allocation: An Optimal Online-Learning
Approach
|
cs.LG math.OC stat.ML
|
We study an online learning problem on dynamic pricing and resource
allocation, where we make joint pricing and inventory decisions to maximize the
overall net profit. We consider the stochastic dependence of demands on the
price, which complicates the resource allocation process and introduces
significant non-convexity and non-smoothness to the problem. To solve this
problem, we develop an efficient algorithm that utilizes a "Lower-Confidence
Bound (LCB)" meta-strategy over multiple online convex optimization (OCO)
agents. Our algorithm achieves
$\tilde{O}(\sqrt{Tmn})$ regret (for $m$ suppliers and $n$ consumers), which is
optimal with respect to the time horizon $T$. Our results illustrate an
effective integration of statistical learning methodologies with complex
operations research problems.
|
2501.18052
|
SAeUron: Interpretable Concept Unlearning in Diffusion Models with
Sparse Autoencoders
|
cs.LG cs.AI
|
Diffusion models, while powerful, can inadvertently generate harmful or
undesirable content, raising significant ethical and safety concerns. Recent
machine unlearning approaches offer potential solutions but often lack
transparency, making it difficult to understand the changes they introduce to
the base model. In this work, we introduce SAeUron, a novel method leveraging
features learned by sparse autoencoders (SAEs) to remove unwanted concepts in
text-to-image diffusion models. First, we demonstrate that SAEs, trained in an
unsupervised manner on activations from multiple denoising timesteps of the
diffusion model, capture sparse and interpretable features corresponding to
specific concepts. Building on this, we propose a feature selection method that
enables precise interventions on model activations to block targeted content
while preserving overall performance. Evaluation with the competitive
UnlearnCanvas benchmark on object and style unlearning highlights SAeUron's
state-of-the-art performance. Moreover, we show that with a single SAE, we can
remove multiple concepts simultaneously and that in contrast to other methods,
SAeUron mitigates the possibility of generating unwanted content, even under
adversarial attack. Code and checkpoints are available at:
https://github.com/cywinski/SAeUron.
|
2501.18055
|
Current Pathology Foundation Models are unrobust to Medical Center
Differences
|
cs.LG cs.AI
|
Pathology Foundation Models (FMs) hold great promise for healthcare. Before
they can be used in clinical practice, it is essential to ensure they are
robust to variations between medical centers. We measure whether pathology FMs
focus on biological features like tissue and cancer type, or on the well-known
confounding medical center signatures introduced by staining procedures and
other differences. We introduce the Robustness Index. This novel robustness
metric reflects to what degree biological features dominate confounding
features. Ten current publicly available pathology FMs are evaluated. We find
that all current pathology foundation models evaluated represent the medical
center to a strong degree. Significant differences in the robustness index are
observed. Only one model so far has a robustness index greater than one,
meaning biological features dominate confounding features, but only slightly. A
quantitative approach to measure the influence of medical center differences on
FM-based prediction performance is described. We analyze the impact of
unrobustness on classification performance of downstream models, and find that
cancer-type classification errors are not random, but specifically attributable
to same-center confounders: images of other classes from the same medical
center. We visualize FM embedding spaces, and find these are more strongly
organized by medical centers than by biological factors. As a consequence, the
medical center of origin is predicted more accurately than the tissue source
and cancer type. The robustness index introduced here is provided with the aim
of advancing progress towards clinical adoption of robust and reliable
pathology FMs.
|
2501.18056
|
RL-based Query Rewriting with Distilled LLM for online E-Commerce
Systems
|
cs.IR
|
Query rewriting (QR) is a critical technique in e-commerce search, addressing
the lexical gap between user queries and product descriptions to enhance search
performance. Existing QR approaches typically fall into two categories:
discriminative models and generative methods leveraging large language models
(LLMs). Discriminative models often struggle with natural language
understanding and offer limited flexibility in rewriting, while generative
LLMs, despite producing high-quality rewrites, face high inference latency and
cost in online settings. These limitations force offline deployment, making
them vulnerable to issues like information staleness and semantic drift. To
overcome these challenges, we propose a novel hybrid pipeline for QR that
balances efficiency and effectiveness. Our approach combines offline knowledge
distillation to create a lightweight but efficient student model with online
reinforcement learning (RL) to refine query rewriting dynamically using
real-time feedback. A key innovation is the use of LLMs as simulated human
feedback, enabling scalable reward signals and cost-effective evaluation
without manual annotations. Experimental results on the Amazon ESCI dataset
demonstrate significant improvements in query relevance, diversity, and
adaptability, as well as positive feedback from the LLM simulation. This work
contributes to advancing LLM capabilities for domain-specific applications,
offering a robust solution for dynamic and complex e-commerce search
environments.
|
2501.18058
|
Power-Efficient Over-the-Air Aggregation with Receive Beamforming for
Federated Learning
|
cs.IT eess.SP math.IT
|
This paper studies power-efficient uplink transmission design for federated
learning (FL) that employs over-the-air analog aggregation and multi-antenna
beamforming at the server. We jointly optimize device transmit weights and
receive beamforming at each FL communication round to minimize the total device
transmit power while ensuring convergence in FL training. Through our
convergence analysis, we establish sufficient conditions on the aggregation
error to guarantee FL training convergence. Utilizing these conditions, we
reformulate the power minimization problem into a unique bi-convex structure
that contains a transmit beamforming optimization subproblem and a receive
beamforming feasibility subproblem. Despite this unconventional structure, we
propose a novel alternating optimization approach that guarantees monotonic
decrease of the objective value, to allow convergence to a partial optimum. We
further consider imperfect channel state information (CSI), which requires
accounting for the channel estimation errors in the power minimization problem
and FL convergence analysis. We propose a CSI-error-aware joint beamforming
algorithm, which can substantially outperform one that does not account for
channel estimation errors. Simulation with canonical classification datasets
demonstrates that our proposed methods achieve significant power reduction
compared to existing benchmarks across a wide range of parameter settings,
while attaining the same target accuracy under the same convergence rate.
|
2501.18059
|
Learning the Optimal Stopping for Early Classification within Finite
Horizons via Sequential Probability Ratio Test
|
cs.LG cs.AI
|
Time-sensitive machine learning benefits from Sequential Probability Ratio
Test (SPRT), which provides an optimal stopping time for early classification
of time series. However, in finite horizon scenarios, where input lengths are
finite, determining the optimal stopping rule becomes computationally intensive
due to the need for backward induction, limiting practical applicability. We
thus introduce FIRMBOUND, an SPRT-based framework that efficiently estimates
the solution to backward induction from training data, bridging the gap between
optimal stopping theory and real-world deployment. It employs density ratio
estimation and convex function learning to provide statistically consistent
estimators for the sufficient statistic and conditional expectation, both
essential
for solving backward induction; consequently, FIRMBOUND minimizes Bayes risk to
reach optimality. Additionally, we present a faster alternative using Gaussian
process regression, which significantly reduces training time while retaining
low deployment overhead, albeit with potential compromise in statistical
consistency. Experiments across independent and identically distributed
(i.i.d.), non-i.i.d., binary, multiclass, synthetic, and real-world datasets
show that FIRMBOUND achieves optimality in terms of both Bayes risk and the
speed-accuracy tradeoff. Furthermore, it advances the tradeoff boundary toward
optimality when possible and reduces decision-time variance, ensuring reliable
decision-making. Code is publicly available at
https://github.com/Akinori-F-Ebihara/FIRMBOUND
|
2501.18060
|
Noise-Adaptive Conformal Classification with Marginal Coverage
|
stat.ME cs.LG stat.ML
|
Conformal inference provides a rigorous statistical framework for uncertainty
quantification in machine learning, enabling well-calibrated prediction sets
with precise coverage guarantees for any classification model. However, its
reliance on the idealized assumption of perfect data exchangeability limits its
effectiveness in the presence of real-world complications, such as low-quality
labels -- a widespread issue in modern large-scale data sets. This work tackles
this open problem by introducing an adaptive conformal inference method capable
of efficiently handling deviations from exchangeability caused by random label
noise, leading to informative prediction sets with tight marginal coverage
guarantees even in those challenging scenarios. We validate our method through
extensive numerical experiments demonstrating its effectiveness on synthetic
and real data sets, including CIFAR-10H and BigEarthNet.
|
2501.18062
|
FinanceQA: A Benchmark for Evaluating Financial Analysis Capabilities of
Large Language Models
|
cs.LG cs.CL
|
FinanceQA is a testing suite that evaluates LLMs' performance on complex
numerical financial analysis tasks that mirror real-world investment work.
Despite recent advances, current LLMs fail to meet the strict accuracy
requirements of financial institutions, with models failing approximately 60%
of realistic tasks that mimic on-the-job analyses at hedge funds, private
equity firms, investment banks, and other financial institutions. The primary
challenges include hand-spreading metrics, adhering to standard accounting and
corporate valuation conventions, and performing analysis under incomplete
information - particularly in multi-step tasks requiring assumption generation.
This performance gap highlights the disconnect between existing LLM
capabilities and the demands of professional financial analysis that are
inadequately tested by current testing architectures. Results show that
higher-quality training data is needed to support such tasks, which we
experiment with using OpenAI's fine-tuning API. FinanceQA is publicly released
at https://huggingface.co/datasets/AfterQuery/FinanceQA.
|
2501.18063
|
Impedance Trajectory Analysis during Power Swing for Grid-Forming
Inverter with Different Current Limiters
|
eess.SY cs.SY
|
Grid-forming (GFM) inverter-based resources (IBRs) are capable of emulating
the external characteristics of synchronous generators (SGs) through the
careful design of the control loops. However, the current limiter in the
control loops of the GFM IBR poses challenges to the effectiveness of power
swing detection functions designed for SG-based systems. Among various current
limiting strategies, current saturation algorithms (CSAs), widely employed for
their strict current limiting capability, are the focus of this paper. The
paper presents a theoretical analysis of the conditions for entering and
exiting the current saturation mode of the GFM IBR under three CSAs.
Furthermore, the corresponding impedance trajectories observed by the distance
relay on the GFM IBR side are investigated. The analysis results reveal that
the unique impedance trajectories under these CSAs markedly differ from those
associated with SGs. Moreover, it is demonstrated that the conventional power
swing detection scheme may lose functionality due to the rapid movement of the
trajectory or its failure to pass through the detection zones. Conclusions are
validated through simulations in MATLAB/Simulink.
|
2501.18064
|
Learning Metal Microstructural Heterogeneity through Spatial Mapping of
Diffraction Latent Space Features
|
cond-mat.mtrl-sci cs.AI
|
To leverage advancements in machine learning for metallic materials design
and property prediction, it is crucial to develop a data-reduced representation
of metal microstructures that surpasses the limitations of current
physics-based discrete microstructure descriptors. This need is particularly
relevant for metallic materials processed through additive manufacturing, which
exhibit complex hierarchical microstructures that cannot be adequately
described using the conventional metrics typically applied to wrought
materials. Furthermore, capturing the spatial heterogeneity of microstructures
at different scales is necessary within such a framework to accurately
predict their properties. To address these challenges, we propose the physical
spatial mapping of metal diffraction latent space features. This approach
integrates (i) point diffraction data encoding via variational autoencoders or
contrastive learning and (ii) the physical mapping of the encoded values.
Together, these steps offer a novel means to comprehensively
describe metal microstructures. We demonstrate this approach on a wrought and
additively manufactured alloy, showing that it effectively encodes
microstructural information and enables direct identification of
microstructural heterogeneity not directly possible by physics-based models.
This data-reduced microstructure representation opens the application of
machine learning models in accelerating metallic material design and accurately
predicting their properties.
|
2501.18071
|
Towards Transparent and Accurate Diabetes Prediction Using Machine
Learning and Explainable Artificial Intelligence
|
cs.LG cs.AI cs.SE
|
Diabetes mellitus (DM) is a global health issue of significance that must be
diagnosed as early as possible and managed well. This study presents a
framework for diabetes prediction using Machine Learning (ML) models,
complemented with eXplainable Artificial Intelligence (XAI) tools, to
investigate both the predictive accuracy and interpretability of the
predictions from ML models. Data Preprocessing is based on the Synthetic
Minority Oversampling Technique (SMOTE) and feature scaling used on the
Diabetes Binary Health Indicators dataset to deal with class imbalance and
variability of clinical features. The ensemble model provided high accuracy,
with a test accuracy of 92.50% and an ROC-AUC of 0.975. BMI, Age, General
Health, Income, and Physical Activity were the most influential predictors
obtained from the model explanations. The results of this study suggest that ML
combined with XAI is a promising means of developing accurate and
computationally transparent tools for use in healthcare systems.
|
2501.18075
|
Synthesizing Grasps and Regrasps for Complex Manipulation Tasks
|
cs.RO
|
In complex manipulation tasks, e.g., manipulation by pivoting, the motion of
the object being manipulated has to satisfy path constraints that can change
during the motion. Therefore, a single grasp may not be sufficient for the
entire path, and the object may need to be regrasped. Additionally, geometric
data for objects from a sensor are usually available in the form of point
clouds. The problem of computing grasps and regrasps from point-cloud
representation of objects for complex manipulation tasks is a key problem in
endowing robots with manipulation capabilities beyond pick-and-place. In this
paper, we formalize the problem of grasping/regrasping for complex manipulation
tasks with objects represented by (partial) point clouds and present an
algorithm to solve it. We represent a complex manipulation task as a sequence
of constant screw motions. Using a manipulation plan skeleton as a sequence of
constant screw motions, we use a grasp metric to find graspable regions on the
object for every constant screw segment. The overlap of the graspable regions
for contiguous screws is then used to determine when and how many times the
object needs to be regrasped. We present experimental results on point cloud
data collected from RGB-D sensors to illustrate our approach.
|
2501.18078
|
Statistical Design of Thermal Protection System Using Physics-Informed
Neural Network
|
cs.CE
|
Thermal protection systems (TPS) of space vehicles are designed
computationally rather than experimentally. They are validated using ground
experiments, but all aspects of the flight cannot be replicated on ground. This
ground-to-flight mapping introduces uncertainties which need to be accounted
for while designing any thermal protection system. Thus, precise computational
models along with uncertainty quantification in the models are required to
design the TPS. The focus of this study is to estimate the thermal material
parameters of TPS based on the target reliability requirements using
statistical methods. To perform uncertainty quantification (UQ) of a system, a
simulated model of the system needs to be solved many times on statistical
samples, increasing the computational time and cost of the overall process. A
physics-informed neural network (PINN) model is used in the analysis instead of
traditional physics based numerical solutions. The accuracy of PINN is
comparable to that of the numerical solution. To find the parameter
distribution, sampling of the parameter space is performed using the Sequential
Monte Carlo (SMC) method. The sampling method is efficient as it generates
samples based on the target distribution in parallel and it also generates
diverse samples for proper UQ. Combining the use of both PINN predictive model
and SMC sampling, the framework can approximate the parameter distributions
that satisfy the TPS design reliability constraints. The framework achieved
remarkable increases in the speed of performing the reliability analysis of the
TPS. This reliability analysis can be used for design optimization of the TPS
based on risk analysis along with other systems of the vehicle.
|
2501.18080
|
PAC Codes Meet CRC-Polar Codes
|
cs.IT math.IT
|
CRC-Polar codes under SC list decoding are well-regarded for their
competitive error performance. This paper examines these codes by focusing on
minimum weight codewords, breaking them down into the rows of the polar
transform. Inspired by the significant impact of parity check bits and their
positions, we apply a shifted rate-profile for polarization-adjusted
convolutional (PS-PAC) codes, thereby achieving similar improvements in the
weight distribution of polar codes through precoding. The results demonstrate a
significant improvement in error performance, achieving up to a 0.5 dB power
gain with short PS-PAC codes. Additionally, leveraging convolutional precoding
in PAC codes, we adopt a continuous deployment (masking) of parity check bits
derived from the remainder of continuous division of the partial message
polynomial and the CRC polynomial over frozen positions in the rate-profile.
This approach enhances performance for medium-length codes, with an overall
improvement of 0.12 dB.
|
2501.18081
|
Normative Evaluation of Large Language Models with Everyday Moral
Dilemmas
|
cs.AI cs.CY
|
The rapid adoption of large language models (LLMs) has spurred extensive
research into their encoded moral norms and decision-making processes. Much of
this research relies on prompting LLMs with survey-style questions to assess
how well models are aligned with certain demographic groups, moral beliefs, or
political ideologies. While informative, the adherence of these approaches to
relatively superficial constructs tends to oversimplify the complexity and
nuance underlying everyday moral dilemmas. We argue that auditing LLMs along
more detailed axes of human interaction is of paramount importance to better
assess the degree to which they may impact human beliefs and actions. To this
end, we evaluate LLMs on complex, everyday moral dilemmas sourced from the "Am
I the Asshole" (AITA) community on Reddit, where users seek moral judgments on
everyday conflicts from other community members. We prompted seven LLMs to
assign blame and provide explanations for over 10,000 AITA moral dilemmas. We
then compared the LLMs' judgments and explanations to those of Redditors and to
each other, aiming to uncover patterns in their moral reasoning. Our results
demonstrate that large language models exhibit distinct patterns of moral
judgment, varying substantially from human evaluations on the AITA subreddit.
LLMs demonstrate moderate to high self-consistency but low inter-model
agreement. Further analysis of model explanations reveals distinct patterns in
how models invoke various moral principles. These findings highlight the
complexity of implementing consistent moral reasoning in artificial systems and
the need for careful evaluation of how different models approach ethical
judgment. As LLMs continue to be used in roles requiring ethical
decision-making such as therapists and companions, careful evaluation is
crucial to mitigate potential biases and limitations.
|
2501.18084
|
U-aggregation: Unsupervised Aggregation of Multiple Learning Algorithms
|
stat.ML cs.LG
|
Across various domains, the growing advocacy for open science and open-source
machine learning has made an increasing number of models publicly available.
These models allow practitioners to integrate them into their own contexts,
reducing the need for extensive data labeling, training, and calibration.
However, selecting the best model for a specific target population remains
challenging due to issues like limited transferability, data heterogeneity, and
the difficulty of obtaining true labels or outcomes in real-world settings. In
this paper, we propose an unsupervised model aggregation method, U-aggregation,
designed to integrate multiple pre-trained models for enhanced and robust
performance in new populations. Unlike existing supervised model aggregation or
super learner approaches, U-aggregation assumes no observed labels or outcomes
in the target population. Our method addresses limitations in existing
unsupervised model aggregation techniques by accommodating more realistic
settings, including heteroskedasticity at both the model and individual levels,
and the presence of adversarial models. Drawing on insights from random matrix
theory, U-aggregation incorporates a variance stabilization step and an
iterative sparse signal recovery process. These steps improve the estimation of
individuals' true underlying risks in the target population and evaluate the
relative performance of candidate models. We provide a theoretical
investigation and systematic numerical experiments to elucidate the properties
of U-aggregation. We demonstrate its potential real-world application by using
U-aggregation to enhance genetic risk prediction of complex traits, leveraging
publicly available models from the PGS Catalog.
|
2501.18086
|
DIAL: Distribution-Informed Adaptive Learning of Multi-Task Constraints
for Safety-Critical Systems
|
cs.LG cs.AI cs.RO cs.SY eess.SY
|
Safe reinforcement learning has traditionally relied on predefined constraint
functions to ensure safety in complex real-world tasks, such as autonomous
driving. However, defining these functions accurately for varied tasks is a
persistent challenge. Recent research highlights the potential of leveraging
pre-acquired task-agnostic knowledge to enhance both safety and sample
efficiency in related tasks. Building on this insight, we propose a novel
method to learn shared constraint distributions across multiple tasks. Our
approach identifies the shared constraints through imitation learning and then
adapts to new tasks by adjusting risk levels within these learned
distributions. This adaptability addresses variations in risk sensitivity
stemming from expert-specific biases, ensuring consistent adherence to general
safety principles even with imperfect demonstrations. Our method can be applied
to control and navigation domains, including multi-task and meta-task
scenarios, accommodating constraints such as maintaining safe distances or
adhering to speed limits. Experimental results validate the efficacy of our
approach, demonstrating superior safety performance and success rates compared
to baselines, all without requiring task-specific constraint definitions. These
findings underscore the versatility and practicality of our method across a
wide range of real-world tasks.
|
2501.18089
|
ISAM-MTL: Cross-subject multi-task learning model with identifiable
spikes and associative memory networks
|
cs.NE cs.LG q-bio.NC
|
Cross-subject variability in EEG degrades the performance of current deep
learning models, limiting the development of brain-computer interfaces (BCIs).
This paper proposes ISAM-MTL, a multi-task learning (MTL) EEG
classification model based on identifiable spiking (IS) representations and
associative memory (AM) networks. The proposed model treats EEG classification
of each subject as an independent task and leverages cross-subject data
training to facilitate feature sharing across subjects. ISAM-MTL consists of a
spiking feature extractor that captures shared features across subjects and a
subject-specific bidirectional associative memory network that is trained by
Hebbian learning for efficient and fast within-subject EEG classification.
ISAM-MTL integrates learned spiking neural representations with bidirectional
associative memory for cross-subject EEG classification. The model employs
label-guided variational inference to construct identifiable spike
representations, enhancing classification accuracy. Experimental results on two
BCI Competition datasets demonstrate that ISAM-MTL improves the average
accuracy of cross-subject EEG classification while reducing performance
variability among subjects. The model further exhibits the characteristics of
few-shot learning and identifiable neural activity beneath EEG, enabling rapid
and interpretable calibration for BCI systems.
|
2501.18092
|
Learning Provably Improves the Convergence of Gradient Descent
|
cs.LG math.OC
|
As a specialized branch of deep learning, Learning to Optimize (L2O) tackles
optimization problems by training DNN-based solvers. Despite achieving
significant success in various scenarios, such as faster convergence in solving
convex optimizations and improved optimality in addressing non-convex cases,
there remains a deficiency in theoretical support. Current research heavily
relies on stringent assumptions that do not align with the intricacies of the
training process. To address this gap, our study aims to establish L2O's
convergence through its training methodology. We demonstrate that learning an
algorithm's hyperparameters significantly enhances its convergence. Focusing on
the gradient descent (GD) algorithm for quadratic programming, we prove the
convergence of L2O's training using the neural tangent kernel theory. Moreover,
we conduct empirical evaluations using synthetic datasets. Our findings
indicate that the learned method outperforms standard GD by over 50%.
|
2501.18093
|
Reward Prediction Error Prioritisation in Experience Replay: The RPE-PER
Method
|
cs.LG cs.RO
|
Reinforcement Learning algorithms aim to learn optimal control strategies
through iterative interactions with an environment. A critical element in this
process is the experience replay buffer, which stores past experiences,
allowing the algorithm to learn from a diverse range of interactions rather
than just the most recent ones. This buffer is especially important in dynamic
environments with limited experiences. However, efficiently selecting
high-value experiences to accelerate training remains a challenge. Drawing
inspiration from the role of reward prediction errors (RPEs) in biological
systems, where they are essential for adaptive behaviour and learning, we
introduce Reward Predictive Error Prioritised Experience Replay (RPE-PER). This
novel approach prioritises experiences in the buffer based on RPEs. Our method
employs a critic network, EMCN, that predicts rewards in addition to the
Q-values produced by standard critic networks. The discrepancy between these
predicted and actual rewards is computed as RPE and utilised as a signal for
experience prioritisation. Experimental evaluations across various continuous
control tasks demonstrate RPE-PER's effectiveness in enhancing the learning
speed and performance of off-policy actor-critic algorithms compared to
baseline approaches.
|
2501.18094
|
AlphaAdam: Asynchronous Masked Optimization with Dynamic Alpha for
Selective Updates
|
cs.LG
|
In the training of large language models (LLMs), updating parameters more
efficiently and stably has always been an important challenge. To achieve
efficient parameter updates, existing methods usually achieve performance
comparable to full parameter updates through methods such as low-dimensional
decomposition or layer-wise selective updates. In this work, we propose
AlphaAdam, an optimization framework for LLMs from the perspective of
intra-layer parameter updates. By decoupling parameter updates and dynamically
adjusting their strength, AlphaAdam accelerates convergence and improves
training stability. We construct parameter masks based on the consistency of
historical momentum and gradient direction and combine them with an adaptive
mask strength strategy to ensure efficient optimization and theoretical
convergence guarantees, which is also applicable to most momentum-based
optimizers. Extensive experiments show that AlphaAdam outperforms
state-of-the-art methods such as AdamW in terms of convergence speed and
computational efficiency across tasks, including GPT-2 pre-training and the
fine-tuning of RoBERTa and Llama-7B. AlphaAdam implements an optimizer
enhancement framework for LLMs through intra-layer asynchronous masked adaptive
updates. Our code is available at https://github.com/MaeChd/AlphaAdam.
|
2501.18096
|
LLMs can see and hear without any training
|
cs.CV cs.AI cs.CL cs.LG
|
We present MILS: Multimodal Iterative LLM Solver, a surprisingly simple,
training-free approach, to imbue multimodal capabilities into your favorite
LLM. Leveraging their innate ability to perform multi-step reasoning, MILS
prompts the LLM to generate candidate outputs, each of which is scored and fed
back iteratively, eventually generating a solution to the task. This enables
various applications that typically require training specialized models on
task-specific data. In particular, we establish a new state-of-the-art on
emergent zero-shot image, video and audio captioning. MILS seamlessly applies
to media generation as well, discovering prompt rewrites to improve
text-to-image generation, and even edit prompts for style transfer! Finally,
being a gradient-free optimization approach, MILS can invert multimodal
embeddings into text, enabling applications like cross-modal arithmetic.
|
2501.18098
|
Disentangling Safe and Unsafe Corruptions via Anisotropy and Locality
|
cs.CV cs.LG
|
State-of-the-art machine learning systems are vulnerable to small
perturbations to their input, where ``small'' is defined according to a threat
model that assigns a positive threat to each perturbation. Most prior works
define a task-agnostic, isotropic, and global threat, like the $\ell_p$ norm,
where the magnitude of the perturbation fully determines the degree of the
threat and neither the direction of the attack nor its position in space
matter. However, common corruptions in computer vision, such as blur,
compression, or occlusions, are not well captured by such threat models. This
paper proposes a novel threat model called \texttt{Projected Displacement} (PD)
to study robustness beyond existing isotropic and global threat models. The
proposed threat model measures the threat of a perturbation via its alignment
with \textit{unsafe directions}, defined as directions in the input space along
which a perturbation of sufficient magnitude changes the ground truth class
label. Unsafe directions are identified locally for each input based on
observed training data. In this way, the PD threat model exhibits anisotropy
and locality. Experiments on Imagenet-1k data indicate that, for any input, the
set of perturbations with small PD threat includes \textit{safe} perturbations
of large $\ell_p$ norm that preserve the true label, such as noise, blur and
compression, while simultaneously excluding \textit{unsafe} perturbations that
alter the true label. Unlike perceptual threat models based on embeddings of
large-vision models, the PD threat model can be readily computed for arbitrary
classification tasks without pre-training or finetuning. Furthermore,
additional task annotations, such as sensitivity to image regions or concept
hierarchies, can be easily integrated into the assessment of threat; the PD
threat model thus presents practitioners with a flexible, task-driven threat
specification.
|
2501.18099
|
Learning to Plan & Reason for Evaluation with Thinking-LLM-as-a-Judge
|
cs.AI cs.CL
|
LLM-as-a-Judge models generate chain-of-thought (CoT) sequences intended to
capture the step-by-step reasoning process that underlies the final evaluation
of a response. However, due to the lack of human-annotated CoTs for evaluation,
the required components and structure of effective reasoning traces remain
understudied. Consequently, previous approaches often (1) constrain reasoning
traces to hand-designed components, such as a list of criteria, reference
answers, or verification questions and (2) structure them such that planning is
intertwined with the reasoning for evaluation. In this work, we propose
EvalPlanner, a preference optimization algorithm for Thinking-LLM-as-a-Judge
that first generates an unconstrained evaluation plan, followed by its
execution, and then the final judgment. In a self-training loop, EvalPlanner
iteratively optimizes over synthetically constructed evaluation plans and
executions, leading to better final verdicts. Our method achieves a new
state-of-the-art performance for generative reward models on RewardBench (with
a score of 93.9), despite being trained on fewer, synthetically generated
preference pairs. Additional experiments on other benchmarks like
RM-Bench, JudgeBench, and FollowBenchEval further highlight the utility of both
planning and reasoning for building robust LLM-as-a-Judge reasoning models.
|
2501.18100
|
Panacea: Mitigating Harmful Fine-tuning for Large Language Models via
Post-fine-tuning Perturbation
|
cs.CL cs.AI
|
Harmful fine-tuning attacks introduce significant security risks to
fine-tuning services. Mainstream defenses aim to vaccinate the model so that
a later harmful fine-tuning attack is less effective. However, our evaluation
results show that such defenses are fragile -- with a few fine-tuning steps,
the model can still learn the harmful knowledge. To this end, we conduct
further experiments and find that an embarrassingly simple solution -- adding
purely random perturbations to the fine-tuned model -- can recover the model
from harmful behavior, though it leads to a degradation in the model's
fine-tuning performance. To address this degradation, we further propose
Panacea, which optimizes an adaptive perturbation that is applied to the model
after fine-tuning. Panacea maintains the model's safety alignment
performance without compromising downstream fine-tuning performance.
Comprehensive experiments are conducted on different harmful ratios,
fine-tuning tasks and mainstream LLMs, where the average harmful score is
reduced by up to 21.5%, while maintaining fine-tuning performance. As a
by-product, we analyze the optimized perturbation and show that different
layers in various LLMs have distinct safety coefficients. Source code is
available at https://github.com/w-yibo/Panacea
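As context for the abstract's "embarrassingly simple" baseline, a minimal sketch of post-fine-tuning random perturbation: add i.i.d. Gaussian noise to every model weight. The function name, flattened-weight representation, and scale parameter are illustrative assumptions, not the paper's released code.

```python
import random

def perturb_weights(weights, scale, seed=0):
    """Sketch of purely random post-fine-tuning perturbation: add i.i.d.
    Gaussian noise of standard deviation `scale` to each weight. Per the
    abstract, `scale` trades safety recovery against fine-tuning performance."""
    rng = random.Random(seed)
    return [w + rng.gauss(0.0, scale) for w in weights]
```

Panacea replaces this fixed random noise with an adaptive, optimized perturbation, which is what preserves downstream fine-tuning performance.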
|
2501.18101
|
Diverse Preference Optimization
|
cs.CL
|
Post-training of language models, either through reinforcement learning,
preference optimization or supervised finetuning, tends to sharpen the output
probability distribution and reduce the diversity of generated responses. This
is particularly problematic for creative generative tasks where varied
responses are desired. In this work we introduce Diverse Preference
Optimization (DivPO), an optimization method which learns to generate much
more diverse responses than standard pipelines, while maintaining the quality
of the generations. In DivPO, preference pairs are selected by first
considering a pool of responses and a measure of diversity among them: chosen
examples are rare but high quality, while rejected examples are common but low
quality. DivPO generates 45.6% more diverse persona attributes and a 74.6%
increase in story diversity, while maintaining win rates similar to standard
baselines.
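A hedged sketch of the pair-selection rule the abstract describes, using response frequency in the pool as a stand-in diversity (rarity) measure; the function name, the frequency-based rarity proxy, and the single quality threshold are illustrative assumptions, not DivPO's actual implementation.

```python
from collections import Counter

def select_divpo_pair(responses, qualities, quality_threshold):
    """Pick (chosen, rejected) from a pool: chosen is the rarest response
    above the quality threshold; rejected is the most common response below it."""
    counts = Counter(responses)  # rarity proxy: frequency within the pool
    above = [r for r, q in zip(responses, qualities) if q >= quality_threshold]
    below = [r for r, q in zip(responses, qualities) if q < quality_threshold]
    if not above or not below:
        return None  # no valid preference pair in this pool
    chosen = min(above, key=lambda r: counts[r])    # rare but high quality
    rejected = max(below, key=lambda r: counts[r])  # common but low quality
    return chosen, rejected
```

The resulting pairs can then feed any standard preference-optimization objective, which is what steers the model toward rarer yet still high-quality outputs.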
|
2501.18103
|
Beyond Turn-taking: Introducing Text-based Overlap into Human-LLM
Interactions
|
cs.HC cs.CL
|
Traditional text-based human-AI interactions often adhere to a strict
turn-taking approach. In this research, we propose a novel approach that
incorporates overlapping messages, mirroring natural human conversations.
Through a formative study, we observed that even in text-based contexts, users
instinctively engage in overlapping behaviors like "A: Today I went to-" "B:
yeah." To capitalize on these insights, we developed OverlapBot, a prototype
chatbot where both AI and users can initiate overlapping. Our user study
revealed that OverlapBot was perceived as more communicative and immersive than
a traditional turn-taking chatbot, fostering faster and more natural
interactions. Our findings contribute to the understanding of the design space for
overlapping interactions. We also provide recommendations for implementing
overlap-capable AI interactions to enhance the fluidity and engagement of
text-based conversations.
|
2501.18107
|
Scaling Inference-Efficient Language Models
|
cs.LG cs.AI cs.CL
|
Scaling laws are powerful tools to predict the performance of large language
models. However, current scaling laws fall short of accounting for inference
costs. In this work, we first show that model architecture affects inference
latency, where models of the same size can have up to 3.5x difference in
latency. To tackle this challenge, we modify the Chinchilla scaling laws to
co-optimize the model parameter count, the number of training tokens, and the
model architecture. Because models of similar training loss
exhibit gaps in downstream evaluation, we also propose a novel method to train
inference-efficient models based on the revised scaling laws. We perform
extensive empirical studies to fit and evaluate our inference-aware scaling
laws. We vary model parameters from 80M to 1B, training tokens from 1.6B to
30B, and model shapes, training a total of 63 models. Guided by our
inference-efficient scaling law and model selection method, we release the
Morph-1B model, which improves inference latency by 1.8x while maintaining
accuracy on downstream tasks compared to open-source models, pushing the Pareto
frontier of accuracy-latency tradeoff.
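For reference, a sketch of the Chinchilla-style parametric loss the paper's revised laws build on: L(N, D) = E + A/N^alpha + B/D^beta, with N the parameter count and D the training tokens. The default coefficients below are the widely cited original Chinchilla fit, used here only as placeholders; the paper refits the law with architecture-aware terms.

```python
def chinchilla_loss(N, D, E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
    """Chinchilla-form scaling law: irreducible loss E plus power-law terms
    in parameters N and training tokens D (coefficients are placeholders)."""
    return E + A / N**alpha + B / D**beta
```

Co-optimizing over architecture as well, as the abstract describes, amounts to adding latency as a constraint or extra term when choosing (N, D, shape) along this surface.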
|
2501.18108
|
Investigating an Intelligent System to Monitor \& Explain Abnormal
Activity Patterns of Older Adults
|
cs.HC cs.AI cs.LG
|
Despite the growing potential of older adult care technologies, the adoption
of these technologies remains challenging. In this work, we conducted a
focus-group session with family caregivers to scope designs of older adult
care technology. We then developed a high-fidelity prototype and conducted a
qualitative study with professional caregivers and older adults to understand
their perspectives on the system functionalities. The system monitors abnormal
activity patterns of older adults using wireless motion sensors and machine
learning models, supports interactive dialogue responses that explain abnormal
activity patterns to caregivers, and allows older adults to proactively share
their status with caregivers for adequate intervention.
Both older adults and professional caregivers appreciated that our system can
provide a faster, personalized service while proactively controlling what
information is to be shared through interactive dialogue responses. We further
discuss other considerations for realizing older adult care technology in practice.
|
2501.18109
|
Influence of High-Performance Image-to-Image Translation Networks on
Clinical Visual Assessment and Outcome Prediction: Utilizing Ultrasound to
MRI Translation in Prostate Cancer
|
eess.IV cs.CV physics.bio-ph
|
Purpose: This study examines the core traits of image-to-image translation
(I2I) networks, focusing on their effectiveness and adaptability in everyday
clinical settings. Methods: We have analyzed data from 794 patients diagnosed
with prostate cancer (PCa), using ten prominent 2D/3D I2I networks to convert
ultrasound (US) images into MRI scans. We also introduced a new analysis of
Radiomic features (RF) via the Spearman correlation coefficient to explore
whether networks with high performance (SSIM>85%) could detect subtle RFs. Our
study further examined the synthetic images through review by 7 invited
physicians. As a final evaluation, we investigated the improvements achieved
by using the synthetic MRI data with two traditional machine learning methods
and one deep learning method. Results: In quantitative assessment, the 2D-Pix2Pix network
substantially outperformed the other 7 networks, with an average SSIM~0.855.
The RF analysis revealed that 76 out of 186 RFs were identified using the
2D-Pix2Pix algorithm alone, although half of the RFs were lost during the
translation process. A detailed qualitative review by 7 medical doctors noted a
deficiency in low-level feature recognition in I2I tasks. Furthermore, the
study found that synthesized image-based classification outperformed US
image-based classification with an average accuracy and AUC~0.93. Conclusion:
This study showed that while 2D-Pix2Pix outperformed cutting-edge networks in
low-level feature discovery and overall error and similarity metrics, it still
requires improvement in low-level feature performance, as highlighted by Group
3. Further, the study found using synthetic image-based classification
outperformed original US image-based methods.
|
2501.18110
|
Lifelong 3D Mapping Framework for Hand-held & Robot-mounted LiDAR
Mapping Systems
|
cs.RO cs.CV
|
We propose a lifelong 3D mapping framework that is modular, cloud-native by
design and more importantly, works for both hand-held and robot-mounted 3D
LiDAR mapping systems. Our proposed framework comprises dynamic point
removal, multi-session map alignment, map change detection and map version
control. First, our sensor-setup agnostic dynamic point removal algorithm works
seamlessly with both hand-held and robot-mounted setups to produce clean static
3D maps. Second, the multi-session map alignment aligns these clean static maps
automatically, without manual parameter fine-tuning, into a single reference
frame, using a two-stage approach based on feature descriptor matching and fine
registration. Third, our novel map change detection identifies positive and
negative changes between two aligned maps. Finally, the map version control
maintains a single base map that represents the current state of the
environment, and stores the detected positive and negative changes, and
boundary information. Our map version control system can reconstruct any
of the previous clean session maps and allows users to query changes between
any two mapping sessions, all without storing any raw input session
maps. Extensive experiments are performed using
hand-held commercial LiDAR mapping devices and open-source robot-mounted LiDAR
SLAM algorithms to evaluate each module and the whole 3D lifelong mapping
framework.
|
2501.18112
|
ACTGNN: Assessment of Clustering Tendency with Synthetically-Trained
Graph Neural Networks
|
cs.LG
|
Determining clustering tendency in datasets is a fundamental but challenging
task, especially in noisy or high-dimensional settings where traditional
methods, such as the Hopkins Statistic and Visual Assessment of Tendency (VAT),
often struggle to produce reliable results. In this paper, we propose ACTGNN, a
graph-based framework designed to assess clustering tendency by leveraging
graph representations of data. Node features are constructed using
Locality-Sensitive Hashing (LSH), which captures local neighborhood
information, while edge features incorporate multiple similarity metrics, such
as the Radial Basis Function (RBF) kernel, to model pairwise relationships. A
Graph Neural Network (GNN) is trained exclusively on synthetic datasets,
enabling robust learning of clustering structures under controlled conditions.
Extensive experiments demonstrate that ACTGNN significantly outperforms
baseline methods on both synthetic and real-world datasets, exhibiting superior
performance in detecting faint clustering structures, even in high-dimensional
or noisy data. Our results highlight the generalizability and effectiveness of
the proposed approach, making it a promising tool for robust clustering
tendency assessment.
|
2501.18114
|
DCatalyst: A Unified Accelerated Framework for Decentralized
Optimization
|
math.OC cs.LG
|
We study decentralized optimization over a network of agents, modeled as
graphs, with no central server. The goal is to minimize $f+r$, where $f$
represents a (strongly) convex function averaging the local agents' losses, and
$r$ is a convex, extended-value function.
We introduce DCatalyst, a unified black-box framework that integrates
Nesterov acceleration into decentralized optimization algorithms. At its core,
DCatalyst operates as an \textit{inexact},
\textit{momentum-accelerated} proximal method (forming the outer loop) that
seamlessly incorporates any selected decentralized algorithm (as the inner
loop). We demonstrate that DCatalyst achieves optimal communication and
computational complexity (up to log-factors) across various decentralized
algorithms and problem instances. Notably, it extends acceleration capabilities
to problem classes previously lacking accelerated solution methods, thereby
broadening the effectiveness of decentralized methods.
On the technical side, our framework introduces the {\it inexact estimating
sequences}--a novel extension of Nesterov's well-known estimating
sequences, tailored for the minimization of composite losses in decentralized
settings. This method adeptly handles consensus errors and inexact solutions of
agents' subproblems, challenges not addressed by existing models.
|
2501.18115
|
A spectral clustering-type algorithm for the consistent estimation of
the Hurst distribution in moderately high dimensions
|
stat.ME cs.LG stat.ML
|
Scale invariance (fractality) is a prominent feature of the large-scale
behavior of many stochastic systems. In this work, we construct an algorithm
for the statistical identification of the Hurst distribution (in particular,
the scaling exponents) undergirding a high-dimensional fractal system. The
algorithm is based on wavelet random matrices, modified spectral clustering and
a model selection step for picking the value of the clustering precision
hyperparameter. In a moderately high-dimensional regime where the dimension,
the sample size and the scale go to infinity, we show that the algorithm
consistently estimates the Hurst distribution. Monte Carlo simulations show
that the proposed methodology is efficient for realistic sample sizes and
outperforms another popular clustering method based on mixed-Gaussian modeling.
We apply the algorithm in the analysis of real-world macroeconomic time series
to unveil evidence for cointegration.
|
2501.18116
|
DeepFRC: An End-to-End Deep Learning Model for Functional Registration
and Classification
|
cs.CV cs.LG stat.ML
|
Functional data analysis (FDA) is essential for analyzing continuous,
high-dimensional data, yet existing methods often decouple functional
registration and classification, limiting their efficiency and performance. We
present DeepFRC, an end-to-end deep learning framework that unifies these tasks
within a single model. Our approach incorporates an alignment module that
learns time warping functions via elastic function registration and a learnable
basis representation module for dimensionality reduction on aligned data. This
integration enhances both alignment accuracy and predictive performance.
Theoretical analysis establishes that DeepFRC achieves low misalignment and
generalization error, while simulations elucidate the progression of
registration, reconstruction, and classification during training. Experiments
on real-world datasets demonstrate that DeepFRC consistently outperforms
state-of-the-art methods, particularly in addressing complex registration
challenges. Code is available at: https://github.com/Drivergo-93589/DeepFRC.
|
2501.18117
|
Improving Minimax Group Fairness in Sequential Recommendation
|
cs.IR
|
Training sequential recommenders such as SASRec with uniform sample weights
achieves good overall performance but can fall short on specific user groups.
One such example is popularity bias, where mainstream users receive better
recommendations than niche content viewers. To improve recommendation quality
across diverse user groups, we explore three Distributionally Robust
Optimization (DRO) methods: Group DRO, Streaming DRO, and Conditional Value at
Risk (CVaR) DRO. While Group and Streaming DRO rely on group annotations and
struggle with users belonging to multiple groups, CVaR does not require such
annotations and can naturally handle overlapping groups. In experiments on two
real-world datasets, we show that the DRO methods outperform standard training,
with CVaR delivering the best results. Additionally, we find that Group and
Streaming DRO are sensitive to the choice of group used for loss computation.
Our contributions include (i) a novel application of CVaR to recommenders, (ii)
showing that the DRO methods improve group metrics as well as overall
performance, and (iii) demonstrating CVaR's effectiveness in the practical
scenario of intersecting user groups.
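A minimal sketch of the CVaR aggregation the abstract favors, assuming per-user losses have already been computed by the recommender: the objective averages only the worst alpha-fraction of losses, which is why it needs no group annotations and handles overlapping groups naturally. The function name and the discrete top-k formulation are illustrative assumptions.

```python
def cvar_loss(per_user_losses, alpha):
    """CVaR-DRO objective sketch: mean of the worst alpha-fraction of
    per-user losses, for alpha in (0, 1]. alpha = 1 recovers uniform training."""
    k = max(1, int(round(alpha * len(per_user_losses))))
    worst = sorted(per_user_losses, reverse=True)[:k]
    return sum(worst) / k
```

In contrast, Group DRO would need each user's group label to reweight group-level means, which is exactly what breaks down when users belong to several groups at once.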
|
2501.18119
|
Self-supervised Quantized Representation for Seamlessly Integrating
Knowledge Graphs with Large Language Models
|
cs.CL cs.AI
|
Due to the presence of the natural gap between Knowledge Graph (KG)
structures and the natural language, the effective integration of holistic
structural information of KGs with Large Language Models (LLMs) has emerged as
a significant question. To this end, we propose a two-stage framework to learn
and apply quantized codes for each entity, aiming for the seamless integration
of KGs with LLMs. Firstly, a self-supervised quantized representation (SSQR)
method is proposed to compress both KG structural and semantic knowledge into
discrete codes (i.e., tokens) that align with the format of language sentences. We
further design KG instruction-following data by viewing these learned codes as
features to be directly input to LLMs, thereby achieving seamless integration. The
experimental results demonstrate that SSQR outperforms existing unsupervised
quantization methods, producing more distinguishable codes. Furthermore, the
fine-tuned LLaMA2 and LLaMA3.1 models also achieve superior performance on KG link
prediction and triple classification tasks, utilizing only 16 tokens per entity
instead of thousands in conventional prompting methods.
|
2501.18121
|
Optimal Survey Design for Private Mean Estimation
|
stat.ML cs.CR cs.LG math.ST stat.TH
|
This work identifies the first privacy-aware stratified sampling scheme that
minimizes the variance for general private mean estimation under the Laplace,
Discrete Laplace (DLap) and Truncated-Uniform-Laplace (TuLap) mechanisms within
the framework of differential privacy (DP). We view stratified sampling as a
subsampling operation, which amplifies the privacy guarantee; however, to have
the same final privacy guarantee for each group, different nominal privacy
budgets need to be used depending on the subsampling rate. Ignoring the effect
of DP, traditional stratified sampling strategies risk significant variance
inflation. We phrase our optimal survey design as an optimization problem,
where we determine the optimal subsampling sizes for each group with the goal
of minimizing the variance of the resulting estimator. We establish strong
convexity of the variance objective, propose an efficient algorithm to identify
the integer-optimal design, and offer insights on the structure of the optimal
design.
|
2501.18122
|
VQLTI: Long-Term Tropical Cyclone Intensity Forecasting with Physical
Constraints
|
cs.LG cs.AI
|
Tropical cyclone (TC) intensity forecasting is crucial for early disaster
warning and emergency decision-making. Numerous researchers have explored
deep-learning methods to address computational and post-processing issues in
operational forecasting. Regrettably, they exhibit subpar long-term forecasting
capabilities. We use two strategies to enhance long-term forecasting. (1) By
enhancing the matching between TC intensity and spatial information, we can
improve long-term forecasting performance. (2) Incorporating physical knowledge
and physical constraints can help mitigate the accumulation of forecasting
errors. To achieve the above strategies, we propose the VQLTI framework. VQLTI
transfers the TC intensity information to a discrete latent space while
retaining the spatial information differences, using large-scale spatial
meteorological data as conditions. Furthermore, we leverage the forecast from
the weather prediction model FengWu to provide additional physical knowledge
for VQLTI. Additionally, we calculate the potential intensity (PI) to impose
physical constraints on the latent variables. For global long-term TC
intensity forecasting, VQLTI achieves state-of-the-art results for 24h to 120h
lead times, with the MSW (Maximum Sustained Wind) forecast error reduced by
35.65%-42.51% compared to ECMWF-IFS.
|
2501.18123
|
Battery State of Health Estimation Using LLM Framework
|
cs.LG eess.SP
|
Battery health monitoring is critical for the efficient and reliable
operation of electric vehicles (EVs). This study introduces a transformer-based
framework for estimating the State of Health (SoH) and predicting the Remaining
Useful Life (RUL) of lithium titanate (LTO) battery cells by utilizing both
cycle-based and instantaneous discharge data. Testing eight LTO cells under
various cycling conditions over 500 cycles, we demonstrate the impact of charge
durations on energy storage trends and apply Differential Voltage Analysis
(DVA) to monitor capacity changes (dQ/dV) across voltage ranges. Our LLM model
achieves superior performance, with a Mean Absolute Error (MAE) as low as
0.87\% and varied latency metrics that support efficient processing,
demonstrating its strong potential for real-time integration into EVs. The
framework effectively identifies early signs of degradation through anomaly
detection in high-resolution data, facilitating predictive maintenance to
prevent sudden battery failures and enhance energy efficiency.
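A hedged sketch of the Differential Voltage Analysis (DVA) step the abstract mentions: dQ/dV computed by finite differences over paired (charge, voltage) samples from a discharge curve. The function name and sampling format are illustrative assumptions; the paper's feature pipeline is not specified here.

```python
def dq_dv(charge, voltage):
    """Finite-difference dQ/dV over paired (charge, voltage) samples,
    skipping steps where the voltage does not change."""
    return [(q2 - q1) / (v2 - v1)
            for (q1, v1), (q2, v2) in zip(zip(charge, voltage),
                                          zip(charge[1:], voltage[1:]))
            if v2 != v1]
```

Shifts in where the dQ/dV curve peaks across cycles are a standard degradation signal, which is the kind of capacity-change trend the abstract tracks across voltage ranges.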
|
2501.18124
|
REMOTE: Real-time Ego-motion Tracking for Various Endoscopes via
Multimodal Visual Feature Learning
|
cs.CV cs.AI
|
Real-time ego-motion tracking for endoscopes is a significant task for
efficient navigation and robotic automation of endoscopy. In this paper, a
novel framework is proposed to perform real-time ego-motion tracking for
endoscopes. Firstly, a multi-modal visual feature learning network is proposed
to perform relative pose prediction, in which the motion feature from the
optical flow, the scene features and the joint feature from two adjacent
observations are all extracted for prediction. Due to more correlation
information in the channel dimension of the concatenated image, a novel feature
extractor is designed based on an attention mechanism to integrate
multi-dimensional information from the concatenation of two continuous frames.
To extract more complete feature representation from the fused features, a
novel pose decoder is proposed to predict the pose transformation from the
concatenated feature map at the end of the framework. Finally, the absolute
pose of the endoscope is calculated from the relative poses. Experiments are
conducted on three datasets of various endoscopic scenes, and the results
demonstrate that the proposed method outperforms state-of-the-art methods.
Besides, the inference speed of the proposed method is over 30 frames per
second, which meets the real-time requirement. The project page is here:
remote-bmxs.netlify.app
|
2501.18126
|
HyperZero: A Customized End-to-End Auto-Tuning System for Recommendation
with Hourly Feedback
|
cs.IR cs.LG
|
Modern recommendation systems can be broadly divided into two key stages: the
ranking stage, where the system predicts various user engagements (e.g.,
click-through rate, like rate, follow rate, watch time), and the value model
stage, which aggregates these predictive scores through a function (e.g., a
linear combination defined by a weight vector) to measure the value of each
content by a single numerical score. Both stages play roughly equally important
roles in real industrial systems; however, how to optimize the model weights
for the second stage still lacks systematic study. This paper focuses on
optimizing the second stage through auto-tuning technology. Although general
auto-tuning systems and solutions - both from established production practices
and open-source solutions - can address this problem, they typically require
weeks or even months to identify a feasible solution. Such prolonged tuning
processes are unacceptable in production environments for recommendation
systems, as suboptimal value models can severely degrade user experience. An
effective auto-tuning solution is required to identify a viable model within
2-3 days, rather than the extended timelines typically associated with existing
approaches. In this paper, we introduce a practical auto-tuning system named
HyperZero that addresses these time constraints while effectively solving the
unique challenges inherent in modern recommendation systems. Moreover, this
framework has the potential to be expanded to broader tuning tasks within
recommendation systems.
|
2501.18128
|
Unraveling the Capabilities of Language Models in News Summarization
|
cs.CL cs.AI
|
Given the recent introduction of multiple language models and the ongoing
demand for improved Natural Language Processing tasks, particularly
summarization, this work provides a comprehensive benchmarking of 20 recent
language models, focusing on smaller ones for the news summarization task. In
this work, we systematically test the capabilities and effectiveness of these
models in summarizing news article texts which are written in different styles
and presented in three distinct datasets. Specifically, in this study we focus
on zero-shot and few-shot learning settings, and we apply a robust evaluation
methodology that combines different evaluation concepts including automatic
metrics, human evaluation, and LLM-as-a-judge. Interestingly, including
demonstration examples in the few-shot learning setting did not enhance models'
performance and, in some cases, even led to worse quality of the generated
summaries. This issue arises mainly due to the poor quality of the gold
summaries that have been used as reference summaries, which negatively impacts
the models' performance. Furthermore, our study's results highlight the
exceptional performance of GPT-3.5-Turbo and GPT-4, which generally dominate
due to their advanced capabilities. However, among the public models evaluated,
certain models such as Qwen1.5-7B, SOLAR-10.7B-Instruct-v1.0, Meta-Llama-3-8B
and Zephyr-7B-Beta demonstrated promising results. These models showed
significant potential, positioning them as competitive alternatives to large
models for the task of news summarization.
|
2501.18129
|
Revisiting gender bias research in bibliometrics: Standardizing
methodological variability using Scholarly Data Analysis (SoDA) Cards
|
cs.DL cs.AI cs.SI
|
Gender biases in scholarly metrics remain a persistent concern, despite
numerous bibliometric studies exploring their presence and absence across
productivity, impact, acknowledgment, and self-citations. However,
methodological inconsistencies, particularly in author name disambiguation and
gender identification, limit the reliability and comparability of these
studies, potentially perpetuating misperceptions and hindering effective
interventions. A review of 70 relevant publications over the past 12 years
reveals a wide range of approaches, from name-based and manual searches to more
algorithmic and gold-standard methods, with no clear consensus on best
practices. This variability, compounded by challenges such as accurately
disambiguating Asian names and managing unassigned gender labels, underscores
the urgent need for standardized and robust methodologies. To address this
critical gap, we propose the development and implementation of ``Scholarly Data
Analysis (SoDA) Cards." These cards will provide a structured framework for
documenting and reporting key methodological choices in scholarly data
analysis, including author name disambiguation and gender identification
procedures. By promoting transparency and reproducibility, SoDA Cards will
facilitate more accurate comparisons and aggregations of research findings,
ultimately supporting evidence-informed policymaking and enabling the
longitudinal tracking of analytical approaches in the study of gender and other
social biases in academia.
|
2501.18130
|
Waste Animal Bone-derived Calcium Phosphate Particles with High Solar
Reflectance
|
eess.SY cs.SY
|
Highly reflective Calcium Phosphate (CAP) nanoparticles have been obtained
from waste chicken and porcine bones. Chicken and pork bones were
processed and calcined at temperatures between 600{\deg}C and 1200{\deg}C to
remove organic material, resulting in CAP bio-ceramic compounds with high
reflectance. The reflectivity of the materials in the solar wavelength region
is on par with chemically synthesized CAP. The high reflectivity, consistently
over 90%, as well as the size distribution and packing density of the
nanoparticles obtained in these early bone studies make a strong case for
pursuing this avenue to obtain pigment for high solar reflectivity
applications, such as passive daytime radiative cooling. The results presented
indicate a viable path toward a cost-effective and eco-friendly source of
highly reflective cooling pigments. By sourcing calcium phosphates from animal
bones, there is also the potential to divert large quantities of bone waste
generated by the meat industry from landfills, further contributing toward
sustainability and energy reduction efforts in the construction industry and
beyond.
|
2501.18131
|
Entropy-Synchronized Neural Hashing for Unsupervised Ransomware
Detection
|
cs.CR cs.AI
|
Entropy-based detection methodologies have gained significant attention due
to their ability to analyze structural irregularities within executable files,
particularly in the identification of malicious software employing advanced
obfuscation techniques. The Entropy-Synchronized Neural Hashing (ESNH)
framework introduces a novel approach that leverages entropy-driven hash
representations to classify software binaries based on their underlying entropy
characteristics. Through the synchronization of entropy profiles with neural
network architectures, the model generates robust and unique hash values that
maintain stability even when faced with polymorphic and metamorphic
transformations. Comparative analysis against traditional detection approaches
revealed superior performance in identifying novel threats, reducing
false-positive rates, and achieving consistent classification across diverse
ransomware families. The incorporation of a self-regulating hash convergence
mechanism further ensured that entropy-synchronized hashes remained invariant
across executions, minimizing classification inconsistencies that often arise
due to dynamic modifications in ransomware payloads. Experimental results
demonstrated high detection rates across contemporary ransomware strains, with
the model exhibiting resilience against encryption-based evasion mechanisms,
code injection strategies, and reflective loading techniques. Unlike
conventional detection mechanisms that rely on static signatures and heuristic
analysis, the proposed entropy-aware classification framework adapts to
emerging threats through an inherent ability to capture entropy anomalies
within executable structures. The findings reinforce the potential of
entropy-based detection in addressing the limitations of traditional
methodologies while enhancing detection robustness against obfuscation and
adversarial evasion techniques.
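As background for the entropy profiles ESNH builds on, a minimal illustration of Shannon byte-entropy over a chunk of an executable, the basic structural signal entropy-based detectors analyze. This is a generic textbook computation, not the ESNH model itself.

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a byte sequence in bits per byte (0.0 to 8.0).
    Packed or encrypted regions score near 8; uniform padding scores near 0."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A sliding-window profile of this quantity over an executable's sections is the kind of entropy characteristic that the framework synchronizes with its neural hash representation.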
|
2501.18137
|
Tensor Completion for Surrogate Modeling of Material Property Prediction
|
cs.LG cond-mat.mtrl-sci cs.AI
|
When designing materials to optimize certain properties, there are often many
possible configurations of designs that need to be explored. For example, the
materials' composition of elements will affect properties such as strength or
conductivity, which are necessary to know when developing new materials.
Exploring all combinations of elements to find optimal materials becomes very
time-consuming, especially when there are more design variables. For this
reason, there is growing interest in using machine learning (ML) to predict a
material's properties. In this work, we model the optimization of certain
material properties as a tensor completion problem, to leverage the structure
of our datasets and navigate the vast number of combinations of material
configurations. Across a variety of material property prediction tasks, our
experiments show tensor completion methods achieving 10-20% decreased error
compared with baseline ML models such as GradientBoosting and Multilayer
Perceptron (MLP), while maintaining similar training speed.
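The paper does not specify its completion algorithm here; as a generic sketch of the tensor completion setting (rank-2 CP factorization fit by gradient descent on observed entries only, with all constants our own), one might write:

```python
import numpy as np

def cp_complete(T, mask, rank=2, lr=0.05, iters=2000, seed=0):
    """Fill missing entries of a 3-way tensor via a rank-`rank` CP
    factorization, fit by gradient descent on the observed entries only
    (mask == 1 where the entry is observed)."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank)) * 0.1
    B = rng.standard_normal((J, rank)) * 0.1
    C = rng.standard_normal((K, rank)) * 0.1
    for _ in range(iters):
        X = np.einsum('ir,jr,kr->ijk', A, B, C)   # current reconstruction
        R = mask * (X - T)                         # residual, observed only
        A -= lr * np.einsum('ijk,jr,kr->ir', R, B, C)
        B -= lr * np.einsum('ijk,ir,kr->jr', R, A, C)
        C -= lr * np.einsum('ijk,ir,jr->kr', R, A, B)
    return np.einsum('ir,jr,kr->ijk', A, B, C)
```

The fitted factorization supplies predictions at the masked positions, which is what lets the method navigate unmeasured material configurations.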
|
2501.18138
|
B3C: A Minimalist Approach to Offline Multi-Agent Reinforcement Learning
|
cs.LG
|
Overestimation arising from selecting unseen actions during policy evaluation
is a major challenge in offline reinforcement learning (RL). A minimalist
approach in the single-agent setting -- adding behavior cloning (BC)
regularization to existing online RL algorithms -- has been shown to be
effective; however, this approach is understudied in multi-agent settings. In
particular, overestimation becomes worse in multi-agent settings due to the
presence of multiple actions, resulting in the BC regularization-based approach
easily suffering from either over-regularization or critic divergence. To
address this, we propose a simple yet effective method, Behavior Cloning
regularization with Critic Clipping (B3C), which clips the target critic value
in policy evaluation based on the maximum return in the dataset and pushes the
limit of the weight on the RL objective over BC regularization, thereby
improving performance. Additionally, we leverage existing value factorization
techniques, particularly non-linear factorization, which is understudied in
offline settings. Integrated with non-linear value factorization, B3C
outperforms state-of-the-art algorithms on various offline multi-agent
benchmarks.
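The critic-clipping idea itself fits in a few lines; the exact placement of the clip in B3C may differ, so treat this as one plausible form (the function name and discount value are ours):

```python
import numpy as np

def clipped_td_target(reward, next_q, max_dataset_return, gamma=0.99):
    """TD target with the bootstrapped critic value clipped from above by
    the maximum return observed in the offline dataset, curbing the
    overestimation that unseen joint actions introduce."""
    return reward + gamma * np.minimum(next_q, max_dataset_return)
```

Because the clip bounds how far an overestimated critic can inflate targets, it permits a larger weight on the RL objective relative to BC regularization without the critic diverging.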
|
2501.18143
|
Dual-Bounded Nonlinear Optimal Transport for Size Constrained Min Cut
Clustering
|
cs.LG
|
Min cut is an important graph partitioning method. However, current solutions
to the min cut problem are slow to compute, difficult to solve, and often
converge to trivial solutions. To address these issues, we relax the min
cut problem into a dual-bounded constraint and, for the first time, treat the
min cut problem as a dual-bounded nonlinear optimal transport problem.
Additionally, we develop a method for solving dual-bounded nonlinear optimal
transport based on the Frank-Wolfe method (abbreviated as DNF). Notably, DNF
not only solves the size constrained min cut problem but is also applicable to
all dual-bounded nonlinear optimal transport problems. We prove that for convex
problems satisfying Lipschitz smoothness, the DNF method can achieve a
convergence rate of \(\mathcal{O}(\frac{1}{t})\). We apply the DNF method to
the min cut problem and find that it achieves state-of-the-art performance in
terms of both the loss function and clustering accuracy at the fastest speed,
with a convergence rate of \(\mathcal{O}(\frac{1}{\sqrt{t}})\). Moreover, the
DNF method for the size constrained min cut problem requires no parameters and
exhibits better stability.
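DNF itself operates on dual-bounded transport polytopes; as a self-contained illustration of the Frank-Wolfe template it builds on (the simplex domain and projection objective here are our simplification, not the paper's setting), consider:

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, iters=5000):
    """Frank-Wolfe over the probability simplex: each step solves a linear
    minimization oracle (LMO), which for the simplex is just the vertex
    with the smallest gradient coordinate, then moves with the standard
    step size 2/(t+2)."""
    x = x0.copy()
    for t in range(iters):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0           # LMO solution: a simplex vertex
        x += (2.0 / (t + 2.0)) * (s - x)
    return x

# Project p onto the simplex: f(x) = 0.5*||x - p||^2, so grad f(x) = x - p.
p = np.array([0.2, 0.5, 0.3])
x_star = frank_wolfe_simplex(lambda x: x - p, np.array([1.0, 0.0, 0.0]))
```

For L-smooth convex objectives this template has the O(1/t) objective-gap guarantee the abstract cites; each iterate stays feasible because it is a convex combination of simplex points.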
|
2501.18154
|
Mixed-Precision Graph Neural Quantization for Low Bit Large Language
Models
|
cs.CL
|
Post-Training Quantization (PTQ) is pivotal for deploying large language
models (LLMs) within resource-limited settings by significantly reducing
resource demands. However, existing PTQ strategies underperform at low bit
widths (below 3 bits) due to the significant difference between the quantized and
original weights. To enhance the quantization performance at low bit widths, we
introduce a Mixed-precision Graph Neural PTQ (MG-PTQ) approach, employing a
graph neural network (GNN) module to capture dependencies among weights and
adaptively assign quantization bit-widths. Through the information propagation
of the GNN module, our method more effectively captures dependencies among
target weights, leading to a more accurate assessment of weight importance and
optimized allocation of quantization strategies. Extensive experiments on the
WikiText2 and C4 datasets demonstrate that our MG-PTQ method outperforms
previous state-of-the-art PTQ method GPTQ, setting new benchmarks for
quantization performance under low-bit conditions.
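In MG-PTQ the bit-width assignment comes from the GNN; as a minimal stand-in (the importance scores and the two-level bit budget below are our placeholders for the GNN's output), per-group symmetric uniform quantization with mixed bit-widths looks like:

```python
import numpy as np

def quantize_uniform(w, bits):
    """Symmetric uniform quantization of a weight array to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1
    wmax = float(np.abs(w).max())
    scale = wmax / qmax if wmax > 0 else 1.0
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale

def mixed_precision(weight_groups, importance, budgets=(2, 4)):
    """Give the more 'important' half of weight groups the higher
    bit-width; `importance` stands in for GNN-predicted scores."""
    order = np.argsort(importance)
    half = len(weight_groups) // 2
    out = [None] * len(weight_groups)
    for rank, idx in enumerate(order):
        bits = budgets[0] if rank < half else budgets[1]
        out[idx] = quantize_uniform(weight_groups[idx], bits)
    return out
```

The point of the adaptive assignment is visible even in this toy: the group judged more important is reconstructed with markedly lower error.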
|
2501.18157
|
Efficient Audiovisual Speech Processing via MUTUD: Multimodal Training
and Unimodal Deployment
|
cs.SD cs.CV cs.MM eess.AS
|
Building reliable speech systems often requires combining multiple
modalities, like audio and visual cues. While such multimodal solutions
frequently lead to improvements in performance and may even be critical in
certain cases, they come with several constraints such as increased sensory
requirements, computational cost, and modality synchronization, to mention a
few. These challenges constrain the direct use of such multimodal solutions
in real-world applications. In this work, we develop approaches where the
learning happens with all available modalities but the deployment or inference
is done with just one or reduced modalities. To do so, we propose a Multimodal
Training and Unimodal Deployment (MUTUD) framework which includes a Temporally
Aligned Modality feature Estimation (TAME) module that can estimate information
from the missing modality using the modalities present at inference time. This
innovative approach facilitates the integration of information across different
modalities, enhancing the overall inference process by leveraging the strengths
of each modality to compensate for the absence of certain modalities during
inference. We apply MUTUD to various audiovisual speech tasks and show that it
can reduce the performance gap between the multimodal and corresponding
unimodal models to a considerable extent. MUTUD can achieve this while reducing
the model size and compute compared to multimodal models, in some cases by
almost 80%.
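TAME is a learned neural estimator; as a toy stand-in for the train-multimodal/deploy-unimodal idea (the linear map and feature shapes are our assumptions, not the paper's architecture), one can fit a map from the present modality's features to the missing modality's during training:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training time: paired audio and visual features are both available.
audio = rng.standard_normal((64, 8))
W_true = rng.standard_normal((8, 4))
visual = audio @ W_true                 # synthetic paired visual features

# Fit a least-squares estimator audio -> visual (TAME stand-in).
W_hat, *_ = np.linalg.lstsq(audio, visual, rcond=None)

# Deployment time: only audio arrives; visual features are estimated
# from it, so no camera or synchronization machinery is needed.
visual_est = audio @ W_hat
```

The estimator is trained with both streams but discarded-modality-free at inference, which is where the model-size and compute savings the abstract reports come from.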
|
2501.18158
|
Large Language Models for Cryptocurrency Transaction Analysis: A Bitcoin
Case Study
|
cs.CR cs.LG
|
Cryptocurrencies are widely used, yet current methods for analyzing
transactions heavily rely on opaque, black-box models. These lack
interpretability and adaptability, failing to effectively capture behavioral
patterns. Many researchers, including us, believe that Large Language Models
(LLMs) could bridge this gap due to their robust reasoning abilities for
complex tasks. In this paper, we test this hypothesis by applying LLMs to
real-world cryptocurrency transaction graphs, specifically within the Bitcoin
network. We introduce a three-tiered framework to assess LLM capabilities:
foundational metrics, characteristic overview, and contextual interpretation.
This includes a new, human-readable graph representation format, LLM4TG, and a
connectivity-enhanced sampling algorithm, CETraS, which simplifies larger
transaction graphs. Experimental results show that LLMs excel at foundational
metrics and offer detailed characteristic overviews. Their effectiveness in
contextual interpretation suggests they can provide useful explanations of
transaction behaviors, even with limited labeled data.
|
2501.18161
|
Using Computer Vision for Skin Disease Diagnosis in Bangladesh Enhancing
Interpretability and Transparency in Deep Learning Models for Skin Cancer
Classification
|
eess.IV cs.CV
|
With over 2 million new cases identified annually, skin cancer is the most
prevalent type of cancer globally and the second most common in Bangladesh,
following breast cancer. Early detection and treatment are crucial for
enhancing patient outcomes; however, Bangladesh faces a shortage of
dermatologists and qualified medical professionals capable of diagnosing and
treating skin cancer. As a result, many cases are diagnosed only at advanced
stages. Research indicates that deep learning algorithms can effectively
classify skin cancer images. However, these models typically lack
interpretability, making it challenging to understand their decision-making
processes. This lack of clarity poses barriers to utilizing deep learning in
improving skin cancer detection and treatment. In this article, we present a
method aimed at enhancing the interpretability of deep learning models for skin
cancer classification in Bangladesh. Our technique employs a combination of
saliency maps and attention maps to visualize critical features influencing the
model's diagnoses.
|
2501.18162
|
IROAM: Improving Roadside Monocular 3D Object Detection Learning from
Autonomous Vehicle Data Domain
|
cs.CV cs.RO
|
In autonomous driving, the perception capabilities of the ego-vehicle can be
improved with roadside sensors, which provide a holistic view of the
environment. However, existing monocular detection methods designed for vehicle
cameras are not suitable for roadside cameras due to viewpoint domain gaps. To
bridge this gap and Improve ROAdside Monocular 3D object detection, we propose
IROAM, a semantic-geometry decoupled contrastive learning framework, which
takes vehicle-side and roadside data as input simultaneously. IROAM has two
significant modules. The In-Domain Query Interaction module utilizes a
transformer to learn content and depth information for each domain and outputs
object queries. To learn better feature representations from the two domains,
the Cross-Domain Query Enhancement module decouples queries into semantic and
geometry parts, and only the former is used for contrastive learning.
Experiments demonstrate the effectiveness of IROAM in improving the roadside
detector's performance. The results validate that IROAM can learn cross-domain
information.
|
2501.18164
|
Faster Convergence of Riemannian Stochastic Gradient Descent with
Increasing Batch Size
|
cs.LG math.OC stat.ML
|
Many models used in machine learning have become so large that even computing
the full gradient of the loss function is impractical. This has made it
necessary to train models efficiently using limited available
information, such as the batch size and learning rate. We have theoretically
analyzed the use of Riemannian stochastic gradient descent (RSGD) and found
that using an increasing batch size leads to faster RSGD convergence than using
a constant batch size not only with a constant learning rate but also with a
decaying learning rate, such as cosine annealing decay and polynomial decay. In
particular, RSGD has a better convergence rate $O(\frac{1}{\sqrt{T}})$ than the
existing rate $O(\frac{\sqrt{\log T}}{\sqrt[4]{T}})$ with a diminishing
learning rate, where $T$ is the number of iterations. The results of
experiments on principal component analysis and low-rank matrix completion
problems confirmed that, except for the MovieLens dataset and a constant
learning rate, using a polynomial growth batch size or an exponential growth
batch size results in better performance than using a constant batch size.
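The schedules compared in these experiments can be written down directly (the constants below are illustrative choices, not the paper's):

```python
import math

def batch_size_schedule(t, b0=32, growth="poly", rate=2.0):
    """Batch size at epoch t: polynomial growth b0*(t+1)^rate, exponential
    growth b0*rate^t, or the constant baseline."""
    if growth == "poly":
        return int(b0 * (t + 1) ** rate)
    if growth == "exp":
        return int(b0 * rate ** t)
    return b0  # constant baseline

def cosine_lr(t, T, lr_max=0.1, lr_min=0.0):
    """Cosine-annealing learning-rate decay over T epochs."""
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t / T))
```

Pairing a growing batch size with such a decaying learning rate is exactly the regime in which the analysis above obtains the improved $O(\frac{1}{\sqrt{T}})$ rate.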
|
2501.18167
|
Scattering approach to diffusion quantifies axonal damage in brain
injury
|
physics.med-ph cs.CV physics.bio-ph
|
Early diagnosis and noninvasive monitoring of neurological disorders require
sensitivity to elusive cellular-level alterations that occur much earlier than
volumetric changes observable with the millimeter-resolution of medical imaging
modalities. Morphological changes in axons, such as axonal varicosities or
beadings, are observed in neurological disorders, as well as in development and
aging. Here, we reveal the sensitivity of time-dependent diffusion MRI (dMRI)
to axonal morphology at the micrometer scale. Scattering theory uncovers the
two parameters that determine the diffusive dynamics of water in axons: the
average reciprocal cross-section and the variance of long-range cross-sectional
fluctuations. This theoretical development allowed us to predict dMRI metrics
sensitive to axonal alterations across tens of thousands of axons in seconds
rather than months of simulations in a rat model of traumatic brain injury. Our
approach bridges the gap between micrometers and millimeters in resolution,
offering quantitative, objective biomarkers applicable to a broad spectrum of
neurological disorders.
|