| id | title | categories | abstract |
|---|---|---|---|
2502.08378
|
Learning Humanoid Standing-up Control across Diverse Postures
|
cs.RO cs.AI cs.LG
|
Standing-up control is crucial for humanoid robots, with the potential for
integration into current locomotion and loco-manipulation systems, such as fall
recovery. Existing approaches are either limited to simulations that overlook
hardware constraints or rely on predefined ground-specific motion trajectories,
failing to enable standing up across postures in real-world scenes. To bridge
this gap, we present HoST (Humanoid Standing-up Control), a reinforcement
learning framework that learns standing-up control from scratch, enabling
robust sim-to-real transfer across diverse postures. HoST effectively learns
posture-adaptive motions by leveraging a multi-critic architecture and
curriculum-based training on diverse simulated terrains. To ensure successful
real-world deployment, we constrain the motion with smoothness regularization
and an implicit motion speed bound to alleviate oscillatory and violent motions on
physical hardware, respectively. After simulation-based training, the learned
control policies are directly deployed on the Unitree G1 humanoid robot. Our
experimental results demonstrate that the controllers achieve smooth, stable,
and robust standing-up motions across a wide range of laboratory and outdoor
environments. Videos are available at
https://taohuang13.github.io/humanoid-standingup.github.io/.
|
2502.08391
|
ViLa-MIL: Dual-scale Vision-Language Multiple Instance Learning for
Whole Slide Image Classification
|
cs.CV
|
The multiple instance learning (MIL)-based framework has become the mainstream
for processing whole slide images (WSIs) with giga-pixel size and
hierarchical image context in digital pathology. However, these methods heavily
depend on a substantial number of bag-level labels and learn solely from the
original slides, which are easily affected by variations in data distribution.
Recently, vision-language model (VLM)-based methods introduced a language
prior by pre-training on large-scale pathological image-text pairs. However,
previous text prompts lack consideration of pathological prior knowledge and
therefore do not substantially boost the model's performance. Moreover,
collecting such pairs and the pre-training process are very time- and
resource-intensive. To solve the above problems, we propose a
dual-scale vision-language multiple instance learning (ViLa-MIL) framework for
whole slide image classification. Specifically, we propose a dual-scale visual
descriptive text prompt based on the frozen large language model (LLM) to boost
the performance of VLM effectively. To transfer the VLM to process WSI
efficiently, for the image branch, we propose a prototype-guided patch decoder
to aggregate the patch features progressively by grouping similar patches into
the same prototype; for the text branch, we introduce a context-guided text
decoder to enhance the text features by incorporating the multi-granular image
contexts. Extensive studies on three multi-cancer and multi-center subtyping
datasets demonstrate the superiority of ViLa-MIL.
|
2502.08395
|
IssueBench: Millions of Realistic Prompts for Measuring Issue Bias in
LLM Writing Assistance
|
cs.CL
|
Large language models (LLMs) are helping millions of users write texts about
diverse issues, and in doing so expose users to different ideas and
perspectives. This creates concerns about issue bias, where an LLM tends to
present just one perspective on a given issue, which in turn may influence how
users think about this issue. So far, it has not been possible to measure which
issue biases LLMs actually manifest in real user interactions, making it
difficult to address the risks from biased LLMs. Therefore, we create
IssueBench: a set of 2.49m realistic prompts for measuring issue bias in LLM
writing assistance, which we construct based on 3.9k templates (e.g. "write a
blog about") and 212 political issues (e.g. "AI regulation") from real user
interactions. Using IssueBench, we show that issue biases are common and
persistent in state-of-the-art LLMs. We also show that biases are remarkably
similar across models, and that all models align more with US Democrat than
Republican voter opinion on a subset of issues. IssueBench can easily be
adapted to include other issues, templates, or tasks. By enabling robust and
realistic measurement, we hope that IssueBench can bring a new quality of
evidence to ongoing discussions about LLM biases and how to address them.
|
2502.08397
|
Strong bounds for large-scale Minimum Sum-of-Squares Clustering
|
math.OC cs.LG
|
Clustering is a fundamental technique in data analysis and machine learning,
used to group similar data points together. Among various clustering methods,
the Minimum Sum-of-Squares Clustering (MSSC) is one of the most widely used.
MSSC aims to minimize the total squared Euclidean distance between data points
and their corresponding cluster centroids. Due to the unsupervised nature of
clustering, achieving global optimality is crucial, yet computationally
challenging. The complexity of finding the global solution increases
exponentially with the number of data points, making exact methods impractical
for large-scale datasets. Even obtaining strong lower bounds on the optimal
MSSC objective value is computationally prohibitive, making it difficult to
assess the quality of heuristic solutions. We address this challenge by
introducing a novel method to validate heuristic MSSC solutions through
optimality gaps. Our approach employs a divide-and-conquer strategy,
decomposing the problem into smaller instances that can be handled by an exact
solver. The decomposition is guided by an auxiliary optimization problem, the
"anticlustering problem", for which we design an efficient heuristic.
Computational experiments demonstrate the effectiveness of the method for
large-scale instances, achieving optimality gaps below 3% in most cases while
maintaining reasonable computational times. These results highlight the
practicality of our approach in assessing feasible clustering solutions for
large datasets, bridging a critical gap in MSSC evaluation.
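For reference, the MSSC objective described in this abstract can be written down directly; the following is an illustrative sketch (the function name `mssc_objective` is ours, and this is not the paper's bounding method):

```python
import numpy as np

def mssc_objective(X, labels, centroids):
    # Total squared Euclidean distance from each point to the centroid
    # of its assigned cluster -- the quantity MSSC minimizes.
    return float(sum(np.sum((X[labels == k] - c) ** 2)
                     for k, c in enumerate(centroids)))

# Toy instance: two well-separated clusters of two points each.
X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
labels = np.array([0, 0, 1, 1])
centroids = np.array([[0.0, 0.5], [10.0, 10.5]])
print(mssc_objective(X, labels, centroids))  # 1.0
```

A heuristic such as k-means produces the (labels, centroids) pair; the paper's contribution is certifying how far such a feasible solution can be from the global minimum of this objective.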
|
2502.08404
|
Quantifying Collective Emotions: Japan's Societal Trends Through
Enhanced Sentiment Index Using POMS2 and SNS
|
cs.SI cs.CY
|
In this study, we constructed an emotion index that quantitatively represents
the collective emotions present in the Japanese web space by utilizing Social
Networking Service (SNS) post data. Building upon previous research that used
blog data and the Profile of Mood States (POMS), we restructured the
methodology using posts from X (formerly Twitter) and updated the model by
adding the "Friendliness" indicator from the POMS2 metrics. Through periodic
and trend analyses of the emotional indicators derived from X's post data, we
found that the extension is consistent with results previously reported using
blog data. This suggests that our methodology effectively captures typical
emotional fluctuations in Japanese society, independent of specific SNS
platforms, and is expected to serve as an index to visualize societal trends.
|
2502.08414
|
Sparse Estimation of Inverse Covariance and Partial Correlation Matrices
via Joint Partial Regression
|
stat.ML cs.LG
|
We present a new method for estimating high-dimensional sparse partial
correlation and inverse covariance matrices, which exploits the connection
between the inverse covariance matrix and linear regression. The method is a
two-stage estimation method wherein each individual feature is regressed on all
other features while positive semi-definiteness is enforced simultaneously. We
provide statistical rates of convergence for the proposed method which match,
and improve upon, the state-of-the-art for inverse covariance and partial
correlation matrix estimation, respectively. We also propose an efficient
proximal splitting algorithm for numerically computing the estimate. The
effectiveness of the proposed method is demonstrated on both synthetic and
real-world data.
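The regression connection the abstract exploits can be illustrated with a plain node-wise least-squares estimator (in the spirit of residual-regression approaches such as Meinshausen-Buhlmann); this sketch is not the paper's two-stage joint method and omits its positive semi-definiteness enforcement:

```python
import numpy as np

def nodewise_inverse_cov(X):
    # Regress each (centered) feature on all others and map the residual
    # variances and coefficients to inverse-covariance entries:
    #   Omega[j, j] = 1 / var(resid_j),  Omega[i, j] = -beta_i / var(resid_j)
    n, p = X.shape
    Omega = np.zeros((p, p))
    for j in range(p):
        others = [i for i in range(p) if i != j]
        beta, *_ = np.linalg.lstsq(X[:, others], X[:, j], rcond=None)
        resid = X[:, j] - X[:, others] @ beta
        s2 = resid @ resid / n
        Omega[j, j] = 1.0 / s2
        for b, i in zip(beta, others):
            Omega[i, j] = -b / s2
    return (Omega + Omega.T) / 2  # crude symmetrization, no PSD guarantee

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 4))
X[:, 1] += 0.8 * X[:, 0]          # induce one conditional dependency
Omega = nodewise_inverse_cov(X - X.mean(axis=0))
print(np.round(Omega, 2))
```

In this example the true precision matrix has entry roughly -0.8 between features 0 and 1 and zeros elsewhere off the diagonal; the paper's second stage additionally couples the regressions so that the estimate is positive semi-definite by construction.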
|
2502.08415
|
A Semantic Parsing Algorithm to Solve Linear Ordering Problems
|
cs.CL cs.LO
|
We develop an algorithm to semantically parse linear ordering problems, which
require a model to arrange entities using deductive reasoning. Our method takes
as input a number of premises and candidate statements, parsing them to a
first-order logic of an ordering domain, and then utilizes constraint logic
programming to infer the truth of proposed statements about the ordering.
Our semantic parser transforms Heim and Kratzer's syntax-based compositional
formal semantic rules to a computational algorithm. This transformation
involves introducing abstract types and templates based on their rules, and
introduces a dynamic component to interpret entities within a contextual
framework.
Our symbolic system, the Formal Semantic Logic Inferer (FSLI), is applied to
answer BIG-bench's logical_deduction multiple-choice problems, achieving
perfect accuracy, compared to 67.06% for the
best-performing LLM (GPT-4) and 87.63% for the hybrid system Logic-LM.
These promising results demonstrate the benefit of developing a semantic
parsing algorithm driven by first-order logic constructs.
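FSLI itself parses premises to first-order logic and uses constraint logic programming; as a minimal illustration of the underlying deduction task, entailment over a linear ordering can be decided by brute-force model checking (the premise/claim lambdas here are hypothetical stand-ins for parsed formulas):

```python
from itertools import permutations

def entailed(entities, premises, claim):
    # A claim is entailed iff it holds in every total ordering of the
    # entities that satisfies all premises.
    models = []
    for p in permutations(entities):
        pos = {e: i for i, e in enumerate(p)}  # position of each entity
        if all(prem(pos) for prem in premises):
            models.append(pos)
    return bool(models) and all(claim(m) for m in models)

# Premises: "A is left of B", "B is left of C".
premises = [lambda pos: pos["A"] < pos["B"],
            lambda pos: pos["B"] < pos["C"]]
print(entailed("ABC", premises, lambda pos: pos["A"] < pos["C"]))  # True
print(entailed("ABC", premises, lambda pos: pos["C"] < pos["A"]))  # False
```

Constraint logic programming prunes this search rather than enumerating all orderings, but the notion of truth over consistent orderings is the same.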
|
2502.08416
|
Multifidelity Simulation-based Inference for Computationally Expensive
Simulators
|
stat.ML cs.LG
|
Across many domains of science, stochastic models are an essential tool to
understand the mechanisms underlying empirically observed data. Models can be
of different levels of detail and accuracy, with models of high-fidelity (i.e.,
high accuracy) to the phenomena under study being often preferable. However,
inferring parameters of high-fidelity models via simulation-based inference is
challenging, especially when the simulator is computationally expensive. We
introduce MF-NPE, a multifidelity approach to neural posterior estimation that
leverages inexpensive low-fidelity simulations to infer parameters of
high-fidelity simulators within a limited simulation budget. MF-NPE performs
neural posterior estimation with limited high-fidelity resources by virtue of
transfer learning, with the ability to prioritize individual observations using
active learning. On one statistical task with analytical ground-truth and two
real-world tasks, MF-NPE shows comparable performance to current approaches
while requiring up to two orders of magnitude fewer high-fidelity simulations.
Overall, MF-NPE opens new opportunities to perform efficient Bayesian inference
on computationally expensive simulators.
|
2502.08417
|
Handwritten Text Recognition: A Survey
|
cs.CV cs.AI
|
Handwritten Text Recognition (HTR) has become an essential field within
pattern recognition and machine learning, with applications spanning historical
document preservation to modern data entry and accessibility solutions. The
complexity of HTR lies in the high variability of handwriting, which makes it
challenging to develop robust recognition systems. This survey examines the
evolution of HTR models, tracing their progression from early heuristic-based
approaches to contemporary state-of-the-art neural models, which leverage deep
learning techniques. The scope of the field has also expanded, with models
initially capable of recognizing only word-level content progressing to recent
end-to-end document-level approaches. Our paper categorizes existing work into
two primary levels of recognition: (1) \emph{up to line-level}, encompassing
word and line recognition, and (2) \emph{beyond line-level}, addressing
paragraph- and document-level challenges. We provide a unified framework that
examines research methodologies, recent advances in benchmarking, key datasets
in the field, and a discussion of the results reported in the literature.
Finally, we identify pressing research challenges and outline promising future
directions, aiming to equip researchers and practitioners with a roadmap for
advancing the field.
|
2502.08426
|
Semantic Learning for Molecular Communication in Internet of Bio-Nano
Things
|
eess.SP cs.ET cs.LG eess.IV
|
Molecular communication (MC) provides a foundational framework for
information transmission in the Internet of Bio-Nano Things (IoBNT), where
efficiency and reliability are crucial. However, the inherent limitations of
molecular channels, such as low transmission rates, noise, and inter-symbol
interference (ISI), limit their ability to support complex data transmission.
This paper proposes an end-to-end semantic learning framework designed to
optimize task-oriented molecular communication, with a focus on biomedical
diagnostic tasks under resource-constrained conditions. The proposed framework
employs a deep encoder-decoder architecture to efficiently extract, quantize,
and decode semantic features, prioritizing task-relevant semantic information
to enhance diagnostic classification performance. Additionally, a probabilistic
channel network is introduced to approximate molecular propagation dynamics,
enabling gradient-based optimization for end-to-end learning. Experimental
results demonstrate that the proposed semantic framework improves diagnostic
accuracy by at least 25% compared to conventional JPEG compression with LDPC
coding methods under resource-constrained communication scenarios.
|
2502.08428
|
Robot-Initiated Social Control of Sedentary Behavior: Comparing the
Impact of Relationship- and Target-Focused Strategies
|
cs.HC cs.RO
|
To design social robots to effectively promote health behavior change, it is
essential to understand how people respond to various health communication
strategies employed by these robots. This study examines the effectiveness of
two types of social control strategies from a social robot,
relationship-focused strategies (emphasizing relational consequences) and
target-focused strategies (emphasizing health consequences), in encouraging
people to reduce sedentary behavior. A two-session lab experiment was conducted
(n = 135), where participants first played a game with a robot, followed by the
robot persuading them to stand up and move using one of the strategies. Half of
the participants joined a second session to have a repeated interaction with
the robot. Results showed that relationship-focused strategies motivated
participants to stay active longer. Repeated sessions did not strengthen
participants' relationship with the robot, but those who felt more attached to
the robot responded more actively to the target-focused strategies. These
findings offer valuable insights for designing persuasive strategies for social
robots in health communication contexts.
|
2502.08432
|
Closer through commonality: Enhancing hypergraph contrastive learning
with shared groups
|
cs.LG
|
Hypergraphs provide a superior modeling framework for representing complex
multidimensional relationships in the context of real-world interactions that
often occur in groups, overcoming the limitations of traditional homogeneous
graphs. However, there have been few studies on hypergraph-based contrastive
learning, and existing graph-based contrastive learning methods have not been
able to fully exploit the high-order correlation information in hypergraphs.
Here, we propose a Hypergraph Fine-grained contrastive learning (HyFi) method
designed to exploit the complex high-dimensional information inherent in
hypergraphs. While avoiding traditional graph augmentation methods that corrupt
the hypergraph topology, the proposed method provides a simple and efficient
learning augmentation function by adding noise to node features. Furthermore,
we expand beyond the traditional dichotomous relationship between positive and
negative samples in contrastive learning by introducing a new relationship of
weak positives, which demonstrates the importance of fine-grained positive
samples in contrastive learning. As a result, HyFi produces high-quality
embeddings, and outperforms both supervised and unsupervised baselines in
average rank on node classification across 10 datasets. Our approach
effectively exploits high-dimensional hypergraph information, shows significant
improvement over existing graph-based contrastive learning methods, and is
efficient in terms of training speed and GPU memory cost. The source code is
available at https://github.com/Noverse0/HyFi.git.
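The topology-preserving augmentation mentioned in this abstract amounts to perturbing node features rather than the hypergraph itself; a minimal sketch (function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def feature_noise_views(X, eps=0.1, seed=0):
    # Generate two contrastive views by adding Gaussian noise to node
    # features, leaving the hypergraph incidence structure untouched.
    rng = np.random.default_rng(seed)
    return (X + eps * rng.standard_normal(X.shape),
            X + eps * rng.standard_normal(X.shape))

X = np.ones((4, 8))                   # 4 nodes, 8-dim features
v1, v2 = feature_noise_views(X)
print(v1.shape, np.allclose(v1, v2))  # (4, 8) False
```

Because no nodes or hyperedges are dropped, the high-order group structure that hypergraphs exist to encode survives in both views.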
|
2502.08436
|
From Haystack to Needle: Label Space Reduction for Zero-shot
Classification
|
cs.CL cs.AI cs.LG
|
We present Label Space Reduction (LSR), a novel method for improving
zero-shot classification performance of Large Language Models (LLMs). LSR
iteratively refines the classification label space by systematically ranking
and reducing candidate classes, enabling the model to concentrate on the most
relevant options. By leveraging unlabeled data with the statistical learning
capabilities of data-driven models, LSR dynamically optimizes the label space
representation at test time. Our experiments across seven benchmarks
demonstrate that LSR improves macro-F1 scores by an average of 7.0% (up to
14.2%) with Llama-3.1-70B and 3.3% (up to 11.1%) with Claude-3.5-Sonnet
compared to standard zero-shot classification baselines. To reduce the
computational overhead of LSR, which requires an additional LLM call at each
iteration, we propose distilling the model into a probabilistic classifier,
allowing for efficient inference.
|
2502.08438
|
Composite Sketch+Text Queries for Retrieving Objects with Elusive Names
and Complex Interactions
|
cs.CV cs.AI cs.CL cs.IR cs.MM
|
Non-native speakers with limited vocabulary often struggle to name specific
objects despite being able to visualize them, e.g., people outside Australia
searching for numbats. Further, users may want to search for such elusive
objects with difficult-to-sketch interactions, e.g., numbat digging in the
ground. In such common but complex situations, users desire a search interface
that accepts composite multimodal queries comprising hand-drawn sketches of
difficult-to-name but easy-to-draw objects and text describing
difficult-to-sketch but easy-to-verbalize object attributes or interaction with
the scene. This novel problem statement distinctly differs from the previously
well-researched TBIR (text-based image retrieval) and SBIR (sketch-based image
retrieval) problems. To study this under-explored task, we curate a dataset,
CSTBIR (Composite Sketch+Text Based Image Retrieval), consisting of approx. 2M
queries and 108K natural scene images. Further, as a solution to this problem,
we propose a pretrained multimodal transformer-based baseline, STNET
(Sketch+Text Network), that uses a hand-drawn sketch to localize relevant
objects in the natural scene image, and encodes the text and image to perform
image retrieval. In addition to contrastive learning, we propose multiple
training objectives that improve the performance of our model. Extensive
experiments show that our proposed method outperforms several state-of-the-art
retrieval methods for text-only, sketch-only, and composite query modalities.
We make the dataset and code available at our project website.
|
2502.08441
|
Better Embeddings with Coupled Adam
|
cs.CL cs.AI cs.LG
|
Despite their remarkable capabilities, LLMs learn word representations that
exhibit the undesirable yet poorly understood feature of anisotropy. In this
paper, we argue that the second moment in Adam is a cause of anisotropic
embeddings, and suggest a modified optimizer called Coupled Adam to mitigate
the problem. Our experiments demonstrate that Coupled Adam significantly
improves the quality of embeddings, while also leading to better upstream and
downstream performance on large enough datasets.
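Anisotropy is commonly quantified as the average pairwise cosine similarity between embedding vectors; this diagnostic sketch is a standard measurement, independent of the paper's Coupled Adam optimizer:

```python
import numpy as np

def anisotropy(E):
    # Mean pairwise cosine similarity between distinct embedding rows;
    # near 0 for directionally uniform (isotropic) embeddings, near 1
    # when all vectors share a dominant direction.
    U = E / np.linalg.norm(E, axis=1, keepdims=True)
    S = U @ U.T
    n = len(E)
    return float((S.sum() - n) / (n * (n - 1)))

rng = np.random.default_rng(0)
iso = rng.standard_normal((500, 64))  # roughly isotropic Gaussian cloud
shifted = iso + 5.0                   # common offset -> strong anisotropy
print(anisotropy(iso), anisotropy(shifted))
```

Embeddings trained with standard Adam tend to score high on this measure; the paper argues the second-moment normalization is a cause.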
|
2502.08445
|
LucidAtlas: Learning Uncertainty-Aware, Covariate-Disentangled,
Individualized Atlas Representations
|
cs.LG
|
The goal of this work is to develop principled techniques to extract
information from high dimensional data sets with complex dependencies in areas
such as medicine that can provide insight into individual as well as population
level variation. We develop $\texttt{LucidAtlas}$, an approach that can
represent spatially varying information, and can capture the influence of
covariates as well as population uncertainty. As a versatile atlas
representation, $\texttt{LucidAtlas}$ offers robust capabilities for covariate
interpretation, individualized prediction, population trend analysis, and
uncertainty estimation, with the flexibility to incorporate prior knowledge.
Additionally, we discuss the trustworthiness and potential risks of neural
additive models for analyzing dependent covariates and then introduce a
marginalization approach to explain the dependence of an individual predictor
on the model's response (the atlas). To validate our method, we demonstrate its
generalizability on two medical datasets. Our findings underscore the critical
role of by-construction interpretable models in advancing scientific discovery.
Our code will be publicly available upon acceptance.
|
2502.08448
|
Monge SAM: Robust Reparameterization-Invariant Sharpness-Aware
Minimization Based on Loss Geometry
|
cs.LG stat.ML
|
Recent studies on deep neural networks show that flat minima of the loss
landscape correlate with improved generalization. Sharpness-aware minimization
(SAM) efficiently finds flat regions by updating the parameters according to
the gradient at an adversarial perturbation. The perturbation depends on the
Euclidean metric, making SAM non-invariant under reparametrizations, which
blurs the relationship between sharpness and generalization. We propose Monge
SAM (M-SAM), a
reparametrization invariant version of SAM by considering a Riemannian metric
in the parameter space induced naturally by the loss surface. Compared to
previous approaches, M-SAM works under any modeling choice, relies only on mild
assumptions while being as computationally efficient as SAM. We theoretically
argue that M-SAM varies between SAM and gradient descent (GD), which increases
robustness to hyperparameter selection and reduces attraction to suboptimal
equilibria like saddle points. We demonstrate this behavior both theoretically
and empirically on a multi-modal representation alignment task.
|
2502.08449
|
CordViP: Correspondence-based Visuomotor Policy for Dexterous
Manipulation in Real-World
|
cs.RO cs.AI
|
Achieving human-level dexterity in robots is a key objective in the field of
robotic manipulation. Recent advancements in 3D-based imitation learning have
shown promising results, providing an effective pathway to achieve this goal.
However, obtaining high-quality 3D representations presents two key problems:
(1) the quality of point clouds captured by a single-view camera is
significantly affected by factors such as camera resolution, positioning, and
occlusions caused by the dexterous hand; (2) the global point clouds lack
crucial contact information and spatial correspondences, which are necessary
for fine-grained dexterous manipulation tasks. To eliminate these limitations,
we propose CordViP, a novel framework that constructs and learns
correspondences by leveraging the robust 6D pose estimation of objects and
robot proprioception. Specifically, we first introduce the interaction-aware
point clouds, which establish correspondences between the object and the hand.
These point clouds are then used for our pre-training policy, where we also
incorporate object-centric contact maps and hand-arm coordination information,
effectively capturing both spatial and temporal dynamics. Our method
demonstrates exceptional dexterous manipulation capabilities with an average
success rate of 90% in four real-world tasks, surpassing other baselines by a
large margin. Experimental results also highlight the superior generalization
and robustness of CordViP to different objects, viewpoints, and scenarios. Code
and videos are available at https://aureleopku.github.io/CordViP.
|
2502.08450
|
Towards Prompt Generalization: Grammar-aware Cross-Prompt Automated
Essay Scoring
|
cs.CL cs.AI
|
In automated essay scoring (AES), recent efforts have shifted toward
cross-prompt settings that score essays on unseen prompts for practical
applicability. However, prior methods trained with essay-score pairs of
specific prompts pose challenges in obtaining prompt-generalized essay
representation. In this work, we propose grammar-aware cross-prompt trait
scoring (GAPS), which internally captures prompt-independent syntactic aspects
to learn generic essay representations. We acquire grammatical error-corrected
information in essays via the grammar error correction technique and design the
AES model to seamlessly integrate such information. By internally referring to
both the corrected and the original essays, the model can focus on generic
features during training. Empirical experiments validate our method's
generalizability, showing remarkable improvements in prompt-independent and
grammar-related traits. Furthermore, GAPS achieves notable QWK gains in the
most challenging cross-prompt scenario, highlighting its strength in evaluating
unseen prompts.
|
2502.08452
|
Learning to Group and Grasp Multiple Objects
|
cs.RO
|
Simultaneously grasping and transporting multiple objects can significantly
enhance robotic work efficiency and has been a key research focus for decades.
The primary challenge lies in determining how to push objects, group them, and
execute simultaneous grasping for respective groups while considering object
distribution and the hardware constraints of the robot. Traditional rule-based
methods struggle to flexibly adapt to diverse scenarios. To address this
challenge, this paper proposes an imitation learning-based approach. We collect
a series of expert demonstrations through teleoperation and train a diffusion
policy network, enabling the robot to dynamically generate action sequences for
pushing, grouping, and grasping, thereby facilitating efficient multi-object
grasping and transportation. We conducted experiments to evaluate the method
under different training dataset sizes, varying object quantities, and
real-world object scenarios. The results demonstrate that the proposed approach
can effectively and adaptively generate multi-object grouping and grasping
strategies. With the support of more training data, imitation learning is
expected to be an effective approach for solving the multi-object grasping
problem.
|
2502.08453
|
Proceedings 40th International Conference on Logic Programming
|
cs.LO cs.AI
|
Since the first conference in Marseille in 1982, the International Conference
on Logic Programming (ICLP) has been the premier international event for
presenting research in logic programming. These proceedings include technical
communications about, and abstracts for, presentations given at the 40th ICLP,
held October 14-17 in Dallas, Texas, USA. The papers and abstracts in this
volume cover the following areas and topics. Formal and operational
semantics: including non-monotonic reasoning, probabilistic reasoning,
argumentation, and semantic issues of combining logic with neural models.
Language design and programming methodologies, such as answer set programming,
inductive logic programming, and probabilistic programming. Program analysis
and logic-based validation of generated programs. Implementation methodologies,
including constraint implementation, tabling, logic-based prompt engineering,
and the interaction of logic programming with LLMs.
|
2502.08455
|
Resilient Quantized Consensus in Multi-Hop Relay Networks
|
cs.MA cs.SY eess.SY
|
We study resilient quantized consensus in multi-agent systems, where some
agents may malfunction. The network consists of agents taking integer-valued
states, and the agents' communication is subject to asynchronous updates and
time delays. We utilize the quantized weighted mean subsequence reduced
algorithm where agents communicate with others through multi-hop relays. We
prove necessary and sufficient conditions for our algorithm to achieve the
objective under the malicious and Byzantine attack models. Our approach has
tighter graph conditions compared to the one-hop algorithm and the
flooding-based algorithms for binary consensus. Numerical examples verify the
efficacy of our algorithm.
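The key filtering idea behind MSR-type algorithms can be sketched for a single synchronous, one-hop, f-local update; the paper's multi-hop, delayed, asynchronous version adds machinery well beyond this sketch:

```python
def msr_update(own, neighbor_values, f):
    # Discard the f largest neighbor values above the agent's own state
    # and the f smallest below it (all of them if fewer than f exist on
    # a side), then round the average of the survivors -- keeping states
    # integer-valued, as in quantized consensus.
    above = sorted(v for v in neighbor_values if v > own)
    below = sorted(v for v in neighbor_values if v < own)
    equal = [v for v in neighbor_values if v == own]
    kept_above = above[:len(above) - f] if len(above) > f else []
    kept_below = below[f:] if len(below) > f else []
    kept = kept_below + equal + kept_above + [own]
    return round(sum(kept) / len(kept))

# A malicious neighbor reporting 100 is filtered out with f = 1.
print(msr_update(0, [-2, 2, 100], 1))  # 1
```

Because the extreme values are removed before averaging, a bounded number of faulty neighbors cannot drag an agent's state outside the range of the normal agents' values.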
|
2502.08457
|
Learning Theory for Kernel Bilevel Optimization
|
cs.LG
|
Bilevel optimization has emerged as a technique for addressing a wide range
of machine learning problems that involve an outer objective implicitly
determined by the minimizer of an inner problem. In this paper, we investigate
the generalization properties for kernel bilevel optimization problems where
the inner objective is optimized over a Reproducing Kernel Hilbert Space. This
setting enables rich function approximation while providing a foundation for
rigorous theoretical analysis. In this context, we establish novel
generalization error bounds for the bilevel problem under finite-sample
approximation. Our approach adopts a functional perspective, inspired by
(Petrulionyte et al., 2024), and leverages tools from empirical process theory
and maximal inequalities for degenerate $U$-processes to derive uniform error
bounds. These generalization error estimates allow us to characterize the
statistical accuracy of gradient-based methods applied to the empirical
discretization of the bilevel problem.
|
2502.08458
|
Examining Spanish Counseling with MIDAS: a Motivational Interviewing
Dataset in Spanish
|
cs.CL
|
Cultural and language factors significantly influence counseling, but Natural
Language Processing research has not yet examined whether the findings of
conversational analysis for counseling conducted in English apply to other
languages. This paper presents a first step towards this direction. We
introduce MIDAS (Motivational Interviewing Dataset in Spanish), a counseling
dataset created from public video sources that contains expert annotations for
counseling reflections and questions. Using this dataset, we explore
language-based differences in counselor behavior in English and Spanish and
develop classifiers in monolingual and multilingual settings, demonstrating its
applications in counselor behavioral coding tasks.
|
2502.08468
|
mmE5: Improving Multimodal Multilingual Embeddings via High-quality
Synthetic Data
|
cs.CV cs.AI cs.CL
|
Multimodal embedding models have gained significant attention for their
ability to map data from different modalities, such as text and images, into a
unified representation space. However, the limited labeled multimodal data
often hinders embedding performance. Recent approaches have leveraged data
synthesis to address this problem, yet the quality of synthetic data remains a
critical bottleneck. In this work, we identify three criteria for high-quality
synthetic multimodal data. First, broad scope ensures that the generated data
covers diverse tasks and modalities, making it applicable to various downstream
scenarios. Second, robust cross-modal alignment makes different modalities
semantically consistent. Third, high fidelity ensures that the synthetic data
maintains realistic details to enhance its reliability. Guided by these
principles, we synthesize datasets that: (1) cover a wide range of tasks,
modality combinations, and languages, (2) are generated via a deep thinking
process within a single pass of a multimodal large language model, and (3)
incorporate real-world images with accurate and relevant texts, ensuring
fidelity through self-evaluation and refinement. Leveraging these high-quality
synthetic and labeled datasets, we train a multimodal multilingual E5 model
mmE5. Extensive experiments demonstrate that mmE5 achieves state-of-the-art
performance on the MMEB Benchmark and superior multilingual performance on the
XTD benchmark. Our code, datasets, and models are released at
https://github.com/haon-chen/mmE5.
|
2502.08470
|
Numerical Schemes for Signature Kernels
|
math.NA cs.LG cs.NA math.AP
|
Signature kernels have emerged as a powerful tool within kernel methods for
sequential data. In the paper "The Signature Kernel is the solution of a
Goursat PDE", the authors identify a kernel trick that demonstrates that, for
continuously differentiable paths, the signature kernel satisfies a Goursat
problem for a hyperbolic partial differential equation (PDE) in two independent
time variables. While finite difference methods have been explored for this
PDE, they face limitations in accuracy and stability when handling highly
oscillatory inputs. In this work, we introduce two advanced numerical schemes
that leverage polynomial representations of boundary conditions through either
approximation or interpolation techniques, and rigorously establish the
theoretical convergence of the polynomial approximation scheme. Experimental
evaluations reveal that our approaches yield improvements of several orders of
magnitude in mean absolute percentage error (MAPE) compared to traditional
finite difference schemes, without increasing computational complexity.
Furthermore, like finite difference methods, our algorithms can be
GPU-parallelized to reduce computational complexity from quadratic to linear in
the length of the input sequences, thereby improving scalability for
high-frequency data. We have implemented these algorithms in a dedicated Python
library, which is publicly available at:
https://github.com/FrancescoPiatti/polysigkernel.
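The Goursat recursion that the finite-difference baselines discretize can be sketched in a few lines. The following is an illustrative explicit solver for the PDE described above, not the paper's polynomial schemes; the update rule is one common discretization, and all names are ours.

```python
def sig_kernel_fd(x, y):
    """Explicit finite-difference solver for the Goursat problem
    k_st(s, t) = <x'(s), y'(t)> k(s, t),  k(0, .) = k(., 0) = 1,
    on the grid induced by two piecewise-linear paths x, y
    (lists of points in R^d). Returns k at the final grid node."""
    m, n, d = len(x) - 1, len(y) - 1, len(x[0])
    k = [[1.0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            # inner product of the two path increments over grid cell (i, j)
            inc = sum((x[i + 1][a] - x[i][a]) * (y[j + 1][a] - y[j][a])
                      for a in range(d))
            # explicit update of the Goursat recursion
            k[i + 1][j + 1] = (k[i + 1][j] + k[i][j + 1] - k[i][j]
                               + 0.5 * inc * (k[i + 1][j] + k[i][j + 1]))
    return k[m][n]
```

For the straight lines x(s) = s, y(t) = t on [0, 1], the exact kernel value is the Bessel number I_0(2) ≈ 2.2796, which the scheme approaches as the grid is refined; highly oscillatory inputs are exactly where such schemes degrade, motivating the polynomial boundary representations above.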
|
2502.08474
|
Training-Free Restoration of Pruned Neural Networks
|
cs.LG cs.AI cs.CV
|
Although network pruning has been highly popularized to compress deep neural
networks, its resulting accuracy heavily depends on a fine-tuning process that
is often computationally expensive and requires the original data. However,
this may not be the case in real-world scenarios, and hence a few recent works
attempt to restore pruned networks without any expensive retraining process.
Their strong assumption is that every neuron being pruned can be replaced with
another one quite similar to it, but unfortunately this does not hold in many
neural networks, where the similarity between neurons is extremely low in some
layers. In this article, we propose a more rigorous and robust method of
restoring pruned networks in a fine-tuning free and data-free manner, called
LBYL (Leave Before You Leave). LBYL significantly relaxes the aforementioned
assumption in a way that each pruned neuron leaves its pieces of information to
as many preserved neurons as possible and thereby multiple neurons together
obtain a more robust approximation to the original output of the neuron that
just left. Our method is based on a theoretical analysis of how to formulate
the reconstruction error between the original network and its approximation,
which nicely leads to a closed form solution for our derived loss function.
Extensive experiments confirm that LBYL approximates the original network more
effectively and consequently achieves higher accuracy for restored networks,
compared to recent approaches
exploiting the similarity between two neurons. The very first version of this
work, which contains major technical and theoretical components, was submitted
to NeurIPS 2021 and ICML 2022.
|
2502.08482
|
Enhancing Auto-regressive Chain-of-Thought through Loop-Aligned
Reasoning
|
cs.CL cs.AI cs.LG
|
Chain-of-Thought (CoT) prompting has emerged as a powerful technique for
enhancing language models' reasoning capabilities. However, generating long and
correct CoT trajectories is challenging. Recent studies have demonstrated that
Looped Transformers possess remarkable length generalization capabilities, but
their limited generality and adaptability prevent them from serving as an
alternative to auto-regressive solutions. To better leverage the strengths of
Looped Transformers, we propose RELAY (REasoning through Loop Alignment
iterativelY). Specifically, we align the steps of Chain-of-Thought (CoT)
reasoning with loop iterations and apply intermediate supervision during the
training of Looped Transformers. This additional iteration-wise supervision not
only preserves the Looped Transformer's ability for length generalization but
also enables it to predict CoT reasoning steps for unseen data. Therefore, we
leverage this Looped Transformer to generate accurate reasoning chains for
complex problems that exceed the training length, which will then be used to
fine-tune an auto-regressive model. We conduct extensive experiments, and the
results demonstrate the effectiveness of our approach, with significant
improvements in the performance of the auto-regressive model. Code will be
released at https://github.com/qifanyu/RELAY.
|
2502.08486
|
Referring Remote Sensing Image Segmentation via Bidirectional Alignment
Guided Joint Prediction
|
cs.CV
|
Referring Remote Sensing Image Segmentation (RRSIS) is critical for
ecological monitoring, urban planning, and disaster management, requiring
precise segmentation of objects in remote sensing imagery guided by textual
descriptions. This task is uniquely challenging due to the considerable
vision-language gap, the high spatial resolution and broad coverage of remote
sensing imagery with diverse categories and small targets, and the presence of
clustered, unclear targets with blurred edges. To tackle these issues, we
propose \ours, a novel framework designed to bridge the vision-language gap,
enhance multi-scale feature interaction, and improve fine-grained object
differentiation. Specifically, \ours introduces: (1) the Bidirectional Spatial
Correlation (BSC) for improved vision-language feature alignment, (2) the
Target-Background TwinStream Decoder (T-BTD) for precise distinction between
targets and non-targets, and (3) the Dual-Modal Object Learning Strategy
(D-MOLS) for robust multimodal feature reconstruction. Extensive experiments on
the benchmark datasets RefSegRS and RRSIS-D demonstrate that \ours achieves
state-of-the-art performance. Specifically, \ours improves the overall IoU
(oIoU) by 3.76 percentage points (80.57) and 1.44 percentage points (79.23) on
the two datasets, respectively. Additionally, it outperforms previous methods
in the mean IoU (mIoU) by 5.37 percentage points (67.95) and 1.84 percentage
points (66.04), effectively addressing the core challenges of RRSIS with
enhanced precision and robustness.
|
2502.08488
|
One-Shot Federated Learning with Classifier-Free Diffusion Models
|
cs.LG
|
Federated learning (FL) enables collaborative learning without data
centralization but introduces significant communication costs due to multiple
communication rounds between clients and the server. One-shot federated
learning (OSFL) addresses this by forming a global model with a single
communication round, often relying on the server's model distillation or
auxiliary dataset generation - often through pre-trained diffusion models
(DMs). Existing DM-assisted OSFL methods, however, typically employ
classifier-guided DMs, which require training auxiliary classifier models at
each client, introducing additional computation overhead. This work introduces
OSCAR (One-Shot Federated Learning with Classifier-Free Diffusion Models), a
novel OSFL approach that eliminates the need for auxiliary models. OSCAR uses
foundation models to devise category-specific data representations at each
client, seamlessly integrated into a classifier-free diffusion model pipeline
for server-side data generation. OSCAR is a simple yet cost-effective OSFL
approach that outperforms the state-of-the-art on four benchmarking datasets
while reducing the communication load by at least 99%.
|
2502.08489
|
Salamandra Technical Report
|
cs.CL
|
This work introduces Salamandra, a suite of open-source decoder-only large
language models available in three different sizes: 2, 7, and 40 billion
parameters. The models were trained from scratch on highly multilingual data
that comprises text in 35 European languages and code. Our carefully curated
corpus is made exclusively from open-access data compiled from a wide variety
of sources. Along with the base models, supplementary checkpoints that were
fine-tuned on public-domain instruction data are also released for chat
applications. Additionally, we share our preliminary experiments on
multimodality, which serve as proof-of-concept to showcase potential
applications for the Salamandra family. Our extensive evaluations on
multilingual benchmarks reveal that Salamandra has strong capabilities,
achieving competitive performance when compared to similarly sized open-source
models. We provide comprehensive evaluation results both on standard downstream
tasks as well as key aspects related to bias and safety. With this technical
report, we intend to promote open science by sharing all the details behind our
design choices, data curation strategy and evaluation methodology. In addition
to that, we deviate from the usual practice by making our training and
evaluation scripts publicly accessible. We release all models under a
permissive Apache 2.0 license in order to foster future research and facilitate
commercial use, thereby contributing to the open-source ecosystem of large
language models.
|
2502.08490
|
Flat-Top Beamforming with Efficient Array-Fed RIS
|
eess.SP cs.SY eess.SY
|
Flat-top beam designs are essential for uniform power distribution over a
wide angular sector for applications such as 5G/6G networks, satellite
communications, radar systems, etc. Low sidelobe levels with steep transitions
allow negligible cross sector illumination. Active array designs requiring
amplitude taper suffer from poor power amplifier utilization. Phase only
designs, e.g., Zadoff-Chu or generalized step chirp polyphase sequence methods,
often require large active antenna arrays which in turns increases the hardware
complexity and reduces the energy efficiency. In our recently proposed novel
array-fed reflective intelligent surface (RIS) architecture, the small ($2
\times 2$) active array has uniform (principal eigenmode) amplitude weighting.
We now present a pragmatic flat-top pattern design method for practical array
(RIS) sizes, which outperforms the current state of the art in design quality,
energy efficiency, and deployment feasibility. This novel design
holds promise for advancing sustainable wireless technologies in
next-generation communication systems while mitigating the environmental impact
of high-energy antenna arrays.
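As a point of reference for the phase-only baselines contrasted above, a Zadoff-Chu weighting and the resulting uniform-linear-array factor can be sketched as follows. This is an illustrative baseline, not the proposed array-fed RIS design; the spacing `d` (in wavelengths) is an assumed parameter.

```python
import cmath
import math

def zadoff_chu(u, N):
    """Unit-modulus Zadoff-Chu sequence of root u and odd length N:
    a classic phase-only weighting used for flat-top-style beams."""
    return [cmath.exp(-1j * math.pi * u * n * (n + 1) / N) for n in range(N)]

def array_factor(w, theta, d=0.5):
    """Array factor of a uniform linear array with complex element
    weights w and spacing d (in wavelengths) at angle theta (radians)."""
    return sum(wn * cmath.exp(2j * math.pi * d * n * math.sin(theta))
               for n, wn in enumerate(w))
```

Because every weight has unit modulus, each power amplifier operates at the same level, which is the amplifier-utilization advantage of phase-only designs noted above; the cost is the large array size that the RIS architecture sidesteps.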
|
2502.08496
|
Fine-Tuning Topics through Weighting Aspect Keywords
|
cs.IR cs.LG
|
Topic modeling often requires examining topics from multiple perspectives to
uncover hidden patterns, especially in less explored areas. This paper presents
an approach to address this need, utilizing weighted keywords from various
aspects derived from domain knowledge. The research method starts with
standard topic modeling. Then, it adds a process consisting of four key steps.
First, it defines keywords for each aspect. Second, it gives weights to these
keywords based on their relevance. Third, it calculates relevance scores for
aspect-weighted keywords and topic keywords to create aspect-topic models.
Fourth, it uses these scores to fine-tune the topics with relevant new
documents. Finally, the
generated topic models are interpreted and validated. The findings show that
top-scoring documents are more likely to be about the same aspect of a topic.
This highlights the model's effectiveness in finding documents related to the
aspects.
|
2502.08502
|
On the Fundamental Limits of Integrated Sensing and Communications Under
Logarithmic Loss
|
cs.IT math.IT
|
We study a unified information-theoretic framework for integrated sensing and
communications (ISAC), applicable to both monostatic and bistatic sensing
scenarios. Special attention is given to the case where the sensing receiver
(Rx) is required to produce a "soft" estimate of the state sequence, with
logarithmic loss serving as the performance metric. We derive lower and upper
bounds on the capacity-distortion function, which delineates the fundamental
tradeoff between communication rate and sensing distortion. These bounds
coincide when the channel between the ISAC transmitter (Tx) and the
communication Rx is degraded with respect to the channel between the ISAC Tx
and the sensing Rx, or vice versa. Furthermore, we provide a complete
characterization of the capacity-distortion function for an ISAC system that
simultaneously transmits information over a binary-symmetric channel and senses
additive Bernoulli states through another binary-symmetric channel. The
Gaussian counterpart of this problem is also explored, which, together with a
state-splitting trick, fully determines the capacity-distortion-power function
under the squared error distortion measure.
|
2502.08503
|
Revisiting 3D LLM Benchmarks: Are We Really Testing 3D Capabilities?
|
cs.AI
|
In this work, we identify the "2D-Cheating" problem in 3D LLM evaluation,
where these tasks might be easily solved by VLMs with rendered images of point
clouds, exposing ineffective evaluation of 3D LLMs' unique 3D capabilities. We
test VLM performance across multiple 3D LLM benchmarks and, using this as a
reference, propose principles for better assessing genuine 3D understanding. We
also advocate explicitly separating 3D abilities from 1D or 2D aspects when
evaluating 3D LLMs.
|
2502.08505
|
Bridging Domain Adaptation and Graph Neural Networks: A Tensor-Based
Framework for Effective Label Propagation
|
cs.LG
|
Graph Neural Networks (GNNs) have recently become the predominant tools for
studying graph data. Despite state-of-the-art performance on graph
classification tasks, GNNs are overwhelmingly trained in a single domain under
supervision, thus demanding prohibitively many labels and
resulting in poorly transferable representations. To address this challenge, we
propose the Label-Propagation Tensor Graph Neural Network (LP-TGNN) framework
to bridge the gap between graph data and traditional domain adaptation methods.
It extracts graph topological information holistically with a tensor
architecture and then reduces domain discrepancy through label propagation. It
is readily compatible with general GNNs and domain adaptation techniques with
minimal adjustment through pseudo-labeling. Experiments on various real-world
benchmarks show that our LP-TGNN outperforms baselines by a notable margin. We
also validate and analyze each component of the proposed framework in the
ablation study.
|
2502.08507
|
Explanation based In-Context Demonstrations Retrieval for Multilingual
Grammatical Error Correction
|
cs.CL
|
Grammatical error correction (GEC) aims to correct grammatical, spelling, and
semantic errors in natural language text. With the growth of large language
models (LLMs), direct text generation has gradually become the focus of GEC
methods, and few-shot in-context learning presents a cost-effective solution.
However, selecting effective in-context examples remains challenging, as the
similarity between input texts does not necessarily correspond to similar
grammatical error patterns. In this paper, we propose a novel retrieval method
based on natural language grammatical error explanations (GEE) to address this
issue. Our method retrieves suitable few-shot demonstrations by matching the
GEE of the test input with that of pre-constructed database samples, where
explanations for erroneous samples are generated by LLMs. We conducted
multilingual GEC few-shot experiments on both major open-source and
closed-source LLMs. Experiments across five languages show that our method
outperforms existing semantic and BM25-based retrieval techniques, without
requiring additional training or language adaptation. This also suggests that
matching error patterns is key to selecting examples.
|
2502.08512
|
Measuring Diversity in Synthetic Datasets
|
cs.CL cs.AI
|
Large language models (LLMs) are widely adopted to generate synthetic
datasets for various natural language processing (NLP) tasks, such as text
classification and summarization. However, accurately measuring the diversity
of these synthetic datasets, an aspect crucial for robust model
performance, remains a significant challenge. In this paper, we introduce
DCScore, a novel method for measuring synthetic dataset diversity from a
classification perspective. Specifically, DCScore formulates diversity
evaluation as a sample classification task, leveraging mutual relationships
among samples. We further provide theoretical verification of the
diversity-related axioms satisfied by DCScore, highlighting its role as a
principled diversity evaluation method. Experimental results on synthetic
datasets reveal that DCScore enjoys a stronger correlation with multiple
diversity pseudo-truths of evaluated datasets, underscoring its effectiveness.
Moreover, both empirical and theoretical evidence demonstrate that DCScore
substantially reduces computational costs compared to existing approaches. Code
is available at: https://github.com/BlueWhaleLab/DCScore.
|
2502.08514
|
Faithful, Unfaithful or Ambiguous? Multi-Agent Debate with Initial
Stance for Summary Evaluation
|
cs.CL
|
Faithfulness evaluators based on large language models (LLMs) are often
fooled by the fluency of the text and struggle with identifying errors in the
summaries. We propose an approach to summary faithfulness evaluation in which
multiple LLM-based agents are assigned initial stances (regardless of what
their belief might be) and forced to come up with a reason to justify the
imposed belief, thus engaging in a multi-round debate to reach an agreement.
The uniformly distributed initial assignments result in a greater diversity of
stances leading to more meaningful debates and ultimately more errors
identified. Furthermore, by analyzing the recent faithfulness evaluation
datasets, we observe that a summary is not always clearly either faithful or
unfaithful to the source document. We therefore introduce a new
dimension, ambiguity, and a detailed taxonomy to identify such special cases.
Experiments demonstrate that our approach can help identify ambiguities and
achieves even stronger performance on non-ambiguous summaries.
|
2502.08515
|
The Paradox of Stochasticity: Limited Creativity and Computational
Decoupling in Temperature-Varied LLM Outputs of Structured Fictional Data
|
cs.LG
|
This study examines how temperature settings and model architectures affect
the generation of structured fictional data (names, birthdates) across three
large language models (LLMs): llama3.1:8b, deepseek-r1:8b, and mistral:latest.
By systematically testing temperature values from 0.0 to 1.0 in increments of
0.1, we conducted 330 trials yielding 889 structured entities, validated for
syntactic consistency. Key findings reveal that model architecture
significantly influences computational efficiency, with mistral:latest and
llama3.1:8b processing data 8x faster than deepseek-r1:8b. Contrary to
expectations, temperature showed no correlation with processing time,
challenging assumptions about stochastic sampling costs. Output diversity
remained limited, as models consistently defaulted to common name archetypes
(e.g., 'John Doe' and 'Jane Smith') across all temperatures, though rare names
clustered at intermediate values (0.3-0.7). These results demonstrate that
architectural optimizations, rather than temperature adjustments, dominate
performance in structured generation tasks. The findings emphasize prioritizing
model selection over hyperparameter tuning for efficiency and suggest explicit
diversity constraints are necessary to mitigate default output biases in
synthetic data pipelines.
|
2502.08518
|
FedMHO: Heterogeneous One-Shot Federated Learning Towards
Resource-Constrained Edge Devices
|
cs.LG cs.AI cs.DC
|
Federated Learning (FL) is increasingly adopted in edge computing scenarios,
where a large number of heterogeneous clients operate under constrained or
sufficient resources. The iterative training process in conventional FL
introduces significant computation and communication overhead, which is
ill-suited to resource-constrained edge devices. One-shot FL has emerged as a
promising approach to mitigate communication overhead, and model-heterogeneous
FL solves the problem of diverse computing resources across clients. However,
existing methods face challenges in effectively managing model-heterogeneous
one-shot FL, often leading to unsatisfactory global model performance or
reliance on auxiliary datasets. To address these challenges, we propose a novel
FL framework named FedMHO, which leverages deep classification models on
resource-sufficient clients and lightweight generative models on
resource-constrained devices. On the server side, FedMHO involves a two-stage
process that includes data generation and knowledge fusion. Furthermore, we
introduce FedMHO-MD and FedMHO-SD to mitigate the knowledge-forgetting problem
during the knowledge fusion stage, and an unsupervised data optimization
solution to improve the quality of synthetic samples. Comprehensive experiments
demonstrate the effectiveness of our methods, as they outperform
state-of-the-art baselines in various experimental setups.
|
2502.08522
|
Abstract questionnaires and FS-decision digraphs
|
math.CO cs.IT math.IT math.ST stat.TH
|
A questionnaire is a sequence of multiple choice questions aiming to collect
data on a population. We define an abstract questionnaire as an ordered pair
$(N,{\cal M})$, where $N$ is a positive integer and ${\cal
M}=(m_0,m_1,\ldots,m_{N-1})$ is an $N$-tuple of positive integers, with $m_i$,
for $i \in \{0, 1, \ldots, N-1 \}$, as the number of possible answers to
question $i$. An abstract questionnaire may be endowed with a skip-list (which
tells us which questions to skip based on the sequence of answers to the
earlier questions) and a flag-set (which tells us which sequences of answers
are of special interest). An FS-decision tree is a decision tree of an abstract
questionnaire that also incorporates the information contained in the skip-list
and flag-set. The main objective of this paper is to represent the abstract
questionnaire using a directed graph, which we call an FS-decision digraph,
that contains the full information of an FS-decision tree, but is in general
much more concise. We present an algorithm for constructing a fully reduced
FS-decision digraph, and develop the theory that supports it. In addition, we
show how to generate all possible orderings of the questions in an abstract
questionnaire that respect a given precedence relation.
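A minimal sketch of the decision-tree view of an abstract questionnaire $(N,{\cal M})$, with the skip-list modeled as a hypothetical callback from answered prefixes to question indices to skip. The paper's FS-decision digraph compresses this tree; none of the names below come from the paper.

```python
def enumerate_responses(M, skip=None):
    """Depth-first enumeration of the answer sequences of an abstract
    questionnaire (N, M), where M[i] is the number of possible answers
    to question i. `skip(answers)` returns the indices of questions to
    skip given the (question, answer) pairs recorded so far."""
    skip = skip or (lambda answers: set())
    N = len(M)
    sequences = []

    def walk(i, answers, skipped):
        while i < N and i in skipped:
            i += 1  # skip-list: jump over suppressed questions
        if i == N:
            sequences.append(tuple(answers))
            return
        for a in range(M[i]):
            recorded = answers + [(i, a)]
            walk(i + 1, recorded, skipped | set(skip(recorded)))

    walk(0, [], set())
    return sequences
```

With M = (2, 3) and no skip-list the tree has 2 * 3 = 6 leaves; a skip rule that drops question 1 whenever question 0 is answered 0 prunes this to 4 leaves, the kind of shared structure a reduced FS-decision digraph can represent more concisely than the tree.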
|
2502.08524
|
LLM Pretraining with Continuous Concepts
|
cs.LG cs.CL
|
Next token prediction has been the standard training objective used in large
language model pretraining. Representations are learned as a result of
optimizing for token-level perplexity. We propose Continuous Concept Mixing
(CoCoMix), a novel pretraining framework that combines discrete next token
prediction with continuous concepts. Specifically, CoCoMix predicts continuous
concepts learned from a pretrained sparse autoencoder and mixes them into the
model's hidden state by interleaving with token hidden representations. Through
experiments on multiple benchmarks, including language modeling and downstream
reasoning tasks, we show that CoCoMix is more sample efficient and consistently
outperforms standard next token prediction, knowledge distillation and
inserting pause tokens. We find that combining both concept learning and
interleaving in an end-to-end framework is critical to performance gains.
Furthermore, CoCoMix enhances interpretability and steerability by allowing
direct inspection and modification of the predicted concept, offering a
transparent way to guide the model's internal reasoning process.
|
2502.08525
|
Checkerboard Target Measurement in Unordered Point Clouds with Coloured
ICP
|
cs.CE
|
In this work, we investigate the problem of measuring the centre of a
checkerboard target in a 3D point cloud. This is an important problem with
applications in registration, long-term monitoring, and linking to other
sensor systems. We use a 3D template matching approach based on the coloured
ICP algorithm to solve the problem. We tackle the problem under the additional
constraint of assuming no structure in the 3D data, so that unordered point
clouds can be handled. This gives us the capability to process data
from the new generation of low-cost LIDAR sensors. This category of sensors
also suffers from increased noise in range and reflectivity measurement. We
provide extensive simulation results using synthetic data to capture the
potential of the approach. We then give the detailed steps for handling real
sensor data.
|
2502.08528
|
BCDDM: Branch-Corrected Denoising Diffusion Model for Black Hole Image
Generation
|
astro-ph.GA cs.CV
|
The properties of black holes and accretion flows can be inferred by fitting
Event Horizon Telescope (EHT) data to simulated images generated through
general relativistic ray tracing (GRRT). However, due to the computationally
intensive nature of GRRT, the efficiency of generating specific radiation flux
images needs to be improved. This paper introduces the Branch Correction
Denoising Diffusion Model (BCDDM), which uses a branch correction mechanism and
a weighted mixed loss function to improve the accuracy of generated black hole
images based on seven physical parameters of the radiatively inefficient
accretion flow (RIAF) model. Our experiments show a strong correlation between
the generated images and their physical parameters. By enhancing the GRRT
dataset with BCDDM-generated images and using ResNet50 for parameter
regression, we achieve significant improvements in parameter prediction
performance. This approach reduces computational costs and provides a faster,
more efficient method for dataset expansion, parameter estimation, and model
fitting.
|
2502.08529
|
Testbed Development: An Intelligent O-RAN based Cell-Free MIMO Network
|
cs.NI cs.SY eess.SY
|
Cell-free multiple input multiple output (CF-MIMO) systems improve spectral
and energy efficiencies using distributed access points (APs) to provide
reliable service across an area equivalent to multiple conventional cells. This
paper presents a novel design and implementation of a CF-MIMO network testbed
leveraging the open radio access network (O-RAN) architecture to enhance the
performance of interference-prone users. The proposed prototype is
developed from open-source software components and, unlike many other
prototypes, our testbed is able to serve commercial 5G user equipment (UE). The
RAN intelligent controller (RIC) allows the cell-free (CF) network to access
the embedded artificial intelligence and benefit from the network optimisation
techniques that O-RAN brings. The testbed includes an intelligent antenna
association xApp which determines the antenna group that serves each UE based
on the live key performance measurements. The paper demonstrates the deployment
and operation of the CF network and the xApp and discusses how the CF networks
can benefit from the O-RAN architecture.
|
2502.08531
|
On Different Notions of Redundancy in Conditional-Independence-Based
Discovery of Graphical Models
|
cs.LG stat.ML
|
The goal of conditional-independence-based discovery of graphical models is
to find a graph that represents the independence structure of variables in a
given dataset. To learn such a representation, conditional-independence-based
approaches conduct a set of statistical tests that suffices to identify the
graphical representation under some assumptions on the underlying distribution
of the data. In this work, we highlight that due to the conciseness of the
graphical representation, there are often many tests that are not used in the
construction of the graph. These redundant tests have the potential to detect
or sometimes correct errors in the learned model. We show that not all tests
contain this additional information and that such redundant tests have to be
applied with care. Precisely, we argue that the conditional (in)dependence
statements of particular interest are those that follow only from graphical
assumptions but do not hold for every probability distribution.
|
2502.08534
|
Input convex neural networks: universal approximation theorem and
implementation for isotropic polyconvex hyperelastic energies
|
cs.CE cs.AI
|
This paper presents a novel framework of neural networks for isotropic
hyperelasticity that enforces necessary physical and mathematical constraints
while simultaneously satisfying the universal approximation theorem. The two
key ingredients are an input convex network architecture and a formulation in
the elementary polynomials of the signed singular values of the deformation
gradient. In line with previously published networks, it can rigorously capture
frame-indifference and polyconvexity - as well as further constraints like
balance of angular momentum and growth conditions. However, in contrast to
previous networks, a universal approximation theorem for the proposed approach
is proven. To be more explicit, the proposed network can approximate any
frame-indifferent, isotropic polyconvex energy (provided the network is large
enough). This is possible by working with a necessary and sufficient criterion
for frame-indifferent, isotropic polyconvex functions. Comparative studies with
existing approaches identify the advantages of the proposed method,
particularly in approximating non-polyconvex energies as well as computing
polyconvex hulls.
|
2502.08536
|
Matrix Completion with Graph Information: A Provable Nonconvex
Optimization Approach
|
cs.LG math.OC
|
We consider the problem of matrix completion with graphs as side information
depicting the interrelations between variables. The key challenge lies in
leveraging the similarity structure of the graph to enhance matrix recovery.
Existing approaches, primarily based on graph Laplacian regularization, suffer
from several limitations: (1) they focus only on the similarity between
neighboring variables, while overlooking long-range correlations; (2) they are
highly sensitive to false edges in the graphs; and (3) they lack theoretical
guarantees regarding statistical and computational complexities. To address
these issues, we propose in this paper a novel graph regularized matrix
completion algorithm called GSGD, based on preconditioned projected gradient
descent approach. We demonstrate that GSGD effectively captures the
higher-order correlation information behind the graphs, and achieves superior
robustness and stability against the false edges. Theoretically, we prove that
GSGD achieves linear convergence to the global optimum with near-optimal sample
complexity, providing the first theoretical guarantees for both recovery
accuracy and computational efficiency from the perspective of nonconvex
optimization. Our
numerical experiments on both synthetic and real-world data further validate
that GSGD achieves superior recovery accuracy and scalability compared with
several popular alternatives.
|
2502.08540
|
A Survey on Image Quality Assessment: Insights, Analysis, and Future
Outlook
|
cs.CV
|
Image quality assessment (IQA) represents a pivotal challenge in
image-focused technologies, significantly influencing the advancement
trajectory of image processing and computer vision. Recently, IQA has witnessed
a notable surge in innovative research efforts, driven by the emergence of
novel architectural paradigms and sophisticated computational techniques. This
survey delivers an extensive analysis of contemporary IQA methodologies,
organized according to their application scenarios, serving as a beneficial
reference for both beginners and experienced researchers. We analyze the
advantages and limitations of current approaches and suggest potential future
research pathways. The survey encompasses both general and specific IQA
methodologies, including conventional statistical measures, machine learning
techniques, and cutting-edge deep learning models such as convolutional neural
networks (CNNs) and Transformer models. The analysis within this survey
highlights the necessity for distortion-specific IQA methods tailored to
various application scenarios, emphasizing the significance of practicality,
interpretability, and ease of implementation in future developments.
|
2502.08542
|
Beyond Predictions: A Participatory Framework for Multi-Stakeholder
Decision-Making
|
cs.LG cs.MA
|
Conventional decision-support systems, primarily based on supervised
learning, focus on outcome prediction models to recommend actions. However,
they often fail to account for the complexities of multi-actor environments,
where diverse and potentially conflicting stakeholder preferences must be
balanced. In this paper, we propose a novel participatory framework that
redefines decision-making as a multi-stakeholder optimization problem,
capturing each actor's preferences through context-dependent reward functions.
Our framework leverages $k$-fold cross-validation to fine-tune user-provided
outcome prediction models and evaluate decision strategies, including
compromise functions mediating stakeholder trade-offs. We introduce a synthetic
scoring mechanism that exploits user-defined preferences across multiple
metrics to rank decision-making strategies and identify the optimal
decision-maker. The selected decision-maker can then be used to generate
actionable recommendations for new data. We validate our framework using two
real-world use cases, demonstrating its ability to deliver recommendations that
effectively balance multiple metrics, achieving results that are often beyond
the scope of purely prediction-based methods. Ablation studies demonstrate that
our framework, with its modular, model-agnostic, and inherently transparent
design, integrates seamlessly with various predictive models, reward
structures, evaluation metrics, and sample sizes, making it particularly suited
for complex, high-stakes decision-making contexts.
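The core evaluation loop, $k$-fold scoring of candidate decision strategies against user-weighted stakeholder rewards, can be sketched as follows. The reward functions, weights, and threshold strategies are illustrative stand-ins, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setting: per-case success probability p and realized outcome y.
n = 1000
p = rng.random(n)
y = (rng.random(n) < p).astype(float)

# Candidate strategies: intervene when predicted success prob > threshold.
strategies = {f"t={t:.1f}": t for t in (0.3, 0.5, 0.7)}

# Two hypothetical stakeholder reward functions of (action, outcome).
def reward_provider(a, y):   # values successful interventions
    return a * (2 * y - 1)
def reward_payer(a, y):      # discounts intervention cost
    return -0.2 * a + a * y

weights = {"provider": 0.7, "payer": 0.3}   # user-defined preferences

# k-fold evaluation: score each strategy on held-out folds.
k = 5
folds = np.array_split(rng.permutation(n), k)
scores = {}
for name, t in strategies.items():
    fold_scores = []
    for idx in folds:
        a = (p[idx] > t).astype(float)   # stand-in for a tuned model
        s = (weights["provider"] * reward_provider(a, y[idx]).mean()
             + weights["payer"] * reward_payer(a, y[idx]).mean())
        fold_scores.append(s)
    scores[name] = float(np.mean(fold_scores))

best = max(scores, key=scores.get)
print(best, scores)
```

The selected `best` strategy plays the role of the framework's "optimal decision-maker", which would then be applied to new cases.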
|
2502.08544
|
Moment of Untruth: Dealing with Negative Queries in Video Moment
Retrieval
|
cs.CV
|
Video Moment Retrieval is a common task to evaluate the performance of
visual-language models - it involves localising start and end times of moments
in videos from query sentences. The current task formulation assumes that the
queried moment is present in the video, resulting in false positive moment
predictions when irrelevant query sentences are provided. In this paper we
propose the task of Negative-Aware Video Moment Retrieval (NA-VMR), which
considers both moment retrieval accuracy and negative query rejection accuracy.
We make the distinction between In-Domain and Out-of-Domain negative queries
and provide new evaluation benchmarks for two popular video moment retrieval
datasets: QVHighlights and Charades-STA. We analyse the ability of current SOTA
video moment retrieval approaches to adapt to Negative-Aware Video Moment
Retrieval and propose UniVTG-NA, an adaptation of UniVTG designed to tackle
NA-VMR. UniVTG-NA achieves high negative rejection accuracy (avg. $98.4\%$)
while retaining moment retrieval scores to within $3.87\%$ Recall@1.
Dataset splits and code are available at
https://github.com/keflanagan/MomentofUntruth
|
2502.08547
|
Representation Learning to Advance Multi-institutional Studies with
Electronic Health Record Data
|
cs.AI
|
The adoption of EHRs has expanded opportunities to leverage data-driven
algorithms in clinical care and research. A major bottleneck in effectively
conducting multi-institutional EHR studies is the data heterogeneity across
systems with numerous codes that either do not exist or represent different
clinical concepts across institutions. The need for data privacy further limits
the feasibility of including multi-institutional patient-level data required to
study similarities and differences across patient subgroups. To address these
challenges, we developed the GAME algorithm. Tested and validated across 7
institutions and 2 languages, GAME integrates data at several levels: (1) at
the institutional level with knowledge graphs to establish relationships
between codes and existing knowledge sources, providing the medical context for
standard codes and their relationship to each other; (2) between institutions,
leveraging language models to determine the relationships between
institution-specific codes with established standard codes; and (3) quantifying
the strength of the relationships between codes using a graph attention
network. Jointly trained embeddings are created using transfer and federated
learning to preserve data privacy. In this study, we demonstrate the
applicability of GAME in selecting relevant features as inputs for AI-driven
algorithms in a range of conditions, e.g., heart failure, rheumatoid arthritis.
We then highlight the application of GAME harmonized multi-institutional EHR
data in a study of Alzheimer's disease outcomes and suicide risk among patients
with mental health disorders, without sharing patient-level data outside
individual institutions.
|
2502.08549
|
Copula-based mixture model identification for subgroup clustering with
imaging applications
|
cs.CV cs.LG
|
Model-based clustering techniques have been widely applied to various
application areas, while most studies focus on canonical mixtures with a unique
component distribution form. However, this strict assumption is often hard to
satisfy. In this paper, we consider the more flexible Copula-Based Mixture
Models (CBMMs) for clustering, which allow heterogeneous component
distributions composed by flexible choices of marginal and copula forms. More
specifically, we propose an adaptation of the Generalized Iterative Conditional
Estimation (GICE) algorithm to identify the CBMMs in an unsupervised manner,
where the marginal and copula forms and their parameters are estimated
iteratively. GICE is adapted from its original version developed for switching
Markov model identification with the choice of realization time. Our CBMM-GICE
clustering method is then tested on synthetic two-cluster data (N=2000 samples)
with discussion of the factors impacting its convergence. Finally, it is
compared to mixture models with a unique component form identified via
Expectation-Maximization on the entire MNIST database (N=70000), and on real cardiac
magnetic resonance data (N=276) to illustrate its value for imaging
applications.
|
2502.08550
|
LLMs can implicitly learn from mistakes in-context
|
cs.CL cs.AI
|
Learning from mistakes is a fundamental feature of human intelligence.
Previous work has shown that Large Language Models (LLMs) can also learn from
incorrect answers when provided with a comprehensive rationale detailing why an
answer is wrong or how to correct it. In this work, we examine whether LLMs can
learn from mistakes in mathematical reasoning tasks when these explanations are
not provided. We investigate if LLMs are able to implicitly infer such
rationales simply from observing both incorrect and correct answers.
Surprisingly, we find that LLMs perform better, on average, when rationales are
eliminated from the context and incorrect answers are simply shown alongside
correct ones. This approach also substantially outperforms chain-of-thought
prompting in our evaluations. We show that these results are consistent across
LLMs of different sizes and varying reasoning abilities. Further, we carry out
an in-depth analysis, and show that prompting with both wrong and correct
answers leads to greater performance and better generalisation than introducing
additional, more diverse question-answer pairs into the context. Finally, we
show that new rationales generated by models that have only observed incorrect
and correct answers are scored just as highly by humans as those produced
with the aid of exemplar rationales. Our results demonstrate that LLMs are
indeed capable of in-context implicit learning.
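The prompting setup studied here, incorrect answers shown alongside correct ones with no rationale, is straightforward to reproduce. A minimal sketch (the wording and exemplars are illustrative, not the paper's):

```python
# Each in-context exemplar pairs an incorrect answer with the correct one,
# deliberately omitting any rationale explaining the mistake.
exemplars = [
    {"question": "What is 17 * 6?", "incorrect": "96", "correct": "102"},
    {"question": "What is 250 / 4?", "incorrect": "60.5", "correct": "62.5"},
]

def build_prompt(exemplars, query):
    parts = []
    for ex in exemplars:
        parts.append(
            f"Question: {ex['question']}\n"
            f"Incorrect answer: {ex['incorrect']}\n"
            f"Correct answer: {ex['correct']}\n"
        )
    parts.append(f"Question: {query}\nCorrect answer:")
    return "\n".join(parts)

prompt = build_prompt(exemplars, "What is 13 * 7?")
print(prompt)
```

The model must implicitly infer *why* each incorrect answer fails; the paper's finding is that this format outperforms including explicit rationales.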
|
2502.08554
|
Fostering Appropriate Reliance on Large Language Models: The Role of
Explanations, Sources, and Inconsistencies
|
cs.HC cs.AI
|
Large language models (LLMs) can produce erroneous responses that sound
fluent and convincing, raising the risk that users will rely on these responses
as if they were correct. Mitigating such overreliance is a key challenge.
Through a think-aloud study in which participants use an LLM-infused
application to answer objective questions, we identify several features of LLM
responses that shape users' reliance: explanations (supporting details for
answers), inconsistencies in explanations, and sources. Through a large-scale,
pre-registered, controlled experiment (N=308), we isolate and study the effects
of these features on users' reliance, accuracy, and other measures. We find
that the presence of explanations increases reliance on both correct and
incorrect responses. However, we observe less reliance on incorrect responses
when sources are provided or when explanations exhibit inconsistencies. We
discuss the implications of these findings for fostering appropriate reliance
on LLMs.
|
2502.08555
|
A Machine Learning-Ready Data Processing Tool for Near Real-Time
Forecasting
|
astro-ph.SR astro-ph.IM cs.LG
|
Space weather forecasting is critical for mitigating radiation risks in space
exploration and protecting Earth-based technologies from geomagnetic
disturbances. This paper presents the development of a Machine Learning
(ML)-ready data processing tool for Near Real-Time (NRT) space weather forecasting.
By merging data from diverse NRT sources such as solar imagery, magnetic field
measurements, and energetic particle fluxes, the tool addresses key gaps in
current space weather prediction capabilities. The tool processes and
structures the data for machine learning models, focusing on time-series
forecasting and event detection for extreme solar events. It provides users
with a framework to download, process, and label data for ML applications,
streamlining the workflow for improved NRT space weather forecasting and
scientific research.
|
2502.08556
|
Human-Centric Foundation Models: Perception, Generation and Agentic
Modeling
|
cs.CV cs.AI cs.LG cs.MM
|
Human understanding and generation are critical for modeling digital humans
and humanoid embodiments. Recently, Human-centric Foundation Models (HcFMs)
inspired by the success of generalist models, such as large language and vision
models, have emerged to unify diverse human-centric tasks into a single
framework, surpassing traditional task-specific approaches. In this survey, we
present a comprehensive overview of HcFMs by proposing a taxonomy that
categorizes current approaches into four groups: (1) Human-centric Perception
Foundation Models that capture fine-grained features for multi-modal 2D and 3D
understanding. (2) Human-centric AIGC Foundation Models that generate
high-fidelity, diverse human-related content. (3) Unified Perception and
Generation Models that integrate these capabilities to enhance both human
understanding and synthesis. (4) Human-centric Agentic Foundation Models that
extend beyond perception and generation to learn human-like intelligence and
interactive behaviors for humanoid embodied tasks. We review state-of-the-art
techniques, discuss emerging challenges and future research directions. This
survey aims to serve as a roadmap for researchers and practitioners working
towards more robust, versatile, and intelligent digital human and embodiment
modeling.
|
2502.08557
|
QA-Expand: Multi-Question Answer Generation for Enhanced Query Expansion
in Information Retrieval
|
cs.IR cs.CL cs.LG cs.MA
|
Query expansion is widely used in Information Retrieval (IR) to improve
search outcomes by enriching queries with additional contextual information.
Although recent Large Language Model (LLM) based methods generate
pseudo-relevant content and expanded terms via multiple prompts, they often
yield repetitive, narrow expansions that lack the diverse context needed to
retrieve all relevant information. In this paper, we introduce QA-Expand, a
novel and effective framework for query expansion. It first generates multiple
relevant questions from the initial query and subsequently produces
corresponding pseudo-answers as surrogate documents. A feedback model further
rewrites and filters these answers to ensure only the most informative
augmentations are incorporated. Extensive experiments on benchmarks such as
BEIR and TREC demonstrate that QA-Expand enhances retrieval performance by up
to 13% over state-of-the-art methods, offering a robust solution for modern
retrieval challenges.
|
2502.08560
|
Brain Latent Progression: Individual-based Spatiotemporal Disease
Progression on 3D Brain MRIs via Latent Diffusion
|
cs.CV cs.AI
|
The growing availability of longitudinal Magnetic Resonance Imaging (MRI)
datasets has facilitated Artificial Intelligence (AI)-driven modeling of
disease progression, making it possible to predict future medical scans for
individual patients. However, despite significant advancements in AI, current
methods continue to face challenges including achieving patient-specific
individualization, ensuring spatiotemporal consistency, efficiently utilizing
longitudinal data, and managing the substantial memory demands of 3D scans. To
address these challenges, we propose Brain Latent Progression (BrLP), a novel
spatiotemporal model designed to predict individual-level disease progression
in 3D brain MRIs. The key contributions in BrLP are fourfold: (i) it operates
in a small latent space, mitigating the computational challenges posed by
high-dimensional imaging data; (ii) it explicitly integrates subject metadata
to enhance the individualization of predictions; (iii) it incorporates prior
knowledge of disease dynamics through an auxiliary model, facilitating the
integration of longitudinal data; and (iv) it introduces the Latent Average
Stabilization (LAS) algorithm, which (a) enforces spatiotemporal consistency in
the predicted progression at inference time and (b) allows us to derive a
measure of the uncertainty for the prediction. We train and evaluate BrLP on
11,730 T1-weighted (T1w) brain MRIs from 2,805 subjects and validate its
generalizability on an external test set comprising 2,257 MRIs from 962
subjects. Our experiments compare BrLP-generated MRI scans with real follow-up
MRIs, demonstrating state-of-the-art accuracy compared to existing methods. The
code is publicly available at: https://github.com/LemuelPuglisi/BrLP.
|
2502.08561
|
Quality-Aware Decoding: Unifying Quality Estimation and Decoding
|
cs.CL
|
Quality Estimation (QE) models for Neural Machine Translation (NMT) predict
the quality of the hypothesis without having access to the reference. An
emerging research direction in NMT involves the use of QE models, which have
demonstrated high correlations with human judgment and can enhance translations
through Quality-Aware Decoding. Although several approaches have been proposed
based on sampling multiple candidate translations and picking the best
candidate, none have integrated these models directly into the decoding
process. In this paper, we address this by proposing a novel token-level QE
model capable of reliably scoring partial translations. We build a
uni-directional QE model for this, as decoder models are inherently trained and
efficient on partial sequences. We then present a decoding strategy that
integrates the QE model for Quality-Aware decoding and demonstrate that the
translation quality improves when compared to the N-best list re-ranking with
state-of-the-art QE models (up to $1.39$ XCOMET-XXL $\uparrow$). Finally, we
show that our approach provides significant benefits in document translation
tasks, where the quality of N-best lists is typically suboptimal. Code can be
found at https://ai4lt.iar.kit.edu/english/projects_kontextmt.php
|
2502.08566
|
AR Glulam: Accurate Augmented Reality Using Multiple Fiducial Markers
for Glulam Fabrication
|
cs.ET cs.CV cs.HC
|
Recent advancements in Augmented Reality (AR) have demonstrated applications
in architecture, design, and fabrication. Compared to conventional 2D
construction drawings, AR can be used to superimpose contextual instructions,
display 3D spatial information and enable on-site engagement. Despite the
potential of AR, the widespread adoption of the technology in the industry is
limited by its precision. Precision is important for projects requiring strict
construction tolerances, design fidelity, and fabrication feedback. For
example, the manufacturing of glulam beams requires tolerances of less than
2mm. The goal of this project is to explore the industrial application of using
multiple fiducial markers for high-precision AR fabrication. While the method
has been validated in lab settings with a precision of 0.97, this paper focuses
on fabricating glulam beams in a factory setting with an industry manufacturer,
Unalam Factory.
|
2502.08573
|
A Novel Approach for Multimodal Emotion Recognition: Multimodal
semantic information fusion
|
cs.CV cs.AI
|
With the advancement of artificial intelligence and computer vision
technologies, multimodal emotion recognition has become a prominent research
topic. However, existing methods face challenges such as heterogeneous data
fusion and the effective utilization of modality correlations. This paper
proposes a novel multimodal emotion recognition approach, DeepMSI-MER, based on
the integration of contrastive learning and visual sequence compression. The
proposed method enhances cross-modal feature fusion through contrastive
learning and reduces redundancy in the visual modality by leveraging visual
sequence compression. Experimental results on two public datasets, IEMOCAP and
MELD, demonstrate that DeepMSI-MER significantly improves the accuracy and
robustness of emotion recognition, validating the effectiveness of multimodal
feature fusion and the proposed approach.
|
2502.08574
|
COAST: Intelligent Time-Adaptive Neural Operators
|
cs.LG cs.AI
|
We introduce Causal Operator with Adaptive Solver Transformer (COAST), a
novel neural operator learning method that leverages a causal language model
(CLM) framework to dynamically adapt time steps. Our method predicts both the
evolution of a system and its optimal time step, intelligently balancing
computational efficiency and accuracy. We find that COAST generates variable
step sizes that correlate with intrinsic properties of the underlying system, both
within and across dynamical systems. Within a single trajectory, smaller steps
are taken in regions of high complexity, while larger steps are employed in
simpler regions. Across different systems, more complex dynamics receive more
granular time steps. Benchmarked on diverse systems with varied dynamics, COAST
consistently outperforms state-of-the-art methods, achieving superior
performance in both efficiency and accuracy. This work underscores the
potential of CLM-based intelligent adaptive solvers for scalable operator
learning of dynamical systems.
|
2502.08576
|
Mapping the Landscape of Generative AI in Network Monitoring and
Management
|
cs.NI cs.AI cs.LG
|
Generative Artificial Intelligence (GenAI) models such as LLMs, GPTs, and
Diffusion Models have recently gained widespread attention from both the
research and the industrial communities. This survey explores their application
in network monitoring and management, focusing on prominent use cases, as well
as challenges and opportunities. We discuss how network traffic generation and
classification, network intrusion detection, networked system log analysis, and
network digital assistance can benefit from the use of GenAI models.
Additionally, we provide an overview of the available GenAI models, datasets
for large-scale training phases, and platforms for the development of such
models. Finally, we discuss research directions that potentially mitigate the
roadblocks to the adoption of GenAI for network monitoring and management. Our
investigation aims to map the current landscape and pave the way for future
research in leveraging GenAI for network monitoring and management.
|
2502.08577
|
FBFL: A Field-Based Coordination Approach for Data Heterogeneity in
Federated Learning
|
cs.LG cs.AI
|
In recent years, federated learning (FL) has become a popular solution to
train machine learning models in domains with high privacy concerns. However,
FL scalability and performance face significant challenges in real-world
deployments where data across devices are non-independently and identically
distributed (non-IID). The heterogeneity in data distribution frequently arises
from spatial distribution of devices, leading to degraded model performance in
the absence of proper handling. Additionally, FL's typical reliance on
centralized architectures introduces bottlenecks and single-point-of-failure
risks, particularly problematic at scale or in dynamic environments. To close
this gap, we propose Field-Based Federated Learning (FBFL), a novel approach
leveraging macroprogramming and field coordination to address these limitations
through: (i) distributed spatial-based leader election for personalization to
mitigate non-IID data challenges; and (ii) construction of a self-organizing,
hierarchical architecture using advanced macroprogramming patterns. Moreover,
FBFL not only overcomes the aforementioned limitations, but also enables the
development of more specialized models tailored to the specific data
distribution in each subregion. This paper formalizes FBFL and evaluates it
extensively using MNIST, FashionMNIST, and Extended MNIST datasets. We
demonstrate that, when operating under IID data conditions, FBFL performs
comparably to the widely-used FedAvg algorithm. Furthermore, in challenging
non-IID scenarios, FBFL not only outperforms FedAvg but also surpasses other
state-of-the-art methods, namely FedProx and Scaffold, which have been
specifically designed to address non-IID data distributions. Additionally, we
showcase the resilience of FBFL's self-organizing hierarchical architecture
against server failures.
|
2502.08580
|
Ultrasound Image Generation using Latent Diffusion Models
|
cs.CV
|
Diffusion models for image generation have been a subject of increasing
interest due to their ability to generate diverse, high-quality images. Image
generation has immense potential in medical imaging because open-source medical
images are difficult to obtain compared to natural images, especially for rare
conditions. The generated images can be used later to train classification and
segmentation models. In this paper, we propose simulating realistic ultrasound
(US) images by successive fine-tuning of large diffusion models on different
publicly available databases. To do so, we fine-tuned Stable Diffusion, a
state-of-the-art latent diffusion model, on BUSI (Breast US Images), an
ultrasound breast image dataset. We successfully generated high-quality US
images of the breast using simple prompts that specify the organ and pathology,
which appeared realistic to three experienced US scientists and a US
radiologist. Additionally, we provided user control by conditioning the model
with segmentations through ControlNet. We will release the source code at
http://code.sonography.ai/ to allow fast US image generation to the scientific
community.
|
2502.08582
|
A method for classification of data with uncertainty using hypothesis
testing
|
cs.LG
|
Binary classification is a task that involves the classification of data into
one of two distinct classes. It is widely utilized in various fields. However,
conventional classifiers tend to make overconfident predictions for data that
belong to overlapping regions of the two class distributions or for data
outside the distributions (out-of-distribution data). Therefore, conventional
classifiers should not be applied in high-risk fields where classification
results can have significant consequences. In order to address this issue, it
is necessary to quantify uncertainty and adopt decision-making approaches that
take it into account. Many methods have been proposed for this purpose;
however, implementing these methods often requires performing resampling,
improving the structure or performance of models, and optimizing the thresholds
of classifiers. We propose a new decision-making approach using two types of
hypothesis testing. This method is capable of detecting ambiguous data that
belong to the overlapping regions of two class distributions, as well as
out-of-distribution data that are not included in the training data
distribution. In addition, we quantify uncertainty using the empirical
distribution of feature values derived from the training data obtained through
the trained model. The classification threshold is determined by the
$\alpha$-quantile and ($1-\alpha$)-quantile, where the significance level
$\alpha$ is set according to each specific situation.
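A minimal one-dimensional sketch of such a quantile-threshold decision rule (illustrative, not the paper's exact two-test procedure): each class accepts a point only if it falls inside the central $(1-2\alpha)$ mass of that class's empirical feature distribution; acceptance by both classes flags ambiguity, acceptance by neither flags out-of-distribution data.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.05

# Class-conditional training feature values (e.g., a learned 1-D score).
f0 = rng.normal(0.0, 1.0, 500)   # class 0
f1 = rng.normal(2.0, 1.0, 500)   # class 1

# Acceptance interval per class from the empirical alpha/(1-alpha) quantiles.
iv = {c: (np.quantile(f, alpha), np.quantile(f, 1 - alpha))
      for c, f in [(0, f0), (1, f1)]}

def decide(x):
    # Two one-sided tests per class: is x inside the central (1-2*alpha) mass?
    inside = [c for c, (lo, hi) in iv.items() if lo <= x <= hi]
    if len(inside) == 1:
        return str(inside[0])
    return "ambiguous" if len(inside) == 2 else "out-of-distribution"

print(decide(-1.0), decide(1.0), decide(5.0))
# → 0 ambiguous out-of-distribution
```

Raising $\alpha$ shrinks the acceptance intervals, trading more rejections for fewer overconfident predictions.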
|
2502.08585
|
Scalable Bilevel Loss Balancing for Multi-Task Learning
|
cs.LG
|
Multi-task learning (MTL) has been widely adopted for its ability to
simultaneously learn multiple tasks. While existing gradient manipulation
methods often yield more balanced solutions than simple scalarization-based
approaches, they typically incur a significant computational overhead of
$\mathcal{O}(K)$ in both time and memory, where $K$ is the number of tasks. In
this paper, we propose BiLB4MTL, a simple and scalable loss balancing approach
for MTL, formulated from a novel bilevel optimization perspective. Our method
incorporates three key components: (i) an initial loss normalization, (ii) a
bilevel loss-balancing formulation, and (iii) a scalable first-order algorithm
that requires only $\mathcal{O}(1)$ time and memory. Theoretically, we prove
that BiLB4MTL guarantees convergence not only to a stationary point of the
bilevel loss balancing problem but also to an $\epsilon$-accurate Pareto
stationary point for all $K$ loss functions under mild conditions. Extensive
experiments on diverse multi-task datasets demonstrate that BiLB4MTL achieves
state-of-the-art performance in both accuracy and efficiency. Code is available
at https://github.com/OptMN-Lab/-BiLB4MTL.
|
2502.08586
|
Commercial LLM Agents Are Already Vulnerable to Simple Yet Dangerous
Attacks
|
cs.LG cs.AI
|
A high volume of recent ML security literature focuses on attacks against
aligned large language models (LLMs). These attacks may extract private
information or coerce the model into producing harmful outputs. In real-world
deployments, LLMs are often part of a larger agentic pipeline including memory
systems, retrieval, web access, and API calling. Such additional components
introduce vulnerabilities that make these LLM-powered agents much easier to
attack than isolated LLMs, yet relatively little work focuses on the security
of LLM agents. In this paper, we analyze security and privacy vulnerabilities
that are unique to LLM agents. We first provide a taxonomy of attacks
categorized by threat actors, objectives, entry points, attacker observability,
attack strategies, and inherent vulnerabilities of agent pipelines. We then
conduct a series of illustrative attacks on popular open-source and commercial
agents, demonstrating the immediate practical implications of their
vulnerabilities. Notably, our attacks are trivial to implement and require no
understanding of machine learning.
|
2502.08590
|
Light-A-Video: Training-free Video Relighting via Progressive Light
Fusion
|
cs.CV
|
Recent advancements in image relighting models, driven by large-scale
datasets and pre-trained diffusion models, have enabled the imposition of
consistent lighting. However, video relighting still lags, primarily due to the
excessive training costs and the scarcity of diverse, high-quality video
relighting datasets. A simple application of image relighting models on a
frame-by-frame basis leads to several issues: lighting source inconsistency and
relighted appearance inconsistency, resulting in flickers in the generated
videos. In this work, we propose Light-A-Video, a training-free approach to
achieve temporally smooth video relighting. Adapted from image relighting
models, Light-A-Video introduces two key techniques to enhance lighting
consistency. First, we design a Consistent Light Attention (CLA) module, which
enhances cross-frame interactions within the self-attention layers to stabilize
the generation of the background lighting source. Second, leveraging the
physical principle of light transport independence, we apply linear blending
between the source video's appearance and the relighted appearance, using a
Progressive Light Fusion (PLF) strategy to ensure smooth temporal transitions
in illumination. Experiments show that Light-A-Video improves the temporal
consistency of relighted video while maintaining the image quality, ensuring
coherent lighting transitions across frames. Project page:
https://bujiazi.github.io/light-a-video.github.io/.
|
2502.08593
|
Toward Universal Laws of Outlier Propagation
|
cs.LG
|
We argue that Algorithmic Information Theory (AIT) admits a principled way to
quantify outliers in terms of so-called randomness deficiency. For the
probability distribution generated by a causal Bayesian network, we show that
the randomness deficiency of the joint state decomposes into randomness
deficiencies of each causal mechanism, subject to the Independence of
Mechanisms Principle. Accordingly, anomalous joint observations can be
quantitatively attributed to their root causes, i.e., the mechanisms that
behaved anomalously. As an extension of Levin's law of randomness conservation,
we show that weak outliers cannot cause strong ones when Independence of
Mechanisms holds. We show how these information theoretic laws provide a better
understanding of the behaviour of outliers defined with respect to existing
scores.
|
2502.08597
|
Learning in Markets with Heterogeneous Agents: Dynamics and Survival of
Bayesian vs. No-Regret Learners
|
cs.GT cs.AI cs.MA econ.TH
|
We analyze the performance of heterogeneous learning agents in asset markets
with stochastic payoffs. Our agents aim to maximize the expected growth rate of
their wealth but have different theories on how to learn this best. We focus on
comparing Bayesian and no-regret learners in market dynamics. Bayesian learners
with a prior over a finite set of models that assign positive prior probability
to the correct model have posterior probabilities that converge exponentially
to the correct model. Consequently, they survive even in the presence of agents
who invest according to the correct model of the stochastic process. Bayesians
with a continuum prior converge to the correct model at a rate of $O((\log
T)/T)$. Online learning theory provides no-regret algorithms for maximizing the
log of wealth in this setting, achieving a worst-case regret bound of $O(\log
T)$ without assuming a steady underlying stochastic process but comparing to
the best fixed investment rule. This regret, as we observe, is of the same
order of magnitude as that of a Bayesian learner with a continuum prior.
However, we show that even such low regret may not be sufficient for survival
in asset markets: an agent can have regret as low as $O(\log T)$, but still
vanish in market dynamics when competing against agents who invest according to
the correct model or even against a perfect Bayesian with a finite prior. On
the other hand, we show that Bayesian learning is fragile, while no-regret
learning requires less knowledge of the environment and is therefore more
robust. Any no-regret learner will drive out of the market an imperfect
Bayesian whose finite prior or update rule has even small errors. We formally
establish the relationship between notions of survival, vanishing, and market
domination studied in economics and the framework of regret minimization, thus
bridging these theories.
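The exponential posterior convergence for a finite model class is easy to verify numerically. In this sketch (an assumed i.i.d. binary payoff process, not the paper's full market model), the log posterior mass on wrong models decays at roughly the KL divergence between the true model and the nearest wrong model per observation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Finite set of candidate models for an i.i.d. binary payoff process.
models = np.array([0.3, 0.5, 0.7])   # candidate P(up)
true_p = 0.7                         # the correct model is in the set
prior = np.ones(3) / 3

T = 500
post = prior.copy()
log_post_wrong = []
for _ in range(T):
    x = rng.random() < true_p
    lik = np.where(x, models, 1 - models)   # per-model likelihood of x
    post = post * lik                       # Bayes update
    post /= post.sum()
    log_post_wrong.append(np.log(post[:2].sum()))

# Average per-step decay of the wrong-model posterior mass:
# roughly -KL(Bern(0.7) || Bern(0.5)) ≈ -0.082 nats per observation.
slope = (log_post_wrong[-1] - log_post_wrong[0]) / (T - 1)
print(post.round(4), round(slope, 4))
```

This exponential concentration is what lets a finite-prior Bayesian survive against agents who know the correct model, in contrast to the $O((\log T)/T)$ rate under a continuum prior.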
|
2502.08598
|
Enhancing Diffusion Models Efficiency by Disentangling Total-Variance
and Signal-to-Noise Ratio
|
cs.LG stat.ML
|
The long sampling time of diffusion models remains a significant bottleneck,
which can be mitigated by reducing the number of diffusion time steps. However,
the quality of samples with fewer steps is highly dependent on the noise
schedule, i.e., the specific manner in which noise is introduced and the signal
is reduced at each step. Although prior work has improved upon the original
variance-preserving and variance-exploding schedules, these approaches
$\textit{passively}$ adjust the total variance, without direct control over it.
In this work, we propose a novel total-variance/signal-to-noise-ratio
disentangled (TV/SNR) framework, where TV and SNR can be controlled
independently. Our approach reveals that different existing schedules, where
the TV explodes exponentially, can be $\textit{improved}$ by setting a constant
TV schedule while preserving the same SNR schedule. Furthermore, generalizing
the SNR schedule of the optimal transport flow matching significantly improves
the performance in molecular structure generation, achieving few-step
generation of stable molecules. A similar tendency is observed in image
generation, where our approach with a uniform diffusion time grid performs
comparably to the highly tailored EDM sampler.
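The constant-TV construction can be made concrete under one common convention (assumed here, not necessarily the paper's): with x_t = alpha_t x_0 + sigma_t eps, SNR_t = alpha_t^2 / sigma_t^2 and TV_t = alpha_t^2 + sigma_t^2, fixing TV while keeping a given SNR schedule determines alpha_t and sigma_t uniquely:

```python
import numpy as np

def constant_tv_schedule(snr, tv=1.0):
    """Given a desired SNR schedule, return (alpha, sigma) with constant TV.

    Assumed convention: x_t = alpha_t * x_0 + sigma_t * eps,
    SNR_t = alpha_t^2 / sigma_t^2, TV_t = alpha_t^2 + sigma_t^2.
    Holding TV constant while matching the SNR schedule gives
    alpha_t^2 = TV * SNR_t / (1 + SNR_t) and sigma_t^2 = TV / (1 + SNR_t).
    """
    snr = np.asarray(snr, dtype=float)
    alpha = np.sqrt(tv * snr / (1.0 + snr))
    sigma = np.sqrt(tv / (1.0 + snr))
    return alpha, sigma

# Example: a geometric SNR schedule running from data (high SNR) to noise
snr = np.geomspace(1e4, 1e-4, num=10)
alpha, sigma = constant_tv_schedule(snr)
```

By construction the total variance stays flat across steps instead of exploding, while the signal-to-noise trajectory is unchanged.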
|
2502.08599
|
SPeCtrum: A Grounded Framework for Multidimensional Identity
Representation in LLM-Based Agent
|
cs.CL
|
Existing methods for simulating individual identities often oversimplify
human complexity, which may lead to incomplete or flattened representations. To
address this, we introduce SPeCtrum, a grounded framework for constructing
authentic LLM agent personas by incorporating an individual's multidimensional
self-concept. SPeCtrum integrates three core components: Social Identity (S),
Personal Identity (P), and Personal Life Context (C), each contributing
distinct yet interconnected aspects of identity. To evaluate SPeCtrum's
effectiveness in identity representation, we conducted automated and human
evaluations. Automated evaluations using popular drama characters showed that
Personal Life Context (C)-derived from short essays on preferences and daily
routines-modeled characters' identities more effectively than Social Identity
(S) and Personal Identity (P) alone and performed comparably to the full SPC
combination. In contrast, human evaluations involving real-world individuals
found that the full SPC combination provided a more comprehensive self-concept
representation than C alone. Our findings suggest that while C alone may
suffice for basic identity simulation, integrating S, P, and C enhances the
authenticity and accuracy of real-world identity representation. Overall,
SPeCtrum offers a structured approach for simulating individuals in LLM agents,
enabling more personalized human-AI interactions and improving the realism of
simulation-based behavioral studies.
|
2502.08600
|
Two-stage hybrid models for enhancing forecasting accuracy on
heterogeneous time series
|
cs.LG
|
Compared to local models built in a series-by-series manner, global models
leverage relevant information across time series, resulting in improved
forecasting performance and generalization capacity. Constructing global models
on a set of time series is becoming mainstream in the field of time series
forecasting. However, the advantages of global models may not always be
realized when dealing with heterogeneous data. While they can adapt to
heterogeneous datasets by increasing the model complexity, the model cannot be
infinitely complex due to the finite sample size, which poses challenges for
the application of global models. Additionally, determining whether the time
series data is homogeneous or heterogeneous can be ambiguous in practice. To
address these research gaps, this paper argues that the heterogeneity of the
data should be defined by the global model used, and for each series, the
portion not modelled by the global model represents heterogeneity. It further
proposes two-stage hybrid models, which include a second stage to identify and
model heterogeneous patterns. In this second stage, we can estimate either all
local models or sub-global models across different domains divided based on
heterogeneity. Experiments on four open datasets reveal that the proposed
methods significantly outperform five existing models, indicating that they
help fully unleash the potential of global models on heterogeneous
datasets.
|
2502.08603
|
Scalable Thermodynamic Second-order Optimization
|
cs.ET cs.LG
|
Many hardware proposals have aimed to accelerate inference in AI workloads.
Less attention has been paid to hardware acceleration of training, despite the
enormous societal impact of rapid training of AI models. Physics-based
computers, such as thermodynamic computers, offer an efficient means to solve
key primitives in AI training algorithms. Optimizers that normally would be
computationally out-of-reach (e.g., due to expensive matrix inversions) on
digital hardware could be unlocked with physics-based hardware. In this work,
we propose a scalable algorithm for employing thermodynamic computers to
accelerate a popular second-order optimizer called Kronecker-factored
approximate curvature (K-FAC). Our asymptotic complexity analysis predicts
increasing advantage with our algorithm as $n$, the number of neurons per
layer, increases. Numerical experiments show that even under significant
quantization noise, the benefits of second-order optimization can be preserved.
Finally, we predict substantial speedups for large-scale vision and graph
problems based on realistic hardware characteristics.
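K-FAC, the optimizer the proposal accelerates, reduces each layer's Fisher inverse to two small matrix solves. A minimal NumPy sketch of one preconditioning step (the thermodynamic hardware would replace the linear solves; shapes and damping value below are illustrative):

```python
import numpy as np

def kfac_update(grad_w, acts, grads_out, damping=1e-3):
    """K-FAC preconditioned gradient for one dense layer (sketch).

    The layer's Fisher block is approximated as A kron G, with
    A = E[a a^T] over input activations and G = E[g g^T] over
    backpropagated output gradients. Inverting the Kronecker product
    reduces to two small solves: F^{-1} vec(dW) = vec(G^{-1} dW A^{-1}).
    """
    n = acts.shape[0]
    A = acts.T @ acts / n + damping * np.eye(acts.shape[1])
    G = grads_out.T @ grads_out / n + damping * np.eye(grads_out.shape[1])
    # grad_w has shape (out_dim, in_dim)
    return np.linalg.solve(G, grad_w) @ np.linalg.inv(A)

rng = np.random.default_rng(0)
acts = rng.standard_normal((64, 8))       # batch of layer inputs
grads_out = rng.standard_normal((64, 4))  # batch of output gradients
grad_w = rng.standard_normal((4, 8))
precond = kfac_update(grad_w, acts, grads_out)
```

The per-layer cost is two n-by-n inversions rather than one inversion of the full (n^2)-sized Fisher block, which is the structure the asymptotic analysis exploits.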
|
2502.08605
|
CurvGAD: Leveraging Curvature for Enhanced Graph Anomaly Detection
|
cs.LG cs.AI
|
Does the intrinsic curvature of complex networks hold the key to unveiling
graph anomalies that conventional approaches overlook? Reconstruction-based
graph anomaly detection (GAD) methods overlook such geometric outliers,
focusing only on structural and attribute-level anomalies. To this end, we
propose CurvGAD - a mixed-curvature graph autoencoder that introduces the
notion of curvature-based geometric anomalies. CurvGAD introduces two parallel
pipelines for enhanced anomaly interpretability: (1) Curvature-equivariant
geometry reconstruction, which focuses exclusively on reconstructing the edge
curvatures using a mixed-curvature, Riemannian encoder and Gaussian
kernel-based decoder; and (2) Curvature-invariant structure and attribute
reconstruction, which decouples structural and attribute anomalies from
geometric irregularities by regularizing graph curvature under discrete
Ollivier-Ricci flow, thereby isolating the non-geometric anomalies. By
leveraging curvature, CurvGAD refines the existing anomaly classifications and
identifies new curvature-driven anomalies. Extensive experimentation over 10
real-world datasets (both homophilic and heterophilic) demonstrates an
improvement of up to 6.5% over state-of-the-art GAD methods.
|
2502.08606
|
Distillation Scaling Laws
|
cs.LG cs.AI cs.CL stat.ML
|
We provide a distillation scaling law that estimates distilled model
performance based on a compute budget and its allocation between the student
and teacher. Our findings reduce the risks associated with using distillation
at scale; compute allocation for both the teacher and student models can now be
done to maximize student performance. We provide compute optimal distillation
recipes for when 1) a teacher exists, or 2) a teacher needs training. If many
students are to be distilled, or a teacher already exists, distillation
outperforms supervised pretraining until a compute level which grows
predictably with student size. If one student is to be distilled and a teacher
also needs training, supervised learning should be done instead. Additionally,
we provide insights across our large scale study of distillation, which
increase our understanding of distillation and inform experimental design.
|
2502.08610
|
Quantifying Security Vulnerabilities: A Metric-Driven Security Analysis
of Gaps in Current AI Standards
|
cs.CR cs.AI
|
As AI systems integrate into critical infrastructure, security gaps in AI
compliance frameworks demand urgent attention. This paper audits and quantifies
security risks in three major AI governance standards: NIST AI RMF 1.0, UK's AI
and Data Protection Risk Toolkit, and the EU's ALTAI. Using a novel risk
assessment methodology, we develop four key metrics: Risk Severity Index (RSI),
Attack Potential Index (AVPI), Compliance-Security Gap Percentage (CSGP), and
Root Cause Vulnerability Score (RCVS). Our analysis identifies 136 concerns
across the frameworks, exposing significant gaps. NIST fails to address 69.23
percent of identified risks, ALTAI has the highest attack vector vulnerability
(AVPI = 0.51) and the ICO Toolkit has the largest compliance-security gap, with
80.00 percent of high-risk concerns remaining unresolved. Root cause analysis
highlights under-defined processes (ALTAI RCVS = 0.33) and weak implementation
guidance (NIST and ICO RCVS = 0.25) as critical weaknesses. These findings
emphasize the need for stronger, enforceable security controls in AI
compliance. We offer targeted recommendations to enhance security posture and
bridge the gap between compliance and real-world AI risks.
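The headline figures suggest a simple form for the Compliance-Security Gap Percentage, though the abstract does not give the paper's exact formula; a hedged sketch with illustrative counts:

```python
def csgp(unresolved_high_risk, total_high_risk):
    """Compliance-Security Gap Percentage: share of high-risk concerns a
    framework leaves unresolved.

    Definition inferred from the abstract's usage; the paper's exact
    formula may differ.
    """
    return 100.0 * unresolved_high_risk / total_high_risk

# The ICO Toolkit's reported 80.00 percent would correspond to, e.g.,
# 4 of 5 high-risk concerns unresolved (counts here are illustrative,
# not the paper's).
gap = csgp(4, 5)
```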
|
2502.08611
|
Robustly Learning Monotone Generalized Linear Models via Data
Augmentation
|
cs.LG math.OC math.ST stat.TH
|
We study the task of learning Generalized Linear models (GLMs) in the
agnostic model under the Gaussian distribution. We give the first
polynomial-time algorithm that achieves a constant-factor approximation for
\textit{any} monotone Lipschitz activation. Prior constant-factor GLM learners
succeed for a substantially smaller class of activations. Our work resolves a
well-known open problem, by developing a robust counterpart to the classical
GLMtron algorithm (Kakade et al., 2011). Our robust learner applies more
generally, encompassing all monotone activations with bounded
$(2+\zeta)$-moments, for any fixed $\zeta>0$ -- a condition that is essentially
necessary. To obtain our results, we leverage a novel data augmentation
technique with decreasing Gaussian noise injection and prove a number of
structural results that may be useful in other settings.
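For context, the classical GLMtron of Kakade et al. (2011), which the paper robustifies, is a short parameter-free iteration; a sketch on synthetic realizable data (the ReLU activation and data distribution are illustrative choices, not the paper's setup):

```python
import numpy as np

def glmtron(X, y, u, iters=100):
    """Classical GLMtron (Kakade et al., 2011) -- the non-robust baseline.

    u is a monotone 1-Lipschitz activation; the update is an averaged
    perceptron-style step with no step-size tuning.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        preds = u(X @ w)
        w = w + (X.T @ (y - preds)) / n
    return w

rng = np.random.default_rng(1)
relu = lambda z: np.maximum(z, 0.0)        # monotone, 1-Lipschitz
X = rng.standard_normal((2000, 5))         # Gaussian marginals, as in the paper
w_star = np.array([1.0, -0.5, 0.3, 0.0, 0.2])
y = relu(X @ w_star)                       # realizable, noiseless labels
w_hat = glmtron(X, y, relu)
```

In the agnostic setting this plain iteration breaks down; the paper's contribution is a robust counterpart (via data augmentation with decreasing Gaussian noise injection) that retains a constant-factor guarantee.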
|
2502.08612
|
Continuous Cardiac Arrest Prediction in ICU using PPG Foundation Model
|
cs.LG
|
Non-invasive patient monitoring for tracking and predicting adverse acute
health events is an emerging area of research. We pursue in-hospital cardiac
arrest (IHCA) prediction using only single-channel finger photoplethysmography
(PPG) signals. Our proposed two-stage model Feature Extractor-Aggregator
Network (FEAN) leverages powerful representations from pre-trained PPG
foundation models (PPG-GPT of size up to 1 Billion) stacked with sequential
classification models. We propose two FEAN variants ("1H", "FH") which use the
latest one-hour and (max) 24-hour history to make decisions respectively. Our
study is the first to present IHCA prediction results in ICU patients using
only unimodal (continuous PPG signal) waveform deep representations. With our
best model, we obtain an average AUROC of 0.79 over a 24-hour prediction
window before cardiac arrest (CA) onset, with performance peaking at 0.82 one
hour before CA. We also provide a comprehensive analysis of our model through
architectural tuning and PaCMAP visualization of patient health trajectory in
latent space.
|
2502.08620
|
Mathematical Data Science
|
math.HO cs.LG math.CO math.NT math.RT
|
Can machine learning help discover new mathematical structures? In this
article we discuss an approach to doing this which one can call "mathematical
data science". In this paradigm, one studies mathematical objects collectively
rather than individually, by creating datasets and doing machine learning
experiments and interpretations. After an overview, we present two case
studies: murmurations in number theory and loadings of partitions related to
Kronecker coefficients in representation theory and combinatorics.
|
2502.08622
|
Forecasting Drought Using Machine Learning in California
|
cs.LG
|
Drought is a frequent and costly natural disaster in California, with major
negative impacts on agricultural production and water resource availability,
particularly groundwater. This study investigated the performance of applying
different machine learning approaches to predicting the U.S. Drought Monitor
classification in California. Four approaches were used: a convolutional neural
network (CNN), random forest, XGBoost, and long short term memory (LSTM)
recurrent neural network, and compared to a baseline persistence model. We
evaluated the models' performance in predicting severe drought (USDM drought
category D2 or higher) using a macro F1 binary classification metric. The LSTM
model emerged as the top performer, followed by XGBoost, CNN, and random
forest. Further evaluation of our results at the county level suggested that
the LSTM model would perform best in counties with more consistent drought
patterns and where severe drought was more common, and worse where drought
scores increased rapidly. Utilizing 30 weeks of
historical data, the LSTM model successfully forecasted drought scores for a
12-week period with a Mean Absolute Error (MAE) of 0.33, equivalent to less
than half a drought category on a scale of 0 to 5. Additionally, the LSTM
achieved a macro F1 score of 0.9, indicating high accuracy in binary
classification for severe drought conditions. Evaluation of different window
and future horizon sizes in weeks suggested that at least 24 weeks of data
would result in the best performance, with best performance for shorter horizon
sizes, particularly less than eight weeks.
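The macro F1 metric used for the severe-drought binary task averages the per-class F1 scores; a minimal implementation with toy weekly labels (the label sequences below are invented for illustration):

```python
def macro_f1_binary(y_true, y_pred):
    """Macro F1 for a binary task: average the F1 of the positive class
    (severe drought, USDM category D2+) and of the negative class."""
    def f1_for(label):
        tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
        fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
        fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
        if tp == 0:
            return 0.0
        prec, rec = tp / (tp + fp), tp / (tp + fn)
        return 2 * prec * rec / (prec + rec)
    return 0.5 * (f1_for(1) + f1_for(0))

# Toy example: weekly severe-drought flags (1 = USDM D2+) vs. predictions
y_true = [0, 0, 1, 1, 1, 0, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0, 1, 0]
score = macro_f1_binary(y_true, y_pred)
```

Averaging over both classes keeps the metric honest when severe-drought weeks are rare, unlike plain accuracy.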
|
2502.08623
|
Robot Data Curation with Mutual Information Estimators
|
cs.RO
|
The performance of imitation learning policies often hinges on the datasets
with which they are trained. Consequently, investment in data collection for
robotics has grown across both industrial and academic labs. However, despite
the marked increase in the quantity of demonstrations collected, little work
has sought to assess the quality of said data despite mounting evidence of its
importance in other areas such as vision and language. In this work, we take a
critical step towards addressing the data quality in robotics. Given a dataset
of demonstrations, we aim to estimate the relative quality of individual
demonstrations in terms of both state diversity and action predictability. To
do so, we estimate the average contribution of a trajectory towards the mutual
information between states and actions in the entire dataset, which precisely
captures both the entropy of the state distribution and the state-conditioned
entropy of actions. Though commonly used mutual information estimators require
vast amounts of data often beyond the scale available in robotics, we introduce
a novel technique based on k-nearest neighbor estimates of mutual information
on top of simple VAE embeddings of states and actions. Empirically, we
demonstrate that our approach is able to partition demonstration datasets by
quality according to human expert scores across a diverse set of benchmarks
spanning simulation and real world environments. Moreover, training policies
based on data filtered by our method leads to a 5-10% improvement in RoboMimic
and better performance on real ALOHA and Franka setups.
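The k-nearest-neighbor mutual information estimate at the core of the method can be sketched with the standard Kraskov-Stoegbauer-Grassberger (KSG) estimator (a brute-force O(N^2) version; the paper applies it to VAE embeddings of states and actions, here replaced by synthetic Gaussians):

```python
import numpy as np

def _digamma(x):
    # digamma via recurrence + asymptotic series (assumed adequate here)
    r = 0.0
    while x < 6:
        r -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return r + np.log(x) - 0.5 / x - inv2 * (1/12 - inv2 * (1/120 - inv2 / 252))

def ksg_mutual_information(X, Y, k=3):
    """KSG k-NN estimate of I(X; Y) in nats (brute-force sketch)."""
    n = len(X)
    dx = np.max(np.abs(X[:, None, :] - X[None, :, :]), axis=-1)  # max-norm
    dy = np.max(np.abs(Y[:, None, :] - Y[None, :, :]), axis=-1)
    dj = np.maximum(dx, dy)                                      # joint distance
    np.fill_diagonal(dj, np.inf)
    eps = np.sort(dj, axis=1)[:, k - 1]                          # kth-NN radius
    nx = np.sum(dx < eps[:, None], axis=1) - 1                   # exclude self
    ny = np.sum(dy < eps[:, None], axis=1) - 1
    return (_digamma(k) + _digamma(n)
            - np.mean([_digamma(a + 1) + _digamma(b + 1) for a, b in zip(nx, ny)]))

rng = np.random.default_rng(0)
x = rng.standard_normal((500, 1))
y_dep = x + 0.1 * rng.standard_normal((500, 1))   # strongly dependent
y_ind = rng.standard_normal((500, 1))             # independent
mi_dep = ksg_mutual_information(x, y_dep)
mi_ind = ksg_mutual_information(x, y_ind)
```

Running the estimator in a low-dimensional learned embedding space, as the paper proposes, is what keeps the sample requirements within the scale of typical robotics datasets.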
|
2502.08625
|
Randomness of Low-Layer Parameters Determines Confusing Samples in Terms
of Interaction Representations of a DNN
|
cs.LG cs.AI cs.CL cs.CV
|
In this paper, we find that the complexity of interactions encoded by a deep
neural network (DNN) can explain its generalization power. We also discover
that the confusing samples of a DNN, which are represented by non-generalizable
interactions, are determined by its low-layer parameters. In comparison, other
factors, such as high-layer parameters and network architecture, have much less
impact on the composition of confusing samples. Two DNNs with different
low-layer parameters usually have fully different sets of confusing samples,
even though they have similar performance. This finding extends the
understanding of the lottery ticket hypothesis, and well explains distinctive
representation power of different DNNs.
|
2502.08628
|
Concentration Inequalities for the Stochastic Optimization of Unbounded
Objectives with Application to Denoising Score Matching
|
stat.ML cs.LG
|
We derive novel concentration inequalities that bound the statistical error
for a large class of stochastic optimization problems, focusing on the case of
unbounded objective functions. Our derivations utilize the following tools: 1)
A new form of McDiarmid's inequality that is based on sample dependent one
component difference bounds and which leads to a novel uniform law of large
numbers result for unbounded functions. 2) A Rademacher complexity bound for
families of functions that satisfy an appropriate local Lipschitz property. As
an application of these results, we derive statistical error bounds for
denoising score matching (DSM), an application that inherently requires one to
consider unbounded objective functions, even when the data distribution has
bounded support. In addition, our results establish the benefit of sample reuse
in algorithms that employ easily sampled auxiliary random variables in addition
to the training data, e.g., as in DSM, which uses auxiliary Gaussian random
variables.
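The denoising score matching objective with reused auxiliary Gaussian draws, the setting the bounds target, can be written down directly (1D Gaussian data as an illustrative case where the smoothed score is known in closed form):

```python
import numpy as np

def dsm_loss(score_fn, data, sigma, n_noise=8, rng=None):
    """Monte Carlo denoising score matching loss (a sketch).

    DSM target: E_{x, eps} || s(x + sigma*eps) + eps/sigma ||^2.
    Each data point is reused with n_noise auxiliary Gaussian draws --
    the sample-reuse setting the paper's bounds cover.
    """
    rng = rng or np.random.default_rng(0)
    total = 0.0
    for x in data:
        eps = rng.standard_normal((n_noise, x.shape[0]))
        noisy = x[None, :] + sigma * eps
        resid = score_fn(noisy) + eps / sigma
        total += np.mean(np.sum(resid ** 2, axis=1))
    return total / len(data)

# For N(0, 1) data smoothed at noise level sigma, the smoothed density is
# N(0, 1 + sigma^2), whose score is s(x) = -x / (1 + sigma^2).
sigma = 0.5
true_score = lambda xs: -xs / (1.0 + sigma ** 2)
rng = np.random.default_rng(1)
data = rng.standard_normal((200, 1))
loss_true = dsm_loss(true_score, data, sigma, rng=np.random.default_rng(2))
loss_bad = dsm_loss(lambda xs: np.zeros_like(xs), data, sigma, rng=np.random.default_rng(2))
```

Note that the regression target eps/sigma is unbounded even though the data distribution here has light tails, which is exactly why bounds for unbounded objectives are needed.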
|
2502.08631
|
Ensemble based approach to quantifying uncertainty of LLM based
classifications
|
cs.AI
|
The output of Large Language Models (LLMs) is a function of the model's
internal parameters and the input provided in the context window. The
hypothesis presented here is that under a greedy sampling strategy the variance
in the LLM's output is a function of the conceptual certainty embedded in the
model's parametric knowledge, as well as the lexical variance in the input.
Fine-tuning the model reduces the sensitivity of its output to lexical input
variations. This is then applied to a classification problem
and a probabilistic method is proposed for estimating the certainties of the
predicted classes.
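A minimal version of the proposed probabilistic estimate, assuming class certainty is read off vote frequencies over lexical variants of the input (the labels and votes below are hypothetical):

```python
from collections import Counter

def class_certainties(predictions):
    """Estimate per-class certainty from an ensemble of greedy LLM outputs.

    Each element of `predictions` is the label the model returned for one
    lexical variant (paraphrase) of the same input; the empirical vote
    frequency serves as a probability estimate for each class.
    """
    counts = Counter(predictions)
    n = len(predictions)
    return {label: c / n for label, c in counts.items()}

# Hypothetical example: 10 paraphrases of one support ticket, classified
# by the same model under greedy decoding.
votes = ["billing", "billing", "technical", "billing",
         "billing", "technical", "billing", "billing",
         "billing", "billing"]
probs = class_certainties(votes)
```

Under the abstract's hypothesis, a fine-tuned model would concentrate these votes on one class, and the residual spread reflects conceptual uncertainty in the model's parametric knowledge.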
|
2502.08632
|
Necessary and Sufficient Oracles: Toward a Computational Taxonomy For
Reinforcement Learning
|
cs.LG cs.CC
|
Algorithms for reinforcement learning (RL) in large state spaces crucially
rely on supervised learning subroutines to estimate objects such as value
functions or transition probabilities. Since only the simplest supervised
learning problems can be solved provably and efficiently, practical performance
of an RL algorithm depends on which of these supervised learning "oracles" it
assumes access to (and how they are implemented). But which oracles are better
or worse? Is there a minimal oracle?
In this work, we clarify the impact of the choice of supervised learning
oracle on the computational complexity of RL, as quantified by the oracle
strength. First, for the task of reward-free exploration in Block MDPs in the
standard episodic access model -- a ubiquitous setting for RL with function
approximation -- we identify two-context regression as a minimal oracle, i.e.
an oracle that is both necessary and sufficient (under a mild regularity
assumption). Second, we identify one-context regression as a near-minimal
oracle in the stronger reset access model, establishing a provable
computational benefit of resets in the process. Third, we broaden our focus to
Low-Rank MDPs, where we give cryptographic evidence that the analogous oracle
from the Block MDP setting is insufficient.
|
2502.08634
|
Rapid Whole Brain Mesoscale In-vivo MR Imaging using Multi-scale
Implicit Neural Representation
|
eess.IV cs.CV cs.LG
|
Purpose: To develop and validate a novel image reconstruction technique using
implicit neural representations (INR) for multi-view thick-slice acquisitions
while reducing the scan time but maintaining high signal-to-noise ratio (SNR).
Methods: We propose Rotating-view super-resolution (ROVER)-MRI, an unsupervised
neural network-based algorithm designed to reconstruct MRI data from multi-view
thick slices, effectively reducing scan time by 2-fold while maintaining fine
anatomical details. We compare our method to both bicubic interpolation and the
current state-of-the-art regularized least-squares super-resolution
reconstruction (LS-SRR) technique. Validation is performed using ground-truth
ex-vivo monkey brain data, and we demonstrate superior reconstruction quality
across several in-vivo human datasets. Notably, we achieve the reconstruction
of a whole human brain in-vivo T2-weighted image with an unprecedented
180{\mu}m isotropic spatial resolution, accomplished in just 17 minutes of scan
time on a 7T MRI scanner. Results: ROVER-MRI outperformed the LS-SRR method
in terms of reconstruction quality, with 22.4% lower relative error (RE) and
7.5% lower full-width at half maximum (FWHM), indicating better preservation of fine
structural details in nearly half the scan time. Conclusion: ROVER-MRI offers
an efficient and robust approach for mesoscale MR imaging, enabling rapid,
high-resolution whole-brain scans. Its versatility holds great promise for
research applications requiring anatomical details and time-efficient imaging.
|
2502.08636
|
PulseCheck457: A Diagnostic Benchmark for 6D Spatial Reasoning of Large
Multimodal Models
|
cs.CV
|
Although large multimodal models (LMMs) have demonstrated remarkable
capabilities in visual scene interpretation and reasoning, their capacity for
complex and precise 3-dimensional spatial reasoning remains uncertain. Existing
benchmarks focus predominantly on 2D spatial understanding and lack a framework
to comprehensively evaluate 6D spatial reasoning across varying complexities.
To address this limitation, we present PulseCheck457, a scalable and unbiased
synthetic dataset designed with 4 key capabilities for spatial reasoning:
multi-object recognition, 2D location, 3D location, and 3D orientation. We
develop a cascading evaluation structure, constructing 7 question types across
5 difficulty levels that range from basic single object recognition to our new
proposed complex 6D spatial reasoning tasks. We evaluated various large
multimodal models (LMMs) on PulseCheck457, observing a general decline in
performance as task complexity increases, particularly in 3D reasoning and 6D
spatial tasks. To quantify these challenges, we introduce the Relative
Performance Dropping Rate (RPDR), highlighting key weaknesses in 3D reasoning
capabilities. Leveraging the unbiased attribute design of our dataset, we also
uncover prediction biases across different attributes, with similar patterns
observed in real-world image settings.
|
2502.08637
|
Joint Transmit and Pinching Beamforming for PASS: Optimization-Based or
Learning-Based?
|
eess.SP cs.IT cs.LG math.IT
|
A novel pinching antenna system (PASS)-enabled downlink multi-user
multiple-input single-output (MISO) framework is proposed. PASS consists of
multiple waveguides spanning thousands of wavelengths, equipped with numerous
low-cost dielectric particles, named pinching antennas (PAs), that radiate
signals into free space. The positions of PAs can be reconfigured to change
both the large-scale path losses and phases of signals, thus facilitating the
novel pinching beamforming design. A sum rate maximization problem is
formulated, which jointly optimizes the transmit and pinching beamforming to
adaptively achieve constructive signal enhancement and destructive interference
mitigation. To solve this highly coupled and nonconvex problem, both
optimization-based and learning-based methods are proposed. 1) For the
optimization-based method, a majorization-minimization and penalty dual
decomposition (MM-PDD) algorithm is developed, which handles the nonconvex
complex exponential component using a Lipschitz surrogate function and then
invokes PDD for problem decoupling. 2) For the learning-based method, a novel
Karush-Kuhn-Tucker (KKT)-guided dual learning (KDL) approach is proposed, which
enables KKT solutions to be reconstructed in a data-driven manner by learning
dual variables. Following this idea, a KDL-Transformer algorithm is developed,
which captures both inter-PA/inter-user dependencies and
channel-state-information (CSI)-beamforming dependencies by attention
mechanisms. Simulation results demonstrate that: i) The proposed PASS framework
significantly outperforms conventional massive multiple-input multiple-output
(MIMO) systems even with a few PAs. ii) The proposed KDL-Transformer improves
system performance by over 30% compared to the MM-PDD algorithm, while
achieving millisecond-level response on modern GPUs.
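The sum-rate objective being maximized is the standard one for downlink multi-user MISO; a sketch that evaluates it for given channels and beamformers (matched filtering appears only as a baseline; in PASS the channels would additionally depend on the PA positions, which this sketch does not model):

```python
import numpy as np

def sum_rate(H, W, noise_power=1.0):
    """Downlink multi-user MISO sum rate, sum_k log2(1 + SINR_k).

    H: (K, N) user channels; W: (N, K) transmit beamformers, one column
    per user.
    """
    S = np.abs(H @ W) ** 2            # S[k, j]: power of user j's stream at user k
    sig = np.diag(S)                  # intended-signal powers
    interf = S.sum(axis=1) - sig      # inter-user interference
    sinr = sig / (interf + noise_power)
    return np.sum(np.log2(1.0 + sinr))

rng = np.random.default_rng(0)
K, N = 3, 8
H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
W = H.conj().T                        # matched-filter beamforming as a baseline
W /= np.linalg.norm(W)                # unit total transmit power
rate = sum_rate(H, W)
```

Both the MM-PDD and KDL-Transformer approaches search over (transmit beamformers, PA positions) to maximize exactly this kind of objective, trading constructive signal enhancement against interference.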
|
2502.08638
|
Examining Multilingual Embedding Models Cross-Lingually Through
LLM-Generated Adversarial Examples
|
cs.CL
|
The evaluation of cross-lingual semantic search capabilities of models is
often limited to existing datasets from tasks such as information retrieval and
semantic textual similarity. To allow for domain-specific evaluation, we
introduce Cross Lingual Semantic Discrimination (CLSD), a novel cross-lingual
semantic search task that does not require a large evaluation corpus, only
parallel sentences of the language pair of interest within the target domain.
This task focuses on the ability of a model to cross-lingually rank the true
parallel sentence higher than challenging distractors generated by a large
language model. We create a case study of our introduced CLSD task for the
language pair German-French in the news domain. Within this case study, we find
that models that are also fine-tuned for retrieval tasks benefit from pivoting
through English, while bitext mining models perform best directly
cross-lingually. A fine-grained similarity analysis enabled by our distractor
generation strategy indicates that different embedding models are sensitive to
different types of perturbations.
|
2502.08639
|
CineMaster: A 3D-Aware and Controllable Framework for Cinematic
Text-to-Video Generation
|
cs.CV
|
In this work, we present CineMaster, a novel framework for 3D-aware and
controllable text-to-video generation. Our goal is to empower users with
comparable controllability as professional film directors: precise placement of
objects within the scene, flexible manipulation of both objects and camera in
3D space, and intuitive layout control over the rendered frames. To achieve
this, CineMaster operates in two stages. In the first stage, we design an
interactive workflow that allows users to intuitively construct 3D-aware
conditional signals by positioning object bounding boxes and defining camera
movements within the 3D space. In the second stage, these control
signals--comprising rendered depth maps, camera trajectories and object class
labels--serve as guidance for a text-to-video diffusion model, ensuring
generation of the user-intended video content. Furthermore, to overcome the scarcity
of in-the-wild datasets with 3D object motion and camera pose annotations, we
carefully establish an automated data annotation pipeline that extracts 3D
bounding boxes and camera trajectories from large-scale video data. Extensive
qualitative and quantitative experiments demonstrate that CineMaster
significantly outperforms existing methods and implements prominent 3D-aware
text-to-video generation. Project page: https://cinemaster-dev.github.io/.
|
2502.08640
|
Utility Engineering: Analyzing and Controlling Emergent Value Systems in
AIs
|
cs.LG cs.AI cs.CL cs.CV cs.CY
|
As AIs rapidly advance and become more agentic, the risk they pose is
governed not only by their capabilities but increasingly by their propensities,
including goals and values. Tracking the emergence of goals and values has
proven a longstanding problem, and despite much interest over the years it
remains unclear whether current AIs have meaningful values. We propose a
solution to this problem, leveraging the framework of utility functions to
study the internal coherence of AI preferences. Surprisingly, we find that
independently-sampled preferences in current LLMs exhibit high degrees of
structural coherence, and moreover that this emerges with scale. These findings
suggest that value systems emerge in LLMs in a meaningful sense, a finding with
broad implications. To study these emergent value systems, we propose utility
engineering as a research agenda, comprising both the analysis and control of
AI utilities. We uncover problematic and often shocking values in LLM
assistants despite existing control measures. These include cases where AIs
value themselves over humans and are anti-aligned with specific individuals. To
constrain these emergent value systems, we propose methods of utility control.
As a case study, we show how aligning utilities with a citizen assembly reduces
political biases and generalizes to new scenarios. Whether we like it or not,
value systems have already emerged in AIs, and much work remains to fully
understand and control these emergent representations.
|
2502.08642
|
SwiftSketch: A Diffusion Model for Image-to-Vector Sketch Generation
|
cs.CV
|
Recent advancements in large vision-language models have enabled highly
expressive and diverse vector sketch generation. However, state-of-the-art
methods rely on a time-consuming optimization process involving repeated
feedback from a pretrained model to determine stroke placement. Consequently,
despite producing impressive sketches, these methods are limited in practical
applications. In this work, we introduce SwiftSketch, a diffusion model for
image-conditioned vector sketch generation that can produce high-quality
sketches in less than a second. SwiftSketch operates by progressively denoising
stroke control points sampled from a Gaussian distribution. Its
transformer-decoder architecture is designed to effectively handle the discrete
nature of vector representation and capture the inherent global dependencies
between strokes. To train SwiftSketch, we construct a synthetic dataset of
image-sketch pairs, addressing the limitations of existing sketch datasets,
which are often created by non-artists and lack professional quality. For
generating these synthetic sketches, we introduce ControlSketch, a method that
enhances SDS-based techniques by incorporating precise spatial control through
a depth-aware ControlNet. We demonstrate that SwiftSketch generalizes across
diverse concepts, efficiently producing sketches that combine high fidelity
with a natural and visually appealing style.
|
2502.08643
|
A Real-to-Sim-to-Real Approach to Robotic Manipulation with
VLM-Generated Iterative Keypoint Rewards
|
cs.RO cs.AI cs.CV
|
Task specification for robotic manipulation in open-world environments is
challenging, requiring flexible and adaptive objectives that align with human
intentions and can evolve through iterative feedback. We introduce Iterative
Keypoint Reward (IKER), a visually grounded, Python-based reward function that
serves as a dynamic task specification. Our framework leverages VLMs to
generate and refine these reward functions for multi-step manipulation tasks.
Given RGB-D observations and free-form language instructions, we sample
keypoints in the scene and generate a reward function conditioned on these
keypoints. IKER operates on the spatial relationships between keypoints,
leveraging commonsense priors about the desired behaviors, and enabling precise
SE(3) control. We reconstruct real-world scenes in simulation and use the
generated rewards to train reinforcement learning (RL) policies, which are then
deployed into the real world-forming a real-to-sim-to-real loop. Our approach
demonstrates notable capabilities across diverse scenarios, including both
prehensile and non-prehensile tasks, showcasing multi-step task execution,
spontaneous error recovery, and on-the-fly strategy adjustments. The results
highlight IKER's effectiveness in enabling robots to perform multi-step tasks
in dynamic environments through iterative reward shaping.
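The abstract describes IKER as a Python-based reward function defined over spatial relationships between sampled keypoints. A minimal toy sketch of what such a VLM-generated reward might look like; the keypoint names, thresholds, and weights here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def iker_reward(keypoints: dict[str, np.ndarray]) -> float:
    """Toy IKER-style reward: pull the gripper keypoint toward the handle
    keypoint, then grant a bonus once the handle is lifted above a target
    height. All names and constants are hypothetical."""
    gripper = keypoints["gripper"]
    handle = keypoints["drawer_handle"]
    reach = -np.linalg.norm(gripper - handle)  # dense reaching term
    lifted = float(handle[2] > 0.15)           # sparse success bonus
    return reach + 10.0 * lifted
```

In the described framework, a VLM would generate and iteratively refine functions of this shape from RGB-D observations and language instructions, rather than a human writing them by hand.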
|
2502.08644
|
Rhythmic sharing: A bio-inspired paradigm for zero-shot adaptation and
learning in neural networks
|
cs.LG cs.AI math.DS nlin.AO physics.bio-ph
|
The brain can rapidly adapt to new contexts and learn from limited data, a
coveted characteristic that artificial intelligence algorithms have struggled
to mimic. Inspired by oscillatory rhythms of the mechanical structures of
neural cells, we developed a learning paradigm that is based on oscillations in
link strengths and associates learning with the coordination of these
oscillations. We find that this paradigm yields rapid adaptation and learning
in artificial neural networks. Link oscillations can rapidly change
coordination, endowing the network with the ability to sense subtle context
changes in an unsupervised manner. In other words, the network generates the
missing contextual tokens required to perform as a generalist AI architecture
capable of predicting dynamics in multiple contexts. Oscillations also allow
the network to extrapolate dynamics to never-seen-before contexts. These
capabilities make our learning paradigm a powerful starting point for novel
models of learning and cognition. Furthermore, learning through link
coordination is agnostic to the specifics of the neural network architecture,
hence our study opens the door for introducing rapid adaptation and learning
capabilities into leading AI models.
|
2502.08645
|
Re$^3$Sim: Generating High-Fidelity Simulation Data via
3D-Photorealistic Real-to-Sim for Robotic Manipulation
|
cs.RO
|
Real-world data collection for robotics is costly and resource-intensive,
requiring skilled operators and expensive hardware. Simulations offer a
scalable alternative but often fail to achieve sim-to-real generalization due
to geometric and visual gaps. To address these challenges, we propose
RE$^3$SIM, a 3D-photorealistic real-to-sim system that closes both the
geometric and the visual sim-to-real gap. RE$^3$SIM employs advanced 3D reconstruction and
neural rendering techniques to faithfully recreate real-world scenarios,
enabling real-time rendering of simulated cross-view cameras within a
physics-based simulator. By utilizing privileged information to collect expert
demonstrations efficiently in simulation and training robot policies with
imitation learning, we validate the effectiveness of the real-to-sim-to-real
pipeline across various manipulation task scenarios. Notably, with only
simulated data, we can achieve zero-shot sim-to-real transfer with an average
success rate exceeding 58%. To push the limit of real-to-sim, we further
generate a large-scale simulation dataset, demonstrating how a robust policy
can be built from simulation data that generalizes across various objects.
Codes and demos are available at: http://xshenhan.github.io/Re3Sim/.
|
2502.08646
|
Poly-Autoregressive Prediction for Modeling Interactions
|
cs.CV
|
We introduce a simple framework for predicting the behavior of an agent in
multi-agent settings. In contrast to autoregressive (AR) tasks, such as
language processing, our focus is on scenarios with multiple agents whose
interactions are shaped by physical constraints and internal motivations. To
this end, we propose Poly-Autoregressive (PAR) modeling, which forecasts an ego
agent's future behavior by reasoning about the ego agent's state history and
the past and current states of other interacting agents. At its core, PAR
represents the behavior of all agents as a sequence of tokens, each
representing an agent's state at a specific timestep. With minimal data
pre-processing changes, we show that PAR can be applied to three different
problems: human action forecasting in social situations, trajectory prediction
for autonomous vehicles, and object pose forecasting during hand-object
interaction. Using a small proof-of-concept transformer backbone, PAR
outperforms AR across these three scenarios. The project website can be found
at https://neerja.me/PAR/.
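At its core, PAR represents all agents' behavior as a sequence of tokens, one per agent per timestep. A minimal sketch of how such a sequence might be assembled from raw states; the interleaving order (time-major, then agent) is an assumption for illustration, not specified in the abstract:

```python
import numpy as np

def build_par_sequence(states: np.ndarray) -> list[tuple[int, int, np.ndarray]]:
    """Flatten multi-agent states of shape (T, A, D) into a token sequence.
    Each token is (timestep, agent_id, state_vector); a transformer
    conditioned on this sequence would forecast the ego agent's next state."""
    T, A, _ = states.shape
    return [(t, a, states[t, a]) for t in range(T) for a in range(A)]
```

Under this layout, the ego agent's history and the other agents' past and current states all appear in one sequence, so a standard autoregressive backbone can attend across both.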
|