| id | title | categories | abstract |
|---|---|---|---|
2502.02407
|
Avoiding spurious sharpness minimization broadens applicability of SAM
|
cs.LG cs.CL stat.ML
|
Curvature regularization techniques like Sharpness Aware Minimization (SAM)
have shown great promise in improving generalization on vision tasks. However,
we find that SAM performs poorly in domains like natural language processing
(NLP), often degrading performance -- even with twice the compute budget. We
investigate the discrepancy across domains and find that in the NLP setting,
SAM is dominated by regularization of the logit statistics -- instead of
improving the geometry of the function itself. We use this observation to
develop an alternative algorithm we call Functional-SAM, which regularizes
curvature only through modification of the statistics of the overall function
implemented by the neural network, and avoids spurious minimization through
logit manipulation. Furthermore, we argue that preconditioning the SAM
perturbation also prevents spurious minimization, and when combined with
Functional-SAM, it gives further improvements. Our proposed algorithms show
improved performance over AdamW and SAM baselines when trained for an equal
number of steps, in both fixed-length and Chinchilla-style training settings,
at various model scales (including billion-parameter scale). On the whole, our
work highlights the importance of more precise characterizations of sharpness
in broadening the applicability of curvature regularization to large language
models (LLMs).
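As background for the curvature regularization this abstract discusses, the standard SAM update can be sketched on a scalar toy loss. This is a minimal illustration of the generic perturb-then-descend structure only, not the paper's Functional-SAM variant, and the toy loss and hyperparameters are arbitrary:

```python
def grad(w):
    # Gradient of the toy loss L(w) = (w - 3)^2.
    return 2.0 * (w - 3.0)

def sam_step(w, lr=0.1, rho=0.05):
    # Ascent step: perturb the weight toward the locally sharpest nearby point.
    g = grad(w)
    eps = rho * g / (abs(g) + 1e-12)
    # Descent step: apply the gradient evaluated at the perturbed point.
    return w - lr * grad(w + eps)

w = 0.0
for _ in range(100):
    w = sam_step(w)
# w settles near the minimizer at 3, within roughly the perturbation radius.
```

Functional-SAM, per the abstract, restricts the perturbation's effect to the statistics of the overall function rather than the logits; that modification is not shown here.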
|
2502.02409
|
Extending SEEDS to a Supervoxel Algorithm for Medical Image Analysis
|
cs.CV
|
In this work, we extend the SEEDS superpixel algorithm from 2D images to 3D
volumes, resulting in 3D SEEDS, a faster, better, and open-source supervoxel
algorithm for medical image analysis. We compare 3D SEEDS with the widely used
supervoxel algorithm SLIC on 13 segmentation tasks across 10 organs. 3D SEEDS
accelerates supervoxel generation by a factor of 10, improves the achievable
Dice score by 6.5%, and reduces the under-segmentation error by 0.16%. The
code is available at https://github.com/Zch0414/3d_seeds
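The Dice score reported above is a standard overlap metric; a minimal version for two binary masks (flat lists here, 3D arrays in a real supervoxel pipeline) can be sketched as:

```python
def dice(a, b):
    """Dice coefficient between two equal-length binary masks.
    Assumes at least one mask contains foreground."""
    inter = sum(x * y for x, y in zip(a, b))
    return 2.0 * inter / (sum(a) + sum(b))

score = dice([1, 1, 0, 0], [1, 0, 0, 1])  # overlap 1, sizes 2 and 2 -> 0.5
```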
|
2502.02410
|
Privacy Amplification by Structured Subsampling for Deep Differentially
Private Time Series Forecasting
|
cs.LG cs.CR stat.ML
|
Many forms of sensitive data, such as web traffic, mobility data, or hospital
occupancy, are inherently sequential. The standard method for training machine
learning models while ensuring privacy for units of sensitive information, such
as individual hospital visits, is differentially private stochastic gradient
descent (DP-SGD). However, we observe in this work that the formal guarantees
of DP-SGD are incompatible with time-series-specific tasks like forecasting,
since they rely on the privacy amplification attained by training on small,
unstructured batches sampled from an unstructured dataset. In contrast, batches
for forecasting are generated by (1) sampling sequentially structured time
series from a dataset, (2) sampling contiguous subsequences from these series,
and (3) partitioning them into context and ground-truth forecast windows. We
theoretically analyze the privacy amplification attained by this structured
subsampling to enable the training of forecasting models with sound and tight
event- and user-level privacy guarantees. Towards more private models, we
additionally prove how data augmentation amplifies privacy in self-supervised
training of sequence models. Our empirical evaluation demonstrates that
amplification by structured subsampling enables the training of forecasting
models with strong formal privacy guarantees.
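The three-step batch construction described above can be sketched as follows; function and variable names are illustrative, and the privacy-amplification analysis itself is not reproduced:

```python
import random

def sample_forecast_batch(dataset, batch_size, context_len, forecast_len, rng):
    """Build a forecasting batch via structured subsampling."""
    window = context_len + forecast_len
    batch = []
    for _ in range(batch_size):
        series = rng.choice(dataset)                     # (1) sample a series
        start = rng.randrange(len(series) - window + 1)  # (2) contiguous subsequence
        sub = series[start:start + window]
        batch.append((sub[:context_len], sub[context_len:]))  # (3) context / forecast split
    return batch

rng = random.Random(0)
data = [list(range(20)), list(range(100, 130))]
batch = sample_forecast_batch(data, batch_size=4, context_len=8, forecast_len=2, rng=rng)
```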
|
2502.02414
|
Transolver++: An Accurate Neural Solver for PDEs on Million-Scale
Geometries
|
cs.LG
|
Although deep models have been widely explored in solving partial
differential equations (PDEs), previous works are primarily limited to data
with only up to tens of thousands of mesh points, far from the million-point
scale required by industrial simulations that involve complex geometries. In
the spirit of advancing neural PDE solvers to real industrial applications, we
present Transolver++, a highly parallel and efficient neural solver that can
accurately solve PDEs on million-scale geometries. Building upon previous
advancements in solving PDEs by learning physical states via Transolver,
Transolver++ is further equipped with an extremely optimized parallelism
framework and a local adaptive mechanism to efficiently capture eidetic
physical states from massive mesh points, successfully tackling the thorny
challenges in computation and physics learning when scaling up input mesh size.
Transolver++ increases the single-GPU input capacity to million-scale points
for the first time and can scale the input size continuously with linear
complexity by adding GPUs. Experimentally, Transolver++ yields a 13% relative
improvement across six standard PDE benchmarks and achieves over 20% performance
gain in million-scale high-fidelity industrial simulations, whose sizes are
100× larger than previous benchmarks, covering car and 3D aircraft
designs.
|
2502.02415
|
Towards Fast Graph Generation via Autoregressive Noisy Filtration
Modeling
|
cs.LG
|
Graph generative models often face a critical trade-off between learning
complex distributions and achieving fast generation speed. We introduce
Autoregressive Noisy Filtration Modeling (ANFM), a novel approach that
addresses both challenges. ANFM leverages filtration, a concept from
topological data analysis, to transform graphs into short sequences of
monotonically increasing subgraphs. This formulation extends the sequence
families used in previous autoregressive models. To learn from these sequences,
we propose a novel autoregressive graph mixer model. Our experiments suggest
that exposure bias might represent a substantial hurdle in autoregressive graph
generation and we introduce two mitigation strategies to address it: noise
augmentation and a reinforcement learning approach. Incorporating these
techniques leads to substantial performance gains, making ANFM competitive with
state-of-the-art diffusion models across diverse synthetic and real-world
datasets. Notably, ANFM produces remarkably short sequences, achieving a
100-fold speedup in generation time compared to diffusion models. This work
marks a significant step toward high-throughput graph generation.
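For intuition, a filtration in the sense used above is a nested sequence of subgraphs indexed by a growing threshold. A minimal sketch with an assumed scalar edge weight (ANFM's learned filtrations and graph mixer model are not reproduced):

```python
def filtration(weighted_edges, thresholds):
    """Return the nested subgraph sequence G_t = {e : weight(e) <= t}.
    Thresholds are assumed to be increasing, so the edge sets are nested."""
    return [[e for e, w in weighted_edges if w <= t] for t in thresholds]

edges = [(("a", "b"), 0.2), (("b", "c"), 0.5), (("a", "c"), 0.9)]
seq = filtration(edges, [0.3, 0.6, 1.0])
# seq is a monotonically increasing sequence of subgraphs ending at the full graph.
```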
|
2502.02417
|
CVKAN: Complex-Valued Kolmogorov-Arnold Networks
|
cs.LG
|
In this work we propose CKAN, a complex-valued KAN, to join the intrinsic
interpretability of KANs and the advantages of Complex-Valued Neural Networks
(CVNNs). We show how to transfer a KAN and the necessary associated mechanisms
into the complex domain. To confirm that CKAN meets expectations we conduct
experiments on symbolic complex-valued function fitting and physically
meaningful formulae as well as on a more realistic dataset from knot theory.
Our proposed CKAN is more stable and performs on par with or better than
real-valued KANs while requiring fewer parameters and a shallower network
architecture,
making it more explainable.
|
2502.02421
|
Activation-Informed Merging of Large Language Models
|
cs.CL cs.AI
|
Model merging, a method that combines the parameters and embeddings of
multiple fine-tuned large language models (LLMs), offers a promising approach
to enhance model performance across various tasks while maintaining
computational efficiency. This paper introduces Activation-Informed Merging
(AIM), a technique that integrates the information from the activation space of
LLMs into the merging process to improve performance and robustness. AIM is
designed as a flexible, complementary solution that is applicable to any
existing merging method. It aims to preserve critical weights from the base
model, drawing on principles from continual learning~(CL) and model
compression. Utilizing a task-agnostic calibration set, AIM selectively
prioritizes essential weights during merging. We empirically demonstrate that
AIM significantly enhances the performance of merged models across multiple
benchmarks. Our findings suggest that considering the activation-space
information can provide substantial advancements in the model merging
strategies for LLMs, with up to a 40% increase in benchmark performance.
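A heavily simplified sketch of the general idea, weight merging gated by per-weight importance scores, follows. The importance values are assumed inputs here, and AIM's actual activation-based calibration procedure is not reproduced:

```python
def merge(base, tuned_models, importance):
    """Average fine-tuned weights, pulling high-importance weights back
    toward the base model. importance[name] in [0, 1]; 1 = critical."""
    merged = {}
    for name, w_base in base.items():
        avg = sum(m[name] for m in tuned_models) / len(tuned_models)
        alpha = importance[name]
        merged[name] = alpha * w_base + (1 - alpha) * avg
    return merged

base = {"w": 1.0}
tuned = [{"w": 2.0}, {"w": 4.0}]
out = merge(base, tuned, {"w": 0.5})  # 0.5 * 1.0 + 0.5 * 3.0 = 2.0
```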
|
2502.02424
|
Pruning-aware Loss Functions for STOI-Optimized Pruned Recurrent
Autoencoders for the Compression of the Stimulation Patterns of Cochlear
Implants at Zero Delay
|
cs.SD cs.LG eess.AS
|
Cochlear implants (CIs) are surgically implanted hearing devices that can
restore a sense of hearing to people suffering from profound hearing loss.
Wireless streaming of audio from external devices to CI signal processors has
become commonplace. Specialized compression based on the stimulation patterns
of a CI by deep recurrent autoencoders can decrease the power consumption in
such a wireless streaming application through bit-rate reduction at zero
latency.
While previous research achieved considerable bit-rate reductions, model
sizes were ignored, which can be of crucial importance in hearing-aids due to
their limited computational resources. This work investigates maximizing
objective speech intelligibility of the coded stimulation patterns of deep
recurrent autoencoders while minimizing model size. For this purpose, a
pruning-aware loss is proposed, which captures the impact of pruning during
training. This training with a pruning-aware loss is compared to conventional
magnitude-informed pruning and is found to yield considerable improvements in
objective intelligibility, especially at higher pruning rates. After
fine-tuning, little to no degradation of objective intelligibility is observed
up to a pruning rate of about 55%. The proposed pruning-aware loss yields
substantial gains in objective speech intelligibility scores after pruning
compared to the magnitude-informed baseline for pruning rates above 45%.
|
2502.02428
|
TransformDAS: Mapping Φ-OTDR Signals to Riemannian Manifold for
Robust Classification
|
cs.LG
|
Phase-sensitive optical time-domain reflectometry (Φ-OTDR) is a widely
used distributed fiber optic sensing system in engineering. Machine learning
algorithms for Φ-OTDR event classification require high volumes and
quality of datasets; however, high-quality datasets are currently extremely
scarce in the field, leading to a lack of robustness in models, which is
manifested by higher false alarm rates in real-world scenarios. One promising
approach to address this issue is to augment existing data using generative
models combined with a small amount of real-world data. We explored mapping
both Φ-OTDR features in a GAN-based generative pipeline and signal
features in a Transformer classifier to hyperbolic space to seek more effective
model generalization. The results indicate that state-of-the-art models exhibit
stronger generalization performance and lower false alarm rates in real-world
scenarios when trained on augmented datasets. TransformDAS, in particular,
demonstrates the best classification performance, highlighting the benefits of
Riemannian manifold mapping in Φ-OTDR data generation and model
classification.
|
2502.02430
|
A Scalable Crawling Algorithm Utilizing Noisy Change-Indicating Signals
|
stat.ML cs.IR cs.LG
|
Web refresh crawling is the problem of keeping a cache of web pages fresh,
that is, having the most recent copy available when a page is requested, given
a limited bandwidth available to the crawler. Under the assumption that the
change and request events for each web page follow independent Poisson
processes, the optimal scheduling policy was derived by Azar et al. (2018). In
this paper, we study an extension of this problem where side information
indicating content changes, such as various types of web pings (for example,
signals from sitemaps or content delivery networks), is available.
Incorporating such side information into the crawling policy is challenging,
because (i) the signals can be noisy with false positive events and with
missing change events; and (ii) the crawler should achieve a fair performance
over web pages regardless of the quality of the side information, which might
differ from web page to web page. We propose a scalable crawling algorithm
which (i) uses the noisy side information in an optimal way under mild
assumptions; (ii) can be deployed without heavy centralized computation; (iii)
is able to crawl web pages at a constant total rate, without spikes in the total
bandwidth usage over any time interval, and automatically adapts to the new
optimal solution when the total bandwidth changes, without centralized
computation. Experiments clearly demonstrate the versatility of our approach.
|
2502.02431
|
Connections between Schedule-Free Optimizers, AdEMAMix, and Accelerated
SGD Variants
|
cs.LG cs.AI
|
Recent advancements in deep learning optimization have introduced new
algorithms, such as Schedule-Free optimizers, AdEMAMix, MARS and Lion which
modify traditional momentum mechanisms. In a separate line of work, theoretical
acceleration of stochastic gradient descent (SGD) in the noise-dominated regime has
been achieved by decoupling the momentum coefficient from the current
gradient's weight. In this paper, we establish explicit connections between
these two lines of work. We substantiate our theoretical findings with
preliminary experiments on a 150M language modeling task. We find that
AdEMAMix, which most closely resembles accelerated versions of stochastic
gradient descent, exhibits superior performance. Building on these insights, we
introduce a modification to AdEMAMix, termed Simplified-AdEMAMix, which
maintains the same performance as AdEMAMix across both large and small
batch-size settings while eliminating the need for two different momentum
terms. The code for Simplified-AdEMAMix is available on the repository:
https://github.com/DepenM/Simplified-AdEMAMix/.
|
2502.02433
|
A coding theoretic study of homogeneous Markovian predictive games
|
cs.IT cs.GT math.IT math.PR
|
This paper explores a predictive game in which a Forecaster announces odds
based on a time-homogeneous Markov kernel, establishing a game-theoretic law of
large numbers for the relative frequencies of occurrences of all finite
strings. A key feature of our proof is a betting strategy built on a universal
coding scheme, inspired by the martingale convergence theorem and algorithmic
randomness theory, without relying on a diversified betting approach that
involves countably many operating accounts. We apply these insights to
thermodynamics, offering a game-theoretic perspective on Leó Szilárd's
thought experiment.
|
2502.02434
|
mPOLICE: Provable Enforcement of Multi-Region Affine Constraints in Deep
Neural Networks
|
cs.LG
|
Deep neural networks are increasingly employed in fields such as climate
modeling, robotics, and industrial control, where strict output constraints
must be upheld. Although prior methods like the POLICE algorithm can enforce
affine constraints in a single convex region by adjusting network parameters,
they struggle with multiple disjoint regions, often leading to conflicts or
unintended affine extensions. We present mPOLICE, a new method that extends
POLICE to handle constraints imposed on multiple regions. mPOLICE assigns a
distinct activation pattern to each constrained region, preserving exact affine
behavior locally while avoiding overreach into other parts of the input domain.
We formulate a layer-wise optimization problem that adjusts both the weights
and biases to assign unique activation patterns to each convex region, ensuring
that constraints are met without conflicts, while maintaining the continuity
and smoothness of the learned function. Our experiments show the enforcement of
multi-region constraints for multiple scenarios, including regression and
classification, function approximation, and non-convex regions through
approximation. Notably, mPOLICE adds zero inference overhead and minimal
training overhead.
|
2502.02437
|
H-MBR: Hypervisor-level Memory Bandwidth Reservation for Mixed
Criticality Systems
|
cs.DC cs.SY eess.SY
|
Recent advancements in fields such as automotive and aerospace have driven a
growing demand for robust computational resources. Applications that were once
designed for basic MCUs are now deployed on highly heterogeneous SoC platforms.
While these platforms deliver the necessary computational performance, they
also present challenges related to resource sharing and predictability. These
challenges are particularly pronounced when consolidating safety and
non-safety-critical systems, the so-called Mixed-Criticality Systems (MCS), to
adhere to strict SWaP-C requirements. MCS consolidation on shared platforms
requires stringent spatial and temporal isolation to comply with functional
safety standards. Virtualization, mainly leveraged by hypervisors, is a key
technology that ensures spatial isolation across multiple OSes and
applications; however, ensuring temporal isolation remains challenging due to
contention on shared hardware resources, which impacts real-time performance and
predictability. To mitigate this problem, several strategies such as cache
coloring and memory bandwidth reservation have been proposed. Although cache
coloring is
typically implemented on state-of-the-art hypervisors, memory bandwidth
reservation approaches are commonly implemented at the Linux kernel level or
rely on dedicated hardware and typically do not consider the concept of VMs
that can run different OSes. To fill the gap between current memory bandwidth
reservation solutions and the deployment of MCSs that operate on a hypervisor,
this work introduces H-MBR, an open-source VM-centric memory bandwidth
reservation mechanism. H-MBR features (i) VM-centric bandwidth reservation,
(ii) OS and platform agnosticism, and (iii) reduced overhead. Empirical results
evidenced no overhead on non-regulated workloads, and negligible overhead (<1%)
for regulated workloads with regulation periods of 2 µs or higher.
|
2502.02438
|
Medical Multimodal Model Stealing Attacks via Adversarial Domain
Alignment
|
cs.CR cs.AI
|
Medical multimodal large language models (MLLMs) are becoming an instrumental
part of healthcare systems, assisting medical personnel with decision making
and results analysis. Models for radiology report generation are able to
interpret medical imagery, thus reducing the workload of radiologists. As
medical data is scarce and protected by privacy regulations, medical MLLMs
represent valuable intellectual property. However, these assets are potentially
vulnerable to model stealing, where attackers aim to replicate their
functionality via black-box access. So far, model stealing for the medical
domain has focused on classification; however, existing attacks are not
effective against MLLMs. In this paper, we introduce Adversarial Domain
Alignment (ADA-STEAL), the first stealing attack against medical MLLMs.
ADA-STEAL relies on natural images, which are public and widely available, as
opposed to their medical counterparts. We show that data augmentation with
adversarial noise is sufficient to overcome the data distribution gap between
natural images and the domain-specific distribution of the victim MLLM.
Experiments on the IU X-RAY and MIMIC-CXR radiology datasets demonstrate that
Adversarial Domain Alignment enables attackers to steal the medical MLLM
without any access to medical data.
|
2502.02439
|
System Integrity Protection Schemes in the Nordics -- a comparative
analysis
|
eess.SY cs.SY
|
To increase the utilisation rate of the power system and accelerate
electrification while providing a high degree of security and reliability,
System Integrity Protection Schemes (SIPS) are of great importance. SIPS
functions are automatic remedial actions, detecting abnormal conditions or
contingencies in the system and taking control action to mitigate these
conditions. Design, implementation, maintenance and coordination of SIPS are
all important aspects for desired operation. However, different actors have
chosen different approaches to using SIPS for capacity enhancement, and there
are discrepancies in how capacity is valued in relation to for example
complexity, reliability and risk. Additionally, definitions often vary between
countries. This paper reports on a joint survey and interview study on SIPS
with stakeholders and experts in the Nordic countries - including TSOs, DSOs
and industry. Combined with a literature review, a comparison and analysis of
how SIPS are used in the Nordics is performed, particularly in relation to
ENTSO-E capacity allocation.
|
2502.02441
|
LLMER: Crafting Interactive Extended Reality Worlds with JSON Data
Generated by Large Language Models
|
cs.MM cs.AI
|
The integration of Large Language Models (LLMs) like GPT-4 with Extended
Reality (XR) technologies offers the potential to build truly immersive XR
environments that interact with human users through natural language, e.g.,
generating and animating 3D scenes from audio inputs. However, the complexity
of XR environments makes it difficult to accurately extract relevant contextual
data and scene/object parameters from an overwhelming volume of XR artifacts.
This leads not only to increased costs with pay-per-use models, but also to
elevated levels of generation errors. Moreover, existing approaches focusing on coding
script generation are often prone to generation errors, resulting in flawed or
invalid scripts, application crashes, and ultimately a degraded user
experience. To overcome these challenges, we introduce LLMER, a novel framework
that creates interactive XR worlds using JSON data generated by LLMs. Unlike
prior approaches focusing on coding script generation, LLMER translates natural
language inputs into JSON data, significantly reducing the likelihood of
application crashes and processing latency. It employs a multi-stage strategy
to supply only the essential contextual information adapted to the user's
request and features multiple modules designed for various XR tasks. Our
preliminary user study reveals the effectiveness of the proposed system, with
over 80% reduction in consumed tokens and around 60% reduction in task
completion time compared to state-of-the-art approaches. The analysis of users'
feedback also illuminates a series of directions for further optimization.
|
2502.02443
|
A Null Space Compliance Approach for Maintaining Safety and Tracking
Performance in Human-Robot Interactions
|
cs.RO cs.SY eess.SY
|
In recent years, the focus on developing robot manipulators has shifted
towards prioritizing safety in Human-Robot Interaction (HRI). Impedance control
is a typical approach for interaction control in collaboration tasks. However,
such a control approach has two main limitations: 1) the limited compliance of
the end-effector (EE) in adapting to unknown physical interactions, and 2) the
inability of the robot body to compliantly adapt to unknown physical
interactions. In
this work, we present an approach to address these drawbacks. We introduce a
modified Cartesian impedance control method combined with a Dynamical System
(DS)-based motion generator, aimed at enhancing the interaction capability of
the EE without compromising main task tracking performance. This approach
enables human coworkers to interact with the EE on-the-fly, e.g. tool
changeover, after which the robot compliantly resumes its task. Additionally,
combining with a new null space impedance control method enables the robot body
to exhibit compliant behaviour in response to interactions, avoiding serious
injuries from accidental contact while mitigating the impact on main task
tracking performance. Finally, we prove the passivity of the system and
validate the proposed approach through comprehensive comparative experiments on
a 7 Degree-of-Freedom (DOF) KUKA LWR IV+ robot.
|
2502.02444
|
Generative Psycho-Lexical Approach for Constructing Value Systems in
Large Language Models
|
cs.CL cs.AI
|
Values are core drivers of individual and collective perception, cognition,
and behavior. Value systems, such as Schwartz's Theory of Basic Human Values,
delineate the hierarchy and interplay among these values, enabling
cross-disciplinary investigations into decision-making and societal dynamics.
Recently, the rise of Large Language Models (LLMs) has raised concerns
regarding their elusive intrinsic values. Despite growing efforts in
evaluating, understanding, and aligning LLM values, a psychologically grounded
LLM value system remains underexplored. This study addresses the gap by
introducing the Generative Psycho-Lexical Approach (GPLA), a scalable,
adaptable, and theoretically informed method for constructing value systems.
Leveraging GPLA, we propose a psychologically grounded five-factor value system
tailored for LLMs. For systematic validation, we present three benchmarking
tasks that integrate psychological principles with cutting-edge AI priorities.
Our results reveal that the proposed value system meets standard psychological
criteria, better captures LLM values, improves LLM safety prediction, and
enhances LLM alignment, compared to the canonical Schwartz values.
|
2502.02446
|
Towards graph neural networks for provably solving convex optimization
problems
|
cs.AI cs.LG cs.NE
|
Recently, message-passing graph neural networks (MPNNs) have shown potential
for solving combinatorial and continuous optimization problems due to their
ability to capture variable-constraint interactions. While existing approaches
leverage MPNNs to approximate solutions or warm-start traditional solvers, they
often lack guarantees for feasibility, particularly in convex optimization
settings. Here, we propose an iterative MPNN framework to solve convex
optimization problems with provable feasibility guarantees. First, we
demonstrate that MPNNs can provably simulate standard interior-point methods
for solving quadratic problems with linear constraints, covering relevant
problems such as SVMs. Second, to ensure feasibility, we introduce a variant
that starts from a feasible point and iteratively restricts the search within
the feasible region. Experimental results show that our approach outperforms
existing neural baselines in solution quality and feasibility, generalizes well
to unseen problem sizes, and, in some cases, achieves faster solution times
than state-of-the-art solvers such as Gurobi.
|
2502.02448
|
Sparse Data Generation Using Diffusion Models
|
cs.LG
|
Sparse data is ubiquitous, appearing in numerous domains, from economics and
recommender systems to astronomy and biomedical sciences. However, efficiently
and realistically generating sparse data remains a significant challenge. We
introduce Sparse Data Diffusion (SDD), a novel method for generating sparse
data. SDD extends continuous state-space diffusion models by explicitly
modeling sparsity through the introduction of Sparsity Bits. Empirical
validation on image data from various domains, including two scientific
applications (physics and biology), demonstrates that SDD achieves high fidelity
in representing data sparsity while preserving the quality of the generated
data.
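The "Sparsity Bits" idea named above can be illustrated by the encode/decode round trip it enables, under the assumption that each entry carries an explicit zero/non-zero bit alongside its continuous value. This is an illustration only, not SDD's diffusion parameterization:

```python
def encode(x):
    """Pair each entry with a sparsity bit: 0.0 if the entry is exactly zero."""
    return [(0.0 if v == 0 else 1.0, v) for v in x]

def decode(pairs):
    """The bit gates the continuous value back to an exact zero."""
    return [v if b >= 0.5 else 0.0 for b, v in pairs]

x = [0.0, 1.5, 0.0, -2.0]
roundtrip = decode(encode(x))  # recovers exact zeros, not near-zero noise
```

The point of the explicit bit is that a continuous generative model need not place probability mass exactly at zero; the bit channel handles sparsity separately.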
|
2502.02449
|
TUMTraffic-VideoQA: A Benchmark for Unified Spatio-Temporal Video
Understanding in Traffic Scenes
|
cs.CV
|
We present TUMTraffic-VideoQA, a novel dataset and benchmark designed for
spatio-temporal video understanding in complex roadside traffic scenarios. The
dataset comprises 1,000 videos, featuring 85,000 multiple-choice QA pairs,
2,300 object captioning annotations, and 5,700 object grounding annotations, encompassing
diverse real-world conditions such as adverse weather and traffic anomalies. By
incorporating tuple-based spatio-temporal object expressions,
TUMTraffic-VideoQA unifies three essential tasks (multiple-choice video question
answering, referred object captioning, and spatio-temporal object
grounding) within a cohesive evaluation framework. We further introduce the
TUMTraffic-Qwen baseline model, enhanced with visual token sampling strategies,
providing valuable insights into the challenges of fine-grained spatio-temporal
reasoning. Extensive experiments demonstrate the dataset's complexity,
highlight the limitations of existing models, and position TUMTraffic-VideoQA
as a robust foundation for advancing research in intelligent transportation
systems. The dataset and benchmark are publicly available to facilitate further
exploration.
|
2502.02451
|
Beyond English: Evaluating Automated Measurement of Moral Foundations in
Non-English Discourse with a Chinese Case Study
|
cs.CL cs.SI
|
This study explores computational approaches for measuring moral foundations
(MFs) in non-English corpora. Since most resources are developed primarily for
English, cross-linguistic applications of moral foundation theory remain
limited. Using Chinese as a case study, this paper evaluates the effectiveness
of applying English resources to machine-translated text, local-language
lexicons, multilingual language models, and large language models (LLMs) in
measuring MFs in non-English texts. The results indicate that machine
translation and local lexicon approaches are insufficient for complex moral
assessments, frequently resulting in a substantial loss of cultural
information. In contrast, multilingual models and LLMs demonstrate reliable
cross-language performance with transfer learning, with LLMs excelling in terms
of data efficiency. Importantly, this study also underscores the need for
human-in-the-loop validation of automated MF assessment, as the most advanced
models may overlook cultural nuances in cross-language measurements. The
findings highlight the potential of LLMs for cross-language MF measurements and
other complex multilingual deductive coding tasks.
|
2502.02452
|
Personalization Toolkit: Training Free Personalization of Large Vision
Language Models
|
cs.CV
|
Large Vision Language Models (LVLMs) have significant potential to deliver
personalized assistance by adapting to individual users' unique needs and
preferences. Personalization of LVLMs is an emerging area that involves
customizing models to recognize specific object instances and provide tailored
responses. However, existing approaches rely on time-consuming test-time
training for each user and object, rendering them impractical. This paper
proposes a novel, training-free approach to LVLM personalization by leveraging
pre-trained vision foundation models to extract distinct features,
retrieval-augmented generation (RAG) techniques to recognize instances in the
visual input, and visual prompting methods. Our model-agnostic vision toolkit
enables flexible and efficient personalization without extensive retraining. We
demonstrate state-of-the-art results, outperforming conventional training-based
approaches, and establish a new standard for LVLM personalization.
|
2502.02454
|
IMDPrompter: Adapting SAM to Image Manipulation Detection by Cross-View
Automated Prompt Learning
|
cs.CV
|
Using extensive training data from SA-1B, the Segment Anything Model (SAM)
has demonstrated exceptional generalization and zero-shot capabilities,
attracting widespread attention in areas such as medical image segmentation and
remote sensing image segmentation. However, its performance in the field of
image manipulation detection remains largely unexplored and unconfirmed. There
are two main challenges in applying SAM to image manipulation detection: a)
reliance on manual prompts, and b) the difficulty of single-view information in
supporting cross-dataset generalization. To address these challenges, we
develop a cross-view prompt learning paradigm called IMDPrompter based on SAM.
Benefiting from the design of automated prompts, IMDPrompter no longer relies
on manual guidance, enabling automated detection and localization.
Additionally, we propose components such as Cross-view Feature Perception,
Optimal Prompt Selection, and Cross-View Prompt Consistency, which facilitate
cross-view perceptual learning and guide SAM to generate accurate masks.
Extensive experimental results from five datasets (CASIA, Columbia, Coverage,
IMD2020, and NIST16) validate the effectiveness of our proposed method.
|
2502.02456
|
Model Human Learners: Computational Models to Guide Instructional Design
|
cs.HC cs.AI cs.SC
|
Instructional designers face an overwhelming array of design choices, making
it challenging to identify the most effective interventions. To address this
issue, I propose the concept of a Model Human Learner, a unified computational
model of learning that can aid designers in evaluating candidate interventions.
This paper presents the first successful demonstration of this concept, showing
that a computational model can accurately predict the outcomes of two human A/B
experiments -- one testing a problem sequencing intervention and the other
testing an item design intervention. It also demonstrates that such a model can
generate learning curves without requiring human data and provide theoretical
insights into why an instructional intervention is effective. These findings
lay the groundwork for future Model Human Learners that integrate cognitive and
learning theories to support instructional design across diverse tasks and
interventions.
|
2502.02457
|
Orientation-aware interaction-based deep material network in
polycrystalline materials modeling
|
cs.CE cs.LG
|
Multiscale simulations are indispensable for connecting microstructural
features to the macroscopic behavior of polycrystalline materials, but their
high computational demands limit their practicality. Deep material networks
(DMNs) have been proposed as efficient surrogate models, yet they fall short of
capturing texture evolution. To address this limitation, we propose the
orientation-aware interaction-based deep material network (ODMN), which
incorporates an orientation-aware mechanism and an interaction mechanism
grounded in the Hill-Mandel principle. The orientation-aware mechanism learns
the crystallographic textures, while the interaction mechanism captures
stress-equilibrium directions among representative volume element (RVE)
subregions, offering insight into internal microstructural mechanics. Notably,
ODMN requires only linear elastic data for training yet generalizes effectively
to complex nonlinear and anisotropic responses. Our results show that ODMN
accurately predicts both mechanical responses and texture evolution under
complex plastic deformation, thus expanding the applicability of DMNs to
polycrystalline materials. By balancing computational efficiency with
predictive fidelity, ODMN provides a robust framework for multiscale
simulations of polycrystalline materials.
|
2502.02458
|
SAISA: Towards Multimodal Large Language Models with Both Training and
Inference Efficiency
|
cs.CL cs.CV
|
Multimodal Large Language Models (MLLMs) mainly fall into two architectures,
each involving a trade-off between training and inference efficiency: embedding
space alignment (e.g., LLaVA-1.5) is inefficient during inference, while
cross-attention space alignment (e.g., Flamingo) is inefficient in training. In
this paper, we compare these two architectures and identify the key factors for
building efficient MLLMs. A primary difference between them lies in how
attention is applied to visual tokens, particularly in their interactions with
each other. To investigate whether attention among visual tokens is necessary,
we propose a new self-attention mechanism, NAAViT (\textbf{N}o
\textbf{A}ttention \textbf{A}mong \textbf{Vi}sual \textbf{T}okens), which
eliminates this type of attention. Our pilot experiment on LLaVA-1.5 shows that
attention among visual tokens is highly redundant. Based on these insights, we
introduce SAISA (\textbf{S}elf-\textbf{A}ttention \textbf{I}nput \textbf{S}pace
\textbf{A}lignment), a novel architecture that enhances both training and
inference efficiency. SAISA directly aligns visual features with the input
spaces of NAAViT self-attention blocks, reducing computational overhead in both
self-attention blocks and feed-forward networks (FFNs). Using the same
configuration as LLaVA-1.5, SAISA reduces inference FLOPs by 66\% and training
budget by 26\%, while achieving superior performance in terms of accuracy.
Comprehensive ablation studies further validate the effectiveness of SAISA
across various LLMs and visual encoders. The code and model will be publicly
available at https://github.com/icip-cas/SAISA.
|
2502.02463
|
Distribution Transformers: Fast Approximate Bayesian Inference With
On-The-Fly Prior Adaptation
|
stat.ML cs.LG
|
While Bayesian inference provides a principled framework for reasoning under
uncertainty, its widespread adoption is limited by the intractability of exact
posterior computation, necessitating the use of approximate inference. However,
existing methods are often computationally expensive, or demand costly
retraining when priors change, limiting their utility, particularly in
sequential inference problems such as real-time sensor fusion. To address these
challenges, we introduce the Distribution Transformer -- a novel architecture
that can learn arbitrary distribution-to-distribution mappings. Our method can
be trained to map a prior to the corresponding posterior, conditioned on some
dataset -- thus performing approximate Bayesian inference. Our novel
architecture represents a prior distribution as a (universally-approximating)
Gaussian Mixture Model (GMM), and transforms it into a GMM representation of
the posterior. The components of the GMM attend to each other via
self-attention, and to the datapoints via cross-attention. We demonstrate that
Distribution Transformers both maintain the flexibility to vary the prior and
significantly reduce computation times (from minutes to milliseconds), while
achieving log-likelihood performance on par with or superior to existing
approximate inference methods across tasks such as sequential inference,
quantum system parameter inference, and Gaussian Process predictive posterior
inference with hyperpriors.
|
2502.02464
|
Rankify: A Comprehensive Python Toolkit for Retrieval, Re-Ranking, and
Retrieval-Augmented Generation
|
cs.IR cs.CL
|
Retrieval, re-ranking, and retrieval-augmented generation (RAG) are critical
components of modern applications in information retrieval, question answering,
or knowledge-based text generation. However, existing solutions are often
fragmented, lacking a unified framework that easily integrates these essential
processes. The absence of a standardized implementation, coupled with the
complexity of retrieval and re-ranking workflows, makes it challenging for
researchers to compare and evaluate different approaches in a consistent
environment. While existing toolkits such as Rerankers and RankLLM provide
general-purpose reranking pipelines, they often lack the flexibility required
for fine-grained experimentation and benchmarking. In response to these
challenges, we introduce Rankify, a powerful and modular open-source toolkit
designed to unify retrieval, re-ranking, and RAG within a cohesive framework.
Rankify supports a wide range of retrieval techniques, including dense and
sparse retrievers, while incorporating state-of-the-art re-ranking models to
enhance retrieval quality. Additionally, Rankify includes a collection of
pre-retrieved datasets to facilitate benchmarking, available at Huggingface
(https://huggingface.co/datasets/abdoelsayed/reranking-datasets-light). To
encourage adoption and ease of integration, we provide comprehensive
documentation (http://rankify.readthedocs.io/), an open-source implementation
on GitHub (https://github.com/DataScienceUIBK/rankify), and a PyPI package for
easy installation (https://pypi.org/project/rankify/). As a unified and
lightweight framework, Rankify allows researchers and practitioners to advance
retrieval and re-ranking methodologies while ensuring consistency, scalability,
and ease of use.
|
2502.02465
|
Towards Consistent and Controllable Image Synthesis for Face Editing
|
cs.CV
|
Face editing methods, essential for tasks like virtual avatars, digital human
synthesis and identity preservation, have traditionally been built upon
GAN-based techniques, while recent focus has shifted to diffusion-based models
due to their success in image reconstruction. However, diffusion models still
face challenges in controlling specific attributes and preserving the
consistency of other unchanged attributes especially the identity
characteristics. To address these issues and facilitate more convenient editing
of face images, we propose a novel approach that leverages the power of
Stable-Diffusion (SD) models and crude 3D face models to control the lighting,
facial expression and head pose of a portrait photo. We observe that this task
essentially involves combining the target background, the identity, and the face
attributes to be edited. We strive to sufficiently disentangle the control of
these factors to enable consistency of face editing. Specifically, our method,
coined as RigFace, contains: 1) A Spatial Attribute Encoder that provides
precise and decoupled conditions of background, pose, expression and lighting;
2) A high-consistency FaceFusion method that transfers identity features from
the Identity Encoder to the denoising UNet of a pre-trained SD model; 3) An
Attribute Rigger that injects those conditions into the denoising UNet. Our
model achieves comparable or even superior performance in both identity
preservation and photorealism compared to existing face editing models. Code is
publicly available at https://github.com/weimengting/RigFace.
|
2502.02468
|
High-Fidelity Human Avatars from Laptop Webcams using Edge Compute
|
cs.CV
|
Photo-realistic human avatars have many applications. High-fidelity avatar
generation has traditionally required expensive professional camera rigs and
artistic labor, but recent research has enabled constructing avatars
automatically from smartphones with RGB and IR sensors. However, these new
methods still rely on the presence of high-resolution cameras on modern
smartphones and often require offloading the processing to powerful servers
with GPUs. Modern applications such as video conferencing call for the ability
to generate these avatars from consumer-grade laptop webcams using limited
compute available on-device. In this work, we develop a novel method based on
3D morphable models, landmark detection, photo-realistic texture GANs, and
differentiable rendering to tackle the problem of low webcam image quality and
edge computation. We build an automatic system to generate high-fidelity
animatable avatars under these limitations, leveraging the neural compute
capabilities of mobile chips.
|
2502.02470
|
Modular Training of Neural Networks aids Interpretability
|
cs.LG cs.AI
|
An approach to improve neural network interpretability is via clusterability,
i.e., splitting a model into disjoint clusters that can be studied
independently. We define a measure for clusterability and show that pre-trained
models form highly enmeshed clusters via spectral graph clustering. We thus
train models to be more modular using a "clusterability loss" function that
encourages the formation of non-interacting clusters. Using automated
interpretability techniques, we show that our method can help train models that
are more modular and learn different, disjoint, and smaller circuits. We
investigate CNNs trained on MNIST and CIFAR, small transformers trained on
modular addition, and language models. Our approach provides a promising
direction for training neural networks that learn simpler functions and are
easier to interpret.
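The clusterability idea can be made concrete with a toy metric. The sketch below scores what fraction of a network's absolute connection weight stays inside the clusters of a given partition; this is an illustrative measure only, not the paper's spectral-clustering definition, and the weight graphs and cluster labels are hypothetical.

```python
def clusterability(weights, clusters):
    # Fraction of total absolute connection weight that stays inside a
    # cluster: 1.0 means fully modular, low values mean enmeshed units.
    within = total = 0.0
    for (i, j), w in weights.items():
        total += abs(w)
        if clusters[i] == clusters[j]:
            within += abs(w)
    return within / total

# Hypothetical unit-to-unit weight graphs over 4 units, partitioned A/B.
modular = {(0, 1): 1.0, (2, 3): 1.0, (0, 2): 0.05}
enmeshed = {(0, 1): 1.0, (2, 3): 1.0, (0, 2): 1.0, (1, 3): 1.0}
clusters = {0: "A", 1: "A", 2: "B", 3: "B"}

score_mod = clusterability(modular, clusters)
score_enm = clusterability(enmeshed, clusters)
print(score_mod, score_enm)  # modular graph scores near 1, enmeshed near 0.5
```

A "clusterability loss" in this spirit would penalize the cross-cluster mass so that training pushes the score toward 1.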
|
2502.02471
|
Mind the Gap: Evaluating Patch Embeddings from General-Purpose and
Histopathology Foundation Models for Cell Segmentation and Classification
|
cs.CV cs.AI cs.LG q-bio.QM
|
Recent advancements in foundation models have transformed computer vision,
driving significant performance improvements across diverse domains, including
digital histopathology. However, the advantages of domain-specific
histopathology foundation models over general-purpose models for specialized
tasks such as cell analysis remain underexplored. This study investigates the
representation learning gap between these two categories by analyzing
multi-level patch embeddings applied to cell instance segmentation and
classification. We implement an encoder-decoder architecture with a consistent
decoder and various encoders. These include convolutional, vision transformer
(ViT), and hybrid encoders pre-trained on ImageNet-22K or LVD-142M,
representing general-purpose foundation models. These are compared against ViT
encoders from the recently released UNI, Virchow2, and Prov-GigaPath foundation
models, trained on patches extracted from hundreds of thousands of
histopathology whole-slide images. The decoder integrates patch embeddings from
different encoder depths via skip connections to generate semantic and distance
maps. These maps are then post-processed to create instance segmentation masks
where each label corresponds to an individual cell and to perform cell-type
classification. All encoders remain frozen during training to assess their
pre-trained feature extraction capabilities. Using the PanNuke and CoNIC
histopathology datasets, and the newly introduced Nissl-stained CytoDArk0
dataset for brain cytoarchitecture studies, we evaluate instance-level
detection, segmentation accuracy, and cell-type classification. This study
provides insights into the comparative strengths and limitations of
general-purpose vs. histopathology foundation models, offering guidance for
model selection in cell-focused histopathology and brain cytoarchitecture
analysis workflows.
|
2502.02472
|
SDE Matching: Scalable and Simulation-Free Training of Latent Stochastic
Differential Equations
|
stat.ML cs.LG
|
The Latent Stochastic Differential Equation (SDE) is a powerful tool for time
series and sequence modeling. However, training Latent SDEs typically relies on
adjoint sensitivity methods, which depend on simulation and backpropagation
through approximate SDE solutions, limiting scalability. In this work, we
propose SDE Matching, a new simulation-free method for training Latent SDEs.
Inspired by modern Score- and Flow Matching algorithms for learning generative
dynamics, we extend these ideas to the domain of stochastic dynamics for time
series and sequence modeling, eliminating the need for costly numerical
simulations. Our results demonstrate that SDE Matching achieves performance
comparable to adjoint sensitivity methods while drastically reducing
computational complexity.
|
2502.02475
|
Style transfer as data augmentation: evaluating unpaired image-to-image
translation models in mammography
|
eess.IV cs.CV physics.med-ph
|
Several studies indicate that deep learning models can learn to detect breast
cancer from mammograms (X-ray images of the breasts). However, challenges with
overfitting and poor generalisability prevent their routine use in the clinic.
Models trained on data from one patient population may not perform well on
another due to differences in their data domains, emerging due to variations in
scanning technology or patient characteristics. Data augmentation techniques
can be used to improve generalisability by expanding the diversity of feature
representations in the training data by altering existing examples.
Image-to-image translation models are one approach capable of imposing the
characteristic feature representations (i.e. style) of images from one dataset
onto another. However, evaluating model performance is non-trivial,
particularly in the absence of ground truths (a common reality in medical
imaging). Here, we describe some key aspects that should be considered when
evaluating style transfer algorithms, highlighting the advantages and
disadvantages of popular metrics, and important factors to be mindful of when
implementing them in practice. We consider two types of generative models: a
cycle-consistent generative adversarial network (CycleGAN) and a
diffusion-based SynDiff model. We learn unpaired image-to-image translation
across three mammography datasets. We highlight that undesirable aspects of
model performance may determine the suitability of some metrics, and also
provide some analysis indicating the extent to which various metrics assess
unique aspects of model performance. We emphasise the need to use several
metrics for a comprehensive assessment of model performance.
|
2502.02479
|
Using Random Noise Equivariantly to Boost Graph Neural Networks
Universally
|
cs.LG
|
Recent advances in Graph Neural Networks (GNNs) have explored the potential
of random noise as an input feature to enhance expressivity across diverse
tasks. However, naively incorporating noise can degrade performance, while
architectures tailored to exploit noise for specific tasks excel yet lack broad
applicability. This paper tackles these issues by laying down a theoretical
framework that elucidates the increased sample complexity when introducing
random noise into GNNs without careful design. We further propose Equivariant
Noise GNN (ENGNN), a novel architecture that harnesses the symmetrical
properties of noise to mitigate sample complexity and bolster generalization.
Our experiments demonstrate that using noise equivariantly significantly
enhances performance on node-level, link-level, subgraph, and graph-level tasks
and achieves comparable performance to models designed for specific tasks,
thereby offering a general method to boost expressivity across various graph
tasks.
|
2502.02480
|
Stable Port-Hamiltonian Neural Networks
|
cs.LG
|
In recent years, nonlinear dynamic system identification using artificial
neural networks has garnered attention due to its manifold potential
applications in virtually all branches of science and engineering. However,
purely data-driven approaches often struggle with extrapolation and may yield
physically implausible forecasts. Furthermore, the learned dynamics can exhibit
instabilities, making it difficult to apply such models safely and robustly.
This article proposes stable port-Hamiltonian neural networks, a machine
learning architecture that incorporates the physical biases of energy
conservation or dissipation while guaranteeing global Lyapunov stability of the
learned dynamics. Evaluations with illustrative examples and real-world
measurement data demonstrate the model's ability to generalize from sparse
data, outperforming purely data-driven approaches and avoiding instability
issues. In addition, the model's potential for data-driven surrogate modeling
is highlighted in application to multi-physics simulation data.
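The structural bias behind port-Hamiltonian models can be illustrated with a minimal sketch. Assuming dynamics of the form x' = (J - R) grad H(x) with J skew-symmetric and R positive semi-definite, the energy H can never increase along trajectories; the 2-state system, quadratic Hamiltonian, and explicit Euler integrator below are toy choices, not the paper's learned architecture.

```python
def step(x, dt=0.01):
    # x' = (J - R) grad H(x) with J = [[0, 1], [-1, 0]] (skew-symmetric,
    # lossless exchange), R = 0.1 * I (PSD dissipation), and grad H = x
    # for H(x) = 0.5 * ||x||^2. Then dH/dt = -grad_H^T R grad_H <= 0.
    dx0 = -0.1 * x[0] + 1.0 * x[1]
    dx1 = -1.0 * x[0] - 0.1 * x[1]
    return [x[0] + dt * dx0, x[1] + dt * dx1]

x = [1.0, 0.5]
energies = []
for _ in range(1000):
    energies.append(0.5 * (x[0] ** 2 + x[1] ** 2))
    x = step(x)

# Dissipation makes the energy decrease monotonically step to step.
assert all(b <= a for a, b in zip(energies, energies[1:]))
print(round(energies[0], 4), round(energies[-1], 4))
```

Learning J, R, and H from data while preserving this structure is what gives such models their stability guarantee.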
|
2502.02481
|
Multilingual Machine Translation with Open Large Language Models at
Practical Scale: An Empirical Study
|
cs.CL
|
Large language models (LLMs) have shown continuously improving multilingual
capabilities, and even small-scale open-source models have demonstrated rapid
performance enhancement. In this paper, we systematically explore the abilities
of open LLMs with less than ten billion parameters to handle multilingual
machine translation (MT) tasks. We conduct comprehensive evaluations on six
popular LLMs and find that models like Gemma2-9B exhibit impressive
multilingual translation capabilities. We then introduce the Parallel-First
Monolingual-Second (PFMS) data mixing strategy in the continual pretraining
stage to further enhance the MT performance and present GemmaX2-28, a 9B model
achieving top-tier multilingual translation performance across 28 languages.
Specifically, GemmaX2-28 consistently outperforms the state-of-the-art (SOTA)
models such as TowerInstruct and XALMA and achieves competitive performance
with Google Translate and GPT-4-turbo.
|
2502.02483
|
Distributional Diffusion Models with Scoring Rules
|
cs.LG stat.ML
|
Diffusion models generate high-quality synthetic data. They operate by
defining a continuous-time forward process which gradually adds Gaussian noise
to data until fully corrupted. The corresponding reverse process progressively
"denoises" a Gaussian sample into a sample from the data distribution. However,
generating high-quality outputs requires many discretization steps to obtain a
faithful approximation of the reverse process. This is expensive and has
motivated the development of many acceleration methods. We propose to
accomplish sample generation by learning the posterior {\em distribution} of
clean data samples given their noisy versions, instead of only the mean of this
distribution. This allows us to sample from the probability transitions of the
reverse process on a coarse time scale, significantly accelerating inference
with minimal degradation of the quality of the output. This is accomplished by
replacing the standard regression loss used to estimate conditional means with
a scoring rule. We validate our method on image and robot trajectory
generation, where we consistently outperform standard diffusion models at few
discretization steps.
|
2502.02486
|
Catoni Contextual Bandits are Robust to Heavy-tailed Rewards
|
stat.ML cs.LG
|
Typical contextual bandit algorithms assume that the rewards at each round
lie in some fixed range $[0, R]$, and their regret scales polynomially with
this reward range $R$. However, many practical scenarios naturally involve
heavy-tailed rewards or rewards where the worst-case range can be substantially
larger than the variance. In this paper, we develop an algorithmic approach
building on Catoni's estimator from robust statistics, and apply it to
contextual bandits with general function approximation. When the variance of
the reward at each round is known, we use a variance-weighted regression
approach and establish a regret bound that depends only on the cumulative
reward variance and logarithmically on the reward range $R$ as well as the
number of rounds $T$. For the unknown-variance case, we further propose a
careful peeling-based algorithm and remove the need for cumbersome variance
estimation. With additional dependence on the fourth moment, our algorithm also
enjoys a variance-based bound with logarithmic reward-range dependence.
Moreover, we demonstrate the optimality of the leading-order term in our regret
bound through a matching lower bound.
|
2502.02487
|
Hier-EgoPack: Hierarchical Egocentric Video Understanding with Diverse
Task Perspectives
|
cs.CV
|
Our comprehension of video streams depicting human activities is naturally
multifaceted: in just a few moments, we can grasp what is happening, identify
the relevance and interactions of objects in the scene, and forecast what will
happen soon, everything all at once. To endow autonomous systems with such a
holistic perception, learning how to correlate concepts, abstract knowledge
across diverse tasks, and leverage task synergies when learning novel skills
is essential. A significant step in this direction is EgoPack, a unified
framework for understanding human activities across diverse tasks with minimal
overhead. EgoPack promotes information sharing and collaboration among
downstream tasks, essential for efficiently learning new skills. In this paper,
we introduce Hier-EgoPack, which advances EgoPack by enabling reasoning also
across diverse temporal granularities, which expands its applicability to a
broader range of downstream tasks. To achieve this, we propose a novel
hierarchical architecture for temporal reasoning equipped with a GNN layer
specifically designed to tackle the challenges of multi-granularity reasoning
effectively. We evaluate our approach on multiple Ego4d benchmarks involving
both clip-level and frame-level reasoning, demonstrating how our hierarchical
unified architecture effectively solves these diverse tasks simultaneously.
|
2502.02488
|
Do Graph Diffusion Models Accurately Capture and Generate Substructure
Distributions?
|
cs.LG
|
Diffusion models have gained popularity in graph generation tasks; however,
the extent of their expressivity concerning the graph distributions they can
learn is not fully understood. Unlike models in other domains, popular
backbones for graph diffusion models, such as Graph Transformers, do not
possess universal expressivity to accurately model the distribution scores of
complex graph data. Our work addresses this limitation by focusing on the
frequency of specific substructures as a key characteristic of target graph
distributions. When evaluating existing models using this metric, we find that
they fail to maintain the distribution of substructure counts observed in the
training set when generating new graphs. To address this issue, we establish a
theoretical connection between the expressivity of Graph Neural Networks (GNNs)
and the overall performance of graph diffusion models, demonstrating that more
expressive GNN backbones can better capture complex distribution patterns. By
integrating advanced GNNs into the backbone architecture, we achieve
significant improvements in substructure generation.
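The substructure-frequency metric can be made concrete with triangle counts, one of the simplest substructures. The sketch below compares triangle counts between a "training" graph and a "generated" graph; it is an illustrative statistic only, and the two toy graphs stand in for real training and generated samples.

```python
from itertools import combinations

def triangle_count(edges):
    # Count triangles in an undirected simple graph given as an edge list.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    count = 0
    for u, v in edges:
        count += len((adj[u] & adj[v]) - {u, v})  # common neighbors of the edge
    return count // 3  # each triangle is counted once per incident edge

train = list(combinations(range(4), 2))       # 4-clique: contains 4 triangles
generated = [(0, 1), (1, 2), (2, 3), (3, 0)]  # 4-cycle: contains 0 triangles
print(triangle_count(train), triangle_count(generated))
```

A faithful generative model should reproduce the training set's distribution of such counts; a mismatch like the one above is exactly what the proposed metric flags.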
|
2502.02489
|
A Self-Supervised Framework for Improved Generalisability in Ultrasound
B-mode Image Segmentation
|
cs.CV cs.AI cs.LG
|
Ultrasound (US) imaging is clinically invaluable due to its noninvasive and
safe nature. However, interpreting US images is challenging: it requires
significant expertise and time, and is often prone to errors. Deep learning
offers assistive solutions such as segmentation. Supervised methods rely on
large, high-quality, and consistently labeled datasets, which are challenging
to curate. Moreover, these methods tend to underperform on out-of-distribution
data, limiting their clinical utility. Self-supervised learning (SSL) has
emerged as a promising alternative, leveraging unlabeled data to enhance model
performance and generalisability. We introduce a contrastive SSL approach
tailored for B-mode US images, incorporating a novel Relation Contrastive Loss
(RCL). RCL encourages learning of distinct features by differentiating positive
and negative sample pairs through a learnable metric. Additionally, we propose
spatial and frequency-based augmentation strategies for the representation
learning on US images. Our approach significantly outperforms traditional
supervised segmentation methods across three public breast US datasets,
particularly in data-limited scenarios. Notable improvements on the Dice
similarity metric include a 4% increase on 20% and 50% of the BUSI dataset,
nearly 6% and 9% improvements on 20% and 50% of the BrEaST dataset, and 6.4%
and 3.7% improvements on 20% and 50% of the UDIAT dataset, respectively.
Furthermore, we demonstrate superior generalisability on the
out-of-distribution UDIAT dataset with performance boosts of 20.6% and 13.6%
compared to the supervised baseline using 20% and 50% of the BUSI and BrEaST
training data, respectively. Our research highlights that domain-inspired SSL
can improve US segmentation, especially under data-limited conditions.
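The contrastive objective underlying this kind of SSL can be sketched with the standard InfoNCE loss. Note the hedge: the paper's Relation Contrastive Loss differs by scoring pairs with a learnable metric rather than the fixed cosine similarity used below, and the embeddings here are hypothetical.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def info_nce(anchor, positive, negatives, temperature=0.1):
    # Standard contrastive loss: pull the positive pair together in
    # embedding space and push the negatives away.
    logits = [cosine(anchor, positive) / temperature]
    logits += [cosine(anchor, n) / temperature for n in negatives]
    m = max(logits)  # log-sum-exp with max subtraction for stability
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_denom)

anchor = [1.0, 0.0]
loss_easy = info_nce(anchor, [0.9, 0.1], [[-1.0, 0.0]])  # positive aligned
loss_hard = info_nce(anchor, [0.0, 1.0], [[1.0, 0.1]])   # negative aligned
print(loss_easy, loss_hard)  # aligned positive gives the lower loss
```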
|
2502.02492
|
VideoJAM: Joint Appearance-Motion Representations for Enhanced Motion
Generation in Video Models
|
cs.CV
|
Despite tremendous recent progress, generative video models still struggle to
capture real-world motion, dynamics, and physics. We show that this limitation
arises from the conventional pixel reconstruction objective, which biases
models toward appearance fidelity at the expense of motion coherence. To
address this, we introduce VideoJAM, a novel framework that instills an
effective motion prior into video generators by encouraging the model to learn a
joint appearance-motion representation. VideoJAM is composed of two
complementary units. During training, we extend the objective to predict both
the generated pixels and their corresponding motion from a single learned
representation. During inference, we introduce Inner-Guidance, a mechanism that
steers the generation toward coherent motion by leveraging the model's own
evolving motion prediction as a dynamic guidance signal. Notably, our framework
can be applied to any video model with minimal adaptations, requiring no
modifications to the training data or scaling of the model. VideoJAM achieves
state-of-the-art performance in motion coherence, surpassing highly competitive
proprietary models while also enhancing the perceived visual quality of the
generations. These findings emphasize that appearance and motion can be
complementary and, when effectively integrated, enhance both the visual quality
and the coherence of video generation. Project website:
https://hila-chefer.github.io/videojam-paper.github.io/
|
2502.02493
|
EasySpec: Layer-Parallel Speculative Decoding for Efficient Multi-GPU
Utilization
|
cs.LG
|
Speculative decoding is an effective and lossless method for Large Language
Model (LLM) inference acceleration. It employs a smaller model to generate a
draft token sequence, which is then verified by the original base model. In
multi-GPU systems, inference latency can be further reduced through tensor
parallelism (TP), while the optimal TP size of the draft model is typically
smaller than that of the base model, leading to GPU idling during the drafting
stage. To solve this problem, we propose EasySpec, a layer-parallel speculation
strategy that optimizes the efficiency of multi-GPU utilization. EasySpec breaks
the sequential execution order of layers in the drafting model, enabling
multi-layer parallelization across devices, albeit with some induced
approximation errors. After each drafting-and-verification iteration, the draft
model's key-value (KV) cache is calibrated in a single forward pass, preventing
long-term error accumulation at minimal additional latency. We evaluated
EasySpec on several mainstream open-source LLMs, using smaller versions of
models from the same series as drafters. The results demonstrate that EasySpec
can achieve a peak speedup of 4.17x compared to vanilla decoding, while
preserving the original distribution of the base LLMs. Specifically, the
drafting stage can be accelerated by up to 1.62x with a maximum accuracy drop
of only 7%, requiring no training or fine-tuning on the draft models.
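The drafting-and-verification loop that EasySpec builds on can be sketched with a greedy speculative-decoding skeleton. The two toy "models" below are illustrative stand-ins, and EasySpec's layer-parallel drafting and KV-cache calibration are not modeled; the sketch only shows why verification keeps decoding lossless.

```python
# A cheap draft model proposes a window of tokens; the base model verifies
# them and keeps the longest matching prefix plus one corrected token.

def base_next(context):
    return (sum(context) * 7 + 3) % 11  # toy base model (greedy next token)

def draft_next(context):
    s = sum(context)
    return (s * 7 + 3) % 11 if s % 5 else 0  # imperfect, occasionally wrong

def speculative_decode(prompt, n_tokens, window=4):
    out = list(prompt)
    while len(out) - len(prompt) < n_tokens:
        draft = []
        for _ in range(window):           # drafting stage (cheap model)
            draft.append(draft_next(out + draft))
        accepted = []
        for tok in draft:                 # verification stage (base model)
            expect = base_next(out + accepted)
            if tok == expect:
                accepted.append(tok)
            else:
                accepted.append(expect)   # correct the first mismatch and stop
                break
        out += accepted
    return out[len(prompt):len(prompt) + n_tokens]

greedy = []
for _ in range(12):                       # plain greedy decoding, for reference
    greedy.append(base_next([1, 2, 3] + greedy))
print(speculative_decode([1, 2, 3], 12) == greedy)  # lossless: prints True
```

Every emitted token is checked against the base model, so the output distribution is preserved regardless of draft quality; only the speedup depends on how often the draft agrees.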
|
2502.02494
|
Analyzing Similarity Metrics for Data Selection for Language Model
Pretraining
|
cs.LG cs.CL
|
Many methods use similarity between training examples to curate pretraining
datasets for language models, both to diversify the data and to select
examples similar to high-quality data. However, similarity is typically
measured with off-the-shelf embedding models that are generic or trained for
tasks such as retrieval. This paper introduces a framework to analyze the
suitability of embedding models specifically for data curation in the language
model pretraining setting. We quantify the correlation between similarity in
the embedding space to similarity in pretraining loss between different
training examples, and how diversifying in the embedding space affects
pretraining quality. We analyze a variety of embedding models in our framework,
with experiments using the Pile dataset for pretraining a 1.7B parameter
decoder-only language model. We find that the embedding models we consider are
all useful for pretraining data curation. Moreover, a simple approach of
averaging per-token embeddings proves to be surprisingly competitive with more
sophisticated embedding models -- likely because the latter are not designed
specifically for pretraining data curation. Indeed, we believe our analysis and
evaluation framework can serve as a foundation for the design of embedding
models that specifically reason about similarity in pretraining datasets.
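The "averaging per-token embeddings" baseline is simple enough to sketch directly. The per-token vectors below are hypothetical 2-d stand-ins for real model embeddings; the point is only the mechanics of mean-pooling followed by cosine similarity for curation decisions.

```python
import math

def avg_pool(token_embeddings):
    # Mean of per-token embeddings: the simple document representation
    # found surprisingly competitive for data curation.
    dim = len(token_embeddings[0])
    n = len(token_embeddings)
    return [sum(tok[d] for tok in token_embeddings) / n for d in range(dim)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical per-token embeddings for three tiny "documents".
doc_a = [[1.0, 0.1], [0.9, 0.0], [1.1, 0.2]]  # same topic as doc_b
doc_b = [[1.0, 0.0], [0.8, 0.1]]
doc_c = [[0.0, 1.0], [0.1, 0.9]]              # different topic

sim_ab = cosine(avg_pool(doc_a), avg_pool(doc_b))
sim_ac = cosine(avg_pool(doc_a), avg_pool(doc_c))
print(sim_ab > sim_ac)  # a curation method would treat a and b as more similar
```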
|
2502.02495
|
The Causal-Effect Score in Data Management
|
cs.DB cs.AI
|
The Causal Effect (CE) is a numerical measure of causal influence of
variables on observed results. Although the CE is widely used in many areas,
only preliminary attempts have been made to use it as an attribution score in
data management, to measure the causal strength of tuples for query answering in
databases. In this work, we introduce, generalize and investigate the so-called
Causal-Effect Score in the context of classical and probabilistic databases.
|
2502.02496
|
Deep Weight Factorization: Sparse Learning Through the Lens of
Artificial Symmetries
|
cs.LG stat.ML
|
Sparse regularization techniques are well-established in machine learning,
yet their application in neural networks remains challenging due to the
non-differentiability of penalties like the $L_1$ norm, which is incompatible
with stochastic gradient descent. A promising alternative is shallow weight
factorization, where weights are decomposed into two factors, allowing for
smooth optimization of $L_1$-penalized neural networks by adding differentiable
$L_2$ regularization to the factors. In this work, we introduce deep weight
factorization, extending previous shallow approaches to more than two factors.
We theoretically establish equivalence of our deep factorization with
non-convex sparse regularization and analyze its impact on training dynamics
and optimization. Due to the limitations posed by standard training practices,
we propose a tailored initialization scheme and identify important learning
rate requirements necessary for training factorized networks. We demonstrate
the effectiveness of our deep weight factorization through experiments on
various architectures and datasets, consistently outperforming its shallow
counterpart and widely used pruning methods.
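The link between factorized $L_2$ regularization and non-convex sparse penalties can be checked numerically: for a scalar weight $w$ written as a product of $D$ factors, the minimal sum of squared factors subject to the product equaling $w$ is $D\,|w|^{2/D}$, attained at the balanced factorization. This is a generic sketch of that identity, not the paper's method; `induced_penalty` is a hypothetical name and the grid search is purely illustrative.

```python
import numpy as np

def induced_penalty(w: float, depth: int) -> float:
    """Minimal sum of squared factors with product w: the balanced
    factorization |a_i| = |w|**(1/depth) yields depth * |w|**(2/depth),
    a non-convex sparsity-inducing penalty for depth > 2."""
    return depth * abs(w) ** (2.0 / depth)

# Numerical check for depth 3: scan factorizations a * b * c = w and
# compare the smallest a^2 + b^2 + c^2 found to the closed form.
w, depth = 2.0, 3
best = float("inf")
for a in np.linspace(0.1, 3.0, 300):
    for b in np.linspace(0.1, 3.0, 300):
        c = w / (a * b)
        best = min(best, a * a + b * b + c * c)
assert abs(best - induced_penalty(w, depth)) < 1e-2
```

For depth 2 this recovers the familiar $2|w|$ ($L_1$) penalty; deeper factorizations interpolate toward increasingly aggressive $L_{2/D}$-type sparsity.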
|
2502.02499
|
Learning to generate physical ocean states: Towards hybrid climate
modeling
|
cs.LG
|
Ocean General Circulation Models require extensive computational resources to
reach equilibrium states, while deep learning emulators, despite offering fast
predictions, lack the physical interpretability and long-term stability
necessary for climate scientists to understand climate sensitivity (to
greenhouse gas emissions) and mechanisms of abrupt variability such as
tipping points. We propose to take the best from both worlds by leveraging deep
generative models to produce physically consistent oceanic states that can
serve as initial conditions for climate projections. We assess the viability of
this hybrid approach through both physical metrics and numerical experiments,
and highlight the benefits of enforcing physical constraints during generation.
Although we train here on ocean variables from idealized numerical simulations,
we claim that this hybrid approach, combining the computational efficiency of
deep learning with the physical accuracy of numerical models, can effectively
reduce the computational burden of running climate models to equilibrium, and
reduce uncertainties in climate projections by minimizing drifts in baseline
simulations.
|
2502.02500
|
The Skin Game: Revolutionizing Standards for AI Dermatology Model
Comparison
|
eess.IV cs.CV q-bio.TO
|
Deep Learning approaches in dermatological image classification have shown
promising results, yet the field faces significant methodological challenges
that impede proper evaluation. This paper presents a dual contribution: first,
a systematic analysis of current methodological practices in skin disease
classification research, revealing substantial inconsistencies in data
preparation, augmentation strategies, and performance reporting; second, a
comprehensive training and evaluation framework demonstrated through
experiments with the DINOv2-Large vision transformer across three benchmark
datasets (HAM10000, DermNet, ISIC Atlas). The analysis identifies concerning
patterns, including pre-split data augmentation and validation-based reporting,
potentially leading to overestimated metrics, while highlighting the lack of
unified methodology standards. The experimental results demonstrate DINOv2's
performance in skin disease classification, achieving macro-averaged F1-scores
of 0.85 (HAM10000), 0.71 (DermNet), and 0.84 (ISIC Atlas). Attention map
analysis reveals critical patterns in the model's decision-making, showing
sophisticated feature recognition in typical presentations but significant
vulnerabilities with atypical cases and composite images. Our findings
highlight the need for standardized evaluation protocols and careful
implementation strategies in clinical settings. We propose comprehensive
methodological recommendations for model development, evaluation, and clinical
deployment, emphasizing rigorous data preparation, systematic error analysis,
and specialized protocols for different image types. To promote
reproducibility, we provide our implementation code through GitHub. This work
establishes a foundation for rigorous evaluation standards in dermatological
image classification and provides insights for responsible AI implementation in
clinical dermatology.
|
2502.02501
|
Graph-based Document Structure Analysis
|
cs.CV
|
When reading a document, glancing at the spatial layout of a document is an
initial step to understand it roughly. Traditional document layout analysis
(DLA) methods, however, offer only a superficial parsing of documents, focusing
on basic instance detection and often failing to capture the nuanced spatial
and logical relations between instances. These limitations hinder DLA-based
models from achieving a gradually deeper comprehension akin to human reading.
In this work, we propose a novel graph-based Document Structure Analysis (gDSA)
task. This task requires the model not only to detect document elements but
also to generate spatial and logical relations in the form of a graph
structure, allowing documents to be understood in a holistic and intuitive
manner. For this new task,
we construct a relation graph-based document structure analysis dataset
(GraphDoc) with 80K document images and 4.13M relation annotations, enabling
training models to complete multiple tasks such as reading-order prediction,
hierarchical structure analysis, and complex inter-element relation inference.
Furthermore,
a document relation graph generator (DRGG) is proposed to address the gDSA
task, achieving 57.6% mAP$_g$@0.5 and serving as a strong benchmark baseline
on this novel task and dataset. We hope this graphical
representation of document structure can mark an innovative advancement in
document structure analysis and understanding. The new dataset and code will be
made publicly available at https://yufanchen96.github.io/projects/GraphDoc.
|
2502.02504
|
Unified Spatial-Temporal Edge-Enhanced Graph Networks for Pedestrian
Trajectory Prediction
|
cs.CV cs.AI
|
Pedestrian trajectory prediction aims to forecast future movements based on
historical paths. Spatial-temporal (ST) methods often separately model spatial
interactions among pedestrians and temporal dependencies of individuals. They
overlook the direct impacts of interactions among different pedestrians across
various time steps (i.e., high-order cross-time interactions). This limits
their ability to capture ST inter-dependencies and hinders prediction
performance. To address these limitations, we propose UniEdge with three major
designs. Firstly, we introduce a unified ST graph data structure that
simplifies high-order cross-time interactions into first-order relationships,
enabling the learning of ST inter-dependencies in a single step. This avoids
the information loss caused by multi-step aggregation. Secondly, traditional
GNNs focus on aggregating pedestrian node features, neglecting the propagation
of implicit interaction patterns encoded in edge features. We propose the
Edge-to-Edge-Node-to-Node Graph Convolution (E2E-N2N-GCN), a novel dual-graph
network that jointly models explicit N2N social interactions among pedestrians
and implicit E2E influence propagation across these interaction patterns.
Finally, to overcome the limited receptive fields and challenges in capturing
long-range dependencies of auto-regressive architectures, we introduce a
transformer encoder-based predictor that enables global modeling of temporal
correlation. UniEdge outperforms state-of-the-art methods on multiple datasets,
including ETH, UCY, and SDD.
|
2502.02508
|
Satori: Reinforcement Learning with Chain-of-Action-Thought Enhances LLM
Reasoning via Autoregressive Search
|
cs.CL cs.AI
|
Large language models (LLMs) have demonstrated remarkable reasoning
capabilities across diverse domains. Recent studies have shown that increasing
test-time computation enhances LLMs' reasoning capabilities. This typically
involves extensive sampling at inference time guided by an external LLM
verifier, resulting in a two-player system. Despite external guidance, the
effectiveness of this system demonstrates the potential of a single LLM to
tackle complex tasks. Thus, we pose a new research problem: Can we internalize
the searching capabilities to fundamentally enhance the reasoning abilities of
a single LLM? This work explores an orthogonal direction focusing on
post-training LLMs for autoregressive searching (i.e., an extended reasoning
process with self-reflection and self-exploration of new strategies). To
achieve this, we propose the Chain-of-Action-Thought (COAT) reasoning and a
two-stage training paradigm: 1) a small-scale format tuning stage to
internalize the COAT reasoning format and 2) a large-scale self-improvement
stage leveraging reinforcement learning. Our approach results in Satori, a 7B
LLM trained on open-source models and data. Extensive empirical evaluations
demonstrate that Satori achieves state-of-the-art performance on mathematical
reasoning benchmarks while exhibiting strong generalization to out-of-domain
tasks. Code, data, and models will be fully open-sourced.
|
2502.02513
|
Generative Modeling on Lie Groups via Euclidean Generalized Score
Matching
|
cs.LG
|
We extend Euclidean score-based diffusion processes to generative modeling on
Lie groups. Through the formalism of Generalized Score Matching, our approach
yields a Langevin dynamics which decomposes as a direct sum of Lie algebra
representations, enabling generative processes on Lie groups while operating in
Euclidean space. Unlike equivariant models, which restrict the space of
learnable functions by quotienting out group orbits, our method can model any
target distribution on any (non-Abelian) Lie group. Standard score matching
emerges as a special case of our framework when the Lie group is the
translation group. We prove that our generalized generative processes arise as
solutions to a new class of paired stochastic differential equations (SDEs),
introduced here for the first time. We validate our approach through
experiments on diverse data types, demonstrating its effectiveness in
real-world applications such as SO(3)-guided molecular conformer generation and
modeling ligand-specific global SE(3) transformations for molecular docking,
showing improvement in comparison to Riemannian diffusion on the group itself.
We show that an appropriate choice of Lie group enhances learning efficiency by
reducing the effective dimensionality of the trajectory space and enables the
modeling of transitions between complex data distributions. Additionally, we
demonstrate the universality of our approach by deriving how it extends to flow
matching.
|
2502.02514
|
Privacy Attacks on Image AutoRegressive Models
|
cs.CV cs.LG
|
Image autoregressive (IAR) models have surpassed diffusion models (DMs) in
both image quality (FID: 1.48 vs. 1.58) and generation speed. However, their
privacy risks remain largely unexplored. To address this, we conduct a
comprehensive privacy analysis comparing IARs to DMs. We develop a novel
membership inference attack (MIA) that achieves a significantly higher success
rate in detecting training images (TPR@FPR=1%: 86.38% for IARs vs. 4.91% for
DMs). Using this MIA, we perform dataset inference (DI) and find that IARs
require as few as six samples to detect dataset membership, compared to 200 for
DMs, indicating higher information leakage. Additionally, we extract hundreds
of training images from an IAR (e.g., 698 from VAR-d30). Our findings highlight
a fundamental privacy-utility trade-off: while IARs excel in generation quality
and speed, they are significantly more vulnerable to privacy attacks. This
suggests that incorporating techniques from DMs, such as per-token probability
modeling using diffusion, could help mitigate IARs' privacy risks. Our code is
available at https://github.com/sprintml/privacy_attacks_against_iars.
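The intuition behind membership inference, which this abstract builds on, is that memorized training samples tend to incur lower loss than unseen ones, so thresholding the per-sample loss separates members from non-members. The sketch below is a generic loss-threshold attack with synthetic loss distributions chosen for illustration; the paper's attack on IARs is more sophisticated.

```python
import numpy as np

def loss_threshold_mia(losses, threshold):
    """Predict 'member' for samples whose loss falls below the threshold."""
    return losses < threshold

# Synthetic per-sample losses: memorized members cluster at low loss.
rng = np.random.default_rng(1)
member_losses = rng.normal(loc=0.5, scale=0.2, size=1000)     # seen in training
nonmember_losses = rng.normal(loc=2.0, scale=0.5, size=1000)  # held out

tpr = loss_threshold_mia(member_losses, 1.2).mean()
fpr = loss_threshold_mia(nonmember_losses, 1.2).mean()
assert tpr > 0.9 and fpr < 0.1
```

The larger the gap between the two loss distributions, the higher the achievable TPR at low FPR, which is the regime (TPR@FPR=1%) the abstract reports.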
|
2502.02516
|
Adaptive Exploration for Multi-Reward Multi-Policy Evaluation
|
cs.LG cs.AI stat.ML
|
We study the policy evaluation problem in an online multi-reward multi-policy
discounted setting, where multiple reward functions must be evaluated
simultaneously for different policies. We adopt an $(\epsilon,\delta)$-PAC
perspective to achieve $\epsilon$-accurate estimates with high confidence
across finite or convex sets of rewards, a setting that has not been
investigated in the literature. Building on prior work on Multi-Reward Best
Policy Identification, we adapt the MR-NaS exploration scheme to jointly
minimize sample complexity for evaluating different policies across different
reward sets. Our approach leverages an instance-specific lower bound revealing
how the sample complexity scales with a measure of value deviation, guiding the
design of an efficient exploration policy. Although computing this bound
entails a hard non-convex optimization, we propose an efficient convex
approximation that holds for both finite and convex reward sets. Experiments in
tabular domains demonstrate the effectiveness of this adaptive exploration
scheme.
|
2502.02523
|
Brief analysis of DeepSeek R1 and its implications for Generative AI
|
cs.LG
|
In late January 2025, DeepSeek released their new reasoning model (DeepSeek
R1), which was developed at a fraction of the cost yet remains competitive with
OpenAI's models, despite the US's GPU export ban. This report discusses the
model, and what its release means for the field of Generative AI more widely.
We briefly discuss other models released from China in recent weeks and their
similarities; innovative use of Mixture of Experts (MoE), Reinforcement
Learning (RL) and clever engineering appear to be key factors in the
capabilities of these models. This think piece has been written to a tight
timescale, providing broad coverage of the topic, and serves as introductory
material for those looking to understand the model's technical advancements, as
well as its place in the ecosystem. Several further areas of research are
identified.
|
2502.02525
|
Diff9D: Diffusion-Based Domain-Generalized Category-Level 9-DoF Object
Pose Estimation
|
cs.CV cs.RO
|
Nine-degrees-of-freedom (9-DoF) object pose and size estimation is crucial
for enabling augmented reality and robotic manipulation. Category-level methods
have received extensive research attention due to their potential for
generalization to intra-class unknown objects. However, these methods require
manual collection and labeling of large-scale real-world training data. To
address this problem, we introduce a diffusion-based paradigm for
domain-generalized category-level 9-DoF object pose estimation. Our motivation
is to leverage the latent generalization ability of the diffusion model to
address the domain generalization challenge in object pose estimation. This
entails training the model exclusively on rendered synthetic data to achieve
generalization to real-world scenes. We propose an effective diffusion model to
redefine 9-DoF object pose estimation from a generative perspective. Our model
does not require any 3D shape priors during training or inference. By employing
the Denoising Diffusion Implicit Model, we demonstrate that the reverse
diffusion process can be executed in as few as 3 steps, achieving near
real-time performance. Finally, we design a robotic grasping system comprising
both hardware and software components. Through comprehensive experiments on two
benchmark datasets and the real-world robotic system, we show that our method
achieves state-of-the-art domain generalization performance. Our code will be
made public at https://github.com/CNJianLiu/Diff9D.
|
2502.02527
|
TabPFN Unleashed: A Scalable and Effective Solution to Tabular
Classification Problems
|
cs.LG
|
TabPFN has emerged as a promising in-context learning model for tabular data,
capable of directly predicting the labels of test samples given labeled
training examples. It has demonstrated competitive performance, particularly on
small-scale classification tasks. However, despite its effectiveness, TabPFN
still requires further refinement in several areas, including handling
high-dimensional features, aligning with downstream datasets, and scaling to
larger datasets. In this paper, we revisit existing variants of TabPFN and
observe that most approaches focus either on reducing bias or variance, often
neglecting the need to address the other side, while also increasing inference
overhead. To fill this gap, we propose Beta (Bagging and Encoder-based
Fine-tuning for TabPFN Adaptation), a novel and effective method designed to
minimize both bias and variance. To reduce bias, we introduce a lightweight
encoder to better align downstream tasks with the pre-trained TabPFN. By
increasing the number of encoders in a lightweight manner, Beta mitigates
variance, thereby further improving the model's performance. Additionally,
bootstrapped sampling is employed to further reduce the impact of data
perturbations on the model, all while maintaining computational efficiency
during inference. Our approach enhances TabPFN's ability to handle
high-dimensional data and scale to larger datasets. Experimental results on
over 200 benchmark classification datasets demonstrate that Beta either
outperforms or matches state-of-the-art methods.
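The variance-reduction role of bootstrapped sampling in Beta can be illustrated with generic bagging: fit the base predictor on bootstrap resamples and average the predicted class probabilities. This is only a sketch of that one ingredient; `bagged_predict` is a hypothetical name and the nearest-centroid classifier is a toy stand-in for TabPFN.

```python
import numpy as np

def bagged_predict(model_fn, X_train, y_train, X_test, n_bags=8, seed=0):
    """Average class-probability predictions over bootstrap resamples
    of the training set, reducing prediction variance."""
    rng = np.random.default_rng(seed)
    probs = []
    for _ in range(n_bags):
        idx = rng.integers(0, len(X_train), size=len(X_train))
        probs.append(model_fn(X_train[idx], y_train[idx], X_test))
    return np.mean(probs, axis=0)

def nearest_centroid(X_tr, y_tr, X_te):
    """Toy stand-in for the base predictor (TabPFN in the paper)."""
    cents = np.stack([X_tr[y_tr == c].mean(axis=0) for c in (0, 1)])
    d = ((X_te[:, None, :] - cents[None]) ** 2).sum(-1)
    e = np.exp(-d)  # softmax over negative distances as pseudo-probabilities
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 4)), rng.normal(2, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
probs = bagged_predict(nearest_centroid, X, y, X)
assert (probs.argmax(axis=1) == y).mean() > 0.95
```

Because each bag sees a perturbed training set, averaging their outputs damps the sensitivity of any single fit to data perturbations, without changing inference cost per bag.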
|
2502.02528
|
Why human-AI relationships need socioaffective alignment
|
cs.HC cs.AI
|
Humans strive to design safe AI systems that align with our goals and remain
under our control. However, as AI capabilities advance, we face a new
challenge: the emergence of deeper, more persistent relationships between
humans and AI systems. We explore how increasingly capable AI agents may
generate the perception of deeper relationships with users, especially as AI
becomes more personalised and agentic. This shift, from transactional
interaction to ongoing sustained social engagement with AI, necessitates a new
focus on socioaffective alignment-how an AI system behaves within the social
and psychological ecosystem co-created with its user, where preferences and
perceptions evolve through mutual influence. Addressing these dynamics involves
resolving key intrapersonal dilemmas, including balancing immediate versus
long-term well-being, protecting autonomy, and managing AI companionship
alongside the desire to preserve human social bonds. By framing these
challenges through a notion of basic psychological needs, we seek AI systems
that support, rather than exploit, our fundamental nature as social and
emotional beings.
|
2502.02531
|
Deep Linear Network Training Dynamics from Random Initialization: Data,
Width, Depth, and Hyperparameter Transfer
|
cs.LG cond-mat.dis-nn stat.ML
|
We theoretically characterize gradient descent dynamics in deep linear
networks trained at large width from random initialization and on large
quantities of random data. Our theory captures the ``wider is better'' effect of
mean-field/maximum-update parameterized networks as well as hyperparameter
transfer effects, which can be contrasted with the neural-tangent
parameterization where optimal learning rates shift with model width. We
provide asymptotic descriptions of both non-residual and residual neural
networks, the latter of which enables an infinite depth limit when branches are
scaled as $1/\sqrt{\text{depth}}$. We also compare training with one-pass
stochastic gradient descent to the dynamics when training data are repeated at
each iteration. Lastly, we show that this model recovers the accelerated power
law training dynamics for power law structured data in the rich regime observed
in recent works.
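The $1/\sqrt{\text{depth}}$ branch scaling that enables the infinite-depth limit can be seen in a forward pass through a deep linear residual stack: with the scaling the output norm stays of order $\|x\|$, while without it the norm grows roughly like $2^{\text{depth}/2}$. This is a generic illustration with assumed dimensions and thresholds, not a computation from the paper.

```python
import numpy as np

def residual_forward(x, weights, scale):
    """h <- h + scale * W h at each layer of a deep linear residual stack."""
    h = x
    for W in weights:
        h = h + scale * (W @ h)
    return h

rng = np.random.default_rng(0)
dim, depth = 64, 256
weights = [rng.normal(scale=1.0 / np.sqrt(dim), size=(dim, dim))
           for _ in range(depth)]
x = rng.normal(size=dim)

scaled = residual_forward(x, weights, scale=1.0 / np.sqrt(depth))
unscaled = residual_forward(x, weights, scale=1.0)
# With branch scaling the expected squared norm grows by (1 + 1/depth)
# per layer, approaching a factor of e; without it, by a factor of 2.
assert np.linalg.norm(scaled) < 100 * np.linalg.norm(x)
assert np.linalg.norm(unscaled) > 1e6 * np.linalg.norm(x)
```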
|
2502.02533
|
Multi-Agent Design: Optimizing Agents with Better Prompts and Topologies
|
cs.LG cs.AI cs.CL cs.MA
|
Large language models, employed as multiple agents that interact and
collaborate with each other, have excelled at solving complex tasks. The agents
are programmed with prompts that declare their functionality, along with the
topologies that orchestrate interactions across agents. Designing prompts and
topologies for multi-agent systems (MAS) is inherently complex. To automate the
entire design process, we first conduct an in-depth analysis of the design
space aiming to understand the factors behind building effective MAS. We reveal
that prompts together with topologies play critical roles in enabling more
effective MAS design. Based on the insights, we propose Multi-Agent System
Search (MASS), a MAS optimization framework that efficiently exploits the
complex MAS design space by interleaving its optimization stages, from local to
global, from prompts to topologies, over three stages: 1) block-level (local)
prompt optimization; 2) workflow topology optimization; 3) workflow-level
(global) prompt optimization, where each stage is conditioned on the
iteratively optimized prompts/topologies from former stages. We show that
MASS-optimized multi-agent systems outperform a spectrum of existing
alternatives by a substantial margin. Based on the MASS-found systems, we
finally propose design principles behind building effective multi-agent
systems.
|
2502.02534
|
Adaptive Self-improvement LLM Agentic System for ML Library Development
|
cs.CL
|
ML libraries, often written in architecture-specific programming languages
(ASPLs) that target domain-specific architectures, are key to efficient ML
systems. However, writing these high-performance ML libraries is challenging
because it requires expert knowledge of ML algorithms and the ASPL. Large
language models (LLMs), on the other hand, have shown general coding
capabilities. However, challenges remain when using LLMs for generating ML
libraries using ASPLs because 1) this task is complicated even for experienced
human programmers and 2) there are limited code examples because of the
esoteric and evolving nature of ASPLs. Therefore, LLMs need complex reasoning
with limited data in order to complete this task. To address these challenges,
we introduce an adaptive self-improvement agentic system. In order to evaluate
the effectiveness of our system, we construct a benchmark of a typical ML
library and generate ASPL code with both open and closed-source LLMs on this
benchmark. Our results show improvements of up to $3.9\times$ over a baseline
single LLM.
|
2502.02537
|
Uncertainty Quantification for Collaborative Object Detection Under
Adversarial Attacks
|
cs.CV cs.LG
|
Collaborative Object Detection (COD) and collaborative perception can
integrate data or features from various entities, and improve object detection
accuracy compared with individual perception. However, adversarial attacks pose
a potential threat to the deep learning COD models, and introduce high output
uncertainty. With unknown attack models, it becomes even more challenging to
improve COD resiliency and quantify the output uncertainty for highly dynamic
perception scenes such as autonomous vehicles. In this study, we propose the
Trusted Uncertainty Quantification in Collaborative Perception framework
(TUQCP). TUQCP leverages both adversarial training and uncertainty
quantification techniques to enhance the adversarial robustness of existing COD
models. More specifically, TUQCP first adds perturbations to the shared
information of randomly selected agents during object detection collaboration
by adversarial training. TUQCP then alleviates the impacts of adversarial
attacks by providing output uncertainty estimation through a learning-based
module and uncertainty calibration through conformal prediction. Our framework
works for early and intermediate collaboration COD models and single-agent
object detection models. We evaluate TUQCP on V2X-Sim, a comprehensive
collaborative perception dataset for autonomous driving, and demonstrate an
80.41% improvement in object detection accuracy compared to the baselines under
the same adversarial attacks. TUQCP demonstrates the importance of uncertainty
quantification to COD under adversarial attacks.
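The conformal-calibration step the framework relies on follows the standard split-conformal recipe: compute nonconformity scores on a held-out calibration set and take a finite-sample-corrected quantile as the threshold. The sketch below shows only this generic quantile computation with synthetic scores; `conformal_threshold` is a hypothetical name, not part of TUQCP.

```python
import numpy as np

def conformal_threshold(cal_scores, alpha):
    """Split-conformal quantile of nonconformity scores: a fresh
    exchangeable score falls below the returned threshold with
    probability >= 1 - alpha."""
    n = len(cal_scores)
    q = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(cal_scores, q, method="higher")

rng = np.random.default_rng(0)
cal = rng.normal(size=1000)      # nonconformity scores, calibration split
test = rng.normal(size=10000)    # exchangeable test scores
tau = conformal_threshold(cal, alpha=0.1)
coverage = float((test <= tau).mean())
assert coverage >= 0.85          # approximately 1 - alpha in practice
```

The appeal of this step is that the coverage guarantee holds without distributional assumptions, which is why it pairs naturally with learned uncertainty estimates under unknown attack models.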
|
2502.02538
|
Flow Q-Learning
|
cs.LG cs.AI
|
We present flow Q-learning (FQL), a simple and performant offline
reinforcement learning (RL) method that leverages an expressive flow-matching
policy to model arbitrarily complex action distributions in data. Training a
flow policy with RL is a tricky problem, due to the iterative nature of the
action generation process. We address this challenge by training an expressive
one-step policy with RL, rather than directly guiding an iterative flow policy
to maximize values. This way, we can completely avoid unstable recursive
backpropagation, eliminate costly iterative action generation at test time, yet
still mostly maintain expressivity. We experimentally show that FQL leads to
strong performance across 73 challenging state- and pixel-based OGBench and
D4RL tasks in offline RL and offline-to-online RL. Project page:
https://seohong.me/projects/fql/
|
2502.02542
|
OverThink: Slowdown Attacks on Reasoning LLMs
|
cs.LG cs.CR
|
We increase overhead for applications that rely on reasoning LLMs: we force
models to spend an amplified number of reasoning tokens, i.e., "overthink", to
respond to the user query while providing contextually correct answers. The
adversary performs an OVERTHINK attack by injecting decoy reasoning problems
into the public content that is used by the reasoning LLM (e.g., for RAG
applications) during inference time. Due to the nature of our decoy problems
(e.g., a Markov Decision Process), modified texts do not violate safety
guardrails. We evaluated our attack across closed-(OpenAI o1, o1-mini, o3-mini)
and open-(DeepSeek R1) weights reasoning models on the FreshQA and SQuAD
datasets. Our results show up to an 18x slowdown on the FreshQA dataset and a
46x slowdown on the SQuAD dataset. The attack also shows high transferability
across
models. To protect applications, we discuss and implement defenses leveraging
LLM-based and system design approaches. Finally, we discuss societal,
financial, and energy impacts of the OVERTHINK attack, which could amplify the costs
for third-party applications operating reasoning models.
|
2502.02544
|
Addressing Label Shift in Distributed Learning via Entropy
Regularization
|
cs.LG cs.AI
|
We address the challenge of minimizing true risk in multi-node distributed
learning. These systems are frequently exposed to both inter-node and
intra-node label shifts, which present a critical obstacle to effectively
optimizing model performance while ensuring that data remains confined to each
node. To tackle this, we propose the Versatile Robust Label Shift (VRLS)
method, which enhances the maximum likelihood estimation of the test-to-train
label density ratio. VRLS incorporates Shannon entropy-based regularization and
adjusts the density ratio during training to better handle label shifts at
test time. In multi-node learning environments, VRLS further extends its
capabilities by learning and adapting density ratios across nodes, effectively
mitigating label shifts and improving overall model performance. Experiments
conducted on MNIST, Fashion MNIST, and CIFAR-10 demonstrate the effectiveness
of VRLS, outperforming baselines by up to 20% in imbalanced settings. These
results highlight the significant improvements VRLS offers in addressing label
shifts. Our theoretical analysis further supports this by establishing
high-probability bounds on estimation errors.
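The density-ratio reweighting that VRLS's estimator feeds into can be sketched generically: estimate $w(y) = p_{\text{test}}(y)/p_{\text{train}}(y)$ and use it to reweight the per-class training loss. The helper name and toy priors below are assumptions for illustration, and the sketch omits the Shannon-entropy regularization that distinguishes the actual method.

```python
import numpy as np

def label_shift_weights(train_labels, est_test_priors, n_classes):
    """w(y) = p_test(y) / p_train(y): importance weights that correct
    the training loss for a shifted test label distribution."""
    counts = np.bincount(train_labels, minlength=n_classes)
    train_priors = counts / counts.sum()
    return est_test_priors / np.clip(train_priors, 1e-12, None)

train_labels = np.array([0] * 80 + [1] * 20)   # imbalanced training node
est_test_priors = np.array([0.5, 0.5])         # estimated test priors
w = label_shift_weights(train_labels, est_test_priors, n_classes=2)
assert np.allclose(w, [0.625, 2.5])            # rare class is up-weighted
```

In the multi-node setting, each node would estimate its own ratio, so the rare classes at each node are up-weighted toward the shared test distribution.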
|
2502.02545
|
Optimal Spectral Transitions in High-Dimensional Multi-Index Models
|
cs.LG cond-mat.dis-nn
|
We consider the problem of how many samples from a Gaussian multi-index model
are required to weakly reconstruct the relevant index subspace. Despite its
increasing popularity as a testbed for investigating the computational
complexity of neural networks, results beyond the single-index setting remain
elusive. In this work, we introduce spectral algorithms based on the
linearization of a message passing scheme tailored to this problem. Our main
contribution is to show that the proposed methods achieve the optimal
reconstruction threshold. Leveraging a high-dimensional characterization of the
algorithms, we show that above the critical threshold the leading eigenvector
correlates with the relevant index subspace, a phenomenon reminiscent of the
Baik-Ben Arous-P\'ech\'e (BBP) transition in spiked models arising in random matrix
theory. Supported by numerical experiments and a rigorous theoretical
framework, our work bridges critical gaps in the computational limits of weak
learnability in multi-index models.
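The BBP phenomenon referenced here can be reproduced in the classical spiked Wigner setting: above signal-to-noise ratio 1 the top eigenvector correlates with the planted direction, below it the correlation vanishes. This is a generic random-matrix illustration with hypothetical helper names, not the paper's message-passing spectral method.

```python
import numpy as np

def spiked_wigner(n, snr, seed):
    """Rank-one spike snr * v v^T plus a symmetric Wigner noise matrix."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=n)
    v /= np.linalg.norm(v)
    G = rng.normal(size=(n, n))
    W = (G + G.T) / np.sqrt(2 * n)   # entries of variance 1/n, bulk edge at 2
    return snr * np.outer(v, v) + W, v

n = 800
M_hi, v = spiked_wigner(n, snr=3.0, seed=0)   # above the snr = 1 threshold
M_lo, u = spiked_wigner(n, snr=0.3, seed=0)   # below the threshold
top_hi = np.linalg.eigh(M_hi)[1][:, -1]       # eigh sorts ascending
top_lo = np.linalg.eigh(M_lo)[1][:, -1]
assert abs(top_hi @ v) > 0.8   # leading eigenvector aligns with the spike
assert abs(top_lo @ u) < 0.3   # no macroscopic alignment below threshold
```

Above the threshold the squared overlap approaches $1 - 1/\text{snr}^2$ as $n \to \infty$, the same sharp transition the abstract's spectral algorithms are shown to inherit.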
|
2502.02548
|
Mosaic3D: Foundation Dataset and Model for Open-Vocabulary 3D
Segmentation
|
cs.CV
|
We tackle open-vocabulary 3D scene understanding by introducing a novel data
generation pipeline and training framework. Our method addresses three critical
requirements for effective training: precise 3D region segmentation,
comprehensive textual descriptions, and sufficient dataset scale. By leveraging
state-of-the-art open-vocabulary image segmentation models and region-aware
Vision-Language Models, we develop an automatic pipeline that generates
high-quality 3D mask-text pairs. Applying this pipeline to multiple 3D scene
datasets, we create Mosaic3D-5.6M, a dataset of over 30K annotated scenes with
5.6M mask-text pairs, significantly larger than existing datasets. Building
upon this data, we propose Mosaic3D, a foundation model combining a 3D encoder
trained with contrastive learning and a lightweight mask decoder for
open-vocabulary 3D semantic and instance segmentation. Our approach achieves
state-of-the-art results on open-vocabulary 3D semantic and instance
segmentation tasks including ScanNet200, Matterport3D, and ScanNet++, with
ablation studies validating the effectiveness of our large-scale training data.
|
2502.02549
|
Anytime Incremental $\rho$POMDP Planning in Continuous Spaces
|
cs.AI cs.LG cs.RO
|
Partially Observable Markov Decision Processes (POMDPs) provide a robust
framework for decision-making under uncertainty in applications such as
autonomous driving and robotic exploration. Their extension, $\rho$POMDPs,
introduces belief-dependent rewards, enabling explicit reasoning about
uncertainty. Existing online $\rho$POMDP solvers for continuous spaces rely on
fixed belief representations, limiting adaptability and refinement - critical
for tasks such as information-gathering. We present $\rho$POMCPOW, an anytime
solver that dynamically refines belief representations, with formal guarantees
of improvement over time. To mitigate the high computational cost of updating
belief-dependent rewards, we propose a novel incremental computation approach.
We demonstrate its effectiveness for common entropy estimators, reducing
computational cost by orders of magnitude. Experimental results show that
$\rho$POMCPOW outperforms state-of-the-art solvers in both efficiency and
solution quality.
|
2502.02550
|
Reachability-Based Contingency Planning against Multi-Modal Predictions
with Branch MPC
|
eess.SY cs.SY
|
This paper presents a novel contingency planning framework that integrates
learning-based multi-modal predictions of traffic participants into Branch
Model Predictive Control (MPC). Leveraging reachability analysis, we address
the computational challenges associated with Branch MPC by organizing the
multitude of predictions into driving corridors. Analyzing the overlap between
these corridors, their number can be reduced through pruning and clustering
while ensuring safety since all prediction modes are preserved. These processed
corridors directly correspond to the distinct branches of the scenario tree and
provide an efficient constraint representation for the Branch MPC. We further
utilize reachability analysis to determine maximum feasible decision-postponing
times, ensuring that branching decisions remain executable. Qualitative and
quantitative evaluations demonstrate significantly reduced computational
complexity and enhanced safety and comfort.
|
2502.02552
|
Hierarchical Sparse Bayesian Multitask Model with Scalable Inference for
Microbiome Analysis
|
cs.LG q-bio.BM stat.AP stat.CO stat.ME
|
This paper proposes a hierarchical Bayesian multitask learning model that is
applicable to the general multi-task binary classification learning problem
where the model assumes a shared sparsity structure across different tasks. We
derive a computationally efficient inference algorithm based on variational
inference to approximate the posterior distribution. We demonstrate the
potential of the new approach on various synthetic datasets and for predicting
human health status based on microbiome profile. Our analysis incorporates data
pooled from multiple microbiome studies, along with a comprehensive comparison
with other benchmark methods. Results in synthetic datasets show that the
proposed approach has superior support recovery property when the underlying
regression coefficients share a common sparsity structure across different
tasks. Our experiments on microbiome classification demonstrate the utility of
the method in extracting informative taxa while providing well-calibrated
predictions with uncertainty quantification and achieving competitive
performance in terms of prediction metrics. Notably, despite the heterogeneity
of the pooled datasets (e.g., different experimental objectives, laboratory
setups, sequencing equipment, patient demographics), our method delivers robust
results.
|
2502.02555
|
AAD-DCE: An Aggregated Multimodal Attention Mechanism for Early and Late
Dynamic Contrast Enhanced Prostate MRI Synthesis
|
eess.IV cs.CV
|
Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) is a medical
imaging technique that plays a crucial role in the detailed visualization and
identification of tissue perfusion in abnormal lesions and radiological
suggestions for biopsy. However, DCE-MRI involves the administration of a
Gadolinium based (Gad) contrast agent, which is associated with a risk of
toxicity in the body. Previous deep learning approaches that synthesize DCE-MR
images employ unimodal non-contrast or low-dose contrast MRI images, lacking
focus on the local perfusion information within the anatomy of interest. We
propose AAD-DCE, a generative adversarial network (GAN) with an aggregated
attention discriminator module consisting of global and local discriminators.
The discriminators provide a spatial embedded attention map to drive the
generator to synthesize early and late response DCE-MRI images. Our method
employs multimodal inputs - T2 weighted (T2W), Apparent Diffusion Coefficient
(ADC), and T1 pre-contrast for image synthesis. Extensive comparative and
ablation studies on the ProstateX dataset show that our model (i) is agnostic
to various generator benchmarks and (ii) outperforms other DCE-MRI synthesis
approaches with improvement margins of +0.64 dB PSNR, +0.0518 SSIM, -0.015 MAE
for early response and +0.1 dB PSNR, +0.0424 SSIM, -0.021 MAE for late
response; the studies also emphasize the importance of attention ensembling. Our code
is available at https://github.com/bhartidivya/AAD-DCE.
|
2502.02558
|
Particle Trajectory Representation Learning with Masked Point Modeling
|
hep-ex cs.CV cs.LG
|
Effective self-supervised learning (SSL) techniques have been key to
unlocking large datasets for representation learning. While many promising
methods have been developed using online corpora and captioned photographs,
their application to scientific domains, where data encodes highly specialized
knowledge, remains in its early stages. We present a self-supervised masked
modeling framework for 3D particle trajectory analysis in Time Projection
Chambers (TPCs). These detectors produce globally sparse (<1% occupancy) but
locally dense point clouds, capturing meter-scale particle trajectories at
millimeter resolution. Starting with PointMAE, this work proposes volumetric
tokenization to group sparse ionization points into resolution-agnostic
patches, as well as an auxiliary energy infilling task to improve trajectory
semantics. This approach -- which we call Point-based Liquid Argon Masked
Autoencoder (PoLAr-MAE) -- achieves 99.4% track and 97.7% shower classification
F-scores, matching those of supervised baselines without any labeled data. While

the model learns rich particle trajectory representations, it struggles with
sub-token phenomena like overlapping or short-lived particle trajectories. To
support further research, we release PILArNet-M -- the largest open LArTPC
dataset (1M+ events, 5.2B labeled points) -- to advance SSL in high energy
physics (HEP). Project site: https://youngsm.com/polarmae/
|
2502.02561
|
Decision Theoretic Foundations for Conformal Prediction: Optimal
Uncertainty Quantification for Risk-Averse Agents
|
cs.LG cs.AI stat.ML
|
A fundamental question in data-driven decision making is how to quantify the
uncertainty of predictions in ways that can usefully inform downstream action.
This interface between prediction uncertainty and decision-making is especially
important in risk-sensitive domains, such as medicine. In this paper, we
develop decision-theoretic foundations that connect uncertainty quantification
using prediction sets with risk-averse decision-making. Specifically, we answer
three fundamental questions: (1) What is the correct notion of uncertainty
quantification for risk-averse decision makers? We prove that prediction sets
are optimal for decision makers who wish to optimize their value at risk. (2)
What is the optimal policy that a risk averse decision maker should use to map
prediction sets to actions? We show that a simple max-min decision policy is
optimal for risk-averse decision makers. Finally, (3) How can we derive
prediction sets that are optimal for such decision makers? We provide an exact
characterization in the population regime and a distribution free finite-sample
construction. Answering these questions naturally leads to an algorithm,
Risk-Averse Calibration (RAC), which follows a provably optimal design for
deriving action policies from predictions. RAC is designed to be both
practical (capable of leveraging the quality of predictions in a black-box
manner to enhance downstream utility) and safe (adhering to a user-defined risk
threshold and optimizing the corresponding risk quantile of the user's
downstream utility). Finally, we experimentally demonstrate the significant
advantages of RAC in applications such as medical diagnosis and recommendation
systems. Specifically, we show that RAC achieves a substantially improved
trade-off between safety and utility, offering higher utility compared to
existing methods while maintaining the safety guarantee.
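The max-min policy the abstract describes admits a very small sketch: given a prediction set and a utility function, act to maximize the worst-case utility over the labels the set still considers possible. The action names and utility numbers below are hypothetical, not from the paper.

```python
def max_min_action(prediction_set, actions, utility):
    """Pick the action maximizing the worst-case utility over the labels
    contained in the prediction set (the max-min rule described above)."""
    return max(actions,
               key=lambda a: min(utility(a, y) for y in prediction_set))
```

For a toy diagnosis setting, a wide prediction set that still contains "disease" pushes the rule toward the protective action, while a confident singleton set permits deferral.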
|
2502.02562
|
Learning the RoPEs: Better 2D and 3D Position Encodings with STRING
|
cs.LG cs.AI cs.CV cs.RO stat.ML
|
We introduce STRING: Separable Translationally Invariant Position Encodings.
STRING extends Rotary Position Encodings, a recently proposed and widely used
algorithm in large language models, via a unifying theoretical framework.
Importantly, STRING still provides exact translation invariance, including
for token coordinates of arbitrary dimensionality, whilst maintaining a low
computational footprint. These properties are especially important in robotics,
where efficient 3D token representation is key. We integrate STRING into Vision
Transformers with RGB(-D) inputs (color plus optional depth), showing
substantial gains, e.g. in open-vocabulary object detection and for robotics
controllers. We complement our experiments with a rigorous mathematical
analysis, proving the universality of our methods.
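For readers unfamiliar with the starting point, a minimal 1D Rotary Position Encoding sketch (plain RoPE, not STRING itself) illustrates the translation-invariance property the abstract refers to: rotated query/key inner products depend only on the position offset, so shifting both positions by the same amount leaves attention scores unchanged.

```python
import math

def rope(x, pos, base=10000.0):
    """Apply rotary position encoding to a vector x of even length at integer
    position pos: rotate each 2D pair (x[i], x[i+1]) by the angle pos * theta_i."""
    d = len(x)
    out = [0.0] * d
    for i in range(0, d, 2):
        theta = base ** (-i / d)          # per-pair frequency
        ang = pos * theta
        c, s = math.cos(ang), math.sin(ang)
        out[i] = x[i] * c - x[i + 1] * s
        out[i + 1] = x[i] * s + x[i + 1] * c
    return out
```

Because each 2D pair is rotated by pos * theta, the inner product of a rotated query at position m and a rotated key at position n depends only on m - n.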
|
2502.02565
|
Revisiting Expected Possession Value in Football: Introducing a
Benchmark, U-Net Architecture, and Reward and Risk for Passes
|
cs.CV cs.LG
|
This paper introduces the first Expected Possession Value (EPV) benchmark and
a new and improved EPV model for football. Through the introduction of the
OJN-Pass-EPV benchmark, we present a novel method to quantitatively assess the
quality of EPV models by using pairs of game states with given relative EPVs.
Next, we attempt to replicate the results of Fernández et al. (2021) using a
dataset containing Dutch Eredivisie and World Cup matches. Following our
failure to do so, we propose a new architecture based on U-net-type
convolutional neural networks, achieving good results in model loss and
Expected Calibration Error. Finally, we present an improved pass model that
incorporates ball height and contains a new dual-component pass value model
that analyzes reward and risk. The resulting EPV model correctly identifies the
higher value state in 78% of the game state pairs in the OJN-Pass-EPV
benchmark, demonstrating its ability to accurately assess goal-scoring
potential. Our findings can help assess the quality of EPV models, improve EPV
predictions, help assess potential reward and risk of passing decisions, and
improve player and team performance.
|
2502.02567
|
Fairness in Survival Analysis: A Novel Conditional Mutual Information
Augmentation Approach
|
cs.LG cs.AI
|
Survival analysis, a vital tool for predicting the time to event, has been
used in many domains such as healthcare, criminal justice, and finance. Like
classification tasks, survival analysis can exhibit bias against disadvantaged
groups, often due to biases inherent in data or algorithms. Several studies in
both the IS and CS communities have attempted to address fairness in survival
analysis. However, existing methods often overlook the importance of prediction
fairness at pre-defined evaluation time points, which is crucial in real-world
applications where decision making often hinges on specific time frames. To
address this critical research gap, we introduce a new fairness concept:
equalized odds (EO) in survival analysis, which emphasizes prediction fairness
at pre-defined time points. To achieve EO fairness in survival analysis, we
propose a Conditional Mutual Information Augmentation (CMIA) approach, which
features a novel fairness regularization term based on conditional mutual
information and an innovative censored data augmentation technique. Our CMIA
approach can effectively balance prediction accuracy and fairness, and it is
applicable to various survival models. We evaluate the CMIA approach against
several state-of-the-art methods within three different application domains,
and the results demonstrate that CMIA consistently reduces prediction disparity
while maintaining good accuracy and significantly outperforms the other
competing methods across multiple datasets and survival models (e.g., linear
COX, deep AFT).
|
2502.02573
|
Are Language Models Up to Sequential Optimization Problems? From
Evaluation to a Hegelian-Inspired Enhancement
|
cs.CL cs.AI
|
Large Language Models (LLMs) have demonstrated impressive capabilities across
numerous fields, presenting an opportunity to revolutionize optimization
problem-solving, a crucial, ubiquitous, and complex domain. This paper explores
the proficiency of LLMs in handling Sequential Optimization Problems (SOPs). We
introduce WorldGen, a dynamic framework for generating unseen SOPs with
controllable complexities, to evaluate LLM performance. Our initial
observations reveal that while LLMs perform well on simple SOPs, their
performance significantly degrades with increased complexity. Motivated by
this, we revisit philosophical hypotheses on reasoning to enhance LLM
performance. Inspired by the influential framework of Hegelian Dialectics, we
propose ACE, demonstrating how the performance of LLMs in SOP contexts can be
significantly improved without any retraining or further fine-tuning.
|
2502.02577
|
A comparison of translation performance between DeepL and Supertext
|
cs.CL
|
As strong machine translation (MT) systems are increasingly based on large
language models (LLMs), reliable quality benchmarking requires methods that
capture their ability to leverage extended context. This study compares two
commercial MT systems -- DeepL and Supertext -- by assessing their performance
on unsegmented texts. We evaluate translation quality across four language
directions with professional translators assessing segments with full
document-level context. While segment-level assessments indicate no strong
preference between the systems in most cases, document-level analysis reveals a
preference for Supertext in three out of four language directions, suggesting
superior consistency across longer texts. We advocate for more
context-sensitive evaluation methodologies to ensure that MT quality
assessments reflect real-world usability. We release all evaluation data and
scripts for further analysis and reproduction at
https://github.com/supertext/evaluation_deepl_supertext.
|
2502.02582
|
Open Materials Generation with Stochastic Interpolants
|
cs.LG cond-mat.mtrl-sci
|
The discovery of new materials is essential for enabling technological
advancements. Computational approaches for predicting novel materials must
effectively learn the manifold of stable crystal structures within an infinite
design space. We introduce Open Materials Generation (OMG), a unifying
framework for the generative design and discovery of inorganic crystalline
materials. OMG employs stochastic interpolants (SI) to bridge an arbitrary base
distribution to the target distribution of inorganic crystals via a broad class
of tunable stochastic processes, encompassing both diffusion models and flow
matching as special cases. In this work, we adapt the SI framework by
integrating an equivariant graph representation of crystal structures and
extending it to account for periodic boundary conditions in unit cell
representations. Additionally, we couple the SI flow over spatial coordinates
and lattice vectors with discrete flow matching for atomic species. We
benchmark OMG's performance on two tasks: Crystal Structure Prediction (CSP)
for specified compositions, and 'de novo' generation (DNG) aimed at discovering
stable, novel, and unique structures. In our ground-up implementation of OMG,
we refine and extend both CSP and DNG metrics compared to previous works. OMG
establishes a new state-of-the-art in generative modeling for materials
discovery, outperforming purely flow-based and diffusion-based implementations.
These results underscore the importance of designing flexible deep learning
frameworks to accelerate progress in materials science.
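As background (not OMG's actual parameterization), a stochastic interpolant in its simplest deterministic form bridges a base sample x0 to a data sample x1 along x_t = alpha(t) * x0 + beta(t) * x1; different schedule choices recover flow matching, and adding a tunable noise term recovers diffusion-style bridges as special cases. The linear schedules below are an illustrative default.

```python
def interpolant(x0, x1, t, alpha=lambda t: 1 - t, beta=lambda t: t):
    """Return x_t = alpha(t)*x0 + beta(t)*x1 and, for the default linear
    schedules (alpha' = -1, beta' = +1), the velocity target d/dt x_t = x1 - x0
    that a flow-matching model would regress."""
    xt = [alpha(t) * a + beta(t) * b for a, b in zip(x0, x1)]
    vt = [b - a for a, b in zip(x0, x1)]   # valid for the linear choice only
    return xt, vt
```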
|
2502.02584
|
QLASS: Boosting Language Agent Inference via Q-Guided Stepwise Search
|
cs.LG cs.AI
|
Language agents have become a promising solution to complex interactive
tasks. One of the key ingredients to the success of language agents is the
reward model on the trajectory of the agentic workflow, which provides valuable
guidance during training or inference. However, due to the lack of annotations
of intermediate interactions, most existing works use an outcome reward model
to optimize policies across entire trajectories. This may lead to sub-optimal
policies and hinder the overall performance. To address this, we propose QLASS
(Q-guided Language Agent Stepwise Search), to automatically generate
annotations by estimating Q-values in a stepwise manner for open language
agents. By introducing a reasoning tree and performing process reward modeling,
QLASS provides effective intermediate guidance for each step. With the stepwise
guidance, we propose a Q-guided generation strategy to enable language agents
to better adapt to long-term value, resulting in significant performance
improvement during model inference on complex interactive agent tasks. Notably,
even with almost half the annotated data, QLASS retains strong performance,
demonstrating its efficiency in handling limited supervision. We also
empirically demonstrate that QLASS can lead to more effective decision making
through qualitative analysis. We will release our code and data.
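The abstract leaves the Q-estimation details open; one common way to turn outcome rewards into stepwise guidance on a reasoning tree, sketched here as an assumption rather than QLASS's exact rule, is to back up each leaf's outcome reward and score every intermediate step by the best outcome reachable from it.

```python
def backup_q(tree, node, rewards):
    """Back up terminal outcome rewards through a reasoning tree: a leaf's
    Q-value is its outcome reward, and an internal node's Q-value is the best
    Q among its children (one common heuristic, not necessarily QLASS's)."""
    children = tree.get(node, [])
    if not children:
        return {node: rewards[node]}
    q = {}
    for c in children:
        q.update(backup_q(tree, c, rewards))
    q[node] = max(q[c] for c in children)
    return q
```

The resulting per-node Q-values can then serve as process rewards to rank candidate next steps during generation.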
|
2502.02587
|
Spatio-temporal transformer to support automatic sign language
translation
|
cs.CL
|
Sign Language Translation (SLT) systems support communication for
hearing-impaired people by finding equivalences between signed and spoken
languages. This task is nevertheless challenging due to the many sign
variations, the complexity of the language, and the inherent richness of its
expressions. Computational approaches have demonstrated capabilities to support
SLT. Nonetheless, these approaches remain limited in covering gesture
variability and supporting long-sequence translations.
This paper introduces a Transformer-based architecture that encodes
spatio-temporal motion gestures, preserving both local and long-range spatial
information through the use of multiple convolutional and attention mechanisms.
The proposed approach was validated on the Colombian Sign Language Translation
Dataset (CoL-SLTD) outperforming baseline approaches, and achieving a BLEU4 of
46.84%. Additionally, the proposed approach was validated on the
RWTH-PHOENIX-Weather-2014T (PHOENIX14T), achieving a BLEU4 score of 30.77%,
demonstrating its robustness and effectiveness in handling real-world
variations.
|
2502.02588
|
Calibrated Multi-Preference Optimization for Aligning Diffusion Models
|
cs.CV
|
Aligning text-to-image (T2I) diffusion models with preference optimization is
valuable for human-annotated datasets, but the heavy cost of manual data
collection limits scalability. Using reward models offers an alternative;
however, current preference optimization methods fall short in exploiting their
rich information, as they only consider pairwise preference distributions.
Furthermore, they lack generalization to multi-preference scenarios and
struggle to handle inconsistencies between rewards. To address this, we present
Calibrated Preference Optimization (CaPO), a novel method to align T2I
diffusion models by incorporating the general preference from multiple reward
models without human annotated data. The core of our approach involves a reward
calibration method to approximate the general preference by computing the
expected win-rate against the samples generated by the pretrained models.
Additionally, we propose a frontier-based pair selection method that
effectively manages the multi-preference distribution by selecting pairs from
Pareto frontiers. Finally, we use regression loss to fine-tune diffusion models
to match the difference between calibrated rewards of a selected pair.
Experimental results show that CaPO consistently outperforms prior methods,
such as Direct Preference Optimization (DPO), in both single and multi-reward
settings validated by evaluation on T2I benchmarks, including GenEval and
T2I-Compbench.
|
2502.02589
|
COCONut-PanCap: Joint Panoptic Segmentation and Grounded Captions for
Fine-Grained Understanding and Generation
|
cs.CV
|
This paper introduces the COCONut-PanCap dataset, created to enhance panoptic
segmentation and grounded image captioning. Building upon the COCO dataset with
advanced COCONut panoptic masks, this dataset aims to overcome limitations in
existing image-text datasets that often lack detailed, scene-comprehensive
descriptions. The COCONut-PanCap dataset incorporates fine-grained,
region-level captions grounded in panoptic segmentation masks, ensuring
consistency and improving the detail of generated captions. Through
human-edited, densely annotated descriptions, COCONut-PanCap supports improved
training of vision-language models (VLMs) for image understanding and
generative models for text-to-image tasks. Experimental results demonstrate
that COCONut-PanCap significantly boosts performance across understanding and
generation tasks, offering complementary benefits to large-scale datasets. This
dataset sets a new benchmark for evaluating models on joint panoptic
segmentation and grounded captioning tasks, addressing the need for
high-quality, detailed image-text annotations in multi-modal learning.
|
2502.02590
|
Articulate AnyMesh: Open-Vocabulary 3D Articulated Objects Modeling
|
cs.CV cs.RO
|
3D articulated objects modeling has long been a challenging problem, since it
requires capturing both accurate surface geometries and semantically
meaningful and spatially precise structures, parts, and joints. Existing
methods heavily depend on training data from a limited set of handcrafted
articulated object categories (e.g., cabinets and drawers), which restricts
their ability to model a wide range of articulated objects in an
open-vocabulary context. To address these limitations, we propose Articulate
Anymesh, an automated framework that is able to convert any rigid 3D mesh into
its articulated counterpart in an open-vocabulary manner. Given a 3D mesh, our
framework utilizes advanced Vision-Language Models and visual prompting
techniques to extract semantic information, allowing for both the segmentation
of object parts and the construction of functional joints. Our experiments show
that Articulate Anymesh can generate large-scale, high-quality 3D articulated
objects, including tools, toys, mechanical devices, and vehicles, significantly
expanding the coverage of existing 3D articulated object datasets.
Additionally, we show that these generated assets can facilitate the
acquisition of new articulated object manipulation skills in simulation, which
can then be transferred to a real robotic system. Our Github website is
https://articulate-anymesh.github.io.
|
2502.02591
|
Investigation on the Shooting Method Ability to Solve Different Mooring
Lines Boundary Condition Types
|
cs.CE
|
The study of undersea cable and mooring line statics remains an unavoidable
subject of simulation in the offshore field, for either steady-state analysis or
dynamic simulation initialization. Whether the study concerns mooring systems
pinned both at seabed and floating platform, cables towed by a moving
underwater system or when special links such as stiffeners are needed, the
ability to model every combination is a key point. To do so, the authors propose
to investigate the use of the shooting method to solve the two point boundary
value problem (TPBVP) associated with Dirichlet, Robin or mixed boundary
conditions representing respectively, displacement, force and
force/displacement boundary conditions. 3D nonlinear static string calculations
are compared against a semi-analytic formulation established from the catenary
closed form equations. The comparisons are performed on various pairs of
boundary conditions developed in five configurations.
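The shooting idea itself is standard and easy to illustrate on a scalar analogue of the cable problem: the planar catenary ODE y'' = sqrt(1 + y'^2) with Dirichlet conditions y(0) = y(1) = 0. One guesses the unknown initial slope, integrates the resulting initial value problem, and root-finds on the endpoint mismatch (here with a secant iteration); the 3D string case in the paper follows the same pattern with more state variables and other boundary-condition types.

```python
import math

def integrate(f, y0, s, x0, x1, n=1000):
    """RK4-integrate y'' = f(x, y, y') from x0 to x1 with y(x0)=y0, y'(x0)=s;
    return the endpoint value y(x1)."""
    h = (x1 - x0) / n
    x, y, yp = x0, y0, s
    deriv = lambda x, y, yp: (yp, f(x, y, yp))
    for _ in range(n):
        k1 = deriv(x, y, yp)
        k2 = deriv(x + h/2, y + h/2*k1[0], yp + h/2*k1[1])
        k3 = deriv(x + h/2, y + h/2*k2[0], yp + h/2*k2[1])
        k4 = deriv(x + h, y + h*k3[0], yp + h*k3[1])
        y  += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        yp += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        x  += h
    return y

def shoot(f, y0, yL, x0, x1, s0=-1.0, s1=1.0, tol=1e-10):
    """Secant iteration on the initial slope s until y(x1; s) hits yL."""
    g0 = integrate(f, y0, s0, x0, x1) - yL
    g1 = integrate(f, y0, s1, x0, x1) - yL
    for _ in range(100):
        if abs(g1) < tol:
            break
        s0, s1 = s1, s1 - g1 * (s1 - s0) / (g1 - g0)
        g0, g1 = g1, integrate(f, y0, s1, x0, x1) - yL
    return s1
```

For this test problem the exact solution is y = cosh(x - 1/2) - cosh(1/2), so the recovered initial slope should approach sinh(-1/2).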
|
2502.02592
|
A Paradigm Shift to Assembly-like Finite Element Model Updating
|
cs.CE
|
In general, there is a mismatch between a finite element model of a structure
and its real behaviour. In aeronautics, this mismatch must be small because
finite element models are a fundamental part of the development of an aircraft
and of increasing importance with the trend to more flexible wings in modern
designs. Finite element model updating can be computationally expensive for
complex structures and surrogate models can be employed to reduce the
computational burden. A novel approach for finite element model updating,
namely assembly-like, is proposed and validated using real experimental data.
The assembly-like model updating framework implies that the model is updated as
parts are assembled. Benchmarking against the classical global, or one-shot,
approach demonstrates that the proposed method is more computationally
efficient, since it takes 20% fewer iterations to reach convergence while also
using fewer parameters for the model evaluations. Despite the increase in
computational performance, the new approach retains the fidelity of the global
approach.
|
2502.02593
|
Reconstructing 3D Flow from 2D Data with Diffusion Transformer
|
cs.CE cs.AI physics.flu-dyn
|
Fluid flow is a widely applied physical problem, crucial in various fields.
Due to the highly nonlinear and chaotic nature of fluids, analyzing
fluid-related problems is exceptionally challenging. Computational fluid
dynamics (CFD) is the best tool for this analysis but involves significant
computational resources, especially for 3D simulations, which are slow and
resource-intensive. In experimental fluid dynamics, the cost of particle image
velocimetry (PIV) increases with dimensionality. Reconstructing 3D flow fields
from 2D PIV data could reduce costs and expand application scenarios. Here, we
propose a Diffusion
Transformer-based method for reconstructing 3D flow fields from 2D flow data.
By embedding the positional information of 2D planes into the model, we enable
the reconstruction of 3D flow fields from any combination of 2D slices,
enhancing flexibility. We replace global attention with window and plane
attention to reduce computational costs associated with higher dimensions
without compromising performance. Our experiments demonstrate that our model
can efficiently and accurately reconstruct 3D flow fields from 2D data,
producing realistic results.
|
2502.02594
|
Offshore Wind Turbine Tower Design and Optimization: A Review and
AI-Driven Future Directions
|
cs.CE cs.SY eess.SY
|
Offshore wind energy leverages the high intensity and consistency of oceanic
winds, playing a key role in the transition to renewable energy. As energy
demands grow, larger turbines are required to optimize power generation and
reduce the Levelized Cost of Energy (LCoE), which represents the average cost
of electricity over a project's lifetime. However, upscaling turbines
introduces engineering challenges, particularly in the design of supporting
structures, especially towers. These towers must support increased loads while
maintaining structural integrity, cost-efficiency, and transportability, making
them essential to offshore wind projects' success. This paper presents a
comprehensive review of the latest advancements, challenges, and future
directions driven by Artificial Intelligence (AI) in the design optimization of
Offshore Wind Turbine (OWT) structures, with a focus on towers. It provides an
in-depth background on key areas such as design types, load types, analysis
methods, design processes, monitoring systems, Digital Twin (DT), software,
standards, reference turbines, economic factors, and optimization techniques.
Additionally, it includes a state-of-the-art review of optimization studies
related to tower design optimization, presenting a detailed examination of
turbine, software, loads, optimization method, design variables and
constraints, analysis, and findings, motivating future research to refine
design approaches for effective turbine upscaling and improved efficiency.
Lastly, the paper explores future directions where AI can revolutionize tower
design optimization, enabling the development of efficient, scalable, and
sustainable structures. By addressing the upscaling challenges and supporting
the growth of renewable energy, this work contributes to shaping the future of
offshore wind turbine towers and other supporting structures.
|
2502.02602
|
A Quasi-Optimal Shape Design Method for Lattice Structure Construction
|
cs.CE math.OC
|
Lattice structures, known for their superior mechanical properties, are
widely used in industries such as aerospace, automotive, and biomedical. Their
advantages primarily lie in the interconnected struts at the micro-scale. The
robust construction of these struts is crucial for downstream design and
manufacturing applications, as it provides a detailed shape description
necessary for precise simulation and fabrication. However, constructing lattice
structures presents significant challenges, particularly at nodes where
multiple struts intersect. The complexity of these intersections can lead to
robustness issues. To address this challenge, this paper presents an
optimization-based approach that simplifies the construction of lattice
structures by cutting struts and connecting them to optimized node shapes. By
utilizing the recent Grey Wolf optimization method -- a type of meta-heuristic
method -- for node shape design, the approach ensures robust model construction
and optimal shape design. Its effectiveness has been validated through a series
of case studies with increasing topological and geometric complexity.
|
2502.02603
|
SEAL: Speech Embedding Alignment Learning for Speech Large Language
Model with Retrieval-Augmented Generation
|
eess.AS cs.CL cs.SD
|
Embedding-based retrieval models have made significant strides in
retrieval-augmented generation (RAG) techniques for text and multimodal large
language models (LLMs) applications. However, when it comes to speech large
language models (SLLMs), these methods are limited to a two-stage process,
where automatic speech recognition (ASR) is combined with text-based retrieval.
This sequential architecture suffers from high latency and error propagation.
To address these limitations, we propose a unified embedding framework that
eliminates the need for intermediate text representations. Specifically, the
framework includes separate speech and text encoders, followed by a shared
scaling layer that maps both modalities into a common embedding space. Our
model reduces pipeline latency by 50% while achieving higher retrieval
accuracy compared to traditional two-stage methods. We also provide a
theoretical analysis of the challenges inherent in end-to-end speech retrieval
and introduce architectural principles for effective speech-to-document
matching. Extensive experiments demonstrate the robustness of our approach
across diverse acoustic conditions and speaker variations, paving the way for a
new paradigm in multimodal SLLM retrieval systems.
|
2502.02605
|
Physically Interpretable Representation and Controlled Generation for
Turbulence Data
|
cs.CE cs.LG physics.comp-ph physics.flu-dyn
|
Computational Fluid Dynamics (CFD) plays a pivotal role in fluid mechanics,
enabling precise simulations of fluid behavior through partial differential
equations (PDEs). However, traditional CFD methods are resource-intensive,
particularly for high-fidelity simulations of complex flows, which are further
complicated by high dimensionality, inherent stochasticity, and limited data
availability. This paper addresses these challenges by proposing a data-driven
approach that leverages a Gaussian Mixture Variational Autoencoder (GMVAE) to
encode high-dimensional scientific data into low-dimensional, physically
meaningful representations. The GMVAE learns a structured latent space where
data can be categorized based on physical properties such as the Reynolds
number while maintaining global physical consistency. To assess the
interpretability of the learned representations, we introduce a novel metric
based on graph spectral theory, quantifying the smoothness of physical
quantities along the latent manifold. We validate our approach using 2D
Navier-Stokes simulations of flow past a cylinder over a range of Reynolds
numbers. Our results demonstrate that the GMVAE provides improved clustering,
meaningful latent structure, and robust generative capabilities compared to
baseline dimensionality reduction methods. This framework offers a promising
direction for data-driven turbulence modeling and broader applications in
computational fluid dynamics and engineering systems.
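The abstract does not specify the metric's construction; one standard graph-spectral choice, sketched here as an assumption, is the Rayleigh quotient f^T L f / f^T f of a physical quantity f (e.g., the Reynolds number) over the unnormalized Laplacian L = D - W of a k-nearest-neighbour graph on the latent codes. Smaller values mean the quantity varies more smoothly along the latent manifold.

```python
import math

def knn_graph(points, k):
    """Symmetric k-nearest-neighbour adjacency matrix with unit edge weights."""
    n = len(points)
    adj = [[0.0] * n for _ in range(n)]
    for i in range(n):
        order = sorted(range(n), key=lambda j: math.dist(points[i], points[j]))
        for j in order[1:k + 1]:          # skip self (distance 0)
            adj[i][j] = adj[j][i] = 1.0
    return adj

def laplacian_smoothness(points, f, k=2):
    """Rayleigh quotient f^T L f / f^T f with L = D - W on a kNN graph,
    using f^T L f = 1/2 * sum_ij w_ij (f_i - f_j)^2."""
    w = knn_graph(points, k)
    n = len(points)
    num = 0.5 * sum(w[i][j] * (f[i] - f[j]) ** 2
                    for i in range(n) for j in range(n))
    den = sum(fi * fi for fi in f)
    return num / den
```

A quantity that increases monotonically along a 1D latent manifold scores lower (smoother) than the same values scrambled over the same points.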
|
2502.02607
|
MIND: Microstructure INverse Design with Generative Hybrid Neural
Representation
|
cs.CV cs.GR cs.LG
|
The inverse design of microstructures plays a pivotal role in optimizing
metamaterials with specific, targeted physical properties. While traditional
forward design methods are constrained by their inability to explore the vast
combinatorial design space, inverse design offers a compelling alternative by
directly generating structures that fulfill predefined performance criteria.
However, achieving precise control over both geometry and material properties
remains a significant challenge due to their intricate interdependence.
Existing approaches, which typically rely on voxel or parametric
representations, often limit design flexibility and structural diversity. In
this work, we present a novel generative model that integrates latent diffusion
with Holoplane, an advanced hybrid neural representation that simultaneously
encodes both geometric and physical properties. This combination ensures
superior alignment between geometry and properties. Our approach generalizes
across multiple microstructure classes, enabling the generation of diverse,
tileable microstructures with significantly improved property accuracy and
enhanced control over geometric validity, surpassing the performance of
existing methods. We introduce a multi-class dataset encompassing a variety of
geometric morphologies, including truss, shell, tube, and plate structures, to
train and validate our model. Experimental results demonstrate the model's
ability to generate microstructures that meet target properties, maintain
geometric validity, and integrate seamlessly into complex assemblies.
Additionally, we explore the potential of our framework through the generation
of new microstructures, cross-class interpolation, and the infilling of
heterogeneous microstructures. The dataset and source code will be open-sourced
upon publication.
|
2502.02610
|
Secure & Personalized Music-to-Video Generation via CHARCHA
|
cs.AI cs.CV cs.HC cs.MM
|
Music is a deeply personal experience and our aim is to enhance this with a
fully-automated pipeline for personalized music video generation. Our work
allows listeners to not just be consumers but co-creators in the music video
generation process by creating personalized, consistent and context-driven
visuals based on lyrics, rhythm and emotion in the music. The pipeline combines
multimodal translation and generation techniques and utilizes low-rank
adaptation on listeners' images to create immersive music videos that reflect
both the music and the individual. To ensure the ethical use of users'
identity, we also introduce CHARCHA (patent pending), a facial identity
verification protocol that protects people against unauthorized use of their
face while at the same time collecting authorized images from users for
personalizing their videos. This paper thus provides a secure and innovative
framework for creating deeply personalized music videos.
|
2502.02617
|
PolarQuant: Quantizing KV Caches with Polar Transformation
|
cs.LG cs.AI
|
Large language models (LLMs) require significant memory to store Key-Value
(KV) embeddings in their KV cache, especially when handling long-range
contexts. Quantization of these KV embeddings is a common technique to reduce
memory consumption. This work introduces PolarQuant, a novel quantization
method employing random preconditioning and polar transformation. Our method
transforms the KV embeddings into polar coordinates using an efficient
recursive algorithm and then quantizes the resulting angles. Our key insight is
that, after random preconditioning, the angles in the polar representation
exhibit a tightly bounded and highly concentrated distribution with an
analytically computable form. This concentration eliminates the need for
explicit normalization, a step required by traditional quantization methods
that introduces significant memory overhead because quantization parameters
(e.g., zero point and scale) must be stored in full precision for each data
block. PolarQuant bypasses this normalization step, enabling substantial memory
savings. The long-context evaluation demonstrates that PolarQuant compresses
the KV cache by more than 4.2x while achieving the best quality scores compared
the state-of-the-art methods.
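The pairwise angle-quantization mechanism described above can be sketched minimally. This is an illustration only, not the paper's recursive algorithm: the coordinate pairing, the 6-bit angle width, and keeping radii in full precision are assumptions made here for simplicity.

```python
import numpy as np

def random_rotation(d, seed=0):
    # Random orthogonal preconditioner via QR of a Gaussian matrix.
    q, _ = np.linalg.qr(np.random.default_rng(seed).standard_normal((d, d)))
    return q

def polar_quantize(v, bits=6):
    # Pair up coordinates, convert each pair to polar form, and uniformly
    # quantize only the angle over [-pi, pi]; radii stay full precision.
    x, y = v[0::2], v[1::2]
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    levels = 2 ** bits
    codes = np.round((theta + np.pi) / (2 * np.pi) * (levels - 1)).astype(np.int64)
    return r, codes, levels

def polar_dequantize(r, codes, levels):
    theta = codes / (levels - 1) * 2.0 * np.pi - np.pi
    v = np.empty(2 * len(r))
    v[0::2] = r * np.cos(theta)
    v[1::2] = r * np.sin(theta)
    return v

d = 64
Q = random_rotation(d)
v = np.random.default_rng(1).standard_normal(d)
w = Q @ v                                         # precondition with a rotation
r, codes, levels = polar_quantize(w)
v_hat = Q.T @ polar_dequantize(r, codes, levels)  # undo the rotation
rel_err = np.linalg.norm(v - v_hat) / np.linalg.norm(v)
```

Because the rotation is orthogonal and the radii are exact in this sketch, the reconstruction error is bounded by the angular quantization step alone.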
|
2502.02618
|
Deep Learning-Based Facial Expression Recognition for the Elderly: A
Systematic Review
|
cs.CV cs.AI
|
The rapid aging of the global population has highlighted the need for
technologies that support the elderly, particularly in healthcare and emotional
well-being. Facial expression recognition (FER) systems offer a non-invasive
means of monitoring emotional states, with applications in assisted living,
mental health support, and personalized care. This study presents a systematic
review of deep learning-based FER systems, focusing on their applications for
the elderly population. Following a rigorous methodology, we analyzed 31
studies published over the last decade, addressing challenges such as the
scarcity of elderly-specific datasets, class imbalances, and the impact of
age-related facial expression differences. Our findings show that convolutional
neural networks remain dominant in FER, especially lightweight versions for
resource-constrained environments. However, existing datasets often lack
diversity in age representation, and real-world deployment remains limited.
Additionally, privacy concerns and the need for explainable artificial
intelligence (XAI) emerged as key barriers to adoption. This review underscores
importance of developing age-inclusive datasets, integrating multimodal
solutions, and adopting XAI techniques to enhance system usability,
reliability, and trustworthiness. We conclude by offering recommendations for
future research to bridge the gap between academic progress and real-world
implementation in elderly care.
|
2502.02619
|
Regret-Optimized Portfolio Enhancement through Deep Reinforcement
Learning and Future Looking Rewards
|
q-fin.PM cs.LG q-fin.RM
|
This paper introduces a novel agent-based approach for enhancing existing
portfolio strategies using Proximal Policy Optimization (PPO). Rather than
focusing solely on traditional portfolio construction, our approach aims to
improve an already high-performing strategy through dynamic rebalancing driven
by PPO and Oracle agents. Our target is to enhance the traditional 60/40
benchmark (60% stocks, 40% bonds) by employing the Regret-based Sharpe reward
function. To address the impact of transaction fee frictions and prevent signal
loss, we develop a transaction cost scheduler. We introduce a future-looking
reward function and employ synthetic data training through a circular block
bootstrap method to facilitate the learning of generalizable allocation
strategies. We focus on two key evaluation measures: return and maximum
drawdown. Given the high stochasticity of financial markets, we train 20
independent agents each period and evaluate their average performance against
the benchmark. Our method not only enhances the performance of the existing
portfolio strategy through strategic rebalancing but also demonstrates strong
results compared to other baselines.
|
2502.02622
|
Backcasting the Optimal Decisions in Transport Systems: An Example with
Electric Vehicle Purchase Incentives
|
math.OC cs.SY eess.SY
|
This study represents a first attempt to build a backcasting methodology to
identify the optimal policy roadmaps in transport systems. In this methodology,
desired objectives are set by decision makers at a given time horizon, and then
the optimal combinations of policies to achieve these objectives are computed
as a function of time (i.e., ``backcasted''). This approach is illustrated on
the transportation sector by considering a specific subsystem with a single
policy decision. The subsystem describes the evolution of the passenger car
fleet within a given region and its impact on greenhouse gas emissions. The
optimized policy is a monetary incentive for the purchase of electric vehicles
while minimizing the total budget of the state and achieving a desired CO$_2$
target. A case study applied to Metropolitan France is presented to illustrate
the approach. Additionally, alternative policy scenarios are analyzed to
provide further insights.
|