| id | title | categories | abstract |
|---|---|---|---|
2502.07424
|
RomanLens: The Role Of Latent Romanization In Multilinguality In LLMs
|
cs.CL cs.AI
|
Large Language Models (LLMs) exhibit remarkable multilingual generalization
despite being predominantly trained on English-centric corpora. A fundamental
question arises: how do LLMs achieve such robust multilingual capabilities?
Taking the case of non-Roman script languages, we investigate the role of
Romanization, the representation of non-Roman scripts using Roman characters,
as a bridge in multilingual processing. Using mechanistic interpretability
techniques, we analyze next-token generation and find that intermediate layers
frequently represent target words in Romanized form before transitioning to
native script, a phenomenon we term Latent Romanization. Further, through
activation patching experiments, we demonstrate that LLMs encode semantic
concepts similarly across native and Romanized scripts, suggesting a shared
underlying representation. Additionally, for translation into non-Roman script
languages, our findings reveal that when the target language is in Romanized
form, its representations emerge earlier in the model's layers compared to
native script. These insights contribute to a deeper understanding of
multilingual representation in LLMs and highlight the implicit role of
Romanization in facilitating language transfer.
|
2502.07425
|
Towards a Foundation Model for Physics-Informed Neural Networks:
Multi-PDE Learning with Active Sampling
|
cs.LG
|
Physics-Informed Neural Networks (PINNs) have emerged as a powerful framework
for solving partial differential equations (PDEs) by embedding physical laws
into neural network training. However, traditional PINN models are typically
designed for single PDEs, limiting their generalizability across different
physical systems. In this work, we explore the potential of a foundation PINN
model capable of solving multiple PDEs within a unified architecture. We
investigate the efficacy of a single PINN framework trained on four distinct
PDEs: the Simple Harmonic Oscillator (SHO), the 1D Heat Equation, the 1D Wave
Equation, and the 2D Laplace Equation, demonstrating its ability to learn
diverse physical dynamics.
To enhance sample efficiency, we incorporate Active Learning (AL) using Monte
Carlo (MC) Dropout-based uncertainty estimation, selecting the most informative
training samples iteratively. We evaluate different active learning strategies,
comparing models trained on 10%, 20%, 30%, 40%, and 50% of the full dataset,
and analyze their impact on solution accuracy. Our results indicate that
targeted uncertainty sampling significantly improves performance with fewer
training samples, leading to efficient learning across multiple PDEs.
This work highlights the feasibility of a generalizable PINN-based foundation
model, capable of adapting to different physics-based problems without
redesigning network architectures. Our findings suggest that multi-PDE PINNs
with active learning can serve as an effective approach for reducing
computational costs while maintaining high accuracy in physics-based deep
learning applications.
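The abstract's sampling loop (MC-Dropout uncertainty, then picking the most informative points) can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: `noisy_model`, the 10% selection fraction, and the use of per-point standard deviation as the uncertainty score are all assumptions for the sketch.

```python
import numpy as np

def mc_dropout_uncertainty(predict_fn, candidates, n_passes=20, rng=None):
    """Estimate predictive uncertainty via Monte Carlo Dropout: run the
    stochastic forward pass several times and take the per-point standard
    deviation of the predictions."""
    rng = rng or np.random.default_rng(0)
    preds = np.stack([predict_fn(candidates, rng) for _ in range(n_passes)])
    return preds.std(axis=0)

def select_informative(candidates, uncertainty, fraction=0.1):
    """Keep the most uncertain fraction of candidate collocation points."""
    k = max(1, int(len(candidates) * fraction))
    idx = np.argsort(uncertainty)[::-1][:k]
    return candidates[idx]

# Toy stand-in for a PINN with dropout: prediction noise grows with |x|,
# so high-|x| points should be selected first.
def noisy_model(x, rng):
    return np.sin(x) + rng.normal(0.0, 0.05 * np.abs(x))

xs = np.linspace(-3, 3, 200)
u = mc_dropout_uncertainty(noisy_model, xs, n_passes=50)
picked = select_informative(xs, u, fraction=0.1)
```

In an actual PINN, `predict_fn` would be a network evaluated with dropout kept active at inference time, and `picked` would become the next batch of collocation points.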
|
2502.07431
|
ArthroPhase: A Novel Dataset and Method for Phase Recognition in
Arthroscopic Video
|
cs.CV
|
This study aims to advance surgical phase recognition in arthroscopic
procedures, specifically Anterior Cruciate Ligament (ACL) reconstruction, by
introducing the first arthroscopy dataset and developing a novel
transformer-based model. We aim to establish a benchmark for arthroscopic
surgical phase recognition by leveraging spatio-temporal features to address
the specific challenges of arthroscopic videos including limited field of view,
occlusions, and visual distortions. We developed the ACL27 dataset, comprising
27 videos of ACL surgeries, each labeled with surgical phases. Our model
employs a transformer-based architecture, utilizing temporal-aware frame-wise
feature extraction through a ResNet-50 and transformer layers. This approach
integrates spatio-temporal features and introduces a Surgical Progress Index
(SPI) to quantify surgery progression. The model's performance was evaluated
using accuracy, precision, recall, and Jaccard Index on the ACL27 and Cholec80
datasets. The proposed model achieved an overall accuracy of 72.91% on the
ACL27 dataset. On the Cholec80 dataset, the model achieved performance
comparable to state-of-the-art methods, with an accuracy of 92.4%. The
SPI demonstrated an output error of 10.6% and 9.86% on ACL27 and Cholec80
datasets respectively, indicating reliable surgery progression estimation. This
study introduces a significant advancement in surgical phase recognition for
arthroscopy, providing a comprehensive dataset and a robust transformer-based
model. The results validate the model's effectiveness and generalizability,
highlighting its potential to improve surgical training, real-time assistance,
and operational efficiency in orthopedic surgery. The publicly available
dataset and code will facilitate future research and development in this
critical field.
|
2502.07432
|
CapyMOA: Efficient Machine Learning for Data Streams in Python
|
cs.LG
|
CapyMOA is an open-source library designed for efficient machine learning on
streaming data. It provides a structured framework for real-time learning and
evaluation, featuring a flexible data representation. CapyMOA includes an
extensible architecture that allows integration with external frameworks such
as MOA and PyTorch, facilitating hybrid learning approaches that combine
traditional online algorithms with deep learning techniques. By emphasizing
adaptability, scalability, and usability, CapyMOA allows researchers and
practitioners to tackle dynamic learning challenges across various domains.
|
2502.07436
|
Optimizing Knowledge Distillation in Transformers: Enabling Multi-Head
Attention without Alignment Barriers
|
cs.CV
|
Knowledge distillation (KD) in transformers often faces challenges due to
misalignment in the number of attention heads between teacher and student
models. Existing methods either require identical head counts or introduce
projectors to bridge dimensional gaps, limiting flexibility and efficiency. We
propose Squeezing-Heads Distillation (SHD), a novel approach that enables
seamless knowledge transfer between models with varying head counts by
compressing multi-head attention maps via efficient linear approximation.
Unlike prior work, SHD eliminates alignment barriers without additional
parameters or architectural modifications. Our method dynamically approximates
the combined effect of multiple teacher heads into fewer student heads,
preserving fine-grained attention patterns while reducing redundancy.
Experiments across language (LLaMA, GPT) and vision (DiT, MDT) generative and
vision (DeiT) discriminative tasks demonstrate SHD's effectiveness: it
outperforms logit-based and feature-alignment KD baselines, achieving
state-of-the-art results in image classification, image generation, language
fine-tuning, and language pre-training. The key innovations of flexible head
compression, projector-free design, and linear-time complexity make SHD a
versatile and scalable solution for distilling modern transformers. This work
bridges a critical gap in KD, enabling efficient deployment of compact models
without compromising performance.
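The head-count mismatch the abstract describes can be made concrete with a minimal sketch: compress many teacher attention maps into fewer targets by a simple linear combination (here, grouping and averaging). The grouping-and-averaging rule is only one possible linear approximation; the paper's exact scheme may differ.

```python
import numpy as np

def squeeze_heads(teacher_attn, n_student_heads):
    """Compress teacher attention maps of shape (H_t, L, L) down to
    (H_s, L, L) by grouping heads and averaging within each group,
    a simple linear approximation of the combined teacher heads."""
    h_t = teacher_attn.shape[0]
    groups = np.array_split(np.arange(h_t), n_student_heads)
    return np.stack([teacher_attn[g].mean(axis=0) for g in groups])

def attn_distill_loss(student_attn, teacher_attn):
    """MSE between student maps and the squeezed teacher maps; no
    projector or extra parameters are needed."""
    target = squeeze_heads(teacher_attn, student_attn.shape[0])
    return float(((student_attn - target) ** 2).mean())

rng = np.random.default_rng(0)
t = rng.random((8, 4, 4))
t /= t.sum(axis=-1, keepdims=True)   # row-stochastic attention maps
s = squeeze_heads(t, 2)              # 8 teacher heads -> 2 student targets
```

Note that an average of row-stochastic maps is still row-stochastic, so the squeezed targets remain valid attention distributions.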
|
2502.07441
|
SensPS: Sensing Personal Space Comfortable Distance between Human-Human
Using Multimodal Sensors
|
cs.HC cs.AI
|
Personal space, also known as peripersonal space, is crucial in human social
interaction, influencing comfort, communication, and social stress. Estimating
and respecting personal space is essential for enhancing human-computer
interaction (HCI) and smart environments. Personal space preferences vary due
to individual traits, cultural background, and contextual factors. Advanced
multimodal sensing technologies, including eye-tracking and wristband sensors,
offer opportunities to develop adaptive systems that dynamically adjust to user
comfort levels. Integrating physiological and behavioral data enables a deeper
understanding of spatial interactions. This study develops a sensor-based model
to estimate comfortable personal space and identifies key features influencing
spatial preferences. Our findings show that multimodal sensors, particularly
eye-tracking and physiological wristband data, can effectively predict personal
space preferences, with eye-tracking data playing a more significant role. An
experimental study involving controlled human interactions demonstrates that a
Transformer-based model achieves the highest predictive accuracy (F1 score:
0.87) for estimating personal space. Eye-tracking features, such as gaze point
and pupil diameter, emerge as the most significant predictors, while
physiological signals from wristband sensors contribute marginally. These
results highlight the potential for AI-driven personalization of social space
in adaptive environments, suggesting that multimodal sensing can be leveraged
to develop intelligent systems that optimize spatial arrangements in
workplaces, educational institutions, and public settings. Future work should
explore larger datasets, real-world applications, and additional physiological
markers to enhance model robustness.
|
2502.07442
|
Hierarchical Document Parsing via Large Margin Feature Matching and
Heuristics
|
cs.CL cs.CV
|
We present our solution to the AAAI-25 VRD-IU challenge, achieving first
place in the competition. Our approach integrates large margin loss for
improved feature discrimination and employs heuristic rules to refine
hierarchical relationships. By combining a deep learning-based matching
strategy with greedy algorithms, we achieve a significant boost in accuracy
while maintaining computational efficiency. Our method attains an accuracy of
0.98904 on the private leaderboard, demonstrating its effectiveness in document
structure parsing. Source codes are publicly available at
https://github.com/ffyyytt/VRUID-AAAI-DAKiet
|
2502.07443
|
Approximating Human Strategic Reasoning with LLM-Enhanced Recursive
Reasoners Leveraging Multi-agent Hypergames
|
cs.AI cs.GT
|
LLM-driven multi-agent-based simulations have been gaining traction with
applications in game-theoretic and social simulations. While most
implementations seek to exploit or evaluate LLM-agentic reasoning, they often
do so with a weak notion of agency and simplified architectures. We implement a
role-based multi-agent strategic interaction framework tailored to
sophisticated recursive reasoners, providing the means for systematic in-depth
development and evaluation of strategic reasoning. Our game environment is
governed by an umpire responsible for facilitating games, from matchmaking
through move validation to environment management. Players incorporate
state-of-the-art LLMs in their decision mechanism, relying on a formal
hypergame-based model of hierarchical beliefs. We use one-shot, 2-player beauty
contests to evaluate the recursive reasoning capabilities of the latest LLMs,
providing a comparison to an established baseline model from economics and data
from human experiments. Furthermore, we introduce the foundations of an
alternative semantic measure of reasoning to the k-level theory. Our
experiments show that artificial reasoners can outperform the baseline model in
terms of both approximating human behaviour and reaching the optimal solution.
|
2502.07445
|
Forget What You Know about LLMs Evaluations -- LLMs are Like a Chameleon
|
cs.CL cs.AI cs.LG
|
Large language models (LLMs) often appear to excel on public benchmarks, but
these high scores may mask an overreliance on dataset-specific surface cues
rather than true language understanding. We introduce the Chameleon Benchmark
Overfit Detector (C-BOD), a meta-evaluation framework that systematically
distorts benchmark prompts via a parametric transformation and detects
overfitting of LLMs. By rephrasing inputs while preserving their semantic
content and labels, C-BOD exposes whether a model's performance is driven by
memorized patterns. Evaluated on the MMLU benchmark using 26 leading LLMs, our
method reveals an average performance degradation of 2.15% under modest
perturbations, with 20 out of 26 models exhibiting statistically significant
differences. Notably, models with higher baseline accuracy exhibit larger
performance differences under perturbation, and larger LLMs tend to be more
sensitive to rephrasings, indicating that both may over-rely on fixed
prompt patterns. In contrast, the Llama family and models with lower baseline
accuracy show insignificant degradation, suggesting reduced dependency on
superficial cues. Moreover, C-BOD's dataset- and model-agnostic design allows
easy integration into training pipelines to promote more robust language
understanding. Our findings challenge the community to look beyond leaderboard
scores and prioritize resilience and generalization in LLM evaluation.
|
2502.07452
|
Eliciting Rational Initial Weights in Gradual Argumentation
|
cs.AI
|
Many semantics for weighted argumentation frameworks assume that each
argument is associated with an initial weight. However, eliciting these initial
weights poses challenges: (1) accurately providing a specific numerical value
is often difficult, and (2) individuals frequently confuse initial weights with
acceptability degrees in the presence of other arguments. To address these
issues, we propose an elicitation pipeline that allows one to specify
acceptability degree intervals for each argument. By employing gradual
semantics, we can refine these intervals when they are rational, restore
rationality when they are not, and ultimately identify possible initial weights
for each argument.
|
2502.07455
|
RusCode: Russian Cultural Code Benchmark for Text-to-Image Generation
|
cs.CV cs.AI cs.CL
|
Text-to-image generation models have gained popularity among users around the
world. However, many of these models exhibit a strong bias toward
English-speaking cultures, ignoring or misrepresenting the unique
characteristics of other language groups, countries, and nationalities. The
lack of cultural awareness can reduce the generation quality and lead to
undesirable consequences such as unintentional insult, and the spread of
prejudice. In contrast to the field of natural language processing, cultural
awareness in computer vision has not been explored as extensively. In this
paper, we strive to reduce this gap. We propose RusCode, a benchmark for
evaluating the quality of text-to-image generation containing elements of the
Russian cultural code. To do this, we form a list of 19 categories that best
represent the features of Russian visual culture. Our final dataset consists of
1250 text prompts in Russian and their translations into English. The prompts
cover a wide range of topics, including complex concepts from art, popular
culture, folk traditions, famous people's names, natural objects, scientific
achievements, etc. We present the results of a human evaluation of the
side-by-side comparison of Russian visual concepts representations using
popular generative models.
|
2502.07456
|
FedAPA: Server-side Gradient-Based Adaptive Personalized Aggregation for
Federated Learning on Heterogeneous Data
|
cs.LG cs.CV
|
Personalized federated learning (PFL) tailors models to clients' unique data
distributions while preserving privacy. However, existing
aggregation-weight-based PFL methods often struggle with heterogeneous data,
facing challenges in accuracy, computational efficiency, and communication
overhead. We propose FedAPA, a novel PFL method featuring a server-side,
gradient-based adaptive aggregation strategy to generate personalized models,
by updating aggregation weights based on gradients of client-parameter changes
with respect to the aggregation weights in a centralized manner. FedAPA
guarantees theoretical convergence and achieves superior accuracy and
computational efficiency compared to 10 PFL competitors across three datasets,
with competitive communication overhead.
|
2502.07457
|
Bidirectional Uncertainty-Aware Region Learning for Semi-Supervised
Medical Image Segmentation
|
cs.CV
|
In semi-supervised medical image segmentation, the poor quality of unlabeled
data and the uncertainty in the model's predictions lead to models that
inevitably produce erroneous pseudo-labels. These errors accumulate throughout
model training, thereby weakening the model's performance. We found that these
erroneous pseudo-labels are typically concentrated in high-uncertainty regions.
Traditional methods improve performance by directly discarding pseudo-labels in
these regions, but this can also result in neglecting potentially valuable
training data. To alleviate this problem, we propose a bidirectional
uncertainty-aware region learning strategy. In training labeled data, we focus
on high-uncertainty regions, using precise label information to guide the
model's learning in potentially uncontrollable areas. Meanwhile, in the
training of unlabeled data, we concentrate on low-uncertainty regions to reduce
the interference of erroneous pseudo-labels on the model. Through this
bidirectional learning strategy, the model's overall performance has
significantly improved. Extensive experiments show that our proposed method
achieves significant performance improvement on different medical image
segmentation tasks.
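The bidirectional strategy reduces to opposite per-pixel loss weightings for the two data streams. A minimal sketch, with an assumed 0.5 uncertainty threshold and assumed 1.0/0.2 weights (the abstract does not give the actual values):

```python
import numpy as np

def region_weights(uncertainty, labeled, threshold=0.5):
    """Per-pixel loss weights. Labeled data emphasises high-uncertainty
    regions (ground truth can correct them); unlabeled data emphasises
    low-uncertainty regions (pseudo-labels are trustworthy there)."""
    high = uncertainty >= threshold
    if labeled:
        return np.where(high, 1.0, 0.2)   # labels guide uncertain regions
    return np.where(high, 0.2, 1.0)       # trust pseudo-labels where certain

unc = np.array([0.1, 0.4, 0.6, 0.9])
w_lab = region_weights(unc, labeled=True)
w_unl = region_weights(unc, labeled=False)
```

The two weight maps are mirror images, which is exactly the "bidirectional" aspect of the strategy.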
|
2502.07459
|
PerCul: A Story-Driven Cultural Evaluation of LLMs in Persian
|
cs.CL cs.AI cs.CY
|
Large language models predominantly reflect Western cultures, largely due to
the dominance of English-centric training data. This imbalance presents a
significant challenge, as LLMs are increasingly used across diverse contexts
without adequate evaluation of their cultural competence in non-English
languages, including Persian. To address this gap, we introduce PerCul, a
carefully constructed dataset designed to assess the sensitivity of LLMs toward
Persian culture. PerCul features story-based, multiple-choice questions that
capture culturally nuanced scenarios. Unlike existing benchmarks, PerCul is
curated with input from native Persian annotators to ensure authenticity and to
prevent the use of translation as a shortcut. We evaluate several
state-of-the-art multilingual and Persian-specific LLMs, establishing a
foundation for future research in cross-cultural NLP evaluation. Our
experiments demonstrate an 11.3% gap between the best closed-source model and
a layperson baseline, a gap that widens to 21.3% with the best
open-weight model. The dataset is available at:
https://huggingface.co/datasets/teias-ai/percul
|
2502.07460
|
Logarithmic Regret for Online KL-Regularized Reinforcement Learning
|
cs.LG stat.ML
|
Recent advances in Reinforcement Learning from Human Feedback (RLHF) have
shown that KL-regularization plays a pivotal role in improving the efficiency
of RL fine-tuning for large language models (LLMs). Despite its empirical
advantage, the theoretical difference between KL-regularized RL and standard RL
remains largely under-explored. While there is a recent line of work on the
theoretical analysis of KL-regularized objective in decision making
\citep{xiong2024iterative, xie2024exploratory,zhao2024sharp}, these analyses
either reduce to the traditional RL setting or rely on strong coverage
assumptions. In this paper, we propose an optimism-based KL-regularized online
contextual bandit algorithm, and provide a novel analysis of its regret. By
carefully leveraging the benign optimization landscape induced by the
KL-regularization and the optimistic reward estimation, our algorithm achieves
an $\mathcal{O}\big(\eta\log (N_{\mathcal R} T)\cdot d_{\mathcal R}\big)$
logarithmic regret bound, where $\eta, N_{\mathcal R},T,d_{\mathcal R}$ denote
the KL-regularization parameter, the cardinality of the reward function class,
number of rounds, and the complexity of the reward function class. Furthermore,
we extend our algorithm and analysis to reinforcement learning by developing a
novel decomposition over transition steps and also obtain a similar logarithmic
regret bound.
|
2502.07461
|
JamendoMaxCaps: A Large Scale Music-caption Dataset with Imputed
Metadata
|
cs.SD cs.AI
|
We introduce JamendoMaxCaps, a large-scale music-caption dataset featuring
over 200,000 freely licensed instrumental tracks from the renowned Jamendo
platform. The dataset includes captions generated by a state-of-the-art
captioning model, enhanced with imputed metadata. We also introduce a retrieval
system that leverages both musical features and metadata to identify similar
songs, which are then used to fill in missing metadata using a local large
language model (LLLM). This approach allows us to provide a more comprehensive
and informative dataset for researchers working on music-language understanding
tasks. We validate this approach quantitatively with five different
measurements. By making the JamendoMaxCaps dataset publicly available, we
provide a high-quality resource to advance research in music-language
understanding tasks such as music retrieval, multimodal representation
learning, and generative music models.
|
2502.07465
|
Crime Forecasting: A Spatio-temporal Analysis with Deep Learning Models
|
cs.LG cs.AI
|
This study uses deep-learning models to predict city partition crime counts
on specific days. It helps police enhance surveillance, gather intelligence,
and proactively prevent crimes. We formulate crime count prediction as a
spatiotemporal sequence challenge, where both input data and prediction targets
are spatiotemporal sequences. In order to improve the accuracy of crime
forecasting, we introduce a new model that combines Convolutional Neural
Networks (CNN) and Long Short-Term Memory (LSTM) networks. We conducted a
comparative analysis to assess the effects of various data sequences, including
raw and binned data, on the prediction errors of four deep learning forecasting
models. Directly inputting raw crime data into the forecasting model causes
high prediction errors, making the model unsuitable for real-world use. The
findings indicate that the proposed CNN-LSTM model achieves optimal performance
when crime data is categorized into 10 or 5 groups. Data binning can enhance
forecasting model performance, but poorly defined intervals may reduce map
granularity. Compared to dividing into 5 bins, binning into 10 intervals
strikes an optimal balance, preserving data characteristics and surpassing raw
data in predictive modelling efficacy.
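The binning step (mapping raw crime counts into 5 or 10 ordinal groups) can be sketched as below. Equal-width intervals are an assumption here; the abstract does not specify how the intervals were defined.

```python
import numpy as np

def bin_counts(counts, n_bins):
    """Map raw counts onto ordinal bin labels 0..n_bins-1 using
    equal-width intervals over the observed range."""
    edges = np.linspace(counts.min(), counts.max(), n_bins + 1)
    # digitize against the interior edges; clip keeps the max value in
    # the top bin rather than spilling into an extra label
    return np.clip(np.digitize(counts, edges[1:-1]), 0, n_bins - 1)

raw = np.array([0, 1, 2, 5, 9, 14, 20, 37, 50, 100])
b10 = bin_counts(raw, 10)   # finer intervals, more map granularity
b5 = bin_counts(raw, 5)     # coarser intervals, smoother targets
```

The model would then be trained to predict `b10` (or `b5`) rather than the raw counts, trading exact magnitudes for a more learnable target.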
|
2502.07466
|
Less is More: Masking Elements in Image Condition Features Avoids
Content Leakages in Style Transfer Diffusion Models
|
cs.CV
|
Given a style-reference image as the additional image condition,
text-to-image diffusion models have demonstrated impressive capabilities in
generating images that possess the content of text prompts while adopting the
visual style of the reference image. However, current state-of-the-art methods
often struggle to disentangle content and style from style-reference images,
leading to issues such as content leakages. To address this issue, we propose a
masking-based method that efficiently decouples content from style without the
need of tuning any model parameters. By simply masking specific elements in the
style reference's image features, we uncover a critical yet under-explored
principle: guiding with appropriately-selected fewer conditions (e.g., dropping
several image feature elements) can efficiently avoid unwanted content flowing
into the diffusion models, enhancing the style transfer performances of
text-to-image diffusion models. In this paper, we validate this finding both
theoretically and experimentally. Extensive experiments across various styles
demonstrate the effectiveness of our masking-based method and support our
theoretical results.
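The core operation, zeroing out selected elements of the style-reference image features before conditioning, is tuning-free and easy to sketch. Random masking is a placeholder here: which elements to drop is the crux of the paper's method and is not specified in the abstract.

```python
import numpy as np

def mask_condition_features(feats, keep_ratio=0.7, rng=None):
    """Zero out a subset of image-condition feature elements before they
    are injected into the diffusion model, so that content-carrying
    elements do not leak into the generation."""
    rng = rng or np.random.default_rng(0)
    mask = rng.random(feats.shape) < keep_ratio
    return feats * mask, mask

style = np.ones((4, 8))   # stand-in for style-image features (tokens x dims)
masked, mask = mask_condition_features(style, keep_ratio=0.5)
```

No model parameters change; only the conditioning input is edited, which is why the method needs no fine-tuning.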
|
2502.07467
|
Integrated Sensing, Communication, and Over-The-Air Control of UAV Swarm
Dynamics
|
eess.SP cs.SY eess.SY
|
Coordinated control of a large UAV swarm requires significant spectrum
resources due to the need for bandwidth allocation per UAV, posing a challenge
in resource-limited environments. Over-the-air (OTA) control has emerged as a
spectrum-efficient approach, leveraging electromagnetic superposition to form
control signals at a base station (BS). However, existing OTA controllers lack
sufficient optimization variables to meet UAV swarm control objectives and fail
to integrate control with other BS functions like sensing. This work proposes
an integrated sensing and OTA control framework (ISAC-OTA) for UAV swarm. The
BS performs OTA signal construction (uplink) and dispatch (downlink) while
simultaneously sensing objects. Two uplink post-processing methods are
developed: a control-centric approach generating closed-form control signals
via a feedback-looped OTA control problem, and a sensing-centric method
mitigating transmission-induced interference for accurate object sensing. For
the downlink, a non-convex problem is formulated and solved to minimize control
signal dispatch (transmission) error while maintaining a minimum sensing
signal-to-noise ratio (SNR). Simulation results show that the proposed ISAC-OTA
controller achieves control performance comparable to the benchmark optimal
control algorithm while maintaining high sensing accuracy, despite OTA
transmission interference. Moreover, it eliminates the need for per-UAV
bandwidth allocation, showcasing a spectrum-efficient method for cooperative
control in future wireless systems.
|
2502.07469
|
5D Neural Surrogates for Nonlinear Gyrokinetic Simulations of Plasma
Turbulence
|
physics.plasm-ph cs.AI cs.LG stat.ML
|
Nuclear fusion plays a pivotal role in the quest for reliable and sustainable
energy production. A major roadblock to achieving commercially viable fusion
power is understanding plasma turbulence, which can significantly degrade
plasma confinement. Modelling turbulence is crucial for designing high-performance plasma
scenarios for next-generation reactor-class devices and current experimental
machines. The nonlinear gyrokinetic equation underpinning turbulence modelling
evolves a 5D distribution function over time. Solving this equation numerically
is extremely expensive, requiring up to weeks for a single run to converge,
making it unfeasible for iterative optimisation and control studies. In this
work, we propose a method for training neural surrogates for 5D gyrokinetic
simulations. Our method extends a hierarchical vision transformer to five
dimensions and is trained on the 5D distribution function for the adiabatic
electron approximation. We demonstrate that our model can accurately infer
downstream physical quantities such as heat flux time trace and electrostatic
potentials for single-step predictions two orders of magnitude faster than
numerical codes. Our work paves the way towards neural surrogates for plasma
turbulence simulations to accelerate deployment of commercial energy production
via nuclear fusion.
|
2502.07470
|
On Event-Triggered Resilient Consensus Using Auxiliary Layer
|
eess.SY cs.SY
|
Due to its design simplicity, auxiliary layer-based resilient control is
widely discussed in the literature to mitigate the effects of False Data
Injection (FDI) attacks. However, the increased communication burden due to
additional communication links for connecting an extra layer is often
overlooked in the literature. This paper bridges this gap by considering an
event-triggered approach for inter-layer communication between the physical
layer (containing actual agents) and the auxiliary layer (containing virtual
agents) for the resilient state consensus in a multi-agent system. We provide
state-based and dynamic event-triggering mechanisms, the former being the
motivation for the latter. The exclusion of Zeno behavior is established by
proving positive minimum inter-event time (MIET). Extensive simulation and
experimental results are provided to illustrate the proposed methodology.
|
2502.07472
|
Robotic In-Hand Manipulation for Large-Range Precise Object Movement:
The RGMC Champion Solution
|
cs.RO
|
In-hand manipulation using multiple dexterous fingers is a critical robotic
skill that can reduce the reliance on large arm motions, thereby saving space
and energy. This letter focuses on in-grasp object movement, which refers to
manipulating an object to a desired pose through only finger motions within a
stable grasp. The key challenge lies in simultaneously achieving high precision
and large-range movements while maintaining a constant stable grasp. To address
this problem, we propose a simple and practical approach based on kinematic
trajectory optimization with no need for pretraining or object geometries,
which can be easily applied to novel objects in real-world scenarios. Adopting
this approach, we won the championship for the in-hand manipulation track at
the 9th Robotic Grasping and Manipulation Competition (RGMC) held at ICRA 2024.
Implementation details, discussion, and further quantitative experimental
results are presented in this letter, which aims to comprehensively evaluate
our approach and share our key takeaways from the competition. Supplementary
materials including video and code are available at
https://rgmc-xl-team.github.io/ingrasp_manipulation .
|
2502.07474
|
ETimeline: An Extensive Timeline Generation Dataset based on Large
Language Model
|
cs.IR
|
Timeline generation is of great significance for a comprehensive
understanding of the development of events over time. Its goal is to organize
news chronologically, which helps to identify patterns and trends that may be
obscured when viewing news in isolation, making it easier to track the
development of stories and understand the interrelationships between key
events. Timelines are now common in various commercial products, but academic
research in this area is notably scarce. Additionally, the current datasets are
in need of refinement for enhanced utility and expanded coverage. In this
paper, we propose ETimeline, which encompasses over $13,000$ news articles,
spanning $600$ bilingual timelines across $28$ news domains. Specifically, we
gather a candidate pool of more than $120,000$ news articles and employ a
large language model (LLM) pipeline to improve performance, ultimately yielding
ETimeline. The data analysis underscores the appeal of ETimeline.
Additionally, we also provide the news pool data for further research and
analysis. This work contributes to the advancement of timeline generation
research and supports a wide range of tasks, including topic generation and
event relationships. We believe that this dataset will serve as a catalyst for
innovative research and bridge the gap between academia and industry in
understanding the practical application of technology services. The dataset is
available at https://zenodo.org/records/11392212
|
2502.07479
|
WebChecker: A Versatile EVL Plugin for Validating HTML Pages with
Bootstrap Frameworks
|
cs.SE cs.AI
|
WebChecker is a plugin for Epsilon Validation Language (EVL), designed to
validate both static and dynamic HTML pages utilizing frameworks like
Bootstrap. By employing configurable EVL constraints, WebChecker enforces
implicit rules governing HTML and CSS frameworks. The effectiveness of the
plugin is demonstrated through its application on Bootstrap, the widely adopted
HTML, CSS, and JavaScript framework. WebChecker comes with a set of EVL
constraints to assess Bootstrap-based web pages. To substantiate our claims, we
present an illustrative example featuring two solutions that effectively
enforce implicit rules.
|
2502.07480
|
Overfitting Regimes of Nadaraya-Watson Interpolators
|
cs.LG math.ST stat.ML stat.TH
|
In recent years, there has been much interest in understanding the
generalization behavior of interpolating predictors, which overfit on noisy
training data. Whereas standard analyses are concerned with whether a method is
consistent or not, recent observations have shown that even inconsistent
predictors can generalize well. In this work, we revisit the classic
interpolating Nadaraya-Watson (NW) estimator (also known as Shepard's method),
and study its generalization capabilities through this modern viewpoint. In
particular, by varying a single bandwidth-like hyperparameter, we prove the
existence of multiple overfitting behaviors, ranging non-monotonically from
catastrophic, through benign, to tempered. Our results highlight how even
classical interpolating methods can exhibit intricate generalization behaviors.
Numerical experiments complement our theory, demonstrating the same phenomena.
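The interpolating NW estimator is simple enough to sketch directly. Below is a minimal pure-Python version using the singular kernel $K(d)=d^{-\beta}$ (Shepard's method); the exponent `beta` plays the role of the bandwidth-like hyperparameter, and the exact kernel and setup are illustrative rather than necessarily the paper's.

```python
def nw_interpolator(xs, ys, beta):
    """Nadaraya-Watson (Shepard) predictor with singular kernel
    K(d) = d**(-beta). The singularity at d = 0 forces the predictor
    to interpolate the training labels exactly."""
    def predict(x):
        # At a training point, the infinite weight means interpolation.
        for xi, yi in zip(xs, ys):
            if x == xi:
                return yi
        weights = [abs(x - xi) ** (-beta) for xi in xs]
        total = sum(weights)
        return sum(w * yi for w, yi in zip(weights, ys)) / total
    return predict
```

At a training point the singular weight dominates, so the predictor fits the (possibly noisy) label exactly; between points it reverts to a weighted average, and varying `beta` changes how local that average is.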
|
2502.07486
|
Automated Road Extraction and Centreline Fitting in LiDAR Point Clouds
|
cs.CV
|
Road information extraction from 3D point clouds is useful for urban planning
and traffic management. Existing methods often rely on local features and the
refraction angle of lasers from kerbs, which makes them sensitive to variable
kerb designs and issues in high-density areas due to data homogeneity. We
propose an approach for extracting road points and fitting centrelines using a
top-down view of LiDAR-based, ground-collected point clouds. This viewpoint
reduces reliance on specific kerb designs and results in better road
extraction. We first perform statistical outlier removal and density-based
clustering to reduce noise from 3D point cloud data. Next, we perform ground
point filtering using a grid-based segmentation method that adapts to diverse
road scenarios and terrain characteristics. The filtered points are then
projected onto a 2D plane, and the road is extracted by a skeletonisation
algorithm. The skeleton is back-projected onto the 3D point cloud with
calculated normals, which guide a region growing algorithm to find nearby road
points. The extracted road points are then smoothed with the Savitzky-Golay
filter to produce the final centreline. Without post-processing of the road
skeleton, our initial approach achieved 67% IoU on the Perth CBD dataset,
which covers different road types. Incorporating post-processing of the road
skeleton improved the extraction of road points around the smoothed skeleton:
the refined approach achieved a higher IoU of 73% with a 23% reduction in
processing time. Our approach offers a generalised and
computationally efficient solution that combines 3D and 2D processing
techniques, laying the groundwork for future road reconstruction and 3D-to-2D
point cloud alignment.
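The grid-based ground filtering step described above can be sketched as follows; the cell size and height threshold are illustrative assumptions, not the authors' parameters.

```python
from collections import defaultdict

def ground_filter(points, cell=1.0, h_max=0.2):
    """Grid-based ground point filtering, sketched: bin (x, y, z)
    points into square cells and keep only points within h_max of
    the lowest point in their cell, so the threshold adapts to
    local terrain rather than assuming a global ground height."""
    cells = defaultdict(list)
    for x, y, z in points:
        cells[(int(x // cell), int(y // cell))].append((x, y, z))
    ground = []
    for pts in cells.values():
        zmin = min(p[2] for p in pts)
        ground.extend(p for p in pts if p[2] - zmin <= h_max)
    return ground
```

Points well above the local minimum (e.g. poles or building facades) are discarded before the 2D projection and skeletonisation steps.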
|
2502.07487
|
Multi-Agent Collaboration for Multilingual Code Instruction Tuning
|
cs.CL
|
Recent advancement in code understanding and generation demonstrates that
code LLMs fine-tuned on a high-quality instruction dataset can gain powerful
capabilities to address wide-ranging code-related tasks. However, most
existing methods view each programming language in isolation and ignore
knowledge transfer among different programming languages. To bridge the gap
among different programming languages, we introduce a novel multi-agent
collaboration framework to enhance multilingual instruction tuning for code
LLMs, where multiple language-specific intelligent agent components with
generation memory work together to transfer knowledge from one language to
another efficiently and effectively. Specifically, we first generate the
language-specific instruction data from the code snippets and then provide the
generated data as the seed data for language-specific agents. Multiple
language-specific agents discuss and collaborate to formulate a new instruction
and its corresponding solution (in either a new or an existing programming
language). To further encourage cross-lingual transfer, each
agent stores its generation history as memory and then summarizes its merits
and faults. Finally, the high-quality multilingual instruction data is used to
encourage knowledge transfer among different programming languages to train
Qwen2.5-xCoder. Experimental results on multilingual programming benchmarks
demonstrate the superior performance of Qwen2.5-xCoder in sharing common
knowledge, highlighting its potential to reduce the cross-lingual gap.
|
2502.07488
|
Improving Adaptive Moment Optimization via Preconditioner
Diagonalization
|
cs.LG
|
Modern adaptive optimization methods, such as Adam and its variants, have
emerged as the most widely used tools in deep learning over recent years. These
algorithms offer automatic mechanisms for dynamically adjusting the update step
based on estimates of gradient statistics. Compared to traditional algorithms
like Stochastic Gradient Descent, these adaptive methods are typically more
robust to model scale and hyperparameter tuning. However, the gradient
statistics employed by these methods often do not leverage sufficient gradient
covariance information, leading to suboptimal updates in certain directions of
the parameter space and potentially slower convergence. In this work, we keep
track of such covariance statistics in the form of a structured preconditioner
matrix. Unlike other works, our approach does not apply direct approximations
to estimate this matrix. We instead implement an invertible transformation that
maps the preconditioner matrix into a new space where it becomes approximately
diagonal. This enables a diagonal approximation of the preconditioner matrix in
the transformed space, offering several computational advantages. Empirical
results show that our approach can substantially enhance the convergence speed
of modern adaptive optimizers. Notably, for large language models like LLaMA,
we can achieve a speedup of 2x compared to the baseline Adam. Additionally, our
method can be integrated with memory-efficient optimizers like Adafactor to
manage computational overhead.
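Schematically, and in generic notation rather than the authors' own, the diagonalization idea can be written as:

```latex
% Rotate gradients into a basis where the preconditioner is ~diagonal:
% C = gradient covariance, Q orthogonal, second moment kept diagonal
% in the rotated space (momentum terms omitted for brevity).
C \approx Q \Lambda Q^{\top}, \qquad
\tilde g_t = Q^{\top} g_t, \qquad
v_t = \beta_2 v_{t-1} + (1-\beta_2)\,\tilde g_t \odot \tilde g_t, \qquad
\theta_{t+1} = \theta_t - \eta\, Q\,\operatorname{diag}(v_t)^{-1/2}\,\tilde g_t .
```

In the rotated basis the preconditioner is approximately diagonal, so Adam-style diagonal statistics capture covariance information that a plain diagonal second moment discards.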
|
2502.07489
|
Physiome-ODE: A Benchmark for Irregularly Sampled Multivariate Time
Series Forecasting Based on Biological ODEs
|
cs.LG
|
State-of-the-art methods for forecasting irregularly sampled time series with
missing values predominantly rely on just four datasets and a few small toy
examples for evaluation. While ordinary differential equations (ODE) are the
prevalent models in science and engineering, a baseline model that forecasts a
constant value outperforms ODE-based models from the last five years on three
of these existing datasets. This unintuitive finding hampers further research
on ODE-based models, a more plausible model family. In this paper, we develop a
methodology to generate irregularly sampled multivariate time series (IMTS)
datasets from ordinary differential equations and to select challenging
instances via rejection sampling. Using this methodology, we create
Physiome-ODE, a large and sophisticated benchmark of IMTS datasets consisting
of 50 individual datasets, derived from real-world ordinary differential
equations from research in biology. Physiome-ODE is the first benchmark for
IMTS forecasting that we are aware of and an order of magnitude larger than the
current evaluation setting of four datasets. Using our benchmark Physiome-ODE,
we show qualitatively different results from those derived from the current
four datasets: on Physiome-ODE, ODE-based models can play to their
strengths, and our benchmark can differentiate in a meaningful way between
different IMTS forecasting models. This way, we expect to give a new impulse to
research on ODE-based time series modeling.
|
2502.07490
|
Mask-Enhanced Autoregressive Prediction: Pay Less Attention to Learn
More
|
cs.CL cs.LG
|
Large Language Models (LLMs) are known to struggle with accurately
retrieving key information. To address this, we propose Mask-Enhanced
Autoregressive Prediction (MEAP), a simple yet effective training paradigm that
seamlessly integrates Masked Language Modeling (MLM) into Next-Token Prediction
(NTP) to enhance the latter's in-context retrieval capabilities. Specifically,
MEAP first randomly masks a small fraction of input tokens and then directly
performs standard autoregressive next-token prediction using a decoder-only
Transformer. MEAP eliminates the need for bidirectional attention or
encoder-decoder architectures for MLM, incurring no additional computational
overhead during pre-training or inference. Intensive experiments demonstrate
that MEAP substantially outperforms NTP on key information retrieval and
long-context reasoning tasks, while performing on par or better on commonsense
reasoning tasks. The benefits of MEAP also extend to supervised fine-tuning,
where it shows remarkable advantages in lost-in-the-middle scenarios,
outperforming NTP by 11.77 percentage points. Our analysis indicates that
MEAP's effectiveness arises from its ability to promote more distinguishable
attention scores by concentrating on a reduced set of non-masked tokens. This
mechanism improves the model's focus on task-relevant signals while mitigating
the influence of peripheral context. These findings position MEAP as a
promising training paradigm for large language models.
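The MEAP data preparation can be sketched in a few lines: mask a small fraction of the shifted inputs while leaving the next-token targets untouched, so a decoder-only model still trains with the ordinary causal NTP loss. The masking fraction below is illustrative.

```python
import random

def meap_batch(tokens, mask_id, mask_frac=0.15, seed=0):
    """Mask-Enhanced Autoregressive Prediction, sketched: randomly
    replace a small fraction of INPUT tokens with mask_id, but keep
    the next-token targets untouched; no bidirectional attention or
    encoder-decoder architecture is needed."""
    rng = random.Random(seed)
    inputs = tokens[:-1]          # standard NTP input shift (copy)
    targets = tokens[1:]          # targets are never masked
    n_mask = max(1, int(len(inputs) * mask_frac))
    for i in rng.sample(range(len(inputs)), n_mask):
        inputs[i] = mask_id
    return inputs, targets
```

Because only inputs change, the training objective and inference procedure remain those of a standard decoder-only Transformer.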
|
2502.07491
|
Exploring Patterns Behind Sports
|
cs.LG cs.IR
|
This paper presents a comprehensive framework for time series prediction
using a hybrid model that combines ARIMA and LSTM. The model incorporates
feature engineering techniques, including embedding and PCA, to transform raw
data into a lower-dimensional representation while retaining key information.
The embedding technique is used to convert categorical data into continuous
vectors, facilitating the capture of complex relationships. PCA is applied to
reduce dimensionality and extract principal components, enhancing model
performance and computational efficiency. To handle both linear and nonlinear
patterns in the data, the ARIMA model captures linear trends, while the LSTM
model models complex nonlinear dependencies. The hybrid model is trained on
historical data and achieves high accuracy, as demonstrated by low RMSE and MAE
scores. Additionally, the paper employs the run test to assess the randomness
of sequences, providing insights into the underlying patterns. Ablation studies
are conducted to validate the roles of different components in the model,
demonstrating the significance of each module. The paper also utilizes the SHAP
method to quantify the impact of traditional advantages on the predicted
results, offering a detailed understanding of feature importance. The KNN
method is used to determine the optimal prediction interval, further enhancing
the model's accuracy. The results highlight the effectiveness of combining
traditional statistical methods with modern deep learning techniques for robust
time series forecasting in Sports.
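The hybrid decomposition (linear component plus nonlinear residual model) can be illustrated with a deliberately simplified stand-in: a closed-form least-squares trend in place of ARIMA, and a mean of recent residuals in place of the LSTM. Both substitutions are assumptions for illustration, not the paper's models.

```python
def fit_linear_trend(series):
    """Closed-form least squares for y = a + b*t over t = 0..n-1,
    standing in for the linear (ARIMA) component."""
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    b = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series)) \
        / sum((t - t_mean) ** 2 for t in range(n))
    return y_mean - b * t_mean, b

def hybrid_forecast(series, k=3):
    """Forecast = linear trend extrapolation + a toy nonlinear term
    (mean of the last k residuals, standing in for the LSTM)."""
    a, b = fit_linear_trend(series)
    residuals = [y - (a + b * t) for t, y in enumerate(series)]
    return a + b * len(series) + sum(residuals[-k:]) / k
```

The point of the decomposition is that the residuals handed to the nonlinear model are already detrended, so each component models only the structure it is suited for.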
|
2502.07492
|
RoMA: Robust Malware Attribution via Byte-level Adversarial Training
with Global Perturbations and Adversarial Consistency Regularization
|
cs.CR cs.CV
|
Attributing APT (Advanced Persistent Threat) malware to their respective
groups is crucial for threat intelligence and cybersecurity. However, APT
adversaries often conceal their identities, rendering attribution inherently
adversarial. Existing machine learning-based attribution models, while
effective, remain highly vulnerable to adversarial attacks. For example, the
state-of-the-art byte-level model MalConv sees its accuracy drop from over 90%
to below 2% under PGD (projected gradient descent) attacks. In this study, we
applied existing gradient-based adversarial training techniques from malware
detection and image processing to malware attribution, finding that both
robustness and training efficiency require significant improvement. To
address this, we propose RoMA, a novel single-step adversarial training
approach that integrates global perturbations to generate enhanced adversarial
samples and employs adversarial consistency regularization to improve
representation quality and resilience. A novel APT malware dataset named AMG18,
with diverse samples and realistic class imbalances, is introduced for
evaluation. Extensive experiments show that RoMA significantly outperforms
seven competing methods in both adversarial robustness (e.g., achieving over
80% robust accuracy-more than twice that of the next-best method under PGD
attacks) and training efficiency (e.g., more than twice as fast as the
second-best method in terms of accuracy), while maintaining superior standard
accuracy in non-adversarial scenarios.
|
2502.07494
|
URECA: The Chain of Two Minimum Set Cover Problems exists behind
Adaptation to Shifts in Semantic Code Search
|
cs.AI
|
Adaptation means making a model learn patterns that have shifted from the
training distribution. In general, this adaptation is formulated as the
minimum entropy problem. However, the minimum entropy problem has an inherent
limitation: the shifted initialization cascade phenomenon. We extend the
relationship between the minimum entropy problem and the minimum set cover
problem via the Lebesgue integral. This extension reveals that the internal
mechanism of the minimum entropy problem ignores the relationships between
disentangled representations, which leads to the shifted initialization
cascade. Based on this analysis, we introduce a new clustering algorithm,
Union-find based Recursive Clustering Algorithm~(URECA). URECA is an
efficient clustering algorithm that leverages the relationships between
disentangled representations. Its update rule relies on a
Thresholdly-Updatable Stationary Assumption on the dynamics, a relaxed
version of the Stationary Assumption. This assumption lets URECA transport
disentangled representations without error based on their mutual
relationships. URECA also utilizes a simulation trick to cluster disentangled
representations efficiently. A wide range of evaluations shows that URECA
achieves consistent performance gains in few-shot adaptation to diverse types
of shifts, along with state-of-the-art performance on CoSQA under query shift.
|
2502.07495
|
LLM-Sketch: Enhancing Network Sketches with LLM
|
cs.NI cs.LG
|
Network stream mining is fundamental to many network operations. Sketches, as
compact data structures that offer low memory overhead with bounded accuracy,
have emerged as a promising solution for network stream mining. Recent studies
attempt to optimize sketches using machine learning; however, these approaches
face the challenges of lacking adaptivity to dynamic networks and incurring
high training costs. In this paper, we propose LLM-Sketch, based on the insight
that fields beyond the flow IDs in packet headers can also help infer flow
sizes. By using a two-tier data structure and separately recording large and
small flows, LLM-Sketch improves accuracy while minimizing memory usage.
Furthermore, it leverages fine-tuned large language models (LLMs) to reliably
estimate flow sizes. We evaluate LLM-Sketch on three representative tasks, and
the results demonstrate that LLM-Sketch outperforms state-of-the-art methods by
achieving a $7.5\times$ accuracy improvement.
|
2502.07497
|
On Training-Conditional Conformal Prediction and Binomial Proportion
Confidence Intervals
|
cs.LG
|
Estimating the expectation of a Bernoulli random variable based on N
independent trials is a classical problem in statistics, typically addressed
using Binomial Proportion Confidence Intervals (BPCI). In the control systems
community, many critical tasks-such as certifying the statistical safety of
dynamical systems-can be formulated as BPCI problems. Conformal Prediction
(CP), a distribution-free technique for uncertainty quantification, has gained
significant attention in recent years and has been applied to various control
systems problems, particularly to address uncertainties in learned dynamics or
controllers. A variant known as training-conditional CP was recently employed
to tackle the problem of safety certification. In this note, we highlight that
the use of training-conditional CP in this context does not provide valid
safety guarantees. We demonstrate why CP is unsuitable for BPCI problems and
argue that traditional BPCI methods are better suited for statistical safety
certification.
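As a concrete example of a traditional BPCI, here is the Hoeffding-based interval, one classical distribution-free choice (tighter intervals such as Clopper-Pearson exist; this is a conservative baseline, not the specific method the note advocates).

```python
import math

def hoeffding_bpci(successes, n, delta=0.05):
    """Binomial Proportion Confidence Interval via Hoeffding's
    inequality: with probability at least 1 - delta, the true
    Bernoulli mean lies in the returned interval."""
    p_hat = successes / n
    eps = math.sqrt(math.log(2 / delta) / (2 * n))
    return max(0.0, p_hat - eps), min(1.0, p_hat + eps)
```

The half-width shrinks as $O(1/\sqrt{N})$, giving the valid finite-sample coverage guarantee that safety certification requires.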
|
2502.07500
|
Unified Graph Networks (UGN): A Deep Neural Framework for Solving Graph
Problems
|
cs.LG
|
Deep neural networks have enabled researchers to create powerful generalized
frameworks, such as transformers, that can be used to solve well-studied
problems in various application domains, such as text and image. However, such
generalized frameworks are not available for solving graph problems. Graph
structures are ubiquitous in many applications around us and many graph
problems have been widely studied over the years. In recent times, there has been a
surge in deep neural network based approaches to solve graph problems, with
growing availability of graph structured datasets across diverse domains.
Nevertheless, existing methods are mostly tailored to solve a specific task and
lack the capability to create a generalized model leading to solutions for
different downstream tasks. In this work, we propose a novel,
resource-efficient framework named \emph{U}nified \emph{G}raph \emph{N}etwork
(UGN) by leveraging the feature extraction capability of graph convolutional
neural networks (GCN) and 2-dimensional convolutional neural networks (Conv2D).
UGN unifies various graph learning tasks, such as link prediction, node
classification, community detection, graph-to-graph translation, knowledge
graph completion, and more, within a cohesive framework, while exercising
minimal task-specific extensions (e.g., formation of supernodes for coarsening
massive networks to increase scalability, use of \textit{mean target
connectivity matrix} (MTCM) representation for achieving scalability in graph
translation task, etc.) to enhance the generalization capability of graph
learning and analysis. We test the novel UGN framework for six uncorrelated
graph problems, using twelve different datasets. Experimental results show that
UGN outperforms the state-of-the-art baselines by a significant margin on ten
datasets, while producing comparable results on the remaining dataset.
|
2502.07503
|
Recursive Inference Scaling: A Winning Path to Scalable Inference in
Language and Multimodal Systems
|
cs.AI cs.LG
|
Recent research in language modeling reveals two scaling effects: the
well-known improvement from increased training compute, and a lesser-known
boost from applying more sophisticated or computationally intensive inference
methods. Inspired by recent findings on the fractal geometry of language, we
introduce Recursive INference Scaling (RINS) as a complementary, plug-in recipe
for scaling inference time. For a given fixed model architecture and training
compute budget, RINS substantially improves language modeling performance. It
also generalizes beyond pure language tasks, delivering gains in multimodal
systems, including a +2% improvement in 0-shot ImageNet accuracy for
SigLIP-B/16. Additionally, by deriving data scaling laws, we show that RINS
improves both the asymptotic performance limits and the scaling exponents.
These advantages are maintained even when compared to state-of-the-art
recursive techniques like the "repeat-all-over" (RAO) strategy in Mobile LLM.
Finally, stochastic RINS not only enhances performance further but also
provides the flexibility to optionally forgo increased inference computation at
test time with minimal performance degradation.
|
2502.07505
|
Efficient Continuous Group Convolutions for Local SE(3) Equivariance in
3D Point Clouds
|
cs.CV
|
Extending the translation equivariance property of convolutional neural
networks to larger symmetry groups has been shown to reduce sample complexity
and enable more discriminative feature learning. Further, exploiting additional
symmetries facilitates greater weight sharing than standard convolutions,
leading to an enhanced network expressivity without an increase in parameter
count. However, extending the equivariant properties of a convolution layer
comes at a computational cost. In particular, for 3D data, expanding
equivariance to the SE(3) group (rotation and translation) results in a 6D
convolution operation, which is not tractable for larger data samples such as
3D scene scans. While efforts have been made to develop efficient SE(3)
equivariant networks, existing approaches rely on discretization or only
introduce global rotation equivariance. This limits their applicability to
point clouds representing a scene composed of multiple objects. This work
presents an efficient, continuous, and local SE(3) equivariant convolution
layer for point cloud processing based on general group convolution and local
reference frames. Our experiments show that our approach achieves competitive
or superior performance across a range of datasets and tasks, including object
classification and semantic segmentation, with negligible computational
overhead.
|
2502.07508
|
Enhance-A-Video: Better Generated Video for Free
|
cs.CV
|
DiT-based video generation has achieved remarkable results, but research into
enhancing existing models remains relatively unexplored. In this work, we
introduce a training-free approach to enhance the coherence and quality of
DiT-based generated videos, named Enhance-A-Video. The core idea is enhancing
the cross-frame correlations based on non-diagonal temporal attention
distributions. Thanks to its simple design, our approach can be easily applied
to most DiT-based video generation frameworks without any retraining or
fine-tuning. Across various DiT-based video generation models, our approach
demonstrates promising improvements in both temporal consistency and visual
quality. We hope this research can inspire future explorations in video
generation enhancement.
|
2502.07509
|
Dual Arm Steering of Deformable Linear Objects in 2-D and 3-D
Environments Using Euler's Elastica Solutions
|
cs.RO cs.SY eess.SY
|
This paper describes a method for steering deformable linear objects using
two robot hands in environments populated by sparsely spaced obstacles. The
approach involves manipulating an elastic inextensible rod by varying the
gripping endpoint positions and tangents. Closed-form solutions that describe
the flexible linear object's shape in planar environments, Euler's elastica,
are presented. The paper uses these solutions to formulate criteria for
non-self-intersection, stability, and obstacle avoidance. These criteria are
formulated as constraints in the flexible object six-dimensional configuration
space that represents the robot gripping endpoint positions and tangents. In
particular, this paper introduces a novel criterion that ensures the flexible
object stability during steering. All safety criteria are integrated into a
scheme for steering flexible linear objects in planar environments, which is
lifted into a steering scheme in three-dimensional environments populated by
sparsely spaced obstacles. Experiments with a dual-arm robot demonstrate the
method.
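For reference, the planar elastica underlying these closed-form solutions satisfies a pendulum-type ODE in its tangent angle (standard elastica notation, not necessarily the paper's):

```latex
% theta(s): tangent angle at arc length s; EI: bending stiffness;
% F, psi: magnitude and direction of the internal force in the rod.
EI\,\theta''(s) + F \sin\bigl(\theta(s) - \psi\bigr) = 0,
\qquad \theta(0)=\theta_0,\quad \theta(L)=\theta_L .
```

The gripped endpoint tangents enter as the boundary conditions on $\theta$, while the endpoint positions constrain the integrals of $\cos\theta$ and $\sin\theta$ along the rod, which is what maps the robot's six-dimensional grip configuration onto an elastica shape.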
|
2502.07510
|
Joint Metric Space Embedding by Unbalanced OT with Gromov-Wasserstein
Marginal Penalization
|
cs.LG
|
We propose a new approach for unsupervised alignment of heterogeneous
datasets, which maps data from two different domains without any known
correspondences to a common metric space. Our method is based on an unbalanced
optimal transport problem with Gromov-Wasserstein marginal penalization. It can
be seen as a counterpart to the recently introduced joint multidimensional
scaling method. We prove that there exists a minimizer of our functional and
that for penalization parameters going to infinity, the corresponding sequence
of minimizers converges to a minimizer of the so-called embedded Wasserstein
distance. Our model can be reformulated as a quadratic, multi-marginal,
unbalanced optimal transport problem, for which a bi-convex relaxation admits a
numerical solver via block-coordinate descent. We provide numerical examples
for joint embeddings in Euclidean as well as non-Euclidean spaces.
|
2502.07511
|
Quantitative evaluation of unsupervised clustering algorithms for
dynamic total-body PET image analysis
|
stat.AP cs.CV
|
Background. Recently, dynamic total-body positron emission tomography (PET)
imaging has become possible due to new scanner devices. While clustering
algorithms have been proposed for PET analysis already earlier, there is still
little research systematically evaluating these algorithms for processing of
dynamic total-body PET images. Materials and methods. Here, we compare the
performance of 15 unsupervised clustering methods, including K-means either by
itself or after principal component analysis (PCA) or independent component
analysis (ICA), Gaussian mixture model (GMM), fuzzy c-means (FCM),
agglomerative clustering, spectral clustering, and several newer clustering
algorithms, for classifying time activity curves (TACs) in dynamic PET images.
We use dynamic total-body $^{15}$O-water PET images collected from 30 patients
with suspected or confirmed coronary artery disease. To evaluate the clustering
algorithms in a quantitative way, we use them to classify 5000 TACs from each
image based on whether the curve is taken from brain, right heart ventricle,
right kidney, lower right lung lobe, or urinary bladder. Results. According to
our results, the best methods are GMM, FCM, and ICA combined with mini batch
K-means, which classified the TACs with median accuracies of 89\%, 83\%, and
81\%, respectively, in a processing time of half a second or less on average
for each image. Conclusion. GMM, FCM, and ICA with mini batch K-means show
promise for dynamic total-body PET analysis.
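The simplest of the compared methods, K-means over time-activity curves, can be sketched in pure Python. Farthest-point initialization is used here only to keep the sketch deterministic; it is an illustrative choice, not the study's setup.

```python
def kmeans(tacs, k=2, iters=20):
    """Lloyd's K-means over time-activity curves (TACs), each a
    list of activity values over time."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # Deterministic farthest-point initialization.
    centers = [list(tacs[0])]
    while len(centers) < k:
        far = max(tacs, key=lambda t: min(d2(t, c) for c in centers))
        centers.append(list(far))
    assign = [0] * len(tacs)
    for _ in range(iters):
        # Assign each TAC to its nearest center...
        assign = [min(range(k), key=lambda j: d2(t, centers[j]))
                  for t in tacs]
        # ...then recompute each center as its members' mean curve.
        for j in range(k):
            members = [t for t, a in zip(tacs, assign) if a == j]
            if members:
                centers[j] = [sum(col) / len(members)
                              for col in zip(*members)]
    return assign, centers
```

GMM and FCM replace the hard assignment with probabilistic or fuzzy memberships, which is where their accuracy advantage on TAC classification comes from.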
|
2502.07514
|
A Near-optimal, Scalable and Corruption-tolerant Framework for
Stochastic Bandits: From Single-Agent to Multi-Agent and Beyond
|
cs.LG
|
We investigate various stochastic bandit problems in the presence of
adversarial corruption. A seminal contribution to this area is the
BARBAR~\citep{gupta2019better} algorithm, which is both simple and efficient,
tolerating significant levels of corruption with nearly no degradation in
performance. However, its regret upper bound exhibits a complexity of $O(KC)$,
while the lower bound is $\Omega(C)$. In this paper, we enhance the BARBAR
algorithm by proposing a novel framework called BARBAT, which eliminates the
factor of $K$ and achieves an optimal regret bound up to a logarithmic factor.
We also demonstrate how BARBAT can be extended to various settings, including
graph bandits, combinatorial semi-bandits, batched bandits and multi-agent
bandits. In comparison to the Follow-The-Regularized-Leader (FTRL) family of
methods, which provide a best-of-both-worlds guarantee, our approach is more
efficient and parallelizable. Notably, FTRL-based methods face challenges in
scaling to batched and multi-agent settings.
|
2502.07516
|
The Devil is in the Prompts: De-Identification Traces Enhance
Memorization Risks in Synthetic Chest X-Ray Generation
|
eess.IV cs.AI cs.CV cs.LG
|
Generative models, particularly text-to-image (T2I) diffusion models, play a
crucial role in medical image analysis. However, these models are prone to
training data memorization, posing significant risks to patient privacy.
Synthetic chest X-ray generation is one of the most common applications in
medical image analysis with the MIMIC-CXR dataset serving as the primary data
repository for this task. This study presents the first systematic attempt to
identify prompts and text tokens in MIMIC-CXR that contribute the most to
training data memorization. Our analysis reveals two unexpected findings: (1)
prompts containing traces of de-identification procedures (markers introduced
to hide Protected Health Information) are the most memorized, and (2) among all
tokens, de-identification markers contribute the most towards memorization.
This highlights a broader issue with the standard anonymization practices and
T2I synthesis with MIMIC-CXR. To make matters worse, existing inference-time
memorization mitigation strategies are ineffective and fail to sufficiently
reduce the model's reliance on memorized text tokens. On this front, we propose
actionable strategies for different stakeholders to enhance privacy and improve
the reliability of generative models in medical imaging. Finally, our results
provide a foundation for future work on developing and benchmarking
memorization mitigation techniques for synthetic chest X-ray generation using
the MIMIC-CXR dataset. The anonymized code is available at
https://anonymous.4open.science/r/diffusion_memorization-8011/
|
2502.07523
|
Scaling Off-Policy Reinforcement Learning with Batch and Weight
Normalization
|
cs.LG cs.AI
|
Reinforcement learning has achieved significant milestones, but sample
efficiency remains a bottleneck for real-world applications. Recently, CrossQ
has demonstrated state-of-the-art sample efficiency with a low update-to-data
(UTD) ratio of 1. In this work, we explore CrossQ's scaling behavior with
higher UTD ratios. We identify challenges in the training dynamics, which are
emphasized by higher UTD ratios. To address these, we integrate weight
normalization into the CrossQ framework, a solution that stabilizes training,
has been shown to prevent potential loss of plasticity and keeps the effective
learning rate constant. Our proposed approach reliably scales with increasing
UTD ratios, achieving competitive performance across 25 challenging continuous
control tasks on the DeepMind Control Suite and Myosuite benchmarks, notably
the complex dog and humanoid environments. This work eliminates the need for
drastic interventions, such as network resets, and offers a simple yet robust
pathway for improving sample efficiency and scalability in model-free
reinforcement learning.
|
2502.07526
|
CodePhys: Robust Video-based Remote Physiological Measurement through
Latent Codebook Querying
|
cs.CV
|
Remote photoplethysmography (rPPG) aims to measure non-contact physiological
signals from facial videos, which has shown great potential in many
applications. Most existing methods directly extract video-based rPPG features
by designing neural networks for heart rate estimation. Although they can
achieve acceptable results, the recovery of rPPG signal faces intractable
challenges when interference from real-world scenarios takes place on facial
video. Specifically, facial videos are inevitably affected by non-physiological
factors (e.g., camera device noise, defocus, and motion blur), leading to the
distortion of extracted rPPG signals. Recent rPPG extraction methods are easily
affected by interference and degradation, resulting in noisy rPPG signals. In
this paper, we propose a novel method named CodePhys, which innovatively treats
rPPG measurement as a code query task in a noise-free proxy space (i.e.,
codebook) constructed by ground-truth PPG signals. We consider noisy rPPG
features as queries and generate high-fidelity rPPG features by matching them
with noise-free PPG features from the codebook. Our approach also incorporates
a spatial-aware encoder network with a spatial attention mechanism to highlight
physiologically active areas and uses a distillation loss to reduce the
influence of non-periodic visual interference. Experimental results on four
benchmark datasets demonstrate that CodePhys outperforms state-of-the-art
methods in both intra-dataset and cross-dataset settings.
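The code-query idea reduces, at its simplest, to a nearest-neighbour lookup in a noise-free codebook. The sketch below matches raw vectors for illustration; CodePhys matches learned feature embeddings built from ground-truth PPG signals.

```python
def codebook_query(query, codebook):
    """Replace a noisy rPPG feature vector with the closest
    noise-free codebook entry (squared Euclidean distance),
    so the output is guaranteed to lie in the clean signal space."""
    return min(codebook,
               key=lambda c: sum((q - x) ** 2 for q, x in zip(query, c)))
```

Because every returned vector is a codebook entry, camera noise, defocus, and motion blur in the query can distort *which* entry is retrieved but never the fidelity of the retrieved signal itself.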
|
2502.07527
|
NatureLM: Deciphering the Language of Nature for Scientific Discovery
|
cs.AI cs.LG
|
Foundation models have revolutionized natural language processing and
artificial intelligence, significantly enhancing how machines comprehend and
generate human languages. Inspired by the success of these foundation models,
researchers have developed foundation models for individual scientific domains,
including small molecules, materials, proteins, DNA, and RNA. However, these
models are typically trained in isolation, lacking the ability to integrate
across different scientific domains. Recognizing that entities within these
domains can all be represented as sequences, which together form the "language
of nature", we introduce Nature Language Model (briefly, NatureLM), a
sequence-based science foundation model designed for scientific discovery.
Pre-trained with data from multiple scientific domains, NatureLM offers a
unified, versatile model that enables various applications including: (i)
generating and optimizing small molecules, proteins, RNA, and materials using
text instructions; (ii) cross-domain generation/design, such as
protein-to-molecule and protein-to-RNA generation; and (iii) achieving
state-of-the-art performance in tasks like SMILES-to-IUPAC translation and
retrosynthesis on USPTO-50k. NatureLM offers a promising generalist approach
for various scientific tasks, including drug discovery (hit
generation/optimization, ADMET optimization, synthesis), novel material design,
and the development of therapeutic proteins or nucleotides. We have developed
NatureLM models in different sizes (1 billion, 8 billion, and 46.7 billion
parameters) and observed a clear improvement in performance as the model size
increases.
|
2502.07528
|
Forecasting the future development in quality and value of professional
football players for applications in team management
|
stat.AP cs.LG
|
Transfers in professional football (soccer) are risky investments because of
the large transfer fees and high risks involved. Although data-driven models
can be used to improve transfer decisions, existing models focus on describing
players' historical progress, leaving their future performance unknown.
Moreover, recent developments have called for the use of explainable models
combined with uncertainty quantification of predictions. This paper assesses
explainable machine learning models based on predictive accuracy and
uncertainty quantification methods for the prediction of the future development
in quality and transfer value of professional football players. Using a
historical data set of data-driven indicators describing player quality and the
transfer value of a football player, the models are trained to forecast player
quality and player value one year ahead. These two prediction problems
demonstrate the efficacy of tree-based models, particularly random forest and
XGBoost, in making accurate predictions. In general, the random forest model is
found to be the most suitable model because it provides accurate predictions as
well as an uncertainty quantification method that naturally arises from the
bagging procedure of the random forest model. Additionally, our research shows
that the development of player performance contains nonlinear patterns and
interactions between variables, and that time series information can provide
useful information for the modeling of player performance metrics. Our research
provides models to help football clubs make more informed, data-driven transfer
decisions by forecasting player quality and transfer value.
|
2502.07529
|
Training Deep Learning Models with Norm-Constrained LMOs
|
cs.LG math.OC
|
In this work, we study optimization methods that leverage the linear
minimization oracle (LMO) over a norm-ball. We propose a new stochastic family
of algorithms that uses the LMO to adapt to the geometry of the problem and,
perhaps surprisingly, show that they can be applied to unconstrained problems.
The resulting update rule unifies several existing optimization methods under a
single framework. Furthermore, we propose an explicit choice of norm for deep
architectures, which, as a side benefit, leads to the transferability of
hyperparameters across model sizes. Experimentally, we demonstrate significant
speedups on nanoGPT training without any reliance on Adam. The proposed method
is memory-efficient, requiring only one set of model weights and one set of
gradients, which can be stored in half-precision.
|
2502.07531
|
VidCRAFT3: Camera, Object, and Lighting Control for Image-to-Video
Generation
|
cs.CV cs.AI cs.LG cs.MM
|
Recent image-to-video generation methods have demonstrated success in
enabling control over one or two visual elements, such as camera trajectory or
object motion. However, these methods are unable to offer control over multiple
visual elements due to limitations in data and network efficacy. In this paper,
we introduce VidCRAFT3, a novel framework for precise image-to-video generation
that enables control over camera motion, object motion, and lighting direction
simultaneously. To better decouple control over each visual element, we propose
the Spatial Triple-Attention Transformer, which integrates lighting direction,
text, and image in a symmetric way. Since most real-world video datasets lack
lighting annotations, we construct a high-quality synthetic video dataset, the
VideoLightingDirection (VLD) dataset. This dataset includes lighting direction
annotations and objects of diverse appearance, enabling VidCRAFT3 to
effectively handle strong light transmission and reflection effects.
Additionally, we propose a three-stage training strategy that eliminates the
need for training data annotated with multiple visual elements (camera motion,
object motion, and lighting direction) simultaneously. Extensive experiments on
benchmark datasets demonstrate the efficacy of VidCRAFT3 in producing
high-quality video content, surpassing existing state-of-the-art methods in
terms of control granularity and visual coherence. All code and data will be
publicly available.
|
2502.07532
|
Diffusion-LAM: Probabilistic Limited Area Weather Forecasting with
Diffusion
|
cs.LG physics.ao-ph
|
Machine learning methods have been shown to be effective for weather
forecasting, offering improved speed and competitive accuracy compared to
traditional numerical models. While early efforts primarily concentrated on
deterministic
predictions, the field has increasingly shifted toward probabilistic
forecasting to better capture the forecast uncertainty. Most machine
learning-based models have been designed for global-scale predictions, with
only limited work targeting regional or limited area forecasting, which allows
more specialized and flexible modeling for specific locations. This work
introduces Diffusion-LAM, a probabilistic limited area weather model leveraging
conditional diffusion. By conditioning on boundary data from surrounding
regions, our approach generates forecasts within a defined area. Experimental
results on the MEPS limited area dataset demonstrate the potential of
Diffusion-LAM to deliver accurate probabilistic forecasts, highlighting its
promise for limited-area weather prediction.
|
2502.07541
|
Corporate Greenwashing Detection in Text -- a Survey
|
cs.CL
|
Greenwashing is an effort to mislead the public about the environmental
impact of an entity, such as a state or company. We provide a comprehensive
survey of the scientific literature addressing natural language processing
methods to identify potentially misleading climate-related corporate
communications, indicative of greenwashing. We break the detection of
greenwashing into intermediate tasks, and review the state-of-the-art
approaches for each of them. We discuss datasets, methods, and results, as well
as limitations and open challenges. We also provide an overview of how far the
field has come as a whole, and point out future research directions.
|
2502.07542
|
Exoplanet Transit Candidate Identification in TESS Full-Frame Images via
a Transformer-Based Algorithm
|
astro-ph.EP astro-ph.GA astro-ph.IM cs.AI
|
The Transiting Exoplanet Survey Satellite (TESS) is surveying a large
fraction of the sky, generating a vast database of photometric time series data
that requires thorough analysis to identify exoplanetary transit signals.
Automated learning approaches have been successfully applied to identify
transit signals. However, most existing methods focus on the classification and
validation of candidates, while few efforts have explored new techniques for
the search of candidates. To search for new exoplanet transit candidates, we
propose an approach to identify exoplanet transit signals without the need for
phase folding or assuming periodicity in the transit signals, such as those
observed in multi-transit light curves. To achieve this, we implement a new
neural network inspired by Transformers to directly process Full Frame Image
(FFI) light curves to detect exoplanet transits. Transformers, originally
developed for natural language processing, have recently demonstrated
significant success in capturing long-range dependencies compared to previous
approaches focused on sequential data. This ability allows us to employ
multi-head self-attention to identify exoplanet transit signals directly from
the complete light curves, combined with background and centroid time series,
without requiring prior transit parameters. The network is trained to learn
characteristics of the transit signal, like the dip shape, which helps
distinguish planetary transits from other variability sources. Our model
successfully identified 214 new planetary system candidates, including 122
multi-transit light curves, 88 single-transit and 4 multi-planet systems from
TESS sectors 1-26 with a radius > 0.27 $R_{\mathrm{Jupiter}}$, demonstrating
its ability to detect transits regardless of their periodicity.
|
2502.07544
|
Grammar Control in Dialogue Response Generation for Language Learning
Chatbots
|
cs.CL
|
Chatbots based on large language models offer cheap conversation practice
opportunities for language learners. However, they are hard to control for
linguistic forms that correspond to learners' current needs, such as grammar.
We control grammar in chatbot conversation practice by grounding a dialogue
response generation model in a pedagogical repository of grammar skills. We
also explore how this control helps learners to produce specific grammar. We
comprehensively evaluate prompting, fine-tuning, and decoding strategies for
grammar-controlled dialogue response generation. Strategically decoding Llama3
outperforms GPT-3.5 when tolerating minor response quality losses. Our
simulation predicts grammar-controlled responses to support grammar acquisition
adapted to learner proficiency. Existing language learning chatbots and
research on second language acquisition benefit from these affordances. Code
available on GitHub.
|
2502.07547
|
Instance-dependent Early Stopping
|
cs.LG
|
In machine learning practice, early stopping has been widely used to
regularize models and can save computational costs by halting the training
process when the model's performance on a validation set stops improving.
However, conventional early stopping applies the same stopping criterion to all
instances without considering their individual learning statuses, which leads
to redundant computations on instances that are already well-learned. To
further improve the efficiency, we propose an Instance-dependent Early Stopping
(IES) method that adapts the early stopping mechanism from the entire training
set to the instance level, based on the core principle that once the model has
mastered an instance, the training on it should stop. IES considers an instance
as mastered if the second-order differences of its loss value remain within a
small range around zero. This offers a more consistent measure of an instance's
learning status compared with directly using the loss value, and thus allows
for a unified threshold to determine when an instance can be excluded from
further backpropagation. We show that excluding mastered instances from
backpropagation can increase the gradient norms, thereby accelerating the
decrease of the training loss and speeding up the training process. Extensive
experiments on benchmarks demonstrate that the IES method can reduce
backpropagation instances by 10%-50% while maintaining or even slightly
improving the test accuracy and transfer learning performance of a model.
|
2502.07549
|
HGTUL: A Hypergraph-based Model For Trajectory User Linking
|
cs.LG cs.AI
|
Trajectory User Linking (TUL), which links anonymous trajectories with users
who generate them, plays a crucial role in modeling human mobility. Despite
significant advancements in this field, existing studies primarily neglect the
high-order inter-trajectory relationships, which represent complex associations
among multiple trajectories, manifested through multi-location co-occurrence
patterns emerging when trajectories intersect at various Points of Interest
(POIs). Furthermore, they also overlook the variable influence of POIs on
different trajectories, as well as the user class imbalance problem caused by
disparities in user activity levels and check-in frequencies. To address these
limitations, we propose a novel HyperGraph-based multi-perspective Trajectory
User Linking model (HGTUL). Our model learns trajectory representations from
both relational and spatio-temporal perspectives: (1) it captures high-order
associations among trajectories by constructing a trajectory hypergraph and
leverages a hypergraph attention network to learn the variable impact of POIs
on trajectories; (2) it models the spatio-temporal characteristics of
trajectories by incorporating their temporal and spatial information into a
sequential encoder. Moreover, we design a data balancing method to effectively
address the user class imbalance problem and experimentally validate its
significance in TUL. Extensive experiments on three real-world datasets
demonstrate that HGTUL outperforms state-of-the-art baselines, achieving
improvements of 2.57%~20.09% and 5.68%~26.00% in ACC@1 and Macro-F1 metrics,
respectively.
|
2502.07551
|
Early Stopping Against Label Noise Without Validation Data
|
cs.LG
|
Early stopping methods in deep learning face the challenge of balancing the
volume of training and validation data, especially in the presence of label
noise. Concretely, sparing more data for validation from training data would
limit the performance of the learned model, yet insufficient validation data
could result in a sub-optimal selection of the desired model. In this paper, we
propose a novel early stopping method called Label Wave, which does not require
validation data for selecting the desired model in the presence of label noise.
It works by tracking the changes in the model's predictions on the training set
during the training process, aiming to halt training before the model unduly
fits mislabeled data. This method is empirically supported by our observation
that minimum fluctuations in predictions typically occur at the training epoch
before the model excessively fits mislabeled data. Through extensive
experiments, we show both the effectiveness of the Label Wave method across
various settings and its capability to enhance the performance of existing
methods for learning with noisy labels.
|
2502.07552
|
Unsupervised Translation of Emergent Communication
|
cs.CL cs.AI
|
Emergent Communication (EC) provides a unique window into the language
systems that emerge autonomously when agents are trained to jointly achieve
shared goals. However, it is difficult to interpret EC and evaluate its
relationship with natural languages (NL). This study employs unsupervised
neural machine translation (UNMT) techniques to decipher ECs formed during
referential games with varying task complexities, influenced by the semantic
diversity of the environment. Our findings demonstrate UNMT's potential to
translate EC, illustrating that task complexity characterized by semantic
diversity enhances EC translatability, while higher task complexity with
constrained semantic variability exhibits pragmatic EC, which, although
challenging to interpret, remains suitable for translation. This research marks
the first attempt, to our knowledge, to translate EC without the aid of
parallel data.
|
2502.07553
|
Attention Learning is Needed to Efficiently Learn Parity Function
|
cs.LG
|
Transformers, with their attention mechanisms, have emerged as the
state-of-the-art architectures of sequential modeling and empirically
outperform feed-forward neural networks (FFNNs) across many fields, such as
natural language processing and computer vision. However, their generalization
ability, particularly for low-sensitivity functions, remains less studied. We
bridge this gap by analyzing transformers on the $k$-parity problem. Daniely
and Malach (NeurIPS 2020) show that FFNNs with one hidden layer and $O(nk^7
\log k)$ parameters can learn $k$-parity, where the input length $n$ is
typically much larger than $k$. In this paper, we prove that FFNNs require at
least $\Omega(n)$ parameters to learn $k$-parity, while transformers require
only $O(k)$ parameters, surpassing the theoretical lower bound needed by FFNNs.
We further prove that this parameter efficiency cannot be achieved with fixed
attention heads. Our work establishes transformers as theoretically superior to
FFNNs in learning the parity function, showing how their attention mechanisms
enable parameter-efficient generalization in functions with low sensitivity.
|
2502.07555
|
O1 Embedder: Let Retrievers Think Before Action
|
cs.CL
|
The growing power of large language models (LLMs) has revolutionized how
people access and utilize information. Notably, LLMs excel at performing
fine-grained data representation, which facilitates precise retrieval of
information. They also generate high-quality answers based on external
references, enabling the production of useful knowledge. The recent
introduction of reasoning models, like OpenAI O1 and DeepSeek R1, marks another
leap forward, highlighting LLMs' ability to think progressively before
delivering final answers. This breakthrough significantly improves the ability
to address complex tasks, e.g., coding and math proofs.
Inspired by this progress, we aim to develop similar capabilities for
retrieval models, which hold great promise for tackling critical challenges in
the field, including multi-task retrieval, zero-shot retrieval, and tasks
requiring intensive reasoning of complex relationships. With this motivation,
we propose a novel approach called O1 Embedder, which generates useful thoughts
for the input query before retrieving the target documents. To
realize this objective, we conquer two technical difficulties. First, we design
a data synthesis workflow, creating training signals for O1 Embedder by
generating initial thoughts from an LLM-expert and subsequently refining them
using a retrieval committee. Second, we optimize the training process, enabling
a pre-trained model to be jointly fine-tuned to generate retrieval thoughts via
behavior cloning and perform dense retrieval through contrastive learning. Our
approach is evaluated by comprehensive experiments, where substantial
improvements are achieved across 12 popular datasets, spanning both in-domain
and out-of-domain scenarios. These results highlight O1 Embedder's remarkable
accuracy and generalizability, paving the way for the development of
next-generation IR foundation models.
|
2502.07556
|
SketchFlex: Facilitating Spatial-Semantic Coherence in Text-to-Image
Generation with Region-Based Sketches
|
cs.HC cs.CV
|
Text-to-image models can generate visually appealing images from text
descriptions. Efforts have been devoted to improving model controls with prompt
tuning and spatial conditioning. However, our formative study highlights the
challenges for non-expert users in crafting appropriate prompts and specifying
fine-grained spatial conditions (e.g., depth or canny references) to generate
semantically cohesive images, especially when multiple objects are involved. In
response, we introduce SketchFlex, an interactive system designed to improve
the flexibility of spatially conditioned image generation using rough region
sketches. The system automatically infers user prompts with rational
descriptions within a semantic space enriched by crowd-sourced object
attributes and relationships. Additionally, SketchFlex refines users' rough
sketches into canny-based shape anchors, ensuring the generation quality and
alignment of user intentions. Experimental results demonstrate that SketchFlex
achieves more cohesive image generations than end-to-end models, meanwhile
significantly reducing cognitive load and better matching user intentions
compared to region-based generation baseline.
|
2502.07558
|
Efficient Sparsification of Simplicial Complexes via Local Densities of
States
|
stat.ML cs.CG cs.DM cs.NA cs.SI math.NA
|
Simplicial complexes (SCs), a generalization of graph models for relational
data that account for higher-order relations between data items, have become a
popular abstraction for analyzing complex data using tools from topological
data analysis or topological signal processing. However, the analysis of many
real-world datasets leads to dense SCs with a large number of higher-order
interactions. Unfortunately, analyzing such large SCs often has a prohibitive
cost in terms of computation time and memory consumption. The sparsification of
such complexes, i.e., the approximation of an original SC with a sparser
simplicial complex with only a log-linear number of high-order simplices while
maintaining a spectrum close to the original SC, is of broad interest.
In this work, we develop a novel method for the probabilistic sparsification of
SCs. At its core lies the efficient computation of sparsifying sampling
probabilities through local densities of states as functional descriptors of the
spectral information. To avoid pathological structures in the spectrum of the
corresponding Hodge Laplacian operators, we suggest a "kernel-ignoring"
decomposition for approximating the sampling probability; additionally, we
exploit error estimates to show asymptotically prevailing algorithmic
complexity of the developed method. The performance of the framework is
demonstrated on the family of Vietoris--Rips filtered simplicial complexes.
|
2502.07560
|
Navigating Semantic Drift in Task-Agnostic Class-Incremental Learning
|
cs.CV
|
Class-incremental learning (CIL) seeks to enable a model to sequentially
learn new classes while retaining knowledge of previously learned ones.
Balancing flexibility and stability remains a significant challenge,
particularly when the task ID is unknown. To address this, our study reveals
that the gap in feature distribution between novel and existing tasks is
primarily driven by differences in mean and covariance moments. Building on
this insight, we propose a novel semantic drift calibration method that
incorporates mean shift compensation and covariance calibration. Specifically,
we calculate each class's mean by averaging its sample embeddings and estimate
task shifts using weighted embedding changes based on their proximity to the
previous mean, effectively capturing mean shifts for all learned classes with
each new task. We also apply Mahalanobis distance constraint for covariance
calibration, aligning class-specific embedding covariances between old and
current networks to mitigate the covariance shift. Additionally, we integrate a
feature-level self-distillation approach to enhance generalization.
Comprehensive experiments on commonly used datasets demonstrate the
effectiveness of our approach. The source code is available at
\href{https://github.com/fwu11/MACIL.git}{https://github.com/fwu11/MACIL.git}.
|
2502.07562
|
LoRP-TTS: Low-Rank Personalized Text-To-Speech
|
cs.SD cs.AI eess.AS
|
Speech synthesis models convert written text into natural-sounding audio.
While earlier models were limited to a single speaker, recent advancements have
led to the development of zero-shot systems that generate realistic speech from
a wide range of speakers using their voices as additional prompts. However,
they still struggle with imitating non-studio-quality samples that differ
significantly from the training datasets. In this work, we demonstrate that
utilizing Low-Rank Adaptation (LoRA) allows us to successfully use even single
recordings of spontaneous speech in noisy environments as prompts. This
approach enhances speaker similarity by up to $30pp$ while preserving content
and naturalness. It represents a significant step toward creating truly diverse
speech corpora, which is crucial for all speech-related tasks.
|
2502.07563
|
LASP-2: Rethinking Sequence Parallelism for Linear Attention and Its
Hybrid
|
cs.LG cs.AI cs.CL
|
Linear sequence modeling approaches, such as linear attention, provide
advantages like linear-time training and constant-memory inference over
sequence lengths. However, existing sequence parallelism (SP) methods are
either not optimized for the right-product-first feature of linear attention or
use a ring-style communication strategy, which results in lower computation
parallelism and limits their scalability for longer sequences in distributed
systems. In this paper, we introduce LASP-2, a new SP method to enhance both
communication and computation parallelism when training linear attention
transformer models with very long input sequences. Compared to the previous
work LASP, LASP-2 rethinks the minimal communication requirement for SP on
linear attention layers and reorganizes the whole communication-computation
workflow of LASP. In this way, only a single AllGather collective communication
is needed
on intermediate memory states, whose sizes are independent of the sequence
length, leading to significant improvements of both communication and
computation parallelism, as well as their overlap. Additionally, we extend
LASP-2 to LASP-2H by applying similar communication redesign to standard
attention modules, offering an efficient SP solution for hybrid models that
blend linear and standard attention layers. Our evaluation on a Linear-Llama3
model, a variant of Llama3 with linear attention replacing standard attention,
demonstrates the effectiveness of LASP-2 and LASP-2H. Specifically, LASP-2
achieves training speed improvements of 15.2% over LASP and 36.6% over Ring
Attention, with a sequence length of 2048K across 64 GPUs. The Code is released
as a part of: https://github.com/OpenSparseLLMs/Linear-MoE.
|
2502.07564
|
An Elliptic Curve Based Solution to the Perspective-Three-Point Problem
|
cs.CV math.AG
|
The Perspective-Three-Point Problem (P3P) is solved by first focusing on
determining the directions of the lines through pairs of control points,
relative to the camera, rather than the distances from the camera to the
control points. The analysis of this produces an efficient, accurate and
reasonably simple P3P solver, which is compared with a state-of-the-art P3P
solver, "Lambda Twist." Both methods depend on the accurate computation of a
single root of a cubic polynomial. They have been implemented and tested for a
wide range of control-point triangles, and under certain reasonable
restrictions, the new method is noticeably more accurate than Lambda Twist,
though it is slower. However, the principal value of the present work is not in
introducing yet another P3P solver, but lies rather in the discovery of an
intimate connection between the P3P problem and a special family of elliptic
curves that includes curves utilized in cryptography. This holds the potential
for further advances in a number of directions. To make this connection, an
interesting spherical analogue of an ancient "sliding" problem is stated and
solved.
|
2502.07566
|
Capacity of the Binary Energy Harvesting Channel
|
cs.IT math.IT
|
The capacity of a channel with an energy-harvesting (EH) encoder and a finite
battery remains an open problem, even in the noiseless case. A key instance of
this scenario is the binary EH channel (BEHC), where the encoder has a
unit-sized battery and binary inputs. Existing capacity expressions for the
BEHC are not computable, motivating this work, which determines the capacity to
any desired precision via convex optimization. By modeling the system as a
finite-state channel with state information known causally at the encoder, we
derive single-letter lower and upper bounds using auxiliary directed graphs,
termed $Q$-graphs. These $Q$-graphs exhibit a special structure with a finite
number of nodes, $N$, enabling the formulation of the bounds as convex
optimization problems. As $N$ increases, the bounds tighten and converge to the
capacity, with a gap that vanishes as $N$ grows. For any EH probability
parameter $\eta\in \{0.1,0.2, \dots, 0.9\}$, we compute the capacity to a
precision of $10^{-6}$, outperforming the best-known bounds in the literature.
Finally, we
extend this framework to noisy EH channels with feedback, and present numerical
achievable rates for the binary symmetric channel using a Markov decision
process.
|
2502.07575
|
Towards Efficient and Multifaceted Computer-assisted Pronunciation
Training Leveraging Hierarchical Selective State Space Model and Decoupled
Cross-entropy Loss
|
eess.AS cs.CL
|
Prior efforts in building computer-assisted pronunciation training (CAPT)
systems often treat automatic pronunciation assessment (APA) and
mispronunciation detection and diagnosis (MDD) as separate fronts: the former
aims to provide multiple pronunciation aspect scores across diverse linguistic
levels, while the latter focuses instead on pinpointing the precise phonetic
pronunciation errors made by non-native language learners. However, it is
generally expected that a full-fledged CAPT system should perform both
functionalities simultaneously and efficiently. In response to this surging
demand, in this work we first propose HMamba, a novel CAPT approach that
seamlessly integrates APA and MDD tasks in parallel. In addition, we introduce
a novel loss function, decoupled cross-entropy loss (deXent), specifically
tailored for MDD to facilitate better-supervised learning for detecting
mispronounced phones, thereby enhancing overall performance. A comprehensive
set of empirical results on the speechocean762 benchmark dataset demonstrates
the effectiveness of our approach on APA. Notably, our proposed approach also
yields a considerable improvement in MDD performance over a strong baseline,
achieving an F1-score of 63.85%. Our codes are made available at
https://github.com/Fuann/hmamba
|
2502.07577
|
Automated Capability Discovery via Model Self-Exploration
|
cs.LG cs.AI cs.CL
|
Foundation models have become general-purpose assistants, exhibiting diverse
capabilities across numerous domains through training on web-scale data. It
remains challenging to precisely characterize even a fraction of the full
spectrum of capabilities and potential risks in any new model. Existing
evaluation approaches often require significant human effort, and it is taking
increasing effort to design ever harder challenges for more capable models. We
introduce Automated Capability Discovery (ACD), a framework that designates one
foundation model as a scientist to systematically propose open-ended tasks
probing the abilities of a subject model (potentially itself). By combining
frontier models with ideas from the field of open-endedness, ACD automatically
and systematically uncovers both surprising capabilities and failures in the
subject model. We demonstrate ACD across a range of foundation models
(including the GPT, Claude, and Llama series), showing that it automatically
reveals thousands of capabilities that would be challenging for any single team
to uncover. We further validate our method's automated scoring with extensive
human surveys, observing high agreement between model-generated and human
evaluations. By leveraging foundation models' ability to both create tasks and
self-evaluate, ACD is a significant step toward scalable, automated evaluation
of novel AI systems. All code and evaluation logs are open-sourced at
https://github.com/conglu1997/ACD.
|
2502.07579
|
Single-Step Consistent Diffusion Samplers
|
cs.LG stat.ML
|
Sampling from unnormalized target distributions is a fundamental yet
challenging task in machine learning and statistics. Existing sampling
algorithms typically require many iterative steps to produce high-quality
samples, leading to high computational costs that limit their practicality in
time-sensitive or resource-constrained settings. In this work, we introduce
consistent diffusion samplers, a new class of samplers designed to generate
high-fidelity samples in a single step. We first develop a distillation
algorithm to train a consistent diffusion sampler from a pretrained diffusion
model without pre-collecting large datasets of samples. Our algorithm leverages
incomplete sampling trajectories and noisy intermediate states directly from
the diffusion process. We further propose a method to train a consistent
diffusion sampler from scratch, fully amortizing exploration by training a
single model that both performs diffusion sampling and skips intermediate steps
using a self-consistency loss. Through extensive experiments on a variety of
unnormalized distributions, we show that our approach yields high-fidelity
samples using less than 1% of the network evaluations required by traditional
diffusion samplers.
|
2502.07580
|
Generative Modeling with Bayesian Sample Inference
|
cs.LG stat.ML
|
We derive a novel generative model from the simple act of Gaussian posterior
inference. Treating the generated sample as an unknown variable to infer lets
us formulate the sampling process in the language of Bayesian probability. Our
model uses a sequence of prediction and posterior update steps to narrow down
the unknown sample from a broad initial belief. In addition to a rigorous
theoretical analysis, we establish a connection between our model and diffusion
models and show that it includes Bayesian Flow Networks (BFNs) as a special
case. In our experiments, we demonstrate improved performance over both BFNs
and Variational Diffusion Models, achieving competitive likelihood scores on
CIFAR10 and ImageNet.
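The "narrowing" step described above can be illustrated with a minimal sketch of one precision-weighted Gaussian posterior update (an illustrative textbook update, not the paper's full model; function and parameter names are hypothetical):

```python
def gaussian_posterior_update(mu, var, obs, obs_var):
    # Precision-weighted fusion of the current Gaussian belief (mu, var)
    # with one noisy observation of the unknown sample. Each prediction /
    # update step tightens the belief: the posterior variance shrinks.
    precision = 1.0 / var + 1.0 / obs_var
    new_var = 1.0 / precision
    new_mu = new_var * (mu / var + obs / obs_var)
    return new_mu, new_var
```

Iterating this update from a broad initial belief is what lets the model narrow down the generated sample step by step.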
|
2502.07584
|
Understanding the Generalization Error of Markov algorithms through
Poissonization
|
stat.ML cs.LG
|
Using continuous-time stochastic differential equation (SDE) proxies to
stochastic optimization algorithms has proven fruitful for understanding their
generalization abilities. A significant part of these approaches is based on
the so-called ``entropy flows'', which greatly simplify the generalization
analysis. Unfortunately, such well-structured entropy flows cannot be obtained
for most discrete-time algorithms, and the existing SDE approaches remain
limited to specific noise and algorithmic structures. We aim to alleviate this
issue by introducing a generic framework for analyzing the generalization error
of Markov algorithms through `Poissonization', a continuous-time approximation
of discrete-time processes with formal approximation guarantees. Through this
approach, we first develop a novel entropy flow, which directly leads to
PAC-Bayesian generalization bounds. We then draw novel links to modified
versions of the celebrated logarithmic Sobolev inequalities (LSI), identify
cases where such LSIs are satisfied, and obtain improved bounds. Beyond its
generality, our framework allows exploiting specific properties of learning
algorithms. In particular, we incorporate the noise structure of different
algorithm types - namely, those with additional noise injections (noisy) and
those without (non-noisy) - through various technical tools. This illustrates
the capacity of our methods to achieve known (yet, Poissonized) and new
generalization bounds.
|
2502.07586
|
We Can't Understand AI Using our Existing Vocabulary
|
cs.CL cs.AI
|
This position paper argues that, in order to understand AI, we cannot rely on
our existing vocabulary of human words. Instead, we should strive to develop
neologisms: new words that represent precise human concepts that we want to
teach machines, or machine concepts that we need to learn. We start from the
premise that humans and machines have differing concepts. This means
interpretability can be framed as a communication problem: humans must be able
to reference and control machine concepts, and communicate human concepts to
machines. Creating a shared human-machine language through developing
neologisms, we believe, could solve this communication problem. Successful
neologisms achieve a useful amount of abstraction: not too detailed, so they're
reusable in many contexts, and not too high-level, so they convey precise
information. As a proof of concept, we demonstrate how a "length neologism"
enables controlling LLM response length, while a "diversity neologism" allows
sampling more variable responses. Taken together, we argue that we cannot
understand AI using our existing vocabulary, and expanding it through
neologisms creates opportunities for both controlling and understanding
machines better.
|
2502.07587
|
SEMU: Singular Value Decomposition for Efficient Machine Unlearning
|
cs.LG
|
While the capabilities of generative foundational models have advanced
rapidly in recent years, methods to prevent harmful and unsafe behaviors remain
underdeveloped. Among the pressing challenges in AI safety, machine unlearning
(MU) has become increasingly critical to meet upcoming safety regulations. Most
existing MU approaches focus on altering the most significant parameters of the
model. However, these methods often require fine-tuning substantial portions of
the model, resulting in high computational costs and training instabilities,
which are typically mitigated by access to the original training dataset.
In this work, we address these limitations by leveraging Singular Value
Decomposition (SVD) to create a compact, low-dimensional projection that
enables the selective forgetting of specific data points. We propose Singular
Value Decomposition for Efficient Machine Unlearning (SEMU), a novel approach
designed to optimize MU in two key aspects. First, SEMU minimizes the number of
model parameters that need to be modified, effectively removing unwanted
knowledge while making only minimal changes to the model's weights. Second,
SEMU eliminates the dependency on the original training dataset, preserving the
model's previously acquired knowledge without additional data requirements.
Extensive experiments demonstrate that SEMU achieves competitive performance
while significantly improving efficiency in terms of both data usage and the
number of modified parameters.
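The general idea of restricting the unlearning update to a compact SVD subspace can be sketched as follows (an illustration of the principle only, not SEMU's actual procedure; names are hypothetical):

```python
import numpy as np

def lowrank_update_basis(W, rank):
    # Top singular directions of the weight matrix define a compact,
    # low-dimensional subspace in which the forgetting update may act.
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank], Vt[:rank, :]

def project_update(dW, U, Vt):
    # Keep only the component of the raw update dW expressible in the
    # chosen subspace, so that few effective parameters are modified.
    return U @ (U.T @ dW @ Vt.T) @ Vt
```

With a small `rank`, the projected update has rank at most `rank`, which is the sense in which the number of modified parameters stays minimal.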
|
2502.07590
|
DSV: Exploiting Dynamic Sparsity to Accelerate Large-Scale Video DiT
Training
|
cs.DC cs.CV
|
Diffusion Transformers (DiTs) have shown remarkable performance in modeling
and generating high-quality videos. However, the quadratic computational
complexity of the 3D full attention mechanism presents significant challenges in
scaling video DiT training, especially for high-definition and lengthy videos,
where attention can dominate up to 95% of the end-to-end time and necessitate
specialized communication paradigms to handle large input sizes.
This paper introduces DSV, a novel framework designed to accelerate and scale
the training of video DiTs by leveraging the inherent dynamic attention
sparsity throughout the training process. DSV employs a two-stage training
algorithm that exploits sparsity patterns, focusing on critical elements
supported by efficient, tailored kernels. To accommodate the new sparsity
dimension, we develop a hybrid sparsity-aware context parallelism that
effectively scales to large inputs by addressing the heterogeneity of sparsity
across attention heads and blocks, resulting in optimized sparse computation
and communication. Extensive evaluations demonstrate that DSV achieves up to
3.02x gain in training throughput with nearly no quality degradation.
|
2502.07591
|
DMWM: Dual-Mind World Model with Long-Term Imagination
|
cs.LG cs.AI
|
Imagination in world models is crucial for enabling agents to learn
long-horizon policies in a sample-efficient manner. Existing recurrent
state-space model (RSSM)-based world models depend on single-step statistical
inference to capture the environment dynamics, and, hence, they are unable to
perform long-term imagination tasks due to the accumulation of prediction
errors. Inspired by the dual-process theory of human cognition, we propose a
novel dual-mind world model (DMWM) framework that integrates logical reasoning
to enable imagination with logical consistency. DMWM is composed of two
components: an RSSM-based System 1 (RSSM-S1) component that handles state
transitions in an intuitive manner and a logic-integrated neural network-based
System 2 (LINN-S2) component that guides the imagination process through
hierarchical deep logical reasoning. The inter-system feedback mechanism is
designed to ensure that the imagination process follows the logical rules of
the real environment. The proposed framework is evaluated on benchmark tasks
that require long-term planning from the DMControl suite. Extensive
experimental results demonstrate that the proposed framework yields significant
improvements in terms of logical coherence, trial efficiency, data efficiency
and long-term imagination over the state-of-the-art world models.
|
2502.07592
|
YOLO Network For Defect Detection In Optical lenses
|
cs.CV
|
Mass-produced optical lenses often exhibit defects that alter their
scattering properties and compromise quality standards. Manual inspection is
usually adopted to detect defects, but it suffers from low accuracy, high
error rates and limited scalability. To address these challenges,
this study presents an automated defect detection system based on the YOLOv8
deep learning model. A custom dataset of optical lenses, annotated with defect
and lens regions, was created to train the model. Experimental results obtained
in this study reveal that the system can be used to efficiently and accurately
detect defects in optical lenses. The proposed system can be utilized in
real-time industrial environments to enhance quality control processes by
enabling reliable and scalable defect detection in optical lens manufacturing.
|
2502.07595
|
Distributed Coverage Control for Time-Varying Spatial Processes
|
cs.RO
|
Multi-robot systems are essential for environmental monitoring, particularly
for tracking spatial phenomena such as pollution, soil minerals, and water
salinity. This study addresses the challenge of deploying a
multi-robot team for optimal coverage in environments where the density
distribution, describing areas of interest, is unknown and changes over time.
We propose a fully distributed control strategy that uses Gaussian Processes
(GPs) to model the spatial field and balance the trade-off between learning the
field and optimally covering it. Unlike existing approaches, we address a more
realistic scenario by handling time-varying spatial fields, where the
exploration-exploitation trade-off is dynamically adjusted over time. Each
robot operates locally, using only its own collected data and the information
shared by the neighboring robots. To address the computational limits of GPs,
the algorithm efficiently manages the volume of data by selecting only the most
relevant samples for the process estimation. The performance of the proposed
algorithm is evaluated through several simulations and experiments,
incorporating real-world data phenomena to validate its effectiveness.
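The GP field estimate each robot maintains from its own and its neighbors' samples can be sketched as plain RBF-kernel GP regression (a minimal, static, centralized sketch with hypothetical names; the paper additionally handles time-varying fields, distributed operation, and sample selection):

```python
import numpy as np

def gp_posterior(X, y, Xs, lengthscale=1.0, noise=1e-2):
    # GP regression on 1-D inputs: posterior mean estimates the density
    # field, posterior variance drives the exploration-exploitation
    # trade-off (high variance = poorly explored region).
    def k(A, B):
        d = A[:, None] - B[None, :]
        return np.exp(-0.5 * (d / lengthscale) ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(Xs, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var
```

Regions far from collected samples keep high posterior variance, signaling where coverage should favor exploration over exploitation.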
|
2502.07596
|
Enviro-IoT: Calibrating Low-Cost Environmental Sensors in Urban Settings
|
eess.SY cs.SY
|
Low-cost miniaturised sensors offer significant advantages for monitoring the
environment accurately and in real time. Air quality monitoring has attracted
much attention in recent years because of its growing impact on the
environment and, more personally, on human health and mental wellbeing. Rapid
growth in sensor and Internet of Things (IoT) technologies is paving the way
for low-cost systems to transform global monitoring of air quality. Drawing on
4 years of development work, in this paper we outline the design,
implementation and analysis of \textit{Enviro-IoT} as a step towards
monitoring air quality levels within urban environments by means of a
low-cost sensing system. A 9-month in-the-wild study was performed to evaluate
the Enviro-IoT system against industry-standard equipment, achieving
accuracies of 98\%, 97\% and 97\% for Particulate Matter 2.5, Particulate
Matter 10 and Nitrogen Dioxide respectively. The results of this case study,
made up of 57,120 data points, highlight that low-cost sensors coupled with
IoT technologies can be validated against research-grade industrial
instruments.
|
2502.07599
|
DPO-Shift: Shifting the Distribution of Direct Preference Optimization
|
cs.CL
|
Direct Preference Optimization (DPO) and its variants have become
increasingly popular for aligning language models with human preferences. These
methods aim to teach models to better distinguish between chosen (or preferred)
and rejected (or dispreferred) responses. However, prior research has
identified that the probability of chosen responses often decreases during
training, and this phenomenon is known as likelihood displacement. To tackle
this challenge, in this work we introduce \method to controllably shift the
distribution of the chosen probability. Then, we show that \method exhibits a
fundamental trade-off between improving the chosen probability and sacrificing
the reward margin, as supported by both theoretical analysis and experimental
validation. Furthermore, we demonstrate the superiority of \method over DPO on
downstream tasks such as MT-Bench and a designed win rate experiment. We
believe this study shows that the likelihood displacement issue of DPO can be
effectively mitigated with a simple, theoretically grounded solution. Our code
is available at https://github.com/Meaquadddd/DPO-Shift.
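The kind of shift described above can be illustrated with a minimal sketch of a DPO-style loss whose rejected-response reward is down-weighted (an illustrative reading, not necessarily the paper's exact shift function; parameter names are hypothetical):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dpo_shift_loss(logp_chosen, logp_rejected,
                   ref_logp_chosen, ref_logp_rejected,
                   beta=0.1, lam=0.9):
    # Implicit rewards: beta-scaled log-probability ratios w.r.t. the
    # frozen reference policy, as in standard DPO.
    r_chosen = beta * (logp_chosen - ref_logp_chosen)
    r_rejected = beta * (logp_rejected - ref_logp_rejected)
    # Down-weighting the rejected reward (lam < 1) shifts the chosen
    # probability upward at the cost of some reward margin; lam = 1
    # recovers the standard DPO objective.
    return -math.log(sigmoid(r_chosen - lam * r_rejected))
```

The single scalar `lam` makes the trade-off between chosen probability and reward margin explicit and controllable.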
|
2502.07600
|
PlaySlot: Learning Inverse Latent Dynamics for Controllable
Object-Centric Video Prediction and Planning
|
cs.CV cs.RO
|
Predicting future scene representations is a crucial task for enabling robots
to understand and interact with the environment. However, most existing methods
rely on video sequences and simulations with precise action annotations,
limiting their ability to leverage the large amount of available unlabeled
video data. To address this challenge, we propose PlaySlot, an object-centric
video prediction model that infers object representations and latent actions
from unlabeled video sequences. It then uses these representations to forecast
future object states and video frames. PlaySlot allows generating multiple
possible futures conditioned on latent actions, which can be inferred from
video dynamics, provided by a user, or generated by a learned action policy,
thus enabling versatile and interpretable world modeling. Our results show that
PlaySlot outperforms both stochastic and object-centric baselines for video
prediction across different environments. Furthermore, we show that our
inferred latent actions can be used to learn robot behaviors sample-efficiently
from unlabeled video demonstrations. Videos and code are available at
https://play-slot.github.io/PlaySlot/.
|
2502.07601
|
Towards Zero-Shot Anomaly Detection and Reasoning with Multimodal Large
Language Models
|
cs.CV cs.CL
|
Zero-Shot Anomaly Detection (ZSAD) is an emerging AD paradigm. Unlike the
traditional unsupervised AD setting that requires a large number of normal
samples to train a model, ZSAD is more practical for handling data-restricted
real-world scenarios. Recently, Multimodal Large Language Models (MLLMs) have
shown revolutionary reasoning capabilities in various vision tasks. However,
the reasoning of image abnormalities remains underexplored due to the lack of
corresponding datasets and benchmarks. To facilitate research in AD &
reasoning, we establish the first visual instruction tuning dataset,
Anomaly-Instruct-125k, and the evaluation benchmark, VisA-D&R. Through
investigation with our benchmark, we reveal that current MLLMs like GPT-4o
cannot accurately detect and describe fine-grained anomalous details in images.
To address this, we propose Anomaly-OneVision (Anomaly-OV), the first
specialist visual assistant for ZSAD and reasoning. Inspired by human behavior
in visual inspection, Anomaly-OV leverages a Look-Twice Feature Matching (LTFM)
mechanism to adaptively select and emphasize abnormal visual tokens. Extensive
experiments demonstrate that Anomaly-OV achieves significant improvements over
advanced generalist models in both detection and reasoning. Extensions to
medical and 3D AD are provided for future study. The link to our project page:
https://xujiacong.github.io/Anomaly-OV/
|
2502.07602
|
An Improved Optimal Proximal Gradient Algorithm for Non-Blind Image
Deblurring
|
cs.CV math.OC
|
Image deblurring remains a central research area within image processing,
critical for its role in enhancing image quality and facilitating clearer
visual representations across diverse applications. This paper tackles the
optimization problem of image deblurring, assuming a known blurring kernel. We
introduce an improved optimal proximal gradient algorithm (IOptISTA), which
builds upon the optimal gradient method and a weighting matrix, to efficiently
address the non-blind image deblurring problem. Based on two regularization
cases, namely the $l_1$ norm and total variation norm, we perform numerical
experiments to assess the performance of our proposed algorithm. The results
indicate that our algorithm yields enhanced PSNR and SSIM values, as well as a
reduced tolerance, compared to existing methods.
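For reference, the basic proximal gradient (ISTA) iteration that the improved algorithm builds on can be sketched for the $l_1$-regularized case (plain ISTA only, without the paper's optimal-gradient acceleration or weighting matrix; names are hypothetical):

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, b, lam, step, n_iter=100):
    # Minimize 0.5 * ||A x - b||^2 + lam * ||x||_1 by alternating a
    # gradient step on the smooth data term with the l1 proximal map.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

In deblurring, `A` would be the (known) blur operator; the total-variation case replaces the proximal map accordingly.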
|
2502.07606
|
Algorithmic Aspects of Strategic Trading
|
cs.GT cs.CE cs.LG
|
Algorithmic trading in modern financial markets is widely acknowledged to
exhibit strategic, game-theoretic behaviors whose complexity can be difficult
to model. A recent series of papers (Chriss, 2024b,c,a, 2025) has made progress
in the setting of trading for position building. Here parties wish to buy or
sell a fixed number of shares in a fixed time period in the presence of both
temporary and permanent market impact, resulting in exponentially large
strategy spaces. While these papers primarily consider the existence and
structural properties of equilibrium strategies, in this work we focus on the
algorithmic aspects of the proposed model. We give an efficient algorithm for
computing best responses, and show that while the temporary impact only setting
yields a potential game, best response dynamics do not generally converge for
the general setting, for which no fast algorithm for (Nash) equilibrium
computation is known. This leads us to consider the broader notion of Coarse
Correlated Equilibria (CCE), which we show can be computed efficiently via an
implementation of Follow the Perturbed Leader (FTPL). We illustrate the model
and our results with an experimental investigation, where FTPL exhibits
interesting behavior in different regimes of the relative weighting between
temporary and permanent market impact.
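A single FTPL decision step over a finite strategy set can be sketched as follows (generic FTPL only; the paper's setting applies it to exponentially large position-building strategy spaces via an efficient best-response oracle):

```python
import numpy as np

def ftpl_action(cum_losses, eta, rng):
    # Follow the Perturbed Leader: play the action minimizing cumulative
    # loss minus a fresh random perturbation. The noise scale eta
    # controls the exploration / regret trade-off; averaging the
    # resulting play converges to a coarse correlated equilibrium.
    noise = rng.exponential(scale=eta, size=len(cum_losses))
    return int(np.argmin(cum_losses - noise))
```

Running this step for every trader and recording the joint action distribution is the standard route from no-regret dynamics to CCE computation.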
|
2502.07608
|
Beyond Prompting: Time2Lang -- Bridging Time-Series Foundation Models
and Large Language Models for Health Sensing
|
cs.LG cs.HC
|
Large language models (LLMs) show promise for health applications when
combined with behavioral sensing data. Traditional approaches convert sensor
data into text prompts, but this process is prone to errors, computationally
expensive, and requires domain expertise. These challenges are particularly
acute when processing extended time series data. While time series foundation
models (TFMs) have recently emerged as powerful tools for learning
representations from temporal data, bridging TFMs and LLMs remains challenging.
Here, we present Time2Lang, a framework that directly maps TFM outputs to LLM
representations without intermediate text conversion. Our approach first trains
on synthetic data using periodicity prediction as a pretext task, followed by
evaluation on mental health classification tasks. We validate Time2Lang on two
longitudinal wearable and mobile sensing datasets: daily depression prediction
using step count data (17,251 days from 256 participants) and flourishing
classification based on conversation duration (46 participants over 10 weeks).
Time2Lang maintains near constant inference times regardless of input length,
unlike traditional prompting methods. The generated embeddings preserve
essential time-series characteristics such as auto-correlation. Our results
demonstrate that TFMs and LLMs can be effectively integrated while minimizing
information loss and enabling performance transfer across these distinct
modeling paradigms. To our knowledge, we are the first to integrate a TFM and
an LLM for health, thus establishing a foundation for future research combining
general-purpose large models for complex healthcare tasks.
|
2502.07615
|
Flow Distillation Sampling: Regularizing 3D Gaussians with Pre-trained
Matching Priors
|
cs.CV
|
3D Gaussian Splatting (3DGS) has achieved excellent rendering quality with
fast training and rendering speed. However, its optimization process lacks
explicit geometric constraints, leading to suboptimal geometric reconstruction
in regions with sparse or no observational input views. In this work, we try to
mitigate the issue by incorporating a pre-trained matching prior into the 3DGS
optimization process. We introduce Flow Distillation Sampling (FDS), a
technique that leverages pre-trained geometric knowledge to bolster the
accuracy of the Gaussian radiance field. Our method employs a strategic
sampling technique to target unobserved views adjacent to the input views,
utilizing the optical flow calculated from the matching model (Prior Flow) to
guide the flow analytically calculated from the 3DGS geometry (Radiance Flow).
Comprehensive experiments in depth rendering, mesh reconstruction, and novel
view synthesis showcase the significant advantages of FDS over state-of-the-art
methods. Additionally, our interpretive experiments and analysis aim to shed
light on the effects of FDS on geometric accuracy and rendering quality,
potentially providing readers with insights into its performance. Project page:
https://nju-3dv.github.io/projects/fds
|
2502.07616
|
Tractable Transformers for Flexible Conditional Generation
|
cs.CL cs.LG
|
Non-autoregressive (NAR) generative models are valuable because they can
handle diverse conditional generation tasks in a more principled way than their
autoregressive (AR) counterparts, which are constrained by sequential
dependency requirements. Recent advancements in NAR models, such as diffusion
language models, have demonstrated superior performance in unconditional
generation compared to AR models (e.g., GPTs) of similar sizes. However, such
improvements do not always lead to improved conditional generation performance.
We show that a key reason for this gap is the difficulty in generalizing to
conditional probability queries unseen during training. As a result, strong
unconditional generation performance does not guarantee high-quality
conditional generation. This paper proposes Tractable Transformers
(Tracformer), a Transformer-based generative model that is more robust to
different conditional generation tasks. Unlike existing models that rely solely
on global contextual features derived from full inputs, Tracformers incorporate
a sparse Transformer encoder to capture both local and global contextual
information. This information is routed through a decoder for conditional
generation. Empirical results demonstrate that Tracformers achieve
state-of-the-art conditional generation performance on text modeling compared
to recent diffusion and AR model baselines.
|
2502.07617
|
Scaling Pre-training to One Hundred Billion Data for Vision Language
Models
|
cs.CV
|
We provide an empirical investigation of the potential of pre-training
vision-language models on an unprecedented scale: 100 billion examples. We find
that model performance tends to saturate at this scale on many common
Western-centric classification and retrieval benchmarks, such as COCO Captions.
Nevertheless, tasks of cultural diversity achieve more substantial gains from
the 100-billion scale web data, thanks to its coverage of long-tail concepts.
Furthermore, we analyze the model's multilinguality and show gains in
low-resource languages as well. In addition, we observe that reducing the size
of the pretraining dataset via quality filters such as CLIP-based filtering,
typically used to enhance performance, may inadvertently reduce the cultural diversity
represented even in large-scale datasets. Our results highlight that while
traditional benchmarks may not benefit significantly from scaling noisy, raw
web data to 100 billion examples, this data scale is vital for building truly
inclusive multimodal systems.
|
2502.07620
|
Causal-Informed Contrastive Learning: Towards Bias-Resilient
Pre-training under Concept Drift
|
cs.LG cs.CV
|
The evolution of large-scale contrastive pre-training propelled by top-tier
datasets has reached a transition point in the scaling law. Consequently,
sustaining and enhancing a model's pre-training capabilities in drift
environments have surfaced as a notable challenge. In this paper, we initially
uncover that contrastive pre-training methods are significantly impacted by
concept drift, wherein distributions change unpredictably, resulting in notable
biases in the feature space of the pre-trained model. Empowered by causal
inference, we construct a structural causal graph to analyze the impact of
concept drift on contrastive pre-training systematically, and propose the causal
interventional contrastive objective. Upon achieving this, we devise a
resilient contrastive pre-training approach to accommodate the data stream of
concept drift, with simple and scalable implementation. Extensive experiments
on various downstream tasks demonstrate our resilient contrastive pre-training
effectively mitigates the bias stemming from the concept drift data stream.
Code is available at https://anonymous.4open.science/r/ResilientCL/.
|
2502.07623
|
Lexical categories of stem-forming roots in Mapud\"ungun verb forms
|
cs.CL
|
After developing a computational system for morphological analysis of the
Mapuche language, and evaluating it with texts from various authors and styles,
it became necessary to verify the linguistic assumptions of the source used as
the basis for implementing this tool.
In the present work, the primary focus is on the lexical category
classification of Mapud\"ungun roots recognised as verbal in the source
utilised for the development of the morphological analysis system.
The results of this lexical category revision directly benefit the
computational analyser, as they are implemented as soon as they are verified.
Additionally, it is hoped that these results will help clarify some
uncertainties about lexical categories in the Mapuche language.
This work addresses a preliminary task to identify the valency of true verbal
roots, the results of which will be presented in a subsequent work that
complements this article.
|
2502.07629
|
Exploring Mobile Touch Interaction with Large Language Models
|
cs.HC cs.CL
|
Interacting with Large Language Models (LLMs) for text editing on mobile
devices currently requires users to break out of their writing environment and
switch to a conversational AI interface. In this paper, we propose to control
the LLM via touch gestures performed directly on the text. We first chart a
design space that covers fundamental touch input and text transformations. In
this space, we then concretely explore two control mappings: spread-to-generate
and pinch-to-shorten, with visual feedback loops. We evaluate this concept in a
user study (N=14) that compares three feedback designs: no visualisation, text
length indicator, and length + word indicator. The results demonstrate that
touch-based control of LLMs is both feasible and user-friendly, with the length
+ word indicator proving most effective for managing text generation. This work
lays the foundation for further research into gesture-based interaction with
LLMs on touch devices.
|
2502.07630
|
Rethinking Timing Residuals: Advancing PET Detectors with Explicit TOF
Corrections
|
physics.ins-det cs.LG
|
PET is a functional imaging method that visualizes metabolic processes. TOF
information can be derived from coincident detector signals and incorporated
into image reconstruction to enhance the SNR. PET detectors are typically
assessed by their CTR, but timing performance is degraded by various factors.
Research on timing calibration seeks to mitigate these degradations and restore
accurate timing information. While many calibration methods use analytical
approaches, machine learning techniques have recently gained attention due to
their flexibility. We developed a residual physics-based calibration approach
that combines prior domain knowledge with the power of machine learning models.
This approach begins with an initial analytical calibration addressing
first-order skews. The remaining deviations, regarded as residual effects, are
used to train machine learning models to eliminate higher-order skews. The key
advantage is that the experimenter guides the learning process through the
definition of timing residuals. In earlier studies, we developed models that
directly predicted the expected time difference, which offered corrections only
implicitly (implicit correction models). In this study, we introduce a new
definition for timing residuals, enabling us to train models that directly
predict correction values (explicit correction models). The explicit correction
approach significantly simplifies data acquisition, improves linearity, and
enhances timing performance from $371 \pm 6$ ps to $281 \pm 5$ ps for
coincidences from 430 keV to 590 keV. Additionally, the new definition reduces
model size, making it suitable for high-throughput applications like PET
scanners. Experiments were conducted using two detector stacks composed of $4
\times 4$ LYSO:Ce,Ca crystals ($3.8\times 3.8\times 20$ mm$^{3}$) coupled to $4
\times 4$ Broadcom NUV-MT SiPMs and digitized with the TOFPET2 ASIC.
|
2502.07631
|
Divide and Merge: Motion and Semantic Learning in End-to-End Autonomous
Driving
|
cs.CV
|
Perceiving the environment and its changes over time corresponds to two
fundamental yet heterogeneous types of information: semantics and motion.
Previous end-to-end autonomous driving works represent both types of
information in a single feature vector. However, including motion tasks, such
as prediction and planning, always impairs detection and tracking performance,
a phenomenon known as negative transfer in multi-task learning. To address this
issue, we propose Neural-Bayes motion decoding, a novel parallel detection,
tracking, and prediction method separating semantic and motion learning,
similar to the Bayes filter. Specifically, we employ a set of learned motion
queries that operate in parallel with the detection and tracking queries,
sharing a unified set of recursively updated reference points. Moreover, we
employ interactive semantic decoding to enhance information exchange in
semantic tasks, promoting positive transfer. Experiments on the nuScenes
dataset show improvements of 5% in detection and 11% in tracking. Our method
achieves state-of-the-art collision rates in open-loop planning evaluation
without any modifications to the planning module.
|
2502.07634
|
Efficient Distributed Training through Gradient Compression with
Sparsification and Quantization Techniques
|
cs.LG cs.MM
|
This study investigates the impact of gradient compression on distributed
training performance, focusing on sparsification and quantization techniques,
including top-k, DGC, and QSGD. In baseline experiments, random-k compression
results in severe performance degradation, highlighting its inefficacy. In
contrast, using top-k and DGC at 50 times compression yields performance
improvements, reducing perplexity by up to 0.06 compared to baseline.
Experiments across 1, 2, and 4 workers demonstrate that conservative
sparsification can have a regularizing effect, especially for smaller models,
while compression ratios above 5000 times impair performance, particularly for
DGC. Communication times are reduced across all compression methods, with top-k
and DGC decreasing communication to negligible levels at high compression
ratios. However, increased computation times offset this efficiency for top-k
due to sorting demands, making it less scalable than DGC or QSGD. In
convergence tests, sparsification techniques show accelerated convergence,
requiring fewer epochs than the baseline, which has implications for
computational savings. Although precision trade-offs emerge, floating point
errors are mitigated by compression. This study's findings underscore the need
to tune hyperparameters specifically for each compression technique to achieve
optimal model performance, especially in distributed training systems.
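The top-k sparsification scheme evaluated above can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function names and the default 2% keep ratio are assumptions, and real systems add error feedback and collective communication on top of this core.

```python
import numpy as np

def topk_compress(grad, ratio=0.02):
    # Keep only the k largest-magnitude gradient entries; everything
    # else is dropped before communication (top-k sparsification).
    flat = grad.ravel()
    k = max(1, int(flat.size * ratio))
    # argpartition finds the k largest-|g| positions without a full sort.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return flat[idx], idx, grad.shape

def topk_decompress(values, idx, shape):
    # Rebuild a dense gradient, with zeros where entries were dropped.
    flat = np.zeros(int(np.prod(shape)), dtype=values.dtype)
    flat[idx] = values
    return flat.reshape(shape)
```

The sorting cost the abstract mentions is visible here: selecting the top entries dominates per-step compute at large k, which is why top-k scales worse than DGC or QSGD despite similar communication savings.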
|
2502.07635
|
Distributed Value Decomposition Networks with Networked Agents
|
cs.LG cs.AI cs.MA
|
We investigate the problem of distributed training under partial
observability, whereby cooperative multi-agent reinforcement learning (MARL)
agents maximize the expected cumulative joint reward. We propose distributed
value decomposition networks (DVDN) that generate a joint Q-function that
factorizes into agent-wise Q-functions. Whereas the original value
decomposition networks rely on centralized training, our approach is suitable
for domains where centralized training is not possible and agents must learn by
interacting with the physical environment in a decentralized manner while
communicating with their peers. DVDN overcomes the need for centralized
training by locally estimating the shared objective. We contribute two
innovative algorithms, DVDN and DVDN (GT), for the heterogeneous and
homogeneous agent settings, respectively. Empirically, both algorithms
approximate the performance of value decomposition networks, in spite of the
information loss during communication, as demonstrated in ten MARL tasks in
three standard environments.
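The value decomposition underlying DVDN can be sketched as below. This is a minimal illustration of the factorized joint Q-function and its one-step TD target only, not the distributed local estimation the paper proposes; the function names are assumptions.

```python
import numpy as np

def joint_q(agent_qs):
    # VDN factorization: the joint action-value is the elementwise sum of
    # per-agent Q-values, so each agent can act greedily on its own Q
    # while the team still optimizes a shared objective.
    return sum(agent_qs)

def td_target(reward, gamma, next_agent_qs):
    # One-step TD target against the factorized joint Q: because the sum
    # decomposes, each agent maximizes over its own actions independently.
    return reward + gamma * sum(q.max() for q in next_agent_qs)
```

In the decentralized setting described above, each agent would hold only its own Q-network and approximate the summed terms through peer-to-peer communication rather than a centralized mixer.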
|
2502.07636
|
Consistency Training with Physical Constraints
|
cs.LG
|
We propose a physics-aware Consistency Training (CT) method that accelerates
sampling in Diffusion Models with physical constraints. Our approach leverages
a two-stage strategy: (1) learning the noise-to-data mapping via CT, and (2)
incorporating physics constraints as a regularizer. Experiments on toy examples
show that our method generates samples in a single step while adhering to the
imposed constraints. This approach has the potential to efficiently solve
partial differential equations (PDEs) using deep generative modeling.
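The second-stage objective described above, a consistency (data) term plus a physics-constraint regularizer, can be sketched as follows. This is an illustrative reconstruction: the function name, the weighting `lam`, and the squared-residual penalty form are assumptions, not the paper's exact loss.

```python
import numpy as np

def physics_regularized_loss(x_pred, x_data, constraint_fn, lam=1.0):
    # Consistency term: match the one-step generated samples to data.
    consistency = np.mean((x_pred - x_data) ** 2)
    # Physics term: penalize the residual of a constraint g(x) = 0,
    # e.g. a discretized PDE or conservation law.
    residual = constraint_fn(x_pred)
    physics = np.mean(residual ** 2)
    return consistency + lam * physics
```

Driving the residual term to zero is what lets single-step samples respect the imposed constraint rather than relying on many denoising steps.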
|
2502.07637
|
BiaSWE: An Expert Annotated Dataset for Misogyny Detection in Swedish
|
cs.CL
|
In this study, we introduce the process for creating BiaSWE, an
expert-annotated dataset tailored for misogyny detection in the Swedish
language. To address the cultural and linguistic specificity of misogyny in
Swedish, we collaborated with experts from the social sciences and humanities.
Our interdisciplinary team developed a rigorous annotation process,
incorporating both domain knowledge and language expertise, to capture the
nuances of misogyny in a Swedish context. This methodology ensures that the
dataset is not only culturally relevant but also aligned with broader efforts
in bias detection for low-resource languages. The dataset, along with the
annotation guidelines, is publicly available for further research.
|
2502.07640
|
Goedel-Prover: A Frontier Model for Open-Source Automated Theorem
Proving
|
cs.LG cs.AI
|
We introduce Goedel-Prover, an open-source large language model (LLM) that
achieves the state-of-the-art (SOTA) performance in automated formal proof
generation for mathematical problems. The key challenge in this field is the
scarcity of formalized math statements and proofs, which we tackle in the
following ways. We train statement formalizers to translate the natural
language math problems from Numina into formal language (Lean 4), creating a
dataset of 1.64 million formal statements. LLMs are used to check that the
formal statements accurately preserve the content of the original natural
language problems. We then iteratively build a large dataset of formal proofs
by training a series of provers. Each prover succeeds in proving many
statements that the previous ones could not, and these new proofs are added to
the training set for the next prover. Despite using only supervised
fine-tuning, our final prover significantly outperforms the previous best
open-source model, DeepSeek-Prover-V1.5, which employs reinforcement learning.
On the miniF2F benchmark, our model achieves a success rate of 57.6% (Pass@32),
surpassing DeepSeek-Prover-V1.5 by 7.6%. On PutnamBench, Goedel-Prover
successfully solves 7 problems (Pass@512), ranking first on the leaderboard.
Furthermore, it generates 29.7K formal proofs for Lean Workbook problems,
nearly doubling the 15.7K produced by earlier works.
|
2502.07642
|
FoQA: A Faroese Question-Answering Dataset
|
cs.CL cs.LG
|
We present FoQA, a Faroese extractive question-answering (QA) dataset with
2,000 samples, created using a semi-automated approach combining Large Language
Models (LLMs) and human validation. The dataset was generated from Faroese
Wikipedia articles using GPT-4-turbo for initial QA generation, followed by
question rephrasing to increase complexity and native speaker validation to
ensure quality. We provide baseline performance metrics for FoQA across
multiple models, including LLMs and BERT, demonstrating its effectiveness in
evaluating Faroese QA performance. The dataset is released in three versions: a
validated set of 2,000 samples, a complete set of all 10,001 generated samples,
and a set of 2,395 rejected samples for error analysis.
|
2502.07644
|
SymGPT: Auditing Smart Contracts via Combining Symbolic Execution with
Large Language Models
|
cs.AI
|
To govern smart contracts running on Ethereum, multiple Ethereum Request for
Comment (ERC) standards have been developed, each having a set of rules to
guide the behaviors of smart contracts. Violating the ERC rules could cause
serious security issues and financial loss, signifying the importance of
verifying that smart contracts follow ERCs. Today's practices for such
verification are to manually audit each individual contract, use expert-developed
program-analysis tools, or use large language models (LLMs), all of which are
far from effective in identifying ERC rule violations. This paper introduces
SymGPT, a tool that combines the natural language understanding of large
language models (LLMs) with the formal guarantees of symbolic execution to
automatically verify smart contracts' compliance with ERC rules. To develop
SymGPT, we conduct an empirical study of 132 ERC rules from three widely used
ERC standards, examining their content, security implications, and natural
language descriptions. Based on this study, we design SymGPT by first
instructing an LLM to translate ERC rules into a defined EBNF grammar. We then
synthesize constraints from the formalized rules to represent scenarios where
violations may occur and use symbolic execution to detect them. Our evaluation
shows that SymGPT identifies 5,783 ERC rule violations in 4,000 real-world
contracts, including 1,375 violations with clear attack paths for stealing
financial assets, demonstrating its effectiveness. Furthermore, SymGPT
outperforms six automated techniques and a security-expert auditing service,
underscoring its superiority over current smart contract analysis methods.
|