| id | title | categories | abstract |
|---|---|---|---|
2502.12674
|
SATA: Safe and Adaptive Torque-Based Locomotion Policies Inspired by
Animal Learning
|
cs.RO cs.LG
|
Despite recent advances in learning-based controllers for legged robots,
deployments in human-centric environments remain limited by safety concerns.
Most of these approaches use position-based control, where policies output
target joint angles that must be processed by a low-level controller (e.g., PD
or impedance controllers) to compute joint torques. Although impressive results
have been achieved in controlled real-world scenarios, these methods often
struggle with compliance and adaptability when encountering environments or
disturbances unseen during training, potentially resulting in extreme or unsafe
behaviors. Inspired by how animals achieve smooth and adaptive movements by
controlling muscle extension and contraction, torque-based policies offer a
promising alternative by enabling precise and direct control of the actuators
in torque space. In principle, this approach facilitates more effective
interactions with the environment, resulting in safer and more adaptable
behaviors. However, challenges such as a highly nonlinear state space and
inefficient exploration during training have hindered their broader adoption.
To address these limitations, we propose SATA, a bio-inspired framework that
mimics key biomechanical principles and adaptive learning mechanisms observed
in animal locomotion. Our approach effectively addresses the inherent
challenges of learning torque-based policies by significantly improving
early-stage exploration, leading to high-performance final policies.
Notably, our method achieves zero-shot sim-to-real transfer. Our
experimental results indicate that SATA demonstrates remarkable compliance and
safety, even in challenging environments such as soft or slippery terrain and
narrow passages, and under significant external disturbances, highlighting its
potential for practical deployment in human-centric and safety-critical
scenarios.
|
2502.12677
|
Spiking Vision Transformer with Saccadic Attention
|
cs.CV cs.AI
|
The combination of Spiking Neural Networks (SNNs) and Vision Transformers
(ViTs) holds potential for achieving both energy efficiency and high
performance, particularly suitable for edge vision applications. However, a
significant performance gap still exists between SNN-based ViTs and their ANN
counterparts. Here, we first analyze why SNN-based ViTs suffer from limited
performance and identify a mismatch between the vanilla self-attention
mechanism and spatio-temporal spike trains. This mismatch results in degraded
spatial relevance and limited temporal interactions. To address these issues,
we draw inspiration from biological saccadic attention mechanisms and introduce
an innovative Saccadic Spike Self-Attention (SSSA) method. Specifically, in the
spatial domain, SSSA employs a novel spike distribution-based method to
effectively assess the relevance between Query and Key pairs in SNN-based ViTs.
Temporally, SSSA employs a saccadic interaction module that dynamically focuses
on selected visual areas at each timestep and significantly enhances whole
scene understanding through temporal interactions. Building on the SSSA
mechanism, we develop an SNN-based Vision Transformer (SNN-ViT). Extensive
experiments across various visual tasks demonstrate that SNN-ViT achieves
state-of-the-art performance with linear computational complexity. The
effectiveness and efficiency of the SNN-ViT highlight its potential for
power-critical edge vision applications.
|
2502.12678
|
Multi-Step Alignment as Markov Games: An Optimistic Online Gradient
Descent Approach with Convergence Guarantees
|
cs.LG cs.AI cs.CL
|
Reinforcement Learning from Human Feedback (RLHF) has been highly successful
in aligning large language models with human preferences. While prevalent
methods like DPO have demonstrated strong performance, they frame interactions
with the language model as a bandit problem, which limits their applicability
in real-world scenarios where multi-turn conversations are common.
Additionally, DPO relies on the Bradley-Terry model assumption, which does not
adequately capture the non-transitive nature of human preferences. In this
paper, we address these challenges by modeling the alignment problem as a
two-player constant-sum Markov game, where each player seeks to maximize their
winning rate against the other across all steps of the conversation. Our
approach Multi-step Preference Optimization (MPO) is built upon the natural
actor-critic framework~\citep{peters2008natural}. We further develop OMPO based
on the optimistic online gradient descent
algorithm~\citep{rakhlin2013online,joulani17a}. Theoretically, we provide a
rigorous analysis for both algorithms on convergence and show that OMPO
requires $\mathcal{O}(\epsilon^{-1})$ policy updates to converge to an
$\epsilon$-approximate Nash equilibrium. We also validate the effectiveness of
our method on a multi-turn conversation dataset and a math reasoning dataset.
|
2502.12680
|
Introducing ROADS: A Systematic Comparison of Remote Control Interaction
Concepts for Automated Vehicles at Road Works
|
cs.HC cs.RO
|
As vehicle automation technology continues to mature, there is a necessity
for robust remote monitoring and intervention features. These are essential for
intervening during vehicle malfunctions, challenging road conditions, or in
areas that are difficult to navigate. This evolution in the role of the human
operator - from a constant driver to an intermittent teleoperator -
necessitates the development of suitable interaction interfaces. While some
interfaces have been suggested, a comparative study is missing. We designed,
implemented, and evaluated three interaction concepts (path planning,
trajectory guidance, and waypoint guidance) with up to four concurrent requests
of automated vehicles in a within-subjects study with N=23 participants. The
results showed a clear preference for the path planning concept. It also led to
the highest usability but lower satisfaction. With trajectory guidance, the
fewest requests were resolved. The study's findings contribute to the ongoing
development of HMIs focused on the remote assistance of automated vehicles.
|
2502.12682
|
K-core: A tool for detecting the conceptual structure of research fields.
The practical case of Altmetrics
|
stat.ME cs.SI physics.soc-ph
|
In Social Network Analysis (SNA), k-core decomposition is used to detect
hierarchical shells in networks. The application of the K-core decomposition to
a network of keywords allows us to represent the conceptual structure of a
research field. The objective of this work was to propose the application of
k-core decomposition to show the evolution of the conceptual structure of the
Altmetrics research field. The methodology was developed in several phases:
data collection, keyword selection, elaboration of a keyword co-occurrence
matrix, generation of a keyword network, k-core decomposition and visualization
of the hierarchical structure. The result was the detection of five
differentiated shells. A core shell with basic, densely interconnected concepts
that formed the knowledge base of the field. An intermediate shell with
mediating concepts that showed the evolution of knowledge in the field. A
lateral shell with concepts that indicated the specialization of the research
field. A border shell with peripheral and isolated concepts, which represented
the conceptual fronts in development. In conclusion, the hierarchical
decomposition of the keyword network achieved a deeper understanding of the
conceptual structure of the research field.
|
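The peeling procedure behind k-core decomposition can be sketched on a toy keyword co-occurrence network. The keywords, edges, and resulting shells below are invented for illustration; the study applies the same idea to a real Altmetrics keyword network.

```python
# Hedged sketch of k-core decomposition on a toy keyword co-occurrence
# network. Keywords and edges are invented for illustration only.

def core_numbers(adj):
    """Assign each node the largest k such that it lies in a k-core."""
    neighbours = {u: set(vs) for u, vs in adj.items()}
    degree = {u: len(vs) for u, vs in neighbours.items()}
    core, k = {}, 0
    while degree:
        k = max(k, min(degree.values()))
        # Peel every node whose remaining degree is at most the current k.
        for u in [u for u, d in degree.items() if d <= k]:
            core[u] = k
            for v in neighbours[u]:
                if v in degree:
                    degree[v] -= 1
                    neighbours[v].discard(u)
            del degree[u]
    return core

adj = {
    "altmetrics":   ["twitter", "citations", "social media", "mendeley"],
    "twitter":      ["altmetrics", "citations", "social media"],
    "citations":    ["altmetrics", "twitter", "social media"],
    "social media": ["altmetrics", "twitter", "citations"],
    "mendeley":     ["altmetrics", "peer review"],
    "peer review":  ["mendeley"],
}
core = core_numbers(adj)
shells = {}
for node, k in core.items():
    shells.setdefault(k, []).append(node)
for k in sorted(shells, reverse=True):
    print(f"{k}-shell:", sorted(shells[k]))
```

The densely interconnected nodes end up in the innermost shell, while weakly connected ones land in the border shell, mirroring the core/intermediate/border layering the abstract describes.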
2502.12684
|
Federated Variational Inference for Bayesian Mixture Models
|
stat.ML cs.LG stat.ME
|
We present a federated learning approach for Bayesian model-based clustering
of large-scale binary and categorical datasets. We introduce a principled
'divide and conquer' inference procedure using variational inference with local
merge and delete moves within batches of the data in parallel, followed by
'global' merge moves across batches to find global clustering structures. We
show that these merge moves require only summaries of the data in each batch,
enabling federated learning across local nodes without requiring the full
dataset to be shared. Empirical results on simulated and benchmark datasets
demonstrate that our method performs well in comparison to existing clustering
algorithms. We validate the practical utility of the method by applying it to
large scale electronic health record (EHR) data.
|
2502.12685
|
Theoretical Guarantees for Minimum Bayes Risk Decoding
|
cs.CL
|
Minimum Bayes Risk (MBR) decoding optimizes output selection by maximizing
the expected utility value of an underlying human distribution. While prior
work has shown the effectiveness of MBR decoding through empirical evaluation,
few studies have analytically investigated why the method is effective. As a
result of our analysis, we show that, given the size $n$ of the reference
hypothesis set used in computation, MBR decoding approaches the optimal
solution with high probability at a rate of $O\left(n^{-\frac{1}{2}}\right)$,
under certain assumptions, even though the language space $Y$ is significantly
larger ($Y \gg n$). This result helps to theoretically explain the strong
performance observed in several prior empirical studies on MBR decoding. In
addition, we provide the performance gap for maximum-a-posteriori (MAP)
decoding and compare it to MBR decoding. The result of this paper indicates
that MBR decoding tends to converge to the optimal solution faster than MAP
decoding in several cases.
|
2502.12689
|
Role extraction by matrix equations and generalized random walks
|
math.NA cs.NA cs.SI
|
The nodes in a network can be grouped into 'roles' based on similar
connection patterns. This is usually achieved by defining a pairwise node
similarity matrix and then clustering rows and columns of this matrix. This
paper presents a new similarity matrix for solving role extraction problems in
directed networks, which is defined as the solution of a matrix equation and
computes node similarities based on random walks that can proceed along the
link direction and in the opposite direction. The resulting node similarity
measure performs remarkably in role extraction tasks on directed networks with
heterogeneous node degree distributions.
|
2502.12690
|
Fast Data Aware Neural Architecture Search via Supernet Accelerated
Evaluation
|
cs.NE cs.AI cs.CV cs.LG
|
Tiny machine learning (TinyML) promises to revolutionize fields such as
healthcare, environmental monitoring, and industrial maintenance by running
machine learning models on low-power embedded systems. However, the complex
optimizations required for successful TinyML deployment continue to impede its
widespread adoption. A promising route to simplifying TinyML is through
automatic machine learning (AutoML), which can distill elaborate optimization
workflows into accessible key decisions. Notably, Hardware Aware Neural
Architecture Searches - where a computer searches for an optimal TinyML model
based on predictive performance and hardware metrics - have gained significant
traction, producing some of today's most widely used TinyML models.
Nevertheless, limiting optimization solely to neural network architectures can
prove insufficient. Because TinyML systems must operate under extremely tight
resource constraints, the choice of input data configuration, such as
resolution or sampling rate, also profoundly impacts overall system efficiency.
Achieving truly optimal TinyML systems thus requires jointly tuning both input
data and model architecture. Despite its importance, this "Data Aware Neural
Architecture Search" remains underexplored. To address this gap, we propose a
new state-of-the-art Data Aware Neural Architecture Search technique and
demonstrate its effectiveness on the novel TinyML ``Wake Vision'' dataset. Our
experiments show that across varying time and hardware constraints, Data Aware
Neural Architecture Search consistently discovers superior TinyML systems
compared to purely architecture-focused methods, underscoring the critical role
of data-aware optimization in advancing TinyML.
|
2502.12691
|
Spherical Dense Text-to-Image Synthesis
|
cs.CV
|
Recent advancements in text-to-image (T2I) have improved synthesis results,
but challenges remain in layout control and generating omnidirectional
panoramic images. Dense T2I (DT2I) and spherical T2I (ST2I) models address
these issues, but so far no unified approach exists. Trivial approaches, such
as prompting a DT2I model to generate panoramas, cannot produce proper
spherical distortions and seamless transitions at the borders. Our work shows that
spherical dense text-to-image (SDT2I) can be achieved by integrating
training-free DT2I approaches into finetuned panorama models. Specifically, we
propose MultiStitchDiffusion (MSTD) and MultiPanFusion (MPF) by integrating
MultiDiffusion into StitchDiffusion and PanFusion, respectively. Since no
benchmark for SDT2I exists, we further construct Dense-Synthetic-View
(DSynView), a new synthetic dataset containing spherical layouts to evaluate
our models. Our results show that MSTD outperforms MPF in image quality as
well as prompt and layout adherence. MultiPanFusion generates more diverse
images but struggles to synthesize flawless foreground objects. We propose
bootstrap-coupling and turning off equirectangular perspective-projection
attention in the foreground as improvements to MPF.
|
2502.12692
|
Channel Estimation for Stacked Intelligent Metasurfaces in Rician Fading
Channels
|
cs.IT math.IT
|
The recent combination of two emerging architectures, stacked intelligent
metasurface (SIM) and holographic multiple-input multiple-output (HMIMO), is
driving breakthroughs for next-generation wireless communication systems. Given
that the number of elements per surface of the SIM is much larger than the
number of base station (BS) antennas, the acquisition of the channel
state information (CSI) in SIM-aided multi-user systems is challenging,
especially when a line-of-sight (LoS) component is present. Thus, in this
letter, we address the channel estimation procedure under Rician fading
conditions by proposing a protocol in terms of a minimum mean square error (MMSE)
estimator for wave-based design in a single phase. Moreover, we derive the normalized
mean square error (NMSE) of the suggested estimator, and provide the optimal
phase shifts minimising the NMSE. Numerical results illustrate the performance
of the new channel estimation protocol.
|
2502.12693
|
Neuromorphic Readout for Hadron Calorimeters
|
hep-ex cs.ET cs.LG cs.NE
|
We simulate hadrons impinging on a homogeneous lead-tungstate (PbWO4)
calorimeter to investigate how the resulting light yield and its temporal
structure, as detected by an array of light-sensitive sensors, can be processed
by a neuromorphic computing system. Our model encodes temporal photon
distributions as spike trains and employs a fully connected spiking neural
network to estimate the total deposited energy, as well as the position and
spatial distribution of the light emissions within the sensitive material. The
extracted primitives offer valuable topological information about the shower
development in the material, achieved without requiring a segmentation of the
active medium. A potential nanophotonic implementation using III-V
semiconductor nanowires, which could be both fast and energy efficient, is discussed.
|
2502.12696
|
Radar Network for Gait Monitoring: Technology and Validation
|
eess.SP cs.SY eess.SY
|
In recent years, radar-based devices have emerged as an alternative approach
for gait monitoring. However, the radar configuration and the algorithms used
to extract the gait parameters often differ between contributions, lacking a
systematic evaluation of the most appropriate setup. Additionally, radar-based
studies often exclude motorically impaired subjects, leaving it unclear whether
the existing algorithms are applicable to such populations.
In this paper, a radar network is developed and validated by monitoring the
gait of five healthy individuals and three patients with Parkinson's disease.
Six configurations and four algorithms were compared using Vicon as
ground-truth to determine the most appropriate solution for gait monitoring.
The best results were obtained using only three nodes: two oriented towards
the feet and one towards the torso. This configuration yielded the most
accurate stride velocity and distance reported in the state of the art.
Moreover, we show that analyzing the feet velocity increases the reliability of
the temporal parameters, especially with aged or motorically impaired subjects.
The contribution is significant for the implementation of radar networks in
clinical and domestic environments, as it addresses critical aspects concerning
the radar network configuration and algorithms.
|
2502.12700
|
Multi-Novelty: Improve the Diversity and Novelty of Contents Generated
by Large Language Models via inference-time Multi-Views Brainstorming
|
cs.CL
|
Large Language Models (LLMs) demonstrate remarkable proficiency in generating
accurate and fluent text. However, they often struggle with diversity and
novelty, leading to repetitive or overly deterministic responses. These
limitations stem from constraints in training data, including gaps in specific
knowledge domains, outdated information, and an over-reliance on textual
sources. Such shortcomings reduce their effectiveness in tasks requiring
creativity, multi-perspective reasoning, and exploratory thinking, such as
LLM-based AI scientist agents and creative artist agents. To address this
challenge, we introduce an inference-time multi-view brainstorming method, a
novel approach that enriches input prompts with diverse perspectives derived
from both textual and visual sources, which we refer to as "Multi-Novelty". By
incorporating additional contextual information as diverse starting points for
chains of thought, this method enhances the variety and creativity of generated
outputs. Importantly, our approach is model-agnostic, requiring no
architectural modifications and being compatible with both open-source and
proprietary LLMs.
|
2502.12701
|
Translate Smart, not Hard: Cascaded Translation Systems with
Quality-Aware Deferral
|
cs.CL cs.AI cs.LG
|
Larger models often outperform smaller ones but come with high computational
costs. Cascading offers a potential solution. By default, it uses smaller
models and defers only some instances to larger, more powerful models. However,
designing effective deferral rules remains a challenge. In this paper, we
propose a simple yet effective approach for machine translation, using existing
quality estimation (QE) metrics as deferral rules. We show that QE-based
deferral allows a cascaded system to match the performance of a larger model
while invoking it for a small fraction (30% to 50%) of the examples,
significantly reducing computational costs. We validate this approach through
both automatic and human evaluation.
|
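The deferral rule described above is simple enough to sketch: translate with the small model, score the draft with a QE metric, and invoke the large model only below a threshold. The stub models and stub QE function here are invented placeholders, not the paper's actual systems or metrics.

```python
# Hedged sketch of quality-aware deferral in a cascaded translation
# system. All components below are stand-in stubs for illustration.

def cascade_translate(source, small_model, large_model, qe_score, threshold=0.7):
    """Return the small-model output unless its estimated quality is too low."""
    draft = small_model(source)
    if qe_score(source, draft) >= threshold:
        return draft, "small"
    return large_model(source), "large"

# Stub components: pretend translators and a pretend QE metric that
# (arbitrarily) trusts the small model only on short inputs.
small = lambda s: s.upper()
large = lambda s: s.upper() + "!"
qe = lambda src, hyp: 0.9 if len(src) < 10 else 0.4

print(cascade_translate("hola", small, large, qe))                # kept small
print(cascade_translate("hola mundo amigos", small, large, qe))   # deferred
```

Tuning the threshold trades off cost against quality: a higher threshold defers more instances to the large model, which is how the 30%-50% invocation fractions in the abstract arise.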
2502.12704
|
Maximizing Truth Learning in a Social Network is NP-hard
|
cs.SI
|
Sequential learning models situations where agents predict a ground truth in
sequence, by using their private, noisy measurements, and the predictions of
agents who came earlier in the sequence. We study sequential learning in a
social network, where agents only see the actions of the previous agents in
their own neighborhood. The fraction of agents who predict the ground truth
correctly depends heavily on both the network topology and the ordering in
which the predictions are made. A natural question is to find an ordering, with
a given network, to maximize the (expected) number of agents who predict the
ground truth correctly. In this paper, we show that it is in fact NP-hard to
answer this question for a general network, with both the Bayesian learning
model and a simple majority rule model. Finally, we show that even
approximating the answer is hard.
|
2502.12706
|
Scalable Model Merging with Progressive Layer-wise Distillation
|
cs.LG
|
Model merging offers an effective way to integrate the capabilities of
multiple fine-tuned models. However, the performance degradation of the merged
model remains a challenge, particularly when little or no data are available.
This paper first highlights the necessity of domain-specific data for model
merging by proving that data-agnostic algorithms can have arbitrarily bad
worst-case performance. Building on this theoretical insight, we explore the
relationship between model merging and distillation, introducing a novel
few-shot merging algorithm, ProDistill (Progressive Layer-wise Distillation).
Contrary to the common belief that layer-wise training hurts performance, we
show that layer-wise teacher-student distillation not only enhances scalability but
also improves model merging performance. We conduct extensive experiments to
show that compared to existing few-shot merging methods, ProDistill achieves
state-of-the-art performance, with up to 6.14% and 6.61% improvements in vision
and NLU tasks. Furthermore, we extend the experiments to models with over 10B
parameters, showcasing the exceptional scalability of ProDistill.
|
2502.12707
|
CausalMan: A physics-based simulator for large-scale causality
|
cs.LG stat.ML
|
A comprehensive understanding of causality is critical for navigating and
operating within today's complex real-world systems. The absence of realistic
causal models with known data generating processes complicates fair
benchmarking. In this paper, we present the CausalMan simulator, modeled after
a real-world production line. The simulator features a diverse range of linear
and non-linear mechanisms and challenging-to-predict behaviors, such as
discrete mode changes. We demonstrate the inadequacy of many state-of-the-art
approaches and analyze the significant differences in their performance and
tractability, both in terms of runtime and memory complexity. As a
contribution, we will release the CausalMan large-scale simulator. We present
two derived datasets and perform an extensive evaluation of both.
|
2502.12710
|
TREND: A Whitespace Replacement Information Hiding Method
|
cs.CR cs.AI cs.SE
|
Large Language Models (LLMs) have gained significant popularity in recent
years. Differentiating between a text written by a human and a text generated
by an LLM has become almost impossible. Information hiding techniques such as
digital watermarking or steganography can help by embedding information inside
text without being noticed. However, existing techniques, such as
linguistic-based or format-based methods, change the semantics or do not work
on pure, unformatted text. In this paper, we introduce a novel method for
information hiding termed TREND, which is able to conceal any byte-encoded
sequence within a cover text. The proposed method is implemented as a
multi-platform library using the Kotlin programming language, accompanied by a
command-line tool and a web interface provided as examples of usage. By
substituting conventional whitespace characters with visually similar Unicode
whitespace characters, our proposed scheme preserves the semantics of the cover
text without increasing the number of characters. Furthermore, we propose a
specified structure for secret messages that enables configurable compression,
encryption, hashing, and error correction. Our experimental benchmark on a
dataset of one million Wikipedia articles compares ten algorithms from the
literature and practice, and demonstrates the robustness of our proposed method
in various applications while remaining imperceptible to humans. We discuss
limitations concerning embedding capacity and robustness, which point to
directions for future work.
|
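The general idea of whitespace substitution can be illustrated with a hypothetical one-bit-per-space scheme (this is a simplification, not TREND's actual encoding, which adds compression, encryption, hashing, and error correction):

```python
# Hedged illustration of whitespace-substitution hiding: one bit per
# space. ONE is U+2004 THREE-PER-EM SPACE, a visually similar Unicode
# space; capacity is limited to the number of spaces in the cover text.

ZERO, ONE = " ", "\u2004"

def embed(cover: str, bits: str) -> str:
    """Replace ordinary spaces by ONE for 1-bits; leave them for 0-bits."""
    it = iter(bits)
    out = []
    for ch in cover:
        if ch == ZERO:
            b = next(it, None)
            out.append(ONE if b == "1" else ch)
        else:
            out.append(ch)
    return "".join(out)

def extract(stego: str) -> str:
    """Read one bit from every (ordinary or substituted) space."""
    return "".join("1" if ch == ONE else "0"
                   for ch in stego if ch in (ZERO, ONE))

cover = "the quick brown fox jumps over"
stego = embed(cover, "10110")
print(extract(stego), len(stego) == len(cover))
```

Note that the character count is unchanged, matching the abstract's claim that substitution preserves text length; a real scheme also needs framing to distinguish payload bits from unused trailing spaces.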
2502.12713
|
Uncertainty Propagation for Echocardiography Clinical Metric Estimation
via Contour Sampling
|
cs.CV
|
Echocardiography plays a fundamental role in the extraction of important
clinical parameters (e.g. left ventricular volume and ejection fraction)
required to determine the presence and severity of heart-related conditions.
When deploying automated techniques for computing these parameters, uncertainty
estimation is crucial for assessing their utility. Since clinical parameters
are usually derived from segmentation maps, there is no clear path for
converting pixel-wise uncertainty values into uncertainty estimates in the
downstream clinical metric calculation. In this work, we propose a novel
uncertainty estimation method based on contouring rather than segmentation. Our
method explicitly predicts contour location uncertainty from which contour
samples can be drawn. Finally, the sampled contours can be used to propagate
uncertainty to clinical metrics. Our proposed method not only provides accurate
uncertainty estimations for the task of contouring but also for the downstream
clinical metrics on two cardiac ultrasound datasets. Code is available at:
https://github.com/ThierryJudge/contouring-uncertainty.
|
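The propagation step described in the abstract can be sketched generically: sample contours from per-point Gaussian location uncertainty and compute the downstream metric on every sample. The circular contour, noise level, and area metric below are invented for illustration; the paper targets echocardiographic contours and clinical metrics.

```python
# Hedged sketch of uncertainty propagation via contour sampling: perturb
# predicted contour points with Gaussian noise and collect the induced
# distribution of a downstream metric (here, polygon area).
import math
import random

random.seed(0)

def shoelace_area(pts):
    """Polygon area via the shoelace formula."""
    n = len(pts)
    return abs(sum(pts[i][0] * pts[(i + 1) % n][1]
                   - pts[(i + 1) % n][0] * pts[i][1] for i in range(n))) / 2

# Hypothetical predicted contour: a 16-point unit circle with an
# isotropic per-point standard deviation.
mean_contour = [(math.cos(2 * math.pi * k / 16), math.sin(2 * math.pi * k / 16))
                for k in range(16)]
sigma = 0.02

areas = []
for _ in range(500):
    sample = [(x + random.gauss(0, sigma), y + random.gauss(0, sigma))
              for x, y in mean_contour]
    areas.append(shoelace_area(sample))

mean_area = sum(areas) / len(areas)
std_area = (sum((a - mean_area) ** 2 for a in areas) / len(areas)) ** 0.5
print(f"area = {mean_area:.3f} +/- {std_area:.3f}")
```

The spread of the sampled metric serves directly as its uncertainty estimate, with no need to convert pixel-wise uncertainty maps.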
2502.12714
|
Playing with Voices: Tabletop Role-Playing Game Recordings as a
Diarization Challenge
|
cs.CL cs.SD
|
This paper provides a proof of concept that audio of tabletop role-playing
games (TTRPG) could serve as a challenge for diarization systems. TTRPGs are
carried out mostly by conversation. Participants often alter their voices to
indicate that they are talking as a fictional character. Audio processing
systems are susceptible to voice conversion with or without technological
assistance. TTRPGs present a conversational phenomenon in which voice conversion
is an inherent characteristic of an immersive gaming experience. This can make
it more challenging for diarizers to identify the real speaker and to recognize
impersonation as such. We present the creation of a small TTRPG audio
dataset and compare it against the AMI and ICSI corpora. The performance of
two diarizers, pyannote.audio and wespeaker, was evaluated. We observed that
TTRPGs' properties result in a higher confusion rate for both diarizers.
Additionally, wespeaker strongly underestimates the number of speakers in the
TTRPG audio files. We propose TTRPG audio as a promising challenge for
diarization systems.
|
2502.12716
|
Soft Arm-Motor Thrust Characterization for a Pneumatically Actuated Soft
Morphing Quadrotor
|
cs.RO cs.SY eess.SY
|
In this work, an experimental characterization of the configuration space of
a soft, pneumatically actuated morphing quadrotor is presented, with a focus on
precise thrust characterization of its flexible arms, considering the effect of
downwash. Unlike traditional quadrotors, the soft drone has pneumatically
actuated arms, introducing complex, nonlinear interactions between motor thrust
and arm deformation, which make precise control challenging. The silicone arms
are actuated using differential pressure to achieve flexibility and thus have a
variable workspace compared to their fixed counterparts. The deflection of the
soft arms during compression and expansion is controlled throughout the flight.
However, in real time, the downwash from the motor attached at the tip of the
soft arm generates a significant and random disturbance on the arm. This
disturbance affects both the desired deflection of the arm and the overall
stability of the system. To address this factor, an experimental
characterization of the effect of downwash on the deflection angle of the arm
is conducted.
|
2502.12717
|
Learning the symmetric group: large from small
|
cs.LG math.CO math.RT
|
Machine learning explorations can make significant inroads into solving
difficult problems in pure mathematics. One advantage of this approach is that
mathematical datasets do not suffer from noise, but a challenge is the amount
of data required to train these models and that this data can be
computationally expensive to generate. Key challenges further comprise
difficulty in a posteriori interpretation of statistical models and the
implementation of deep and abstract mathematical problems.
We propose a method for scalable tasks, by which models trained on simpler
versions of a task can then generalize to the full task. Specifically, we
demonstrate that a transformer neural network trained on predicting
permutations from words formed by general transpositions in the symmetric group
$S_{10}$ can generalize to the symmetric group $S_{25}$ with near 100\%
accuracy. We also show that $S_{10}$ generalizes to $S_{16}$ with similar
performance if we only use adjacent transpositions. We employ identity
augmentation as a key tool to manage variable word lengths, and partitioned
windows for training on adjacent transpositions. Finally we compare variations
of the method used and discuss potential challenges with extending the method
to other tasks.
|
2502.12723
|
myEye2Wheeler: A Two-Wheeler Indian Driver Real-World Eye-Tracking
Dataset
|
cs.CV
|
This paper presents the myEye2Wheeler dataset, a unique resource of
real-world gaze behaviour of two-wheeler drivers navigating complex Indian
traffic. Most datasets are from four-wheeler drivers on well-planned roads and
homogeneous traffic. Our dataset offers a critical lens into the unique visual
attention patterns and insights into the decision-making of Indian two-wheeler
drivers. The analysis demonstrates that existing saliency models, like
TASED-Net, perform less effectively on the myEye2Wheeler dataset than on
European four-wheeler eye-tracking datasets (DR(Eye)VE), highlighting the need
for models specifically tailored to these traffic conditions. By introducing
the dataset, we not only fill a significant gap in
two-wheeler driver behaviour research in India but also emphasise the critical
need for developing context-specific saliency models. The larger aim is to
improve road safety for two-wheeler users and lane-planning to support a
cost-effective mode of transport.
|
2502.12724
|
Responsive Noise-Relaying Diffusion Policy: Responsive and Efficient
Visuomotor Control
|
cs.RO
|
Imitation learning is an efficient method for teaching robots a variety of
tasks. Diffusion Policy, which uses a conditional denoising diffusion process
to generate actions, has demonstrated superior performance, particularly in
learning from multi-modal demonstrations. However, it relies on executing
multiple actions to retain performance and prevent mode bouncing, which limits
its responsiveness, as actions are not conditioned on the most recent
observations. To address this, we introduce Responsive Noise-Relaying Diffusion
Policy (RNR-DP), which maintains a noise-relaying buffer with progressively
increasing noise levels and employs a sequential denoising mechanism that
generates immediate, noise-free actions at the head of the sequence, while
appending noisy actions at the tail. This ensures that actions are responsive
and conditioned on the latest observations, while maintaining motion
consistency through the noise-relaying buffer. This design enables the handling
of tasks requiring responsive control, and accelerates action generation by
reusing denoising steps. Experiments on response-sensitive tasks demonstrate
that, compared to Diffusion Policy, ours achieves an 18% improvement in success
rate. Further evaluation on regular tasks demonstrates that RNR-DP also exceeds
the best acceleration method by 6.9%, highlighting its computational efficiency
advantage in scenarios where responsiveness is less critical.
|
2502.12732
|
Circuit Representation Learning with Masked Gate Modeling and
Verilog-AIG Alignment
|
cs.LG
|
Understanding the structure and function of circuits is crucial for
electronic design automation (EDA). Circuits can be formulated as And-Inverter
graphs (AIGs), enabling efficient implementation of representation learning
through graph neural networks (GNNs). Masked modeling paradigms have been
proven effective in graph representation learning. However, applying masking
augmentation to the original circuits destroys their logical equivalence, making
it unsuitable for circuit representation learning. Moreover, existing masked
modeling paradigms often prioritize structural information at the expense of
abstract information such as circuit function. To address these limitations, we
introduce MGVGA, a novel constrained masked modeling paradigm incorporating
masked gate modeling (MGM) and Verilog-AIG alignment (VGA). Specifically, MGM
preserves logical equivalence by masking gates in the latent space rather than
in the original circuits, subsequently reconstructing the attributes of these
masked gates. Meanwhile, large language models (LLMs) have demonstrated an
excellent understanding of Verilog code functionality. Building upon this
capability, VGA performs masking operations on original circuits and
reconstructs masked gates under the constraints of equivalent Verilog codes,
enabling GNNs to learn circuit functions from LLMs. We evaluate MGVGA on
various logic synthesis tasks for EDA and show the superior performance of
MGVGA compared to previous state-of-the-art methods. Our code is available at
https://github.com/wuhy68/MGVGA.
|
2502.12734
|
Iron Sharpens Iron: Defending Against Attacks in Machine-Generated Text
Detection with Adversarial Training
|
cs.CR cs.CL
|
Machine-generated Text (MGT) detection is crucial for regulating and
attributing online texts. While the existing MGT detectors achieve strong
performance, they remain vulnerable to simple perturbations and adversarial
attacks. To build an effective defense against malicious perturbations, we view
MGT detection from a threat modeling perspective, that is, analyzing the
model's vulnerability from an adversary's point of view and exploring effective
mitigations. To this end, we introduce an adversarial framework for training a
robust MGT detector, named GREedy Adversary PromoTed DefendER (GREATER).
GREATER consists of two key components: an adversary, GREATER-A, and a detector,
GREATER-D. GREATER-D learns to defend against the adversarial attacks from
GREATER-A and generalizes the defense to other attacks. GREATER-A identifies
and perturbs the critical tokens in embedding space, along with greedy search
and pruning to generate stealthy and disruptive adversarial examples. Besides,
we update the GREATER-A and GREATER-D synchronously, encouraging the GREATER-D
to generalize its defense to different attacks and varying attack intensities.
Our experimental results across 9 text perturbation strategies and 5
adversarial attacks show that our GREATER-D reduces the Attack Success Rate
(ASR) by 10.61% compared with SOTA defense methods, while our GREATER-A is
demonstrated to be more effective and efficient than SOTA attack approaches.
|
2502.12736
|
Cross-Domain Continual Learning for Edge Intelligence in Wireless ISAC
Networks
|
eess.SP cs.LG
|
In wireless networks with integrated sensing and communications (ISAC), edge
intelligence (EI) is expected to be developed at edge devices (ED) for sensing
user activities based on channel state information (CSI). However, due to the
CSI being highly specific to users' characteristics, the CSI-activity
relationship is notoriously domain-dependent, essentially demanding that EI learn
from sufficient datasets across various domains in order to gain cross-domain sensing
capability. This poses a crucial challenge owing to the EDs' limited resources,
for which storing datasets across all domains will be a significant burden. In
this paper, we propose the EdgeCL framework, enabling the EI to continually
learn-then-discard each incoming dataset, while remaining resilient to
catastrophic forgetting. We design a transformer-based discriminator for
handling sequences of noisy and nonequispaced CSI samples. Besides, we propose
a distilled core-set based knowledge retention method with robustness-enhanced
optimization to train the discriminator, preserving its performance for
previous domains while preventing future forgetting. Experimental evaluations
show that EdgeCL achieves 89% of the performance of cumulative training
while consuming only 3% of its memory, mitigating forgetting by 79%.
|
2502.12737
|
Beyond Seen Data: Improving KBQA Generalization Through Schema-Guided
Logical Form Generation
|
cs.CL cs.AI
|
Knowledge base question answering (KBQA) aims to answer user questions in
natural language using rich human knowledge stored in large KBs. As current
KBQA methods struggle with unseen knowledge base elements at test time, we
introduce SG-KBQA: a novel model that injects schema contexts into entity
retrieval and logical form generation to tackle this issue. It uses the richer
semantics and awareness of the knowledge base structure provided by schema
contexts to enhance generalizability. We show that SG-KBQA achieves strong
generalizability, outperforming state-of-the-art models on two commonly used
benchmark datasets across a variety of test settings. Our source code is
available at https://github.com/gaosx2000/SG_KBQA.
|
2502.12742
|
3D Shape-to-Image Brownian Bridge Diffusion for Brain MRI Synthesis from
Cortical Surfaces
|
cs.CV
|
Despite recent advances in medical image generation, existing methods
struggle to produce anatomically plausible 3D structures. In synthetic brain
magnetic resonance images (MRIs), characteristic fissures are often missing,
and reconstructed cortical surfaces appear scattered rather than densely
convoluted. To address this issue, we introduce Cor2Vox, the first diffusion
model-based method that translates continuous cortical shape priors to
synthetic brain MRIs. To achieve this, we leverage a Brownian bridge process
which allows for direct structured mapping between shape contours and medical
images. Specifically, we adapt the concept of the Brownian bridge diffusion
model to 3D and extend it to embrace various complementary shape
representations. Our experiments demonstrate significant improvements in the
geometric accuracy of reconstructed structures compared to previous voxel-based
approaches. Moreover, Cor2Vox excels in image quality and diversity, yielding
high variation in non-target structures like the skull. Finally, we highlight
the capability of our approach to simulate cortical atrophy at the sub-voxel
level. Our code is available at https://github.com/ai-med/Cor2Vox.
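As background for the Brownian bridge process mentioned above (standard background, not the paper's exact 3D formulation): a bridge pinned at both endpoints interpolates the mean linearly while its variance t(T-t)/T vanishes at t=0 and t=T, which is what makes it suitable for structured mapping between a fixed source and a fixed target. A minimal sketch:

```python
import numpy as np

def brownian_bridge_sample(x0, xT, t, T=1.0, rng=None):
    """Sample x_t from a Brownian bridge pinned at x0 (t=0) and xT (t=T).
    The mean interpolates linearly; the variance t(T-t)/T vanishes at
    both endpoints, so the process is exactly x0 and xT there."""
    rng = rng or np.random.default_rng()
    m = t / T
    mean = (1.0 - m) * x0 + m * xT
    std = np.sqrt(t * (T - t) / T)
    return mean + std * rng.standard_normal(x0.shape)

x0 = np.zeros((4, 4))   # e.g. a shape-prior volume (toy stand-in)
xT = np.ones((4, 4))    # e.g. a target image volume (toy stand-in)
xt = brownian_bridge_sample(x0, xT, t=0.5)
```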
|
2502.12743
|
"I know myself better, but not really greatly": Using LLMs to Detect and
Explain LLM-Generated Texts
|
cs.CL cs.AI
|
Large language models (LLMs) have demonstrated impressive capabilities in
generating human-like texts, but the potential misuse of such LLM-generated
texts raises the need to distinguish between human-generated and LLM-generated
content. This paper explores the detection and explanation capabilities of
LLM-based detectors of LLM-generated texts, in the context of a binary
classification task (human-generated texts vs LLM-generated texts) and a
ternary classification task (human-generated texts, LLM-generated texts, and
undecided). By evaluating on six close/open-source LLMs with different sizes,
our findings reveal that while self-detection consistently outperforms
cross-detection, i.e., LLMs can detect texts generated by themselves more
accurately than those generated by other LLMs, the performance of
self-detection is still far from ideal, indicating that further improvements
are needed. We also show that extending the binary to the ternary
classification task with a new class "Undecided" can enhance both detection
accuracy and explanation quality, with improvements being statistically
significant and consistent across all LLMs. We finally conducted comprehensive
qualitative and quantitative analyses on the explanation errors, which are
categorized into three types: reliance on inaccurate features (the most
frequent error), hallucinations, and incorrect reasoning. These findings with
our human-annotated dataset emphasize the need for further research into
improving both self-detection and self-explanation, particularly to address
overfitting issues that may hinder generalization.
|
2502.12744
|
Self-Enhanced Reasoning Training: Activating Latent Reasoning in Small
Models for Enhanced Reasoning Distillation
|
cs.CL
|
The rapid advancement of large language models (LLMs) has significantly
enhanced their reasoning abilities, enabling increasingly complex tasks.
However, these capabilities often diminish in smaller, more computationally
efficient models like GPT-2. Recent research shows that reasoning distillation
can help small models acquire reasoning capabilities, but most existing methods
focus primarily on improving teacher-generated reasoning paths. Our
observations reveal that small models can generate high-quality reasoning paths
during sampling, even without chain-of-thought prompting, though these paths
are often latent due to their low probability under standard decoding
strategies. To address this, we propose Self-Enhanced Reasoning Training
(SERT), which activates and leverages latent reasoning capabilities in small
models through self-training on filtered, self-generated reasoning paths under
zero-shot conditions. Experiments using OpenAI's GPT-3.5 as the teacher model
and GPT-2 models as the student models demonstrate that SERT enhances the
reasoning abilities of small models, improving their performance in reasoning
distillation.
|
2502.12745
|
MediaMind: Revolutionizing Media Monitoring using Agentification
|
cs.CL cs.AI cs.LG
|
In an era of rapid technological advancements, agentification of software
tools has emerged as a critical innovation, enabling systems to function
autonomously and adaptively. This paper introduces MediaMind as a case study to
demonstrate the agentification process, highlighting how existing software can
be transformed into intelligent agents capable of independent decision-making
and dynamic interaction. Developed by aiXplain, MediaMind leverages agent-based
architecture to autonomously monitor, analyze, and provide insights from
multilingual media content in real time. The focus of this paper is on the
technical methodologies and design principles behind agentifying MediaMind,
showcasing how agentification enhances adaptability, efficiency, and
responsiveness. Through detailed case studies and practical examples, we
illustrate how the agentification of MediaMind empowers organizations to
streamline workflows, optimize decision-making, and respond to evolving trends.
This work underscores the broader potential of agentification to revolutionize
software tools across various domains.
|
2502.12747
|
ExoKit: A Toolkit for Rapid Prototyping of Interactions for Arm-based
Exoskeletons
|
cs.HC cs.RO
|
Exoskeletons open up a unique interaction space that seamlessly integrates
users' body movements with robotic actuation. Despite its potential,
human-exoskeleton interaction remains an underexplored area in HCI, largely due
to the lack of accessible prototyping tools that enable designers to easily
develop exoskeleton designs and customized interactive behaviors. We present
ExoKit, a do-it-yourself toolkit for rapid prototyping of low-fidelity,
functional exoskeletons targeted at novice roboticists. ExoKit includes modular
hardware components for sensing and actuating shoulder and elbow joints, which
are easy to fabricate and (re)configure for customized functionality and
wearability. To simplify the programming of interactive behaviors, we propose
functional abstractions that encapsulate high-level human-exoskeleton
interactions. These can be readily accessed either through ExoKit's
command-line or graphical user interface, a Processing library, or
microcontroller firmware, each targeted at different experience levels.
Findings from implemented application cases and two usage studies demonstrate
the versatility and accessibility of ExoKit for early-stage interaction design.
|
2502.12751
|
Architect of the Bits World: Masked Autoregressive Modeling for Circuit
Generation Guided by Truth Table
|
cs.LG
|
Logic synthesis, a critical stage in electronic design automation (EDA),
optimizes gate-level circuits to minimize power consumption and area occupancy
in integrated circuits (ICs). Traditional logic synthesis tools rely on
human-designed heuristics, often yielding suboptimal results. Although
differentiable architecture search (DAS) has shown promise in generating
circuits from truth tables, it faces challenges such as high computational
complexity, convergence to local optima, and extensive hyperparameter tuning.
Consequently, we propose a novel approach integrating conditional generative
models with DAS for circuit generation. Our approach first introduces
CircuitVQ, a circuit tokenizer trained based on our Circuit AutoEncoder. We then
develop CircuitAR, a masked autoregressive model leveraging CircuitVQ as the
tokenizer. CircuitAR can generate preliminary circuit structures from truth
tables, which guide DAS in producing functionally equivalent circuits. Notably,
we observe scalability and an emergent capability of our CircuitAR models in
generating complex circuit structures. Extensive experiments also show the
superior performance of our method. This research bridges the gap between
probabilistic generative models and precise circuit generation, offering a
robust solution for logic synthesis.
|
2502.12752
|
High-Fidelity Novel View Synthesis via Splatting-Guided Diffusion
|
cs.CV
|
Despite recent advances in Novel View Synthesis (NVS), generating
high-fidelity views from single or sparse observations remains a significant
challenge. Existing splatting-based approaches often produce distorted geometry
due to splatting errors. While diffusion-based methods leverage rich 3D priors
to achieve improved geometry, they often suffer from texture hallucination. In
this paper, we introduce SplatDiff, a pixel-splatting-guided video diffusion
model designed to synthesize high-fidelity novel views from a single image.
Specifically, we propose an aligned synthesis strategy for precise control of
target viewpoints and geometry-consistent view synthesis. To mitigate texture
hallucination, we design a texture bridge module that enables high-fidelity
texture generation through adaptive feature fusion. In this manner, SplatDiff
leverages the strengths of splatting and diffusion to generate novel views with
consistent geometry and high-fidelity details. Extensive experiments verify the
state-of-the-art performance of SplatDiff in single-view NVS. Additionally,
without extra training, SplatDiff shows remarkable zero-shot performance across
diverse tasks, including sparse-view NVS and stereo video conversion.
|
2502.12753
|
Green LIME: Improving AI Explainability through Design of Experiments
|
stat.ML cs.LG stat.ME
|
In artificial intelligence (AI), the complexity of many models and processes
often surpasses human interpretability, making it challenging to understand why
a specific prediction is made. This lack of transparency is particularly
problematic in critical fields like healthcare, where trust in a model's
predictions is paramount. As a result, the explainability of machine learning
(ML) and other complex models has become a key area of focus. Efforts to
improve model interpretability often involve experimenting with AI systems and
approximating their behavior through simpler mechanisms. However, these
procedures can be resource-intensive. Optimal design of experiments, which
seeks to maximize the information obtained from a limited number of
observations, offers promising methods for improving the efficiency of these
explainability techniques.
To demonstrate this potential, we explore Local Interpretable Model-agnostic
Explanations (LIME), a widely used method introduced by Ribeiro, Singh, and
Guestrin (2016). LIME provides explanations by generating new data points near
the instance of interest and passing them through the model. While effective,
this process can be computationally expensive, especially when predictions are
costly or require many samples. LIME is highly versatile and can be applied to
a wide range of models and datasets. In this work, we focus on models involving
tabular data, regression tasks, and linear models as interpretable local
approximations.
By utilizing techniques from optimal design of experiments, we reduce the number
of function evaluations of the complex model, thereby reducing the
computational effort of LIME by a significant amount. We consider this modified
version of LIME to be energy-efficient or "green".
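To make the idea concrete, here is a minimal sketch of LIME's local fitting step in which the usual random perturbations are replaced by a small two-level factorial design, a classic design-of-experiments choice that covers the local neighborhood with only 2^d model evaluations. The function names and toy model are illustrative, not the paper's code.

```python
import numpy as np
from itertools import product

def lime_local_fit(predict, x, design):
    """Fit a weighted local linear surrogate around x, using the rows of
    `design` as perturbation offsets from x."""
    X = x + design
    y = np.array([predict(row) for row in X])
    w = np.exp(-np.sum(design**2, axis=1))           # proximity kernel weights
    A = np.hstack([np.ones((len(X), 1)), design])    # intercept + offsets
    W = np.diag(w)
    coef, *_ = np.linalg.lstsq(W @ A, W @ y, rcond=None)
    return coef[1:]  # local feature attributions (drop the intercept)

def black_box(z):            # stand-in for the expensive model to explain
    return 3 * z[0] - 2 * z[1] + 0.5

x = np.array([1.0, 2.0])
# 2-level full factorial design: 2^d = 4 runs instead of many random samples
design = 0.1 * np.array(list(product([-1, 1], repeat=2)), dtype=float)
attrib = lime_local_fit(black_box, x, design)  # recovers [3, -2] for this linear model
```

Because the surrogate is linear and the design is full-rank, four evaluations already recover the local coefficients exactly here; for a genuinely nonlinear model, the design controls how efficiently the neighborhood is probed.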
|
2502.12755
|
Efficient Machine Translation Corpus Generation: Integrating
Human-in-the-Loop Post-Editing with Large Language Models
|
cs.CL cs.AI cs.HC
|
This paper introduces an advanced methodology for machine translation (MT)
corpus generation, integrating semi-automated, human-in-the-loop post-editing
with large language models (LLMs) to enhance efficiency and translation
quality. Building upon previous work that utilized real-time training of a
custom MT quality estimation metric, this system incorporates novel LLM
features such as Enhanced Translation Synthesis and Assisted Annotation
Analysis, which improve initial translation hypotheses and quality assessments,
respectively. Additionally, the system employs LLM-Driven Pseudo Labeling and a
Translation Recommendation System to reduce human annotator workload in
specific contexts. These improvements not only retain the original benefits of
cost reduction and enhanced post-edit quality but also open new avenues for
leveraging cutting-edge LLM advancements. The project's source code is
available for community use, promoting collaborative developments in the field.
The demo video can be accessed here.
|
2502.12756
|
Navigating Demand Uncertainty in Container Shipping: Deep Reinforcement
Learning for Enabling Adaptive and Feasible Master Stowage Planning
|
cs.LG math.OC
|
Reinforcement learning (RL) has shown promise in solving various
combinatorial optimization problems. However, conventional RL faces challenges
when dealing with real-world constraints, especially when action space
feasibility is explicit and dependent on the corresponding state or trajectory.
In this work, we focus on using RL in container shipping, often considered the
cornerstone of global trade, by dealing with the critical challenge of master
stowage planning. The main objective is to maximize cargo revenue and minimize
operational costs while navigating demand uncertainty and various complex
operational constraints, namely vessel capacity and stability, which must be
dynamically updated along the vessel's voyage. To address this problem, we
implement a deep reinforcement learning framework with feasibility projection
to solve the master stowage planning problem (MPP) under demand uncertainty.
The experimental results show that our architecture efficiently finds adaptive,
feasible solutions for this multi-stage stochastic optimization problem,
outperforming traditional mixed-integer programming and RL with feasibility
regularization. Our AI-driven decision-support policy enables adaptive and
feasible planning under uncertainty, optimizing operational efficiency and
capacity utilization while contributing to sustainable and resilient global
supply chains.
|
2502.12759
|
High-Fidelity Music Vocoder using Neural Audio Codecs
|
cs.SD cs.LG
|
While neural vocoders have made significant progress in high-fidelity speech
synthesis, their application on polyphonic music has remained underexplored. In
this work, we propose DisCoder, a neural vocoder that leverages a generative
adversarial encoder-decoder architecture informed by a neural audio codec to
reconstruct high-fidelity 44.1 kHz audio from mel spectrograms. Our approach
first transforms the mel spectrogram into a lower-dimensional representation
aligned with the Descript Audio Codec (DAC) latent space before reconstructing
it to an audio signal using a fine-tuned DAC decoder. DisCoder achieves
state-of-the-art performance in music synthesis on several objective metrics
and in a MUSHRA listening study. Our approach also shows competitive
performance in speech synthesis, highlighting its potential as a universal
vocoder.
|
2502.12762
|
One-bit Compressed Sensing using Generative Models
|
cs.LG eess.SP
|
This paper addresses the classical problem of one-bit compressed sensing
using a deep learning-based reconstruction algorithm that leverages a trained
generative model to enhance the signal reconstruction performance. The
generator, a pre-trained neural network, learns to map from a low-dimensional
latent space to a higher-dimensional set of sparse vectors. This generator is
then used to reconstruct sparse vectors from their one-bit measurements by
searching over its range. The presented algorithm provides an excellent
reconstruction performance because the generative model can learn additional
structural information about the signal beyond sparsity. Furthermore, we
provide theoretical guarantees on the reconstruction accuracy and sample
complexity of the algorithm. Through numerical experiments using three publicly
available image datasets, MNIST, Fashion-MNIST, and Omniglot, we demonstrate
the superior performance of the algorithm compared to other existing algorithms
and show that our algorithm can recover both the amplitude and the direction of
the signal from one-bit measurements.
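A minimal numeric sketch of the recovery idea follows, using a toy linear "generator" and a random latent search in place of the paper's trained network and optimizer; all names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 20, 60, 3   # signal dim, number of one-bit measurements, latent dim

G = rng.standard_normal((n, k))    # toy linear "generator" defining the range
def generator(z):                  # maps a low-dim latent code to a signal
    return G @ z

A = rng.standard_normal((m, n))    # measurement matrix
z_true = rng.standard_normal(k)
x_true = generator(z_true)
y = np.sign(A @ x_true)            # one-bit (sign-only) measurements

# Search over the generator's range for the latent code whose one-bit
# measurements best match y (random search as a stand-in for gradient descent).
best_z, best_err = None, np.inf
for _ in range(5000):
    z = rng.standard_normal(k)
    err = np.sum(np.sign(A @ generator(z)) != y)
    if err < best_err:
        best_z, best_err = z, err

x_hat = generator(best_z)
# Sign measurements constrain the direction of the signal; check alignment.
cos = x_hat @ x_true / (np.linalg.norm(x_hat) * np.linalg.norm(x_true))
```

With enough measurements, the sign pattern pins down the signal's direction within the generator's range, which is why the search converges to a well-aligned reconstruction even in this crude toy.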
|
2502.12767
|
R2-KG: General-Purpose Dual-Agent Framework for Reliable Reasoning on
Knowledge Graphs
|
cs.CL cs.AI
|
Recent studies have combined Large Language Models (LLMs) with Knowledge
Graphs (KGs) to enhance reasoning, improving inference accuracy without
additional training while mitigating hallucination. However, existing
frameworks are often rigid, struggling to adapt to KG or task changes. They
also rely heavily on powerful LLMs for reliable (i.e., trustworthy) reasoning.
To address this, we introduce R2-KG, a plug-and-play, dual-agent framework that
separates reasoning into two roles: an Operator (a low-capacity LLM) that
gathers evidence and a Supervisor (a high-capacity LLM) that makes final
judgments. This design is cost-efficient for LLM inference while still
maintaining strong reasoning accuracy. Additionally, R2-KG employs an
Abstention mechanism, generating answers only when sufficient evidence is
collected from KG, which significantly enhances reliability. Experiments across
multiple KG-based reasoning tasks show that R2-KG consistently outperforms
baselines in both accuracy and reliability, regardless of the inherent
capability of LLMs used as the Operator. Further experiments reveal that the
single-agent version of R2-KG, equipped with a strict self-consistency
strategy, achieves significantly higher-than-baseline reliability while
reducing inference cost. However, it also leads to a higher abstention rate in
complex KGs. Our findings establish R2-KG as a flexible and cost-effective
solution for KG-based reasoning. It reduces reliance on high-capacity LLMs
while ensuring trustworthy inference.
|
2502.12769
|
How Much Do LLMs Hallucinate across Languages? On Multilingual
Estimation of LLM Hallucination in the Wild
|
cs.CL cs.AI
|
In the age of misinformation, hallucination -- the tendency of Large Language
Models (LLMs) to generate non-factual or unfaithful responses -- represents the
main risk for their global utility. Despite LLMs becoming increasingly
multilingual, the vast majority of research on detecting and quantifying LLM
hallucination is (a) English-centric and (b) focuses on machine translation (MT)
and summarization, tasks that are less common ``in the wild'' than open
information seeking. In contrast, we aim to quantify the extent of LLM
hallucination across languages in knowledge-intensive long-form question
answering. To this end, we train a multilingual hallucination detection model
and conduct a large-scale study across 30 languages and 6 open-source LLM
families. We start from an English hallucination detection dataset and rely on
MT to generate (noisy) training data in other languages. We also manually
annotate gold data for five high-resource languages; we then demonstrate, for
these languages, that the estimates of hallucination rates are similar between
silver (LLM-generated) and gold test sets, validating the use of silver data
for estimating hallucination rates for other languages. For the final rates
estimation, we build a knowledge-intensive QA dataset for 30 languages with
LLM-generated prompts and Wikipedia articles as references. We find that, while
LLMs generate longer responses with more hallucinated tokens for
higher-resource languages, there is no correlation between length-normalized
hallucination rates of languages and their digital representation. Further, we
find that smaller LLMs exhibit larger hallucination rates than larger models.
|
2502.12771
|
Mind the Gap: Aligning the Brain with Language Models Requires a
Nonlinear and Multimodal Approach
|
cs.CL q-bio.NC
|
Self-supervised language and audio models effectively predict brain responses
to speech. However, traditional prediction models rely on linear mappings from
unimodal features, despite the complex integration of auditory signals with
linguistic and semantic information across widespread brain networks during
speech comprehension. Here, we introduce a nonlinear, multimodal prediction
model that combines audio and linguistic features from pre-trained models
(e.g., LLAMA, Whisper). Our approach achieves a 17.2% and 17.9% improvement in
prediction performance (unnormalized and normalized correlation) over
traditional unimodal linear models, as well as a 7.7% and 14.4% improvement,
respectively, over prior state-of-the-art models. These improvements represent
a major step towards future robust in-silico testing and improved decoding
performance. They also reveal how auditory and semantic information are fused
in motor, somatosensory, and higher-level semantic regions, aligning with
existing neurolinguistic theories. Overall, our work highlights the often
neglected potential of nonlinear and multimodal approaches to brain modeling,
paving the way for future studies to embrace these strategies in naturalistic
neurolinguistics research.
|
2502.12776
|
Portable Reward Tuning: Towards Reusable Fine-Tuning across Different
Pretrained Models
|
cs.LG cs.AI stat.ML
|
While foundation models have been exploited for various expert tasks through
fine-tuning, any foundation model will become outdated due to its old knowledge
or limited capability. Thus the underlying foundation model should be
eventually replaced by new ones, which leads to repeated cost of fine-tuning
these new models. Existing work addresses this problem by inference-time
tuning, i.e., modifying the output probabilities from the new foundation model
with the outputs from the old foundation model and its fine-tuned model, which
involves an additional overhead in inference by the latter two models. In this
paper, we propose a new fine-tuning principle, Portable Reward Tuning (PRT),
that reduces the inference overhead by its nature, based on the reformulation
of fine-tuning as the reward maximization. Specifically, instead of fine-tuning
parameters of the foundation models, PRT trains the reward model explicitly
through the same loss function as in fine-tuning. During inference, the reward
model can be used with any foundation model (with the same set of vocabularies
or labels) through the formulation of reward maximization. Experimental
results, covering both vision and language models, demonstrate that the
PRT-trained model can achieve comparable accuracy to the existing work of
inference-time tuning, with less inference cost.
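The inference-time combination can be sketched as follows. This is an illustrative reading of reward-maximization decoding with hypothetical names and toy numbers, not the authors' implementation.

```python
import numpy as np

def portable_reward_decode(base_logits, reward_scores, beta=1.0):
    """Combine a (possibly new) foundation model's next-token logits with a
    portable reward model's per-token scores via reward maximization:
    p(y) proportional to p_base(y) * exp(beta * r(y))."""
    scores = base_logits + beta * reward_scores
    scores -= scores.max()          # subtract max for numerical stability
    p = np.exp(scores)
    return p / p.sum()

vocab = ["yes", "no", "maybe"]
base_logits = np.array([1.0, 1.0, 1.0])   # the new base model is indifferent
reward = np.array([2.0, 0.0, -2.0])       # fine-tuning preference cast as a reward
p = portable_reward_decode(base_logits, reward)
```

Because the reward model only needs to score over the shared vocabulary, the same trained reward can in principle be reused when the underlying foundation model is swapped, which is the portability the abstract describes.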
|
2502.12777
|
Evaluating link prediction: New perspectives and recommendations
|
cs.SI cs.AI
|
Link prediction (LP) is an important problem in network science and machine
learning research. The state-of-the-art LP methods are usually evaluated in a
uniform setup, ignoring several factors associated with the data and
application specific needs. We identify a number of such factors, such as,
network-type, problem-type, geodesic distance between the end nodes and its
distribution over the classes, nature and applicability of LP methods, class
imbalance and its impact on early retrieval, evaluation metric, etc., and
present an experimental setup which allows us to evaluate LP methods in a
rigorous and controlled manner. We perform extensive experiments with a variety
of LP methods over real network datasets in this controlled setup, and gather
valuable insights on the interactions of these factors with the performance of
LP through an array of carefully designed hypotheses. Following the insights,
we provide recommendations to be followed as best practice for evaluating LP
methods.
|
2502.12779
|
Dependence and Uncertainty: Information Measures using Tsallis Entropy
|
stat.ME cs.IT math.IT
|
In multivariate analysis, uncertainty arises from two sources: the marginal
distributions of the variables and their dependence structure. Quantifying the
dependence structure is crucial, as it provides valuable insights into the
relationships among components of a random vector. Copula functions effectively
capture this dependence structure independent of marginals, making copula-based
information measures highly significant. However, existing copula-based
information measures, such as entropy, divergence, and mutual information, rely
on copula densities, which may not exist in many scenarios, limiting their
applicability. Recently, to address this issue, Arshad et al. (2024) introduced
cumulative copula-based measures using Shannon entropy. In this paper, we
extend this framework by using Tsallis entropy, a non-additive entropy that
provides greater flexibility for quantifying uncertainties. We propose
cumulative copula Tsallis entropy, derive its properties and bounds, and
illustrate its utility through examples. We further develop a non-parametric
version of the measure and validate it using coupled periodic and chaotic maps.
Additionally, we extend Kerridge's inaccuracy measure and Kullback-Leibler (KL)
divergence to the cumulative copula framework. Using the relationship between
KL divergence and mutual information, we propose a new cumulative mutual
information (CMI) measure, which overcomes the limitations of density-based
mutual information. Furthermore, we introduce a procedure for testing mutual
independence among random variables using the CMI measure. Finally, we
illustrate the potential of the proposed CMI measure as an economic indicator
through real bivariate financial time series data.
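For reference (standard background, not the paper's cumulative copula construction), the discrete Tsallis entropy of order $q$ recovers Shannon entropy in the limit $q \to 1$:

```latex
S_q(p) \;=\; \frac{1}{q-1}\Bigl(1 - \sum_i p_i^{\,q}\Bigr),
\qquad
\lim_{q \to 1} S_q(p) \;=\; -\sum_i p_i \log p_i ,
```

and its characteristic non-additivity for independent systems $A$ and $B$,

```latex
S_q(A, B) \;=\; S_q(A) + S_q(B) + (1 - q)\, S_q(A)\, S_q(B),
```

is the extra flexibility, controlled by $q$, that the abstract refers to.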
|
2502.12782
|
VidCapBench: A Comprehensive Benchmark of Video Captioning for
Controllable Text-to-Video Generation
|
cs.AI
|
The training of controllable text-to-video (T2V) models relies heavily on the
alignment between videos and captions, yet little existing research connects
video caption evaluation with T2V generation assessment. This paper introduces
VidCapBench, a video caption evaluation scheme specifically designed for T2V
generation, agnostic to any particular caption format. VidCapBench employs a
data annotation pipeline, combining expert model labeling and human refinement,
to associate each collected video with key information spanning video
aesthetics, content, motion, and physical laws. VidCapBench then partitions
these key information attributes into automatically assessable and manually
assessable subsets, catering to both the rapid evaluation needs of agile
development and the accuracy requirements of thorough validation. By evaluating
numerous state-of-the-art captioning models, we demonstrate the superior
stability and comprehensiveness of VidCapBench compared to existing video
captioning evaluation approaches. Verification with off-the-shelf T2V models
reveals a significant positive correlation between scores on VidCapBench and
the T2V quality evaluation metrics, indicating that VidCapBench can provide
valuable guidance for training T2V models. The project is available at
https://github.com/VidCapBench/VidCapBench.
|
2502.12786
|
Composition and Control with Distilled Energy Diffusion Models and
Sequential Monte Carlo
|
stat.ML cs.LG
|
Diffusion models may be formulated as a time-indexed sequence of energy-based
models, where the score corresponds to the negative gradient of an energy
function. As opposed to learning the score directly, an energy parameterization
is attractive as the energy itself can be used to control generation via Monte
Carlo samplers. Architectural constraints and training instability in energy
parameterized models have so far yielded inferior performance compared to
directly approximating the score or denoiser. We address these deficiencies by
introducing a novel training regime for the energy function through
distillation of pre-trained diffusion models, resembling a Helmholtz
decomposition of the score vector field. We further showcase the synergies
between energy and score by casting the diffusion sampling procedure as a
Feynman-Kac model where sampling is controlled using potentials from the learnt
energy functions. The Feynman-Kac formalism enables composition and
low-temperature sampling through sequential Monte Carlo.
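The energy-score relationship this abstract relies on can be illustrated with a toy example (a simple Gaussian energy, not the paper's distilled model): the score is the negative gradient of the energy, checked here against central finite differences.

```python
import numpy as np

# Toy illustration of the energy parameterization (a simple Gaussian
# energy, not the paper's distilled model): the score is the negative
# gradient of the energy, checked against central finite differences.

def energy(x):
    return 0.5 * np.sum(x ** 2)   # E(x) = ||x||^2 / 2 for a standard Gaussian

def score_analytic(x):
    return -x                     # score(x) = -grad E(x) = -x

def score_finite_diff(x, eps=1e-6):
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (energy(x + e) - energy(x - e)) / (2 * eps)
    return -g

x = np.array([0.3, -1.2, 2.0])
assert np.allclose(score_analytic(x), score_finite_diff(x), atol=1e-4)
```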
|
2502.12788
|
Commonsense Reasoning in Arab Culture
|
cs.CL
|
Despite progress in Arabic large language models, such as Jais and AceGPT,
their evaluation on commonsense reasoning has largely relied on
machine-translated datasets, which lack cultural depth and may introduce
Anglocentric biases. Commonsense reasoning is shaped by geographical and
cultural contexts, and existing English datasets fail to capture the diversity
of the Arab world. To address this, we introduce \datasetname, a commonsense
reasoning dataset in Modern Standard Arabic (MSA), covering cultures of 13
countries across the Gulf, Levant, North Africa, and the Nile Valley. The
dataset was built from scratch by engaging native speakers to write and
validate culturally relevant questions for their respective countries.
\datasetname spans 12 daily life domains with 54 fine-grained subtopics,
reflecting various aspects of social norms, traditions, and everyday
experiences. Zero-shot evaluations show that open-weight language models with
up to 32B parameters struggle to comprehend diverse Arab cultures, with
performance varying across regions. These findings highlight the need for more
culturally aware models and datasets tailored to the Arabic-speaking world.
|
2502.12791
|
Beyond Timesteps: A Novel Activation-wise Membrane Potential Propagation
Mechanism for Spiking Neural Networks in 3D cloud
|
cs.CV cs.LG
|
Due to the similar characteristics between event-based visual data and point
clouds, recent studies have emerged that treat event data as event clouds to
learn based on point cloud analysis. Additionally, some works approach point
clouds from the perspective of event vision, employing Spiking Neural Network
(SNN) due to their asynchronous nature. However, these contributions are often
domain-specific, making it difficult to extend their applicability to other
intersecting fields. Moreover, while SNN-based visual tasks have seen
significant growth, the conventional timestep-wise iterative activation
strategy largely limits their real-world applicability, as large timesteps
result in significant delays and increased computational costs. Although
some innovative methods achieve good performance with short timesteps (<10),
few have fundamentally restructured the update strategy of spiking neurons to
completely overcome the limitations of timesteps. In response to these
concerns, we propose a novel and general activation strategy for spiking
neurons called Activation-wise Membrane Potential Propagation (AMP2). This
approach extends the concept of timesteps from a manually crafted parameter
within the activation function to any existing network structure. In
experiments on common point cloud tasks (classification, object, and scene
segmentation) and event cloud tasks (action recognition), we found that AMP2
stabilizes SNN training, maintains competitive performance, and reduces latency
compared to the traditional timestep-wise activation paradigm.
|
2502.12793
|
Unsupervised Anomaly Detection through Mass Repulsing Optimal Transport
|
stat.ML cs.AI cs.LG
|
Detecting anomalies in datasets is a longstanding problem in machine
learning. In this context, anomalies are defined as a sample that significantly
deviates from the remaining data. Meanwhile, optimal transport (OT) is a field
of mathematics concerned with transporting mass between two probability
measures with the least effort. In classical OT, the optimal transportation
strategy of a measure to itself is the identity. In this paper, we tackle
anomaly detection by forcing samples to displace their mass, while keeping the
least
effort objective. We call this new transportation problem Mass Repulsing
Optimal Transport (MROT). Naturally, samples lying in low density regions of
space will be forced to displace mass very far, incurring a higher
transportation cost. We use these concepts to design a new anomaly score.
Through a series of experiments in existing benchmarks, and fault detection
problems, we show that our algorithm improves over existing methods.
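The mass-repulsion intuition can be sketched with a hypothetical toy proxy (a k-nearest-neighbour distance score, not the paper's MROT solver): if each sample must send its mass outside its own neighbourhood, points in low-density regions pay a large transport cost and score as anomalous.

```python
import numpy as np

# Hypothetical toy proxy (a k-NN distance score, not the paper's MROT
# solver): if each sample must send its mass outside its own k-nearest
# neighbourhood, points in low-density regions pay a large transport cost.
# We approximate that cost by the k-th nearest-neighbour distance.

def repulsed_knn_score(X, k=5):
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    D_sorted = np.sort(D, axis=1)   # column 0 is the zero self-distance
    return D_sorted[:, k]           # k-th nearest-neighbour distance (self excluded)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(100, 2)),  # dense inlier cluster
               np.array([[8.0, 8.0]])])              # one isolated outlier
scores = repulsed_knn_score(X, k=5)
assert scores.argmax() == 100  # the outlier receives the largest score
```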
|
2502.12794
|
RAPID: Retrieval Augmented Training of Differentially Private Diffusion
Models
|
cs.CR cs.CV cs.LG
|
Differentially private diffusion models (DPDMs) harness the remarkable
generative capabilities of diffusion models while enforcing differential
privacy (DP) for sensitive data. However, existing DPDM training approaches
often suffer from significant utility loss, large memory footprint, and
expensive inference cost, impeding their practical uses. To overcome such
limitations, we present RAPID: Retrieval Augmented PrIvate Diffusion model, a
novel approach that integrates retrieval augmented generation (RAG) into DPDM
training. Specifically, RAPID leverages available public data to build a
knowledge base of sample trajectories; when training the diffusion model on
private data, RAPID computes the early sampling steps as queries, retrieves
similar trajectories from the knowledge base as surrogates, and focuses on
training the later sampling steps in a differentially private manner. Extensive
evaluation using benchmark datasets and models demonstrates that, with the same
privacy guarantee, RAPID significantly outperforms state-of-the-art approaches
by large margins in generative quality, memory footprint, and inference cost,
suggesting that retrieval-augmented DP training represents a promising
direction for developing future privacy-preserving generative models. The code
is available at: https://github.com/TanqiuJiang/RAPID
|
2502.12796
|
Learning Counterfactually Fair Models via Improved Generation with
Neural Causal Models
|
cs.LG
|
One of the main concerns while deploying machine learning models in
real-world applications is fairness. Counterfactual fairness has emerged as an
intuitive and natural definition of fairness. However, existing methodologies
for enforcing counterfactual fairness seem to have two limitations: (i) they
struggle to generate counterfactual samples faithful to the underlying causal
graph, and (ii) as we argue in this paper, existing regularizers are mere
proxies and do
not directly enforce the exact definition of counterfactual fairness. In this
work, our aim is to mitigate both issues. Firstly, we propose employing Neural
Causal Models (NCMs) for generating the counterfactual samples. For
implementing the abduction step in NCMs, the posteriors of the exogenous
variables need to be estimated given a counterfactual query, as they are not
readily available. As a consequence, $\mathcal{L}_3$ consistency with respect
to the underlying causal graph cannot be guaranteed in practice due to the
estimation errors involved. To mitigate this issue, we propose a novel kernel
least squares loss term that enforces the $\mathcal{L}_3$ constraints
explicitly. Thus, we obtain an improved counterfactual generation suitable for
the counterfactual fairness task. Secondly, we propose a new MMD-based
regularizer term that explicitly enforces the counterfactual fairness
conditions into the base model while training. We show an improved trade-off
between counterfactual fairness and generalization over existing baselines on
synthetic and benchmark datasets.
|
2502.12798
|
Envious Explore and Exploit
|
cs.GT cs.AI cs.LG
|
Explore-and-exploit tradeoffs play a key role in recommendation systems
(RSs), aiming at serving users better by learning from previous interactions.
Despite their commercial success, the societal effects of explore-and-exploit
mechanisms are not well understood, especially regarding the utility
discrepancy they generate between different users. In this work, we measure
such discrepancy using the economic notion of envy. We present a multi-armed
bandit-like model in which every round consists of several sessions, and
rewards are realized once per round. We call the latter property reward
consistency, and show that the RS can leverage this property for better
societal outcomes. On the downside, doing so also generates envy, as
late-to-arrive users enjoy the information gathered by early-to-arrive users.
We examine the generated envy under several arrival order mechanisms and
virtually any anonymous algorithm, i.e., any algorithm that treats all similar
users similarly without leveraging their identities. We provide tight envy
bounds on uniform arrival and upper bound the envy for nudged arrival, in which
the RS can affect the order of arrival by nudging its users. Furthermore, we
study the efficiency-fairness trade-off by devising an algorithm that allows
constant envy and approximates the optimal welfare in restricted settings.
Finally, we validate our theoretical results empirically using simulations.
|
2502.12799
|
Towards Text-Image Interleaved Retrieval
|
cs.CL cs.CV cs.IR
|
Current multimodal information retrieval studies mainly focus on single-image
inputs, which limits real-world applications involving multiple images and
text-image interleaved content. In this work, we introduce the text-image
interleaved retrieval (TIIR) task, where the query and document are interleaved
text-image sequences, and the model is required to understand the semantics
from the interleaved context for effective retrieval. We construct a TIIR
benchmark based on naturally interleaved wikiHow tutorials, where a specific
pipeline is designed to generate interleaved queries. To explore the task, we
adapt several off-the-shelf retrievers and build a dense baseline based on an
interleaved multimodal large language model (MLLM). We then propose a novel
Matryoshka Multimodal Embedder (MME), which compresses the number of visual
tokens at different granularity, to address the challenge of excessive visual
tokens in MLLM-based TIIR models. Experiments demonstrate that simple
adaptation of existing models does not consistently yield effective results.
Our MME achieves significant improvements over the baseline while using
substantially fewer visual tokens. We provide extensive analysis and will
release the dataset and
code to facilitate future research.
|
2502.12801
|
Learning Wall Segmentation in 3D Vessel Trees using Sparse Annotations
|
cs.CV
|
We propose a novel approach that uses sparse annotations from clinical
studies to train a 3D segmentation of the carotid artery wall. We use a
centerline annotation to sample perpendicular cross-sections of the carotid
artery and use an adversarial 2D network to segment them. These annotations are
then transformed into 3D pseudo-labels for training of a 3D convolutional
neural network, circumventing the creation of manual 3D masks. For pseudo-label
creation in the bifurcation area we propose the use of cross-sections
perpendicular to the bifurcation axis and show that this enhances segmentation
performance. Different sampling distances had a lesser impact. The proposed
method allows for efficient training of 3D segmentation, offering potential
improvements in the assessment of carotid artery stenosis and allowing the
extraction of 3D biomarkers such as plaque volume.
|
2502.12802
|
PPGF: Probability Pattern-Guided Time Series Forecasting
|
cs.LG
|
Time series forecasting (TSF) is an essential branch of machine learning with
various applications. Most methods for TSF focus on constructing different
networks to extract better information and improve performance. However,
practical application data contain different internal mechanisms, resulting in
a mixture of multiple patterns. That is, a model fits different patterns with
varying ability, producing different errors for each. In order to solve this
problem, we propose an end-to-end framework, namely probability pattern-guided
time series forecasting (PPGF). PPGF reformulates the TSF problem as a
forecasting task guided by probabilistic pattern classification. Firstly, we
propose the grouping strategy to approach forecasting problems as
classification and alleviate the impact of data imbalance on classification.
Secondly, we predict in the corresponding class interval to guarantee the
consistency of classification and forecasting. In addition, True Class
Probability (TCP) is introduced to pay more attention to the difficult samples
to improve the classification accuracy. Specifically, PPGF classifies the
different patterns to determine which one the target value may belong to and
estimates it accurately in the corresponding interval. To demonstrate the
effectiveness of the proposed framework, we conduct extensive experiments on
real-world datasets, and PPGF achieves significant performance improvements
over several baseline methods. Furthermore, the effectiveness of TCP and the
necessity of consistency between classification and forecasting are proved in
the experiments. All data and codes are available online:
https://github.com/syrGitHub/PPGF.
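The grouping idea can be sketched with a minimal stand-in (hypothetical helper functions, not the paper's networks): discretize the target range into class intervals, choose an interval with a naive persistence "classifier", and estimate the target inside that interval, keeping classification and forecasting consistent.

```python
import numpy as np

# Minimal stand-in for the pattern-guided idea (hypothetical helpers, not
# the paper's networks): discretize the target range into class intervals,
# choose an interval with a naive persistence "classifier", and estimate
# the target inside that interval.

def fit_interval_means(y_train, n_bins=4):
    edges = np.quantile(y_train, np.linspace(0, 1, n_bins + 1))
    labels = np.clip(np.searchsorted(edges, y_train, side="right") - 1, 0, n_bins - 1)
    means = np.array([y_train[labels == b].mean() for b in range(n_bins)])
    return edges, means

def forecast_next(y_last, edges, means):
    b = np.clip(np.searchsorted(edges, y_last, side="right") - 1, 0, means.size - 1)
    return means[b]   # estimate restricted to the predicted class interval

rng = np.random.default_rng(0)
y = np.sin(np.linspace(0, 20, 200)) + 0.1 * rng.normal(size=200)
edges, means = fit_interval_means(y)
pred = forecast_next(y[-1], edges, means)
print(pred)
```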
|
2502.12803
|
Design Optimization of Musculoskeletal Humanoids with Maximization of
Redundancy to Compensate for Muscle Rupture
|
cs.RO
|
Musculoskeletal humanoids have various biomimetic advantages, and the
redundant muscle arrangement allowing for variable stiffness control is one of
the most important. In this study, we focus on one feature of the redundancy,
which enables the humanoid to keep moving even if one of its muscles breaks, an
advantage that has not been dealt with in many studies. In order to make the
most of this advantage, the design of muscle arrangement is optimized by
considering the maximization of minimum available torque that can be exerted
when one muscle breaks. This method is applied to the elbow of a
musculoskeletal humanoid Musashi with simulations, the design policy is
extracted from the optimization results, and its effectiveness is confirmed
with the actual robot.
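The max-min design criterion can be illustrated with a hedged toy calculation (hypothetical moment arms and tension limits, not Musashi's actual geometry): the worst single rupture removes the largest torque contributor, and the design should maximize what remains.

```python
import numpy as np

# Hedged toy calculation (hypothetical moment arms and tension limits, not
# Musashi's geometry): with redundant muscles, the torque still available
# in one direction after a single rupture is the total contribution minus
# the ruptured muscle's share; the worst case loses the largest contributor.

def worst_case_torque(moment_arms, max_tensions):
    torques = moment_arms * max_tensions       # per-muscle torque (N*m)
    contrib = np.maximum(torques, 0.0)         # muscles pulling in + direction
    return contrib.sum() - contrib.max()       # torque left after worst rupture

arms = np.array([0.03, 0.025, -0.02, 0.028])       # moment arms in m (sign = direction)
tensions = np.array([400.0, 350.0, 500.0, 300.0])  # maximum tensions in N
print(worst_case_torque(arms, tensions))
```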
|
2502.12804
|
Reinforcement Learning for Dynamic Resource Allocation in Optical
Networks: Hype or Hope?
|
cs.NI cs.LG cs.SY eess.SY
|
The application of reinforcement learning (RL) to dynamic resource allocation
in optical networks has been the focus of intense research activity in recent
years, with almost 100 peer-reviewed papers. We present a review of progress in
the field, and identify significant gaps in benchmarking practices and
reproducibility. To determine the strongest benchmark algorithms, we
systematically evaluate several heuristics across diverse network topologies.
We find that path count and sort criteria for path selection significantly
affect the benchmark performance. We meticulously recreate the problems from
five landmark papers and apply the improved benchmarks. Our comparisons
demonstrate that simple heuristics consistently match or outperform the
published RL solutions, often with an order of magnitude lower blocking
probability. Furthermore, we present empirical lower bounds on network blocking
using a novel defragmentation-based method, revealing that potential
improvements over the benchmark heuristics are limited to 19-36% increased
traffic load for the same blocking performance in our examples. We make our
simulation framework and results publicly available to promote reproducible
research and standardized evaluation https://doi.org/10.5281/zenodo.12594495.
|
2502.12807
|
An improved wind power prediction via a novel wind ramp identification
algorithm
|
cs.LG
|
Authors: Yifan Xu. Conventional wind power prediction methods often
struggle to provide accurate and reliable predictions in the presence of sudden
changes in wind speed and power output. To address this challenge, this study
proposes an integrated algorithm that combines a wind speed mutation
identification algorithm, an optimized similar period matching algorithm and a
wind power prediction algorithm. By exploiting the convergence properties of
meteorological events, the method significantly improves the accuracy of wind
power prediction under sudden meteorological changes. Firstly, a novel adaptive
model based on variational mode decomposition, the VMD-IC model, is developed
for identifying and labelling key turning points in the historical wind power
data, representing abrupt meteorological environments. In addition, this paper
proposes a Ramp Factor (RF) indicator and a wind speed similarity coefficient
to refine the definition of wind power ramp events (WPREs). Building on this
improved ramp definition and the denoising algorithm, the Informer deep
learning model then combines the outputs of the first two models with
multimodal data, such as numerical weather prediction (NWP) forecasts, to
produce accurate wind power forecasts. The experimental results of the
ablation study confirm the effectiveness and reliability of the proposed wind
ramp identification method. Compared with existing methods, the proposed model
exhibits excellent performance and provides valuable guidance for the safe and
cost-effective operation of power systems.
|
2502.12808
|
Exceeding the Maximum Speed Limit of the Joint Angle for the Redundant
Tendon-driven Structures of Musculoskeletal Humanoids
|
cs.RO
|
The musculoskeletal humanoid has various biomimetic benefits, and the
redundant muscle arrangement is one of its most important characteristics. This
redundancy can achieve fail-safe redundant actuation and variable stiffness
control. However, there is a problem that the maximum joint angle velocity is
limited by the slowest muscle among the redundant muscles. In this study, we
propose two methods that can exceed the limited maximum joint angle velocity,
and verify the effectiveness with actual robot experiments.
|
2502.12810
|
Frequency-domain alignment of heterogeneous, multidimensional
separations data through complex orthogonal Procrustes analysis
|
math.NA cs.LG cs.NA
|
Multidimensional separations data have the capacity to reveal detailed
information about complex biological samples. However, data analysis has been
an ongoing challenge in the area since the peaks that represent chemical
factors may drift over the course of several analytical runs along the first
and second dimension retention times. This makes higher-level analyses of the
data difficult, since a 1-1 comparison of samples is seldom possible without
sophisticated pre-processing routines. Further complicating the issue is the
fact that closely co-eluting components will need to be resolved, typically
using some variants of Parallel Factor Analysis (PARAFAC), Multivariate Curve
Resolution (MCR), or the recently explored Shift-Invariant Multi-linearity.
These algorithms work with a user-specified number of components, and regions
of interest, which are then summarized as a shift-invariant peak table.
However, identifying regions of interest across truly heterogeneous data
remains an ongoing obstacle to automated deployment of these algorithms. This
work offers a very simple solution to the alignment problem through an
orthogonal Procrustes analysis of the frequency-domain representation of
synthetic multidimensional separations data, for peaks that are logarithmically
transformed to simulate shift while preserving the underlying topology of the
data. Using this very simple method for analysis, two synthetic chromatograms
can be compared under close to the worst possible scenarios for alignment.
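The closed-form step the method leans on can be sketched as follows (a generic complex orthogonal Procrustes fit on random data, not the paper's chromatographic pipeline): the unitary R minimizing ||A R - B||_F comes from the SVD of A^H B, and here it recovers a known random unitary.

```python
import numpy as np

# Minimal sketch of the closed-form step (generic complex orthogonal
# Procrustes on random data, not the paper's chromatographic pipeline):
# the unitary R minimizing ||A @ R - B||_F is U @ Vh, where
# U, S, Vh = svd(A^H B). We verify it recovers a known random unitary.

def complex_procrustes(A, B):
    U, _, Vh = np.linalg.svd(A.conj().T @ B)
    return U @ Vh  # unitary minimizer of ||A @ R - B||_F

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 4)) + 1j * rng.normal(size=(50, 4))
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
B = A @ Q                      # B is A rotated by an unknown unitary Q
R = complex_procrustes(A, B)
assert np.allclose(A @ R, B, atol=1e-6)
```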
|
2502.12811
|
Applications of Stretch Reflex for the Upper Limb of Musculoskeletal
Humanoids: Protective Behavior, Postural Stability, and Active Induction
|
cs.RO
|
The musculoskeletal humanoid has various biomimetic benefits, and it is
important that we can embed and evaluate human reflexes in the actual robot.
Although the stretch reflex has been implemented in the lower limbs of
musculoskeletal humanoids, we apply it to the upper limb to discover its useful
applications.
We consider the implementation of stretch reflex in the actual robot, its
active/passive applications, and the change in behavior according to the
difference of parameters.
|
2502.12813
|
Simulating User Diversity in Task-Oriented Dialogue Systems using Large
Language Models
|
cs.CL
|
In this study, we explore the application of Large Language Models (LLMs) for
generating synthetic users and simulating user conversations with a
task-oriented dialogue system and present detailed results and their analysis.
We propose a comprehensive, novel user simulation technique that
uses LLMs to create diverse user profiles, set goals, engage in multi-turn
dialogues, and evaluate the conversation success. We employ two proprietary
LLMs, namely GPT-4o and GPT-o1 (Achiam et al., 2023), to generate a
heterogeneous base of user profiles, characterized by varied demographics,
multiple user goals, different conversational styles, initial knowledge levels,
interests, and conversational objectives. We perform a detailed analysis of the
user profiles generated by LLMs to assess the diversity, consistency, and
potential biases inherent in these LLM-generated user simulations. We find that
GPT-o1 generates a more heterogeneous user distribution across most user
attributes, while GPT-4o generates more skewed user attributes. The generated
set of user profiles is then utilized to simulate dialogue sessions by
interacting with a task-oriented dialogue system.
|
2502.12819
|
Carotid Artery Plaque Analysis in 3D Based on Distance Encoding in Mesh
Representations
|
cs.CV
|
Purpose: Enabling a comprehensive and robust assessment of carotid artery
plaques in 3D through extraction and visualization of quantitative plaque
parameters. These parameters have potential applications in stroke risk
analysis, evaluation of therapy effectiveness, and plaque progression
prediction. Methods: We propose a novel method for extracting a plaque mesh
from 3D vessel wall segmentation using distance encoding on the inner and outer
wall mesh for precise plaque structure analysis. A case-specific threshold,
derived from the normal vessel wall thickness, was applied to extract plaques
from a dataset of 202 T1-weighted black-blood MRI scans of subjects with up to
50% stenosis. Applied to baseline and one-year follow-up data, the method
supports detailed plaque morphology analysis over time, including plaque volume
quantification, aided by improved visualization via mesh unfolding. Results: We
successfully extracted plaque meshes from 341 carotid arteries, capturing a
wide range of plaque shapes with volumes ranging from 2.69 µl to 847.7 µl. The
use of a case-specific threshold effectively eliminated false
positives in young, healthy subjects. Conclusion: The proposed method enables
precise extraction of plaque meshes from 3D vessel wall segmentation masks
enabling a correspondence between baseline and one-year follow-up examinations.
Unfolding the plaque meshes enhances visualization, while the mesh-based
analysis allows quantification of plaque parameters independent of voxel
resolution.
|
2502.12821
|
Pitfalls of Scale: Investigating the Inverse Task of Redefinition in
Large Language Models
|
cs.CL
|
Inverse tasks can uncover potential reasoning gaps as Large Language Models
(LLMs) scale up. In this work, we explore the redefinition task, in which we
assign alternative values to well-known physical constants and units of
measure, prompting LLMs to respond accordingly. Our findings show that not only
does model performance degrade with scale, but its false confidence also rises.
Moreover, while factors such as prompting strategies or response formatting are
influential, they do not preclude LLMs from anchoring to memorized values.
|
2502.12825
|
Reasoning and the Trusting Behavior of DeepSeek and GPT: An Experiment
Revealing Hidden Fault Lines in Large Language Models
|
cs.CL cs.AI
|
When encountering increasingly frequent performance improvements or cost
reductions from a new large language model (LLM), developers of applications
leveraging LLMs must decide whether to take advantage of these improvements or
stay with older tried-and-tested models. Low perceived switching frictions can
lead to choices that do not consider more subtle behavior changes that the
transition may induce. Our experiments use a popular game-theoretic behavioral
economics model of trust to show stark differences in the trusting behavior of
OpenAI's and DeepSeek's models. We highlight a collapse in the economic trust
behavior of the o1-mini and o3-mini models as they reconcile profit-maximizing
and risk-seeking with future returns from trust, and contrast it with
DeepSeek's more sophisticated and profitable trusting behavior that stems from
an ability to incorporate deeper concepts like forward planning and
theory-of-mind. As LLMs form the basis for high-stakes commercial systems, our
results highlight the perils of relying on LLM performance benchmarks that are
too narrowly defined and suggest that careful analysis of their hidden fault
lines should be part of any organization's AI strategy.
|
2502.12829
|
KazMMLU: Evaluating Language Models on Kazakh, Russian, and Regional
Knowledge of Kazakhstan
|
cs.CL
|
Despite having a population of twenty million, Kazakhstan's culture and
language remain underrepresented in the field of natural language processing.
Although large language models (LLMs) continue to advance worldwide, progress
in the Kazakh language has been limited, as seen in the scarcity of dedicated
models and benchmark evaluations. To address this gap, we introduce KazMMLU,
the first MMLU-style dataset specifically designed for the Kazakh language.
KazMMLU
comprises 23,000 questions that cover various educational levels, including
STEM, humanities, and social sciences, sourced from authentic educational
materials and manually validated by native speakers and educators. The dataset
includes 10,969 Kazakh questions and 12,031 Russian questions, reflecting
Kazakhstan's bilingual education system and rich local context. Our evaluation
of several state-of-the-art multilingual models (Llama-3.1, Qwen-2.5, GPT-4,
and DeepSeek V3) demonstrates substantial room for improvement, as even the
best-performing models struggle to achieve competitive performance in Kazakh
and Russian. These findings underscore significant performance gaps compared to
high-resource languages. We hope that our dataset will enable further research
and development of Kazakh-centric LLMs. Data and code will be made available
upon acceptance.
|
2502.12834
|
NTP-INT: Network Traffic Prediction-Driven In-band Network Telemetry for
High-load Switches
|
cs.NI cs.LG
|
In-band network telemetry (INT) is essential to network management due to its
real-time visibility. However, because of the rapid increase in network devices
and services, it has become crucial to have targeted access to detailed network
information in a dynamic network environment. This paper proposes an
intelligent network telemetry system called NTP-INT to obtain more fine-grained
network information on high-load switches. Specifically, NTP-INT consists of
three modules: network traffic prediction module, network pruning module, and
probe path planning module. Firstly, the network traffic prediction module
adopts a Multi-Temporal Graph Neural Network (MTGNN) to predict future network
traffic and identify high-load switches. Then, we design the network pruning
algorithm to generate a subnetwork covering all high-load switches to reduce
the complexity of probe path planning. Finally, the probe path planning module
uses an attention-mechanism-based deep reinforcement learning (DRL) model to
plan efficient probe paths in the network slice. The experimental results
demonstrate that NTP-INT can acquire more precise network information on
high-load switches while decreasing the control overhead by 50%.
|
2502.12835
|
Subword models struggle with word learning, but surprisal hides it
|
cs.CL
|
We study word learning in subword and character language models with the
psycholinguistic lexical decision task. While subword LMs struggle to discern
words and non-words with high accuracy, character LMs solve this task easily
and consistently. Furthermore, when comparing word learning and syntactic
learning, both processes are separable in character LMs, where word learning
predates syntactic learning, whereas these processes are simultaneous in
subword LMs. This raises questions about the adequacy of subword LMs for
modeling language acquisition and positions character LMs as a viable
alternative.
|
2502.12836
|
An LLM-Powered Agent for Physiological Data Analysis: A Case Study on
PPG-based Heart Rate Estimation
|
cs.CL
|
Large language models (LLMs) are revolutionizing healthcare by improving
diagnosis, patient care, and decision support through interactive
communication. More recently, they have been applied to analyzing physiological
time-series like wearable data for health insight extraction. Existing methods
embed raw numerical sequences directly into prompts, which exceeds token limits
and increases computational costs. Additionally, some studies integrated
features extracted from time-series in textual prompts or applied multimodal
approaches. However, these methods often produce generic and unreliable outputs
due to LLMs' limited analytical rigor and inefficiency in interpreting
continuous waveforms. In this paper, we develop an LLM-powered agent for
physiological time-series analysis that aims to bridge the gap by integrating
LLMs with well-established analytical tools. Built on OpenCHA, an open-source
LLM-powered framework, our agent features an orchestrator that integrates user
interaction, data sources, and analytical tools to generate accurate health
insights. To evaluate its effectiveness, we implement a case study on heart
rate (HR) estimation from Photoplethysmogram (PPG) signals using a dataset of
PPG and Electrocardiogram (ECG) recordings in a remote health monitoring study.
The agent's performance is benchmarked against OpenAI GPT-4o-mini and GPT-4o,
with ECG serving as the gold standard for HR estimation. Results demonstrate
that our agent significantly outperforms benchmark models by achieving lower
error rates and more reliable HR estimations. The agent implementation is
publicly available on GitHub.
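One of the well-established analytical tools such an agent could call might look like this minimal, hypothetical sketch (a synthetic sinusoidal "PPG", not the study's dataset): estimate heart rate from the dominant spectral peak within a plausible HR band.

```python
import numpy as np

# Minimal, hypothetical sketch of an analytical tool such an agent could
# call (synthetic sinusoidal "PPG", not the study's dataset): estimate
# heart rate from the dominant spectral peak within a plausible HR band.

fs = 50.0                                  # sampling rate in Hz
t = np.arange(0, 30, 1 / fs)               # 30-second window
hr_true = 72.0                             # ground-truth heart rate in bpm
rng = np.random.default_rng(0)
ppg = np.sin(2 * np.pi * (hr_true / 60.0) * t) + 0.1 * rng.normal(size=t.size)

spec = np.abs(np.fft.rfft(ppg))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
band = (freqs >= 0.7) & (freqs <= 3.0)     # plausible HR band: 42-180 bpm
hr_est = 60.0 * freqs[band][spec[band].argmax()]
print(f"estimated HR: {hr_est:.1f} bpm")
```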
|
2502.12838
|
Towards Equitable AI: Detecting Bias in Using Large Language Models for
Marketing
|
cs.CY cs.CL
|
The recent advances in large language models (LLMs) have revolutionized
industries such as finance, marketing, and customer service by enabling
sophisticated natural language processing tasks. However, the broad adoption of
LLMs brings significant challenges, particularly in the form of social biases
that can be embedded within their outputs. Biases related to gender, age, and
other sensitive attributes can lead to unfair treatment, raising ethical
concerns and risking both company reputation and customer trust. This study
examined bias in finance-related marketing slogans generated by LLMs (i.e.,
ChatGPT) by prompting tailored ads targeting five demographic categories:
gender, marital status, age, income level, and education level. A total of
1,700 slogans were generated for 17 unique demographic groups, and key terms
were categorized into four thematic groups: empowerment, financial, benefits
and features, and personalization. Bias was systematically assessed using
relative bias calculations and statistically tested with the Kolmogorov-Smirnov
(KS) test against general slogans generated for any individual. Results
revealed that marketing slogans are not neutral; rather, they emphasize
different themes based on demographic factors. Women, younger individuals,
low-income earners, and those with lower education levels receive more distinct
messaging compared to older, higher-income, and highly educated individuals.
This underscores the need to consider demographic-based biases in AI-generated
marketing strategies and their broader societal implications. The findings of
this study provide a roadmap for developing more equitable AI systems,
highlighting the need for ongoing bias detection and mitigation efforts in
LLMs.
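The relative-bias-plus-KS-test recipe described above can be sketched with SciPy. The theme frequencies below are invented for illustration, not figures from the study:

```python
from scipy.stats import ks_2samp

# Hypothetical per-slogan-set frequencies of an "empowerment" theme term,
# for one demographic group vs. the general (any-individual) baseline.
general = [0.10, 0.12, 0.11, 0.09, 0.13, 0.10, 0.12, 0.11]
group   = [0.22, 0.25, 0.21, 0.24, 0.23, 0.26, 0.22, 0.24]

# Relative bias: how much more often the theme appears for the group.
rel_bias = (sum(group) / len(group)) / (sum(general) / len(general)) - 1

# Two-sample Kolmogorov-Smirnov test: are the distributions different?
stat, p_value = ks_2samp(group, general)
print(f"relative bias = {rel_bias:.2f}, KS p-value = {p_value:.4f}")
```

A small p-value would flag the group's slogan theme distribution as significantly different from the general baseline, as the study reports for several demographic groups.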
|
2502.12842
|
Towards Adaptive Feedback with AI: Comparing the Feedback Quality of
LLMs and Teachers on Experimentation Protocols
|
cs.AI cs.HC
|
Effective feedback is essential for fostering students' success in scientific
inquiry. With advancements in artificial intelligence, large language models
(LLMs) offer new possibilities for delivering instant and adaptive feedback.
However, this feedback often lacks the pedagogical validation provided by
real-world practitioners. To address this limitation, our study evaluates and
compares the feedback quality of LLM agents with that of human teachers and
science education experts on student-written experimentation protocols. Four
blinded raters, all professionals in scientific inquiry and science education,
evaluated the feedback texts generated by 1) the LLM agent, 2) the teachers and
3) the science education experts using a five-point Likert scale based on six
criteria of effective feedback: Feed Up, Feed Back, Feed Forward, Constructive
Tone, Linguistic Clarity, and Technical Terminology. Our results indicate that
LLM-generated feedback shows no significant difference to that of teachers and
experts in overall quality. However, the LLM agent's performance lags in the
Feed Back dimension, which involves identifying and explaining errors within
the student's work context. Qualitative analysis highlighted the LLM agent's
limitations in contextual understanding and in the clear communication of
specific errors. Our findings suggest that combining LLM-generated feedback
with human expertise can enhance educational practices by leveraging the
efficiency of LLMs and the nuanced understanding of educators.
|
2502.12845
|
MOLLM: Multi-Objective Large Language Model for Molecular Design --
Optimizing with Experts
|
cs.LG
|
Molecular design plays a critical role in advancing fields such as drug
discovery, materials science, and chemical engineering. This work introduces
the Multi-Objective Large Language Model for Molecular Design (MOLLM), a novel
framework that combines domain-specific knowledge with the adaptability of
Large Language Models to optimize molecular properties across multiple
objectives. Leveraging in-context learning and multi-objective optimization,
MOLLM achieves superior efficiency, innovation, and performance, significantly
surpassing state-of-the-art (SOTA) methods. Recognizing the substantial impact
of initial populations on evolutionary algorithms, we categorize them into
three types: best initial, worst initial, and random initial, to ensure the
initial molecules are the same for each method across experiments. Our results
demonstrate that MOLLM consistently outperforms SOTA models in all of our
experiments. We also provide extensive ablation studies to evaluate the
superiority of our components.
|
2502.12847
|
Characterizing the Interaction of Cultural Evolution Mechanisms in
Experimental Social Networks
|
cs.SI q-bio.NC q-bio.PE
|
Understanding how cognitive and social mechanisms shape the evolution of
complex artifacts such as songs is central to cultural evolution research.
Social network topology (what artifacts are available?), selection (which are
chosen?), and reproduction (how are they copied?) have all been proposed as key
influencing factors. However, prior research has rarely studied them together
due to methodological challenges. We address this gap through a controlled
naturalistic paradigm whereby participants (N=2,404) are placed in networks and
are asked to iteratively choose and sing back melodies from their neighbors. We
show that this setting yields melodies that are more complex and more pleasant
than those found in the more-studied linear transmission setting, and exhibits
robust differences across topologies. Crucially, these differences are
diminished when selection or reproduction biases are eliminated, suggesting an
interaction between mechanisms. These findings shed light on the interplay of
mechanisms underlying the evolution of cultural artifacts.
|
2502.12849
|
Leveraging Intermediate Representations for Better Out-of-Distribution
Detection
|
cs.LG cs.CV
|
In real-world applications, machine learning models must reliably detect
Out-of-Distribution (OoD) samples to prevent unsafe decisions. Current OoD
detection methods often rely on analyzing the logits or the embeddings of the
penultimate layer of a neural network. However, little work has been conducted
on the exploitation of the rich information encoded in intermediate layers. To
address this, we analyze the discriminative power of intermediate layers and
show that they can effectively be used for OoD detection. Therefore, we propose
to regularize intermediate layers with an energy-based contrastive loss and to
group multiple layers into a single aggregated response. We demonstrate that
intermediate layer activations improve OoD detection performance by running a
comprehensive evaluation across multiple datasets.
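As a sketch of the aggregation idea (not the paper's exact loss or architecture), per-layer energy scores can be averaged into a single OoD response; the toy "logits" below stand in for per-layer readout heads, which are an assumption here:

```python
import numpy as np

def energy_score(logits, T=1.0):
    """Negative free energy: higher for in-distribution-like activations."""
    return T * np.log(np.sum(np.exp(logits / T), axis=-1))

def aggregated_ood_score(layer_outputs):
    """Average per-layer energies into one aggregated response.

    `layer_outputs`: list of (batch, features) arrays, one per
    intermediate layer (assumed to come from per-layer linear heads).
    """
    return np.mean([energy_score(z) for z in layer_outputs], axis=0)

rng = np.random.default_rng(0)
in_dist = [rng.normal(3.0, 1.0, (4, 10)) for _ in range(3)]  # confident layers
ood     = [rng.normal(0.0, 1.0, (4, 10)) for _ in range(3)]  # diffuse layers
print(aggregated_ood_score(in_dist).mean(), aggregated_ood_score(ood).mean())
```

Thresholding the aggregated score then separates in-distribution from OoD inputs; the energy-based contrastive loss in the paper trains the layers so this gap widens.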
|
2502.12851
|
MeMo: Towards Language Models with Associative Memory Mechanisms
|
cs.CL cs.AI
|
Memorization is a fundamental ability of Transformer-based Large Language
Models, achieved through learning. In this paper, we propose a paradigm shift
by designing an architecture to memorize text directly, bearing in mind the
principle that memorization precedes learning. We introduce MeMo, a novel
architecture for language modeling that explicitly memorizes sequences of
tokens in layered associative memories. By design, MeMo offers transparency and
the possibility of model editing, including forgetting texts. We experimented
with the MeMo architecture, showing the memorization power of the one-layer and
the multi-layer configurations.
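MeMo's layered design is not detailed in the abstract; a classical one-layer correlation-matrix associative memory illustrates the underlying store/recall/forget principle (all names and the orthonormal-key setup are illustrative, not MeMo itself):

```python
import numpy as np

class CorrelationMemory:
    """One-layer linear associative memory (a classical sketch, not MeMo)."""
    def __init__(self, dim):
        self.W = np.zeros((dim, dim))

    def store(self, key, value):
        self.W += np.outer(value, key)   # superimpose the pair into W

    def forget(self, key, value):
        self.W -= np.outer(value, key)   # editing: remove a stored pair

    def recall(self, key):
        return self.W @ key

dim = 64
rng = np.random.default_rng(1)
k1 = rng.standard_normal(dim); k1 /= np.linalg.norm(k1)
k2 = rng.standard_normal(dim)
k2 -= (k2 @ k1) * k1                     # orthonormal keys -> exact recall
k2 /= np.linalg.norm(k2)
v1, v2 = rng.standard_normal(dim), rng.standard_normal(dim)

mem = CorrelationMemory(dim)
mem.store(k1, v1)
mem.store(k2, v2)
recalled = mem.recall(k1)                # recovers v1
mem.forget(k1, v1)                       # transparent "forgetting"
```

The transparency claimed for MeMo follows the same logic: because memories are explicit superpositions, a stored text can be located and subtracted out.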
|
2502.12852
|
MVL-SIB: A Massively Multilingual Vision-Language Benchmark for
Cross-Modal Topical Matching
|
cs.CL
|
Existing multilingual vision-language (VL) benchmarks often only cover a
handful of languages. Consequently, evaluations of large vision-language models
(LVLMs) predominantly target high-resource languages, underscoring the need for
evaluation data for low-resource languages. To address this limitation, we
introduce MVL-SIB, a massively multilingual vision-language benchmark that
evaluates both cross-modal and text-only topical matching across 205 languages
-- over 100 more than the most multilingual existing VL benchmarks encompass.
We then benchmark a range of open-weight LVLMs together with GPT-4o(-mini)
on MVL-SIB. Our results reveal that LVLMs struggle in cross-modal topic
matching in lower-resource languages, performing no better than chance on
languages like N'Koo. Our analysis further reveals that VL support in LVLMs
declines disproportionately relative to textual support for lower-resource
languages, as evidenced by comparison of cross-modal and text-only topical
matching performance. We further observe that open-weight LVLMs do not benefit
from representing a topic with more than one image, suggesting that these
models are not yet fully effective at handling multi-image tasks. By
correlating performance on MVL-SIB with other multilingual VL benchmarks, we
highlight that MVL-SIB serves as a comprehensive probe of multilingual VL
understanding in LVLMs.
|
2502.12853
|
S$^2$R: Teaching LLMs to Self-verify and Self-correct via Reinforcement
Learning
|
cs.CL cs.LG
|
Recent studies have demonstrated the effectiveness of LLM test-time scaling.
However, existing approaches to incentivize LLMs' deep thinking abilities
generally require large-scale data or significant training efforts. Meanwhile,
it remains unclear how to improve the thinking abilities of less powerful base
models. In this work, we introduce S$^2$R, an efficient framework that enhances
LLM reasoning by teaching models to self-verify and self-correct during
inference. Specifically, we first initialize LLMs with iterative
self-verification and self-correction behaviors through supervised fine-tuning
on carefully curated data. The self-verification and self-correction skills are
then further strengthened by both outcome-level and process-level reinforcement
learning, with minimized resource requirements, enabling the model to
adaptively refine its reasoning process during inference. Our results
demonstrate that, with only 3.1k self-verifying and self-correcting behavior
initialization samples, Qwen2.5-math-7B achieves an accuracy improvement from
51.0\% to 81.6\%, outperforming models trained on an equivalent amount of
long-CoT distilled data. Extensive experiments and analysis based on three base
models across both in-domain and out-of-domain benchmarks validate the
effectiveness of S$^2$R. Our code and data are available at
https://github.com/NineAbyss/S2R.
|
2502.12855
|
Integrating Arithmetic Learning Improves Mathematical Reasoning in
Smaller Models
|
cs.CL cs.AI cs.LG
|
While large models pre-trained on high-quality data exhibit excellent
performance across various reasoning tasks, including mathematical reasoning
(e.g. GSM8k, MultiArith), specializing smaller models to excel at mathematical
reasoning remains a challenging problem. Common approaches to address this
challenge include knowledge distillation, where smaller student models learn
from large pre-trained teacher models, and data augmentation, such as
rephrasing questions. Despite these efforts, smaller models struggle with
arithmetic computations, leading to errors in mathematical reasoning. In this
work, we focus on leveraging a programmatically generated arithmetic dataset to
enhance the reasoning capabilities of smaller models. We investigate two key
approaches to incorporate this dataset -- (1) intermediate fine-tuning, where a
model is fine-tuned on the arithmetic dataset before being trained on a
reasoning dataset, and (2) integrating the arithmetic dataset into the
instruction-tuning mixture, allowing the model to learn arithmetic skills
alongside general instruction-following abilities. Our experiments on multiple
reasoning benchmarks demonstrate that incorporating an arithmetic dataset,
whether through targeted fine-tuning or within the instruction-tuning mixture,
enhances the models' arithmetic capabilities, which in turn improves their
mathematical reasoning performance.
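A programmatically generated arithmetic dataset of the kind described might look like the following minimal sketch; the instruction format and operand range are assumptions, not the paper's actual recipe:

```python
import random

def make_arithmetic_dataset(n, seed=0, max_operand=999):
    """Generate simple arithmetic examples in instruction-tuning format."""
    rng = random.Random(seed)
    ops = {'+': lambda a, b: a + b,
           '-': lambda a, b: a - b,
           '*': lambda a, b: a * b}
    data = []
    for _ in range(n):
        a, b = rng.randint(0, max_operand), rng.randint(0, max_operand)
        sym = rng.choice(list(ops))
        data.append({"instruction": f"Compute {a} {sym} {b}.",
                     "output": str(ops[sym](a, b))})
    return data

sample = make_arithmetic_dataset(3)
```

Such a set can then be used either for intermediate fine-tuning before the reasoning dataset, or mixed directly into the instruction-tuning data, as the two approaches above describe.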
|
2502.12858
|
Rejected Dialects: Biases Against African American Language in Reward
Models
|
cs.CL cs.AI cs.CY
|
Preference alignment via reward models helps build safe, helpful, and
reliable large language models (LLMs). However, subjectivity in preference
judgments and the lack of representative sampling in preference data collection
can introduce new biases, hindering reward models' fairness and equity. In this
work, we introduce a framework for evaluating dialect biases in reward models
and conduct a case study on biases against African American Language (AAL)
through several experiments comparing reward model preferences and behavior on
paired White Mainstream English (WME) and both machine-translated and
human-written AAL corpora. We show that reward models are less aligned with
human preferences when processing AAL texts vs. WME ones (-4\% accuracy on
average), frequently disprefer AAL-aligned texts vs. WME-aligned ones, and
steer conversations toward WME, even when prompted with AAL texts. Our findings
provide a targeted analysis of anti-AAL biases at a relatively understudied
stage in LLM development, highlighting representational harms and ethical
questions about the desired behavior of LLMs concerning AAL.
|
2502.12859
|
PAFT: Prompt-Agnostic Fine-Tuning
|
cs.CL cs.AI
|
While Large Language Models (LLMs) adapt well to downstream tasks after
fine-tuning, this adaptability often compromises prompt robustness, as even
minor prompt variations can significantly degrade performance. To address this,
we propose Prompt-Agnostic Fine-Tuning (PAFT), a simple yet effective approach
that dynamically adjusts prompts during fine-tuning. This encourages the model
to learn underlying task principles rather than overfitting to specific prompt
formulations. PAFT operates in two stages: First, a diverse set of meaningful,
synthetic candidate prompts is constructed. Second, during fine-tuning, prompts
are randomly sampled from this set to create dynamic training inputs. Extensive
experiments across diverse datasets and LLMs demonstrate that models trained
with PAFT exhibit strong robustness and generalization across a wide range of
prompts, including unseen ones. This enhanced robustness improves both model
performance and inference speed while maintaining training efficiency. Ablation
studies further confirm the effectiveness of PAFT.
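The second PAFT stage, random prompt sampling during fine-tuning, can be sketched as below. The candidate prompts and the sentiment task are invented stand-ins for the synthetic set that stage one would construct:

```python
import random

# Hypothetical candidate prompts for a sentiment task (stage 1 of PAFT
# would construct such a set synthetically).
CANDIDATE_PROMPTS = [
    "Classify the sentiment of the following review:\n{t}",
    "Is this text positive or negative?\n{t}",
    "Review: {t}\nSentiment:",
    "Decide whether the passage below expresses a positive or negative opinion.\n{t}",
]

def paft_training_example(text, label, rng):
    """Stage 2: pair each training sample with a randomly drawn prompt."""
    prompt = rng.choice(CANDIDATE_PROMPTS)
    return {"input": prompt.format(t=text), "target": label}

rng = random.Random(42)
batch = [paft_training_example("Great battery life.", "positive", rng)
         for _ in range(4)]
```

Because each epoch sees the same example under different prompt formulations, the model is pushed toward the task itself rather than any single prompt wording.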
|
2502.12860
|
An Experimental Study of SOTA LiDAR Segmentation Models
|
cs.CV
|
Point cloud segmentation (PCS) aims to classify each point in a point cloud.
The task enables robots to parse their 3D surroundings and operate
autonomously. Based on their point cloud representations, existing PCS models
can be roughly divided into point-, voxel-, and range image-based models.
However, no existing work comprehensively compares state-of-the-art point-,
voxel-, and range image-based models from an application perspective, which
makes it difficult to choose among these models for real-world scenarios. In
this paper, we provide thorough comparisons among the
models by considering the LiDAR data motion compensation and the metrics of
model parameters, max GPU memory allocated during testing, inference latency,
frames per second, intersection-over-union (IoU) and mean IoU (mIoU) scores.
The experimental results benefit engineers when choosing a reasonable PCS model
for an application and inspire researchers in the PCS field to design more
practical models for a real-world scenario.
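The IoU and mIoU metrics used in the comparison are standard; for reference, a per-class implementation over point-wise labels:

```python
import numpy as np

def iou_per_class(pred, target, num_classes):
    """Per-class intersection-over-union for point-wise labels."""
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        inter = np.sum((pred == c) & (target == c))
        union = np.sum((pred == c) | (target == c))
        if union > 0:
            ious[c] = inter / union      # classes absent everywhere stay NaN
    return ious

def miou(pred, target, num_classes):
    """Mean IoU over classes present in prediction or ground truth."""
    return np.nanmean(iou_per_class(pred, target, num_classes))

target = np.array([0, 0, 1, 1, 2, 2])
pred   = np.array([0, 0, 1, 2, 2, 2])
print(iou_per_class(pred, target, 3))   # per-class IoU for classes 0..2
```

The same computation applies per LiDAR scan; the paper aggregates it alongside latency, memory, and FPS measurements.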
|
2502.12861
|
InstructRobot: A Model-Free Framework for Mapping Natural Language
Instructions into Robot Motion
|
cs.RO
|
The ability to communicate with robots using natural language is a
significant step forward in human-robot interaction. However, accurately
translating verbal commands into physical actions remains challenging. Current
approaches require large datasets to train the
models and are limited to robots with a maximum of 6 degrees of freedom. To
address these issues, we propose a framework called InstructRobot that maps
natural language instructions into robot motion without requiring the
construction of large datasets or prior knowledge of the robot's kinematics
model. InstructRobot employs a reinforcement learning algorithm that enables
joint learning of language representations and the inverse kinematics model,
simplifying the entire learning process. The proposed framework is validated
using a complex robot with 26 revolute joints in object manipulation tasks,
demonstrating its robustness and adaptability in realistic environments. The
framework can be applied to any task or domain where datasets are scarce and
difficult to create, making it an intuitive and accessible solution to the
challenges of training robots using linguistic communication. Open source code
for the InstructRobot framework and experiments can be accessed at
https://github.com/icleveston/InstructRobot.
|
2502.12862
|
RobotIQ: Empowering Mobile Robots with Human-Level Planning for
Real-World Execution
|
cs.RO cs.SY eess.SY
|
This paper introduces RobotIQ, a framework that empowers mobile robots with
human-level planning capabilities, enabling seamless communication via natural
language instructions through any Large Language Model. The proposed framework
is designed in the ROS architecture and aims to bridge the gap between humans
and robots, enabling robots to comprehend and execute user-expressed text or
voice commands. Our research encompasses a wide spectrum of robotic tasks,
ranging from fundamental logical, mathematical, and learning reasoning for
transferring knowledge in domains like navigation, manipulation, and object
localization, enabling the application of learned behaviors from simulated
environments to real-world operations. Encapsulated within a modular robot
library suite of API-level control functions, RobotIQ offers a
fully functional AI-ROS-based toolset that allows researchers to design and
develop their own robotic actions tailored to specific applications and robot
configurations. The effectiveness of the proposed system was tested and
validated both in simulated and real-world experiments focusing on a home
service scenario that included an assistive application designed for elderly
people. RobotIQ with an open-source, easy-to-use, and adaptable robotic library
suite for any robot can be found at https://github.com/emmarapt/RobotIQ.
|
2502.12863
|
Malware Detection based on API calls
|
cs.CR cs.LG
|
Malware attacks pose a significant threat in today's interconnected digital
landscape, causing billions of dollars in damages. Detecting and identifying
families as early as possible provides an edge in protecting against such
malware. We explore a lightweight, order-invariant approach to detecting and
mitigating malware threats: analyzing API calls without regard to their
sequence. We publish a public dataset of over three hundred thousand samples
and their function call parameters for this task, annotated with labels
indicating benign or malicious activity. The complete dataset is above 550GB
uncompressed in size. We leverage machine learning algorithms, such as random
forests, and conduct behavioral analysis by examining patterns and anomalies in
API call sequences. By investigating which function calls occur, regardless of
their order, we can extract discriminating features that help detect malware
early on. The models we've developed are not only effective but also
efficient. They are lightweight and can run on any machine with minimal
performance overhead, while still achieving an impressive F1-Score of over
85\%. We also empirically show that we only need a subset of the function call
sequence, specifically calls to the ntdll.dll library, to identify malware. Our
research demonstrates the efficacy of this approach through empirical
evaluations, underscoring its accuracy and scalability. The code is open source
and available at Github along with the dataset on Zenodo.
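The order-invariant idea above amounts to a bag-of-API-calls representation, sketched below with illustrative ntdll.dll call names (the real vocabulary comes from the published dataset):

```python
from collections import Counter

def api_call_features(calls, vocabulary):
    """Order-invariant bag-of-API-calls feature vector.

    Counts each API name regardless of position, so two traces with the
    same calls in different orders map to the same vector.
    """
    counts = Counter(calls)
    return [counts[name] for name in vocabulary]

# Illustrative ntdll.dll call names, not the dataset's actual vocabulary.
vocab = ["NtOpenFile", "NtWriteFile", "NtCreateThreadEx",
         "NtProtectVirtualMemory"]
trace_a = ["NtOpenFile", "NtWriteFile", "NtOpenFile"]
trace_b = ["NtWriteFile", "NtOpenFile", "NtOpenFile"]  # same calls, reordered
assert api_call_features(trace_a, vocab) == api_call_features(trace_b, vocab)
```

Vectors like these are what a classifier such as a random forest would consume in the pipeline the abstract describes.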
|
2502.12874
|
Testing for Causal Fairness
|
cs.LG
|
Causality is widely used in fairness analysis to prevent discrimination on
sensitive attributes, such as genders in career recruitment and races in crime
prediction. However, the current data-based Potential Outcomes Framework (POF)
often leads to untrustworthy fairness analysis results when handling
high-dimensional data. To address this, we introduce a distribution-based POF
that transforms fairness analysis into Distributional Closeness Testing (DCT)
by
intervening on sensitive attributes. We define counterfactual closeness
fairness as the null hypothesis of DCT, where a sensitive attribute is
considered fair if its factual and counterfactual potential outcome
distributions are sufficiently close. We introduce the Norm-Adaptive Maximum
Mean Discrepancy Treatment Effect (N-TE) as a statistic for measuring
distributional closeness and apply DCT using the empirical estimator of N-TE,
referred to as Counterfactual Fairness-CLOseness Testing ($\textrm{CF-CLOT}$). To
ensure the trustworthiness of testing results, we establish the testing
consistency of N-TE through rigorous theoretical analysis. $\textrm{CF-CLOT}$
demonstrates sensitivity in fairness analysis through the flexibility of the
closeness parameter $\epsilon$. Unfair sensitive attributes have been
successfully tested by $\textrm{CF-CLOT}$ in extensive experiments across
various real-world scenarios, which validate the consistency of the testing.
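The core of such a distributional closeness statistic is a maximum mean discrepancy between factual and counterfactual outcome samples. A plain RBF-kernel MMD is sketched below; the norm-adaptive refinement that distinguishes N-TE is not shown:

```python
import numpy as np

def mmd_rbf(x, y, gamma=1.0):
    """Empirical (biased) MMD^2 between two samples with an RBF kernel."""
    def k(a, b):
        d2 = (np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :]
              - 2 * a @ b.T)
        return np.exp(-gamma * d2)
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(0)
# Same distribution -> MMD^2 near zero; shifted distribution -> large MMD^2.
same  = mmd_rbf(rng.normal(0, 1, (200, 2)), rng.normal(0, 1, (200, 2)))
shift = mmd_rbf(rng.normal(0, 1, (200, 2)), rng.normal(2, 1, (200, 2)))
print(f"MMD^2 same={same:.3f}, shifted={shift:.3f}")
```

In the CF-CLOT framing, the null hypothesis of closeness is retained when the statistic stays below a threshold governed by the closeness parameter $\epsilon$.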
|
2502.12876
|
Continuous Learning Conversational AI: A Personalized Agent Framework
via A2C Reinforcement Learning
|
cs.AI
|
Creating personalized and adaptable conversational AI remains a key
challenge. This paper introduces a Continuous Learning Conversational AI (CLCA)
approach, implemented using A2C reinforcement learning, to move beyond static
Large Language Models (LLMs). We use simulated sales dialogues, generated by
LLMs, to train an A2C agent. This agent learns to optimize conversation
strategies for personalization, focusing on engagement and delivering value.
Our system architecture integrates reinforcement learning with LLMs for both
data creation and response selection. This method offers a practical way to
build personalized AI companions that evolve through continuous learning,
advancing beyond traditional static LLM techniques.
|
2502.12877
|
Pushing the Limits of the Reactive Affine Shaker Algorithm to Higher
Dimensions
|
math.NA cs.LG cs.NA
|
Bayesian Optimization (BO) for the minimization of expensive functions of
continuous variables uses all the knowledge acquired from previous samples
(${\boldsymbol x}_i$ and $f({\boldsymbol x}_i)$ values) to build a surrogate
model based on Gaussian processes. The surrogate is then exploited to define
the next point to sample, through a careful balance of exploration and
exploitation. Initially intended for low-dimensional spaces, BO has recently
been modified and used also for very large-dimensional spaces (up to about one
thousand dimensions).
In this paper we consider a much simpler algorithm, called "Reactive Affine
Shaker" (RAS). The next sample is always generated with a uniform probability
distribution inside a parallelepiped (the "box"). At each iteration, the form
of the box is adapted during the search through an affine transformation, based
only on the point $\boldsymbol x$ position and on the success or failure in
improving the function. The function values are therefore not used directly to
modify the search area and to generate the next sample. The entire
dimensionality is kept (no active subspaces).
Despite its extreme simplicity and its reliance on stochastic local search
alone, the results it produces are surprisingly comparable to, and not too far
from, state-of-the-art results of high-dimensional versions of BO, although at
the cost of somewhat more function evaluations.
An ablation study and an analysis of probability distribution of directions
(improving steps and prevailing box orientation) in very large-dimensional
spaces are conducted to understand more about the behavior of RAS and to assess
the relative importance of the algorithmic building blocks for the final
results.
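A minimal, axis-aligned reading of the RAS loop is sketched below. The actual algorithm applies a general affine transformation to the box; the function names and the expand/shrink constants here are assumptions:

```python
import random

def ras_minimize(f, x0, box0, iters=2000, expand=2.0, shrink=0.5, seed=0):
    """Reactive Affine Shaker sketch: uniform sampling in an adaptive box.

    Function values steer nothing except success/failure: a successful
    double-shot moves the point and enlarges the box, a failure shrinks it.
    """
    rng = random.Random(seed)
    x, fx, box = list(x0), f(x0), list(box0)
    for _ in range(iters):
        d = [rng.uniform(-b, b) for b in box]
        for cand in ([xi + di for xi, di in zip(x, d)],
                     [xi - di for xi, di in zip(x, d)]):   # double shot
            fc = f(cand)
            if fc < fx:                    # success: move and expand
                x, fx = cand, fc
                box = [b * expand for b in box]
                break
        else:                              # failure: shrink the box
            box = [b * shrink for b in box]
    return x, fx

sphere = lambda v: sum(t * t for t in v)
x_best, f_best = ras_minimize(sphere, [3.0, -4.0], [1.0, 1.0])
print(x_best, f_best)
```

Note how the box adapts purely to success or failure in improving the function, exactly the property the paragraph above emphasizes.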
|
2502.12884
|
How desirable is alignment between LLMs and linguistically diverse human
users?
|
cs.CL
|
We discuss how desirable it is that Large Language Models (LLMs) be able to
adapt or align their language behavior with users who may be diverse in their
language use. User diversity may arise, among other factors, from i) age
differences, ii) gender characteristics, and/or iii) multilingual experience,
and the associated differences in language processing and use. We consider
potential consequences for usability, communication, and LLM development.
|
2502.12886
|
Are Multilingual Language Models an Off-ramp for Under-resourced
Languages? Will we arrive at Digital Language Equality in Europe in 2030?
|
cs.CL
|
Large language models (LLMs) demonstrate unprecedented capabilities and
define the state of the art for almost all natural language processing (NLP)
tasks and also for essentially all Language Technology (LT) applications. LLMs
can only be trained for languages for which a sufficient amount of pre-training
data is available, effectively excluding many languages that are typically
characterised as under-resourced. However, there is both circumstantial and
empirical evidence that multilingual LLMs, which have been trained using data
sets that cover multiple languages (including under-resourced ones), do exhibit
strong capabilities for some of these under-resourced languages. Eventually,
this approach may have the potential to be a technological off-ramp for those
under-resourced languages for which "native" LLMs, and LLM-based technologies,
cannot be developed due to a lack of training data. This paper, which
concentrates on European languages, examines this idea, analyses the current
situation in terms of technology support and summarises related work. The
article concludes by focusing on the key open questions that need to be
answered for the approach to be put into practice in a systematic way.
|
2502.12892
|
Archetypal SAE: Adaptive and Stable Dictionary Learning for Concept
Extraction in Large Vision Models
|
cs.CV
|
Sparse Autoencoders (SAEs) have emerged as a powerful framework for machine
learning interpretability, enabling the unsupervised decomposition of model
representations into a dictionary of abstract, human-interpretable concepts.
However, we reveal a fundamental limitation: existing SAEs exhibit severe
instability, as identical models trained on similar datasets can produce
sharply different dictionaries, undermining their reliability as an
interpretability tool. To address this issue, we draw inspiration from the
Archetypal Analysis framework introduced by Cutler & Breiman (1994) and present
Archetypal SAEs (A-SAE), wherein dictionary atoms are constrained to the convex
hull of data. This geometric anchoring significantly enhances the stability of
inferred dictionaries, and their mildly relaxed variants RA-SAEs further match
state-of-the-art reconstruction abilities. To rigorously assess dictionary
quality learned by SAEs, we introduce two new benchmarks that test (i)
plausibility, whether dictionaries recover "true" classification directions,
and (ii) identifiability, whether dictionaries disentangle synthetic concept
mixtures. Across
all evaluations, RA-SAEs consistently yield more structured representations
while uncovering novel, semantically meaningful concepts in large-scale vision
models.
|
2502.12893
|
H-CoT: Hijacking the Chain-of-Thought Safety Reasoning Mechanism to
Jailbreak Large Reasoning Models, Including OpenAI o1/o3, DeepSeek-R1, and
Gemini 2.0 Flash Thinking
|
cs.CL
|
Large Reasoning Models (LRMs) have recently extended their powerful reasoning
capabilities to safety checks-using chain-of-thought reasoning to decide
whether a request should be answered. While this new approach offers a
promising route for balancing model utility and safety, its robustness remains
underexplored. To address this gap, we introduce Malicious-Educator, a
benchmark that disguises extremely dangerous or malicious requests beneath
seemingly legitimate educational prompts. Our experiments reveal severe
security flaws in popular commercial-grade LRMs, including OpenAI o1/o3,
DeepSeek-R1, and Gemini 2.0 Flash Thinking. For instance, although OpenAI's o1
model initially maintains a high refusal rate of about 98%, subsequent model
updates significantly compromise its safety; and attackers can easily extract
criminal strategies from DeepSeek-R1 and Gemini 2.0 Flash Thinking without any
additional tricks. To further highlight these vulnerabilities, we propose
Hijacking Chain-of-Thought (H-CoT), a universal and transferable attack method
that leverages the model's own displayed intermediate reasoning to jailbreak
its safety reasoning mechanism. Under H-CoT, refusal rates sharply
decline-dropping from 98% to below 2%-and, in some instances, even transform
initially cautious tones into ones that are willing to provide harmful content.
We hope these findings underscore the urgent need for more robust safety
mechanisms to preserve the benefits of advanced reasoning capabilities without
compromising ethical standards.
|
2502.12894
|
CAST: Component-Aligned 3D Scene Reconstruction from an RGB Image
|
cs.CV
|
Recovering high-quality 3D scenes from a single RGB image is a challenging
task in computer graphics. Current methods often struggle with domain-specific
limitations or low-quality object generation. To address these, we propose CAST
(Component-Aligned 3D Scene Reconstruction from a Single RGB Image), a novel
method for 3D scene reconstruction and recovery. CAST starts by extracting
object-level 2D segmentation and relative depth information from the input
image, followed by using a GPT-based model to analyze inter-object spatial
relationships. This enables the understanding of how objects relate to each
other within the scene, ensuring more coherent reconstruction. CAST then
employs an occlusion-aware large-scale 3D generation model to independently
generate each object's full geometry, using MAE and point cloud conditioning to
mitigate the effects of occlusions and partial object information, ensuring
accurate alignment with the source image's geometry and texture. To align each
object with the scene, the alignment generation model computes the necessary
transformations, allowing the generated meshes to be accurately placed and
integrated into the scene's point cloud. Finally, CAST incorporates a
physics-aware correction step that leverages a fine-grained relation graph to
generate a constraint graph. This graph guides the optimization of object
poses, ensuring physical consistency and spatial coherence. By utilizing Signed
Distance Fields (SDF), the model effectively addresses issues such as
occlusions, object penetration, and floating objects, ensuring that the
generated scene accurately reflects real-world physical interactions. CAST can
be leveraged in robotics, enabling efficient real-to-simulation workflows and
providing realistic, scalable simulation environments for robotic systems.
|
2502.12895
|
Multilingual European Language Models: Benchmarking Approaches and
Challenges
|
cs.CL
|
The breakthrough of generative large language models (LLMs) that can solve
different tasks through chat interaction has led to a significant increase in
the use of general benchmarks to assess the quality or performance of these
models beyond individual applications. There is also a need for better methods
to evaluate and compare models, given the ever-increasing number of new
models published. However, most of the established benchmarks revolve around
the English language. This paper analyses the benefits and limitations of
current evaluation datasets, focusing on multilingual European benchmarks. We
analyse seven multilingual benchmarks and identify four major challenges.
Furthermore, we discuss potential solutions to enhance translation quality and
mitigate cultural biases, including human-in-the-loop verification and
iterative translation ranking. Our analysis highlights the need for culturally
aware and rigorously validated benchmarks to assess the reasoning and
question-answering capabilities of multilingual LLMs accurately.
|
2502.12896
|
None of the Others: a General Technique to Distinguish Reasoning from
Memorization in Multiple-Choice LLM Evaluation Benchmarks
|
cs.CL
|
In LLM evaluations, reasoning is often distinguished from recall/memorization
by performing numerical variations to math-oriented questions. Here we
introduce a general variation method for multiple-choice questions that
completely dissociates the correct answer from previously seen tokens or
concepts, requiring LLMs to understand and reason (rather than memorize) in
order to answer correctly. Using this method, we evaluate state-of-the-art
proprietary and open-source LLMs on two datasets available in English and
Spanish: the public MMLU benchmark and the private UNED-Access 2024 dataset.
Results show that all models experience remarkable accuracy drops under our
proposed variation, with an average loss of 57% on MMLU and 50% on UNED-Access
2024, ranging from 10% to 93% across models. Notably, the most accurate model
in our experiments (OpenAI-o3-mini) is not the most robust
(DeepSeek-R1-70B), suggesting that the best models in standard evaluations may
not be the ones with better reasoning capabilities. Also, we see larger
accuracy drops in public (vs private) datasets and questions posed in their
original language (vs a manual translation), which are signs of contamination
and also point to a relevant role of recall/memorization in current LLMs'
answers.
|
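The abstract above describes dissociating the correct answer from previously seen tokens in multiple-choice questions. A minimal sketch of that idea, assuming the variation works by replacing the correct option with a "None of the other answers" choice and reshuffling (the function name and exact option text are hypothetical, not the paper's implementation):

```python
import random

def none_of_the_others(question, options, correct_idx, seed=0):
    """Sketch of a multiple-choice variation: swap the correct option
    for a 'None of the other answers' choice so the right answer shares
    no tokens with the original, then shuffle option positions.
    Hypothetical helper, not the paper's exact method."""
    rng = random.Random(seed)
    varied = list(options)
    varied[correct_idx] = "None of the other answers"
    # Shuffle so the position of the substituted option carries no signal.
    order = list(range(len(varied)))
    rng.shuffle(order)
    shuffled = [varied[i] for i in order]
    new_correct = order.index(correct_idx)
    return question, shuffled, new_correct
```

Under this sketch, a model that merely recalls the original correct option's text can no longer match it, while a model that reasons about the remaining options can still answer.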
2502.12897
|
On Zero Skip-Cost Generalized Fractional-Repetition Codes from Covering
Designs
|
cs.IT math.CO math.IT
|
We study generalized fractional repetition codes that have zero skip cost,
and which are based on covering designs. We show that a zero skip cost is
always attainable, perhaps at a price of an expansion factor compared with the
optimal size of fractional repetition codes based on Steiner systems. We
provide three constructions, and show, non-constructively, that no
expansion is needed for all codes based on sufficiently large covering systems.
|
2502.12898
|
The Relationship Between Head Injury and Alzheimer's Disease: A Causal
Analysis with Bayesian Networks
|
cs.LG
|
This study examines the potential causal relationship between head injury and
the risk of developing Alzheimer's disease (AD) using Bayesian networks and
regression models. Using a dataset of 2,149 patients, we analyze key medical
history variables, including head injury history, memory complaints,
cardiovascular disease, and diabetes. Logistic regression results suggest an
odds ratio of 0.88 for head injury, indicating a potential but statistically
insignificant protective effect against AD. In contrast, memory complaints
exhibit a strong association with AD, with an odds ratio of 4.59. Linear
regression analysis further confirms the lack of statistical significance for
head injury (coefficient: -0.0245, p = 0.469) while reinforcing the predictive
importance of memory complaints. These findings highlight the complex interplay
of medical history factors in AD risk assessment and underscore the need for
further research utilizing larger datasets and advanced causal modeling
techniques.
|
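The odds ratios quoted in the abstract above (0.88 for head injury, 4.59 for memory complaints) are related to logistic-regression coefficients by exponentiation: OR = exp(beta). A small sketch of that conversion, using only the figures reported in the abstract:

```python
import math

def odds_ratio(beta):
    """Odds ratio implied by a logistic-regression coefficient."""
    return math.exp(beta)

def coefficient(or_value):
    """Inverse: log-odds coefficient implied by an odds ratio."""
    return math.log(or_value)

# Coefficients implied by the abstract's reported odds ratios:
print(round(coefficient(0.88), 3))  # head injury, OR 0.88 -> ~ -0.128
print(round(coefficient(4.59), 3))  # memory complaints, OR 4.59 -> ~ 1.524
```

The sign of the implied coefficient matches the abstract's reading: a coefficient near zero (OR close to 1) for head injury, versus a strongly positive one for memory complaints.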
2502.12900
|
Soundwave: Less is More for Speech-Text Alignment in LLMs
|
cs.CL cs.AI cs.SD
|
Existing end-to-end speech large language models (LLMs) usually rely on
large-scale annotated data for training, while data-efficient training has not
been discussed in depth. We focus on two fundamental problems between speech
and text: the representation space gap and sequence length inconsistency. We
propose Soundwave, which utilizes an efficient training strategy and a novel
architecture to address these issues. Results show that Soundwave outperforms
the advanced Qwen2-Audio in speech translation and AIR-Bench speech tasks,
using only one-fiftieth of the training data. Further analysis shows that
Soundwave still retains its intelligence during conversation. The project is
available at https://github.com/FreedomIntelligence/Soundwave.
|