| id | title | categories | abstract |
|---|---|---|---|
2502.14372 | Discovering highly efficient low-weight quantum error-correcting codes
with reinforcement learning | quant-ph cs.AI cs.IT cs.LG math.IT | The realization of scalable fault-tolerant quantum computing is expected to
hinge on quantum error-correcting codes. In the quest for more efficient
quantum fault tolerance, a critical code parameter is the weight of
measurements that extract information about errors to enable error correction:
as higher measurement ... |
2502.14373 | CrossVTON: Mimicking the Logic Reasoning on Cross-category Virtual
Try-on guided by Tri-zone Priors | cs.CV | Despite remarkable progress in image-based virtual try-on systems, generating
realistic and robust fitting images for cross-category virtual try-on remains a
challenging task. The primary difficulty arises from the absence of human-like
reasoning, which involves addressing size mismatches between garments and
models ... |
2502.14375 | VFL-RPS: Relevant Participant Selection in Vertical Federated Learning | cs.LG | Federated Learning (FL) allows collaboration between different parties, while
ensuring that the data across these parties is not shared. However, not every
collaboration is helpful in terms of the resulting model performance.
Therefore, it is an important challenge to select the correct participants in a
collaboratio... |
2502.14376 | A Similarity Paradigm Through Textual Regularization Without Forgetting | cs.CL cs.CV | Prompt learning has emerged as a promising method for adapting pre-trained
visual-language models (VLMs) to a range of downstream tasks. While optimizing
the context can be effective for improving performance on specific tasks, it
can often lead to poor generalization performance on unseen classes or datasets
sampled... |
2502.14377 | RelaCtrl: Relevance-Guided Efficient Control for Diffusion Transformers | cs.CV | The Diffusion Transformer plays a pivotal role in advancing text-to-image and
text-to-video generation, owing primarily to its inherent scalability. However,
existing controlled diffusion transformer methods incur significant parameter
and computational overheads and suffer from inefficient resource allocation due
to... |
2502.14378 | Extremal Self-Dual Codes and Linear Complementary Dual Codes from Double
Circulant Codes | cs.IT math.IT | This paper explores extremal self-dual double circulant (DC) codes and linear
complementary dual (LCD) codes of arbitrary length over the Galois field
$\mathbb F_2$. We establish necessary and sufficient conditions for DC
codes and bordered DC codes to be self-dual and identify the conditions for
self-dual DC cod... |
2502.14379 | Achieving adaptivity and optimality for multi-armed bandits using
Exponential-Kullback Leibler Maillard Sampling | cs.LG cs.DS | We study the problem of Multi-Armed Bandits (MAB) with reward distributions
belonging to a One-Parameter Exponential Distribution (OPED) family. In the
literature, several criteria have been proposed to evaluate the performance of
such algorithms, including Asymptotic Optimality (A.O.), Minimax Optimality
(M.O.), Sub... |
2502.14380 | Affinity and Diversity: A Unified Metric for Demonstration Selection via
Internal Representations | cs.CL cs.AI cs.LG | The performance of In-Context Learning (ICL) is highly sensitive to the
selected demonstrations. Existing approaches to demonstration selection
optimize different objectives, yielding inconsistent results. To address this,
we propose a unified metric--affinity and diversity--that leverages ICL model's
internal repres... |
2502.14381 | dtaianomaly: A Python library for time series anomaly detection | cs.LG cs.DB | dtaianomaly is an open-source Python library for time series anomaly
detection, designed to bridge the gap between academic research and real-world
applications. Our goal is to (1) accelerate the development of novel
state-of-the-art anomaly detection techniques through simple extensibility; (2)
offer functionality f... |
2502.14382 | S*: Test Time Scaling for Code Generation | cs.LG cs.AI | Increasing test-time compute for LLMs shows promise across domains but
remains underexplored in code generation, despite extensive study in math. In
this paper, we propose S*, the first hybrid test-time scaling framework that
substantially improves the coverage and selection accuracy of generated code.
S* extends the... |
2502.14383 | Rumor Detection by Multi-task Suffix Learning based on Time-series Dual
Sentiments | cs.CL | The widespread dissemination of rumors on social media has a significant
impact on people's lives, potentially leading to public panic and fear. Rumors
often evoke specific sentiments, resonating with readers and prompting sharing.
To effectively detect and track rumors, it is essential to observe the
fine-grained se... |
2502.14385 | Tradutor: Building a Variety Specific Translation Model | cs.CL | Language models have become foundational to many widely used systems.
However, these seemingly advantageous models are double-edged swords. While
they excel in tasks related to resource-rich languages like English, they often
lose the fine nuances of language forms, dialects, and varieties that are
inherent to langua... |
2502.14387 | MPPI-DBaS: Safe Trajectory Optimization with Adaptive Exploration | eess.SY cs.SY | In trajectory optimization, Model Predictive Path Integral (MPPI) control is
a sampling-based Model Predictive Control (MPC) framework that generates
optimal inputs by efficiently simulating numerous trajectories. In practice,
however, MPPI often struggles to guarantee safety assurance and balance
efficient sampling ... |
2502.14389 | Leveraging Small LLMs for Argument Mining in Education: Argument
Component Identification, Classification, and Assessment | cs.CL cs.HC | Argument mining algorithms analyze the argumentative structure of essays,
making them a valuable tool for enhancing education by providing targeted
feedback on the students' argumentation skills. While current methods often use
encoder or encoder-decoder deep learning architectures, decoder-only models
remain largely... |
2502.14394 | Enhancing Portuguese Variety Identification with Cross-Domain Approaches | cs.CL | Recent advances in natural language processing have raised expectations for
generative models to produce coherent text across diverse language varieties.
In the particular case of the Portuguese language, the predominance of
Brazilian Portuguese corpora online introduces linguistic biases in these
models, limiting th... |
2502.14397 | PhotoDoodle: Learning Artistic Image Editing from Few-Shot Pairwise Data | cs.CV | We introduce PhotoDoodle, a novel image editing framework designed to
facilitate photo doodling by enabling artists to overlay decorative elements
onto photographs. Photo doodling is challenging because the inserted elements
must appear seamlessly integrated with the background, requiring realistic
blending, perspect... |
2502.14400 | HPS: Hard Preference Sampling for Human Preference Alignment | cs.AI | Aligning Large Language Model (LLM) responses with human preferences is vital
for building safe and controllable AI systems. While preference optimization
methods based on Plackett-Luce (PL) and Bradley-Terry (BT) models have shown
promise, they face challenges such as poor handling of harmful content,
inefficient us... |
2502.14401 | MedFuncta: Modality-Agnostic Representations Based on Efficient Neural
Fields | eess.IV cs.CV | Recent research in medical image analysis with deep learning almost
exclusively focuses on grid- or voxel-based data representations. We challenge
this common choice by introducing MedFuncta, a modality-agnostic continuous
data representation based on neural fields. We demonstrate how to scale neural
fields from sing... |
2502.14403 | A Macro- and Micro-Hierarchical Transfer Learning Framework for
Cross-Domain Fake News Detection | cs.SI cs.CL cs.LG | Cross-domain fake news detection aims to mitigate domain shift and improve
detection performance by transferring knowledge across domains. Existing
approaches transfer knowledge based on news content and user engagements from a
source domain to a target domain. However, these approaches face two main
limitations, hin... |
2502.14409 | Unstructured Evidence Attribution for Long Context Query Focused
Summarization | cs.CL cs.IR | Large language models (LLMs) are capable of generating coherent summaries
from very long contexts given a user query. Extracting and properly citing
evidence spans could help improve the transparency and reliability of these
summaries. At the same time, LLMs suffer from positional biases in terms of
which information... |
2502.14412 | Evaluating Precise Geolocation Inference Capabilities of Vision Language
Models | cs.CV cs.CR cs.LG | The prevalence of Vision-Language Models (VLMs) raises important questions
about privacy in an era where visual information is increasingly available.
While foundation VLMs demonstrate broad knowledge and learned capabilities, we
specifically investigate their ability to infer geographic location from
previously unse... |
2502.14413 | Towards Efficient Automatic Self-Pruning of Large Language Models | cs.LG | Despite exceptional capabilities, Large Language Models (LLMs) still face
deployment challenges due to their enormous size. Post-training structured
pruning is a promising solution that prunes LLMs without the need for
retraining, reducing computational overhead, and is hardware-deployment
friendly. However, the t... |
2502.14416 | Reliable Explainability of Deep Learning Spatial-Spectral Classifiers
for Improved Semantic Segmentation in Autonomous Driving | eess.IV cs.AI cs.LG | Integrating hyperspectral imagery (HSI) with deep neural networks (DNNs) can
strengthen the accuracy of intelligent vision systems by combining spectral and
spatial information, which is useful for tasks like semantic segmentation in
autonomous driving. To advance research in such safety-critical systems,
determining... |
2502.14418 | Role of the Pretraining and the Adaptation data sizes for low-resource
real-time MRI video segmentation | eess.AS cs.CV eess.SP | Real-time Magnetic Resonance Imaging (rtMRI) is frequently used in speech
production studies as it provides a complete view of the vocal tract during
articulation. This study investigates the effectiveness of rtMRI in analyzing
vocal tract movements by employing the SegNet and UNet models for Air-Tissue
Boundary (ATB... |
2502.14420 | ChatVLA: Unified Multimodal Understanding and Robot Control with
Vision-Language-Action Model | cs.RO cs.CV cs.LG | Humans possess a unified cognitive ability to perceive, comprehend, and
interact with the physical world. Why can't large language models replicate
this holistic understanding? Through a systematic analysis of existing training
paradigms in vision-language-action models (VLA), we identify two key
challenges: spurious... |
2502.14422 | Towards Routing and Edge Computing in Satellite-Terrestrial Networks: A
Column Generation Approach | eess.SY cs.SY | Edge computing that enables satellites to process raw data locally is
expected to bring further timeliness and flexibility to satellite-terrestrial
networks (STNs). In this letter, we propose a three-layer edge
computing protocol, where raw data collected by satellites can be processed
locally, or tra... |
2502.14424 | Distribution Matching for Self-Supervised Transfer Learning | stat.ML cs.AI cs.LG stat.ME | In this paper, we propose a novel self-supervised transfer learning method
called Distribution Matching (DM), which drives the representation distribution
toward a predefined reference distribution while preserving augmentation
invariance. The design of DM results in a learned representation space that is
intuitively... |
2502.14425 | A Survey on Data Contamination for Large Language Models | cs.CL | Recent advancements in Large Language Models (LLMs) have demonstrated
significant progress in various areas, such as text generation and code
synthesis. However, the reliability of performance evaluation has come under
scrutiny due to data contamination, the unintended overlap between training and
test datasets. This ... |
2502.14427 | Token-Level Density-Based Uncertainty Quantification Methods for
Eliciting Truthfulness of Large Language Models | cs.CL | Uncertainty quantification (UQ) is a prominent approach for eliciting
truthful answers from large language models (LLMs). To date, information-based
and consistency-based UQ have been the dominant UQ methods for text generation
via LLMs. Density-based methods, despite being very effective for UQ in text
classificatio... |
2502.14429 | Early-Exit and Instant Confidence Translation Quality Estimation | cs.CL | Quality estimation is omnipresent in machine translation, for both evaluation
and generation. Unfortunately, quality estimation models are often opaque and
computationally expensive, making them impractical to be part of large-scale
pipelines. In this work, we tackle two connected challenges: (1) reducing the
cost of... |
2502.14430 | Cardiac Evidence Backtracking for Eating Behavior Monitoring using
Collocative Electrocardiogram Imagining | cs.LG cs.CE | Eating monitoring has remained an open challenge in medical research for
years due to the lack of non-invasive sensors for continuous monitoring and of
reliable methods for automatic behavior detection. In this paper, we present a
pilot study using the wearable 24-hour ECG for sensing and tailoring the
sophisticated... |
2502.14432 | Port-Hamiltonian Neural Networks with Output Error Noise Models | cs.LG | Hamiltonian neural networks (HNNs) represent a promising class of
physics-informed deep learning methods that utilize Hamiltonian theory as
foundational knowledge within neural networks. However, their direct
application to engineering systems is often challenged by practical issues,
including the presence of externa... |
2502.14433 | Daily Land Surface Temperature Reconstruction in Landsat Cross-Track
Areas Using Deep Ensemble Learning With Uncertainty Quantification | cs.CV | Many real-world applications rely on land surface temperature (LST) data at
high spatiotemporal resolution. In complex urban areas, LST exhibits
significant variations, fluctuating dramatically within and across city blocks.
Landsat provides high spatial resolution data at 100 meters but is limited by
long revisit ti... |
2502.14437 | Natural Language Generation | cs.CL | This book provides a broad overview of Natural Language Generation (NLG),
including technology, user requirements, evaluation, and real-world
applications. The focus is on concepts and insights which hopefully will remain
relevant for many years, not on the latest LLM innovations. It draws on decades
of work by the a... |
2502.14442 | Stochastic Resonance Improves the Detection of Low Contrast Images in
Deep Learning Models | cs.CV cs.AI | Stochastic resonance describes the utility of noise in improving the
detectability of weak signals in certain types of systems. It has been observed
widely in natural and engineered settings, but its utility in image
classification with rate-based neural networks has not been studied
extensively. In this analysis a s... |
2502.14444 | An Enhancement of Jiang, Z., et al.'s Compression-Based Classification
Algorithm Applied to News Article Categorization | cs.CL | This study enhances Jiang et al.'s compression-based classification algorithm
by addressing its limitations in detecting semantic similarities between text
documents. The proposed improvements focus on unigram extraction and optimized
concatenation, eliminating reliance on entire document compression. By
compressing ... |
2502.14445 | PredictaBoard: Benchmarking LLM Score Predictability | cs.CL cs.AI stat.ML | Despite possessing impressive skills, Large Language Models (LLMs) often fail
unpredictably, demonstrating inconsistent success in even basic common sense
reasoning tasks. This unpredictability poses a significant challenge to
ensuring their safe deployment, as identifying and operating within a reliable
"safe zone" ... |
2502.14451 | Optimal word order for non-causal text generation with Large Language
Models: the Spanish case | cs.CL | The popularity of Natural Language Generation (NLG) has increased owing
to progress in Large Language Models (LLMs) with zero-shot inference
capabilities. However, most neural systems utilize decoder-only causal
(unidirectional) transformer models, which are effective for English but may
reduce the richness of language... |
2502.14454 | Exploiting Deblurring Networks for Radiance Fields | cs.CV | In this paper, we propose DeepDeblurRF, a novel radiance field deblurring
approach that can synthesize high-quality novel views from blurred training
views with significantly reduced training time. DeepDeblurRF leverages deep
neural network (DNN)-based deblurring modules to enjoy their deblurring
performance and comp... |
2502.14455 | An Efficient Ground-aerial Transportation System for Pest Control
Enabled by AI-based Autonomous Nano-UAVs | cs.RO cs.AI | Efficient crop production requires early detection of pest outbreaks and
timely treatments; we consider a solution based on a fleet of multiple
autonomous miniaturized unmanned aerial vehicles (nano-UAVs) to visually detect
pests and a single slower heavy vehicle that visits the detected outbreaks to
deliver treatmen... |
2502.14456 | Narrative-Driven Travel Planning: Geoculturally-Grounded Script
Generation with Evolutionary Itinerary Optimization | cs.AI | To enhance tourists' experiences and immersion, this paper proposes a
narrative-driven travel planning framework called NarrativeGuide, which
generates a geoculturally-grounded narrative script for travelers, offering a
novel, role-playing experience for their journey. In the initial stage,
NarrativeGuide constructs ... |
2502.14457 | Watch Less, Feel More: Sim-to-Real RL for Generalizable Articulated
Object Manipulation via Motion Adaptation and Impedance Control | cs.RO cs.AI cs.LG | Articulated object manipulation poses a unique challenge compared to rigid
object manipulation as the object itself represents a dynamic environment. In
this work, we present a novel RL-based pipeline equipped with variable
impedance control and motion adaptation leveraging observation history for
generalizable artic... |
2502.14458 | Llamba: Scaling Distilled Recurrent Models for Efficient Language
Processing | cs.LG cs.AI | We introduce Llamba, a family of efficient recurrent language models
distilled from Llama-3.x into the Mamba architecture. The series includes
Llamba-1B, Llamba-3B, and Llamba-8B, which achieve higher inference throughput
and handle significantly larger batch sizes than Transformer-based models while
maintaining comp... |
2502.14462 | Single-image Reflectance and Transmittance Estimation from Any Flatbed
Scanner | cs.GR cs.AI cs.CV cs.LG | Flatbed scanners have emerged as promising devices for high-resolution,
single-image material capture. However, existing approaches assume very
specific conditions, such as uniform diffuse illumination, which are only
available in certain high-end devices, hindering their scalability and cost. In
contrast, in this wo... |
2502.14467 | Provable Quantum Algorithm Advantage for Gaussian Process Quadrature | stat.CO cs.LG quant-ph | The aim of this paper is to develop novel quantum algorithms for Gaussian
process quadrature methods. Gaussian process quadratures are numerical
integration methods where Gaussian processes are used as functional priors for
the integrands to capture the uncertainty arising from the sparse function
evaluations. Quantu... |
2502.14469 | Enhancing Smart Environments with Context-Aware Chatbots using Large
Language Models | cs.CL cs.AI cs.SI | This work presents a novel architecture for context-aware interactions within
smart environments, leveraging Large Language Models (LLMs) to enhance user
experiences. Our system integrates user location data obtained through UWB tags
and sensor-equipped smart homes with real-time human activity recognition (HAR)
to p... |
2502.14471 | Integrating Extra Modality Helps Segmentor Find Camouflaged Objects Well | cs.CV | Camouflaged Object Segmentation (COS) remains a challenging problem due to
the subtle visual differences between camouflaged objects and backgrounds.
Owing to the exceedingly limited visual cues available from the visible spectrum,
previous RGB single-modality approaches often struggle to achieve satisfactory
results, pr... |
2502.14476 | Argument-Based Comparative Question Answering Evaluation Benchmark | cs.CL | In this paper, we aim to solve the problems standing in the way of automatic
comparative question answering. To this end, we propose an evaluation framework
to assess the quality of comparative question answering summaries. We formulate
15 criteria for assessing comparative answers created using manual annotation
and... |
2502.14477 | Unshackling Context Length: An Efficient Selective Attention Approach
through Query-Key Compression | cs.CL | Handling long-context sequences efficiently remains a significant challenge
in large language models (LLMs). Existing methods for token selection in
sequence extrapolation either employ a permanent eviction strategy or select
tokens by chunk, which may lead to the loss of critical information. We propose
Efficient Se... |
2502.14482 | NLoRA: Nystr\"om-Initiated Low-Rank Adaptation for Large Language Models | cs.CL | Parameter-efficient fine-tuning (PEFT) is essential for adapting large
language models (LLMs), with low-rank adaptation (LoRA) being the most popular
approach. However, LoRA suffers from slow convergence, and some recent LoRA
variants, such as PiSSA, primarily rely on Singular Value Decomposition (SVD)
for initializa... |
2502.14486 | How Jailbreak Defenses Work and Ensemble? A Mechanistic Investigation | cs.CR cs.AI cs.CL | Jailbreak attacks, where harmful prompts bypass generative models' built-in
safety, raise serious concerns about model vulnerability. While many defense
methods have been proposed, the trade-offs between safety and helpfulness, and
their application to Large Vision-Language Models (LVLMs), are not well
understood. Th... |
2502.14487 | Temporal Misalignment and Probabilistic Neurons | cs.LG cs.AI cs.CV | Spiking Neural Networks (SNNs) offer a more energy-efficient alternative to
Artificial Neural Networks (ANNs) by mimicking biological neural principles,
establishing them as a promising approach to mitigate the increasing energy
demands of large-scale neural models. However, fully harnessing the
capabilities of SNNs ... |
2502.14491 | Statistical Scenario Modelling and Lookalike Distributions for
Multi-Variate AI Risk | cs.AI | Evaluating AI safety requires statistically rigorous methods and risk metrics
for understanding how the use of AI affects aggregated risk. However, much AI
safety literature focuses upon risks arising from AI models in isolation,
lacking consideration of how modular use of AI affects risk distribution of
workflow com... |
2502.14493 | CrossFuse: Learning Infrared and Visible Image Fusion by Cross-Sensor
Top-K Vision Alignment and Beyond | cs.CV cs.LG | Infrared and visible image fusion (IVIF) is increasingly applied in critical
fields such as video surveillance and autonomous driving systems. Significant
progress has been made in deep learning-based fusion methods. However, these
models frequently encounter out-of-distribution (OOD) scenes in real-world
application... |
2502.14494 | StructFlowBench: A Structured Flow Benchmark for Multi-turn Instruction
Following | cs.CL | Multi-turn instruction following capability constitutes a core competency of
large language models (LLMs) in real-world applications. Existing evaluation
benchmarks predominantly focus on fine-grained constraint satisfaction and
domain-specific capability assessment, yet overlook the crucial structural
dependency bet... |
2502.14495 | Nearshore Underwater Target Detection Meets UAV-borne Hyperspectral
Remote Sensing: A Novel Hybrid-level Contrastive Learning Framework and
Benchmark Dataset | cs.CV | UAV-borne hyperspectral remote sensing has emerged as a promising approach
for underwater target detection (UTD). However, its effectiveness is hindered
by spectral distortions in nearshore environments, which compromise the
accuracy of traditional hyperspectral UTD (HUTD) methods that rely on
bathymetric models. Thes... |
2502.14496 | Enhancing Language Multi-Agent Learning with Multi-Agent Credit
Re-Assignment for Interactive Environment Generalization | cs.CL | LLM-based agents have made significant advancements in interactive
environments, such as mobile operations and web browsing, and other domains
beyond computer use. Current multi-agent systems universally excel in
performance, compared to single agents, but struggle with generalization across
environments due to pre... |
2502.14497 | Stories that (are) Move(d by) Markets: A Causal Exploration of Market
Shocks and Semantic Shifts across Different Partisan Groups | cs.CL cs.CE econ.GN q-fin.EC | Macroeconomic fluctuations and the narratives that shape them form a mutually
reinforcing cycle: public discourse can spur behavioural changes leading to
economic shifts, which then result in changes in the stories that propagate. We
show that shifts in semantic embedding space can be causally linked to
financial mar... |
2502.14499 | MLGym: A New Framework and Benchmark for Advancing AI Research Agents | cs.CL cs.AI cs.LG | We introduce Meta MLGym and MLGym-Bench, a new framework and benchmark for
evaluating and developing LLM agents on AI research tasks. This is the first
Gym environment for machine learning (ML) tasks, enabling research on
reinforcement learning (RL) algorithms for training such agents. MLGym-Bench
consists of 13 dive... |
2502.14501 | Towards a Perspectivist Turn in Argument Quality Assessment | cs.CL | The assessment of argument quality depends on well-established logical,
rhetorical, and dialectical properties that are unavoidably subjective:
multiple valid assessments may exist, and there is no unequivocal ground truth.
This aligns with recent paths in machine learning, which embrace the
co-existence of different per... |
2502.14502 | How Much Knowledge Can You Pack into a LoRA Adapter without Harming LLM? | cs.CL | The performance of Large Language Models (LLMs) on many tasks is greatly
limited by the knowledge learned during pre-training and stored in the model's
parameters. Low-rank adaptation (LoRA) is a popular and efficient training
technique for updating or domain-specific adaptation of LLMs. In this study, we
investigate... |
2502.14503 | LXLv2: Enhanced LiDAR Excluded Lean 3D Object Detection with Fusion of
4D Radar and Camera | cs.CV | As the previous state-of-the-art 4D radar-camera fusion-based 3D object
detection method, LXL utilizes the predicted image depth distribution maps and
radar 3D occupancy grids to assist the sampling-based image view
transformation. However, the depth prediction lacks accuracy and consistency,
and the concatenation-ba... |
2502.14504 | PLPHP: Per-Layer Per-Head Vision Token Pruning for Efficient Large
Vision-Language Models | cs.CV cs.AI | Large Vision-Language Models (LVLMs) have demonstrated remarkable
capabilities across a range of multimodal tasks. However, their inference
efficiency is constrained by the large number of visual tokens processed during
decoding. To address this challenge, we propose Per-Layer Per-Head Vision Token
Pruning (PLPHP), a... |
2502.14507 | Can LLMs Simulate L2-English Dialogue? An Information-Theoretic Analysis
of L1-Dependent Biases | cs.CL | This study evaluates Large Language Models' (LLMs) ability to simulate
non-native-like English use observed in human second language (L2) learners
interfered with by their native first language (L1). In dialogue-based
interviews, we prompt LLMs to mimic L2 English learners with specific L1s
(e.g., Japanese, Thai, Urd... |
2502.14509 | MultiSlav: Using Cross-Lingual Knowledge Transfer to Combat the Curse of
Multilinguality | cs.CL | Does multilingual Neural Machine Translation (NMT) lead to the Curse of
Multilinguality, or does it provide Cross-lingual Knowledge Transfer within a
language family? In this study, we explore multiple approaches for extending
the available data-regime in NMT and we prove cross-lingual benefits even in
0-shot translat... |
2502.14514 | A Mobile Robotic Approach to Autonomous Surface Scanning in Legal
Medicine | cs.RO cs.CV cs.SY eess.SY | Purpose: Comprehensive legal medicine documentation includes both an internal
but also an external examination of the corpse. Typically, this documentation
is conducted manually during conventional autopsy. A systematic digital
documentation would be desirable, especially for the external examination of
wounds, which... |
2502.14520 | Learning Temporal 3D Semantic Scene Completion via Optical Flow Guidance | cs.CV | 3D Semantic Scene Completion (SSC) provides comprehensive scene geometry and
semantics for autonomous driving perception, which is crucial for enabling
accurate and reliable decision-making. However, existing SSC methods are
limited to capturing sparse information from the current frame or naively
stacking multi-fram... |
2502.14522 | Investigating the Generalizability of ECG Noise Detection Across Diverse
Data Sources and Noise Types | cs.LG | Electrocardiograms (ECGs) are essential for monitoring cardiac health,
allowing clinicians to analyze heart rate variability (HRV), detect abnormal
rhythms, and diagnose cardiovascular diseases. However, ECG signals, especially
those from wearable devices, are often affected by noise artifacts caused by
motion, muscl... |
2502.14523 | Generative adversarial networks vs large language models: a comparative
study on synthetic tabular data generation | cs.LG cs.CL | We propose a new framework for zero-shot generation of synthetic tabular
data. Using the large language model (LLM) GPT-4o and plain-language prompting,
we demonstrate the ability to generate high-fidelity tabular data without
task-specific fine-tuning or access to real-world data (RWD) for pre-training.
To benchmark... |
2502.14525 | Small Graph Is All You Need: DeepStateGNN for Scalable Traffic
Forecasting | cs.LG cs.AI | We propose a novel Graph Neural Network (GNN) model, named DeepStateGNN, for
analyzing traffic data, demonstrating its efficacy in two critical tasks:
forecasting and reconstruction. Unlike typical GNN methods that treat each
traffic sensor as an individual graph node, DeepStateGNN clusters sensors into
higher-level ... |
2502.14527 | Inter-turbine Modelling of Wind-Farm Power using Multi-task Learning | cs.LG | Because of the global need to increase power production from renewable energy
resources, developments in the online monitoring of the associated
infrastructure are of interest to reduce operation and maintenance costs.
However, challenges exist for data-driven approaches to this problem, such as
incomplete or limited ... |
2502.14529 | CORBA: Contagious Recursive Blocking Attacks on Multi-Agent Systems
Based on Large Language Models | cs.CL cs.AI | Large Language Model-based Multi-Agent Systems (LLM-MASs) have demonstrated
remarkable real-world capabilities, effectively collaborating to complete
complex tasks. While these systems are designed with safety mechanisms, such as
rejecting harmful instructions through alignment, their security remains
largely unexplo... |
2502.14536 | Preordering: A hybrid of correlation clustering and partial ordering | cs.LG | We discuss the preordering problem, a joint relaxation of the correlation
clustering problem and the partial ordering problem. We show that preordering
remains NP-hard even for values in $\{-1,0,1\}$. We introduce a linear-time
$4$-approximation algorithm and a local search technique. For an integer linear
program fo... |
2502.14538 | LoRA-GGPO: Mitigating Double Descent in LoRA Fine-Tuning via
Gradient-Guided Perturbation Optimization | cs.CL | Large Language Models (LLMs) have achieved remarkable success in natural
language processing, but their full fine-tuning remains resource-intensive.
Parameter-Efficient Fine-Tuning (PEFT) methods, such as Low-Rank Adaptation
(LoRA), have emerged as a practical solution by approximating parameter updates
with low-rank... |
2502.14541 | LLM-based User Profile Management for Recommender System | cs.CL | The rapid advancement of Large Language Models (LLMs) has opened new
opportunities in recommender systems by enabling zero-shot recommendation
without conventional training. Despite their potential, most existing works
rely solely on users' purchase histories, leaving significant room for
improvement by incorporating... |
2502.14544 | Generalization Error of $f$-Divergence Stabilized Algorithms via Duality | stat.ML cs.LG | The solution to empirical risk minimization with $f$-divergence
regularization (ERM-$f$DR) is extended to constrained optimization problems,
establishing conditions for equivalence between the solution and constraints. A
dual formulation of ERM-$f$DR is introduced, providing a computationally
efficient method to deri... |
2502.14545 | An Entropic Metric for Measuring Calibration of Machine Learning Models | cs.LG | Understanding the confidence with which a machine learning model classifies
an input datum is an important, and perhaps under-investigated, concept. In
this paper, we propose a new calibration metric, the Entropic Calibration
Difference (ECD). Based on existing research in the field of state estimation,
specifically ... |
2502.14546 | Position: Graph Learning Will Lose Relevance Due To Poor Benchmarks | cs.LG cs.AI cs.NE | While machine learning on graphs has demonstrated promise in drug design and
molecular property prediction, significant benchmarking challenges hinder its
further progress and relevance. Current benchmarking practices often lack focus
on transformative, real-world applications, favoring narrow domains like
two-dimens... |
2502.14553 | Multiscale Byte Language Models -- A Hierarchical Architecture for
Causal Million-Length Sequence Modeling | cs.CL cs.AI cs.LG | Bytes form the basis of the digital world and thus are a promising building
block for multimodal foundation models. Recently, Byte Language Models (BLMs)
have emerged to overcome tokenization, yet the excessive length of bytestreams
requires new architectural paradigms. Therefore, we present the Multiscale Byte
Langu... |
2502.14558 | FUIA: Model Inversion Attack against Federated Unlearning | cs.CR cs.AI | With the introduction of regulations related to the "right to be forgotten",
federated learning (FL) is facing new privacy compliance challenges. To address
these challenges, researchers have proposed federated unlearning (FU). However,
existing FU research has primarily focused on improving the efficiency of
unlear... |
2502.14560 | Less is More: Improving LLM Alignment via Preference Data Selection | cs.LG cs.AI cs.CL | Direct Preference Optimization (DPO) has emerged as a promising approach for
aligning large language models with human preferences. While prior work mainly
extends DPO from the aspect of the objective function, we instead improve DPO
from the largely overlooked but critical aspect of data selection.
Specifically, we ... |
2502.14561 | Can LLMs Predict Citation Intent? An Experimental Analysis of In-context
Learning and Fine-tuning on Open LLMs | cs.CL cs.DL | This work investigates the ability of open Large Language Models (LLMs) to
predict citation intent through in-context learning and fine-tuning. Unlike
traditional approaches that rely on pre-trained models like SciBERT, which
require extensive domain-specific pretraining and specialized architectures, we
demonstrate ... |
2502.14563 | Plan-over-Graph: Towards Parallelable LLM Agent Schedule | cs.AI | Large Language Models (LLMs) have demonstrated exceptional abilities in
reasoning for task planning. However, challenges remain under-explored for
parallel schedules. This paper introduces a novel paradigm, plan-over-graph, in
which the model first decomposes a real-life textual task into executable
subtasks and cons... |
2502.14565 | ReVISE: Learning to Refine at Test-Time via Intrinsic Self-Verification | cs.LG cs.CL | Self-awareness, i.e., the ability to assess and correct one's own generation,
is a fundamental aspect of human intelligence, making its replication in large
language models (LLMs) an important yet challenging task. Previous works tackle
this by employing extensive reinforcement learning or rather relying on large
ext... |
2502.14571 | Predicting Filter Medium Performances in Chamber Filter Presses with
Digital Twins Using Neural Network Technologies | cs.LG cs.CE | Efficient solid-liquid separation is crucial in industries like mining, but
traditional chamber filter presses depend heavily on manual monitoring, leading
to inefficiencies, downtime, and resource wastage. This paper introduces a
machine learning-powered digital twin framework to improve operational
flexibility and ... |
2502.14572 | Factor Graph-based Interpretable Neural Networks | cs.LG cs.AI | Comprehensible neural network explanations are foundations for a better
understanding of decisions, especially when the input data are infused with
malicious perturbations. Existing solutions generally mitigate the impact of
perturbations through adversarial training, yet they fail to generate
comprehensible explanat... |
2502.14573 | Self-supervised Monocular Depth Estimation Robust to Reflective Surface
Leveraged by Triplet Mining | cs.CV cs.LG | Self-supervised monocular depth estimation (SSMDE) aims to predict the dense
depth map of a monocular image, by learning depth from RGB image sequences,
eliminating the need for ground-truth depth labels. Although this approach
simplifies data acquisition compared to supervised methods, it struggles with
reflective s... |
2502.14574 | Real-world Troublemaker: A Novel Track Testing Framework for Automated
Driving Systems in Safety-critical Interaction Scenarios | cs.RO cs.ET | Track testing plays a critical role in the safety evaluation of autonomous
driving systems (ADS), as it provides real-world object targets and a
safety-controllable interaction environment. However, existing track testing
scenarios are often pre-fixed and limited, primarily due to the inflexibility
of object target c... |
2502.14581 | A Statistical Case Against Empirical Human-AI Alignment | cs.AI cs.CL cs.LG stat.OT | Empirical human-AI alignment aims to make AI systems act in line with
observed human behavior. While noble in its goals, we argue that empirical
alignment can inadvertently introduce statistical biases that warrant caution.
This position paper thus advocates against naive empirical alignment, offering
prescriptive al... |
2502.14583 | A Theory for Conditional Generative Modeling on Multiple Data Sources | cs.LG cs.AI | The success of large generative models has driven a paradigm shift,
leveraging massive multi-source data to enhance model capabilities. However,
the interaction among these sources remains theoretically underexplored. This
paper takes the first step toward a rigorous analysis of multi-source training
in conditional g... |
2502.14584 | Vision Foundation Models in Medical Image Analysis: Advances and
Challenges | eess.IV cs.CV | The rapid development of Vision Foundation Models (VFMs), particularly Vision
Transformers (ViT) and Segment Anything Model (SAM), has sparked significant
advances in the field of medical image analysis. These models have demonstrated
exceptional capabilities in capturing long-range dependencies and achieving
high ge... |
2502.14585 | A Stackelberg Game Approach for Signal Temporal Logic Control Synthesis
with Uncontrollable Agents | eess.SY cs.SY | In this paper, we investigate the control synthesis problem for Signal
Temporal Logic (STL) specifications in the presence of uncontrollable agents.
Existing works mainly address this problem in a robust control setting by
assuming the uncontrollable agents are adversarial and accounting for the
worst-case scenario. ... |
2502.14586 | Moshi Moshi? A Model Selection Hijacking Adversarial Attack | cs.LG cs.CR | Model selection is a fundamental task in Machine Learning (ML), focusing on
selecting the most suitable model from a pool of candidates by evaluating their
performance on specific metrics. This process ensures optimal performance,
computational efficiency, and adaptability to diverse tasks and environments.
Despite i... |
2502.14589 | Explicit adaptive time stepping for the Cahn-Hilliard equation by
exponential Krylov subspace and Chebyshev polynomial methods | math.NA cs.CE cs.NA physics.comp-ph | The Cahn-Hilliard equation has been widely employed within various
mathematical models in physics, chemistry and engineering. Explicit stabilized
time stepping methods can be attractive for time integration of the
Cahn-Hilliard equation, especially on parallel and hybrid supercomputers. In
this paper, we propose an e... |
2502.14591 | Data-driven Control of T-Product-based Dynamical Systems | eess.SY cs.SY | Data-driven control is a powerful tool that enables the design and
implementation of control strategies directly from data without explicitly
identifying the underlying system dynamics. While various data-driven control
techniques, such as stabilization, linear quadratic regulation, and model
predictive control, have... |
2502.14597 | Multi-Class Imbalanced Learning with Support Vector Machines via
Differential Evolution | cs.LG cs.NE | Support vector machine (SVM) is a powerful machine learning algorithm to
handle classification tasks. However, the classical SVM is developed for binary
problems with the assumption of balanced datasets. Obviously, the multi-class
imbalanced classification problems are more complex. In this paper, we propose
an impro... |
2502.14604 | Noisy Test-Time Adaptation in Vision-Language Models | cs.LG | Test-time adaptation (TTA) aims to address distribution shifts between source
and target data by relying solely on target data during testing. In open-world
scenarios, models often encounter noisy samples, i.e., samples outside the
in-distribution (ID) label space. Leveraging the zero-shot capability of
pre-trained v... |
2502.14613 | Behavioral Analysis of Information Salience in Large Language Models | cs.CL | Large Language Models (LLMs) excel at text summarization, a task that
requires models to select content based on its importance. However, the exact
notion of salience that LLMs have internalized remains unclear. To bridge this
gap, we introduce an explainable framework to systematically derive and
investigate informa... |
2502.14614 | FIND: Fine-grained Information Density Guided Adaptive
Retrieval-Augmented Generation for Disease Diagnosis | cs.CL | Retrieval-Augmented Large Language Models (LLMs), which integrate external
knowledge into LLMs, have shown remarkable performance in various medical
domains, including clinical diagnosis. However, existing RAG methods struggle
to effectively assess task difficulty to make retrieval decisions, thereby
failing to meet ... |
2502.14616 | Monocular Depth Estimation and Segmentation for Transparent Object with
Iterative Semantic and Geometric Fusion | cs.CV | Transparent object perception is indispensable for numerous robotic tasks.
However, accurately segmenting and estimating the depth of transparent objects
remain challenging due to complex optical properties. Existing methods
primarily delve into only one task using extra inputs or specialized sensors,
neglecting the ... |