id | title | categories | abstract |
|---|---|---|---|
2502.14619 | Reward Models Identify Consistency, Not Causality | cs.LG cs.AI cs.CL | Reward models (RMs) play a crucial role in aligning large language models (LLMs) with human preferences and enhancing reasoning quality. Traditionally, RMs are trained to rank candidate outputs based on their correctness and coherence. However, in this work, we present several surprising findings that challenge commo... |
2502.14620 | Exploring RWKV for Sentence Embeddings: Layer-wise Analysis and Baseline Comparison for Semantic Similarity | cs.CL cs.AI | This paper investigates the efficacy of RWKV, a novel language model architecture known for its linear attention mechanism, for generating sentence embeddings in a zero-shot setting. I conduct a layer-wise analysis to evaluate the semantic similarity captured by embeddings from different hidden layers of a pre-traine... |
2502.14625 | Multi-Record Web Page Information Extraction From News Websites | cs.CL cs.IR | In this paper, we focused on the problem of extracting information from web pages containing many records, a task of growing importance in the era of massive web data. Recently, the development of neural network methods has improved the quality of information extraction from web pages. Nevertheless, most of the resea... |
2502.14627 | ATRI: Mitigating Multilingual Audio Text Retrieval Inconsistencies by Reducing Data Distribution Errors | cs.SD cs.AI eess.AS | Multilingual audio-text retrieval (ML-ATR) is a challenging task that aims to retrieve audio clips or multilingual texts from databases. However, existing ML-ATR schemes suffer from inconsistencies for instance similarity matching across languages. We theoretically analyze the inconsistency in terms of both multiling... |
2502.14628 | PEARL: Towards Permutation-Resilient LLMs | cs.LG cs.CL | The in-context learning (ICL) capability of large language models (LLMs) enables them to perform challenging tasks using provided demonstrations. However, ICL is highly sensitive to the ordering of demonstrations, leading to instability in predictions. This paper shows that this vulnerability can be exploited to desi... |
2502.14630 | Understanding long-term energy use in off-grid solar home systems in sub-Saharan Africa | eess.SY cs.SY | Solar home systems provide low-cost electricity access for rural off-grid communities. As access to them increases, more long-term data becomes available on how these systems are used throughout their lifetime. This work analyses a dataset of 1,000 systems across sub-Saharan Africa. Dynamic time warping clustering wa... |
2502.14631 | Synergistic Fusion of Multi-Source Knowledge via Evidence Theory for High-Entropy Alloy Discovery | cs.LG | Discovering novel high-entropy alloys (HEAs) with desirable properties is challenging due to the vast compositional space and complex phase formation mechanisms. Efficient exploration of this space requires a strategic approach that integrates heterogeneous knowledge sources. Here, we propose a framework that systema... |
2502.14634 | CER: Confidence Enhanced Reasoning in LLMs | cs.LG | Ensuring the reliability of Large Language Models (LLMs) in complex reasoning tasks remains a formidable challenge, particularly in scenarios that demand precise mathematical calculations and knowledge-intensive open-domain generation. In this work, we introduce an uncertainty-aware framework designed to enhance the ... |
2502.14637 | ReQFlow: Rectified Quaternion Flow for Efficient and High-Quality Protein Backbone Generation | cs.LG cs.AI | Protein backbone generation plays a central role in de novo protein design and is significant for many biological and medical applications. Although diffusion and flow-based generative models provide potential solutions to this challenging task, they often generate proteins with undesired designability and suffer com... |
2502.14638 | NAVIG: Natural Language-guided Analysis with Vision Language Models for Image Geo-localization | cs.CL cs.CV | Image geo-localization is the task of predicting the specific location of an image and requires complex reasoning across visual, geographical, and cultural contexts. While prior Vision Language Models (VLMs) have the best accuracy at this task, there is a dearth of high-quality datasets and models for analytical reas... |
2502.14642 | How Far are LLMs from Being Our Digital Twins? A Benchmark for Persona-Based Behavior Chain Simulation | cs.CL | Recently, LLMs have garnered increasing attention across academic disciplines for their potential as human digital twins, virtual proxies designed to replicate individuals and autonomously perform tasks such as decision-making, problem-solving, and reasoning on their behalf. However, current evaluations of LLMs prima... |
2502.14643 | Length-Controlled Margin-Based Preference Optimization without Reference Model | cs.CL | Direct Preference Optimization (DPO) is a widely adopted offline algorithm for preference-based reinforcement learning from human feedback (RLHF), designed to improve training simplicity and stability by redefining reward functions. However, DPO is hindered by several limitations, including length bias, memory ineffi... |
2502.14644 | LIFT: Improving Long Context Understanding of Large Language Models through Long Input Fine-Tuning | cs.CL | Long context understanding remains challenging for large language models due to their limited context windows. This paper presents Long Input Fine-Tuning (LIFT), a novel framework for long-context modeling that can improve the long-context performance of arbitrary (short-context) LLMs by dynamically adapting model pa... |
2502.14645 | Edit Once, Update Everywhere: A Simple Framework for Cross-Lingual Knowledge Synchronization in LLMs | cs.CL cs.AI | Knowledge editing allows for efficient adaptation of large language models (LLMs) to new information or corrections without requiring full retraining. However, prior methods typically focus on either single-language editing or basic multilingual editing, failing to achieve true cross-linguistic knowledge synchronizat... |
2502.14648 | Variance Reduction Methods Do Not Need to Compute Full Gradients: Improved Efficiency through Shuffling | cs.LG math.OC | In today's world, machine learning is hard to imagine without large training datasets and models. This has led to the use of stochastic methods for training, such as stochastic gradient descent (SGD). SGD provides weak theoretical guarantees of convergence, but there are modifications, such as Stochastic Variance Red... |
2502.14659 | MAGO-SP: Detection and Correction of Water-Fat Swaps in Magnitude-Only VIBE MRI | cs.CV | Volume Interpolated Breath-Hold Examination (VIBE) MRI generates images suitable for water and fat signal composition estimation. While the two-point VIBE provides water-fat-separated images, the six-point VIBE allows estimation of the effective transversal relaxation rate R2* and the proton density fat fraction (PDF... |
2502.14660 | Beyond the Surface: Uncovering Implicit Locations with LLMs for Personalized Local News | cs.LG | News recommendation systems personalize homepage content to boost engagement, but factors like content type, editorial stance, and geographic focus impact recommendations. Local newspapers balance coverage across regions, yet identifying local articles is challenging due to implicit location cues like slang or landma... |
2502.14662 | InstructAgent: Building User Controllable Recommender via LLM Agent | cs.CL cs.IR | Traditional recommender systems usually take the user-platform paradigm, where users are directly exposed under the control of the platform's recommendation algorithms. However, the defect of recommendation algorithms may put users in very vulnerable positions under this paradigm. First, many sophisticated models are... |
2502.14663 | The Restricted Isometry Property for Measurements from Group Orbits | cs.IT math.IT | It is known that sparse recovery by measurements from random circulant matrices provides good recovery bounds. We generalize this to measurements that arise as a random orbit of a group representation for some finite group G. We derive estimates for the number of measurements required to guarantee the restricted isom... |
2502.14669 | AlphaMaze: Enhancing Large Language Models' Spatial Intelligence via GRPO | cs.CL | Large Language Models (LLMs) have demonstrated impressive capabilities in language processing, yet they often struggle with tasks requiring genuine visual spatial reasoning. In this paper, we introduce a novel two-stage training framework designed to equip standard LLMs with visual reasoning abilities for maze naviga... |
2502.14671 | Explanations of Deep Language Models Explain Language Representations in the Brain | cs.CL cs.AI q-bio.NC | Recent advances in artificial intelligence have given rise to large language models (LLMs) that not only achieve human-like performance but also share computational principles with the brain's language processing mechanisms. While previous research has primarily focused on aligning LLMs' internal representations with... |
2502.14676 | BP-SGCN: Behavioral Pseudo-Label Informed Sparse Graph Convolution Network for Pedestrian and Heterogeneous Trajectory Prediction | cs.CV cs.AI | Trajectory prediction allows better decision-making in applications of autonomous vehicles or surveillance by predicting the short-term future movement of traffic agents. It is classified into pedestrian or heterogeneous trajectory prediction. The former exploits the relatively consistent behavior of pedestrians, but... |
2502.14677 | Data-Constrained Synthesis of Training Data for De-Identification | cs.CL cs.AI | Many sensitive domains -- such as the clinical domain -- lack widely available datasets due to privacy risks. The increasing generative capabilities of large language models (LLMs) have made synthetic datasets a viable path forward. In this study, we domain-adapt LLMs to the clinical domain and generate synthetic cli... |
2502.14678 | How to Get Your LLM to Generate Challenging Problems for Evaluation | cs.CL | The pace of evolution of Large Language Models (LLMs) necessitates new approaches for rigorous and comprehensive evaluation. Traditional human annotation is increasingly impracticable due to the complexities and costs involved in generating high-quality, challenging problems. In this work, we introduce CHASE, a unifi... |
2502.14679 | Disentangled Latent Spaces for Reduced Order Models using Deterministic Autoencoders | cs.LG | Data-driven reduced-order models based on autoencoders generally lack interpretability compared to classical methods such as the proper orthogonal decomposition. More interpretability can be gained by disentangling the latent variables and analyzing the resulting modes. For this purpose, probabilistic $\beta$-variati... |
2502.14681 | seqKAN: Sequence processing with Kolmogorov-Arnold Networks | cs.LG cs.AI | Kolmogorov-Arnold Networks (KANs) have been recently proposed as a machine learning framework that is more interpretable and controllable than the multi-layer perceptron. Various network architectures have been proposed within the KAN framework targeting different tasks and application domains, including sequence pro... |
2502.14682 | Bridging the Gap: Transforming Natural Language Questions into SQL Queries via Abstract Query Pattern and Contextual Schema Markup | cs.CL | Large language models have demonstrated excellent performance in many tasks, including Text-to-SQL, due to their powerful in-context learning capabilities. They are becoming the mainstream approach for Text-to-SQL. However, these methods still have a significant gap compared to human performance, especially on comple... |
2502.14684 | CDGS: Confidence-Aware Depth Regularization for 3D Gaussian Splatting | cs.GR cs.CV | 3D Gaussian Splatting (3DGS) has shown significant advantages in novel view synthesis (NVS), particularly in achieving high rendering speeds and high-quality results. However, its geometric accuracy in 3D reconstruction remains limited due to the lack of explicit geometric constraints during optimization. This paper ... |
2502.14689 | Confidence Estimation via Sequential Likelihood Mixing | stat.ML cs.LG | We present a universal framework for constructing confidence sets based on sequential likelihood mixing. Building upon classical results from sequential analysis, we provide a unifying perspective on several recent lines of work, and establish fundamental connections between sequential mixing, Bayesian inference and ... |
2502.14693 | I-MCTS: Enhancing Agentic AutoML via Introspective Monte Carlo Tree Search | cs.CL | Recent advancements in large language models (LLMs) have shown remarkable potential in automating machine learning tasks. However, existing LLM-based agents often struggle with low-diversity and suboptimal code generation. While recent work has introduced Monte Carlo Tree Search (MCTS) to address these issues, limita... |
2502.14694 | Revisiting Near-Far Field Boundary in Dual-Polarized XL-MIMO Systems | cs.IT math.IT | Extremely large-scale multiple-input multiple-output (XL-MIMO) is expected to be an important technology in future sixth generation (6G) networks. Compared with conventional single-polarized XL-MIMO, where signals are transmitted and received in only one polarization direction, dual-polarized XL-MIMO systems achieve ... |
2502.14698 | General Uncertainty Estimation with Delta Variances | cs.LG cs.AI stat.AP stat.ML | Decision makers may suffer from uncertainty induced by limited data. This may be mitigated by accounting for epistemic uncertainty, which is however challenging to estimate efficiently for large neural networks. To this extent we investigate Delta Variances, a family of algorithms for epistemic uncertainty quantifica... |
2502.14704 | Not All Data are Good Labels: On the Self-supervised Labeling for Time Series Forecasting | cs.LG cs.AI | Time Series Forecasting (TSF) is a crucial task in various domains, yet existing TSF models rely heavily on high-quality data and insufficiently exploit all available data. This paper explores a novel self-supervised approach to re-label time series datasets by inherently constructing candidate datasets. During the o... |
2502.14706 | Building reliable sim driving agents by scaling self-play | cs.AI cs.RO | Simulation agents are essential for designing and testing systems that interact with humans, such as autonomous vehicles (AVs). These agents serve various purposes, from benchmarking AV performance to stress-testing the system's limits, but all use cases share a key requirement: reliability. A simulation agent should... |
2502.14707 | TRUSWorthy: Toward Clinically Applicable Deep Learning for Confident Detection of Prostate Cancer in Micro-Ultrasound | eess.IV cs.LG q-bio.TO | While deep learning methods have shown great promise in improving the effectiveness of prostate cancer (PCa) diagnosis by detecting suspicious lesions from trans-rectal ultrasound (TRUS), they must overcome multiple simultaneous challenges. There is high heterogeneity in tissue appearance, significant class imbalance... |
2502.14708 | Human Misperception of Generative-AI Alignment: A Laboratory Experiment | econ.TH cs.AI cs.GT | We conduct an incentivized laboratory experiment to study people's perception of generative artificial intelligence (GenAI) alignment in the context of economic decision-making. Using a panel of economic problems spanning the domains of risk, time preference, social preference, and strategic interactions, we ask huma... |
2502.14709 | Data-Efficient Pretraining with Group-Level Data Influence Modeling | cs.CL cs.LG | Data-efficient pretraining has shown tremendous potential to elevate scaling laws. This paper argues that effective pretraining data should be curated at the group level, treating a set of data points as a whole rather than as independent contributors. To achieve that, we propose Group-Level Data Influence Modeling (... |
2502.14714 | From Knowledge Generation to Knowledge Verification: Examining the BioMedical Generative Capabilities of ChatGPT | cs.AI cs.CL cs.IR | The generative capabilities of LLM models present opportunities in accelerating tasks and concerns with the authenticity of the knowledge it produces. To address the concerns, we present a computational approach that systematically evaluates the factual accuracy of biomedical knowledge that an LLM model has been prom... |
2502.14718 | Entity Framing and Role Portrayal in the News | cs.CL | We introduce a novel multilingual hierarchical corpus annotated for entity framing and role portrayal in news articles. The dataset uses a unique taxonomy inspired by storytelling elements, comprising 22 fine-grained roles, or archetypes, nested within three main categories: protagonist, antagonist, and innocent. Eac... |
2502.14719 | Internal Incoherency Scores for Constraint-based Causal Discovery Algorithms | stat.ML cs.LG | Causal discovery aims to infer causal graphs from observational or experimental data. Methods such as the popular PC algorithm are based on conditional independence testing and utilize enabling assumptions, such as the faithfulness assumption, for their inferences. In practice, these assumptions, as well as the funct... |
2502.14720 | Advancing Measurement Capabilities in Lithium-Ion Batteries: Exploring the Potential of Fiber Optic Sensors for Thermal Monitoring of Battery Cells | physics.app-ph cs.SY eess.SY | This work demonstrates the potential of fiber optic sensors for measuring thermal effects in lithium-ion batteries, using a fiber optic measurement method of Optical Frequency Domain Reflectometry (OFDR). The innovative application of fiber sensors allows for spatially resolved temperature measurement, particularly e... |
2502.14721 | Multi-dataset synergistic in supervised learning to pre-label structural components in point clouds from shell construction scenes | cs.CV | The significant effort required to annotate data for new training datasets hinders computer vision research and machine learning in the construction industry. This work explores adapting standard datasets and the latest transformer model architectures for point cloud semantic segmentation in the context of shell cons... |
2502.14724 | Ranking Joint Policies in Dynamic Games using Evolutionary Dynamics | cs.MA cs.AI cs.LG | Game-theoretic solution concepts, such as the Nash equilibrium, have been key to finding stable joint actions in multi-player games. However, it has been shown that the dynamics of agents' interactions, even in simple two-player games with few strategies, are incapable of reaching Nash equilibria, exhibiting complex ... |
2502.14727 | WavRAG: Audio-Integrated Retrieval Augmented Generation for Spoken Dialogue Models | cs.SD cs.AI eess.AS | Retrieval Augmented Generation (RAG) has gained widespread adoption owing to its capacity to empower large language models (LLMs) to integrate external knowledge. However, existing RAG frameworks are primarily designed for text-based LLMs and rely on Automatic Speech Recognition to process speech input, which discard... |
2502.14731 | Beyond Performance Scores: Directed Functional Connectivity as a Brain-Based Biomarker for Motor Skill Learning and Retention | q-bio.NC cs.LG | Motor skill acquisition in fields like surgery, robotics, and sports involves learning complex task sequences through extensive training. Traditional performance metrics, like execution time and error rates, offer limited insight as they fail to capture the neural mechanisms underlying skill learning and retention. T... |
2502.14734 | Sentence Smith: Formally Controllable Text Transformation and its Application to Evaluation of Text Embedding Models | cs.CL | We propose the Sentence Smith framework that enables controlled and specified manipulation of text meaning. It consists of three main steps: 1. Parsing a sentence into a semantic graph, 2. Applying human-designed semantic manipulation rules, and 3. Generating text from the manipulated graph. A final filtering step (4... |
2502.14735 | EAGER-LLM: Enhancing Large Language Models as Recommenders through Exogenous Behavior-Semantic Integration | cs.IR cs.AI | Large language models (LLMs) are increasingly leveraged as foundational backbones in the development of advanced recommender systems, offering enhanced capabilities through their extensive knowledge and reasoning. Existing llm-based recommender systems (RSs) often face challenges due to the significant differences be... |
2502.14738 | Robust Information Selection for Hypothesis Testing with Misclassification Penalties | stat.ML cs.SY eess.SP eess.SY math.CO math.OC | We study the problem of robust information selection for a Bayesian hypothesis testing / classification task, where the goal is to identify the true state of the world from a finite set of hypotheses based on observations from the selected information sources. We introduce a novel misclassification penalty framework,... |
2502.14739 | SuperGPQA: Scaling LLM Evaluation across 285 Graduate Disciplines | cs.CL | Large language models (LLMs) have demonstrated remarkable proficiency in mainstream academic disciplines such as mathematics, physics, and computer science. However, human knowledge encompasses over 200 specialized disciplines, far exceeding the scope of existing benchmarks. The capabilities of LLMs in many of these ... |
2502.14740 | YOLOv12: A Breakdown of the Key Architectural Features | cs.CV cs.AI | This paper presents an architectural analysis of YOLOv12, a significant advancement in single-stage, real-time object detection building upon the strengths of its predecessors while introducing key improvements. The model incorporates an optimised backbone (R-ELAN), 7x7 separable convolutions, and FlashAttention-driv... |
2502.14741 | Reinforcement Learning with Graph Attention for Routing and Wavelength Assignment with Lightpath Reuse | cs.NI cs.LG cs.SY eess.SY | Many works have investigated reinforcement learning (RL) for routing and spectrum assignment on flex-grid networks but only one work to date has examined RL for fixed-grid with flex-rate transponders, despite production systems using this paradigm. Flex-rate transponders allow existing lightpaths to accommodate new s... |
2502.14743 | Multi-Agent Coordination across Diverse Applications: A Survey | cs.MA cs.AI | Multi-agent coordination studies the underlying mechanism enabling the trending spread of diverse multi-agent systems (MAS) and has received increasing attention, driven by the expansion of emerging applications and rapid AI advances. This survey outlines the current state of coordination research across applications... |
2502.14744 | HiddenDetect: Detecting Jailbreak Attacks against Large Vision-Language Models via Monitoring Hidden States | cs.CL | The integration of additional modalities increases the susceptibility of large vision-language models (LVLMs) to safety risks, such as jailbreak attacks, compared to their language-only counterparts. While existing research primarily focuses on post-hoc alignment techniques, the underlying safety mechanisms within LV... |
2502.14745 | SQL4NN: Validation and expressive querying of models as data | cs.DB cs.LG | We consider machine learning models, learned from data, to be an important, intensional, kind of data in themselves. As such, various analysis tasks on models can be thought of as queries over this intensional data, often combined with extensional data such as data for training or validation. We demonstrate that rela... |
2502.14746 | Classical and quantum Coxeter codes: Extending the Reed-Muller family | cs.IT math.CO math.IT quant-ph | We introduce a class of binary linear codes that generalizes the Reed-Muller family by replacing the group $\mathbb{Z}_2^m$ with an arbitrary finite Coxeter group. Similar to the Reed-Muller codes, this class is closed under duality and has rate determined by a Gaussian distribution. We also construct quantum CSS cod... |
2502.14748 | Large Language Models Struggle to Describe the Haystack without Human Help: Human-in-the-loop Evaluation of LLMs | cs.CL | A common use of NLP is to facilitate the understanding of large document collections, with a shift from using traditional topic models to Large Language Models. Yet the effectiveness of using LLM for large corpus understanding in real-world applications remains under-explored. This study measures the knowledge users ... |
2502.14752 | TritonBench: Benchmarking Large Language Model Capabilities for Generating Triton Operators | cs.CL cs.LG | Triton, a high-level Python-like language designed for building efficient GPU kernels, is widely adopted in deep learning frameworks due to its portability, flexibility, and accessibility. However, programming and parallel optimization still require considerable trial and error from Triton developers. Despite advance... |
2502.14753 | MedVAE: Efficient Automated Interpretation of Medical Images with Large-Scale Generalizable Autoencoders | eess.IV cs.AI cs.CV | Medical images are acquired at high resolutions with large fields of view in order to capture fine-grained features necessary for clinical decision-making. Consequently, training deep learning models on medical images can incur large computational costs. In this work, we address the challenge of downsizing medical im... |
2502.14755 | Multi-Objective Causal Bayesian Optimization | stat.ML cs.LG | In decision-making problems, the outcome of an intervention often depends on the causal relationships between system components and is highly costly to evaluate. In such settings, causal Bayesian optimization (CBO) can exploit the causal relationships between the system variables and sequentially perform intervention... |
2502.14759 | On the Influence of Context Size and Model Choice in Retrieval-Augmented Generation Systems | cs.CL cs.AI | Retrieval-augmented generation (RAG) has emerged as an approach to augment large language models (LLMs) by reducing their reliance on static knowledge and improving answer factuality. RAG retrieves relevant context snippets and generates an answer based on them. Despite its increasing industrial adoption, systematic ... |
2502.14760 | EquivaMap: Leveraging LLMs for Automatic Equivalence Checking of Optimization Formulations | cs.AI cs.LG math.OC | A fundamental problem in combinatorial optimization is identifying equivalent formulations, which can lead to more efficient solution strategies and deeper insights into a problem's computational complexity. The need to automatically identify equivalence between problem formulations has grown as optimization copilots... |
2502.14762 | Sculpting [CLS] Features for Pre-Trained Model-Based Class-Incremental Learning | cs.LG cs.CV | Class-incremental learning requires models to continually acquire knowledge of new classes without forgetting old ones. Although pre-trained models have demonstrated strong performance in class-incremental learning, they remain susceptible to catastrophic forgetting when learning new concepts. Excessive plasticity in... |
2502.14764 | The illusion of households as entities in social networks | cs.SI physics.soc-ph | Data recording connections between people in communities and villages are collected and analyzed in various ways, most often as either networks of individuals or as networks of households. These two networks can differ in substantial ways. The methodological choice of which network to study, therefore, is an importan... |
2502.14765 | Step-by-Step Fact Verification System for Medical Claims with Explainable Reasoning | cs.CL cs.AI | Fact verification (FV) aims to assess the veracity of a claim based on relevant evidence. The traditional approach for automated FV includes a three-part pipeline relying on short evidence snippets and encoder-only inference models. More recent approaches leverage the multi-turn nature of LLMs to address FV as a step... |
2502.14767 | Tree-of-Debate: Multi-Persona Debate Trees Elicit Critical Thinking for Scientific Comparative Analysis | cs.CL cs.AI | With the exponential growth of research facilitated by modern technology and improved accessibility, scientific discoveries have become increasingly fragmented within and across fields. This makes it challenging to assess the significance, novelty, incremental findings, and equivalent ideas between related works, par... |
2502.14768 | Logic-RL: Unleashing LLM Reasoning with Rule-Based Reinforcement Learning | cs.CL cs.AI | Inspired by the success of DeepSeek-R1, we explore the potential of rule-based reinforcement learning (RL) in large reasoning models. To analyze reasoning dynamics, we use synthetic logic puzzles as training data due to their controllable complexity and straightforward answer verification. We make some key technical ... |
2502.14770 | Determining Layer-wise Sparsity for Large Language Models Through a Theoretical Perspective | cs.LG | In this paper, we address the challenge of determining the layer-wise sparsity rates of large language models (LLMs) through a theoretical perspective. Specifically, we identify a critical issue of ''$\textbf{reconstruction error explosion}$'' in existing LLMs sparsification methods. This refers to the cumulative eff... |
2502.14772 | Efficient Multivariate Robust Mean Estimation Under Mean-Shift Contamination | cs.DS cs.LG math.ST stat.ML stat.TH | We study the algorithmic problem of robust mean estimation of an identity covariance Gaussian in the presence of mean-shift contamination. In this contamination model, we are given a set of points in $\mathbb{R}^d$ generated i.i.d. via the following process. For a parameter $\alpha<1/2$, the $i$-th sample $x_i$ is ob... |
2502.14773 | Sparse Activations as Conformal Predictors | cs.LG | Conformal prediction is a distribution-free framework for uncertainty quantification that replaces point predictions with sets, offering marginal coverage guarantees (i.e., ensuring that the prediction sets contain the true label with a specified probability, in expectation). In this paper, we uncover a novel connect... |
2502.14776 | SurveyX: Academic Survey Automation via Large Language Models | cs.CL | Large Language Models (LLMs) have demonstrated exceptional comprehension capabilities and a vast knowledge base, suggesting that LLMs can serve as efficient tools for automated survey generation. However, recent research related to automated survey generation remains constrained by some critical limitations like fini... |
2502.14777 | Making Universal Policies Universal | cs.AI | The development of a generalist agent capable of solving a wide range of sequential decision-making tasks remains a significant challenge. We address this problem in a cross-agent setup where agents share the same observation space but differ in their action spaces. Our approach builds on the universal policy framewo... |
2502.14778 | Harnessing PDF Data for Improving Japanese Large Multimodal Models | cs.CL cs.AI cs.CV | Large Multimodal Models (LMMs) have demonstrated strong performance in English, but their effectiveness in Japanese remains limited due to the lack of high-quality training data. Current Japanese LMMs often rely on translated English datasets, restricting their ability to capture Japan-specific cultural knowledge. To... |
2502.14779 | DC-ControlNet: Decoupling Inter- and Intra-Element Conditions in Image Generation with Diffusion Models | cs.CV | In this paper, we introduce DC (Decouple)-ControlNet, a highly flexible and precisely controllable framework for multi-condition image generation. The core idea behind DC-ControlNet is to decouple control conditions, transforming global control into a hierarchical system that integrates distinct elements, contents, a... |
2502.14780 | ReVision: A Dataset and Baseline VLM for Privacy-Preserving
Task-Oriented Visual Instruction Rewriting | cs.CL cs.AI cs.CV | Efficient and privacy-preserving multimodal interaction is essential as AR,
VR, and modern smartphones with powerful cameras become primary interfaces for
human-computer communication. Existing powerful large vision-language models
(VLMs) enabling multimodal interaction often rely on cloud-based processing,
raising s... |
2502.14782 | A Neural Operator-Based Emulator for Regional Shallow Water Dynamics | cs.CE cs.LG physics.comp-ph physics.geo-ph | Coastal regions are particularly vulnerable to the impacts of rising sea
levels and extreme weather events. Accurate real-time forecasting of
hydrodynamic processes in these areas is essential for infrastructure planning
and climate adaptation. In this study, we present the Multiple-Input Temporal
Operator Network (M... |
2502.14783 | Tracking and Assigning Jobs to a Markov Machine | cs.IT cs.NI cs.SY eess.SY math.IT | We consider a time-slotted communication system with a machine, a cloud
server, and a sampler. Job requests from the users are queued on the server to
be completed by the machine. The machine has two states, namely, a busy state
and a free state. The server can assign a job to the machine in a
first-in-first-served m... |
2502.14785 | Real-Time Device Reach Forecasting Using HLL and MinHash Data Sketches | cs.DB cs.AI cs.LG | Predicting the right number of TVs (Device Reach) in real time based on
user-specified targeting attributes is imperative for running a multi-million
dollar ads business. The traditional approach of SQL queries to join billions
of records across multiple targeting dimensions is extremely slow. As a
workaround, many a... |
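As background on the MinHash sketches this abstract relies on: a tiny stdlib implementation (hash choice and names are mine, not from the paper) showing how fixed-size signatures replace an exact join when estimating overlap between two huge device sets.

```python
import hashlib

def minhash_signature(items, num_perm=64):
    """MinHash signature: for each seeded hash function, keep the minimum
    hash value over the set's items (stdlib blake2b, salted per permutation)."""
    sig = []
    for seed in range(num_perm):
        salt = seed.to_bytes(16, 'big')
        sig.append(min(
            int.from_bytes(
                hashlib.blake2b(item.encode(), digest_size=8, salt=salt).digest(),
                'big')
            for item in items))
    return sig

def jaccard_estimate(sig_a, sig_b):
    # Fraction of matching slots is an unbiased estimate of Jaccard similarity.
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

Signatures are built once per targeting dimension and compared in O(num_perm), independent of the number of devices, which is the speedup over joining raw records.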
2502.14786 | SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic
Understanding, Localization, and Dense Features | cs.CV cs.AI | We introduce SigLIP 2, a family of new multilingual vision-language encoders
that build on the success of the original SigLIP. In this second iteration, we
extend the original image-text training objective with several prior,
independently developed techniques into a unified recipe -- this includes
captioning-based p... |
2502.14788 | Ray-Tracing for Conditionally Activated Neural Networks | cs.LG cs.AI | In this paper, we introduce a novel architecture for conditionally activated
neural networks combining a hierarchical construction of multiple Mixture of
Experts (MoEs) layers with a sampling mechanism that progressively converges to
an optimized configuration of expert activation. This methodology enables the
dynami... |
2502.14789 | Structurally Disentangled Feature Fields Distillation for 3D
Understanding and Editing | cs.CV | Recent work has demonstrated the ability to leverage or distill pre-trained
2D features obtained using large pre-trained 2D models into 3D features,
enabling impressive 3D editing and understanding capabilities using only 2D
supervision. Although impressive, models assume that 3D features are captured
using a single ... |
2502.14790 | An Adversarial Analysis of Thompson Sampling for Full-information Online
Learning: from Finite to Infinite Action Spaces | cs.LG cs.GT math.ST stat.ML stat.TH | We develop an analysis of Thompson sampling for online learning under full
feedback - also known as prediction with expert advice - where the learner's
prior is defined over the space of an adversary's future actions, rather than
the space of experts. We show regret decomposes into regret the learner
expected a prior... |
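The paper's Thompson sampling puts the prior over the adversary's future actions; as a simplified illustration of the full-information expert setting only, here is a Hedge-style sampler that follows an expert drawn in proportion to exp(-eta x cumulative loss) each round (this toy version and its names are mine, not the paper's construction).

```python
import math
import random

def hedge_sample_experts(loss_rounds, eta=1.0, seed=0):
    """Each round: sample one expert with probability proportional to
    exp(-eta * its cumulative loss) and incur its loss; then observe the
    losses of ALL experts (full-information feedback) and update."""
    rng = random.Random(seed)
    n = len(loss_rounds[0])
    cum = [0.0] * n
    learner_loss = 0.0
    for losses in loss_rounds:
        weights = [math.exp(-eta * c) for c in cum]
        r = rng.random() * sum(weights)
        pick = 0
        for i, w in enumerate(weights):
            r -= w
            if r <= 0:
                pick = i
                break
        learner_loss += losses[pick]
        cum = [c + l for c, l in zip(cum, losses)]
    best = min(sum(l[i] for l in loss_rounds) for i in range(n))
    return learner_loss, learner_loss - best  # total loss, regret
```

Against a fixed adversary the sampling distribution concentrates on the leading expert, so per-round regret decays quickly.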
2502.14791 | Rapid Word Learning Through Meta In-Context Learning | cs.CL cs.AI cs.LG | Humans can quickly learn a new word from a few illustrative examples, and
then systematically and flexibly use it in novel contexts. Yet the abilities of
current language models for few-shot word learning, and methods for improving
these abilities, are underexplored. In this study, we introduce a novel method,
Meta-t... |
2502.14792 | RendBEV: Semantic Novel View Synthesis for Self-Supervised Bird's Eye
View Segmentation | cs.CV | Bird's Eye View (BEV) semantic maps have recently garnered a lot of attention
as a useful representation of the environment to tackle assisted and autonomous
driving tasks. However, most of the existing work focuses on the fully
supervised setting, training networks on large annotated datasets. In this
work, we prese... |
2502.14795 | Humanoid-VLA: Towards Universal Humanoid Control with Visual Integration | cs.RO cs.CV | This paper addresses the limitations of current humanoid robot control
frameworks, which primarily rely on reactive mechanisms and lack autonomous
interaction capabilities due to data scarcity. We propose Humanoid-VLA, a novel
framework that integrates language understanding, egocentric scene perception,
and motion c... |
2502.14796 | A Multi-Agent Perspective on Modern Information Retrieval | cs.IR | The rise of large language models (LLMs) has introduced a new era in
information retrieval (IR), where queries and documents that were once assumed
to be generated exclusively by humans can now also be created by automated
agents. These agents can formulate queries, generate documents, and perform
ranking. This shift... |
2502.14799 | A Survey on Text-Driven 360-Degree Panorama Generation | cs.CV cs.AI | The advent of text-driven 360-degree panorama generation, enabling the
synthesis of 360-degree panoramic images directly from textual descriptions,
marks a transformative advancement in immersive visual content creation. This
innovation significantly simplifies the traditionally complex process of
producing such cont... |
2502.14801 | AVD2: Accident Video Diffusion for Accident Video Description | cs.CV | Traffic accidents present complex challenges for autonomous driving, often
featuring unpredictable scenarios that hinder accurate system interpretation
and responses. Nonetheless, prevailing methodologies fall short in elucidating
the causes of accidents and proposing preventive measures due to the paucity of
training... |
2502.14802 | From RAG to Memory: Non-Parametric Continual Learning for Large Language
Models | cs.CL cs.AI | Our ability to continuously acquire, organize, and leverage knowledge is a
key feature of human intelligence that AI systems must approximate to unlock
their full potential. Given the challenges in continual learning with large
language models (LLMs), retrieval-augmented generation (RAG) has become the
dominant way t... |
2502.14803 | Planning, scheduling, and execution on the Moon: the CADRE technology
demonstration mission | cs.RO cs.SY eess.SY | NASA's Cooperative Autonomous Distributed Robotic Exploration (CADRE)
mission, slated for flight to the Moon's Reiner Gamma region in 2025/2026, is
designed to demonstrate multi-agent autonomous exploration of the Lunar surface
and sub-surface. A team of three robots and a base station will autonomously
explore a reg... |
2502.14807 | FetalCLIP: A Visual-Language Foundation Model for Fetal Ultrasound Image
Analysis | eess.IV cs.AI cs.CV | Foundation models are becoming increasingly effective in the medical domain,
offering pre-trained models on large datasets that can be readily adapted for
downstream tasks. Despite progress, fetal ultrasound images remain a
challenging domain for foundation models due to their inherent complexity,
often requiring sub... |
2502.14809 | PREM: Privately Answering Statistical Queries with Relative Error | cs.LG | We introduce $\mathsf{PREM}$ (Private Relative Error Multiplicative weight
update), a new framework for generating synthetic data that achieves a relative
error guarantee for statistical queries under $(\varepsilon, \delta)$
differential privacy (DP). Namely, for a domain ${\cal X}$, a family ${\cal F}$
of queries $f... |
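The paper's relative-error multiplicative-weights framework is more involved than can be sketched from this abstract; as background on DP answers to statistical queries, here is the standard Laplace mechanism for a single counting query under plain epsilon-DP (absolute error, not the paper's relative-error guarantee; the function is my own illustration).

```python
import math
import random

def laplace_count(data, predicate, epsilon, seed=0):
    """epsilon-DP counting query: true count plus Laplace(1/epsilon) noise.
    A counting query changes by at most 1 per individual (sensitivity 1)."""
    true_count = sum(1 for x in data if predicate(x))
    u = random.Random(seed).random() - 0.5   # u in [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    # Inverse-CDF sample from Laplace(scale = 1/epsilon).
    noise = -(1.0 / epsilon) * sign * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise
```

Absolute noise of scale 1/epsilon is negligible for large counts but overwhelms small ones, which is exactly the gap a relative-error guarantee targets.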
2502.14814 | VB-Com: Learning Vision-Blind Composite Humanoid Locomotion Against
Deficient Perception | cs.RO | The performance of legged locomotion is closely tied to the accuracy and
comprehensiveness of state observations. Blind policies, which rely solely on
proprioception, are considered highly robust due to the reliability of
proprioceptive observations. However, these policies significantly limit
locomotion speed and of... |
2502.14815 | Optimizing Model Selection for Compound AI Systems | cs.AI cs.CL cs.LG cs.MA | Compound AI systems that combine multiple LLM calls, such as self-refine and
multi-agent-debate, achieve strong performance on many AI tasks. We address a
core question in optimizing compound systems: for each LLM call or module in
the system, how should one decide which LLM to use? We show that these LLM
choices hav... |
2502.14816 | Dynamic Low-Rank Sparse Adaptation for Large Language Models | cs.LG | Despite the efficacy of network sparsity in alleviating the deployment strain
of Large Language Models (LLMs), it endures significant performance
degradation. Applying Low-Rank Adaptation (LoRA) to fine-tune the sparse LLMs
offers an intuitive approach to counter this predicament, while it holds
shortcomings include:... |
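For readers unfamiliar with the LoRA mechanism this abstract builds on: a minimal pure-Python forward pass computing y = (W + alpha * B @ A) @ x, where the frozen W may be sparse and only the small factors A and B are trained. This is generic LoRA background, not the paper's proposed adaptation (note the dense B @ A delta is precisely what clashes with sparsity).

```python
def lora_forward(W, A, B, x, alpha=1.0):
    """y = (W + alpha * B @ A) @ x with plain nested lists.

    W: d_out x d_in frozen (possibly sparse) weights;
    A: r x d_in and B: d_out x r are small trainable low-rank factors.
    Only B @ A changes during fine-tuning; W's zeros stay untouched.
    """
    d_in = len(x)
    # Low-rank path: h = A @ x (r values), then delta = B @ h.
    h = [sum(A[k][j] * x[j] for j in range(d_in)) for k in range(len(A))]
    return [sum(W[i][j] * x[j] for j in range(d_in))
            + alpha * sum(B[i][k] * h_k for k, h_k in enumerate(h))
            for i in range(len(W))]
```

Factoring the update through the r-dimensional intermediate h keeps the trainable parameter count at r * (d_in + d_out) instead of d_in * d_out.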
2502.14819 | Learning from Reward-Free Offline Data: A Case for Planning with Latent
Dynamics Models | cs.LG | A long-standing goal in AI is to build agents that can solve a variety of
tasks across different environments, including previously unseen ones. Two
dominant approaches tackle this challenge: (i) reinforcement learning (RL),
which learns policies through trial and error, and (ii) optimal control, which
plans actions ... |
2502.14820 | eC-Tab2Text: Aspect-Based Text Generation from e-Commerce Product Tables | cs.CL cs.AI cs.DB cs.HC | Large Language Models (LLMs) have demonstrated exceptional versatility across
diverse domains, yet their application in e-commerce remains underexplored due
to a lack of domain-specific datasets. To address this gap, we introduce
eC-Tab2Text, a novel dataset designed to capture the intricacies of e-commerce,
includin... |
2502.14821 | Meshless Shape Optimization using Neural Networks and Partial
Differential Equations on Graphs | math.NA cs.LG cs.NA math.OC | Shape optimization involves the minimization of a cost function defined over
a set of shapes, often governed by a partial differential equation (PDE). In
the absence of closed-form solutions, one relies on numerical methods to
approximate the solution. The level set method -- when coupled with the finite
element meth... |
2502.14822 | A Survey of Model Architectures in Information Retrieval | cs.IR | This survey examines the evolution of model architectures in information
retrieval (IR), focusing on two key aspects: backbone models for feature
extraction and end-to-end system architectures for relevance estimation. The
review intentionally separates architectural considerations from training
methodologies to prov... |
2502.14827 | Exploring Advanced Techniques for Visual Question Answering: A
Comprehensive Comparison | cs.CV cs.AI cs.ET cs.LG | Visual Question Answering (VQA) has emerged as a pivotal task in the
intersection of computer vision and natural language processing, requiring
models to understand and reason about visual content in response to natural
language questions. Analyzing VQA datasets is essential for developing robust
models that can hand... |
2502.14828 | Fundamental Limitations in Defending LLM Finetuning APIs | cs.LG cs.CR | LLM developers have imposed technical interventions to prevent fine-tuning
misuse attacks, attacks where adversaries evade safeguards by fine-tuning the
model using a public API. Previous work has established several successful
attacks against specific fine-tuning API defences. In this work, we show that
defences of ... |