id: string (length 9–16)
title: string (length 4–278)
categories: string (length 5–104)
abstract: string (length 6–4.09k)
2502.12902
Probabilistic neural operators for functional uncertainty quantification
cs.LG
Neural operators aim to approximate the solution operator of a system of differential equations purely from data. They have shown immense success in modeling complex dynamical systems across various domains. However, the occurrence of uncertainties inherent in both model and data has so far rarely been taken into acc...
2502.12904
Fraud-R1 : A Multi-Round Benchmark for Assessing the Robustness of LLM Against Augmented Fraud and Phishing Inducements
cs.CL
We introduce Fraud-R1, a benchmark designed to evaluate LLMs' ability to defend against internet fraud and phishing in dynamic, real-world scenarios. Fraud-R1 comprises 8,564 fraud cases sourced from phishing scams, fake job postings, social media, and news, categorized into 5 major fraud types. Unlike previous bench...
2502.12908
Graph Neural Networks for Databases: A Survey
cs.DB cs.AI
Graph neural networks (GNNs) are powerful deep learning models for graph-structured data, demonstrating remarkable success across diverse domains. Recently, the database (DB) community has increasingly recognized the potential of GNNs, prompting a surge of research focused on improving database systems through ...
2502.12911
Knapsack Optimization-based Schema Linking for LLM-based Text-to-SQL Generation
cs.CL cs.DB
Generating SQLs from user queries is a long-standing challenge, where the accuracy of initial schema linking significantly impacts subsequent SQL generation performance. However, current schema linking models still struggle with missing relevant schema elements or an excess of redundant ones. A crucial reason for thi...
2502.12912
A Simplified and Numerically Stable Approach to the BG/NBD Churn Prediction model
stat.OT cs.LG math.ST stat.TH
This study extends the BG/NBD churn probability model, addressing its limitations in industries where customer behaviour is often influenced by seasonal events and possibly high purchase counts. We propose a modified definition of churn, considering a customer to have churned if they make no purchases within M days. ...
2502.12913
GSQ-Tuning: Group-Shared Exponents Integer in Fully Quantized Training for LLMs On-Device Fine-tuning
cs.LG cs.AI cs.CL
Large Language Models (LLMs) fine-tuning technologies have achieved remarkable results. However, traditional LLM fine-tuning approaches face significant challenges: they require large Floating Point (FP) computation, raising privacy concerns when handling sensitive data, and are impractical for resource-constrained e...
2502.12917
Contrast-Unity for Partially-Supervised Temporal Sentence Grounding
cs.CV
Temporal sentence grounding aims to detect event timestamps described by the natural language query from given untrimmed videos. The existing fully-supervised setting achieves great results but requires expensive annotation costs; while the weakly-supervised setting adopts cheap labels but performs poorly. To pursue ...
2502.12918
Query Rewriting via LLMs
cs.DB
Query rewriting is a classical technique for transforming complex declarative SQL queries into "lean" equivalents that are conducive to (a) faster execution from a performance perspective, and (b) better understanding from a developer perspective. The rewriting is typically achieved via transformation rules, but th...
2502.12919
A Smooth Transition Between Induction and Deduction: Fast Abductive Learning Based on Probabilistic Symbol Perception
cs.LG
Abductive learning (ABL), which integrates the strengths of machine learning and logical reasoning to improve generalization, has recently been shown to be effective. However, its efficiency is affected by the transition between numerical induction and symbolic deduction, leading to high computational costs in the...
2502.12920
Lightweight Online Adaption for Time Series Foundation Model Forecasts
cs.LG stat.ML
Foundation models (FMs) have emerged as a promising approach for time series forecasting. While effective, FMs typically remain fixed during deployment due to the high computational costs of learning them online. Consequently, deployed FMs fail to adapt their forecasts to current data characteristics, despite the ava...
2502.12921
Q-STRUM Debate: Query-Driven Contrastive Summarization for Recommendation Comparison
cs.CL
Query-driven recommendation with unknown items poses a challenge for users to understand why certain items are appropriate for their needs. Query-driven Contrastive Summarization (QCS) is a methodology designed to address this issue by leveraging language-based item descriptions to clarify contrasts between them. How...
2502.12923
On-Device LLMs for Home Assistant: Dual Role in Intent Detection and Response Generation
cs.CL
This paper investigates whether Large Language Models (LLMs), fine-tuned on synthetic but domain-representative data, can perform the twofold task of (i) slot and intent detection and (ii) natural language response generation for a smart home assistant, while running solely on resource-limited, CPU-only edge hardware...
2502.12924
Conditioning LLMs to Generate Code-Switched Text: A Methodology Grounded in Naturally Occurring Data
cs.CL cs.AI
Code-switching (CS) is still a critical challenge in Natural Language Processing (NLP). Current Large Language Models (LLMs) struggle to interpret and generate code-switched text, primarily due to the scarcity of large-scale CS datasets for training. This paper presents a novel methodology to generate CS data using L...
2502.12925
Keep what you need: extracting efficient subnetworks from large audio representation models
cs.SD cs.AI
Recently, research on audio foundation models has witnessed notable advances, as illustrated by the ever improving results on complex downstream tasks. Subsequently, those pretrained networks have quickly been used for various audio applications. These improvements have however resulted in a considerable increase bot...
2502.12926
Towards More Contextual Agents: An Extractor-Generator Optimization Framework
cs.AI
Large Language Model (LLM)-based agents have demonstrated remarkable success in solving complex tasks across a wide range of general-purpose applications. However, their performance often degrades in context-specific scenarios, such as specialized industries or research domains, where the absence of domain-relevant k...
2502.12927
SEFL: Harnessing Large Language Model Agents to Improve Educational Feedback Systems
cs.CL
Providing high-quality feedback is crucial for student success but is constrained by time, cost, and limited data availability. We introduce Synthetic Educational Feedback Loops (SEFL), a novel framework designed to deliver immediate, on-demand feedback at scale without relying on extensive, real-world student data. ...
2502.12928
Finedeep: Mitigating Sparse Activation in Dense LLMs via Multi-Layer Fine-Grained Experts
cs.CL
Large language models have demonstrated exceptional performance across a wide range of tasks. However, dense models usually suffer from sparse activation, where many activation values tend towards zero (i.e., being inactivated). We argue that this could restrict the efficient exploration of model representation space...
2502.12929
Flow-of-Options: Diversified and Improved LLM Reasoning by Thinking Through Options
cs.LG cs.AI cs.CL
We present a novel reasoning approach called Flow-of-Options (FoO), designed to address intrinsic biases in Large Language Models (LLMs). FoO enables LLMs to systematically explore a diverse range of possibilities in their reasoning, as demonstrated by an FoO-based agentic system for autonomously solving Machine Lear...
2502.12930
Universal Embedding Function for Traffic Classification via QUIC Domain Recognition Pretraining: A Transfer Learning Success
cs.LG cs.NI
Encrypted traffic classification (TC) methods must adapt to new protocols and extensions as well as to advancements in other machine learning fields. In this paper, we follow a transfer learning setup best known from computer vision. We first pretrain an embedding model on a complex task with a large number of classe...
2502.12932
Synthetic Data Generation for Culturally Nuanced Commonsense Reasoning in Low-Resource Languages
cs.CL
Quantifying reasoning capability in low-resource languages remains a challenge in NLP due to data scarcity and limited access to annotators. While LLM-assisted dataset construction has proven useful for medium- and high-resource languages, its effectiveness in low-resource languages, particularly for commonsense reas...
2502.12937
Tuning Algorithmic and Architectural Hyperparameters in Graph-Based Semi-Supervised Learning with Provable Guarantees
cs.LG
Graph-based semi-supervised learning is a powerful paradigm in machine learning for modeling and exploiting the underlying graph structure that captures the relationship between labeled and unlabeled data. A large number of classical as well as modern deep learning based algorithms have been proposed for this problem...
2502.12944
Performance of Zero-Shot Time Series Foundation Models on Cloud Data
cs.LG
Time series foundation models (FMs) have emerged as a popular paradigm for zero-shot multi-domain forecasting. FMs are trained on numerous diverse datasets and claim to be effective forecasters across multiple different time series domains, including cloud data. In this work we investigate this claim, exploring the e...
2502.12945
LLMPopcorn: An Empirical Study of LLMs as Assistants for Popular Micro-video Generation
cs.CL cs.CV
Popular Micro-videos, dominant on platforms like TikTok and YouTube, hold significant commercial value. The rise of high-quality AI-generated content has spurred interest in AI-driven micro-video creation. However, despite the advanced capabilities of large language models (LLMs) like ChatGPT and DeepSeek in text gen...
2502.12947
Every Expert Matters: Towards Effective Knowledge Distillation for Mixture-of-Experts Language Models
cs.CL cs.AI cs.LG
With the emergence of Mixture-of-Experts (MoE), the efficient scaling of model size has accelerated the development of large language models in recent years. However, their high memory requirements prevent their use in resource-constrained environments. While knowledge distillation (KD) has been a proven method for m...
2502.12948
Fake It Till You Make It: Using Synthetic Data and Domain Knowledge for Improved Text-Based Learning for LGE Detection
cs.CV cs.AI
Detection of hyperenhancement from cardiac LGE MRI images is a complex task requiring significant clinical expertise. Although deep learning-based models have shown promising results for the task, they require large amounts of data with fine-grained annotations. Clinical reports generated for cardiac MR studies conta...
2502.12949
Efficient Learning Under Density Shift in Incremental Settings Using Cramér-Rao-Based Regularization
cs.LG stat.ML
The continuous surge in data volume and velocity is often dealt with using data orchestration and distributed processing approaches, abstracting away the machine learning challenges that exist at the algorithmic level. With growing interest in automating the learning loop, training with data that arrive in a sequence...
2502.12950
Towards Hybrid Traffic Laws for Mixed Flow of Human-Driven Vehicles and Connected Autonomous Vehicles
cs.MA
Hybrid traffic laws represent an innovative approach to managing mixed environments of connected autonomous vehicles (CAVs) and human-driven vehicles (HDVs) by introducing separate sets of regulations for each vehicle type. These laws are designed to leverage the unique capabilities of CAVs while ensuring both types ...
2502.12951
Guaranteed Conditional Diffusion: 3D Block-based Models for Scientific Data Compression
cs.LG
This paper proposes a new compression paradigm -- Guaranteed Conditional Diffusion with Tensor Correction (GCDTC) -- for lossy scientific data compression. The framework is based on recent conditional diffusion (CD) generative models, and it consists of a conditional diffusion model, tensor correction, and error guar...
2502.12953
Task-Informed Anti-Curriculum by Masking Improves Downstream Performance on Text
cs.CL cs.AI cs.LG
Masked language modeling has become a widely adopted unsupervised technique to pre-train language models. However, the process of selecting tokens for masking is random, and the percentage of masked tokens is typically fixed for the entire training process. In this paper, we propose to adjust the masking ratio and to...
2502.12958
Preventing the Popular Item Embedding Based Attack in Federated Recommendations
cs.CR cs.DB cs.LG
Privacy concerns have led to the rise of federated recommender systems (FRS), which can create personalized models across distributed clients. However, FRS is vulnerable to poisoning attacks, where malicious users manipulate gradients to promote their target items intentionally. Existing attacks against FRS have limi...
2502.12959
AlignFreeze: Navigating the Impact of Realignment on the Layers of Multilingual Models Across Diverse Languages
cs.CL cs.AI
Realignment techniques are often employed to enhance cross-lingual transfer in multilingual language models, yet they can sometimes degrade performance in languages that differ significantly from the fine-tuned source language. This paper introduces AlignFreeze, a method that freezes either the layers' lower half ...
2502.12961
Adaptive Tool Use in Large Language Models with Meta-Cognition Trigger
cs.AI cs.CL
Large language models (LLMs) have shown remarkable emergent capabilities, transforming the execution of functional tasks by leveraging external tools for complex problems that require specialized processing or real-time data. While existing research expands LLMs access to diverse tools (e.g., program interpreters, se...
2502.12962
Infinite Retrieval: Attention Enhanced LLMs in Long-Context Processing
cs.CL
Limited by the context window size of Large Language Models (LLMs), handling various tasks with input tokens exceeding the upper limit has been challenging, whether it is a simple direct retrieval task or a complex multi-hop reasoning task. Although various methods have been proposed to enhance the long-context proces...
2502.12963
D3-ARM: High-Dynamic, Dexterous and Fully Decoupled Cable-driven Robotic Arm
cs.RO
Cable transmission enables the motors of a robotic arm to operate lightweight, low-inertia joints remotely in various environments, but it also creates issues with motion coupling and cable routing that can reduce the arm's control precision and performance. In this paper, we present a novel motion decoupling mechanism with...
2502.12964
Trust Me, I'm Wrong: High-Certainty Hallucinations in LLMs
cs.CL
Large Language Models (LLMs) often generate outputs that lack grounding in real-world facts, a phenomenon known as hallucinations. Prior research has associated hallucinations with model uncertainty, leveraging this relationship for hallucination detection and mitigation. In this paper, we challenge the underlying as...
2502.12965
A Survey of Text Classification Under Class Distribution Shift
cs.CL cs.AI cs.LG
The basic underlying assumption of machine learning (ML) models is that the training and test data are sampled from the same distribution. However, in daily practice, this assumption is often broken, i.e., the distribution of the test data changes over time, which hinders the application of conventional ML models. One...
2502.12966
The Early Days of the Ethereum Blob Fee Market and Lessons Learnt
cs.CE cs.CR cs.DC cs.ET econ.GN q-fin.EC
Ethereum has adopted a rollup-centric roadmap to scale by making rollups (layer 2 scaling solutions) the primary method for handling transactions. The first significant step towards this goal was EIP-4844, which introduced blob transactions that are designed to meet the data availability needs of layer 2 protocols. T...
2502.12970
Reasoning-to-Defend: Safety-Aware Reasoning Can Defend Large Language Models from Jailbreaking
cs.CL
The reasoning abilities of Large Language Models (LLMs) have demonstrated remarkable advancement and exceptional performance across diverse domains. However, leveraging these reasoning capabilities to enhance LLM safety against adversarial attacks and jailbreak queries remains largely unexplored. To bridge this gap, ...
2502.12973
Optimizing Social Network Interventions via Hypergradient-Based Recommender System Design
cs.SI cs.SY eess.SY math.OC
Although social networks have expanded the range of ideas and information accessible to users, they are also criticized for amplifying the polarization of user opinions. Given the inherent complexity of these phenomena, existing approaches to counteract these effects typically rely on handcrafted algorithms and heuri...
2502.12974
Learning More Effective Representations for Dense Retrieval through Deliberate Thinking Before Search
cs.IR
Recent dense retrievers usually thrive on the emergent capabilities of Large Language Models (LLMs), using them to encode queries and documents into an embedding space for retrieval. These LLM-based dense retrievers have shown promising performance across various retrieval scenarios. However, relying on a single emb...
2502.12975
Instance-Level Moving Object Segmentation from a Single Image with Events
cs.CV
Moving object segmentation plays a crucial role in understanding dynamic scenes involving multiple moving objects, while the difficulties lie in taking into account both spatial texture structures and temporal motion cues. Existing methods based on video frames encounter difficulties in distinguishing whether pixel d...
2502.12976
Does Training with Synthetic Data Truly Protect Privacy?
cs.CR cs.LG
As synthetic data becomes increasingly popular in machine learning tasks, numerous methods--without formal differential privacy guarantees--use synthetic data for training. These methods often claim, either explicitly or implicitly, to protect the privacy of the original training data. In this work, we explore four d...
2502.12977
Time-series attribution maps with regularized contrastive learning
stat.ML cs.AI cs.LG q-bio.NC
Gradient-based attribution methods aim to explain decisions of deep learning models but so far lack identifiability guarantees. Here, we propose a method to generate attribution maps with identifiability guarantees by developing a regularized contrastive learning algorithm trained on time-series data plus a new attri...
2502.12978
Statistically Significant kNNAD by Selective Inference
stat.ML cs.LG
In this paper, we investigate the problem of unsupervised anomaly detection using the k-Nearest Neighbor method. The k-Nearest Neighbor Anomaly Detection (kNNAD) is a simple yet effective approach for identifying anomalies across various domains and fields. A critical challenge in anomaly detection, including kNNAD, ...
2502.12979
Electron flow matching for generative reaction mechanism prediction obeying conservation laws
cs.LG
Central to our understanding of chemical reactivity is the principle of mass conservation, which is fundamental for ensuring physical consistency, balancing equations, and guiding reaction design. However, data-driven computational models for tasks such as reaction product prediction rarely abide by this most basic c...
2502.12981
Towards Variational Flow Matching on General Geometries
cs.LG math.DG
We introduce Riemannian Gaussian Variational Flow Matching (RG-VFM), an extension of Variational Flow Matching (VFM) that leverages Riemannian Gaussian distributions for generative modeling on structured manifolds. We derive a variational objective for probability flows on manifolds with closed-form geodesics, making...
2502.12982
Sailor2: Sailing in South-East Asia with Inclusive Multilingual LLMs
cs.CL cs.AI cs.LG
Sailor2 is a family of cutting-edge multilingual language models for South-East Asian (SEA) languages, available in 1B, 8B, and 20B sizes to suit diverse applications. Building on Qwen2.5, Sailor2 undergoes continuous pre-training on 500B tokens (400B SEA-specific and 100B replay tokens) to support 13 SEA languages w...
2502.12984
On Erlang mixture approximations for differential equations with distributed time delays
math.DS cs.NA cs.SY eess.SY math.NA
In this paper, we propose a general approach for approximate simulation and analysis of delay differential equations (DDEs) with distributed time delays based on methods for ordinary differential equations (ODEs). The key innovation is that we 1) approximate the kernel by the probability density function of an Erlang...
2502.12985
PartSDF: Part-Based Implicit Neural Representation for Composite 3D Shape Parametrization and Optimization
cs.CV cs.AI
Accurate 3D shape representation is essential in engineering applications such as design, optimization, and simulation. In practice, engineering workflows require structured, part-aware representations, as objects are inherently designed as assemblies of distinct components. However, most existing methods either mode...
2502.12987
Ensemble Kalman filter in latent space using a variational autoencoder pair
cs.LG physics.ao-ph
Popular (ensemble) Kalman filter data assimilation (DA) approaches assume that the errors in both the a priori estimate of the state and those in the observations are Gaussian. For constrained variables, e.g. sea ice concentration or stress, such an assumption does not hold. The variational autoencoder (VAE) is a mac...
2502.12988
Beyond Profile: From Surface-Level Facts to Deep Persona Simulation in LLMs
cs.CL
Previous approaches to persona simulation with large language models (LLMs) have typically relied on learning basic biographical information, or using limited role-play dialogue datasets to capture a character's responses. However, a holistic representation of an individual goes beyond surface-level facts or conversations...
2502.12992
B-cos LM: Efficiently Transforming Pre-trained Language Models for Improved Explainability
cs.CL cs.AI
Post-hoc explanation methods for black-box models often struggle with faithfulness and human interpretability due to the lack of explainability in current neural models. Meanwhile, B-cos networks have been introduced to improve model explainability through architectural and computational adaptations, but their applic...
2502.12993
Approximate Tree Completion and Learning-Augmented Algorithms for Metric Minimum Spanning Trees
cs.DS cs.DM cs.LG
Finding a minimum spanning tree (MST) for $n$ points in an arbitrary metric space is a fundamental primitive for hierarchical clustering and many other ML tasks, but this takes $\Omega(n^2)$ time to even approximate. We introduce a framework for metric MSTs that first (1) finds a forest of disconnected components usi...
2502.12994
SHADeS: Self-supervised Monocular Depth Estimation Through Non-Lambertian Image Decomposition
cs.CV
Purpose: Visual 3D scene reconstruction can support colonoscopy navigation. It can help in recognising which portions of the colon have been visualised and characterising the size and shape of polyps. This is still a very challenging problem due to complex illumination variations, including abundant specular reflecti...
2502.12995
Free Argumentative Exchanges for Explaining Image Classifiers
cs.AI
Deep learning models are powerful image classifiers but their opacity hinders their trustworthiness. Explanation methods for capturing the reasoning process within these classifiers faithfully and in a clear manner are scarce, due to their sheer complexity and size. We provide a solution for this problem by defining ...
2502.12996
Eager Updates For Overlapped Communication and Computation in DiLoCo
cs.CL
Distributed optimization methods such as DiLoCo have been shown to be effective in training very large models across multiple distributed workers, such as datacenters. These methods split updates into two parts: an inner optimization phase, where the workers independently execute multiple optimization steps on their ...
2502.12998
Personalized Top-k Set Queries Over Predicted Scores
cs.DB cs.AI cs.LG
This work studies the applicability of expensive external oracles such as large language models in answering top-k queries over predicted scores. Such scores are incurred by user-defined functions to answer personalized queries over multi-modal data. We propose a generic computational framework that handles arbitrary...
2502.12999
Asymptotic Optimism of Random-Design Linear and Kernel Regression Models
stat.ML cs.LG math.ST stat.TH
We derive the closed-form asymptotic optimism of linear regression models under random designs and generalize it to kernel ridge regression. Using scaled asymptotic optimism as a generic predictive model complexity measure, we study the fundamentally different behaviors of the linear regression model, tangent kernel (...
2502.13000
Edge-Colored Clustering in Hypergraphs: Beyond Minimizing Unsatisfied Edges
cs.DS cs.DM cs.LG
We consider a framework for clustering edge-colored hypergraphs, where the goal is to cluster (equivalently, to color) objects based on the primary type of multiway interactions they participate in. One well-studied objective is to color nodes to minimize the number of unsatisfied hyperedges -- those containing one o...
2502.13001
You need to MIMIC to get FAME: Solving Meeting Transcript Scarcity with Multi-Agent Conversations
cs.AI cs.CL
Meeting summarization suffers from limited high-quality data, mainly due to privacy restrictions and expensive collection processes. We address this gap with FAME, a dataset of 500 meetings in English and 300 in German produced by MIMIC, our new multi-agent meeting synthesis framework that generates meeting transcrip...
2502.13004
Language Barriers: Evaluating Cross-Lingual Performance of CNN and Transformer Architectures for Speech Quality Estimation
cs.CL
Objective speech quality models aim to predict human-perceived speech quality using automated methods. However, cross-lingual generalization remains a major challenge, as Mean Opinion Scores (MOS) vary across languages due to linguistic, perceptual, and dataset-specific differences. A model trained primarily on Engli...
2502.13006
Integrating Reinforcement Learning, Action Model Learning, and Numeric Planning for Tackling Complex Tasks
cs.AI
Automated Planning algorithms require a model of the domain that specifies the preconditions and effects of each action. Obtaining such a domain model is notoriously hard. Algorithms for learning domain models exist, yet it remains unclear whether learning a domain model and planning is an effective approach for nume...
2502.13010
Adaptive Knowledge Graphs Enhance Medical Question Answering: Bridging the Gap Between LLMs and Evolving Medical Knowledge
cs.CL cs.MA
Large Language Models (LLMs) have significantly advanced medical question-answering by leveraging extensive clinical data and medical literature. However, the rapid evolution of medical knowledge and the labor-intensive process of manually updating domain-specific resources pose challenges to the reliability of these...
2502.13012
Towards a Design Guideline for RPA Evaluation: A Survey of Large Language Model-Based Role-Playing Agents
cs.HC cs.CL
Role-Playing Agent (RPA) is an increasingly popular type of LLM Agent that simulates human-like behaviors in a variety of tasks. However, evaluating RPAs is challenging due to diverse task requirements and agent designs. This paper proposes an evidence-based, actionable, and generalizable evaluation design guideline ...
2502.13013
HOMIE: Humanoid Loco-Manipulation with Isomorphic Exoskeleton Cockpit
cs.RO cs.AI cs.HC
Current humanoid teleoperation systems either lack reliable low-level control policies, or struggle to acquire accurate whole-body control commands, making it difficult to teleoperate humanoids for loco-manipulation tasks. To solve these issues, we propose HOMIE, a novel humanoid teleoperation cockpit that integrates a hu...
2502.13016
LLM-Powered Proactive Data Systems
cs.DB cs.AI
With the power of LLMs, we now have the ability to query data that was previously impossible to query, including text, images, and video. However, despite this enormous potential, most present-day data systems that leverage LLMs are reactive, reflecting our community's desire to map LLMs to known abstractions. Most d...
2502.13017
Mean of Means: Human Localization with Calibration-free and Unconstrained Camera Settings (extended version)
cs.CV cs.GR
Accurate human localization is crucial for various applications, especially in the Metaverse era. Existing high precision solutions rely on expensive, tag-dependent hardware, while vision-based methods offer a cheaper, tag-free alternative. However, current vision solutions based on stereo vision face limitations due...
2502.13019
Oreo: A Plug-in Context Reconstructor to Enhance Retrieval-Augmented Generation
cs.CL
Despite the remarkable capabilities of Large Language Models (LLMs) in various NLP tasks, they remain vulnerable to hallucinations due to their limited parametric knowledge and lack of domain-specific expertise. Retrieval-Augmented Generation (RAG) addresses this challenge by incorporating external document retrieval...
2502.13022
Efficient and Sharp Off-Policy Learning under Unobserved Confounding
cs.LG
We develop a novel method for personalized off-policy learning in scenarios with unobserved confounding. Thereby, we address a key limitation of standard policy learning: standard policy learning assumes unconfoundedness, meaning that no unobserved factors influence both treatment assignment and outcomes. However, th...
2502.13023
Detection and Geographic Localization of Natural Objects in the Wild: A Case Study on Palms
cs.CV cs.LG
Palms are ecologically and economically important indicators of tropical forest health, biodiversity, and human impact that support local economies and global forest product supply chains. While palm detection in plantations is well-studied, efforts to map naturally occurring palms in dense forests remain limited by overlappin...
2502.13024
Fragility-aware Classification for Understanding Risk and Improving Generalization
cs.LG math.OC
Classification models play a critical role in data-driven decision-making applications such as medical diagnosis, user profiling, recommendation systems, and default detection. Traditional performance metrics, such as accuracy, focus on overall error rates but fail to account for the confidence of incorrect predictio...
2502.13025
Agentic Deep Graph Reasoning Yields Self-Organizing Knowledge Networks
cs.AI cond-mat.mtrl-sci cs.CL cs.LG
We present an agentic, autonomous graph expansion framework that iteratively structures and refines knowledge in situ. Unlike conventional knowledge graph construction methods relying on static extraction or single-pass learning, our approach couples a reasoning-native large language model with a continually updated ...
2502.13027
A deep learning framework for efficient pathology image analysis
cs.CV
Artificial intelligence (AI) has transformed digital pathology by enabling biomarker prediction from high-resolution whole slide images (WSIs). However, current methods are computationally inefficient, processing thousands of redundant tiles per WSI and requiring complex aggregator models. We introduce EAGLE (Efficie...
2502.13028
Whose story is it? Personalizing story generation by inferring author styles
cs.CL
Personalization has become essential for improving user experience in interactive writing and educational applications, yet its potential in story generation remains largely unexplored. In this work, we propose a novel two-stage pipeline for personalized story generation. Our approach first infers an author's implici...
2502.13030
Likelihood-Ratio Regularized Quantile Regression: Adapting Conformal Prediction to High-Dimensional Covariate Shifts
stat.ML cs.AI cs.LG
We consider the problem of conformal prediction under covariate shift. Given labeled data from a source domain and unlabeled data from a covariate shifted target domain, we seek to construct prediction sets with valid marginal coverage in the target domain. Most existing methods require estimating the unknown likelih...
2502.13031
HPSS: Heuristic Prompting Strategy Search for LLM Evaluators
cs.CL
As the use of large language models (LLMs) for text evaluation has become increasingly prevalent in natural language processing (NLP), a series of existing works has attempted to optimize the prompts for LLM evaluators to improve their alignment with human judgment. However, their efforts are limited t...
2502.13034
Natural Language Generation from Visual Sequences: Challenges and Future Directions
cs.CL cs.AI cs.CV cs.LG
The ability to use natural language to talk about visual content is at the core of human intelligence and a crucial feature of any artificial intelligence system. Various studies have focused on generating text for single images. In contrast, comparatively little attention has been paid to exhaustively analyzing and ...
2502.13037
Enhancing Power Grid Inspections with Machine Learning
cs.CV
Ensuring the safety and reliability of power grids is critical as global energy demands continue to rise. Traditional inspection methods, such as manual observations or helicopter surveys, are resource-intensive and lack scalability. This paper explores the use of 3D computer vision to automate power grid inspections...
2502.13042
Network-Realized Model Predictive Control Part I: NRF-Enabled Closed-loop Decomposition
eess.SY cs.SY
A two-layer control architecture is proposed, which promotes scalable implementations for model predictive controllers. The top layer acts both as a reference governor for the bottom layer and as a feedback controller for the regulated network. By employing set-based methods, global theoretical guarantees are obtained...
2502.13044
Do we still need Human Annotators? Prompting Large Language Models for Aspect Sentiment Quad Prediction
cs.CL
Aspect sentiment quadruple prediction (ASQP) facilitates a detailed understanding of opinions expressed in a text by identifying the opinion term, aspect term, aspect category and sentiment polarity for each opinion. However, annotating a full set of training examples to fine-tune models for ASQP is a resource-intens...
2502.13049
$k$-Graph: A Graph Embedding for Interpretable Time Series Clustering
cs.LG
Time series clustering poses a significant challenge with diverse applications across domains. A prominent drawback of existing solutions lies in their limited interpretability, often confined to presenting users with centroids. In addressing this gap, our work presents $k$-Graph, an unsupervised method explicitly cr...
2502.13053
AEIA-MN: Evaluating the Robustness of Multimodal LLM-Powered Mobile Agents Against Active Environmental Injection Attacks
cs.CL
As researchers continuously optimize AI agents to perform tasks more effectively within operating systems, they often neglect to address the critical need for enabling these agents to identify "impostors" within the system. Through an analysis of the agents' operating environment, we identified a potential threat: at...
2502.13055
LAMD: Context-driven Android Malware Detection and Classification with LLMs
cs.CR cs.AI cs.LG
The rapid growth of mobile applications has escalated Android malware threats. Although there are numerous detection methods, they often struggle with evolving attacks, dataset biases, and limited explainability. Large Language Models (LLMs) offer a promising alternative with their zero-shot inference and reasoning c...
2502.13056
Benchmarking MedMNIST dataset on real quantum hardware
quant-ph cs.LG
Quantum machine learning (QML) has emerged as a promising domain for leveraging the computational capabilities of quantum systems to solve complex classification tasks. In this work, we present the first comprehensive QML study by benchmarking MedMNIST, a diverse collection of medical imaging datasets, on a 127-qubit real...
2502.13059
SimpleVQA: Multimodal Factuality Evaluation for Multimodal Large Language Models
cs.CL
The increasing application of multi-modal large language models (MLLMs) across various sectors has spotlighted the importance of their output reliability and accuracy, particularly their ability to produce content grounded in factual information (e.g., common and domain-specific knowledge). In this work, we introduce Si...
2502.13061
Improved Fine-Tuning of Large Multimodal Models for Hateful Meme Detection
cs.CL cs.AI cs.CV cs.LG
Hateful memes have become a significant concern on the Internet, necessitating robust automated detection systems. While large multimodal models have shown strong generalization across various tasks, they exhibit poor generalization to hateful meme detection due to the dynamic nature of memes tied to emerging social ...
2502.13062
AI-Assisted Decision Making with Human Learning
cs.AI cs.GT cs.HC
AI systems increasingly support human decision-making. In many cases, despite the algorithm's superior performance, the final decision remains in human hands. For example, an AI may assist doctors in determining which diagnostic tests to run, but the doctor ultimately makes the diagnosis. This paper studies such AI-a...
2502.13063
Cramming 1568 Tokens into a Single Vector and Back Again: Exploring the Limits of Embedding Space Capacity
cs.CL cs.LG
A range of recent works addresses the problem of compressing a sequence of tokens into a shorter sequence of real-valued vectors to be used as inputs in place of token embeddings or a key-value cache. These approaches make it possible to reduce the amount of compute in existing language models. Despite relying on powerful models ...
2502.13069
Interactive Agents to Overcome Ambiguity in Software Engineering
cs.AI
AI agents are increasingly being deployed to automate tasks, often based on ambiguous and underspecified user instructions. Making unwarranted assumptions and failing to ask clarifying questions can lead to suboptimal outcomes, safety risks due to tool misuse, and wasted computational resources. In this work, we stud...
2502.13071
RobuRCDet: Enhancing Robustness of Radar-Camera Fusion in Bird's Eye View for 3D Object Detection
cs.CV
While recent low-cost radar-camera approaches have shown promising results in multi-modal 3D object detection, both sensors face challenges from environmental and intrinsic disturbances. Poor lighting or adverse weather conditions degrade camera performance, while radar suffers from noise and positional ambiguity. Ac...
2502.13073
Network-Realized Model Predictive Control Part II: Distributed Constraint Management
eess.SY cs.SY
A two-layer control architecture is proposed, which promotes scalable implementations for model predictive controllers. The top layer acts both as a reference governor for the bottom layer and as a feedback controller for the regulated network. By employing set-based methods, global theoretical guarantees are obtained...
2502.13076
KAPPA: A Generic Patent Analysis Framework with Keyphrase-Based Portraits
cs.CL
Patent analysis relies heavily on concise and interpretable document representations, referred to as patent portraits. Keyphrases, both present and absent, are ideal candidates for patent portraits due to their brevity, representativeness, and clarity. In this paper, we introduce KAPPA, an integrated framework designe...
2502.13077
Pricing is All You Need to Improve Traffic Routing
eess.SY cs.SY
We investigate the design of pricing policies that enhance driver adherence to route guidance, ensuring effective routing control. The major novelty is that we adopt a Markov chain to model drivers' compliance rates conditioned on both traffic states and tolls. By formulating the managed traffic network as a non...
2502.13078
L4P: Low-Level 4D Vision Perception Unified
cs.CV
The spatio-temporal relationship between the pixels of a video carries critical information for low-level 4D perception. A single model that reasons about it should be able to solve several such tasks well. Yet, most state-of-the-art methods rely on architectures specialized for the task at hand. We present L4P (pron...
2502.13080
BOLIMES: Boruta and LIME optiMized fEature Selection for Gene Expression Classification
cs.LG cs.AI
Gene expression classification is a pivotal yet challenging task in bioinformatics, primarily due to the high dimensionality of genomic data and the risk of overfitting. To address these challenges, we propose BOLIMES, a novel feature selection algorithm designed to enhance gene expression classification by systematically ref...
2502.13081
Personalized Image Generation with Deep Generative Models: A Decade Survey
cs.CV
Recent advancements in generative models have significantly facilitated the development of personalized content creation. Given a small set of images of a user-specific concept, personalized image generation makes it possible to create images that incorporate the specified concept and adhere to provided text descriptions. Due t...
2502.13082
Automated Linear Parameter-Varying Modeling of Nonlinear Systems: A Global Embedding Approach
eess.SY cs.SY
In this paper, an automated Linear Parameter-Varying (LPV) model conversion approach is proposed for nonlinear dynamical systems. The proposed method achieves global embedding of the original nonlinear behavior of the system by leveraging the second fundamental theorem of calculus to factorize matrix function express...
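The global embedding described here can be illustrated with the standard factorization obtained from the second fundamental theorem of calculus; the notation below is a generic sketch and may differ from the paper's exact construction:

```latex
\[
f(x) - f(0)
  = \left( \int_0^1 \frac{\partial f}{\partial x}(\lambda x)\,\mathrm{d}\lambda \right) x
  \eqqcolon A(x)\, x,
\]
% With $f(0)=0$, the nonlinear system $\dot{x} = f(x)$ is thus rewritten
% globally (not just around an operating point) as the LPV form
% $\dot{x} = A(\rho)\,x$, with scheduling variable $\rho = x$.
```

Because the integral factorization holds for every $x$, the resulting LPV model reproduces the original nonlinear behavior exactly rather than approximating it locally.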
2502.13085
A Neural Difference-of-Entropies Estimator for Mutual Information
stat.ML cs.IT cs.LG math.IT
Estimating Mutual Information (MI), a key measure of dependence between random quantities that requires no specific modelling assumptions, is a challenging problem in high dimensions. We propose a novel mutual information estimator based on parametrizing conditional densities using normalizing flows, a deep generative model that h...
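As a minimal illustration of the difference-of-entropies identity underlying such estimators, MI can be written as H(X) + H(Y) - H(X, Y). The sketch below estimates it for discrete samples with plug-in entropies; the paper itself parametrizes continuous densities with normalizing flows, which this toy example does not implement:

```python
import math
from collections import Counter

def entropy(samples):
    """Plug-in Shannon entropy (in nats) of an empirical discrete distribution."""
    n = len(samples)
    counts = Counter(samples)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def mutual_information(xs, ys):
    """MI(X;Y) = H(X) + H(Y) - H(X,Y), estimated from paired samples."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

# Perfectly correlated bits: MI equals H(X) = log 2.
xs = [0, 1] * 200
ys = xs[:]
print(mutual_information(xs, ys))  # ≈ 0.693 (= log 2)
```

For independent variables the same estimator returns approximately zero, since the joint entropy then equals the sum of the marginals.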
2502.13090
tn4ml: Tensor Network Training and Customization for Machine Learning
cs.LG cs.MS quant-ph
Tensor Networks have emerged as a prominent alternative to neural networks for addressing Machine Learning challenges in foundational sciences, paving the way for their applications to real-life problems. This paper introduces tn4ml, a novel library designed to seamlessly integrate Tensor Networks into optimization p...
2502.13092
Text2World: Benchmarking Large Language Models for Symbolic World Model Generation
cs.CL cs.AI
Recently, there has been growing interest in leveraging large language models (LLMs) to generate symbolic world models from textual descriptions. Although LLMs have been extensively explored in the context of world modeling, prior studies encountered several challenges, including evaluation randomness, dependence on ...