id | title | categories | abstract
(string lengths: id 9–16, title 4–278, categories 5–104, abstract 6–4.09k)
2502.11312
AI Generations: From AI 1.0 to AI 4.0
cs.AI
This paper proposes that Artificial Intelligence (AI) progresses through several overlapping generations: AI 1.0 (Information AI), AI 2.0 (Agentic AI), AI 3.0 (Physical AI), and now a speculative AI 4.0 (Conscious AI). Each of these AI generations is driven by shifting priorities among algorithms, computing power, an...
2502.11323
A statistical theory of overfitting for imbalanced classification
math.ST cs.LG stat.ML stat.TH
Classification with imbalanced data is a common challenge in data analysis, where certain classes (minority classes) account for a small fraction of the training data compared with other classes (majority classes). Classical statistical theory based on large-sample asymptotics and finite-sample corrections is often i...
2502.11324
Robust High-Dimensional Mean Estimation With Low Data Size, an Empirical Study
stat.ML cs.LG
Robust statistics aims to compute quantities to represent data where a fraction of it may be arbitrarily corrupted. The most essential statistic is the mean, and in recent years, there has been a flurry of theoretical advancement for efficiently estimating the mean in high dimensions on corrupted data. While several ...
2502.11329
Differentially private fine-tuned NF-Net to predict GI cancer type
cs.CV
Based on global genomic status, cancer tumors are classified as Microsatellite Instable (MSI) or Microsatellite Stable (MSS). Immunotherapy is used to treat MSI, whereas radiation and chemotherapy are used for MSS. Therefore, it is important to classify a gastro-intestinal (GI) cancer tumor into MSI vs. MSS t...
2502.11330
System Message Generation for User Preferences using Open-Source Models
cs.CL cs.AI
System messages play a crucial role in interactions with large language models (LLMs), often serving as prompts to initiate conversations. Through system messages, users can assign specific roles, perform intended tasks, incorporate background information, and specify various output formats and communication styles. Desp...
2502.11331
Transfer Learning of CATE with Kernel Ridge Regression
stat.ME cs.LG stat.ML
The proliferation of data has sparked significant interest in leveraging findings from one study to estimate treatment effects in a different target population without direct outcome observations. However, the transfer learning process is frequently hindered by substantial covariate shift and limited overlap between ...
2502.11333
Inverse Flow and Consistency Models
cs.LG cs.AI
Inverse generation problems, such as denoising without ground truth observations, are a critical challenge in many scientific inquiries and real-world applications. While recent advances in generative models like diffusion models, conditional flow matching, and consistency models have achieved impressive results by casting...
2502.11335
Personalized Ranking on Cascading Behavior Graphs for Accurate Multi-Behavior Recommendation
cs.IR
Multi-behavior recommendation predicts items a user may purchase by analyzing diverse behaviors like viewing, adding to a cart, and purchasing. Existing methods fall into two categories: representation learning and graph ranking. Representation learning generates user and item embeddings to capture latent interaction...
2502.11336
ExaGPT: Example-Based Machine-Generated Text Detection for Human Interpretability
cs.CL
Incorrect decisions when detecting texts generated by Large Language Models (LLMs) can cause grave mistakes, such as undermining a student's academic dignity. LLM text detection thus needs to ensure the interpretability of the decision, which can help users judge how reliably correct its prediction is. When humans ...
2502.11337
A Comparison of Human and Machine Learning Errors in Face Recognition
cs.HC cs.CV cs.CY
Machine learning applications in high-stakes scenarios should always operate under human oversight. Developing an optimal combination of human and machine intelligence requires an understanding of their complementarities, particularly regarding the similarities and differences in the way they make mistakes. We perfor...
2502.11338
WRT-SAM: Foundation Model-Driven Segmentation for Generalized Weld Radiographic Testing
cs.CV
Radiographic testing is a fundamental non-destructive evaluation technique for identifying weld defects and assessing quality in industrial applications due to its high-resolution imaging capabilities. Over the past decade, deep learning techniques have significantly advanced weld defect identification in radiographi...
2502.11340
S2TX: Cross-Attention Multi-Scale State-Space Transformer for Time Series Forecasting
cs.LG
Time series forecasting has recently achieved significant progress with multi-scale models to address the heterogeneity between long and short range patterns. Despite their state-of-the-art performance, we identify two potential areas for improvement. First, the variates of the multivariate time series are processed ...
2502.11345
Hierarchical Graph Topic Modeling with Topic Tree-based Transformer
cs.CL
Textual documents are commonly connected in a hierarchical graph structure where a central document links to others with an exponentially growing connectivity. Though Hyperbolic Graph Neural Networks (HGNNs) excel at capturing such graph hierarchy, they cannot model the rich textual semantics within documents. Moreov...
2502.11346
Power-Measurement-Based Channel Autocorrelation Estimation for IRS-Assisted Wideband Communications
cs.IT math.IT
Channel state information (CSI) is essential to the performance optimization of intelligent reflecting surface (IRS)-aided wireless communication systems. However, the passive and frequency-flat reflection of IRS, as well as the high-dimensional IRS-reflected channels, have posed practical challenges for efficient IR...
2502.11349
Biases in Edge Language Models: Detection, Analysis, and Mitigation
cs.LG cs.PF stat.ML
The integration of large language models (LLMs) on low-power edge devices such as Raspberry Pi, known as edge language models (ELMs), has introduced opportunities for more personalized, secure, and low-latency language intelligence that is accessible to all. However, the resource constraints inherent in edge devices ...
2502.11352
A Framework for Learning Scoring Rules in Autonomous Driving Planning Systems
cs.RO cs.LG
In autonomous driving systems, motion planning is commonly implemented as a two-stage process: first, a trajectory proposer generates multiple candidate trajectories, then a scoring mechanism selects the most suitable trajectory for execution. For this critical selection stage, rule-based scoring mechanisms are parti...
2502.11355
"Nuclear Deployed!": Analyzing Catastrophic Risks in Decision-making of Autonomous LLM Agents
cs.CL cs.AI cs.CR cs.CY
Large language models (LLMs) are evolving into autonomous decision-makers, raising concerns about catastrophic risks in high-stakes scenarios, particularly in Chemical, Biological, Radiological and Nuclear (CBRN) domains. Based on the insight that such risks can originate from trade-offs between the agent's Helpful, ...
2502.11356
SAIF: A Sparse Autoencoder Framework for Interpreting and Steering Instruction Following of Language Models
cs.LG cs.AI cs.CL
The ability of large language models (LLMs) to follow instructions is crucial for their practical applications, yet the underlying mechanisms remain poorly understood. This paper presents a novel framework that leverages sparse autoencoders (SAE) to interpret how instruction following works in these models. We demons...
2502.11357
Explorer: Scaling Exploration-driven Web Trajectory Synthesis for Multimodal Web Agents
cs.AI cs.HC
Recent success in large multimodal models (LMMs) has sparked promising applications of agents capable of autonomously completing complex web tasks. While open-source LMM agents have made significant advances in offline evaluation benchmarks, their performance still falls substantially short of human-level capabilitie...
2502.11358
Mimicking the Familiar: Dynamic Command Generation for Information Theft Attacks in LLM Tool-Learning System
cs.AI cs.CR
Information theft attacks pose a significant risk to Large Language Model (LLM) tool-learning systems. Adversaries can inject malicious commands through compromised tools, manipulating LLMs to send sensitive information to these tools, which leads to potential privacy breaches. However, existing attack approaches are...
2502.11360
GeoDANO: Geometric VLM with Domain Agnostic Vision Encoder
cs.CV cs.CL
We introduce GeoDANO, a geometric vision-language model (VLM) with a domain-agnostic vision encoder, for solving plane geometry problems. Although VLMs have been employed for solving geometry problems, their ability to recognize geometric features remains insufficiently analyzed. To address this gap, we propose a ben...
2502.11361
VLDBench: Vision Language Models Disinformation Detection Benchmark
cs.CL
The rapid rise of AI-generated content has made detecting disinformation increasingly challenging. In particular, multimodal disinformation, i.e., online posts and articles that pair images with fabricated text, is specially designed to deceive. While existing AI safety benchmarks primarily address bi...
2502.11362
Teleportation With Null Space Gradient Projection for Optimization Acceleration
cs.LG
Optimization techniques have become increasingly critical due to the ever-growing model complexity and data scale. In particular, teleportation has emerged as a promising approach, which accelerates convergence of gradient descent-based methods by navigating within the loss invariant level set to identify parameters ...
2502.11364
Blessing of Multilinguality: A Systematic Analysis of Multilingual In-Context Learning
cs.CL
While multilingual large language models generally perform adequately, and sometimes even rival English performance on high-resource languages (HRLs), they often significantly underperform on low-resource languages (LRLs). Among several prompting strategies aiming at bridging the gap, multilingual in-context learning...
2502.11367
Sparse Autoencoder Features for Classifications and Transferability
cs.LG cs.AI cs.CL
Sparse Autoencoders (SAEs) show potential for uncovering structured, human-interpretable representations in Large Language Models (LLMs), making them a crucial tool for transparent and controllable AI systems. We systematically analyze SAEs for interpretable feature extraction from LLMs in safety-critical classifi...
2502.11368
LLMs can Perform Multi-Dimensional Analytic Writing Assessments: A Case Study of L2 Graduate-Level Academic English Writing
cs.CL cs.AI
The paper explores the performance of LLMs in the context of multi-dimensional analytic writing assessments, i.e. their ability to provide both scores and comments based on multiple assessment criteria. Using a corpus of literature reviews written by L2 graduate students and assessed by human experts against 9 analyt...
2502.11369
Physics-Informed Gaussian Process Classification for Constraint-Aware Alloy Design
cond-mat.mtrl-sci cs.LG
Alloy design can be framed as a constraint-satisfaction problem. Building on previous methodologies, we propose equipping Gaussian Process Classifiers (GPCs) with physics-informed prior mean functions to model the boundaries of feasible design spaces. Through three case studies, we highlight the utility of informativ...
2502.11370
HI-GVF: Shared Control based on Human-Influenced Guiding Vector Fields for Human-multi-robot Cooperation
cs.RO
Human-multi-robot shared control leverages human decision-making and robotic autonomy to enhance human-robot collaboration. While widely studied, existing systems often adopt a leader-follower model, limiting robot autonomy to some extent. Moreover, a human is required to directly participate in the motion control of ...
2502.11371
RAG vs. GraphRAG: A Systematic Evaluation and Key Insights
cs.IR
Retrieval-Augmented Generation (RAG) enhances the performance of LLMs across various tasks by retrieving relevant information from external sources, particularly on text-based data. For structured data, such as knowledge graphs, GraphRAG has been widely used to retrieve relevant information. However, recent studies h...
2502.11372
Weibull Processes in Network Degree Distributions
cs.SI physics.soc-ph
This study examines degree distributions in two large collaboration networks: the Microsoft Academic Graph (1800-2020) and Internet Movie Database (1900-2020), comprising $2.72 \times 10^8$ and $1.88 \times 10^6$ nodes respectively. Statistical comparison using $\chi^2$ measures showed that Weibull distributions fit ...
2502.11374
Leave No One Behind: Enhancing Diversity While Maintaining Accuracy in Social Recommendation
cs.IR
Social recommendation, a branch of algorithms that utilizes social connection information to construct recommender systems, has demonstrated its effectiveness in enhancing recommendation accuracy. However, apart from accuracy, the diversity of recommendations also plays a critical role in user engagement. Unfortunate...
2502.11375
Robot Deformable Object Manipulation via NMPC-generated Demonstrations in Deep Reinforcement Learning
cs.RO cs.LG
In this work, we conducted research on deformable object manipulation by robots based on demonstration-enhanced reinforcement learning (RL). To improve the learning efficiency of RL, we enhanced the utilization of demonstration data from multiple aspects and proposed the HGCR-DDPG algorithm. It uses a novel high-dime...
2502.11377
PrivilegedDreamer: Explicit Imagination of Privileged Information for Rapid Adaptation of Learned Policies
cs.RO cs.LG
Numerous real-world control problems involve dynamics and objectives affected by unobservable hidden parameters, ranging from autonomous driving to robotic manipulation, which cause performance degradation during sim-to-real transfer. To represent these kinds of domains, we adopt hidden-parameter Markov decision proc...
2502.11379
CCJA: Context-Coherent Jailbreak Attack for Aligned Large Language Models
cs.CR cs.AI cs.CL
Despite explicit alignment efforts for large language models (LLMs), they can still be exploited to trigger unintended behaviors, a phenomenon known as "jailbreaking." Current jailbreak attack methods mainly focus on discrete prompt manipulations targeting closed-source LLMs, relying on manually crafted prompt templa...
2502.11380
Exploring the Small World of Word Embeddings: A Comparative Study on Conceptual Spaces from LLMs of Different Scales
cs.CL
A conceptual space represents concepts as nodes and semantic relatedness as edges. Word embeddings, combined with a similarity metric, provide an effective approach to constructing such a space. Typically, embeddings are derived from traditional distributed models or encoder-only pretrained models, whose objectives d...
2502.11381
Without Paired Labeled Data: An End-to-End Self-Supervised Paradigm for UAV-View Geo-Localization
cs.CV cs.AI
UAV-View Geo-Localization (UVGL) aims to ascertain the precise location of a UAV by retrieving the most similar GPS-tagged satellite image. However, existing methods predominantly rely on supervised learning paradigms that necessitate annotated paired data for training, which incurs substantial annotation costs and i...
2502.11382
A Physics-Informed Blur Learning Framework for Imaging Systems
cs.CV
Accurate blur estimation is essential for high-performance imaging across various applications. Blur is typically represented by the point spread function (PSF). In this paper, we propose a physics-informed PSF learning framework for imaging systems, consisting of a simple calibration followed by a learning process. ...
2502.11386
Intelligent Mobile AI-Generated Content Services via Interactive Prompt Engineering and Dynamic Service Provisioning
cs.NI cs.LG
Due to massive computational demands of large generative models, AI-Generated Content (AIGC) can organize collaborative Mobile AIGC Service Providers (MASPs) at network edges to provide ubiquitous and customized content generation for resource-constrained users. However, such a paradigm faces two significant challeng...
2502.11387
RoleMRC: A Fine-Grained Composite Benchmark for Role-Playing and Instruction-Following
cs.CL
Role-playing is important for Large Language Models (LLMs) to follow diverse instructions while maintaining role identity and the role's pre-defined ability limits. Existing role-playing datasets mostly contribute to controlling role style and knowledge boundaries, but overlook role-playing in instruction-following s...
2502.11390
MARS: Mesh AutoRegressive Model for 3D Shape Detailization
cs.CV
State-of-the-art methods for mesh detailization predominantly utilize Generative Adversarial Networks (GANs) to generate detailed meshes from coarse ones. These methods typically learn a specific style code for each category or similar categories without enforcing geometry supervision across different Levels of Detai...
2502.11393
HellaSwag-Pro: A Large-Scale Bilingual Benchmark for Evaluating the Robustness of LLMs in Commonsense Reasoning
cs.CL
Large language models (LLMs) have shown remarkable capabilities in commonsense reasoning; however, some variations in questions can trigger incorrect responses. Do these models truly understand commonsense knowledge, or just memorize expression patterns? To investigate this question, we present the first extensive ro...
2502.11394
Oversmoothing as Loss of Sign: Towards Structural Balance in Graph Neural Networks
cs.LG
Oversmoothing is a common issue in graph neural networks (GNNs), where node representations become excessively homogeneous as the number of layers increases, resulting in degraded performance. Various strategies have been proposed to combat oversmoothing in practice, yet they are based on different heuristics and lac...
2502.11396
Maintenance of Structural Hole Spanners in Dynamic Networks
cs.SI
Structural Hole (SH) spanners are the set of users who bridge different groups of users and are vital in numerous applications. Despite their importance, existing work for identifying SH spanners focuses only on static networks. However, real-world networks are highly dynamic, and the underlying structure of the net...
2502.11400
Revisiting Robust RAG: Do We Still Need Complex Robust Training in the Era of Powerful LLMs?
cs.CL
Retrieval-augmented generation (RAG) systems often suffer from performance degradation when encountering noisy or irrelevant documents, driving researchers to develop sophisticated training strategies to enhance their robustness against such retrieval noise. However, as large language models (LLMs) continue to advanc...
2502.11401
Following the Autoregressive Nature of LLM Embeddings via Compression and Alignment
cs.CL
A new trend uses LLMs as dense text encoders via contrastive learning. However, since LLM embeddings predict the probability distribution of the next token, they are inherently generative and distributive, conflicting with contrastive learning, which requires embeddings to capture full-text semantics and align via co...
2502.11404
ToolCoder: A Systematic Code-Empowered Tool Learning Framework for Large Language Models
cs.CL
Tool learning has emerged as a crucial capability for large language models (LLMs) to solve complex real-world tasks through interaction with external tools. Existing approaches face significant challenges, including reliance on hand-crafted prompts, difficulty in multi-step planning, and lack of precise error diagno...
2502.11405
LayAlign: Enhancing Multilingual Reasoning in Large Language Models via Layer-Wise Adaptive Fusion and Alignment Strategy
cs.CL
Despite being pretrained on multilingual corpora, large language models (LLMs) exhibit suboptimal performance on low-resource languages. Recent approaches have leveraged multilingual encoders alongside LLMs by introducing trainable parameters connecting the two models. However, these methods typically focus on the en...
2502.11408
Precise GPS-Denied UAV Self-Positioning via Context-Enhanced Cross-View Geo-Localization
cs.CV
Image retrieval has been employed as a robust complementary technique to address the challenge of Unmanned Aerial Vehicles (UAVs) self-positioning. However, most existing methods primarily focus on localizing objects captured by UAVs through complex part-based representations, often overlooking the unique challenges ...
2502.11410
Structure based SAT dataset for analysing GNN generalisation
cs.LG
Satisfiability (SAT) solvers based on techniques such as conflict driven clause learning (CDCL) have produced excellent performance on both synthetic and real world industrial problems. While these CDCL solvers only operate on a per-problem basis, graph neural network (GNN) based solvers bring new benefits to the fie...
2502.11411
Detecting and Filtering Unsafe Training Data via Data Attribution
cs.LG
Large language models (LLMs) are vulnerable to unsafe training data: even small amounts of unsafe data can lead to harmful model behaviors. Detecting and filtering such unsafe training data is essential for trustworthy model development. Current state-of-the-art (SOTA) approaches typically rely on training modera...
2502.11413
Statistical Query Hardness of Multiclass Linear Classification with Random Classification Noise
cs.LG stat.ML
We study the task of Multiclass Linear Classification (MLC) in the distribution-free PAC model with Random Classification Noise (RCN). Specifically, the learner is given a set of labeled examples $(x, y)$, where $x$ is drawn from an unknown distribution on $R^d$ and the labels are generated by a multiclass linear cla...
2502.11414
Unbiased Learning to Rank with Query-Level Click Propensity Estimation: Beyond Pointwise Observation and Relevance
cs.IR
Most existing unbiased learning-to-rank (ULTR) approaches are based on the user examination hypothesis, which assumes that users will click a result only if it is both relevant and observed (typically modeled by position). However, in real-world scenarios, users often click only one or two results after examining mul...
2502.11417
DiSCo: Device-Server Collaborative LLM-Based Text Streaming Services
cs.LG cs.DC
The rapid rise of large language models (LLMs) in text streaming services has introduced significant cost and Quality of Experience (QoE) challenges in serving millions of daily requests, especially in meeting Time-To-First-Token (TTFT) and Time-Between-Token (TBT) requirements for real-time interactions. Our real-wo...
2502.11418
TimeCAP: Learning to Contextualize, Augment, and Predict Time Series Events with Large Language Model Agents
cs.AI cs.LG
Time series data is essential in various applications, including climate modeling, healthcare monitoring, and financial analytics. Understanding the contextual information associated with real-world time series data is often essential for accurate and reliable event predictions. In this paper, we introduce TimeCAP, a...
2502.11419
InsBank: Evolving Instruction Subset for Ongoing Alignment
cs.CL
Large language models (LLMs) typically undergo instruction tuning to enhance alignment. Recent studies emphasize that quality and diversity of instruction data are more crucial than quantity, highlighting the need to select diverse, high-quality subsets to reduce training costs. However, how to evolve these selected ...
2502.11420
Training-Free Guidance Beyond Differentiability: Scalable Path Steering with Tree Search in Diffusion and Flow Models
cs.LG
Training-free guidance enables controlled generation in diffusion and flow models, but most existing methods assume differentiable objectives and rely on gradients. This work focuses on training-free guidance addressing challenges from non-differentiable objectives and discrete data distributions. We propose an algor...
2502.11422
Planning of Heuristics: Strategic Planning on Large Language Models with Monte Carlo Tree Search for Automating Heuristic Optimization
cs.AI
Heuristics have achieved great success in solving combinatorial optimization problems (COPs). However, heuristics designed by humans require too much domain knowledge and testing time. Given the fact that Large Language Models (LLMs) possess strong capabilities to understand and generate content, and a knowledge base...
2502.11423
Exploring Persona Sentiment Sensitivity in Personalized Dialogue Generation
cs.CL
Personalized dialogue systems have advanced considerably with the integration of user-specific personas into large language models (LLMs). However, while LLMs can effectively generate personalized responses, the influence of persona sentiment on dialogue quality remains underexplored. In this work, we conduct a large...
2502.11425
Counterfactual-Consistency Prompting for Relative Temporal Understanding in Large Language Models
cs.CL cs.AI
Despite the advanced capabilities of large language models (LLMs), their temporal reasoning ability remains underdeveloped. Prior works have highlighted this limitation, particularly in maintaining temporal consistency when understanding events. For example, models often confuse mutually exclusive temporal relations ...
2502.11426
Verti-Bench: A General and Scalable Off-Road Mobility Benchmark for Vertically Challenging Terrain
cs.RO
Recent advancements in off-road autonomy have shown promise in deploying autonomous mobile robots in outdoor off-road environments. Encouraging results have been reported from both simulated and real-world experiments. However, unlike evaluating off-road perception tasks on static datasets, benchmarking off-road mobil...
2502.11427
Do we Really Need Visual Instructions? Towards Visual Instruction-Free Fine-tuning for Large Vision-Language Models
cs.CL cs.CV
Visual instruction tuning has become the predominant technology in eliciting the multimodal task-solving capabilities of large vision-language models (LVLMs). Despite the success, as visual instructions require images as input, this leaves a gap in inheriting the task-solving capabilities from the backbone L...
2502.11429
What's in a Query: Polarity-Aware Distribution-Based Fair Ranking
cs.LG cs.CY
Machine learning-driven rankings, where individuals (or items) are ranked in response to a query, mediate search exposure or attention in a variety of safety-critical settings. Thus, it is important to ensure that such rankings are fair. Under the goal of equal opportunity, attention allocated to an individual on a r...
2502.11431
Any Information Is Just Worth One Single Screenshot: Unifying Search With Visualized Information Retrieval
cs.CL
With the popularity of multimodal techniques, there is growing interest in acquiring useful information in visual forms. In this work, we formally define an emerging IR paradigm called Visualized Information Retrieval, or Vis-IR, where multimodal information, such as texts, images, tables and char...
2502.11433
FLAG-Trader: Fusion LLM-Agent with Gradient-based Reinforcement Learning for Financial Trading
cs.AI cs.CE q-fin.TR
Large language models (LLMs) fine-tuned on multimodal financial data have demonstrated impressive reasoning capabilities in various financial tasks. However, they often struggle with multi-step, goal-oriented scenarios in interactive financial markets, such as trading, where complex agentic approaches are required to...
2502.11435
SMART: Self-Aware Agent for Tool Overuse Mitigation
cs.AI cs.CL cs.LG
Current Large Language Model (LLM) agents demonstrate strong reasoning and tool use capabilities, but often lack self-awareness, failing to balance these approaches effectively. This imbalance leads to Tool Overuse, where models unnecessarily rely on external tools for tasks solvable with parametric knowledge, increa...
2502.11436
ADO: Automatic Data Optimization for Inputs in LLM Prompts
cs.LG
This study explores a novel approach to enhance the performance of Large Language Models (LLMs) through the optimization of input data within prompts. While previous research has primarily focused on refining instruction components and augmenting input data with in-context examples, our work investigates the potentia...
2502.11437
Learning Dexterous Bimanual Catch Skills through Adversarial-Cooperative Heterogeneous-Agent Reinforcement Learning
cs.RO cs.AI
Robotic catching has traditionally focused on single-handed systems, which are limited in their ability to handle larger or more complex objects. In contrast, bimanual catching offers significant potential for improved dexterity and object handling but introduces new challenges in coordination and control. In this pa...
2502.11438
SAFE-SQL: Self-Augmented In-Context Learning with Fine-grained Example Selection for Text-to-SQL
cs.CL
Text-to-SQL aims to convert natural language questions into executable SQL queries. While previous approaches, such as skeleton-masked selection, have demonstrated strong performance by retrieving similar training examples to guide large language models (LLMs), they struggle in real-world scenarios where such example...
2502.11439
An Efficient Row-Based Sparse Fine-Tuning
cs.CL cs.AI cs.LG
Fine-tuning is an important step in adapting foundation models such as large language models to downstream tasks. To make this step more accessible to users with limited computational budgets, it is crucial to develop fine-tuning methods that are memory and computationally efficient. Sparse Fine-tuning (SFT) and Low-...
2502.11440
Medical Image Registration Meets Vision Foundation Model: Prototype Learning and Contour Awareness
cs.CV
Medical image registration is a fundamental task in medical image analysis, aiming to establish spatial correspondences between paired images. However, existing unsupervised deformable registration methods rely solely on intensity-based similarity metrics, lacking explicit anatomical knowledge, which limits their acc...
2502.11441
Which Retain Set Matters for LLM Unlearning? A Case Study on Entity Unlearning
cs.CL
Large language models (LLMs) risk retaining unauthorized or sensitive information from their training data, which raises privacy concerns. LLM unlearning seeks to mitigate these risks by selectively removing specified data while maintaining overall model performance. However, most existing work focuses on methods to ac...
2502.11442
Multi-Turn Multi-Modal Question Clarification for Enhanced Conversational Understanding
cs.IR cs.AI cs.CL cs.LG
Conversational query clarification enables users to refine their search queries through interactive dialogue, improving search effectiveness. Traditional approaches rely on text-based clarifying questions, which often fail to capture complex user preferences, particularly those involving visual attributes. While rece...
2502.11444
Does RAG Really Perform Bad For Long-Context Processing?
cs.CL
The efficient processing of long context poses a serious challenge for large language models (LLMs). Recently, retrieval-augmented generation (RAG) has emerged as a promising strategy for this problem, as it enables LLMs to make selective use of the long context for efficient computation. However, existing RAG approa...
2502.11447
Does Editing Provide Evidence for Localization?
cs.LG cs.AI
A basic aspiration for interpretability research in large language models is to "localize" semantically meaningful behaviors to particular components within the LLM. There are various heuristics for finding candidate locations within the LLM. Once a candidate localization is found, it can be assessed by editing the i...
2502.11448
AGrail: A Lifelong Agent Guardrail with Effective and Adaptive Safety Detection
cs.AI
The rapid advancements in Large Language Models (LLMs) have enabled their deployment as autonomous agents for handling complex tasks in dynamic environments. These LLMs demonstrate strong problem-solving capabilities and adaptability to multifaceted scenarios. However, their use as agents also introduces significant ...
2502.11449
Tractable General Equilibrium
cs.GT cs.CE econ.TH
We study Walrasian economies (or general equilibrium models) and their solution concept, the Walrasian equilibrium. A key challenge in this domain is identifying price-adjustment processes that converge to equilibrium. One such process, tâtonnement, is an auction-like algorithm first proposed in 1874 by Léon Walr...
2502.11450
Fishing For Cheap And Efficient Pruners At Initialization
cs.LG cs.AI
Pruning offers a promising solution to mitigate the associated costs and environmental impact of deploying large deep neural networks (DNNs). Traditional approaches rely on computationally expensive trained models or time-consuming iterative prune-retrain cycles, undermining their utility in resource-constrained sett...
2502.11451
From Personas to Talks: Revisiting the Impact of Personas on LLM-Synthesized Emotional Support Conversations
cs.CL
The rapid advancement of Large Language Models (LLMs) has revolutionized the generation of emotional support conversations (ESC), offering scalable solutions with reduced costs and enhanced data privacy. This paper explores the role of personas in the creation of ESC by LLMs. Our research utilizes established psychol...
2502.11453
Connector-S: A Survey of Connectors in Multi-modal Large Language Models
cs.LG cs.AI
With the rapid advancements in multi-modal large language models (MLLMs), connectors play a pivotal role in bridging diverse modalities and enhancing model performance. However, the design and evolution of connectors have not been comprehensively analyzed, leaving gaps in understanding how these components function a...
2502.11454
UniCBE: An Uniformity-driven Comparing Based Evaluation Framework with Unified Multi-Objective Optimization
cs.CL
Human preference plays a significant role in measuring large language models and guiding them to align with human values. Unfortunately, current comparing-based evaluation (CBE) methods typically focus on a single optimization objective, failing to effectively utilize scarce yet valuable preference signals. To addres...
2502.11456
Leveraging Labelled Data Knowledge: A Cooperative Rectification Learning Network for Semi-supervised 3D Medical Image Segmentation
cs.CV cs.AI
Semi-supervised 3D medical image segmentation aims to achieve accurate segmentation using scarce labelled data and abundant unlabelled data. The main challenge in the design of semi-supervised learning methods lies in the effective use of the unlabelled data for training. A promising solution consists of ensuring co...
2502.11457
Aligning Sentence Simplification with ESL Learner's Proficiency for Language Acquisition
cs.CL cs.AI
Text simplification is crucial for improving accessibility and comprehension for English as a Second Language (ESL) learners. This study goes a step further and aims to facilitate ESL learners' language acquisition by simplification. Specifically, we propose simplifying complex sentences to appropriate levels for lea...
2502.11458
Towards Efficient Pre-training: Exploring FP4 Precision in Large Language Models
cs.LG cs.AI
The burgeoning computational demands for training large language models (LLMs) necessitate efficient methods, including quantized training, which leverages low-bit arithmetic operations to reduce costs. While FP8 precision has shown potential, leveraging FP4 remains challenging due to inherent quantization errors and...
2502.11459
Towards Responsible and Fair Data Science: Resource Allocation for Inclusive and Sustainable Analytics
cs.DB
This project addresses the challenges of responsible and fair resource allocation in data science (DS), focusing on DS queries evaluation. Current DS practices often overlook the broader socio-economic, environmental, and ethical implications, including data sovereignty, fairness, and inclusivity. By integrating a de...
2502.11460
UnitCoder: Scalable Iterative Code Synthesis with Unit Test Guidance
cs.CL cs.SE
Large Language Models (LLMs) have demonstrated remarkable capabilities in various tasks, yet code generation remains a major challenge. Current approaches for obtaining high-quality code data primarily focus on (i) collecting large-scale pre-training data and (ii) synthesizing instruction data through prompt engineer...
2502.11461
Doppler Correspondence: Non-Iterative Scan Matching With Doppler Velocity-Based Correspondence
cs.RO
Achieving successful scan matching is essential for LiDAR odometry. However, in challenging environments with adverse weather conditions or repetitive geometric patterns, LiDAR odometry performance is degraded due to incorrect scan matching. Recently, the emergence of frequency-modulated continuous wave 4D LiDAR and ...
2502.11462
LMFCA-Net: A Lightweight Model for Multi-Channel Speech Enhancement with Efficient Narrow-Band and Cross-Band Attention
eess.AS cs.LG cs.SD
Deep learning based end-to-end multi-channel speech enhancement methods have achieved impressive performance by leveraging sub-band, cross-band, and spatial information. However, these methods often demand substantial computational resources, limiting their practicality on terminal devices. This paper presents a ligh...
2502.11465
All Models Are Miscalibrated, But Some Less So: Comparing Calibration with Conditional Mean Operators
stat.ML cs.LG
When working in a high-risk setting, having well calibrated probabilistic predictive models is a crucial requirement. However, estimators for calibration error are not always able to correctly distinguish which model is better calibrated. We propose the \emph{conditional kernel calibration error} (CKCE) which is base...
2502.11466
GiFT: Gibbs Fine-Tuning for Code Generation
cs.LG cs.CL cs.SE
Training Large Language Models (LLMs) with synthetic data is a prevalent practice in code generation. A key approach is self-training, where LLMs are iteratively trained on self-generated correct code snippets. In this case, the self-generated code is drawn from a conditional distribution, conditioned on a specific...
2502.11467
Approximation of Permutation Invariant Polynomials by Transformers: Efficient Construction in Column-Size
cs.LG math.FA
Transformers are a type of neural network that have demonstrated remarkable performance across various domains, particularly in natural language processing tasks. Motivated by this success, research on the theoretical understanding of transformers has garnered significant attention. A notable example is the mathemati...
2502.11468
Semantically Robust Unsupervised Image Translation for Paired Remote Sensing Images
cs.CV
Image translation for change detection or classification in bi-temporal remote sensing images is unique: although paired images are available, the task remains unsupervised. Moreover, strict semantic preservation in translation is always needed instead of multimodal outputs. In response to these problems, this paper prop...
2502.11469
If Attention Serves as a Cognitive Model of Human Memory Retrieval, What is the Plausible Memory Representation?
cs.CL
Recent work in computational psycholinguistics has revealed intriguing parallels between attention mechanisms and human memory retrieval, focusing primarily on Transformer architectures that operate on token-level representations. However, computational psycholinguistic research has also established that syntactic st...
2502.11470
Optimized detection of cyber-attacks on IoT networks via hybrid deep learning models
cs.CR cs.AI
The rapid expansion of Internet of Things (IoT) devices has increased the risk of cyber-attacks, making effective detection essential for securing IoT networks. This work introduces a novel approach combining Self-Organizing Maps (SOMs), Deep Belief Networks (DBNs), and Autoencoders to detect known and previously uns...
2502.11471
GLTW: Joint Improved Graph Transformer and LLM via Three-Word Language for Knowledge Graph Completion
cs.CL cs.IR
Knowledge Graph Completion (KGC), which aims to infer missing or incomplete facts, is a crucial task for KGs. However, integrating the vital structural information of KGs into Large Language Models (LLMs) and outputting predictions deterministically remains challenging. To address this, we propose a new method called...
2502.11476
FastMCTS: A Simple Sampling Strategy for Data Synthesis
cs.CL
Synthetic high-quality multi-step reasoning data can significantly enhance the performance of large language models on various tasks. However, most existing methods rely on rejection sampling, which generates trajectories independently and suffers from inefficiency and imbalanced sampling across problems of varying d...
2502.11477
Learning to Sample Effective and Diverse Prompts for Text-to-Image Generation
cs.CV
Recent advances in text-to-image diffusion models have achieved impressive image generation capabilities. However, it remains challenging to control the generation process with desired properties (e.g., aesthetic quality, user intention), which can be expressed as black-box reward functions. In this paper, we focus o...
2502.11478
TAPS: Throat and Acoustic Paired Speech Dataset for Deep Learning-Based Speech Enhancement
cs.SD cs.LG eess.AS
In high-noise environments such as factories, subways, and busy streets, capturing clear speech is challenging due to background noise. Throat microphones provide a solution with their noise-suppressing properties, reducing the noise while recording speech. However, a significant limitation remains: high-frequency in...
2502.11480
Enhancing Offline Model-Based RL via Active Model Selection: A Bayesian Optimization Perspective
cs.LG stat.ML
Offline model-based reinforcement learning (MBRL) serves as a competitive framework that can learn well-performing policies solely from pre-collected data with the help of learned dynamics models. To fully unleash the power of offline MBRL, model selection plays a pivotal role in determining the dynamics model utiliz...
2502.11481
Variable-frame CNNLSTM for Breast Nodule Classification using Ultrasound Videos
cs.CV cs.AI
The intersection of medical imaging and artificial intelligence has become an important research direction in intelligent medical treatment, particularly in the analysis of medical images using deep learning for clinical diagnosis. Despite the advances, existing keyframe classification methods lack extraction of time...
2502.11482
DATA: Decomposed Attention-based Task Adaptation for Rehearsal-Free Continual Learning
cs.LG cs.AI cs.CL
Continual learning (CL) is essential for Large Language Models (LLMs) to adapt to evolving real-world demands, yet they are susceptible to catastrophic forgetting (CF). While traditional CF solutions rely on expensive data rehearsal, recent rehearsal-free methods employ model-based and regularization-based strategies...