Columns: id (string, 9–16 chars) · title (string, 4–278 chars) · categories (string, 5–104 chars) · abstract (string, 6–4.09k chars)
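The listing below follows a fixed 4-line record shape (id, title, space-separated categories, truncated abstract). As a minimal sketch, assuming that shape holds for every record, the flat text can be parsed into structured rows; the `Record` name and the sample lines are illustrative, not part of any official schema.

```python
from dataclasses import dataclass


@dataclass
class Record:
    id: str
    title: str
    categories: list[str]  # e.g. ["cs.CL", "cs.IR"]
    abstract: str          # truncated abstract, "..." preserved as-is


def parse_listing(lines: list[str]) -> list[Record]:
    """Group consecutive non-empty lines into 4-field records."""
    rows = [ln.strip() for ln in lines if ln.strip()]
    records = []
    for i in range(0, len(rows) - 3, 4):
        rid, title, cats, abstract = rows[i:i + 4]
        records.append(Record(rid, title, cats.split(), abstract))
    return records


# Hypothetical sample in the same shape as the listing below.
sample = [
    "2502.11113",
    "Valuable Hallucinations: Realizable Non-realistic Propositions",
    "cs.CL",
    "This paper introduces the first formal definition ...",
]
recs = parse_listing(sample)
```

Splitting `categories` on whitespace handles multi-category rows such as `cs.CL cs.IR` without any extra logic.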
2502.11113
Valuable Hallucinations: Realizable Non-realistic Propositions
cs.CL
This paper introduces the first formal definition of valuable hallucinations in large language models (LLMs), addressing a gap in the existing literature. We provide a systematic definition and analysis of hallucination value, proposing methods for enhancing the value of hallucinations. In contrast to previous works,...
2502.11114
Beyond Pairwise: Global Zero-shot Temporal Graph Generation
cs.CL
Temporal relation extraction (TRE) is a fundamental task in natural language processing (NLP) that involves identifying the temporal relationships between events in a document. Despite the advances in large language models (LLMs), their application to TRE remains limited. Most existing approaches rely on pairwise cla...
2502.11115
Are Generative Models Underconfident? An Embarrassingly Simple Quality Estimation Approach
cs.CL
Quality Estimation (QE) is the task of estimating the quality of model output when the ground truth reference is not available. Looking at model uncertainty through its own output probabilities is the most trivial and low-effort way to estimate output quality. However, for generative models, output probabilities might not be the...
2502.11116
Gumbel Reranking: Differentiable End-to-End Reranker Optimization
cs.CL cs.IR
RAG systems rely on rerankers to identify relevant documents. However, fine-tuning these models remains challenging due to the scarcity of annotated query-document pairs. Existing distillation-based approaches suffer from training-inference misalignment and fail to capture interdependencies among candidate documents....
2502.11122
Hierarchical Expert Prompt for Large-Language-Model: An Approach Defeat Elite AI in TextStarCraft II for the First Time
cs.AI
Since the emergence of Large Language Models (LLMs), they have been widely used in fields such as writing, translation, and search. However, there is still great potential for LLM-based methods in handling complex tasks such as decision-making in the StarCraft II environment. To address problems such as lack of re...
2502.11123
DuplexMamba: Enhancing Real-time Speech Conversations with Duplex and Streaming Capabilities
cs.CL
Real-time speech conversation is essential for natural and efficient human-machine interactions, requiring duplex and streaming capabilities. Traditional Transformer-based conversational chatbots operate in a turn-based manner and exhibit quadratic computational complexity that grows as the input size increases. In t...
2502.11124
AdaManip: Adaptive Articulated Object Manipulation Environments and Policy Learning
cs.RO cs.AI
Articulated object manipulation is a critical capability for robots to perform various tasks in real-world scenarios. Composed of multiple parts connected by joints, articulated objects are endowed with diverse functional mechanisms through complex relative motions. For example, a safe consists of a door, a handle, a...
2502.11127
G-Safeguard: A Topology-Guided Security Lens and Treatment on LLM-based Multi-agent Systems
cs.CR cs.LG cs.MA
Large Language Model (LLM)-based Multi-agent Systems (MAS) have demonstrated remarkable capabilities in various complex tasks, ranging from collaborative problem-solving to autonomous decision-making. However, as these systems become increasingly integrated into critical applications, their vulnerability to adversari...
2502.11128
FELLE: Autoregressive Speech Synthesis with Token-Wise Coarse-to-Fine Flow Matching
cs.CL cs.SD eess.AS
To advance continuous-valued token modeling and temporal-coherence enforcement, we propose FELLE, an autoregressive model that integrates language modeling with token-wise flow matching. By leveraging the autoregressive nature of language models and the generative efficacy of flow matching, FELLE effectively predicts...
2502.11131
Improving Similar Case Retrieval Ranking Performance By Revisiting RankSVM
cs.CL
Given the rapid development of Legal AI, much attention has been paid to one of the most important legal AI tasks, similar case retrieval, especially with language models now in use. In our paper, however, we try to improve the ranking performance of current models from the perspective of learning to rank instead of ...
2502.11132
UNITE-FND: Reframing Multimodal Fake News Detection through Unimodal Scene Translation
cs.LG cs.AI
Multimodal fake news detection typically demands complex architectures and substantial computational resources, posing deployment challenges in real-world settings. We introduce UNITE-FND, a novel framework that reframes multimodal fake news detection as a unimodal text classification task. We propose six specialized...
2502.11133
MasRouter: Learning to Route LLMs for Multi-Agent Systems
cs.LG cs.MA
Multi-agent systems (MAS) powered by Large Language Models (LLMs) have been demonstrated to push the boundaries of LLM capabilities, yet they often incur significant costs and face challenges in dynamic LLM selection. Current LLM routing methods effectively reduce overhead in single-agent scenarios by customizing LLM...
2502.11134
Solving Online Resource-Constrained Scheduling for Follow-Up Observation in Astronomy: a Reinforcement Learning Approach
cs.AI astro-ph.IM
In the astronomical observation field, determining the allocation of observation resources of the telescope array and planning follow-up observations for targets of opportunity (ToOs) are indispensable components of astronomical scientific discovery. This problem is computationally challenging, given the online obser...
2502.11137
Safety Evaluation of DeepSeek Models in Chinese Contexts
cs.CL cs.AI
Recently, the DeepSeek series of models, leveraging their exceptional reasoning capabilities and open-source strategy, is reshaping the global AI landscape. Despite these advantages, they exhibit significant safety deficiencies. Research conducted by Robust Intelligence, a subsidiary of Cisco, in collaboration with t...
2502.11138
Machine Learning-Based Intrusion Detection and Prevention System for IIoT Smart Metering Networks: Challenges and Solutions
cs.LG
The Industrial Internet of Things (IIoT) has revolutionized industries by enabling automation, real-time data exchange, and smart decision-making. However, its increased connectivity introduces cybersecurity threats, particularly in smart metering networks, which play a crucial role in monitoring and optimizing energ...
2502.11140
VisPath: Automated Visualization Code Synthesis via Multi-Path Reasoning and Feedback-Driven Optimization
cs.SE cs.AI cs.CL cs.HC
Unprecedented breakthroughs in Large Language Models (LLMs) have amplified their penetration into automated visualization code generation. Few-shot prompting and query expansion techniques have notably enhanced data visualization performance; however, they still fail to overcome the ambiguity and complexity of nat...
2502.11141
Cognitive Neural Architecture Search Reveals Hierarchical Entailment
cs.NE cs.AI q-bio.QM
Recent research has suggested that the brain is more shallow than previously thought, challenging the traditionally assumed hierarchical structure of the ventral visual pathway. Here, we demonstrate that optimizing convolutional network architectures for brain-alignment via evolutionary neural architecture search res...
2502.11142
NavRAG: Generating User Demand Instructions for Embodied Navigation through Retrieval-Augmented LLM
cs.AI cs.CL cs.CV
Vision-and-Language Navigation (VLN) is an essential skill for embodied agents, allowing them to navigate in 3D environments following natural language instructions. High-performance navigation models require a large amount of training data; the high cost of manually annotating data has seriously hindered this field....
2502.11147
Efficient Long-Decoding Inference with Reasoning-Aware Attention Sparsity
cs.LG cs.AI
Large Language Models (LLMs) have demonstrated strong capabilities across various domains, with recent advancements in challenging reasoning tasks such as mathematics and programming. However, solving reasoning tasks often requires long decoding chains (of thoughts), which incur $O(N)$ time and memory consumption, wh...
2502.11149
Large Language-Geometry Model: When LLM meets Equivariance
cs.LG cs.AI
Accurately predicting 3D structures and dynamics of physical systems is crucial in scientific applications. Existing approaches that rely on geometric Graph Neural Networks (GNNs) effectively enforce $\mathrm{E}(3)$-equivariance, but they often fall short in leveraging extensive broader information. While direct applicatio...
2502.11150
Surprisal Takes It All: Eye Tracking Based Cognitive Evaluation of Text Readability Measures
cs.CL
Text readability measures are widely used in many real-world scenarios and in NLP. These measures have primarily been developed by predicting reading comprehension outcomes, while largely neglecting what is perhaps the core aspect of a readable text: reading ease. In this work, we propose a new eye tracking based met...
2502.11152
Error Bound Analysis for the Regularized Loss of Deep Linear Neural Networks
math.OC cs.LG
The optimization foundations of deep linear networks have received significant attention lately. However, due to the non-convexity and hierarchical structure, analyzing the regularized loss of deep linear networks remains a challenging task. In this work, we study the local geometric landscape of the regularized squa...
2502.11155
Uncertainty-Aware Search and Value Models: Mitigating Search Scaling Flaws in LLMs
cs.AI cs.CL
Value model-guided search is effective in steering the generation but suffers from scaling flaws: Its superiority diminishes with larger sample sizes, underperforming non-search baselines. This limitation arises from reliability degradation in value models in unseen reasoning paths. To address this, we propose an unc...
2502.11157
Dyve: Thinking Fast and Slow for Dynamic Process Verification
cs.AI
We present Dyve, a dynamic process verifier that enhances reasoning error detection in large language models by integrating fast and slow thinking, inspired by Kahneman's Systems Theory. Dyve adaptively applies immediate token-level confirmation System 1 for straightforward steps and comprehensive analysis System 2 f...
2502.11158
AnyRefill: A Unified, Data-Efficient Framework for Left-Prompt-Guided Vision Tasks
cs.CV
In this paper, we present a novel Left-Prompt-Guided (LPG) paradigm to address a diverse range of reference-based vision tasks. Inspired by the human creative process, we reformulate these tasks using a left-right stitching formulation to construct contextual input. Building upon this foundation, we propose AnyRefill...
2502.11161
BFA: Best-Feature-Aware Fusion for Multi-View Fine-grained Manipulation
cs.RO cs.CV
In real-world scenarios, multi-view cameras are typically employed for fine-grained manipulation tasks. Existing approaches (e.g., ACT) tend to treat multi-view features equally and directly concatenate them for policy learning. However, it will introduce redundant visual information and bring higher computational co...
2502.11162
Logarithmic Width Suffices for Robust Memorization
cs.LG stat.ML
The memorization capacity of neural networks with a given architecture has been thoroughly studied in many works. Specifically, it is well-known that memorizing $N$ samples can be done using a network of constant width, independent of $N$. However, the required constructions are often quite delicate. In this paper, w...
2502.11163
VLMs as GeoGuessr Masters: Exceptional Performance, Hidden Biases, and Privacy Risks
cs.CV cs.CL
Visual-Language Models (VLMs) have shown remarkable performance across various tasks, particularly in recognizing geographic information from images. However, significant challenges remain, including biases and privacy concerns. To systematically address these issues in the context of geographic information recogniti...
2502.11164
Quantifying the Capability Boundary of DeepSeek Models: An Application-Driven Performance Analysis
cs.AI cs.LG
DeepSeek-R1, known for its low training cost and exceptional reasoning capabilities, has achieved state-of-the-art performance on various benchmarks. However, detailed evaluations from the perspective of real-world applications are lacking, making it challenging for users to select the most suitable DeepSeek models f...
2502.11167
SURGE: On the Potential of Large Language Models as General-Purpose Surrogate Code Executors
cs.LG cs.CL
Large language models (LLMs) have demonstrated remarkable capabilities in code-related tasks, such as code understanding and code generation. However, an equally important yet underexplored question is whether LLMs can serve as general-purpose surrogate code executors, to predict the output and behavior of a program ...
2502.11168
Knowing Your Target: Target-Aware Transformer Makes Better Spatio-Temporal Video Grounding
cs.CV cs.AI
The Transformer has attracted increasing interest in spatio-temporal video grounding (STVG), owing to its end-to-end pipeline and promising results. Existing Transformer-based STVG approaches often leverage a set of object queries, which are initialized simply using zeros and then gradually learn target position information via iterative interactions with...
2502.11169
Leveraging Constrained Monte Carlo Tree Search to Generate Reliable Long Chain-of-Thought for Mathematical Reasoning
cs.CL
Recently, Long Chains-of-Thought (CoTs) have gained widespread attention for improving the reasoning capabilities of Large Language Models (LLMs). This requires existing LLMs, which lack the ability to generate Long CoTs, to acquire such capability through post-training methods. Without additional training, ...
2502.11173
Evaluating the Potential of Quantum Machine Learning in Cybersecurity: A Case-Study on PCA-based Intrusion Detection Systems
quant-ph cs.CR cs.LG cs.NI
Quantum computing promises to revolutionize our understanding of the limits of computation, and its implications in cryptography have long been evident. Today, cryptographers are actively devising post-quantum solutions to counter the threats posed by quantum-enabled adversaries. Meanwhile, quantum scientists are inn...
2502.11175
Investigating Language Preference of Multilingual RAG Systems
cs.CL
Multilingual Retrieval-Augmented Generation (mRAG) systems enhance language models by integrating external multilingual information to produce context-aware responses. However, mRAG systems struggle with retrieving relevant information due to linguistic variations between queries and documents, generating inconsisten...
2502.11176
LogiDynamics: Unraveling the Dynamics of Logical Inference in Large Language Model Reasoning
cs.CL
Modern large language models (LLMs) employ various forms of logical inference, both implicitly and explicitly, when addressing reasoning tasks. Understanding how to optimally leverage these inference paradigms is critical for advancing LLMs' reasoning capabilities. This paper adopts an exploratory approach by introdu...
2502.11177
The Mirage of Model Editing: Revisiting Evaluation in the Wild
cs.CL
Despite near-perfect results in artificial evaluations, the effectiveness of model editing in real-world applications remains unexplored. To bridge this gap, we propose to study model editing in question answering (QA) by establishing a rigorous evaluation practice to assess the effectiveness of editing methods in co...
2502.11178
DAViMNet: SSMs-Based Domain Adaptive Object Detection
cs.CV
Unsupervised domain adaptation (UDA) for object detection adapts models trained on labeled source domains to unlabeled target domains, ensuring robust performance across domain shifts. Transformer-based architectures excel at capturing long-range dependencies but face efficiency challenges due to their quadratic atte...
2502.11179
RT-DEMT: A hybrid real-time acupoint detection model combining mamba and transformer
cs.CV cs.AI
Traditional Chinese acupuncture methods often face controversy in clinical practice due to their high subjectivity. Additionally, current intelligent-assisted acupuncture systems have two major limitations: slow acupoint localization speed and low accuracy. To address these limitations, a new method leverages the exc...
2502.11181
Improving Scientific Document Retrieval with Concept Coverage-based Query Set Generation
cs.IR cs.AI
In specialized fields like the scientific domain, constructing large-scale human-annotated datasets poses a significant challenge due to the need for domain expertise. Recent methods have employed large language models to generate synthetic queries, which serve as proxies for actual user queries. However, they lack c...
2502.11182
Stacked Intelligent Metasurface-Based Transceiver Design for Near-Field Wideband Systems
cs.IT math.IT
Intelligent metasurfaces may be harnessed for realizing efficient holographic multiple-input and multiple-output (MIMO) systems, at a low hardware-cost and high energy-efficiency. As part of this family, we propose a hybrid beamforming design for stacked intelligent metasurfaces (SIM) aided wideband wireless systems ...
2502.11183
Don't Get Lost in the Trees: Streamlining LLM Reasoning by Overcoming Tree Search Exploration Pitfalls
cs.CL
Recent advancements in tree search algorithms guided by verifiers have significantly enhanced the reasoning capabilities of large language models (LLMs), but at the cost of increased computational resources. In this work, we identify two key challenges contributing to this inefficiency: $\textit{over-exploration}$ du...
2502.11184
Can't See the Forest for the Trees: Benchmarking Multimodal Safety Awareness for Multimodal LLMs
cs.CL cs.AI cs.CV cs.MM
Multimodal Large Language Models (MLLMs) have expanded the capabilities of traditional language models by enabling interaction through both text and images. However, ensuring the safety of these models remains a significant challenge, particularly in accurately identifying whether multimodal content is safe or unsafe...
2502.11187
TituLLMs: A Family of Bangla LLMs with Comprehensive Benchmarking
cs.CL cs.AI
In this paper, we present TituLLMs, the first large pretrained Bangla LLMs, available in 1B and 3B parameter sizes. Due to computational constraints during both training and inference, we focused on smaller models. To train TituLLMs, we collected a pretraining dataset of approximately 37 billion tokens. We extended t...
2502.11188
Exploring information geometry: Recent Advances and Connections to Topological Field Theory
math.DG cs.IT math.AG math.IT
This introductory text arises from a lecture given by the first author in Göteborg, Sweden, and is intended for undergraduate students, as well as for any mathematically inclined reader wishing to explore a synthesis of ideas connecting geometry and statistics. At its core, this work seeks to illustrate the p...
2502.11190
ReLearn: Unlearning via Learning for Large Language Models
cs.CL cs.AI cs.CV cs.HC cs.LG
Current unlearning methods for large language models usually rely on reverse optimization to reduce target token probabilities. However, this paradigm disrupts the subsequent tokens prediction, degrading model performance and linguistic coherence. Moreover, existing evaluation metrics overemphasize contextual forgett...
2502.11191
Primus: A Pioneering Collection of Open-Source Datasets for Cybersecurity LLM Training
cs.CR cs.AI cs.CL
Large Language Models (LLMs) have shown remarkable advancements in specialized fields such as finance, law, and medicine. However, in cybersecurity, we have noticed a lack of open-source datasets, with a particular lack of high-quality cybersecurity pretraining corpora, even though much research indicates that LLMs a...
2502.11193
Large Language Models Penetration in Scholarly Writing and Peer Review
cs.CL
While the widespread use of Large Language Models (LLMs) brings convenience, it also raises concerns about the credibility of academic research and scholarly processes. To better understand these dynamics, we evaluate the penetration of LLMs across academic workflows from multiple perspectives and dimensions, providi...
2502.11195
From Deception to Perception: The Surprising Benefits of Deepfakes for Detecting, Measuring, and Mitigating Bias
cs.CV cs.AI
While deepfake technologies have predominantly been criticized for potential misuse, our study demonstrates their significant potential as tools for detecting, measuring, and mitigating biases in key societal domains. By employing deepfake technology to generate controlled facial images, we extend the scope of tradit...
2502.11196
How Do LLMs Acquire New Knowledge? A Knowledge Circuits Perspective on Continual Pre-Training
cs.LG cs.AI cs.CL cs.CV cs.HC
Despite exceptional capabilities in knowledge-intensive tasks, Large Language Models (LLMs) face a critical gap in understanding how they internalize new knowledge, particularly how to structurally embed acquired knowledge in their neural computations. We address this issue through the lens of knowledge circuit evolu...
2502.11197
CSP: A Simulator For Multi-Agent Ranking Competitions
cs.IR cs.GT
In ranking competitions, document authors compete for the highest rankings by modifying their content in response to past rankings. Previous studies focused on human participants, primarily students, in controlled settings. The rise of generative AI, particularly Large Language Models (LLMs), introduces a new paradig...
2502.11198
ANCHOLIK-NER: A Benchmark Dataset for Bangla Regional Named Entity Recognition
cs.CL cs.LG
ANCHOLIK-NER is a linguistically diverse dataset for Named Entity Recognition (NER) in Bangla regional dialects, capturing variations across Sylhet, Chittagong, and Barishal. The dataset has around 10,443 sentences, roughly 3,481 per region. The data was collected from two publicly available datasets and through we...
2502.11201
Bridging the Gap: Enabling Natural Language Queries for NoSQL Databases through Text-to-NoSQL Translation
cs.DB cs.AI
NoSQL databases have become increasingly popular due to their outstanding performance in handling large-scale, unstructured, and semi-structured data, highlighting the need for user-friendly interfaces to bridge the gap between non-technical users and complex database queries. In this paper, we introduce the Text-to-...
2502.11203
Multiscale autonomous forecasting of plasma systems' dynamics using neural networks
physics.plasm-ph cs.LG
Plasma systems exhibit complex multiscale dynamics, resolving which poses significant challenges for conventional numerical simulations. Machine learning (ML) offers an alternative by learning data-driven representations of these dynamics. Yet existing ML time-stepping models suffer from error accumulation, instabili...
2502.11205
Deep Contrastive Learning for Feature Alignment: Insights from Housing-Household Relationship Inference
cs.LG cs.CY
Housing and household characteristics are key determinants of social and economic well-being, yet our understanding of their interrelationships remains limited. This study addresses this knowledge gap by developing a deep contrastive learning (DCL) model to infer housing-household relationships using the American Com...
2502.11211
A Survey of LLM-based Agents in Medicine: How far are we from Baymax?
cs.CL cs.AI cs.CV
Large Language Models (LLMs) are transforming healthcare through the development of LLM-based agents that can understand, reason about, and assist with medical tasks. This survey provides a comprehensive review of LLM-based agents in medicine, examining their architectures, applications, and challenges. We analyze th...
2502.11213
Stochastic Optimization of Inventory at Large-scale Supply Chains
math.OC cs.AI cs.LG
Today's global supply chains face growing challenges due to rapidly changing market conditions, increased network complexity and inter-dependency, and dynamic uncertainties in supply, demand, and other factors. To combat these challenges, organizations employ Material Requirements Planning (MRP) software solutions to...
2502.11221
PlanGenLLMs: A Modern Survey of LLM Planning Capabilities
cs.AI cs.CL
LLMs have immense potential for generating plans, transforming an initial world state into a desired goal state. A large body of research has explored the use of LLMs for various planning tasks, from web navigation to travel planning and database querying. However, many of these systems are tailored to specific probl...
2502.11223
Asymmetric Conflict and Synergy in Post-training for LLM-based Multilingual Machine Translation
cs.CL
The emergence of Large Language Models (LLMs) has advanced multilingual machine translation (MMT), yet the Curse of Multilinguality (CoM) remains a major challenge. Existing work in LLM-based MMT typically mitigates this issue by scaling up the training and computation budget, which raises a critical question: Is sc...
2502.11225
METAFOR: A Hybrid Metaheuristics Software Framework for Single-Objective Continuous Optimization Problems
cs.NE cs.AI
Hybrid metaheuristics are powerful techniques for solving difficult optimization problems that exploit the strengths of different approaches in a single implementation. For algorithm designers, however, creating hybrid metaheuristic implementations has become increasingly challenging due to the vast number of design ...
2502.11227
Integrating Retrospective Framework in Multi-Robot Collaboration
cs.RO
Recent advancements in Large Language Models (LLMs) have demonstrated substantial capabilities in enhancing communication and coordination in multi-robot systems. However, existing methods often struggle to achieve efficient collaboration and decision-making in dynamic and uncertain environments, which are common in ...
2502.11228
Vendi-RAG: Adaptively Trading-Off Diversity And Quality Significantly Improves Retrieval Augmented Generation With LLMs
cs.CL cs.AI
Retrieval-augmented generation (RAG) enhances large language models (LLMs) for domain-specific question-answering (QA) tasks by leveraging external knowledge sources. However, traditional RAG systems primarily focus on relevance-based retrieval and often struggle with redundancy, especially when reasoning requires co...
2502.11229
Provable and Practical Online Learning Rate Adaptation with Hypergradient Descent
math.OC cs.LG
This paper investigates the convergence properties of the hypergradient descent method (HDM), a 25-year-old heuristic originally proposed for adaptive stepsize selection in stochastic first-order methods. We provide the first rigorous convergence analysis of HDM using the online learning framework of [Gao24] and appl...
2502.11234
MaskFlow: Discrete Flows For Flexible and Efficient Long Video Generation
cs.CV
Generating long, high-quality videos remains a challenge due to the complex interplay of spatial and temporal dynamics and hardware limitations. In this work, we introduce \textbf{MaskFlow}, a unified video generation framework that combines discrete representations with flow-matching to enable efficient generation o...
2502.11238
Span-Agnostic Optimal Sample Complexity and Oracle Inequalities for Average-Reward RL
cs.LG cs.IT math.IT math.OC stat.ML
We study the sample complexity of finding an $\varepsilon$-optimal policy in average-reward Markov Decision Processes (MDPs) with a generative model. The minimax optimal span-based complexity of $\widetilde{O}(SAH/\varepsilon^2)$, where $H$ is the span of the optimal bias function, has only been achievable with prior...
2502.11239
Towards identifying possible fault-tolerant advantage of quantum linear system algorithms in terms of space, time and energy
quant-ph cs.AI cs.LG math.OC
Quantum computing, a prominent non-Von Neumann paradigm beyond Moore's law, can offer superpolynomial speedups for certain problems. Yet its advantages in efficiency for tasks like machine learning remain under investigation, and quantum noise complicates resource estimations and classical comparisons. We provide a d...
2502.11244
Soteria: Language-Specific Functional Parameter Steering for Multilingual Safety Alignment
cs.CL cs.AI
Ensuring consistent safety across multiple languages remains a significant challenge for large language models (LLMs). We introduce Soteria, a lightweight yet powerful strategy that locates and minimally adjusts the "functional heads" most responsible for harmful content generation in each language. By altering only ...
2502.11245
Shortcuts and Identifiability in Concept-based Models from a Neuro-Symbolic Lens
cs.LG cs.AI
Concept-based Models are neural networks that learn a concept extractor to map inputs to high-level concepts and an inference layer to translate these into predictions. Ensuring these modules produce interpretable concepts and behave reliably in out-of-distribution is crucial, yet the conditions for achieving this re...
2502.11246
MemeSense: An Adaptive In-Context Framework for Social Commonsense Driven Meme Moderation
cs.IR cs.CL cs.CY
Memes present unique moderation challenges due to their subtle, multimodal interplay of images, text, and social context. Standard systems relying predominantly on explicit textual cues often overlook harmful content camouflaged by irony, symbolism, or cultural references. To address this gap, we introduce MemeSense,...
2502.11248
Prevalence, Sharing Patterns, and Spreaders of Multimodal AI-Generated Content on X during the 2024 U.S. Presidential Election
cs.SI cs.CY
While concerns about the risks of AI-generated content (AIGC) to the integrity of social media discussions have been raised, little is known about its scale and the actors responsible for its dissemination online. In this work, we identify and characterize the prevalence, sharing patterns, and spreaders of AIGC in di...
2502.11250
Uncertainty-Aware Step-wise Verification with Generative Reward Models
cs.CL
Complex multi-step reasoning tasks, such as solving mathematical problems, remain challenging for large language models (LLMs). While outcome supervision is commonly used, process supervision via process reward models (PRMs) provides intermediate rewards to verify step-wise correctness in solution traces. However, as...
2502.11251
Explaining Necessary Truths
cs.AI cs.CC math.HO q-bio.NC
Knowing the truth is rarely enough -- we also seek out reasons why the fact is true. While much is known about how we explain contingent truths, we understand less about how we explain facts, such as those in mathematics, that are true as a matter of logical necessity. We present a framework, based in computational c...
2502.11256
Unveiling Environmental Impacts of Large Language Model Serving: A Functional Unit View
cs.LG cs.AR cs.CL
Large language models (LLMs) offer powerful capabilities but come with significant environmental costs, particularly in carbon emissions. Existing studies benchmark these emissions but lack a standardized basis for comparison across models. To address this, we introduce the concept of a functional unit (FU) and devel...
2502.11258
Leveraging Conditional Mutual Information to Improve Large Language Model Fine-Tuning For Classification
cs.CL
Although large language models (LLMs) have demonstrated remarkable capabilities in recent years, the potential of information theory (IT) to enhance LLM development remains underexplored. This paper introduces the information theoretic principle of Conditional Mutual Information (CMI) to LLM fine-tuning for classific...
2502.11259
Exploiting network optimization stability for enhanced PET image denoising using deep image prior
physics.med-ph cs.CV
PET is affected by statistical noise due to constraints on tracer dose and scan duration, impacting both diagnostic performance and quantitative accuracy. While deep learning (DL)-based PET denoising methods have been used to improve image quality, they may introduce over-smoothing, compromising quantitative accuracy...
2502.11260
Scalable Multi-Agent Offline Reinforcement Learning and the Role of Information
cs.LG
Offline Reinforcement Learning (RL) focuses on learning policies solely from a batch of previously collected data, offering the potential to leverage such datasets effectively without the need for costly or risky active exploration. While recent advances in Offline Multi-Agent RL (MARL) have shown promise, most exist...
2502.11262
Generating Skyline Datasets for Data Science Models
cs.DB cs.AI
Preparing high-quality datasets required by various data-driven AI and machine learning models has become a cornerstone task in data-driven analysis. Conventional data discovery methods typically integrate datasets towards a single pre-defined quality measure that may lead to bias for downstream tasks. This paper int...
2502.11265
Towards Automatic Identification of Missing Tissues using a Geometric-Learning Correspondence Model
cs.CV physics.med-ph
Missing tissue presents a big challenge for dose mapping, e.g., in the reirradiation setting. We propose a pipeline to identify missing tissue on intra-patient structure meshes using a previously trained geometric-learning correspondence model. For our application, we relied on the prediction discrepancies between fo...
2502.11266
The Shrinking Landscape of Linguistic Diversity in the Age of Large Language Models
cs.CL
Language is far more than a communication tool. A wealth of information - including but not limited to the identities, psychological states, and social contexts of its users - can be gleaned through linguistic markers, and such insights are routinely leveraged across diverse fields ranging from product development an...
2502.11267
Prompting in the Dark: Assessing Human Performance in Prompt Engineering for Data Labeling When Gold Labels Are Absent
cs.HC cs.AI cs.CL cs.LG
Millions of users prompt large language models (LLMs) for various tasks, but how good are people at prompt engineering? Do users actually get closer to their desired outcome over multiple iterations of their prompts? These questions are crucial when no gold-standard labels are available to measure progress. This pape...
2502.11268
Improved Unbiased Watermark for Large Language Models
cs.CL
As artificial intelligence surpasses human capabilities in text generation, the necessity to authenticate the origins of AI-generated content has become paramount. Unbiased watermarks offer a powerful solution by embedding statistical signals into language model-generated text without distorting the quality. In this ...
2502.11269
Unlocking the Potential of Generative AI through Neuro-Symbolic Architectures: Benefits and Limitations
cs.AI cs.LG cs.SC
Neuro-symbolic artificial intelligence (NSAI) represents a transformative approach in artificial intelligence (AI) by combining deep learning's ability to handle large-scale and unstructured data with the structured reasoning of symbolic methods. By leveraging their complementary strengths, NSAI enhances generalizati...
2502.11271
OctoTools: An Agentic Framework with Extensible Tools for Complex Reasoning
cs.LG cs.CL cs.CV cs.MA
Solving complex reasoning tasks may involve visual understanding, domain knowledge retrieval, numerical calculation, and multi-step reasoning. Existing methods augment large language models (LLMs) with external tools but are restricted to specialized domains, limited tool types, or require additional training data. I...
2502.11273
FairFare: A Tool for Crowdsourcing Rideshare Data to Empower Labor Organizers
cs.HC cs.AI cs.CY
Rideshare workers experience unpredictable working conditions due to gig work platforms' reliance on opaque AI and algorithmic systems. In response to these challenges, we found that labor organizers want data to help them advocate for legislation to increase the transparency and accountability of these platforms. To...
2502.11275
Cuckoo: An IE Free Rider Hatched by Massive Nutrition in LLM's Nest
cs.CL
Massive high-quality data, both pre-training raw texts and post-training annotations, have been carefully prepared to incubate advanced large language models (LLMs). In contrast, for information extraction (IE), pre-training data, such as BIO-tagged sequences, are hard to scale up. We show that IE models can act as f...
2502.11276
The Rotary Position Embedding May Cause Dimension Inefficiency in Attention Heads for Long-Distance Retrieval
cs.CL cs.LG
The Rotary Position Embedding (RoPE) is widely used in the attention heads of many large language models (LLMs). It rotates dimensions in the query and the key vectors by different angles according to their positions in the input sequence. For long context modeling, the range of positions may vary a lot, and thus RoPE...
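The rotation the abstract describes can be sketched in a few lines of pure Python. This is a minimal illustrative sketch, not the paper's code: the half-split pairing convention and the base of 10000 are common RoPE defaults assumed here.

```python
import math

def rope(x, pos, base=10000.0):
    """Rotate dimension pairs (x[i], x[i+half]) of x by position-dependent angles.

    Each pair i is rotated by angle pos * base**(-i/half), so lower pairs
    spin faster; the dot product of two rotated vectors then depends only
    on their relative position.
    """
    half = len(x) // 2
    out1, out2 = [], []
    for i in range(half):
        theta = pos * base ** (-i / half)  # per-pair frequency (assumed convention)
        c, s = math.cos(theta), math.sin(theta)
        out1.append(x[i] * c - x[i + half] * s)
        out2.append(x[i] * s + x[i + half] * c)
    return out1 + out2

# Toy query vector at two positions; position 0 leaves it unrotated.
q = [0.1 * i for i in range(1, 9)]
q_rotated = rope(q, pos=5)
```

Because each pair is rotated by an angle proportional to its position, rotating a query to position m and a key to position n yields a dot product that depends only on m - n, which is the relative-position property that makes RoPE attractive for attention.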
2502.11278
Reducing Computational Complexity of Rigidity-Based UAV Trajectory Optimization for Real-Time Cooperative Target Localization
eess.SY cs.SY
Accurate and swift localization of the target is crucial in emergencies. However, accurate position data of a target mobile device, typically obtained from global navigation satellite systems (GNSS), cellular networks, or WiFi, may not always be accessible to first responders. For instance, 1) accuracy and availabili...
2502.11279
Neural Operators for Stochastic Modeling of Nonlinear Structural System Response to Natural Hazards
cs.LG
Traditionally, neural networks have been employed to learn the mapping between finite-dimensional Euclidean spaces. However, recent research has opened up new horizons, focusing on the utilization of deep neural networks to learn operators capable of mapping infinite-dimensional function spaces. In this work, we empl...
2502.11284
Balancing the Budget: Understanding Trade-offs Between Supervised and Preference-Based Finetuning
cs.LG
Post-training of Large Language Models often involves a pipeline of Supervised Finetuning (SFT) followed by Preference Finetuning (PFT) using methods like Direct Preference Optimization. Both stages require annotated data that are very different in structure and costs. We study how to optimally allocate a fixed train...
2502.11287
MC-BEVRO: Multi-Camera Bird Eye View Road Occupancy Detection for Traffic Monitoring
cs.CV
Single camera 3D perception for traffic monitoring faces significant challenges due to occlusion and limited field of view. Moreover, fusing information from multiple cameras at the image feature level is difficult because of different view angles. Further, the necessity for practical implementation and compatibility...
2502.11291
Dialogue-based Explanations for Logical Reasoning using Structured Argumentation
cs.AI cs.DB cs.HC cs.LO
The problem of explaining inconsistency-tolerant reasoning in knowledge bases (KBs) is a prominent topic in Artificial Intelligence (AI). While there is some work on this problem, the explanations provided by existing approaches often lack critical information or fail to be expressive enough for non-binary conflicts....
2502.11295
Game-Of-Goals: Using adversarial games to achieve strategic resilience
cs.AI cs.GT
Our objective in this paper is to develop a machinery that makes a given organizational strategic plan resilient to the actions of competitor agents (adverse environmental actions). We assume that we are given a goal tree representing strategic goals (which can also be seen as business requirements for a software system) wit...
2502.11298
Integrating Language Models for Enhanced Network State Monitoring in DRL-Based SFC Provisioning
cs.NI cs.AI cs.CL
Efficient Service Function Chain (SFC) provisioning and Virtual Network Function (VNF) placement are critical for enhancing network performance in modern architectures such as Software-Defined Networking (SDN) and Network Function Virtualization (NFV). While Deep Reinforcement Learning (DRL) aids decision-making in d...
2502.11299
Grassroots Platforms with Atomic Transactions: Social Networks, Cryptocurrencies, and Democratic Federations
cs.DC cs.NI cs.SI
Grassroots platforms aim to offer an egalitarian alternative to global platforms -- centralized/autocratic (Facebook etc.) and decentralized/plutocratic (Bitcoin etc.) alike. Key grassroots platforms include grassroots social networks, grassroots cryptocurrencies, and grassroots democratic federations. Previously, gr...
2502.11300
CORDIAL: Can Multimodal Large Language Models Effectively Understand Coherence Relationships?
cs.CL cs.AI cs.CV
Multimodal Large Language Models (MLLMs) are renowned for their superior instruction-following and reasoning capabilities across diverse problem domains. However, existing benchmarks primarily focus on assessing factual and logical correctness in downstream tasks, with limited emphasis on evaluating MLLMs' ability to...
2502.11304
Leveraging Multimodal-LLMs Assisted by Instance Segmentation for Intelligent Traffic Monitoring
cs.AI cs.CL cs.CV
A robust and efficient traffic monitoring system is essential for smart cities and Intelligent Transportation Systems (ITS), using sensors and cameras to track vehicle movements, optimize traffic flow, reduce congestion, enhance road safety, and enable real-time adaptive traffic control. Traffic monitoring models mus...
2502.11305
Non-Uniform Memory Sampling in Experience Replay
cs.LG
Continual learning is the process of training machine learning models on a sequence of tasks where data distributions change over time. A well-known obstacle in this setting is catastrophic forgetting, a phenomenon in which a model drastically loses performance on previously learned tasks when learning new ones. A po...
2502.11306
Smoothing Out Hallucinations: Mitigating LLM Hallucination with Smoothed Knowledge Distillation
cs.CL cs.LG
Large language models (LLMs) often suffer from hallucination, generating factually incorrect or ungrounded content, which limits their reliability in high-stakes applications. A key factor contributing to hallucination is the use of hard labels during training, which enforce deterministic supervision, encourage overc...
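The contrast the abstract draws between hard one-hot labels and smoothed soft targets can be sketched as follows. This is an illustrative toy, not the paper's method: the logits, temperature, and function names are assumptions.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution; higher temperature smooths it."""
    exps = [math.exp(l / temperature) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def cross_entropy(targets, logits):
    """Cross-entropy of the student's softmax against a target distribution."""
    probs = softmax(logits)
    return -sum(t * math.log(p) for t, p in zip(targets, probs))

teacher_logits = [2.0, 1.0, 0.1]
student_logits = [1.5, 0.8, 0.2]

hard = [1.0, 0.0, 0.0]                           # hard label: all mass on one token
soft = softmax(teacher_logits, temperature=2.0)  # smoothed teacher distribution

loss_hard = cross_entropy(hard, student_logits)
loss_soft = cross_entropy(soft, student_logits)
```

Training against `soft` rather than `hard` spreads supervision across plausible alternatives instead of forcing all probability onto one token, which is the kind of softened signal smoothed knowledge distillation exploits.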
2502.11307
Exploiting Point-Language Models with Dual-Prompts for 3D Anomaly Detection
cs.CV cs.AI
Anomaly detection (AD) in 3D point clouds is crucial in a wide range of industrial applications, especially in various forms of precision manufacturing. Considering the industrial demand for reliable 3D AD, several methods have been developed. However, most of these approaches typically require training separate mode...
2502.11308
ALGEN: Few-shot Inversion Attacks on Textual Embeddings using Alignment and Generation
cs.CR cs.AI cs.CL
With the growing popularity of Large Language Models (LLMs) and vector databases, private textual data is increasingly processed and stored as numerical embeddings. However, recent studies have proven that such embeddings are vulnerable to inversion attacks, where original text is reconstructed to reveal sensitive in...
2502.11310
Generalized Factor Neural Network Model for High-dimensional Regression
stat.ML cs.LG q-fin.ST
We tackle the challenges of modeling high-dimensional data sets, particularly those with latent low-dimensional structures hidden within complex, non-linear, and noisy relationships. Our approach enables a seamless integration of concepts from non-parametric regression, factor models, and neural networks for high-dim...