Columns: id (string, 9–16 chars) · title (string, 4–278 chars) · categories (string, 5–104 chars) · abstract (string, 6–4.09k chars)
2502.12499
GPU Memory Usage Optimization for Backward Propagation in Deep Network Training
cs.LG cs.DS
In modern Deep Learning, it has been a trend to design larger Deep Neural Networks (DNNs) for the execution of more complex tasks and better accuracy. On the other hand, Convolutional Neural Networks (CNNs) have become the standard method for most computer vision tasks. However, the memory allocation for the inter...
2502.12501
Crowd Comparative Reasoning: Unlocking Comprehensive Evaluations for LLM-as-a-Judge
cs.CL
LLM-as-a-Judge, which generates chain-of-thought (CoT) judgments, has become a widely adopted auto-evaluation method. However, its reliability is compromised by the CoT reasoning's inability to capture comprehensive and deeper details, often leading to incomplete outcomes. Existing methods mainly rely on majority vot...
2502.12502
Efficient OpAmp Adaptation for Zoom Attention to Golden Contexts
cs.CL
Large language models (LLMs) have shown significant promise in question-answering (QA) tasks, particularly in retrieval-augmented generation (RAG) scenarios and long-context applications. However, their performance is hindered by noisy reference documents, which often distract from essential information. Despite fine...
2502.12507
Mixture of Attention Yields Accurate Results for Tabular Data
cs.LG cs.AI
Tabular data inherently exhibits significant feature heterogeneity, but existing transformer-based methods lack specialized mechanisms to handle this property. To bridge the gap, we propose MAYA, an encoder-decoder transformer-based framework. In the encoder, we design a Mixture of Attention (MOA) that constructs mul...
2502.12508
Understanding Generalization in Transformers: Error Bounds and Training Dynamics Under Benign and Harmful Overfitting
cs.LG
Transformers serve as the foundational architecture for many successful large-scale models, demonstrating the ability to overfit the training data while maintaining strong generalization on unseen data, a phenomenon known as benign overfitting. However, research on how the training dynamics influence error bounds wit...
2502.12509
LegalCore: A Dataset for Legal Documents Event Coreference Resolution
cs.CL cs.AI
Recognizing events and their coreferential mentions in a document is essential for understanding semantic meanings of text. The existing research on event coreference resolution is mostly limited to news articles. In this paper, we present the first dataset for the legal domain, LegalCore, which has been annotated wi...
2502.12510
Aspect-Guided Multi-Level Perturbation Analysis of Large Language Models in Automated Peer Review
cs.CL
We propose an aspect-guided, multi-level perturbation framework to evaluate the robustness of Large Language Models (LLMs) in automated peer review. Our framework explores perturbations in three key components of the peer review process-papers, reviews, and rebuttals-across several quality aspects, including contribu...
2502.12511
Myna: Masking-Based Contrastive Learning of Musical Representations
cs.SD cs.AI cs.LG
We present Myna, a simple yet effective approach for self-supervised musical representation learning. Built on a contrastive learning framework, Myna introduces two key innovations: (1) the use of a Vision Transformer (ViT) on mel-spectrograms as the backbone and (2) a novel data augmentation strategy, token masking,...
2502.12513
RealSyn: An Effective and Scalable Multimodal Interleaved Document Transformation Paradigm
cs.CV
After pre-training on extensive image-text pairs, Contrastive Language-Image Pre-training (CLIP) demonstrates promising performance on a wide variety of benchmarks. However, a substantial volume of non-paired data, such as multimodal interleaved documents, remains underutilized for vision-language representation lear...
2502.12514
Memory-updated-based Framework for 100% Reliable Flexible Flat Cables Insertion
cs.RO
Automatic assembly lines have increasingly replaced human labor in various tasks; however, the automation of Flexible Flat Cable (FFC) insertion remains unrealized due to its high requirement for effective feedback and dynamic operation, limiting approximately 11% of global industrial capacity. Despite lots of approa...
2502.12516
Can LLMs Extract Frame-Semantic Arguments?
cs.CL
Frame-semantic parsing is a critical task in natural language understanding, yet the ability of large language models (LLMs) to extract frame-semantic arguments remains underexplored. This paper presents a comprehensive evaluation of LLMs on frame-semantic argument identification, analyzing the impact of input repres...
2502.12518
New Constant Dimension Codes From the Inserting Mixed Dimension Construction and Multilevel Construction
cs.IT math.IT
Constant dimension codes (CDCs) are essential for error correction in random network coding. A fundamental problem of CDCs is to determine their maximal possible size for given parameters. Inserting construction and multilevel construction are two effective techniques for constructing CDCs. We first provide a suffici...
2502.12520
SAFEERASER: Enhancing Safety in Multimodal Large Language Models through Multimodal Machine Unlearning
cs.CV
As Multimodal Large Language Models (MLLMs) develop, their potential security issues have become increasingly prominent. Machine Unlearning (MU), as an effective strategy for forgetting specific knowledge in training data, has been widely used in privacy protection. However, MU for safety in MLLM has yet to be fully ...
2502.12521
Inference-Time Computations for LLM Reasoning and Planning: A Benchmark and Insights
cs.AI cs.LG
We examine the reasoning and planning capabilities of large language models (LLMs) in solving complex tasks. Recent advances in inference-time techniques demonstrate the potential to enhance LLM reasoning without additional training by exploring intermediate steps during inference. Notably, OpenAI's o1 model shows pr...
2502.12523
Cohesive Subgraph Discovery in Hypergraphs: A Locality-Driven Indexing Framework
cs.SI
Hypergraphs are increasingly employed to model complex, diverse relationships in modern networks, effectively capturing higher-order interactions. A critical challenge in this domain is the discovery of cohesive subgraphs, which provides valuable insights into hypergraph structures. However, selecting suitable parame...
2502.12524
YOLOv12: Attention-Centric Real-Time Object Detectors
cs.CV cs.AI
Enhancing the network architecture of the YOLO framework has been crucial for a long time, but has focused on CNN-based improvements despite the proven superiority of attention mechanisms in modeling capabilities. This is because attention-based models cannot match the speed of CNN-based models. This paper proposes a...
2502.12525
From Abstract to Actionable: Pairwise Shapley Values for Explainable AI
cs.LG cs.AI
Explainable AI (XAI) is critical for ensuring transparency, accountability, and trust in machine learning systems as black-box models are increasingly deployed within high-stakes domains. Among XAI methods, Shapley values are widely used for their fairness and consistency axioms. However, prevalent Shapley value appr...
2502.12527
Comprehensive Assessment and Analysis for NSFW Content Erasure in Text-to-Image Diffusion Models
cs.CV
Text-to-image (T2I) diffusion models have gained widespread application across various domains, demonstrating remarkable creative potential. However, the strong generalization capabilities of these models can inadvertently lead them to generate NSFW content even with efforts on filtering NSFW content from the training...
2502.12528
Contextual Linear Bandits with Delay as Payoff
cs.LG
A recent work by Schlisselberg et al. (2024) studies a delay-as-payoff model for stochastic multi-armed bandits, where the payoff (either loss or reward) is delayed for a period that is proportional to the payoff itself. While this captures many real-world applications, the simple multi-armed bandit setting limits th...
2502.12529
Alternating Regret for Online Convex Optimization
cs.LG
Motivated by alternating learning dynamics in two-player games, a recent work by Cevher et al. (2024) shows that $o(\sqrt{T})$ alternating regret is possible for any $T$-round adversarial Online Linear Optimization (OLO) problem, and leaves as an open question whether the same is true for general Online Convex Optimizat...
2502.12530
Policy-to-Language: Train LLMs to Explain Decisions with Flow-Matching Generated Rewards
cs.CL cs.LG
As humans increasingly share environments with diverse agents powered by RL, LLMs, and beyond, the ability to explain their policies in natural language will be vital for reliable coexistence. In this paper, we build a model-agnostic explanation generator based on an LLM. The technical novelty is that the rewards for...
2502.12531
GSCE: A Prompt Framework with Enhanced Reasoning for Reliable LLM-driven Drone Control
cs.RO cs.AI
The integration of Large Language Models (LLMs) into robotic control, including drones, has the potential to revolutionize autonomous systems. Research studies have demonstrated that LLMs can be leveraged to support robotic operations. However, when facing tasks with complex reasoning, concerns and challenges are rai...
2502.12532
CityEQA: A Hierarchical LLM Agent on Embodied Question Answering Benchmark in City Space
cs.AI
Embodied Question Answering (EQA) has primarily focused on indoor environments, leaving the complexities of urban settings - spanning environment, action, and perception - largely unexplored. To bridge this gap, we introduce CityEQA, a new task where an embodied agent answers open-vocabulary questions through active ...
2502.12534
NoKSR: Kernel-Free Neural Surface Reconstruction via Point Cloud Serialization
cs.CV
We present a novel approach to large-scale point cloud surface reconstruction by developing an efficient framework that converts an irregular point cloud into a signed distance field (SDF). Our backbone builds upon recent transformer-based architectures (i.e., PointTransformerV3), which serialize the point cloud into...
2502.12535
Learning Transformation-Isomorphic Latent Space for Accurate Hand Pose Estimation
cs.CV
Vision-based regression tasks, such as hand pose estimation, have achieved higher accuracy and faster convergence through representation learning. However, existing representation learning methods often encounter the following issues: the high semantic level of features extracted from images is inadequate for regress...
2502.12536
An Algorithm Board in Neural Decoding
cs.NE cs.AI
Understanding the mechanisms of neural encoding and decoding has always been a highly interesting research topic in fields such as neuroscience and cognitive intelligence. In prior studies, some researchers identified a symmetry in neural data decoded by unsupervised methods in motor scenarios and constructed a cogni...
2502.12537
Finding Optimal Trading History in Reinforcement Learning for Stock Market Trading
cs.LG cs.AI
This paper investigates the optimization of temporal windows in Financial Deep Reinforcement Learning (DRL) models using 2D Convolutional Neural Networks (CNNs). We introduce a novel approach to treating the temporal field as a hyperparameter and examine its impact on model performance across various datasets and fea...
2502.12539
Design and Implementation of a Dual Uncrewed Surface Vessel Platform for Bathymetry Research under High-flow Conditions
cs.RO cs.LG cs.SY eess.SY
Bathymetry, the study of underwater topography, relies on sonar mapping of submerged structures. These measurements, critical for infrastructure health monitoring, often require expensive instrumentation. The high financial risk associated with sensor damage or vessel loss creates a reluctance to deploy uncrewed surf...
2502.12541
When Segmentation Meets Hyperspectral Image: New Paradigm for Hyperspectral Image Classification
cs.CV
Hyperspectral image (HSI) classification is a cornerstone of remote sensing, enabling precise material and land-cover identification through rich spectral information. While deep learning has driven significant progress in this task, small patch-based classifiers, which account for over 90% of the progress, face limi...
2502.12542
Computing Voting Rules with Improvement Feedback
cs.GT cs.AI
Aggregating preferences under incomplete or constrained feedback is a fundamental problem in social choice and related domains. While prior work has established strong impossibility results for pairwise comparisons, this paper extends the inquiry to improvement feedback, where voters express incremental adjustments r...
2502.12545
IM360: Textured Mesh Reconstruction for Large-scale Indoor Mapping with 360$^\circ$ Cameras
cs.CV
We present a novel 3D reconstruction pipeline for 360$^\circ$ cameras for 3D mapping and rendering of indoor environments. Traditional Structure-from-Motion (SfM) methods may not work well in large-scale indoor scenes due to the prevalence of textureless and repetitive regions. To overcome these challenges, our appro...
2502.12546
Spatiotemporal Multi-Camera Calibration using Freely Moving People
cs.CV
We propose a novel method for spatiotemporal multi-camera calibration using freely moving people in multiview videos. Since calibrating multiple cameras and finding matches across their views are inherently interdependent, performing both in a unified framework poses a significant challenge. We address these issues a...
2502.12548
Improving the Stability of GNN Force Field Models by Reducing Feature Correlation
cs.LG cs.AI
Recently, Graph Neural Network based Force Field (GNNFF) models are widely used in Molecular Dynamics (MD) simulation, which is one of the most cost-effective means in semiconductor material research. However, even though such models provide high accuracy in energy and force Mean Absolute Error (MAE) over trained (in-distri...
2502.12552
LLM Safety for Children
cs.CY cs.AI
This paper analyzes the safety of Large Language Models (LLMs) in interactions with children below the age of 18 years. Despite the transformative applications of LLMs in various aspects of children's lives such as education and therapy, there remains a significant gap in understanding and mitigating potential content ha...
2502.12555
Warm Starting of CMA-ES for Contextual Optimization Problems
cs.NE
Several practical applications of evolutionary computation possess objective functions that receive the design variables and externally given parameters. Such problems are termed contextual optimization problems. These problems require finding the optimal solutions corresponding to the given context vectors. Existing...
2502.12556
From Maneuver to Mishap: A Systematic Literature Review on U-Turn Safety Risks
eess.SY cs.SY
Understanding the impacts of U-turn configurations on intersection safety and traffic operations is essential for developing effective strategies to enhance road safety and efficiency. Extensive research has been conducted to investigate the role of geometric designs, driver behavior, and advanced technologies in mit...
2502.12558
MomentSeeker: A Comprehensive Benchmark and A Strong Baseline For Moment Retrieval Within Long Videos
cs.CV cs.AI
Retrieval augmented generation (RAG) holds great promise in addressing challenges associated with long video understanding. These methods retrieve useful moments from long videos for their presented tasks, thereby enabling multimodal large language models (MLLMs) to generate high-quality answers in a cost-effective w...
2502.12560
How does a Language-Specific Tokenizer affect LLMs?
cs.CL
The necessity of language-specific tokenizers intuitively appears crucial for effective natural language processing, yet empirical analyses on their significance and underlying reasons are lacking. This study explores how language-specific tokenizers influence the behavior of Large Language Models predominantly train...
2502.12561
UXAgent: An LLM Agent-Based Usability Testing Framework for Web Design
cs.HC cs.CL
Usability testing is a fundamental yet challenging (e.g., inflexible to iterate on study design flaws and hard to recruit study participants) research method for user experience (UX) researchers to evaluate a web design. Recent advances in Large Language Model-simulated Agent (LLM-Agent) research inspired us to desi...
2502.12562
SEA: Low-Resource Safety Alignment for Multimodal Large Language Models via Synthetic Embeddings
cs.CL cs.CR cs.MM
Multimodal Large Language Models (MLLMs) have serious security vulnerabilities. While safety alignment using multimodal datasets consisting of text and data of additional modalities can effectively enhance MLLM's security, it is costly to construct these datasets. Existing low-resource security alignment methods, incl...
2502.12563
Evaluating Language Models on Grooming Risk Estimation Using Fuzzy Theory
cs.CL cs.AI cs.LG
Encoding implicit language presents a challenge for language models, especially in high-risk domains where maintaining high precision is important. Automated detection of online child grooming is one such critical domain, where predators manipulate victims using a combination of explicit and implicit language to conv...
2502.12564
Sample Efficient Omniprediction and Downstream Swap Regret for Non-Linear Losses
cs.LG cs.GT
We define "decision swap regret" which generalizes both prediction for downstream swap regret and omniprediction, and give algorithms for obtaining it for arbitrary multi-dimensional Lipschitz loss functions in online adversarial settings. We also give sample complexity bounds in the batch setting via an online-to-ba...
2502.12565
Self Iterative Label Refinement via Robust Unlabeled Learning
cs.CL
Recent advances in large language models (LLMs) have yielded impressive performance on various tasks, yet they often depend on high-quality feedback that can be costly. Self-refinement methods attempt to leverage LLMs' internal evaluation mechanisms with minimal human supervision; however, these approaches frequently...
2502.12566
Exploring the Impact of Personality Traits on LLM Bias and Toxicity
cs.AI
With the different roles that AI is expected to play in human life, imbuing large language models (LLMs) with different personalities has attracted increasing research interests. While the "personification" enhances human experiences of interactivity and adaptability of LLMs, it gives rise to critical concerns about ...
2502.12567
DeltaDiff: A Residual-Guided Diffusion Model for Enhanced Image Super-Resolution
cs.CV
Recently, the application of diffusion models in super-resolution tasks has become a popular research direction. Existing work focuses on fully migrating diffusion models to SR tasks. Diffusion models were originally proposed in the field of image generation, so in order to make the generated results diverse, the diffusion ...
2502.12568
A Cognitive Writing Perspective for Constrained Long-Form Text Generation
cs.CL cs.AI
Like humans, Large Language Models (LLMs) struggle to generate high-quality long-form text that adheres to strict requirements in a single pass. This challenge is unsurprising, as successful human writing, according to the Cognitive Writing Theory, is a complex cognitive process involving iterative planning, translat...
2502.12569
Maximizing Value in Challenge the Champ Tournaments
cs.DS cs.GT cs.MA
A tournament is a method to decide the winner in a competition, and describes the overall sequence in which matches between the players are held. While deciding a worthy winner is the primary goal of a tournament, a close second is to maximize the value generated for the matches played, with value for a match measure...
2502.12570
GVTNet: Graph Vision Transformer For Face Super-Resolution
cs.CV
Recent advances in face super-resolution research have utilized the Transformer architecture. This method processes the input image into a series of small patches. However, because of the strong correlation between different facial components in facial images, when it comes to super-resolution of low-resolution image...
2502.12571
A Novel Gain Modeling Technique for LLC Resonant Converters based on The Hybrid Deep-Learning/GMDH Neural Network
eess.SY cs.SY
This paper presents a novel hybrid approach for modeling the voltage gain of LLC resonant converters by combining deep-learning neural networks with the polynomial based Group Method of Data Handling (GMDH). While deep learning offers high accuracy in predicting nonlinear converter behavior, it produces complex netwo...
2502.12574
HeadInfer: Memory-Efficient LLM Inference by Head-wise Offloading
cs.LG cs.AI
Transformer-based large language models (LLMs) demonstrate impressive performance in long context generation. Extending the context length has disproportionately shifted the memory footprint of LLMs during inference to the key-value cache (KV cache). In this paper, we propose HEADINFER, which offloads the KV cache to...
2502.12575
DemonAgent: Dynamically Encrypted Multi-Backdoor Implantation Attack on LLM-based Agent
cs.CR cs.AI
As LLM-based agents become increasingly prevalent, backdoors can be implanted into agents through user queries or environment feedback, raising critical concerns regarding safety vulnerabilities. However, backdoor attacks are typically detectable by safety audits that analyze the reasoning process of agents. To this ...
2502.12576
A Fuzzy Evaluation of Sentence Encoders on Grooming Risk Classification
cs.CL cs.AI cs.LG
With the advent of social media, children are becoming increasingly vulnerable to the risk of grooming in online settings. Detecting grooming instances in an online conversation poses a significant challenge as the interactions are not necessarily sexually explicit, since the predators take time to build trust and a ...
2502.12579
CHATS: Combining Human-Aligned Optimization and Test-Time Sampling for Text-to-Image Generation
cs.CV
Diffusion models have emerged as a dominant approach for text-to-image generation. Key components such as the human preference alignment and classifier-free guidance play a crucial role in ensuring generation quality. However, their independent application in current text-to-image models continues to face significant...
2502.12581
The Majority Vote Paradigm Shift: When Popular Meets Optimal
stat.ML cs.AI cs.LG
Reliably labelling data typically requires annotations from multiple human workers. However, humans are far from being perfect. Hence, it is a common practice to aggregate labels gathered from multiple annotators to make a more confident estimate of the true label. Among many aggregation methods, the simple and well ...
2502.12582
Adaptive Prototype Model for Attribute-based Multi-label Few-shot Action Recognition
cs.CV
In real-world action recognition systems, incorporating more attributes helps achieve a more comprehensive understanding of human behavior. However, using a single model to simultaneously recognize multiple attributes can lead to a decrease in accuracy. In this work, we propose a novel method i.e. Adaptive Attribute ...
2502.12583
LongFaith: Enhancing Long-Context Reasoning in LLMs with Faithful Synthetic Data
cs.CL
Despite the growing development of long-context large language models (LLMs), data-centric approaches relying on synthetic data have been hindered by issues related to faithfulness, which limit their effectiveness in enhancing model performance on tasks such as long-context reasoning and question answering (QA). Thes...
2502.12584
Enhancing Semi-supervised Learning with Noisy Zero-shot Pseudolabels
cs.LG cs.AI
Semi-supervised learning (SSL) leverages limited labeled data alongside abundant unlabeled data to address labeling costs in machine learning. While recent foundation models enable zero-shot inference, attempts to integrate these capabilities into SSL through pseudo-labeling have shown mixed results due to unreliable...
2502.12586
G-Refer: Graph Retrieval-Augmented Large Language Model for Explainable Recommendation
cs.IR cs.CL
Explainable recommendation has demonstrated significant advantages in informing users about the logic behind recommendations, thereby increasing system transparency, effectiveness, and trustworthiness. To provide personalized and interpretable explanations, existing works often combine the generation capabilities of ...
2502.12587
RSMLP: A light Sampled MLP Structure for Incomplete Utterance Rewrite
cs.CL cs.AI
The Incomplete Utterance Rewriting (IUR) task has garnered significant attention in recent years. Its goal is to reconstruct conversational utterances to better align with the current context, thereby enhancing comprehension. In this paper, we introduce a novel and versatile lightweight method, Rewritten-Sampled MLP ...
2502.12589
RM-PoT: Reformulating Mathematical Problems and Solving via Program of Thoughts
cs.AI
Recently, substantial advancements have been made in training language models to carry out step-by-step reasoning for solving intricate numerical reasoning tasks. Beyond the methods used to solve these problems, the structure and formulation of the problems themselves also play a crucial role in determining the perfo...
2502.12591
CutPaste&Find: Efficient Multimodal Hallucination Detector with Visual-aid Knowledge Base
cs.CV cs.CL
Large Vision-Language Models (LVLMs) have demonstrated impressive multimodal reasoning capabilities, but they remain susceptible to hallucination, particularly object hallucination where non-existent objects or incorrect attributes are fabricated in generated descriptions. Existing detection methods achieve strong pe...
2502.12594
PASER: Post-Training Data Selection for Efficient Pruned Large Language Model Recovery
cs.CL
Model pruning is an effective approach for compressing large language models. However, this process often leads to significant degradation of model capabilities. While post-training techniques such as instruction tuning are commonly employed to recover model performance, existing methods often overlook the uneven det...
2502.12598
Bring Your Own Knowledge: A Survey of Methods for LLM Knowledge Expansion
cs.CL
Adapting large language models (LLMs) to new and diverse knowledge is essential for their lasting effectiveness in real-world applications. This survey provides an overview of state-of-the-art methods for expanding the knowledge of LLMs, focusing on integrating various knowledge types, including factual information, ...
2502.12599
Learning a High-quality Robotic Wiping Policy Using Systematic Reward Analysis and Visual-Language Model Based Curriculum
cs.RO cs.LG
Autonomous robotic wiping is an important task in various industries, ranging from industrial manufacturing to sanitization in healthcare. Deep reinforcement learning (Deep RL) has emerged as a promising algorithm; however, it often suffers from a high demand for repetitive reward engineering. Instead of relying on m...
2502.12600
Revisiting the Generalization Problem of Low-level Vision Models Through the Lens of Image Deraining
cs.CV
Generalization remains a significant challenge for low-level vision models, which often struggle with unseen degradations in real-world scenarios despite their success in controlled benchmarks. In this paper, we revisit the generalization problem in low-level vision models. Image deraining is selected as a case study...
2502.12601
COPU: Conformal Prediction for Uncertainty Quantification in Natural Language Generation
cs.CL
Uncertainty Quantification (UQ) for Natural Language Generation (NLG) is crucial for assessing the performance of Large Language Models (LLMs), as it reveals confidence in predictions, identifies failure modes, and gauges output reliability. Conformal Prediction (CP), a model-agnostic method that generates prediction...
2502.12602
Learning-based Dynamic Robot-to-Human Handover
cs.RO
This paper presents a novel learning-based approach to dynamic robot-to-human handover, addressing the challenges of delivering objects to a moving receiver. We hypothesize that dynamic handover, where the robot adjusts to the receiver's movements, results in more efficient and comfortable interaction compared to sta...
2502.12603
Disentangling Long-Short Term State Under Unknown Interventions for Online Time Series Forecasting
cs.LG cs.AI
Current methods for time series forecasting struggle in the online scenario, since it is difficult to preserve long-term dependency while adapting short-term changes when data are arriving sequentially. Although some recent methods solve this problem by controlling the updates of latent states, they cannot disentangl...
2502.12604
S2C: Learning Noise-Resistant Differences for Unsupervised Change Detection in Multimodal Remote Sensing Images
cs.CV
Unsupervised Change Detection (UCD) in multimodal Remote Sensing (RS) images remains a difficult challenge due to the inherent spatio-temporal complexity within data, and the heterogeneity arising from different imaging sensors. Inspired by recent advancements in Visual Foundation Models (VFMs) and Contrastive Learni...
2502.12605
Hypernetwork-based approach for optimal composition design in partially controlled multi-agent systems
cs.MA cs.LG
Partially Controlled Multi-Agent Systems (PCMAS) are comprised of controllable agents, managed by a system designer, and uncontrollable agents, operating autonomously. This study addresses an optimal composition design problem in PCMAS, which involves the system designer's problem, determining the optimal number and ...
2502.12607
Generalized Kernel Inducing Points by Duality Gap for Dataset Distillation
stat.ML cs.LG
We propose Duality Gap KIP (DGKIP), an extension of the Kernel Inducing Points (KIP) method for dataset distillation. While existing dataset distillation methods often rely on bi-level optimization, DGKIP eliminates the need for such optimization by leveraging duality theory in convex programming. The KIP method has ...
2502.12608
Unveiling Mode Connectivity in Graph Neural Networks
cs.LG cs.AI
A fundamental challenge in understanding graph neural networks (GNNs) lies in characterizing their optimization dynamics and loss landscape geometry, critical for improving interpretability and robustness. While mode connectivity, a lens for analyzing geometric properties of loss landscapes, has proven insightful for ...
2502.12611
Who Writes What: Unveiling the Impact of Author Roles on AI-generated Text Detection
cs.CL
The rise of Large Language Models (LLMs) necessitates accurate AI-generated text detection. However, current approaches largely overlook the influence of author characteristics. We investigate how sociolinguistic attributes-gender, CEFR proficiency, academic field, and language environment-impact state-of-the-art AI ...
2502.12614
Label Drop for Multi-Aspect Relation Modeling in Universal Information Extraction
cs.CL cs.AI
Universal Information Extraction (UIE) has garnered significant attention due to its ability to address model explosion problems effectively. Extractive UIE can achieve strong performance using a relatively small model, making it widely adopted. Extractive UIEs generally rely on task instructions for different tasks,...
2502.12616
Improving Chain-of-Thought Reasoning via Quasi-Symbolic Abstractions
cs.CL
Chain-of-Thought (CoT) represents a common strategy for reasoning in Large Language Models (LLMs) by decomposing complex tasks into intermediate inference steps. However, explanations generated via CoT are susceptible to content biases that negatively affect their robustness and faithfulness. To mitigate existing limi...
2502.12617
A Graph-Enhanced Deep-Reinforcement Learning Framework for the Aircraft Landing Problem
cs.LG cs.AI cs.SY eess.SY
The Aircraft Landing Problem (ALP) is one of the challenging problems in aircraft transportation and management. The challenge is to schedule the arriving aircraft in a sequence so that the cost and delays are optimized. There are various solution approaches to solving this problem, most of which are based on operati...
2502.12618
Uncertainty-Aware Graph Structure Learning
cs.LG
Graph Neural Networks (GNNs) have become a prominent approach for learning from graph-structured data. However, their effectiveness can be significantly compromised when the graph structure is suboptimal. To address this issue, Graph Structure Learning (GSL) has emerged as a promising technique that refines node conn...
2502.12623
DeepResonance: Enhancing Multimodal Music Understanding via Music-centric Multi-way Instruction Tuning
cs.SD cs.AI cs.CL cs.MM eess.AS
Recent advancements in music large language models (LLMs) have significantly improved music understanding tasks, which involve the model's ability to analyze and interpret various musical elements. These improvements primarily focused on integrating both music and text inputs. However, the potential of incorporating ...
2502.12624
Implicit Repair with Reinforcement Learning in Emergent Communication
cs.LG cs.MA
Conversational repair is a mechanism used to detect and resolve miscommunication and misinformation problems when two or more agents interact. One particular and underexplored form of repair in emergent communication is the implicit repair mechanism, where the interlocutor purposely conveys the desired information in...
2502.12627
DAMamba: Vision State Space Model with Dynamic Adaptive Scan
cs.CV
State space models (SSMs) have recently garnered significant attention in computer vision. However, due to the unique characteristics of image data, adapting SSMs from natural language processing to computer vision has not outperformed the state-of-the-art convolutional neural networks (CNNs) and Vision Transformers ...
2502.12629
Rate Maximization for Downlink Pinching-Antenna Systems
cs.IT eess.SP math.IT
In this letter, we consider a new type of flexible-antenna system, termed pinching-antenna, where multiple low-cost pinching antennas, realized by activating small dielectric particles on a dielectric waveguide, are jointly used to serve a single-antenna user. Our goal is to maximize the downlink transmission rate by...
2502.12630
Automating Prompt Leakage Attacks on Large Language Models Using Agentic Approach
cs.CR cs.AI
This paper presents a novel approach to evaluating the security of large language models (LLMs) against prompt leakage, the exposure of system-level prompts or proprietary configurations. We define prompt leakage as a critical threat to secure LLM deployment and introduce a framework for testing the robustness of LLMs...
2502.12631
Score-Based Diffusion Policy Compatible with Reinforcement Learning via Optimal Transport
cs.LG cs.AI
Diffusion policies have shown promise in learning complex behaviors from demonstrations, particularly for tasks requiring precise control and long-term planning. However, they face challenges in robustness when encountering distribution shifts. This paper explores improving diffusion-based imitation learning models t...
2502.12632
MALT Diffusion: Memory-Augmented Latent Transformers for Any-Length Video Generation
cs.CV cs.LG
Diffusion models are successful at synthesizing high-quality videos but are limited to generating short clips (e.g., 2-10 seconds). Synthesizing sustained footage (e.g., over minutes) still remains an open research question. In this paper, we propose MALT Diffusion (Memory-Augmented Latent Transformers), a new ...
2502.12633
One Size doesn't Fit All: A Personalized Conversational Tutoring Agent for Mathematics Instruction
cs.CL cs.AI
Large language models (LLMs) have been increasingly employed in various intelligent educational systems, simulating human tutors to facilitate effective human-machine interaction. However, previous studies often overlook the significance of recognizing and adapting to individual learner characteristics. Such adaptati...
2502.12634
Introducing Context Information in Lifelong Sequential Modeling using Temporal Convolutional Networks
cs.IR
The importance of lifelong sequential modeling (LSM) is growing in the realm of social media recommendation systems. A key component in this process is the attention module, which derives interest representations with respect to candidate items from the sequence. Typically, attention modules function in a point-wise ...
2502.12635
Corrupted but Not Broken: Rethinking the Impact of Corrupted Data in Visual Instruction Tuning
cs.CV
Visual Instruction Tuning (VIT) enhances Multimodal Large Language Models (MLLMs) but is hindered by corrupted datasets containing hallucinated content, incorrect responses, and poor OCR quality. While prior works focus on dataset refinement through high-quality data collection or rule-based filtering, they are co...
2502.12638
NExT-Mol: 3D Diffusion Meets 1D Language Modeling for 3D Molecule Generation
q-bio.QM cs.LG q-bio.BM
3D molecule generation is crucial for drug discovery and material design. While prior efforts focus on 3D diffusion models for their benefits in modeling continuous 3D conformers, they overlook the advantages of 1D SELFIES-based Language Models (LMs), which can generate 100% valid molecules and leverage the billion-s...
2502.12640
RecDreamer: Consistent Text-to-3D Generation via Uniform Score Distillation
cs.CV
Current text-to-3D generation methods based on score distillation often suffer from geometric inconsistencies, leading to repeated patterns across different poses of 3D assets. This issue, known as the Multi-Face Janus problem, arises because existing methods struggle to maintain consistency across varying poses and ...
2502.12654
Free Energy and Network Structure: Breaking Scale-Free Behaviour Through Information Processing Constraints
cs.SI physics.soc-ph
In this paper we show how the Free Energy Principle (FEP) can provide an explanation for why real-world networks deviate from scale-free behaviour, and how these characteristic deviations can emerge from constraints on information processing. We propose a minimal FEP model for node behaviour that reveals three distinct re...
2502.12655
LiMo-Calib: On-Site Fast LiDAR-Motor Calibration for Quadruped Robot-Based Panoramic 3D Sensing System
cs.RO
Conventional single LiDAR systems are inherently constrained by their limited field of view (FoV), leading to blind spots and incomplete environmental awareness, particularly on robotic platforms with strict payload limitations. Integrating a motorized LiDAR offers a practical solution by significantly expanding the ...
2502.12658
R.R.: Unveiling LLM Training Privacy through Recollection and Ranking
cs.CL
Large Language Models (LLMs) pose significant privacy risks, potentially leaking training data due to implicit memorization. Existing privacy attacks primarily focus on membership inference attacks (MIAs) or data extraction attacks, but reconstructing specific personally identifiable information (PII) in LLM's traini...
2502.12659
The Hidden Risks of Large Reasoning Models: A Safety Assessment of R1
cs.CY cs.AI
The rapid development of large reasoning models, such as OpenAI-o3 and DeepSeek-R1, has led to significant improvements in complex reasoning over non-reasoning large language models (LLMs). However, their enhanced capabilities, combined with the open-source access of models like DeepSeek-R1, raise serious safety conc...
2502.12663
Demystifying Multilingual Chain-of-Thought in Process Reward Modeling
cs.CL
Large language models (LLMs) are designed to perform a wide range of tasks. To improve their ability to solve complex problems requiring multi-step reasoning, recent research leverages process reward modeling to provide fine-grained feedback at each step of the reasoning process for reinforcement learning (RL), but i...
2502.12665
A$^2$ATS: Retrieval-Based KV Cache Reduction via Windowed Rotary Position Embedding and Query-Aware Vector Quantization
cs.CL
Long context large language models (LLMs) pose significant challenges for efficient serving due to the large memory footprint and high access overhead of KV cache. Retrieval-based KV cache reduction methods can mitigate these challenges, typically by offloading the complete KV cache to CPU and retrieving necessary to...
2502.12668
Evaluation of Best-of-N Sampling Strategies for Language Model Alignment
cs.CL
Best-of-N (BoN) sampling with a reward model has been shown to be an effective strategy for aligning Large Language Models (LLMs) with human preferences at decoding time. However, BoN sampling is susceptible to a problem known as reward hacking. Since the reward model is an imperfect proxy for the true objective, an ex...
2502.12669
Perovskite-LLM: Knowledge-Enhanced Large Language Models for Perovskite Solar Cell Research
cs.AI
The rapid advancement of perovskite solar cells (PSCs) has led to an exponential growth in research publications, creating an urgent need for efficient knowledge management and reasoning systems in this domain. We present a comprehensive knowledge-enhanced system for PSCs that integrates three key components. First, ...
2502.12671
Baichuan-M1: Pushing the Medical Capability of Large Language Models
cs.CL
The current generation of large language models (LLMs) is typically designed for broad, general-purpose applications, while domain-specific LLMs, especially in vertical fields like medicine, remain relatively scarce. In particular, the development of highly efficient and practical LLMs for the medical domain is chall...
2502.12672
Speech-FT: A Fine-tuning Strategy for Enhancing Speech Representation Models Without Compromising Generalization Ability
cs.CL cs.AI
Speech representation models are highly effective at extracting general features for various tasks. While fine-tuning can enhance these representations for specific applications, it often compromises their generalization ability. To address this challenge, we propose Speech-FT, a fine-tuning strategy for speech repre...
2502.12673
ROI-NeRFs: Hi-Fi Visualization of Objects of Interest within a Scene by NeRFs Composition
cs.CV cs.GR
Efficient and accurate 3D reconstruction is essential for applications in cultural heritage. This study addresses the challenge of visualizing objects within large-scale scenes at a high level of detail (LOD) using Neural Radiance Fields (NeRFs). The aim is to improve the visual fidelity of chosen objects while maint...