Columns: id (string, 9-16 chars), title (string, 4-278 chars), categories (string, 5-104 chars), abstract (string, 6-4.09k chars)
2502.11483
No-regret incentive-compatible online learning under exact truthfulness with non-myopic experts
cs.LG cs.GT stat.ML
We study an online forecasting setting in which, over $T$ rounds, $N$ strategic experts each report a forecast to a mechanism, the mechanism selects one forecast, and then the outcome is revealed. In any given round, each expert has a belief about the outcome, but the expert wishes to select its report so as to maxim...
2502.11484
Dictionary-Learning-Based Data Pruning for System Identification
cs.LG cs.SY eess.SY
System identification typically involves augmenting time series data by time shifting and nonlinearisation (via a polynomial basis), which introduces redundancy both feature-wise and sample-wise. Much research focuses on reducing feature-wise redundancy, while less attention is paid to sample-wise redundancy. T...
2502.11486
Anti-Degeneracy Scheme for Lidar SLAM based on Particle Filter in Geometry Feature-Less Environments
cs.RO
Simultaneous localization and mapping (SLAM) based on particle filtering has been extensively employed in indoor scenarios due to its high efficiency. However, in geometry feature-less scenes, accuracy is severely reduced due to a lack of constraints. In this article, we propose an anti-degeneracy system based on d...
2502.11487
Non-Binary LDPC Arithmetic Error Correction For Processing-in-Memory
cs.AR cs.IT math.IT
Processing-in-memory (PIM) based on emerging devices such as memristors is more vulnerable to noise than traditional memories, due to the physical non-idealities and complex operations in analog domains. To ensure high reliability, efficient error-correcting code (ECC) is highly desired. However, state-of-the-art ECC...
2502.11490
GPU-accelerated Multi-relational Parallel Graph Retrieval for Web-scale Recommendations
cs.LG cs.DC cs.IR
Web recommendations provide personalized items from massive catalogs for users, which rely heavily on retrieval stages to trade off the effectiveness and efficiency of selecting a small relevant set from billion-scale candidates in online digital platforms. As one of the largest Chinese search engine and news feed pr...
2502.11491
Ontology-Guided Reverse Thinking Makes Large Language Models Stronger on Knowledge Graph Question Answering
cs.CL cs.AI
Large language models (LLMs) have shown remarkable capabilities in natural language processing. However, in knowledge graph question answering tasks (KGQA), there remains the issue of answering questions that require multi-hop reasoning. Existing methods rely on entity vector matching, but the purpose of the question...
2502.11492
Why Vision Language Models Struggle with Visual Arithmetic? Towards Enhanced Chart and Geometry Understanding
cs.AI cs.CL cs.CV
Vision Language Models (VLMs) have achieved remarkable progress in multimodal tasks, yet they often struggle with visual arithmetic: seemingly simple capabilities like object counting or length comparison that are essential for complex tasks like chart understanding and geometric reasoning. In this work, w...
2502.11493
DAST: Context-Aware Compression in LLMs via Dynamic Allocation of Soft Tokens
cs.CL
Large Language Models (LLMs) face computational inefficiencies and redundant processing when handling long context inputs, prompting a focus on compression techniques. While existing semantic vector-based compression methods achieve promising performance, these methods fail to account for the intrinsic information de...
2502.11494
Stop Looking for Important Tokens in Multimodal Language Models: Duplication Matters More
cs.CL cs.CV
Vision tokens in multimodal large language models often account for a huge computational overhead due to their excessive length compared to the linguistic modality. Many recent methods aim to solve this problem with token pruning, which first defines an importance criterion for tokens and then prunes the unimportant visio...
2502.11495
Balanced Multi-Factor In-Context Learning for Multilingual Large Language Models
cs.CL
Multilingual large language models (MLLMs) can use in-context learning (ICL) to achieve high performance by leveraging cross-lingual knowledge transfer without parameter updates. However, their effectiveness is highly sensitive to example selection, particularly in multilingual settings. Based on the fin...
2502.11501
Token Pruning in Multimodal Large Language Models: Are We Solving the Right Problem?
cs.CL cs.CV
Multimodal large language models (MLLMs) have shown remarkable performance for cross-modal understanding and generation, yet still suffer from severe inference costs. Recently, many works have been proposed to solve this problem with token pruning, which identifies the redundant tokens in MLLMs and then prunes th...
2502.11504
Accelerated Gradient-based Design Optimization Via Differentiable Physics-Informed Neural Operator: A Composites Autoclave Processing Case Study
cs.LG cs.AI cs.NA math.NA
Simulation and optimization are crucial for advancing the engineering design of complex systems and processes. Traditional optimization methods require substantial computational time and effort due to their reliance on resource-intensive simulations, such as finite element analysis, and the complexity of rigorous opt...
2502.11505
A GNN-based Spectral Filtering Mechanism for Imbalance Classification in Network Digital Twin
cs.LG cs.NI
Graph Neural Networks are gaining attention in Fifth-Generation (5G) core network digital twins, which are data-driven complex systems with numerous components. Analyzing these data can be challenging due to rare failure types, leading to imbalanced classification in multiclass settings. Digital twins of 5G networks ...
2502.11506
Learning Surrogate Potential Mean Field Games via Gaussian Processes: A Data-Driven Approach to Ill-Posed Inverse Problems
cs.LG math.OC stat.ML
Mean field games (MFGs) describe the collective behavior of large populations of interacting agents. In this work, we tackle ill-posed inverse problems in potential MFGs, aiming to recover the agents' population, momentum, and environmental setup from limited, noisy measurements and partial observations. These proble...
2502.11508
Chinese Spelling Correction: A Comprehensive Survey of Progress, Challenges, and Opportunities
cs.CL cs.AI
Chinese Spelling Correction (CSC) is a critical task in natural language processing, aimed at detecting and correcting spelling errors in Chinese text. This survey provides a comprehensive overview of CSC, tracing its evolution from pre-trained language models to large language models, and critically analyzing their ...
2502.11509
DifCluE: Generating Counterfactual Explanations with Diffusion Autoencoders and modal clustering
cs.LG cs.AI
Generating multiple counterfactual explanations for different modes within a class presents a significant challenge, as these modes are distinct yet converge under the same classification. Diffusion probabilistic models (DPMs) have demonstrated a strong ability to capture the underlying modes of data distributions. I...
2502.11513
MaZO: Masked Zeroth-Order Optimization for Multi-Task Fine-Tuning of Large Language Models
cs.LG cs.AI
Large language models have demonstrated exceptional capabilities across diverse tasks, but their fine-tuning demands significant memory, posing challenges for resource-constrained environments. Zeroth-order (ZO) optimization provides a memory-efficient alternative by eliminating the need for backpropagation. However,...
2502.11514
Investigating Inference-time Scaling for Chain of Multi-modal Thought: A Preliminary Study
cs.CL
Recently, inference-time scaling of chain-of-thought (CoT) has been demonstrated as a promising approach for addressing multi-modal reasoning tasks. While existing studies have predominantly centered on text-based thinking, the integration of both visual and textual modalities within the reasoning process remains une...
2502.11515
SayAnything: Audio-Driven Lip Synchronization with Conditional Video Diffusion
cs.CV
Recent advances in diffusion models have led to significant progress in audio-driven lip synchronization. However, existing methods typically rely on constrained audio-visual alignment priors or multi-stage learning of intermediate representations to force lip motion synthesis. This leads to complex training pipeline...
2502.11516
CRB-Rate Tradeoff in RSMA-enabled Near-Field Integrated Multi-Target Sensing and Multi-User Communications
cs.IT math.IT
Extremely large-scale antenna arrays enhance spectral efficiency and spatial resolution in integrated sensing and communication (ISAC) networks while expanding the Rayleigh distance, triggering a shift from conventional far-field plane waves to near-field (NF) spherical waves. However, full-digital beamforming is inf...
2502.11517
Learning to Keep a Promise: Scaling Language Model Decoding Parallelism with Learned Asynchronous Decoding
cs.CL cs.DC cs.LG
Decoding with autoregressive large language models (LLMs) traditionally occurs sequentially, generating one token after another. An emerging line of work has explored parallel decoding by identifying and simultaneously generating semantically independent chunks of LLM responses. However, these techniques rely on hand-cra...
2502.11518
Generative Multi-Agent Collaboration in Embodied AI: A Systematic Review
cs.MA cs.AI cs.LG
Embodied multi-agent systems (EMAS) have attracted growing attention for their potential to address complex, real-world challenges in areas such as logistics and robotics. Recent advances in foundation models pave the way for generative agents capable of richer communication and adaptive problem-solving. This survey ...
2502.11519
UniGO: A Unified Graph Neural Network for Modeling Opinion Dynamics on Graphs
cs.SI cs.AI
Polarization and fragmentation in social media amplify user biases, making it increasingly important to understand the evolution of opinions. Opinion dynamics provide interpretability for studying opinion evolution, yet incorporating these insights into predictive models remains challenging. This challenge arises due...
2502.11520
AURORA: Automated Training Framework of Universal Process Reward Models via Ensemble Prompting and Reverse Verification
cs.CL
The reasoning capabilities of advanced large language models (LLMs) like o1 have revolutionized artificial intelligence applications. Nevertheless, evaluating and optimizing complex reasoning processes remain significant challenges due to diverse policy distributions and the inherent limitations of human effort and a...
2502.11521
DeFiScope: Detecting Various DeFi Price Manipulations with LLM Reasoning
cs.CR cs.AI
DeFi (Decentralized Finance) is one of the most important applications of today's cryptocurrencies and smart contracts. It manages hundreds of billions in Total Value Locked (TVL) on-chain, yet it remains susceptible to common DeFi price manipulation attacks. Despite state-of-the-art (SOTA) systems like DeFiRanger an...
2502.11525
Training Large Language Models to be Better Rule Followers
cs.CL
Large language models (LLMs) have shown impressive performance across a wide range of tasks. However, they often exhibit unexpected failures in seemingly straightforward tasks, suggesting a reliance on case-based reasoning rather than rule-based reasoning. While the vast training corpus of LLMs contains numerous text...
2502.11528
A Survey of Personalized Large Language Models: Progress and Future Directions
cs.AI
Large Language Models (LLMs) excel in handling general knowledge tasks, yet they struggle with user-specific personalization, such as understanding individual emotions, writing styles, and preferences. Personalized Large Language Models (PLLMs) tackle these challenges by leveraging individual user data, such as user ...
2502.11532
Control-CLIP: Decoupling Category and Style Guidance in CLIP for Specific-Domain Generation
cs.CV
Text-to-image diffusion models have shown remarkable capabilities of generating high-quality images closely aligned with textual inputs. However, the effectiveness of text guidance heavily relies on the CLIP text encoder, which is trained to pay more attention to general content but struggles to capture semantics in ...
2502.11533
Be Cautious When Merging Unfamiliar LLMs: A Phishing Model Capable of Stealing Privacy
cs.CL
Model merging is a widespread technology in large language models (LLMs) that integrates multiple task-specific LLMs into a unified one, enabling the merged model to inherit the specialized capabilities of these LLMs. Most task-specific LLMs are sourced from open-source communities and have not undergone rigorous aud...
2502.11534
SurgPose: a Dataset for Articulated Robotic Surgical Tool Pose Estimation and Tracking
cs.RO cs.CV
Accurate and efficient surgical robotic tool pose estimation is of fundamental significance to downstream applications such as augmented reality (AR) in surgical training and learning-based autonomous manipulation. While significant advancements have been made in pose estimation for humans and animals, it is still a ...
2502.11535
Disentangled Iterative Surface Fitting for Contact-stable Grasp Planning
cs.RO
In this work, we address a limitation of surface-fitting-based grasp planning algorithms, which primarily focus on geometric alignment between the gripper and the object surface while overlooking the stability of the contact point distribution, often resulting in unstable grasps due to inadequate contact configurations. T...
2502.11537
$\text{M}^{\text{3}}$: A Modular World Model over Streams of Tokens
cs.LG cs.AI
Token-based world models emerged as a promising modular framework, modeling dynamics over token streams while optimizing tokenization separately. While successful in visual environments with discrete actions (e.g., Atari games), their broader applicability remains uncertain. In this paper, we introduce $\text{M}^{\te...
2502.11538
How to Divide: A Set Partitioning Strategy Balancing the Trade-off Between Intra-Subset Correlation and Inter-Subset Gain Mutual Influence in Distributed Attack Detection Scheduling Task
cs.DC cs.SY eess.SY
The efficiency of attack detection in large-scale sensor networks remains a critical research challenge. Studies have shown that while distributed algorithms offer higher efficiency than centralized approaches, they often come at the cost of reduced performance. To strike a balance between detec...
2502.11541
MuSC: Improving Complex Instruction Following with Multi-granularity Self-Contrastive Training
cs.CL cs.AI
Complex instruction-following with elaborate constraints is imperative for Large Language Models (LLMs). While existing methods have constructed data for complex instruction alignment, they all rely on a more advanced model, especially GPT-4, limiting their application. In this paper, we propose a Multi-granularity S...
2502.11544
Evaluating o1-Like LLMs: Unlocking Reasoning for Translation through Comprehensive Analysis
cs.CL
o1-Like LLMs are transforming AI by simulating human cognitive processes, but their performance in multilingual machine translation (MMT) remains underexplored. This study examines: (1) how o1-Like LLMs perform in MMT tasks and (2) what factors influence their translation quality. We evaluate multiple o1-Like LLM...
2502.11546
DCAD-2000: A Multilingual Dataset across 2000+ Languages with Data Cleaning as Anomaly Detection
cs.CL
The rapid development of multilingual large language models (LLMs) highlights the need for high-quality, diverse, and clean multilingual datasets. In this paper, we introduce DCAD-2000 (Data Cleaning as Anomaly Detection), a large-scale multilingual corpus built using newly extracted Common Crawl data and existing mu...
2502.11554
Toward Metaphor-Fluid Conversation Design for Voice User Interfaces
cs.HC cs.AI cs.CL cs.CY cs.ET
Metaphors play a critical role in shaping user experiences with Voice User Interfaces (VUIs), yet existing designs often rely on static, human-centric metaphors that fail to adapt to diverse contexts and user needs. This paper introduces Metaphor-Fluid Design, a novel approach that dynamically adjusts metaphorical re...
2502.11555
Equilibrate RLHF: Towards Balancing Helpfulness-Safety Trade-off in Large Language Models
cs.AI
Fine-tuning large language models (LLMs) based on human preferences, commonly achieved through reinforcement learning from human feedback (RLHF), has been effective in improving their performance. However, maintaining LLM safety throughout the fine-tuning process remains a significant challenge, as resolving conflict...
2502.11557
Fast Maximum Common Subgraph Search: A Redundancy-Reduced Backtracking Approach
cs.DB cs.DS
Given two input graphs, finding the largest subgraph that occurs in both, i.e., finding the maximum common subgraph, is a fundamental operator for evaluating the similarity between two graphs in graph data analysis. Existing works for solving the problem are of either theoretical or practical interest, but not both. ...
2502.11559
Auto-Search and Refinement: An Automated Framework for Gender Bias Mitigation in Large Language Models
cs.CL cs.AI
Pre-training large language models (LLMs) on vast text corpora enhances natural language processing capabilities but risks encoding social biases, particularly gender bias. While parameter-modification methods like fine-tuning mitigate bias, they are resource-intensive, unsuitable for closed-source models, and lack a...
2502.11560
A Survey of Automatic Prompt Engineering: An Optimization Perspective
cs.AI cs.LG
The rise of foundation models has shifted focus from resource-intensive fine-tuning to prompt engineering, a paradigm that steers model behavior through input design rather than weight updates. While manual prompt engineering faces limitations in scalability, adaptability, and cross-modal alignment, automated methods...
2502.11562
Reinforced Information Retrieval
cs.CL
While retrieval techniques are widely used in practice, they still face significant challenges in cross-domain scenarios. Recently, generation-augmented methods have emerged as a promising solution to this problem. These methods enhance raw queries by incorporating additional information from an LLM-based generator, ...
2502.11563
Leader and Follower: Interactive Motion Generation under Trajectory Constraints
cs.RO cs.AI
With the rapid advancement of game and film production, generating interactive motion from texts has garnered significant attention due to its potential to revolutionize content creation processes. In many practical applications, there is a need to impose strict constraints on the motion range or trajectory of virtua...
2502.11564
Continuous Diffusion Model for Language Modeling
cs.LG
Diffusion models have emerged as a promising alternative to autoregressive models in modeling discrete categorical data. Yet diffusion models that directly work on discrete data space do not fully exploit the power of iterative refinement, as the signals are lost during the transition between discrete states. Existin...
2502.11565
STARS-Enabled Full-Duplex Two-Way mMIMO System Under Spatially-Correlated Channels
cs.IT math.IT
Simultaneous transmitting and reflecting surface (STARS)-assisted systems have emerged to fill this gap by providing $360^{\circ}$ wireless coverage. In parallel, full-duplex (FD) communication offers a higher achievable rate through efficient spectrum ut...
2502.11569
Towards Reasoning Ability of Small Language Models
cs.CL cs.AI cs.LG
Reasoning has long been viewed as an emergent property of large language models (LLMs), appearing at or above a certain scale ($\sim$100B parameters). However, recent studies challenge this assumption, showing that small language models (SLMs) can also achieve competitive reasoning performance. SLMs are increasingly ...
2502.11570
Towards a Trustworthy Anomaly Detection for Critical Applications through Approximated Partial AUC Loss
cs.LG cs.CV
Anomaly Detection is a crucial step for critical applications such as in the industrial, medical, or cybersecurity domains. These sectors share the requirement of handling different types of classification errors differently. Indeed, even if false positives are acceptable, false negatives are not, because it wou...
2502.11571
FaMTEB: Massive Text Embedding Benchmark in Persian Language
cs.CL cs.IR cs.LG
In this paper, we introduce a comprehensive benchmark for Persian (Farsi) text embeddings, built upon the Massive Text Embedding Benchmark (MTEB). Our benchmark includes 63 datasets spanning seven different tasks: classification, clustering, pair classification, reranking, retrieval, summary retrieval, and semantic t...
2502.11573
InfiR: Crafting Effective Small Language Models and Multimodal Small Language Models in Reasoning
cs.CL cs.AI
Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs) have made significant advancements in reasoning capabilities. However, they still face challenges such as high computational demands and privacy concerns. This paper focuses on developing efficient Small Language Models (SLMs) and Multimodal Sm...
2502.11574
Large Language Models and Mathematical Reasoning Failures
cs.AI
This paper investigates the mathematical reasoning capabilities of large language models (LLMs) using 50 newly constructed high-school-level word problems. Unlike prior studies that focus solely on answer correctness, we rigorously analyze both final answers and solution steps to identify reasoning failures. Evaluati...
2502.11578
Language Complexity Measurement as a Noisy Zero-Shot Proxy for Evaluating LLM Performance
cs.CL cs.AI
Large Language Models (LLMs) have made significant strides in natural language generation but often face challenges in tasks requiring precise calculations and structural analysis. This paper investigates the performance of state-of-the-art LLMs on language complexity measurement tasks, through the computation of the...
2502.11583
Distributional autoencoders know the score
stat.ML cs.LG
This work presents novel and desirable properties of a recently introduced class of autoencoders -- the Distributional Principal Autoencoder (DPA) -- that combines distributionally correct reconstruction with principal components-like interpretability of the encodings. First, we show that the level sets of the enco...
2502.11584
Runtime Enforcement of CPS against Signal Temporal Logic
eess.SY cs.SY
Cyber-Physical Systems (CPSs), especially those involving autonomy, need guarantees of their safety. Runtime Enforcement (RE) is a lightweight method to formally ensure that some specified properties are satisfied over the executions of the system. Hence, there is recent interest in the RE of CPS. However, existing m...
2502.11585
Calibration of Vehicular Traffic Simulation Models by Local Optimization
cs.AI
Simulation is a valuable tool for traffic management experts to assist them in refining and improving transportation systems and anticipating the impact of possible changes in the infrastructure network before their actual implementation. Calibrating simulation models using traffic count data is challenging because o...
2502.11586
Syllables to Scenes: Literary-Guided Free-Viewpoint 3D Scene Synthesis from Japanese Haiku
cs.CV
In the era of the metaverse, where immersive technologies redefine human experiences, translating abstract literary concepts into navigable 3D environments presents a fundamental challenge in preserving semantic and emotional fidelity. This research introduces HaikuVerse, a novel framework for transforming poetic abs...
2502.11588
A Unified Modeling Framework for Automated Penetration Testing
cs.AI cs.NI
The integration of artificial intelligence into automated penetration testing (AutoPT) has highlighted the necessity of simulation modeling for the training of intelligent agents, due to its cost-efficiency and swift feedback capabilities. Despite the proliferation of AutoPT research, there is a recognized gap in the...
2502.11594
iMOVE: Instance-Motion-Aware Video Understanding
cs.CV
Enhancing the fine-grained instance spatiotemporal motion perception capabilities of Video Large Language Models is crucial for improving their temporal and general video understanding. However, current models struggle to perceive detailed and complex instance motions. To address these challenges, we have made improv...
2502.11596
LLM Embeddings for Deep Learning on Tabular Data
cs.LG cs.AI
Tabular deep-learning methods require embedding numerical and categorical input features into high-dimensional spaces before processing them. Existing methods deal with this heterogeneous nature of tabular data by employing separate type-specific encoding approaches. This limits the cross-table transfer potential and...
2502.11598
Can LLM Watermarks Robustly Prevent Unauthorized Knowledge Distillation?
cs.CL
The radioactive nature of Large Language Model (LLM) watermarking enables the detection of watermarks inherited by student models when trained on the outputs of watermarked teacher models, making it a promising tool for preventing unauthorized knowledge distillation. However, the robustness of watermark radioactivity...
2502.11599
Self-orthogonal codes from plateaued functions and their applications in quantum codes and LCD codes
cs.IT math.IT
Self-orthogonal codes have received great attention due to their important applications in quantum codes, LCD codes and lattices. Recently, several families of self-orthogonal codes containing the all-$1$ vector were constructed by augmentation technique. In this paper, utilizing plateaued functions, we construct som...
2502.11603
DR.GAP: Mitigating Bias in Large Language Models using Gender-Aware Prompting with Demonstration and Reasoning
cs.CL cs.AI
Large Language Models (LLMs) exhibit strong natural language processing capabilities but also inherit and amplify societal biases, including gender bias, raising fairness concerns. Existing debiasing methods face significant limitations: parameter tuning requires access to model weights, prompt-based approaches often...
2502.11604
An Actor-Critic Algorithm with Function Approximation for Risk Sensitive Cost Markov Decision Processes
cs.LG stat.ML
In this paper, we consider the risk-sensitive cost criterion with exponentiated costs for Markov decision processes and develop a model-free policy gradient algorithm in this setting. Unlike additive cost criteria such as average or discounted cost, the risk-sensitive cost criterion is less studied due to the complex...
2502.11607
GraphThought: Graph Combinatorial Optimization with Thought Generation
cs.LG
Large language models (LLMs) have demonstrated remarkable capabilities across various domains, especially in text processing and generative tasks. Recent advancements in the reasoning capabilities of state-of-the-art LLMs, such as OpenAI-o1, have significantly broadened their applicability, particularly in complex pr...
2502.11609
Exploiting Task Relationships for Continual Learning Using Transferability-Aware Task Embeddings
cs.LG
Continual learning (CL) has been an essential topic in the contemporary application of deep neural networks, where catastrophic forgetting (CF) can impede a model's ability to acquire knowledge progressively. Existing CL strategies primarily address CF by regularizing model updates or separating task-specific and sha...
2502.11610
Accuracy Assessment of OpenAlex and Clarivate Scholar ID with an LLM-Assisted Benchmark
cs.IR
In quantitative SciSci (science of science) studies, accurately identifying individual scholars is paramount for scientific data analysis. However, the variability in how names are represented, due to commonality, abbreviations, and different spelling conventions, complicates this task. While identifier systems like OR...
2502.11611
Identifying Gender Stereotypes and Biases in Automated Translation from English to Italian using Similarity Networks
cs.CL cs.AI
This paper is a collaborative effort between Linguistics, Law, and Computer Science to evaluate stereotypes and biases in automated translation systems. We advocate gender-neutral translation as a means to promote gender inclusion and improve the objectivity of machine translation. Our approach focuses on identifying...
2502.11612
Maximum Entropy Reinforcement Learning with Diffusion Policy
cs.LG cs.AI
The Soft Actor-Critic (SAC) algorithm with a Gaussian policy has become a mainstream implementation for realizing the Maximum Entropy Reinforcement Learning (MaxEnt RL) objective, which incorporates entropy maximization to encourage exploration and enhance policy robustness. While the Gaussian policy performs well on...
2502.11614
Is Human-Like Text Liked by Humans? Multilingual Human Detection and Preference Against AI
cs.CL cs.AI
Prior studies have shown that distinguishing text generated by large language models (LLMs) from human-written text is highly challenging, and often no better than random guessing. To verify the generalizability of this finding across languages and domains, we perform an extensive case study to identify the upper boun...
2502.11617
In-Context Parametric Inference: Point or Distribution Estimators?
cs.LG cs.AI stat.ML
Bayesian and frequentist inference are two fundamental paradigms in statistical estimation. Bayesian methods treat hypotheses as random variables, incorporating priors and updating beliefs via Bayes' theorem, whereas frequentist methods assume fixed but unknown hypotheses, relying on estimators like maximum likelihoo...
2502.11618
Real-time Neural Rendering of LiDAR Point Clouds
cs.CV cs.GR
Static LiDAR scanners produce accurate, dense, colored point clouds, but these often contain obtrusive artifacts that make them ill-suited for direct display. We propose an efficient method to render photorealistic images of such scans without any expensive preprocessing or training of a scene-specific model. A naive pro...
2502.11619
Membership Inference Attacks for Face Images Against Fine-Tuned Latent Diffusion Models
cs.CV
The rise of generative image models leads to privacy concerns when it comes to the huge datasets used to train such models. This paper investigates the possibility of inferring if a set of face images was used for fine-tuning a Latent Diffusion Model (LDM). A Membership Inference Attack (MIA) method is presented for ...
2502.11633
CLASS: Enhancing Cross-Modal Text-Molecule Retrieval Performance and Training Efficiency
cs.CL
The cross-modal text-molecule retrieval task bridges molecule structures and natural language descriptions. Existing methods predominantly focus on aligning the text modality and the molecule modality, yet they overlook adaptively adjusting the learning states at different training stages and enhancing training efficiency. To ta...
2502.11638
Enhancing Out-of-Distribution Detection in Medical Imaging with Normalizing Flows
cs.CV
Out-of-distribution (OOD) detection is crucial in AI-driven medical imaging to ensure reliability and safety by identifying inputs outside a model's training distribution. Existing methods often require retraining or modifications to pre-trained models, which is impractical for clinical applications. This study intro...
2502.11639
Neural Interpretable Reasoning
cs.LG cs.AI cs.NE
We formalize a novel modeling framework for achieving interpretability in deep learning, anchored in the principle of inference equivariance. While the direct verification of interpretability scales exponentially with the number of variables of the system, we show that this complexity can be mitigated by treating int...
2502.11641
A Zero-Knowledge Proof for the Syndrome Decoding Problem in the Lee Metric
cs.CR cs.IT math.IT
The syndrome decoding problem is one of the NP-complete problems lying at the foundation of code-based cryptography. The variant thereof where the distance between vectors is measured with respect to the Lee metric, rather than the more commonly used Hamming metric, has been analyzed recently in several works due to ...
2502.11642
GaussianMotion: End-to-End Learning of Animatable Gaussian Avatars with Pose Guidance from Text
cs.CV
In this paper, we introduce GaussianMotion, a novel human rendering model that generates fully animatable scenes aligned with textual descriptions using Gaussian Splatting. Although existing methods achieve reasonable text-to-3D generation of human bodies using various 3D representations, they often face limitations ...
2502.11644
InTec: integrated things-edge computing: a framework for distributing machine learning pipelines in edge AI systems
cs.DC cs.AI
With the rapid expansion of the Internet of Things (IoT), sensors, smartphones, and wearables have become integral to daily life, powering smart applications in home automation, healthcare, and intelligent transportation. However, these advancements face significant challenges due to latency and bandwidth constraints...
2502.11645
Deviation Ratings: A General, Clone-Invariant Rating Method
cs.GT cs.CL cs.MA stat.OT
Many real-world multi-agent or multi-task evaluation scenarios can be naturally modelled as normal-form games due to inherent strategic (adversarial, cooperative, and mixed motive) interactions. These strategic interactions may be agentic (e.g. players trying to win), fundamental (e.g. cost vs quality), or complement...
2502.11646
Hyperspherical Energy Transformer with Recurrent Depth
cs.LG
Transformer-based foundation models have achieved unprecedented success with gigantic numbers of parameters and vast computational resources. Yet, the core building blocks of these models, the Transformer layers, and how they are arranged and configured are primarily engineered from the bottom up and driven by heuristics...
2502.11647
DELMAN: Dynamic Defense Against Large Language Model Jailbreaking with Model Editing
cs.CR cs.AI
Large Language Models (LLMs) are widely applied in decision making, but their deployment is threatened by jailbreak attacks, where adversarial users manipulate model behavior to bypass safety measures. Existing defense mechanisms, such as safety fine-tuning and model editing, either require extensive parameter modifi...
2502.11649
Competing LLM Agents in a Non-Cooperative Game of Opinion Polarisation
cs.AI cs.SI
We introduce a novel non-cooperative game to analyse opinion formation and resistance, incorporating principles from social psychology such as confirmation bias, resource constraints, and influence penalties. Our simulation features Large Language Model (LLM) agents competing to influence a population, with penalties...
2502.11651
MMXU: A Multi-Modal and Multi-X-ray Understanding Dataset for Disease Progression
cs.CV cs.AI
Large vision-language models (LVLMs) have shown great promise in medical applications, particularly in visual question answering (MedVQA) and diagnosis from medical images. However, existing datasets and models often fail to consider critical aspects of medical diagnostics, such as the integration of historical recor...
2502.11655
Object-Centric Image to Video Generation with Language Guidance
cs.CV
Accurate and flexible world models are crucial for autonomous systems to understand their environment and predict future events. Object-centric models, with structured latent spaces, have shown promise in modeling object dynamics and interactions, but often face challenges in scaling to complex datasets and incorpora...
2502.11656
Uncovering the Impact of Chain-of-Thought Reasoning for Direct Preference Optimization: Lessons from Text-to-SQL
cs.CL cs.DB
Direct Preference Optimization (DPO) has proven effective in complex reasoning tasks like math word problems and code generation. However, when applied to Text-to-SQL datasets, it often fails to improve performance and can even degrade it. Our investigation reveals the root cause: unlike math and code tasks, which na...
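For reference, the standard DPO objective the abstract builds on (standard notation, not taken from the truncated text: policy $\pi_\theta$, frozen reference $\pi_{\mathrm{ref}}$, preferred/dispreferred completions $y_w, y_l$, temperature $\beta$) is:

```latex
\mathcal{L}_{\mathrm{DPO}}(\theta)
= -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\!\left[
\log \sigma\!\left(
\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
- \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
\right)\right]
```

The loss only sees the log-likelihood gap between the chosen and rejected completions, which is why the structure of the completions (e.g. whether they include chain-of-thought reasoning) can matter so much.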
2502.11657
How does ion temperature gradient turbulence depend on magnetic geometry? Insights from data and machine learning
physics.plasm-ph cs.LG
Magnetic geometry has a significant effect on the level of turbulent transport in fusion plasmas. Here, we model and analyze this dependence using multiple machine learning methods and a dataset of > 200,000 nonlinear simulations of ion-temperature-gradient turbulence in diverse non-axisymmetric geometries. The datas...
2502.11658
"I'm not for sale" -- Perceptions and limited awareness of privacy risks by digital natives about location data
cs.CY cs.AI cs.CR
Although mobile devices benefit users in their daily lives in numerous ways, they also raise several privacy concerns. For instance, they can reveal sensitive information that can be inferred from location data. This location data is shared through service providers as well as mobile applications. Understanding how a...
2502.11663
MaskGWM: A Generalizable Driving World Model with Video Mask Reconstruction
cs.CV
World models that forecast environmental changes from actions are vital for autonomous driving models with strong generalization. Prevailing driving world models are mainly built on video prediction models. Although these models can produce high-fidelity video sequences with advanced diffusion-based generators, they are...
2502.11664
VRoPE: Rotary Position Embedding for Video Large Language Models
cs.AI
Rotary Position Embedding (RoPE) has shown strong performance in text-based Large Language Models (LLMs), but extending it to video remains a challenge due to the intricate spatiotemporal structure of video frames. Existing adaptations, such as RoPE-3D, attempt to encode spatial and temporal dimensions separately but...
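For context, standard 1-D RoPE (the text-LLM scheme this paper extends; the sketch below is plain RoPE, not the proposed VRoPE) rotates each pair of embedding dimensions by a position-dependent angle, so attention scores depend only on relative offsets:

```python
import numpy as np

def rope(x, pos, base=10000.0):
    """Apply Rotary Position Embedding to vector x at position `pos`.

    Dimension pairs (2i, 2i+1) are rotated by angle pos * base**(-2i/d),
    turning absolute positions into relative rotations."""
    d = x.shape[-1]
    freqs = base ** (-np.arange(d // 2) * 2.0 / d)  # per-pair frequencies
    theta = pos * freqs
    cos, sin = np.cos(theta), np.sin(theta)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# Relative-position property: <rope(q, m), rope(k, n)> depends only on m - n.
q = np.random.default_rng(0).standard_normal(8)
k = np.random.default_rng(1).standard_normal(8)
a = rope(q, 5) @ rope(k, 3)  # offset 2
b = rope(q, 9) @ rope(k, 7)  # offset 2
print(np.isclose(a, b))      # the two attention scores coincide
```

The difficulty the abstract points at is that video tokens carry a (frame, row, column) index rather than a single scalar position, so this 1-D rotation scheme does not transfer directly.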
2502.11665
On the kernel learning problem
stat.ML cs.LG math.CA math.FA math.OC
The classical kernel ridge regression problem aims to find the best fit for the output $Y$ as a function of the input data $X\in \mathbb{R}^d$, with a fixed choice of regularization term imposed by a given choice of a reproducing kernel Hilbert space, such as a Sobolev space. Here we consider a generalization of the ...
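As a reference point, the classical kernel ridge regression problem the abstract generalizes can be written (standard notation, not taken from the truncated text) as:

```latex
\hat f \;=\; \arg\min_{f \in \mathcal{H}_K} \;\sum_{i=1}^{n} \bigl(y_i - f(x_i)\bigr)^2 \;+\; \lambda \,\|f\|_{\mathcal{H}_K}^2,
\qquad
\hat f(x) \;=\; \sum_{i=1}^{n} \alpha_i\, K(x, x_i),
\quad
\alpha \;=\; (K + \lambda I)^{-1} y,
```

where $\mathcal{H}_K$ is the reproducing kernel Hilbert space of the kernel $K$ and, in the closed form, $K$ denotes the $n \times n$ Gram matrix $K_{ij} = K(x_i, x_j)$. The kernel learning problem treats the kernel itself as an additional optimization variable rather than a fixed choice.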
2502.11669
Deep Subspace Learning for Surface Anomaly Classification Based on 3D Point Cloud Data
stat.ML cs.LG
Surface anomaly classification is critical for manufacturing system fault diagnosis and quality control. However, the following challenges always hinder accurate anomaly classification in practice: (i) Anomaly patterns exhibit intra-class variation and inter-class similarity, presenting challenges in the accurate cla...
2502.11671
Diversity-Oriented Data Augmentation with Large Language Models
cs.CL cs.AI cs.LG
Data augmentation is an essential technique in natural language processing (NLP) for enriching training datasets by generating diverse samples. This process is crucial for improving the robustness and generalization capabilities of NLP models. However, a significant challenge remains: \textit{Insufficient Attention t...
2502.11672
Exact Upper and Lower Bounds for the Output Distribution of Neural Networks with Random Inputs
cs.LG stat.ME stat.ML
We derive exact upper and lower bounds for the cumulative distribution function (cdf) of the output of a neural network over its entire support subject to noisy (stochastic) inputs. The upper and lower bounds converge to the true cdf over its domain as the resolution increases. Our method applies to any feedforward N...
2502.11673
Best of Both Worlds: Regret Minimization versus Minimax Play
cs.LG stat.ML
In this paper, we investigate the existence of online learning algorithms with bandit feedback that simultaneously guarantee $O(1)$ regret compared to a given comparator strategy, and $O(\sqrt{T})$ regret compared to the best strategy in hindsight, where $T$ is the number of rounds. We provide the first affirmative a...
2502.11677
Towards Fully Exploiting LLM Internal States to Enhance Knowledge Boundary Perception
cs.CL
Large language models (LLMs) exhibit impressive performance across diverse tasks but often struggle to accurately gauge their knowledge boundaries, leading to confident yet incorrect responses. This paper explores leveraging LLMs' internal states to enhance their perception of knowledge boundaries from efficiency and...
2502.11678
Exploring LLM-based Student Simulation for Metacognitive Cultivation
cs.CY cs.CL
Metacognitive education plays a crucial role in cultivating students' self-regulation and reflective thinking, providing essential support for those with learning difficulties through academic advising. Simulating students with insufficient learning capabilities using large language models offers a promising approach...
2502.11680
Spectral structure learning for clinical time series
cs.LG
We develop and evaluate a structure learning algorithm for clinical time series. Clinical time series are multivariate time series observed in multiple patients and irregularly sampled, challenging existing structure learning algorithms. We assume that our time series are realizations of StructGP, a k-dimensional mu...
2502.11681
RIDE: Enhancing Large Language Model Alignment through Restyled In-Context Learning Demonstration Exemplars
cs.CL cs.AI
Alignment tuning is crucial for ensuring large language models (LLMs) behave ethically and helpfully. Current alignment approaches require high-quality annotations and significant training resources. This paper proposes a low-cost, tuning-free method using in-context learning (ICL) to enhance LLM alignment. Through a...
2502.11682
Double Momentum and Error Feedback for Clipping with Fast Rates and Differential Privacy
cs.LG math.OC stat.ML
Strong Differential Privacy (DP) and Optimization guarantees are two desirable properties for a method in Federated Learning (FL). However, existing algorithms do not achieve both properties at once: they either have optimal DP guarantees but rely on restrictive assumptions such as bounded gradients/bounded data hete...
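The clipping operator at the heart of such methods is simple. Below is a minimal DP-SGD-style sketch (standard per-sample clipping plus Gaussian noise, shown for illustration only; it is not the paper's double-momentum/error-feedback algorithm, and all names and constants are assumptions):

```python
import numpy as np

def clip_to_norm(g, c):
    """Scale g so its l2 norm is at most c; direction is preserved."""
    return g * min(1.0, c / np.linalg.norm(g))

def dp_sgd_step(x, per_sample_grads, lr=0.1, c=1.0, sigma=0.5, rng=None):
    """One DP-SGD-style step: clip each per-sample gradient to norm c,
    average, then add Gaussian noise calibrated to the clipping threshold."""
    rng = rng or np.random.default_rng(0)
    clipped = np.stack([clip_to_norm(g, c) for g in per_sample_grads])
    noise = rng.normal(0.0, sigma * c / len(per_sample_grads), size=x.shape)
    return x - lr * (clipped.mean(axis=0) + noise)

grads = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]  # norms 5 and 0.5
print([round(np.linalg.norm(clip_to_norm(g, 1.0)), 2) for g in grads])
# → [1.0, 0.5]  (large gradient clipped, small one untouched)
x_new = dp_sgd_step(np.zeros(2), grads)  # one noisy, clipped step
```

Clipping bounds each sample's influence on the update (the sensitivity), which is what lets the Gaussian noise scale `sigma * c / batch_size` yield a DP guarantee; the bias this introduces is exactly what the error-feedback and momentum machinery in the abstract aims to control.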
2502.11684
MathFimer: Enhancing Mathematical Reasoning by Expanding Reasoning Steps through Fill-in-the-Middle Task
cs.CL cs.AI
Mathematical reasoning represents a critical frontier in advancing large language models (LLMs). While step-by-step approaches have emerged as the dominant paradigm for mathematical problem-solving in LLMs, the quality of reasoning steps in training data fundamentally constrains the performance of the models. Recent ...
2502.11687
ReVeil: Unconstrained Concealed Backdoor Attack on Deep Neural Networks using Machine Unlearning
cs.CR cs.AI cs.LG
Backdoor attacks embed hidden functionalities in deep neural networks (DNN), triggering malicious behavior with specific inputs. Advanced defenses monitor anomalous DNN inferences to detect such attacks. However, concealed backdoors evade detection by maintaining a low pre-deployment attack success rate (ASR) and res...