Columns: id (string, length 9-16), title (string, length 4-278), categories (string, length 5-104), abstract (string, length 6-4.09k)
2502.12320
Towards Fusing Point Cloud and Visual Representations for Imitation Learning
cs.RO cs.CV
Learning for manipulation requires using policies that have access to rich sensory information such as point clouds or RGB images. Point clouds efficiently capture geometric structures, making them essential for manipulation tasks in imitation learning. In contrast, RGB images provide rich texture and semantic inform...
2502.12323
Adversarial Debiasing for Unbiased Parameter Recovery
cs.LG stat.ML
Advances in machine learning and the increasing availability of high-dimensional data have led to the proliferation of social science research that uses the predictions of machine learning models as proxies for measures of human activity or environmental outcomes. However, prediction errors from machine learning mode...
2502.12325
From Dense to Dynamic: Token-Difficulty Driven MoEfication of Pre-Trained LLMs
cs.CL
Training large language models (LLMs) for different inference constraints is computationally expensive, limiting control over efficiency-accuracy trade-offs. Moreover, once trained, these models typically process tokens uniformly, regardless of their complexity, leading to static and inflexible behavior. In this pape...
2502.12326
Stability Bounds for Smooth Optimal Transport Maps and their Statistical Implications
math.ST cs.LG stat.ME stat.ML stat.TH
We study estimators of the optimal transport (OT) map between two probability distributions. We focus on plugin estimators derived from the OT map between estimates of the underlying distributions. We develop novel stability bounds for OT maps which generalize those in past work, and allow us to reduce the problem of...
2502.12327
Learning Plasma Dynamics and Robust Rampdown Trajectories with Predict-First Experiments at TCV
physics.plasm-ph cs.AI cs.LG cs.SY eess.SY
The rampdown in tokamak operations is a difficult-to-simulate phase during which the plasma is often pushed towards multiple instability limits. To address this challenge, and to reduce the risk of disrupting operations, we leverage recent advances in Scientific Machine Learning (SciML) to develop a neural state-space m...
2502.12328
LM Agents for Coordinating Multi-User Information Gathering
cs.CL cs.AI
This paper introduces PeopleJoin, a benchmark for evaluating LM-mediated collaborative problem solving. Given a user request, PeopleJoin agents must identify teammates who might be able to assist, converse with these teammates to gather information, and finally compile a useful answer or summary for the original user...
2502.12329
A Novel Unified Parametric Assumption for Nonconvex Optimization
cs.LG cs.AI math.OC stat.ML
Nonconvex optimization is central to modern machine learning, but the general framework of nonconvex optimization yields weak convergence guarantees that are too pessimistic compared to practice. On the other hand, while convexity enables efficient optimization, it is of limited applicability to many practical proble...
2502.12330
X-IL: Exploring the Design Space of Imitation Learning Policies
cs.RO cs.LG
Designing modern imitation learning (IL) policies requires making numerous decisions, including the selection of feature encoding, architecture, policy representation, and more. As the field rapidly advances, the range of available options continues to grow, creating a vast and largely unexplored design space for IL ...
2502.12337
Stochastic Real-Time Deception in Nash Equilibrium Seeking for Games with Quadratic Payoffs
eess.SY cs.SY
In multi-agent autonomous systems, deception is a fundamental concept which characterizes the exploitation of unbalanced information to mislead victims into choosing oblivious actions. This effectively alters the system's long term behavior, leading to outcomes that may be beneficial to the deceiver but detrimental t...
2502.12340
Understanding Silent Data Corruption in LLM Training
cs.LG cs.DC
As the scale of training large language models (LLMs) increases, one emergent failure is silent data corruption (SDC), where hardware produces incorrect computations without explicit failure signals. In this work, we are the first to investigate the impact of real-world SDCs on LLM training by comparing model trainin...
2502.12342
REAL-MM-RAG: A Real-World Multi-Modal Retrieval Benchmark
cs.IR cs.CV
Accurate multi-modal document retrieval is crucial for Retrieval-Augmented Generation (RAG), yet existing benchmarks do not fully capture real-world challenges with their current design. We introduce REAL-MM-RAG, an automatically generated benchmark designed to address four key properties essential for real-world ret...
2502.12343
Energy-Efficient Flat Precoding for MIMO Systems
cs.IT math.IT
This paper addresses the suboptimal energy efficiency of conventional digital precoding schemes in multiple-input multiple-output (MIMO) systems. Through an analysis of the power amplifier (PA) output power distribution associated with conventional precoders, it is observed that these power distributions can be quite...
2502.12346
QuZO: Quantized Zeroth-Order Fine-Tuning for Large Language Models
cs.LG cs.AI
Large Language Models (LLMs) are often quantized to lower precision to reduce the memory cost and latency of inference. However, quantization often degrades model performance, so fine-tuning is required for various downstream tasks. Traditional fine-tuning methods such as stochastic gradient descent and Adam optimizatio...
2502.12347
Improving Grip Stability Using Passive Compliant Microspine Arrays for Soft Robots in Unstructured Terrain
cs.RO
Microspines are small spines, commonly found on insect legs, that reinforce surface interaction by engaging with asperities to increase shear force and traction. An array of such microspines, when integrated into the limbs or undercarriage of a robot, can provide the ability to maneuver uneven terrains, travers...
2502.12350
Mamute: high-performance computing for geophysical methods
cs.CE
Due to their high computational cost, geophysical applications are typically designed to run on large computing systems. Consequently, such applications must implement several high-performance techniques to make better use of the available computational resources. In this paper, we present Mamute, a software package that delivers wave equa...
2502.12352
Towards Mechanistic Interpretability of Graph Transformers via Attention Graphs
cs.LG cs.AI
We introduce Attention Graphs, a new tool for mechanistic interpretability of Graph Neural Networks (GNNs) and Graph Transformers based on the mathematical equivalence between message passing in GNNs and the self-attention mechanism in Transformers. Attention Graphs aggregate attention matrices across Transformer lay...
2502.12353
Stability-based Generalization Bounds for Variational Inference
cs.LG
Variational inference (VI) is widely used for approximate inference in Bayesian machine learning. In addition to this practical success, generalization bounds for variational inference and related algorithms have been developed, mostly through the connection to PAC-Bayes analysis. A second line of work has provided a...
2502.12354
Human-centered explanation does not fit all: The interplay of sociotechnical, cognitive, and individual factors in the effect of AI explanations in algorithmic decision-making
cs.CY cs.AI cs.HC
Recent XAI studies have investigated what constitutes a "good" explanation in AI-assisted decision-making. Despite the widely accepted human-friendly properties of explanations, such as being contrastive and selective, existing studies have yielded inconsistent findings. To address these gaps, our study focuses on t...
2502.12355
Hovering Flight of Soft-Actuated Insect-Scale Micro Aerial Vehicles using Deep Reinforcement Learning
cs.RO cs.LG cs.SY eess.SY
Soft-actuated insect-scale micro aerial vehicles (IMAVs) pose unique challenges for designing robust and computationally efficient controllers. At the millimeter scale, fast robot dynamics ($\sim$ms), together with system delay, model uncertainty, and external disturbances significantly affect flight performances. He...
2502.12359
LanP: Rethinking the Impact of Language Priors in Large Vision-Language Models
cs.CV
Large Vision-Language Models (LVLMs) have shown impressive performance in various tasks. However, LVLMs suffer from hallucination, which hinders their adoption in the real world. Existing studies emphasized that the strong language priors of LVLMs can overpower visual information, causing hallucinations. However, the...
2502.12360
Detecting Systematic Weaknesses in Vision Models along Predefined Human-Understandable Dimensions
cs.CV cs.AI cs.LG
Studying systematic weaknesses of DNNs has gained prominence in the last few years with the rising focus on building safe AI systems. Slice discovery methods (SDMs) are prominent algorithmic approaches for finding such systematic weaknesses. They identify top-k semantically coherent slices/subsets of data where a DNN...
2502.12361
ConFit v2: Improving Resume-Job Matching using Hypothetical Resume Embedding and Runner-Up Hard-Negative Mining
cs.CL
A reliable resume-job matching system helps a company recommend suitable candidates from a pool of resumes and helps a job seeker find relevant jobs from a list of job posts. However, since job seekers apply only to a few jobs, interaction labels in resume-job datasets are sparse. We introduce ConFit v2, an improveme...
2502.12362
Classifiers of Data Sharing Statements in Clinical Trial Records
cs.CL cs.AI
Digital individual participant data (IPD) from clinical trials are increasingly distributed for potential scientific reuse. The identification of available IPD, however, requires interpretations of textual data-sharing statements (DSS) in large databases. Recent advancements in computational linguistics include pre-t...
2502.12365
On the Performance of Uplink Pinching Antenna Systems (PASS)
cs.IT math.IT
A pinching antenna (PA) is a flexible antenna composed of a waveguide and multiple dielectric particles, capable of intelligently reconfiguring wireless channels in line-of-sight links. By leveraging the unique features of PAs, we exploit the uplink (UL) transmission in pinching antenna systems (PASS). To comp...
2502.12366
ScriptoriumWS: A Code Generation Assistant for Weak Supervision
cs.LG
Weak supervision is a popular framework for overcoming the labeled data bottleneck: the need to obtain labels for training data. In weak supervision, multiple noisy-but-cheap sources are used to provide guesses of the label and are aggregated to produce high-quality pseudolabels. These sources are often expressed as ...
2502.12370
Positional Encoding in Transformer-Based Time Series Models: A Survey
cs.LG
Recent advancements in transformer-based models have greatly improved time series analysis, providing robust solutions for tasks such as forecasting, anomaly detection, and classification. A crucial element of these models is positional encoding, which allows transformers to capture the intrinsic sequential nature of...
2502.12371
IMLE Policy: Fast and Sample Efficient Visuomotor Policy Learning via Implicit Maximum Likelihood Estimation
cs.RO cs.AI cs.LG
Recent advances in imitation learning, particularly using generative modelling techniques like diffusion, have enabled policies to capture complex multi-modal action distributions. However, these methods often require large datasets and multiple inference steps for action generation, posing challenges in robotics whe...
2502.12372
Factual Inconsistency in Data-to-Text Generation Scales Exponentially with LLM Size: A Statistical Validation
cs.CL cs.AI cs.LG
Monitoring factual inconsistency is essential for ensuring trustworthiness in data-to-text generation (D2T). While large language models (LLMs) have demonstrated exceptional performance across various D2T tasks, previous studies on scaling laws have primarily focused on generalization error through power law scaling ...
2502.12373
Soft Robotics for Search and Rescue: Advancements, Challenges, and Future Directions
cs.RO cs.AI
Soft robotics has emerged as a transformative technology in Search and Rescue (SAR) operations, addressing challenges in navigating complex, hazardous environments that often limit traditional rigid robots. This paper critically examines advancements in soft robotic technologies tailored for SAR applications, focusin...
2502.12375
UltraGen: Extremely Fine-grained Controllable Generation via Attribute Reconstruction and Global Preference Optimization
cs.CL
Fine granularity is an essential requirement for controllable text generation, which has seen rapid growth with the ability of LLMs. However, existing methods focus mainly on a small set of attributes (typically 3 to 5), and their performance degrades significantly when the number of attributes increases to the next order o...
2502.12377
Alignment and Adversarial Robustness: Are More Human-Like Models More Secure?
cs.CV
Representational alignment refers to the extent to which a model's internal representations mirror biological vision, offering insights into both neural similarity and functional correspondence. Recently, some more aligned models have demonstrated higher resiliency to adversarial examples, raising the question of whe...
2502.12378
Pragmatics in the Era of Large Language Models: A Survey on Datasets, Evaluation, Opportunities and Challenges
cs.CL
Understanding pragmatics, the use of language in context, is crucial for developing NLP systems capable of interpreting nuanced language use. Despite recent advances in language technologies, including large language models, evaluating their ability to handle pragmatic phenomena such as implicatures and references rema...
2502.12379
OCT Data is All You Need: How Vision Transformers with and without Pre-training Benefit Imaging
cs.CV cs.LG
Optical Coherence Tomography (OCT) provides high-resolution cross-sectional images useful for diagnosing various diseases, but their distinct characteristics from natural images raise questions about whether large-scale pre-training on datasets like ImageNet is always beneficial. In this paper, we investigate the imp...
2502.12381
Linear Diffusion Networks: Harnessing Diffusion Processes for Global Interactions
cs.LG
Diffusion kernels capture global dependencies. We present Linear Diffusion Networks (LDNs), a novel architecture that reinterprets sequential data processing as a unified diffusion process. Our model integrates adaptive diffusion modules with localized nonlinear updates and a diffusion-inspired attention mechanism. T...
2502.12382
Hybrid Machine Learning Models for Intrusion Detection in IoT: Leveraging a Real-World IoT Dataset
cs.CR cs.AI
The rapid growth of the Internet of Things (IoT) has revolutionized industries, enabling unprecedented connectivity and functionality. However, this expansion also increases vulnerabilities, exposing IoT networks to increasingly sophisticated cyberattacks. Intrusion Detection Systems (IDS) are crucial for mitigating ...
2502.12383
Locally-Deployed Chain-of-Thought (CoT) Reasoning Model in Chemical Engineering: Starting from 30 Experimental Data
cs.LG stat.AP
In the field of chemical engineering, traditional data-processing and prediction methods face significant challenges. Machine learning and large language models (LLMs) also have their respective limitations. This paper explores the application of the Chain-of-Thought (CoT) reasoning model in chemical engineering, sta...
2502.12384
Scalable Back-Propagation-Free Training of Optical Physics-Informed Neural Networks
cs.LG
Physics-informed neural networks (PINNs) have shown promise in solving partial differential equations (PDEs), with growing interest in their energy-efficient, real-time training on edge devices. Photonic computing offers a potential solution to achieve this goal because of its ultra-high operation speed. However, the...
2502.12386
Bridging the Data Gap in AI Reliability Research and Establishing DR-AIR, a Comprehensive Data Repository for AI Reliability
stat.AP cs.AI
Artificial intelligence (AI) technology and systems have been advancing rapidly. However, ensuring the reliability of these systems is crucial for fostering public confidence in their use. This necessitates the modeling and analysis of reliability data specific to AI systems. A major challenge in AI reliability resea...
2502.12388
Achieving Upper Bound Accuracy of Joint Training in Continual Learning
cs.LG
Continual learning has been an active research area in machine learning, focusing on incrementally learning a sequence of tasks. A key challenge is catastrophic forgetting (CF), and most research efforts have been directed toward mitigating this issue. However, a significant gap remains between the accuracy achieved ...
2502.12391
Reward-Safety Balance in Offline Safe RL via Diffusion Regularization
cs.LG
Constrained reinforcement learning (RL) seeks high-performance policies under safety constraints. We focus on an offline setting where the agent has only a fixed dataset -- common in realistic tasks to prevent unsafe exploration. To address this, we propose Diffusion-Regularized Constrained Offline Reinforcement Lear...
2502.12393
Time Series Treatment Effects Analysis with Always-Missing Controls
stat.ME cs.AI cs.LG stat.ML
Estimating treatment effects in time series data presents a significant challenge, especially when the control group is always unobservable. For example, in analyzing the effects of Christmas on retail sales, we lack direct observation of what would have occurred in late December without the Christmas impact. To addr...
2502.12395
Efficient Neural SDE Training using Wiener-Space Cubature
cs.LG
A neural stochastic differential equation (SDE) is an SDE with drift and diffusion terms parametrized by neural networks. The training procedure for neural SDEs consists of optimizing the SDE vector field (neural network) parameters to minimize the expected value of an objective functional on infinite-dimensional pat...
2502.12396
Scientific Machine Learning of Flow Resistance Using Universal Shallow Water Equations with Differentiable Programming
physics.flu-dyn cs.CE cs.LG
Shallow water equations (SWEs) are the backbone of most hydrodynamics models for flood prediction, river engineering, and many other water resources applications. The estimation of flow resistance, i.e., the Manning's roughness coefficient $n$, is crucial for ensuring model accuracy, and has been previously determine...
2502.12397
Could AI Leapfrog the Web? Evidence from Teachers in Sierra Leone
cs.CY cs.AI cs.HC econ.GN q-fin.EC
Access to digital information is a driver of economic development. But although 85% of sub-Saharan Africa's population is covered by mobile broadband signal, only 37% use the internet, and those who do seldom use the web. We investigate whether AI can bridge this gap by analyzing how 469 teachers use an AI chatbot in...
2502.12398
Solving the Cold Start Problem on One's Own as an End User via Preference Transfer
cs.IR cs.AI cs.LG
We propose a new approach that enables end users to directly solve the cold start problem by themselves. The cold start problem is a common issue in recommender systems, and many methods have been proposed to address the problem on the service provider's side. However, when the service provider does not take action, ...
2502.12401
Risk Assessment of Transmission Lines Against Grid-ignited Wildfires
cs.CE
Wildfires ignited by power lines have become increasingly common over the past decade. Enhancing the operational and financial resilience of power grids against wildfires involves a multifaceted approach. Key proactive measures include meticulous vegetation management, strategic grid hardening such as infrastruct...
2502.12403
Sensing-based Robustness Challenges in Agricultural Robotic Harvesting
cs.RO cs.SY eess.SY
This paper presents the challenges agricultural robotic harvesters face in detecting and localising fruits under various environmental disturbances. In controlled laboratory settings, both the traditional HSV (Hue Saturation Value) transformation and the YOLOv8 (You Only Look Once) deep learning model were employed. ...
2502.12404
WMT24++: Expanding the Language Coverage of WMT24 to 55 Languages & Dialects
cs.CL
As large language models (LLMs) become more and more capable in languages other than English, it is important to collect benchmark datasets in order to evaluate their multilingual performance, including on tasks like machine translation (MT). In this work, we extend the WMT24 dataset to cover 55 languages by collectin...
2502.12405
An Investment Prioritization Model for Wildfire Risk Mitigation Through Power Line Undergrounding
cs.CE
Grid-ignited wildfires are one of the most destructive catastrophic events, profoundly affecting the built and natural environments. Burying power lines is an effective solution for mitigating the risk of wildfire ignition. However, it is a costly capital expenditure (CapEx) requiring meticulous planning and investme...
2502.12406
Multi-vision-based Picking Point Localisation of Target Fruit for Harvesting Robots
cs.RO cs.CV
This paper presents multi-vision-based localisation strategies for harvesting robots. Identifying picking points accurately is essential for robotic harvesting because insecure grasping can lead to economic loss through fruit damage and dropping. In this study, two multi-vision-based localisation methods, namely the ...
2502.12408
On the Robust Approximation of ASR Metrics
cs.CL
Recent advances in speech foundation models are largely driven by scaling both model size and data, enabling them to perform a wide range of tasks, including speech recognition. Traditionally, ASR models are evaluated using metrics like Word Error Rate (WER) and Character Error Rate (CER), which depend on ground trut...
2502.12411
Gradient Co-occurrence Analysis for Detecting Unsafe Prompts in Large Language Models
cs.CL cs.AI
Unsafe prompts pose significant safety risks to large language models (LLMs). Existing methods for detecting unsafe prompts rely on data-driven fine-tuning to train guardrail models, necessitating significant data and computational resources. In contrast, recent few-shot gradient-based methods have emerged, requiring only ...
2502.12412
Incomplete Graph Learning: A Comprehensive Survey
cs.LG eess.IV
Graph learning is a prevalent field that operates on ubiquitous graph data. Effective graph learning methods can extract valuable information from graphs. However, these methods are non-robust and affected by missing attributes in graphs, resulting in sub-optimal outcomes. This has led to the emergence of incomplete ...
2502.12413
DivIL: Unveiling and Addressing Over-Invariance for Out-of-Distribution Generalization
cs.LG
Out-of-distribution generalization is a common problem that expects a model to perform well on distributions different from, and even far from, the training data. A popular approach to addressing this issue is invariant learning (IL), in which the model is compelled to focus on invariant features instead of spurious features ...
2502.12414
Lost in Transcription, Found in Distribution Shift: Demystifying Hallucination in Speech Foundation Models
cs.CL
Speech foundation models trained at a massive scale, both in terms of model and data size, result in robust systems capable of performing multiple speech tasks, including automatic speech recognition (ASR). These models transcend language and domain barriers, yet effectively measuring their performance remains a chal...
2502.12415
Gaseous Object Detection
cs.CV
Object detection, a fundamental and challenging problem in computer vision, has experienced rapid development due to the effectiveness of deep learning. The current objects to be detected are mostly rigid solid substances with apparent and distinct visual characteristics. In this paper, we endeavor on a scarcely expl...
2502.12418
Boosting Illuminant Estimation in Deep Color Constancy through Enhancing Brightness Robustness
cs.CV cs.AI
Color constancy estimates illuminant chromaticity to correct color-biased images. Recently, Deep Neural Network-driven Color Constancy (DNNCC) models have made substantial advancements. Nevertheless, the potential risks in DNNCC due to the vulnerability of deep neural networks have not yet been explored. In this pape...
2502.12420
Sens-Merging: Sensitivity-Guided Parameter Balancing for Merging Large Language Models
cs.CL cs.AI
Recent advances in large language models have led to numerous task-specialized fine-tuned variants, creating a need for efficient model merging techniques that preserve specialized capabilities while avoiding costly retraining. While existing task vector-based merging methods show promise, they typically apply unifor...
2502.12421
Wi-Chat: Large Language Model Powered Wi-Fi Sensing
cs.CL
Recent advancements in Large Language Models (LLMs) have demonstrated remarkable capabilities across diverse tasks. However, their potential to integrate physical model knowledge for real-world signal interpretation remains largely unexplored. In this work, we introduce Wi-Chat, the first LLM-powered Wi-Fi-based huma...
2502.12425
Robust Disentangled Counterfactual Learning for Physical Audiovisual Commonsense Reasoning
cs.CV
In this paper, we propose a new Robust Disentangled Counterfactual Learning (RDCL) approach for physical audiovisual commonsense reasoning. The task aims to infer objects' physics commonsense based on both video and audio input, with the main challenge being how to imitate the reasoning ability of humans, even under ...
2502.12427
Multi Image Super Resolution Modeling for Earth System Models
cs.CV
Super-resolution (SR) techniques are essential for improving the spatial resolution of Earth System Model (ESM) data, which helps in better understanding complex environmental processes. This paper presents a new algorithm, ViFOR, which combines Vision Transformers (ViT) and Implicit Neural Representation Networks (INRs) to gen...
2502.12430
Bridge the Gaps between Machine Unlearning and AI Regulation
cs.LG cs.AI
The "right to be forgotten" and the data privacy laws that encode it have motivated machine unlearning since its earliest days. Now, an inbound wave of artificial intelligence regulations, such as the European Union's Artificial Intelligence Act (AIA), potentially offers important new use cases for machine unlearning. ...
2502.12435
A Survey on Large Language Models for Automated Planning
cs.AI cs.CL
The planning ability of Large Language Models (LLMs) has garnered increasing attention in recent years due to their remarkable capacity for multi-step reasoning and their ability to generalize across a wide range of domains. While some researchers emphasize the potential of LLMs to perform complex planning tasks, oth...
2502.12436
Should I Trust You? Detecting Deception in Negotiations using Counterfactual RL
cs.CL
An increasingly prevalent socio-technical problem is people being taken in by offers that sound "too good to be true", where persuasion and trust shape decision-making. This paper investigates how AI can help detect these deceptive scenarios. We analyze how humans strategically deceive each other in \textit{D...
2502.12442
HopRAG: Multi-Hop Reasoning for Logic-Aware Retrieval-Augmented Generation
cs.IR cs.CL
Retrieval-Augmented Generation (RAG) systems often struggle with imperfect retrieval, as traditional retrievers focus on lexical or semantic similarity rather than logical relevance. To address this, we propose HopRAG, a novel RAG framework that augments retrieval with logical reasoning through graph-structured knowl...
2502.12444
SparAMX: Accelerating Compressed LLMs Token Generation on AMX-powered CPUs
cs.LG cs.AI cs.AR cs.PF
Large language models have high compute, latency, and memory requirements. While specialized accelerators such as GPUs and TPUs typically run these workloads, CPUs are more widely available and consume less energy. Accelerating LLMs with CPUs enables broader AI access at a lower cost and power consumption. This accel...
2502.12445
Computational Safety for Generative AI: A Signal Processing Perspective
cs.AI cs.LG stat.ML
AI safety is a rapidly growing area of research that seeks to prevent the harm and misuse of frontier AI technology, particularly with respect to generative AI (GenAI) tools that are capable of creating realistic and high-quality content through text prompts. Examples of such tools include large language models (LLMs...
2502.12446
Multi-Attribute Steering of Language Models via Targeted Intervention
cs.CL cs.AI cs.LG
Inference-time intervention (ITI) has emerged as a promising method for steering large language model (LLM) behavior in a particular direction (e.g., improving helpfulness) by intervening on token representations without costly updates to the LLM's parameters. However, existing ITI approaches fail to scale to multi-a...
2502.12448
From Principles to Applications: A Comprehensive Survey of Discrete Tokenizers in Generation, Comprehension, Recommendation, and Information Retrieval
cs.IR
Discrete tokenizers have emerged as indispensable components in modern machine learning systems, particularly within the context of autoregressive modeling and large language models (LLMs). These tokenizers serve as the critical interface that transforms raw, unstructured data from diverse modalities into discrete to...
2502.12449
YUNet: Improved YOLOv11 Network for Skyline Detection
cs.CV
Skyline detection plays an important role in geolocalization, flight control, visual navigation, port security, etc. The appearance of sky and non-sky areas varies with weather and illumination, which brings challenges to skyline detection. In this research, we proposed the YUNet
2502.12450
Investigating and Extending Homans' Social Exchange Theory with Large Language Model based Agents
cs.AI
Homans' Social Exchange Theory (SET) is widely recognized as a basic framework for understanding the formation and emergence of human civilizations and social structures. In social science, this theory is typically studied based on simple simulation experiments or real-world human studies, both of which either lack r...
2502.12453
UniMatch: Universal Matching from Atom to Task for Few-Shot Drug Discovery
cs.LG cs.AI q-bio.BM
Drug discovery is crucial for identifying candidate drugs for various diseases. However, its low success rate often results in a scarcity of annotations, posing a few-shot learning problem. Existing methods primarily focus on single-scale features, overlooking the hierarchical molecular structures that determine diffe...
2502.12454
Benchmarking Zero-Shot Facial Emotion Annotation with Large Language Models: A Multi-Class and Multi-Frame Approach in DailyLife
cs.CV cs.AI cs.LG
This study investigates the feasibility and performance of using large language models (LLMs) to automatically annotate human emotions in everyday scenarios. We conducted experiments on the DailyLife subset of the publicly available FERV39k dataset, employing the GPT-4o-mini model for rapid, zero-shot labeling of key...
2502.12455
DSMoE: Matrix-Partitioned Experts with Dynamic Routing for Computation-Efficient Dense LLMs
cs.CL
As large language models continue to scale, computational costs and resource consumption have emerged as significant challenges. While existing sparsification methods like pruning reduce computational overhead, they risk losing model knowledge through parameter removal. This paper proposes DSMoE (Dynamic Sparse Mixtu...
2502.12456
Not-So-Optimal Transport Flows for 3D Point Cloud Generation
cs.CV cs.AI
Learning generative models of 3D point clouds is one of the fundamental problems in 3D generative learning. One of the key properties of point clouds is their permutation invariance, i.e., changing the order of points in a point cloud does not change the shape they represent. In this paper, we analyze the recently pr...
2502.12458
An Empirical Evaluation of Encoder Architectures for Fast Real-Time Long Conversational Understanding
cs.CL
Analyzing long text data such as customer call transcripts is a cost-intensive and tedious task. Machine learning methods, namely Transformers, are leveraged to model agent-customer interactions. Unfortunately, Transformers adhere to fixed-length architectures and their self-attention mechanism scales quadratically w...
2502.12459
Stress Testing Generalization: How Minor Modifications Undermine Large Language Model Performance
cs.CL cs.AI cs.LG
This paper investigates the fragility of Large Language Models (LLMs) in generalizing to novel inputs, specifically focusing on minor perturbations in well-established benchmarks (e.g., slight changes in question format or distractor length). Despite high benchmark scores, LLMs exhibit significant accuracy drops and ...
2502.12460
LMN: A Tool for Generating Machine Enforceable Policies from Natural Language Access Control Rules using LLMs
cs.CR cs.LG
Organizations often lay down rules or guidelines called Natural Language Access Control Policies (NLACPs) for specifying who gets access to which information and when. However, these cannot be directly used in a target access control model like Attribute-based Access Control (ABAC). Manually translating the NLACP rul...
2502.12462
Emulating Retrieval Augmented Generation via Prompt Engineering for Enhanced Long Context Comprehension in LLMs
cs.CL
This paper addresses the challenge of comprehending very long contexts in Large Language Models (LLMs) by proposing a method that emulates Retrieval Augmented Generation (RAG) through specialized prompt engineering and chain-of-thought (CoT) reasoning. While recent LLMs support over 100,000 tokens in a single prompt,...
2502.12464
SafeRoute: Adaptive Model Selection for Efficient and Accurate Safety Guardrails in Large Language Models
cs.CL
Deploying large language models (LLMs) in real-world applications requires robust safety guard models to detect and block harmful user prompts. While large safety guard models achieve strong performance, their computational cost is substantial. To mitigate this, smaller distilled models are used, but they often under...
2502.12465
Computational-Statistical Tradeoffs at the Next-Token Prediction Barrier: Autoregressive and Imitation Learning under Misspecification
cs.LG cs.DS
Next-token prediction with the logarithmic loss is a cornerstone of autoregressive sequence modeling, but, in practice, suffers from error amplification, where errors in the model compound and generation quality degrades as sequence length $H$ increases. From a theoretical perspective, this phenomenon should not appe...
2502.12466
EquiBench: Benchmarking Code Reasoning Capabilities of Large Language Models via Equivalence Checking
cs.LG cs.AI cs.CL cs.PL cs.SE
Equivalence checking, i.e., determining whether two programs produce identical outputs for all possible inputs, underpins a broad range of applications, including software refactoring, testing, and optimization. We present the task of equivalence checking as a new way to evaluate the code reasoning abilities of large...
2502.12468
MCTS-Judge: Test-Time Scaling in LLM-as-a-Judge for Code Correctness Evaluation
cs.LG cs.AI
The LLM-as-a-Judge paradigm shows promise for evaluating generative content but lacks reliability in reasoning-intensive scenarios, such as programming. Inspired by recent advances in reasoning models and shifts in scaling laws, we pioneer bringing test-time computation into LLM-as-a-Judge, proposing MCTS-Judge, a re...
2502.12470
Reasoning on a Spectrum: Aligning LLMs to System 1 and System 2 Thinking
cs.CL
Large Language Models (LLMs) exhibit impressive reasoning abilities, yet their reliance on structured step-by-step processing reveals a critical limitation. While human cognition fluidly adapts between intuitive, heuristic (System 1) and analytical, deliberative (System 2) reasoning depending on the context, LLMs lac...
2502.12476
CoCo-CoLa: Evaluating Language Adherence in Multilingual LLMs
cs.CL
Multilingual Large Language Models (LLMs) develop cross-lingual abilities despite being trained on limited parallel data. However, they often struggle to generate responses in the intended language, favoring high-resource languages such as English. In this work, we introduce CoCo-CoLa (Correct Concept - Correct Langu...
2502.12477
Savaal: Scalable Concept-Driven Question Generation to Enhance Human Learning
cs.CL
Assessing and enhancing human learning through question-answering is vital, yet automating this process remains challenging. While large language models (LLMs) excel at summarization and query responses, their ability to generate meaningful questions for learners is underexplored. We propose Savaal, a scalable ques...
2502.12478
MSE-Adapter: A Lightweight Plugin Endowing LLMs with the Capability to Perform Multimodal Sentiment Analysis and Emotion Recognition
cs.CL
Current Multimodal Sentiment Analysis (MSA) and Emotion Recognition in Conversations (ERC) methods based on pre-trained language models exhibit two primary limitations: 1) Once trained for MSA and ERC tasks, these pre-trained language models lose their original generalized capabilities. 2) They demand considerable ...
2502.12479
MotifBench: A standardized protein design benchmark for motif-scaffolding problems
cs.LG q-bio.BM
The motif-scaffolding problem is a central task in computational protein design: Given the coordinates of atoms in a geometry chosen to confer a desired biochemical function (a motif), the task is to identify diverse protein structures (scaffolds) that include the motif and maintain its geometry. Significant recent p...
2502.12481
Predicate Hierarchies Improve Few-Shot State Classification
cs.CV cs.AI cs.LG cs.RO
State classification of objects and their relations is core to many long-horizon tasks, particularly in robot planning and manipulation. However, the combinatorial explosion of possible object-predicate combinations, coupled with the need to adapt to novel real-world environments, makes it a desideratum for state cla...
2502.12483
The Knowledge Microscope: Features as Better Analytical Lenses than Neurons
cs.CL
Previous studies primarily utilize MLP neurons as units of analysis for understanding the mechanisms of factual knowledge in Language Models (LMs); however, neurons suffer from polysemanticity, leading to limited knowledge expression and poor interpretability. In this paper, we first conduct preliminary experiments t...
2502.12484
LocalEscaper: A Weakly-supervised Framework with Regional Reconstruction for Scalable Neural TSP Solvers
cs.LG cs.AI
Neural solvers have shown significant potential in solving the Traveling Salesman Problem (TSP), yet current approaches face significant challenges. Supervised learning (SL)-based solvers require large amounts of high-quality labeled data, while reinforcement learning (RL)-based solvers, though less dependent on such...
2502.12485
Safe at the Margins: A General Approach to Safety Alignment in Low-Resource English Languages -- A Singlish Case Study
cs.CL cs.AI
To ensure safe usage, Large Language Models (LLMs) typically undergo alignment with human-defined values. However, this alignment often relies on primarily English data and is biased towards Western-centric values, limiting its effectiveness in low-resource language settings. In this paper, we describe our approach f...
2502.12486
EPO: Explicit Policy Optimization for Strategic Reasoning in LLMs via Reinforcement Learning
cs.CL
Large Language Models (LLMs) have shown impressive reasoning capabilities in well-defined problems with clear solutions, such as mathematics and coding. However, they still struggle with complex real-world scenarios like business negotiations, which require strategic reasoning-an ability to navigate dynamic environme...
2502.12488
Enhancing Audio-Visual Spiking Neural Networks through Semantic-Alignment and Cross-Modal Residual Learning
cs.CV
Humans interpret and perceive the world by integrating sensory information from multiple modalities, such as vision and hearing. Spiking Neural Networks (SNNs), as brain-inspired computational models, exhibit unique advantages in emulating the brain's information processing mechanisms. However, existing SNN models pr...
2502.12489
A Comprehensive Survey on Generative AI for Video-to-Music Generation
eess.AS cs.AI cs.MM
The burgeoning growth of video-to-music generation can be attributed to the ascendancy of multimodal generative models. However, there is a lack of literature that comprehensively combs through the work in this field. To fill this gap, this paper presents a comprehensive review of video-to-music generation using deep...
2502.12490
UniGenCoder: Merging Seq2Seq and Seq2Tree Paradigms for Unified Code Generation
cs.CL
Deep learning-based code generation has completely transformed the way developers write programs today. Existing approaches to code generation have focused either on the Sequence-to-Sequence paradigm, which generates target code as a sequence of tokens, or the Sequence-to-Tree paradigm, which outputs code as a sequen...
2502.12492
Boost, Disentangle, and Customize: A Robust System2-to-System1 Pipeline for Code Generation
cs.AI
Large language models (LLMs) have demonstrated remarkable capabilities in various domains, particularly in system 1 tasks, yet the intricacies of their problem-solving mechanisms in system 2 tasks are not sufficiently explored. Recent research on System2-to-System1 methods has surged, exploring the System 2 reasoning know...
2502.12493
Optimal and Almost Optimal Locally Repairable Codes from Hyperelliptic Curves
cs.IT math.IT
Locally repairable codes are widely applicable in contemporary large-scale distributed cloud storage systems and various other areas. By making use of some algebraic structures of elliptic curves, Li et al. developed a series of $q$-ary optimal locally repairable codes with lengths that can extend to $q+2\sqrt{q}$. I...
2502.12494
EDGE: Efficient Data Selection for LLM Agents via Guideline Effectiveness
cs.LG cs.AI
Large Language Models (LLMs) have shown remarkable capabilities as AI agents. However, existing methods for enhancing LLM-agent abilities often lack a focus on data quality, leading to inefficiencies and suboptimal results in both fine-tuning and prompt engineering. To address this issue, we introduce EDGE, a novel a...
2502.12498
USPilot: An Embodied Robotic Assistant Ultrasound System with Large Language Model Enhanced Graph Planner
cs.RO
In the era of Large Language Models (LLMs), embodied artificial intelligence presents transformative opportunities for robotic manipulation tasks. Ultrasound imaging, a widely used and cost-effective medical diagnostic procedure, faces challenges due to the global shortage of professional sonographers. To address thi...