Dataset columns (value type and string length range per field):

id: string, 9-16 characters
title: string, 4-278 characters
categories: string, 5-104 characters
abstract: string, 6-4.09k characters
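The records below can be modeled with a small container type. This is a minimal sketch, assuming the four columns listed above; the `ArxivRecord` class name and the `category_list` helper are illustrative choices, not part of the dataset itself. The `categories` field is a space-separated list of arXiv category codes, so splitting on whitespace recovers the individual labels.

```python
from dataclasses import dataclass

@dataclass
class ArxivRecord:
    id: str          # arXiv identifier, e.g. "2502.11705" (9-16 chars)
    title: str       # paper title (4-278 chars)
    categories: str  # space-separated arXiv category codes (5-104 chars)
    abstract: str    # abstract text (6-4.09k chars; previews below are truncated)

    def category_list(self) -> list[str]:
        # Split the space-separated category string into individual codes.
        return self.categories.split()

# Example built from one of the records below.
rec = ArxivRecord(
    id="2502.11705",
    title="LLM Agents Making Agent Tools",
    categories="cs.CL cs.AI cs.LG cs.MA",
    abstract="Tool use has turned large language models (LLMs) into powerful agents...",
)
print(rec.category_list())  # ['cs.CL', 'cs.AI', 'cs.LG', 'cs.MA']
```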
2502.11688
From Isolates to Families: Using Neural Networks for Automated Language Affiliation
cs.CL
In historical linguistics, the affiliation of languages to a common language family is traditionally carried out using a complex workflow that relies on manually comparing individual languages. Large-scale standardized collections of multilingual wordlists and grammatical language structures might help to improve thi...
2502.11689
Improve LLM-as-a-Judge Ability as a General Ability
cs.CL
LLM-as-a-Judge leverages the generative and reasoning capabilities of large language models (LLMs) to evaluate LLM responses across diverse scenarios, providing accurate preference signals. This approach plays a vital role in aligning LLMs with human values, ensuring ethical and reliable AI outputs that align with so...
2502.11697
MVTokenFlow: High-quality 4D Content Generation using Multiview Token Flow
cs.CV
In this paper, we present MVTokenFlow for high-quality 4D content creation from monocular videos. Recent advancements in generative models such as video diffusion models and multiview diffusion models enable us to create videos or 3D models. However, extending these generative models for dynamic 4D content creation i...
2502.11703
CMQCIC-Bench: A Chinese Benchmark for Evaluating Large Language Models in Medical Quality Control Indicator Calculation
cs.CL
Medical quality control indicators are essential to assess the qualifications of healthcare institutions for medical services. With the impressive performance of large language models (LLMs) like GPT-4 in the medical field, leveraging these technologies for the Medical Quality Control Indicator Calculation (MQCIC) pr...
2502.11705
LLM Agents Making Agent Tools
cs.CL cs.AI cs.LG cs.MA
Tool use has turned large language models (LLMs) into powerful agents that can perform complex multi-step tasks by dynamically utilising external software components. However, these tools must be implemented in advance by human developers, hindering the applicability of LLM agents in domains which demand large number...
2502.11707
Ad-hoc Concept Forming in the Game Codenames as a Means for Evaluating Large Language Models
cs.CL
This study utilizes the game Codenames as a benchmarking tool to evaluate large language models (LLMs) with respect to specific linguistic and cognitive skills. LLMs play each side of the game, where one side generates a clue word covering several target words and the other guesses those target words. We designed var...
2502.11710
The Worse The Better: Content-Aware Viewpoint Generation Network for Projection-related Point Cloud Quality Assessment
cs.CV
Through experimental studies, however, we observed the instability of final predicted quality scores, which change significantly over different viewpoint settings. Inspired by the "wooden barrel theory", given the default content-independent viewpoints of existing projection-related PCQA approaches, this paper presen...
2502.11711
Knowledge-aware contrastive heterogeneous molecular graph learning
cs.LG cs.AI
Molecular representation learning is pivotal in predicting molecular properties and advancing drug design. Traditional methodologies, which predominantly rely on homogeneous graph encoding, are limited by their inability to integrate external knowledge and represent molecular structures across different levels of gra...
2502.11712
Component-aware Unsupervised Logical Anomaly Generation for Industrial Anomaly Detection
cs.CV
Anomaly detection is critical in industrial manufacturing for ensuring product quality and improving efficiency in automated processes. The scarcity of anomalous samples limits traditional detection methods, making anomaly generation essential for expanding the data repository. However, recent generative models often...
2502.11713
Nonlinearity Cancellation Based on Optimized First Order Perturbative Kernels
cs.IT math.IT
The potential offered by interference cancellation based on optimized regular perturbation kernels of the Manakov equation is studied. Theoretical gains of up to 2.5 dB in effective SNR are demonstrated.
2502.11715
Proactive Depot Discovery: A Generative Framework for Flexible Location-Routing
cs.LG cs.AI
The Location-Routing Problem (LRP), which combines the challenges of facility (depot) locating and vehicle route planning, is critically constrained by the reliance on predefined depot candidates, limiting the solution space and potentially leading to suboptimal outcomes. Previous research on LRP without predefined d...
2502.11718
ChineseSimpleVQA -- "See the World, Discover Knowledge": A Chinese Factuality Evaluation for Large Vision Language Models
cs.CL cs.CV
The evaluation of factual accuracy in large vision language models (LVLMs) has lagged behind their rapid development, making it challenging to fully reflect these models' knowledge capacity and reliability. In this paper, we introduce the first factuality-based visual question-answering benchmark in Chinese, named Ch...
2502.11720
Can you pass that tool?: Implications of Indirect Speech in Physical Human-Robot Collaboration
cs.HC cs.RO
Indirect speech acts (ISAs) are a natural pragmatic feature of human communication, allowing requests to be conveyed implicitly while maintaining subtlety and flexibility. Although advancements in speech recognition have enabled natural language interactions with robots through direct, explicit commands -- providing c...
2502.11721
Enhancing Recommendation Explanations through User-Centric Refinement
cs.IR
Generating natural language explanations for recommendations has become increasingly important in recommender systems. Traditional approaches typically treat user reviews as ground truth for explanations and focus on improving review prediction accuracy by designing various model architectures. However, due to limita...
2502.11723
Energy-Conscious LLM Decoding: Impact of Text Generation Strategies on GPU Energy Consumption
cs.AI
Decoding strategies significantly influence the quality and diversity of the generated texts in large language models (LLMs), yet their impact on computational resource consumption, particularly GPU energy usage, is insufficiently studied. This paper investigates the relationship between text generation decoding meth...
2502.11724
Incomplete Modality Disentangled Representation for Ophthalmic Disease Grading and Diagnosis
cs.CV
Ophthalmologists typically require multimodal data sources to improve diagnostic accuracy in clinical decisions. However, due to medical device shortages, low-quality data and data privacy concerns, missing data modalities are common in real-world scenarios. Existing deep learning methods tend to address it by learni...
2502.11725
Adversarially Robust CLIP Models Can Induce Better (Robust) Perceptual Metrics
cs.CV cs.LG
Measuring perceptual similarity is a key tool in computer vision. In recent years perceptual metrics based on features extracted from neural networks with large and diverse training sets, e.g. CLIP, have become popular. At the same time, the metrics extracted from features of neural networks are not adversarially rob...
2502.11726
No-reference geometry quality assessment for colorless point clouds via list-wise rank learning
cs.CV
Geometry quality assessment (GQA) of colorless point clouds is crucial for evaluating the performance of emerging point cloud-based solutions (e.g., watermarking, compression, and 3-Dimensional (3D) reconstruction). Unfortunately, existing objective GQA approaches are traditional full-reference metrics, whereas state...
2502.11728
Matrix Low-dimensional Qubit Casting Based Quantum Electromagnetic Transient Network Simulation Program
quant-ph cs.SY eess.SY
In modern power systems, the integration of converter-interfaced generations requires the development of electromagnetic transient network simulation programs (EMTP) that can capture rapid fluctuations. However, as the power system scales, the EMTP's computing complexity increases exponentially, leading to a curse of...
2502.11731
GraphMorph: Tubular Structure Extraction by Morphing Predicted Graphs
cs.CV
Accurately restoring topology is both challenging and crucial in tubular structure extraction tasks, such as blood vessel segmentation and road network extraction. Diverging from traditional approaches based on pixel-level classification, our proposed method, named GraphMorph, focuses on branch-level features of tubu...
2502.11733
Plant in Cupboard, Orange on Table, Book on Shelf. Benchmarking Practical Reasoning and Situation Modelling in a Text-Simulated Situated Environment
cs.CL
Large language models (LLMs) have risen to prominence as 'chatbots' for users to interact via natural language. However, their abilities to capture common-sense knowledge make them seem promising as language-based planners of situated or embodied action as well. We have implemented a simple text-based environment -- ...
2502.11735
MT-RAIG: Novel Benchmark and Evaluation Framework for Retrieval-Augmented Insight Generation over Multiple Tables
cs.CL
Recent advancements in table-based reasoning have expanded beyond factoid-level QA to address insight-level tasks, where systems should synthesize implicit knowledge in the table to provide explainable analyses. Although effective, existing studies remain confined to scenarios where a single gold table is given along...
2502.11736
ReviewEval: An Evaluation Framework for AI-Generated Reviews
cs.CL cs.AI
The escalating volume of academic research, coupled with a shortage of qualified reviewers, necessitates innovative approaches to peer review. While large language models (LLMs) offer potential for automating this process, their current limitations include superficial critiques, hallucinations, and a lack of actionabl...
2502.11740
Mitigating Visual Knowledge Forgetting in MLLM Instruction-tuning via Modality-decoupled Gradient Descent
cs.LG cs.CV
Recent MLLMs have shown emerging visual understanding and reasoning abilities after being pre-trained on large-scale multimodal datasets. Unlike pre-training, where MLLMs receive rich visual-text alignment, instruction-tuning is often text-driven with weaker visual supervision, leading to the degradation of pre-train...
2502.11741
SQL-o1: A Self-Reward Heuristic Dynamic Search Method for Text-to-SQL
cs.DB cs.AI
The Text-to-SQL (Text2SQL) task aims to convert natural language queries into executable SQL queries. Thanks to the application of large language models (LLMs), significant progress has been made in this field. However, challenges such as model scalability, limited generation space, and coherence issues in SQL generat...
2502.11742
Range and Bird's Eye View Fused Cross-Modal Visual Place Recognition
cs.CV
Image-to-point cloud cross-modal Visual Place Recognition (VPR) is a challenging task where the query is an RGB image, and the database samples are LiDAR point clouds. Compared to single-modal VPR, this approach benefits from the widespread availability of RGB cameras and the robustness of point clouds in providing a...
2502.11743
Robust Partial-Label Learning by Leveraging Class Activation Values
cs.LG stat.ML
Real-world training data is often noisy; for example, human annotators assign conflicting class labels to the same instances. Partial-label learning (PLL) is a weakly supervised learning paradigm that allows training classifiers in this context without manual data cleaning. While state-of-the-art methods have good pr...
2502.11744
FUNCTO: Function-Centric One-Shot Imitation Learning for Tool Manipulation
cs.RO cs.CV
Learning tool use from a single human demonstration video offers a highly intuitive and efficient approach to robot teaching. While humans can effortlessly generalize a demonstrated tool manipulation skill to diverse tools that support the same function (e.g., pouring with a mug versus a teapot), current one-shot imi...
2502.11747
Open-Ended and Knowledge-Intensive Video Question Answering
cs.IR
Video question answering that requires external knowledge beyond the visual content remains a significant challenge in AI systems. While models can effectively answer questions based on direct visual observations, they often falter when faced with questions requiring broader contextual knowledge. To address this limi...
2502.11748
ILIAS: Instance-Level Image retrieval At Scale
cs.CV
This work introduces ILIAS, a new test dataset for Instance-Level Image retrieval At Scale. It is designed to evaluate the ability of current and future foundation models and retrieval techniques to recognize particular objects. The key benefits over existing datasets include large scale, domain diversity, accurate g...
2502.11749
JotlasNet: Joint Tensor Low-Rank and Attention-based Sparse Unrolling Network for Accelerating Dynamic MRI
cs.CV cs.AI
Joint low-rank and sparse unrolling networks have shown superior performance in dynamic MRI reconstruction. However, existing works mainly utilized matrix low-rank priors, neglecting the tensor characteristics of dynamic MRI images, and only a global threshold is applied for the sparse constraint to the multi-channel...
2502.11751
Language Models Can See Better: Visual Contrastive Decoding For LLM Multimodal Reasoning
cs.CV cs.AI
Although Large Language Models (LLMs) excel in reasoning and generation for language tasks, they are not specifically designed for multimodal challenges. Training Multimodal Large Language Models (MLLMs), however, is resource-intensive and constrained by various training limitations. In this paper, we propose the Mod...
2502.11752
Early Detection of Human Handover Intentions in Human-Robot Collaboration: Comparing EEG, Gaze, and Hand Motion
cs.RO cs.HC
Human-robot collaboration (HRC) relies on accurate and timely recognition of human intentions to ensure seamless interactions. Among common HRC tasks, human-to-robot object handovers have been studied extensively for planning the robot's actions during object reception, assuming the human intention for object handove...
2502.11753
HintsOfTruth: A Multimodal Checkworthiness Detection Dataset with Real and Synthetic Claims
cs.AI
Misinformation can be countered with fact-checking, but the process is costly and slow. Identifying checkworthy claims is the first step, where automation can help scale fact-checkers' efforts. However, detection methods struggle with content that is 1) multimodal, 2) from diverse domains, and 3) synthetic. We introd...
2502.11756
On the Computation of the Fisher Information in Continual Learning
cs.LG cs.AI cs.CV stat.ML
One of the most popular methods for continual learning with deep neural networks is Elastic Weight Consolidation (EWC), which involves computing the Fisher Information. The exact way in which the Fisher Information is computed is however rarely described, and multiple different implementations for it can be found onl...
2502.11763
Lightweight Deepfake Detection Based on Multi-Feature Fusion
cs.CV cs.AI
Deepfake technology utilizes deep learning-based face manipulation techniques to seamlessly replace faces in videos, creating highly realistic but artificially generated content. Although this technology has beneficial applications in media and entertainment, misuse of its capabilities may lead to serious risks includi...
2502.11766
Warmup-Distill: Bridge the Distribution Mismatch between Teacher and Student before Knowledge Distillation
cs.CL
The widespread deployment of Large Language Models (LLMs) is hindered by their high computational demands, making knowledge distillation (KD) crucial for developing compact models. However, conventional KD methods suffer from a distribution mismatch between the teacher and student models, leading to the po...
2502.11767
From Selection to Generation: A Survey of LLM-based Active Learning
cs.LG cs.CL
Active Learning (AL) has been a powerful paradigm for improving model efficiency and performance by selecting the most informative data points for labeling and training. In recent active learning frameworks, Large Language Models (LLMs) have been employed not only for selection but also for generating entirely new da...
2502.11770
Cognitive-Aligned Document Selection for Retrieval-augmented Generation
cs.AI
Large language models (LLMs) inherently display hallucinations since the precision of generated texts cannot be guaranteed purely by the parametric knowledge they include. Although retrieval-augmented generation (RAG) systems enhance the accuracy and reliability of generative models by incorporating external document...
2502.11771
The Validation Gap: A Mechanistic Analysis of How Language Models Compute Arithmetic but Fail to Validate It
cs.CL cs.AI
The ability of large language models (LLMs) to validate their output and identify potential errors is crucial for ensuring robustness and reliability. However, current research indicates that LLMs struggle with self-correction, encountering significant challenges in detecting errors. While studies have explored metho...
2502.11774
Interpretable Machine Learning for Kronecker Coefficients
cs.LG math.CO math.RT stat.ML
We analyze the saliency of neural networks and employ interpretable machine learning models to predict whether the Kronecker coefficients of the symmetric group are zero or not. Our models use triples of partitions as input features, as well as b-loadings derived from the principal component of an embedding that capt...
2502.11775
video-SALMONN-o1: Reasoning-enhanced Audio-visual Large Language Model
cs.CV
While recent advancements in reasoning optimization have significantly enhanced the capabilities of large language models (LLMs), existing efforts to improve reasoning have been limited to solving mathematical problems and focusing on visual graphical inputs, neglecting broader applications in general video understan...
2502.11777
Deep Neural Networks for Accurate Depth Estimation with Latent Space Features
cs.CV cs.AI
Depth estimation plays a pivotal role in advancing human-robot interactions, especially in indoor environments where accurate 3D scene reconstruction is essential for tasks like navigation and object handling. Monocular depth estimation, which relies on a single RGB camera, offers a more affordable solution compared ...
2502.11778
Private Synthetic Graph Generation and Fused Gromov-Wasserstein Distance
stat.ML cs.DS cs.LG math.PR
Networks are popular for representing complex data. In particular, differentially private synthetic networks are much in demand for method and algorithm development. The network generator should be easy to implement and should come with theoretical guarantees. Here we start with complex data as input and jointly prov...
2502.11779
Efficient Response Generation Method Selection for Fine-Tuning Large Language Models
cs.CL
The training data for fine-tuning large language models (LLMs) is typically structured as input-output pairs. However, for many tasks, there can be multiple equally valid output variations for the same input. Recent studies have observed that the choice of output variation used in training can affect the model's perf...
2502.11785
Changing the Rules of the Game: Reasoning about Dynamic Phenomena in Multi-Agent Systems
cs.LO cs.MA
The design and application of multi-agent systems (MAS) require reasoning about the effects of modifications on their underlying structure. In particular, such changes may impact the satisfaction of system specifications and the strategic abilities of their autonomous components. In this paper, we are concerned with ...
2502.11789
Personality Editing for Language Models through Relevant Knowledge Editing
cs.CL
Large Language Models (LLMs) play a vital role in applications like conversational agents and content creation, where controlling a model's personality is crucial for maintaining tone, consistency, and engagement. However, traditional prompt-based techniques for controlling personality often fall short, as they do no...
2502.11799
Table-Critic: A Multi-Agent Framework for Collaborative Criticism and Refinement in Table Reasoning
cs.AI cs.CL
Despite the remarkable capabilities of large language models (LLMs) in various reasoning tasks, they still struggle with table reasoning tasks, particularly in maintaining consistency throughout multi-step reasoning processes. While existing approaches have explored various decomposition strategies, they often lack e...
2502.11800
Residual Learning towards High-fidelity Vehicle Dynamics Modeling with Transformer
cs.RO
The vehicle dynamics model serves as a vital component of autonomous driving systems, as it describes the temporal changes in vehicle state. In a long period, researchers have made significant endeavors to accurately model vehicle dynamics. Traditional physics-based methods employ mathematical formulae to model vehic...
2502.11801
3D Gaussian Inpainting with Depth-Guided Cross-View Consistency
cs.CV cs.LG
When performing 3D inpainting using novel-view rendering methods like Neural Radiance Field (NeRF) or 3D Gaussian Splatting (3DGS), how to achieve texture and geometry consistency across camera views has been a challenge. In this paper, we propose a framework of 3D Gaussian Inpainting with Depth-Guided Cross-View Con...
2502.11806
Exploring Translation Mechanism of Large Language Models
cs.CL
Large language models (LLMs) have succeeded remarkably in multilingual translation tasks. However, the inherent translation mechanisms of LLMs remain poorly understood, largely due to sophisticated architectures and vast parameter scales. In response to this issue, this study explores the translation mechanism of LLM...
2502.11809
Revealing Bias Formation in Deep Neural Networks Through the Geometric Mechanisms of Human Visual Decoupling
cs.CV cs.AI
Deep neural networks (DNNs) often exhibit biases toward certain categories during object recognition, even under balanced training data conditions. The intrinsic mechanisms underlying these biases remain unclear. Inspired by the human visual system, which decouples object manifolds through hierarchical processing to ...
2502.11811
FineFilter: A Fine-grained Noise Filtering Mechanism for Retrieval-Augmented Large Language Models
cs.CL
Retrieved documents containing noise will hinder Retrieval-Augmented Generation (RAG) from detecting answer clues, necessitating noise filtering mechanisms to enhance accuracy. Existing methods use re-ranking or summarization to identify the most relevant sentences, but directly and accurately locating answer clues f...
2502.11812
Towards Understanding Fine-Tuning Mechanisms of LLMs via Circuit Analysis
cs.CL cs.AI cs.LG
Fine-tuning significantly improves the performance of Large Language Models (LLMs), yet its underlying mechanisms remain poorly understood. This paper aims to provide an in-depth interpretation of the fine-tuning process through circuit analysis, a popular tool in Mechanistic Interpretability (MI). Unlike previous st...
2502.11816
IMTS-Mixer: Mixer-Networks for Irregular Multivariate Time Series Forecasting
cs.LG
Forecasting Irregular Multivariate Time Series (IMTS) has recently emerged as a distinct research field, necessitating specialized models to address its unique challenges. While most forecasting literature assumes regularly spaced observations without missing values, many real-world datasets - particularly in healthc...
2502.11817
AAKT: Enhancing Knowledge Tracing with Alternate Autoregressive Modeling
cs.AI cs.CY cs.LG
Knowledge Tracing (KT) aims to predict students' future performances based on their former exercises and additional information in educational settings. KT has received significant attention since it facilitates personalized experiences in educational situations. Simultaneously, the autoregressive modeling on the seq...
2502.11821
Unitary orthonormal bases of finite dimensional inclusions
math.OA cs.IT math-ph math.IT math.MP quant-ph
We study unitary orthonormal bases in the sense of Pimsner and Popa for inclusions $(\mathcal{B}\subseteq \mathcal{A}, E),$ where $\mathcal{A}, \mathcal{B}$ are finite dimensional von Neumann algebras and $E$ is a conditional expectation map from $\mathcal{A}$ onto $\mathcal{B}$. It is shown that existence of such ba...
2502.11824
M-ABSA: A Multilingual Dataset for Aspect-Based Sentiment Analysis
cs.CL
Aspect-based sentiment analysis (ABSA) is a crucial task in information extraction and sentiment analysis, aiming to identify aspects with associated sentiment elements in text. However, existing ABSA datasets are predominantly English-centric, limiting the scope for multilingual evaluation and research. To bridge th...
2502.11827
Influence Operations in Social Networks
cs.SI cs.CY
A significant portion of online activity is intended to control public opinion and behavior, and is currently considered a global threat. This article identifies and conceptualizes seven online strategies employed in social media influence operations. These procedures are quantified through the analysis of 80 incid...
2502.11828
Intersectional Fairness in Reinforcement Learning with Large State and Constraint Spaces
cs.LG cs.GT
In traditional reinforcement learning (RL), the learner aims to solve a single objective optimization problem: find the policy that maximizes expected reward. However, in many real-world settings, it is important to optimize over multiple objectives simultaneously. For example, when we are interested in fairness, sta...
2502.11829
Code-Vision: Evaluating Multimodal LLMs Logic Understanding and Code Generation Capabilities
cs.CL cs.AI cs.SE
This paper introduces Code-Vision, a benchmark designed to evaluate the logical understanding and code generation capabilities of Multimodal Large Language Models (MLLMs). It challenges MLLMs to generate a correct program that fulfills specific functionality requirements based on a given flowchart, which visually rep...
2502.11830
Text Classification in the LLM Era -- Where do we stand?
cs.CL
Large Language Models revolutionized NLP and showed dramatic performance improvements across several tasks. In this paper, we investigated the role of such language models in text classification and how they compare with other approaches relying on smaller pre-trained language models. Considering 32 datasets spanning...
2502.11831
Intuitive physics understanding emerges from self-supervised pretraining on natural videos
cs.CV cs.AI
We investigate the emergence of intuitive physics understanding in general-purpose deep neural network models trained to predict masked regions in natural videos. Leveraging the violation-of-expectation framework, we find that video prediction models trained to predict outcomes in a learned representation space demon...
2502.11835
Neural Chaos: A Spectral Stochastic Neural Operator
cs.CE physics.comp-ph stat.ML
Building surrogate models with uncertainty quantification capabilities is essential for many engineering applications where randomness, such as variability in material properties, is unavoidable. Polynomial Chaos Expansion (PCE) is widely recognized as a go-to method for constructing stochastic solutions in both intr...
2502.11836
Model Generalization on Text Attribute Graphs: Principles with Large Language Models
cs.LG
Large language models (LLMs) have recently been introduced to graph learning, aiming to extend their zero-shot generalization success to tasks where labeled graph data is scarce. Among these applications, inference over text-attributed graphs (TAGs) presents unique challenges: existing methods struggle with LLMs' lim...
2502.11840
ChordFormer: A Conformer-Based Architecture for Large-Vocabulary Audio Chord Recognition
cs.SD cs.AI cs.CV cs.IR cs.LG
Chord recognition serves as a critical task in music information retrieval due to the abstract and descriptive nature of chords in music analysis. While audio chord recognition systems have achieved significant accuracy for small vocabularies (e.g., major/minor chords), large-vocabulary chord recognition remains a ch...
2502.11843
Can LLM Agents Maintain a Persona in Discourse?
cs.CL cs.AI cs.SI
Large Language Models (LLMs) are widely used as conversational agents, exploiting their capabilities in various sectors such as education, law, medicine, and more. However, LLMs are often subjected to context-shifting behaviour, resulting in a lack of consistent and interpretable personality-aligned interactions. Adh...
2502.11844
BaxBench: Can LLMs Generate Correct and Secure Backends?
cs.CR cs.AI cs.LG cs.PL
The automatic generation of programs has long been a fundamental challenge in computer science. Recent benchmarks have shown that large language models (LLMs) can effectively generate code at the function level, make code edits, and solve algorithmic coding tasks. However, to achieve full automation, LLMs should be a...
2502.11850
Steering the LoCoMotif: Using Domain Knowledge in Time Series Motif Discovery
cs.LG cs.AI cs.CV
Time Series Motif Discovery (TSMD) identifies repeating patterns in time series data, but its unsupervised nature might result in motifs that are not interesting to the user. To address this, we propose a framework that allows the user to impose constraints on the motifs to be discovered, where constraints can easily...
2502.11853
StructTransform: A Scalable Attack Surface for Safety-Aligned Large Language Models
cs.LG
In this work, we present a series of structure transformation attacks on LLM alignment, where we encode natural language intent using diverse syntax spaces, ranging from simple structure formats and basic query languages (e.g. SQL) to new novel spaces and syntaxes created entirely by LLMs. Our extensive evaluation sh...
2502.11854
Enhanced Anomaly Detection in IoMT Networks using Ensemble AI Models on the CICIoMT2024 Dataset
cs.CR cs.LG
The rapid proliferation of Internet of Medical Things (IoMT) devices in healthcare has introduced unique cybersecurity challenges, primarily due to the diverse communication protocols and critical nature of these devices. This research aims to develop an advanced, real-time anomaly detection framework tailored for IoM...
2502.11856
LLMs as a synthesis between symbolic and continuous approaches to language
cs.CL
Since the middle of the 20th century, a fierce battle has been fought between symbolic and continuous approaches to language and cognition. The success of deep learning models, and LLMs in particular, has alternatively been taken as showing that the continuous camp has won, or dismissed as an irrelevant engineering d...
2502.11858
Rethinking Audio-Visual Adversarial Vulnerability from Temporal and Modality Perspectives
cs.SD cs.CV
While audio-visual learning equips models with a richer understanding of the real world by leveraging multiple sensory modalities, this integration also introduces new vulnerabilities to adversarial attacks. In this paper, we present a comprehensive study of the adversarial robustness of audio-visual models, consid...
2502.11859
Defining and Evaluating Visual Language Models' Basic Spatial Abilities: A Perspective from Psychometrics
cs.CV cs.CL
The Theory of Multiple Intelligences underscores the hierarchical nature of cognitive capabilities. To advance Spatial Artificial Intelligence, we pioneer a psychometric framework defining five Basic Spatial Abilities (BSAs) in Visual Language Models (VLMs): Spatial Perception, Spatial Relation, Spatial Orientation, ...
2502.11861
Exploring Large Language Models in Healthcare: Insights into Corpora Sources, Customization Strategies, and Evaluation Metrics
cs.CL
This study reviewed the use of Large Language Models (LLMs) in healthcare, focusing on their training corpora, customization techniques, and evaluation metrics. A systematic search of studies from 2021 to 2024 identified 61 articles. Four types of corpora were used: clinical resources, literature, open-source dataset...
2502.11862
Understanding In-Context Machine Translation for Low-Resource Languages: A Case Study on Manchu
cs.CL
In-context machine translation (MT) with large language models (LLMs) is a promising approach for low-resource MT, as it can readily take advantage of linguistic resources such as grammar books and dictionaries. Such resources are usually selectively integrated into the prompt so that LLMs can directly perform transl...
2502.11863
FedEAT: A Robustness Optimization Framework for Federated LLMs
cs.LG cs.AI
Significant advancements have been made by Large Language Models (LLMs) in the domains of natural language understanding and automated content creation. However, they still face persistent problems, including substantial computational costs and inadequate availability of training data. The combination of Federated Le...
2502.11864
Does Knowledge About Perceptual Uncertainty Help an Agent in Automated Driving?
cs.CV cs.RO
Agents in real-world scenarios like automated driving deal with uncertainty in their environment, in particular due to perceptual uncertainty. Although reinforcement learning is dedicated to autonomous decision-making under uncertainty, these algorithms are typically not informed about the uncertainty currently conta...
2502.11866
Southern Newswire Corpus: A Large-Scale Dataset of Mid-Century Wire Articles Beyond the Front Page
cs.CL
I introduce a new large-scale dataset of historical wire articles from U.S. Southern newspapers, spanning 1960-1975 and covering multiple wire services: The Associated Press, United Press International, and the Newspaper Enterprise Association. Unlike prior work focusing on front-page content, this dataset captures articles ...
2502.11867
On Data-Driven Robust Optimization With Multiple Uncertainty Subsets: Unified Uncertainty Set Representation and Mitigating Conservatism
math.OC cs.SY eess.SY
Constructing uncertainty sets as unions of multiple subsets has emerged as an effective approach for creating compact and flexible uncertainty representations in data-driven robust optimization (RO). This paper focuses on two separate research questions. The first concerns the computational challenge in applying thes...
2502.11874
VAQUUM: Are Vague Quantifiers Grounded in Visual Data?
cs.CL
Vague quantifiers such as "a few" and "many" are influenced by many contextual factors, including how many objects are present in a given context. In this work, we evaluate the extent to which vision-and-language models (VLMs) are compatible with humans when producing or judging the appropriateness of vague quantifie...
2502.11877
JoLT: Joint Probabilistic Predictions on Tabular Data Using LLMs
stat.ML cs.LG
We introduce a simple method for probabilistic predictions on tabular data based on Large Language Models (LLMs) called JoLT (Joint LLM Process for Tabular data). JoLT uses the in-context learning capabilities of LLMs to define joint distributions over tabular data conditioned on user-specified side information about...
2502.11880
Bitnet.cpp: Efficient Edge Inference for Ternary LLMs
cs.LG cs.AI cs.CL cs.DC
The advent of 1-bit large language models (LLMs), led by BitNet b1.58, has spurred interest in ternary LLMs. Despite this, research and practical applications focusing on efficient edge inference for ternary LLMs remain scarce. To bridge this gap, we introduce Bitnet.cpp, an inference system optimized for BitNet b1.5...
2502.11881
Hypothesis-Driven Theory-of-Mind Reasoning for Large Language Models
cs.AI cs.CL
Existing LLM reasoning methods have shown impressive capabilities across various tasks, such as solving math and coding problems. However, applying these methods to scenarios without ground-truth answers or rule-based verification methods - such as tracking the mental states of an agent - remains challenging. Inspire...
2502.11882
Leveraging Dual Process Theory in Language Agent Framework for Real-time Simultaneous Human-AI Collaboration
cs.AI cs.CL cs.HC cs.LG cs.MA
Agents built on large language models (LLMs) have excelled in turn-by-turn human-AI collaboration but struggle with simultaneous tasks requiring real-time interaction. Latency issues and the challenge of inferring variable human strategies hinder their ability to make autonomous decisions without explicit instruction...
2502.11883
FairDiverse: A Comprehensive Toolkit for Fair and Diverse Information Retrieval Algorithms
cs.IR
In modern information retrieval (IR), achieving more than just accuracy is essential to sustaining a healthy ecosystem, especially when addressing fairness and diversity considerations. To meet these needs, various datasets, algorithms, and evaluation frameworks have been introduced. However, these algorithms are oft...
2502.11886
LIMR: Less is More for RL Scaling
cs.LG cs.AI cs.CL
In this paper, we ask: what truly determines the effectiveness of RL training data for enhancing language models' reasoning capabilities? While recent advances like o1, Deepseek R1, and Kimi1.5 demonstrate RL's potential, the lack of transparency about training data requirements has hindered systematic progress. Star...
2502.11887
Stonefish: Supporting Machine Learning Research in Marine Robotics
cs.RO cs.AI cs.SY eess.SY
Simulations are highly valuable in marine robotics, offering a cost-effective and controlled environment for testing in the challenging conditions of underwater and surface operations. Given the high costs and logistical difficulties of real-world trials, simulators capable of capturing the operational conditions of ...
2502.11890
Revisiting Classification Taxonomy for Grammatical Errors
cs.CL
Grammatical error classification plays a crucial role in language learning systems, but existing classification taxonomies often lack rigorous validation, leading to inconsistencies and unreliable feedback. In this paper, we revisit previous classification taxonomies for grammatical errors by introducing a systematic...
2502.11891
From Open-Vocabulary to Vocabulary-Free Semantic Segmentation
cs.CV
Open-vocabulary semantic segmentation enables models to identify novel object categories beyond their training data. While this flexibility represents a significant advancement, current approaches still rely on manually specified class names as input, creating an inherent bottleneck in real-world applications. This w...
2502.11893
Rethinking Benign Overfitting in Two-Layer Neural Networks
cs.LG stat.ML
Recent theoretical studies (Kou et al., 2023; Cao et al., 2022) have revealed a sharp phase transition from benign to harmful overfitting when the noise-to-feature ratio exceeds a threshold, a situation common in long-tailed data distributions where atypical data is prevalent. However, harmful overfitting rarely happe...
2502.11895
Continual Quantization-Aware Pre-Training: When to transition from 16-bit to 1.58-bit pre-training for BitNet language models?
cs.LG cs.AI
Large language models (LLMs) require immense resources for training and inference. Quantization, a technique that reduces the precision of model parameters, offers a promising solution for improving LLM efficiency and sustainability. While post-training quantization methods typically achieve 4-8 bits per parameter, r...
2502.11896
CAMEL: Continuous Action Masking Enabled by Large Language Models for Reinforcement Learning
cs.LG cs.AI
Reinforcement learning (RL) in continuous action spaces encounters persistent challenges, such as inefficient exploration and convergence to suboptimal solutions. To address these limitations, we propose CAMEL, a novel framework integrating LLM-generated suboptimal policies into the RL training pipeline. CAMEL levera...
2502.11897
DLFR-VAE: Dynamic Latent Frame Rate VAE for Video Generation
cs.CV cs.AI
In this paper, we propose the Dynamic Latent Frame Rate VAE (DLFR-VAE), a training-free paradigm that can make use of adaptive temporal compression in latent space. While existing video generative models apply fixed compression rates via pretrained VAE, we observe that real-world video content exhibits substantial te...
2502.11900
Ansatz-free Hamiltonian learning with Heisenberg-limited scaling
quant-ph cs.IT cs.LG math.IT
Learning the unknown interactions that govern a quantum system is crucial for quantum information processing, device benchmarking, and quantum sensing. The problem, known as Hamiltonian learning, is well understood under the assumption that interactions are local, but this assumption may not hold for arbitrary Hamilt...
2502.11901
Building A Proof-Oriented Programmer That Is 64% Better Than GPT-4o Under Data Scarcity
cs.CL cs.PL cs.SE
Existing LMs struggle with proof-oriented programming due to data scarcity, which manifests in two key ways: (1) a lack of sufficient corpora for proof-oriented programming languages such as F*, and (2) the absence of large-scale, project-level proof-oriented implementations that can teach the model the intricate reas...
2502.11903
MMRC: A Large-Scale Benchmark for Understanding Multimodal Large Language Model in Real-World Conversation
cs.CL
Recent multimodal large language models (MLLMs) have demonstrated significant potential in open-ended conversation, generating more accurate and personalized responses. However, their abilities to memorize, recall, and reason in sustained interactions within real-world scenarios remain underexplored. This paper intro...
2502.11904
A formal implementation of Behavior Trees to act in robotics
cs.RO
Behavior Trees (BT) are becoming quite popular as an Acting component of autonomous robotic systems. We propose to define a formal semantics to BT by translating them to a formal language which enables us to perform verification of programs written with BT, as well as runtime verification while these BT execute. This...
2502.11909
Neural Guided Diffusion Bridges
stat.ML cs.LG
We propose a novel method for simulating conditioned diffusion processes (diffusion bridges) in Euclidean spaces. By training a neural network to approximate bridge dynamics, our approach eliminates the need for computationally intensive Markov Chain Monte Carlo (MCMC) methods or reverse-process modeling. Compared to...
2502.11910
Adversarial Alignment for LLMs Requires Simpler, Reproducible, and More Measurable Objectives
cs.LG
Misaligned research objectives have considerably hindered progress in adversarial robustness research over the past decade. For instance, an extensive focus on optimizing target metrics, while neglecting rigorous standardized evaluation, has led researchers to pursue ad-hoc heuristic defenses that were seemingly effe...