id | title | categories | abstract |
|---|---|---|---|
2502.10914 | LLM-driven Knowledge Distillation for Dynamic Text-Attributed Graphs | cs.LG | Dynamic Text-Attributed Graphs (DyTAGs) have numerous real-world applications, e.g. social, collaboration, citation, communication, and review networks. In these networks, nodes and edges often contain text descriptions, and the graph structure can evolve over time. Future link prediction, edge classification, relati... |
2502.10916 | An Open-Source Web-Based Tool for Evaluating Open-Source Large Language Models Leveraging Information Retrieval from Custom Documents | cs.CL cs.IR | In our work, we present the first-of-its-kind open-source web-based tool which is able to demonstrate the impacts of a user's speech act during discourse with conversational agents, which leverages open-source large language models. With this software resource, it is possible for researchers and experts to evaluate t... |
2502.10920 | Do Deepfake Detectors Work in Reality? | cs.CV cs.AI | Deepfakes, particularly those involving faceswap-based manipulations, have sparked significant societal concern due to their increasing realism and potential for misuse. Despite rapid advancements in generative models, detection methods have not kept pace, creating a critical gap in defense strategies. This disparity... |
2502.10921 | Evolving Hate Speech Online: An Adaptive Framework for Detection and Mitigation | cs.CL cs.SI | The proliferation of social media platforms has led to an increase in the spread of hate speech, particularly targeting vulnerable communities. Unfortunately, existing methods for automatically identifying and blocking toxic language rely on pre-constructed lexicons, making them reactive rather than adaptive. As such... |
2502.10927 | The underlying structures of self-attention: symmetry, directionality, and emergent dynamics in Transformer training | cs.LG | Self-attention is essential to Transformer architectures, yet how information is embedded in the self-attention matrices and how different objective functions impact this process remains unclear. We present a mathematical framework to analyze self-attention matrices by deriving the structures governing their weight u... |
2502.10928 | Semantic Specialization in MoE Appears with Scale: A Study of DeepSeek R1 Expert Specialization | cs.LG cs.AI cs.CL | DeepSeek-R1, the largest open-source Mixture-of-Experts (MoE) model, has demonstrated reasoning capabilities comparable to proprietary frontier models. Prior research has explored expert routing in MoE models, but findings suggest that expert selection is often token-dependent rather than semantically driven. Given D... |
2502.10930 | Reduced Order Modeling with Shallow Recurrent Decoder Networks | cs.LG math.DS | Reduced Order Modeling is of paramount importance for efficiently inferring high-dimensional spatio-temporal fields in parametric contexts, enabling computationally tractable parametric analyses, uncertainty quantification and control. However, conventional dimensionality reduction techniques are typically limited to... |
2502.10931 | D-CIPHER: Dynamic Collaborative Intelligent Agents with Planning and Heterogeneous Execution for Enhanced Reasoning in Offensive Security | cs.AI cs.CR | Large Language Models (LLMs) have been used in cybersecurity in many ways, including their recent use as intelligent agent systems for autonomous security analysis. Capture the Flag (CTF) challenges serve as benchmarks for assessing the automated task-planning abilities of LLM agents across various cybersecurity skil... |
2502.10932 | PPAC Driven Multi-die and Multi-technology Floorplanning | eess.SY cs.SY | In heterogeneous integration, where different dies may utilize distinct technologies, floorplanning across multiple dies inherently requires simultaneous technology selection. This work presents the first systematic study of multi-die and multi-technology floorplanning. Unlike many conventional approaches, which are ... |
2502.10934 | Fundamental Principles of Linguistic Structure are Not Represented by o3 | cs.CL | A core component of a successful artificial general intelligence would be the rapid creation and manipulation of grounded compositional abstractions and the demonstration of expertise in the family of recursive hierarchical syntactic objects necessary for the creative use of human language. We evaluated the recently ... |
2502.10937 | SCALE: Towards Collaborative Content Analysis in Social Science with Large Language Model Agents and Human Intervention | cs.AI cs.CL cs.MA | Content analysis breaks down complex and unstructured texts into theory-informed numerical categories. Particularly, in social science, this process usually relies on multiple rounds of manual annotation, domain expert discussion, and rule-based refinement. In this paper, we introduce SCALE, a novel multi-agent frame... |
2502.10938 | PEA: Enhancing LLM Performance on Computational-Reasoning Tasks | cs.AI | Large Language Models (LLMs) have exhibited remarkable capabilities across diverse domains, prompting investigations into their potential as generic reasoning engines. While recent studies have explored inference-time computation to enhance model performance on complex problems, current research lacks a formal framew... |
2502.10940 | CoLA: Compute-Efficient Pre-Training of LLMs via Low-Rank Activation | cs.LG cs.AI | Large language models (LLMs) are revolutionizing many science and engineering fields. However, their huge model sizes impose extremely demanding needs of computational resources in the pre-training stage. Although low-rank factorizations can reduce model parameters, their direct application in LLM pre-training often ... |
2502.10942 | Exploring Contextual Flux in Large Language Models: A Novel Approach to Self-Modulating Semantic Networks | cs.CL | Self-modulating mechanisms introduce dynamic adaptation capabilities within language models through contextual realignment strategies that influence token embedding trajectories across extended sequences. Contextual Flux is explored as an approach to embedding modulation, integrating an auxiliary gating mechanism wit... |
2502.10947 | The Relationship between No-Regret Learning and Online Conformal Prediction | cs.LG cs.GT stat.ML | Existing algorithms for online conformal prediction -- guaranteeing marginal coverage in adversarial settings -- are variants of online gradient descent (OGD), but their analyses of worst-case coverage do not follow from the regret guarantee of OGD. What is the relationship between no-regret learning and online confo... |
2502.10949 | Learning the Exact Time Integration Algorithm for Initial Value Problems by Randomized Neural Networks | math.NA cs.LG cs.NA physics.comp-ph | We present a method leveraging extreme learning machine (ELM) type randomized neural networks (NNs) for learning the exact time integration algorithm for initial value problems (IVPs). The exact time integration algorithm for non-autonomous systems can be represented by an algorithmic function in higher dimensions, w... |
2502.10953 | Empirical evaluation of LLMs in predicting fixes of Configuration bugs in Smart Home System | cs.SE cs.AI | This empirical study evaluates the effectiveness of Large Language Models (LLMs) in predicting fixes for configuration bugs in smart home systems. The research analyzes three prominent LLMs - GPT-4, GPT-4o (GPT-4 Turbo), and Claude 3.5 Sonnet - using four distinct prompt designs to assess their ability to identify ap... |
2502.10954 | Learning to Stop Overthinking at Test Time | cs.CV cs.AI cs.LG | Test time scaling is currently one of the most active research areas that shows promise after training time scaling has reached its limits. Deep-thinking (DT) models are a class of recurrent models that can perform easy-to-hard generalization by assigning more compute to harder test samples. However, due to their ina... |
2502.10955 | A recurrent vision transformer shows signatures of primate visual attention | cs.CV cs.AI q-bio.NC | Attention is fundamental to both biological and artificial intelligence, yet research on animal attention and AI self-attention remains largely disconnected. We propose a Recurrent Vision Transformer (Recurrent ViT) that integrates self-attention with recurrent memory, allowing both current inputs and stored informat... |
2502.10956 | Fine-Tuning Hard-to-Simulate Objectives for Quadruped Locomotion: A Case Study on Total Power Saving | cs.RO | Legged locomotion is not just about mobility; it also encompasses crucial objectives such as energy efficiency, safety, and user experience, which are vital for real-world applications. However, key factors such as battery power consumption and stepping noise are often inaccurately modeled or missing in common simula... |
2502.10957 | Skillful Nowcasting of Convective Clouds With a Cascade Diffusion Model | cs.CV physics.ao-ph | Accurate nowcasting of convective clouds from satellite imagery is essential for mitigating the impacts of meteorological disasters, especially in developing countries and remote regions with limited ground-based observations. Recent advances in deep learning have shown promise in video prediction; however, existing ... |
2502.10959 | Revisiting the Design of In-Memory Dynamic Graph Storage | cs.DB | The effectiveness of in-memory dynamic graph storage (DGS) for supporting concurrent graph read and write queries is crucial for real-time graph analytics and updates. Various methods have been proposed, for example, LLAMA, Aspen, LiveGraph, Teseo, and Sortledton. These approaches differ significantly in their suppor... |
2502.10961 | Graders should cheat: privileged information enables expert-level automated evaluations | cs.LG cs.AI | Auto-evaluating language models (LMs), i.e., using a grader LM to evaluate the candidate LM, is an appealing way to accelerate the evaluation process and reduce the cost associated with it. But this presents a paradox: how can we trust the grader LM, which is presumably weaker than the candidate LM, to assess problems that ... |
2502.10966 | Neural Networks Remember More: The Power of Parameter Isolation and Combination | cs.CL cs.AI | Catastrophic forgetting is a pervasive issue for pre-trained language models (PLMs) during continual learning, where models lose previously acquired knowledge when sequentially trained on a series of tasks. The model's ability to retain old tasks is referred to as stability, while its adaptability to new tasks is cal... |
2502.10967 | Open-Set Cross-Network Node Classification via Unknown-Excluded Adversarial Graph Domain Alignment | cs.SI | Existing cross-network node classification methods are mainly proposed for closed-set setting, where the source network and the target network share exactly the same label space. Such a setting is restricted in real-world applications, since the target network might contain additional classes that are not present in ... |
2502.10973 | Akan Cinematic Emotions (ACE): A Multimodal Multi-party Dataset for Emotion Recognition in Movie Dialogues | cs.CL | In this paper, we introduce the Akan Conversation Emotion (ACE) dataset, the first multimodal emotion dialogue dataset for an African language, addressing the significant lack of resources for low-resource languages in emotion recognition research. ACE, developed for the Akan language, contains 385 emotion-labeled di... |
2502.10975 | GS-GVINS: A Tightly-integrated GNSS-Visual-Inertial Navigation System Augmented by 3D Gaussian Splatting | cs.RO cs.CV eess.IV | Recently, the emergence of 3D Gaussian Splatting (3DGS) has drawn significant attention in the area of 3D map reconstruction and visual SLAM. While extensive research has explored 3DGS for indoor trajectory tracking using visual sensor alone or in combination with Light Detection and Ranging (LiDAR) and Inertial Meas... |
2502.10976 | QuOTE: Question-Oriented Text Embeddings | cs.IR cs.AI cs.CL cs.LG | We present QuOTE (Question-Oriented Text Embeddings), a novel enhancement to retrieval-augmented generation (RAG) systems, aimed at improving document representation for accurate and nuanced retrieval. Unlike traditional RAG pipelines, which rely on embedding raw text chunks, QuOTE augments chunks with hypothetical q... |
2502.10978 | Agentic LLM Framework for Adaptive Decision Discourse | cs.AI cs.CY | Effective decision-making in complex systems requires synthesizing diverse perspectives to address multifaceted challenges under uncertainty. This study introduces a real-world inspired agentic Large Language Models (LLMs) framework, to simulate and enhance decision discourse -- the deliberative process through which ac... |
2502.10980 | DFM: Deep Fourier Mimic for Expressive Dance Motion Learning | cs.RO | As entertainment robots gain popularity, the demand for natural and expressive motion, particularly in dancing, continues to rise. Traditionally, dancing motions have been manually designed by artists, a process that is both labor-intensive and restricted to simple motion playback, lacking the flexibility to incorpor... |
2502.10982 | TEASER: Token Enhanced Spatial Modeling for Expressions Reconstruction | cs.CV | 3D facial reconstruction from a single in-the-wild image is a crucial task in human-centered computer vision tasks. While existing methods can recover accurate facial shapes, there remains significant space for improvement in fine-grained expression capture. Current approaches struggle with irregular mouth shapes, ex... |
2502.10983 | Learning Quiet Walking for a Small Home Robot | cs.RO | As home robotics gains traction, robots are increasingly integrated into households, offering companionship and assistance. Quadruped robots, particularly those resembling dogs, have emerged as popular alternatives for traditional pets. However, user feedback highlights concerns about the noise these robots generate ... |
2502.10985 | Is Elo Rating Reliable? A Study Under Model Misspecification | cs.LG cs.AI stat.ME stat.ML | Elo rating, widely used for skill assessment across diverse domains ranging from competitive games to large language models, is often understood as an incremental update algorithm for estimating a stationary Bradley-Terry (BT) model. However, our empirical analysis of practical matching datasets reveals two surprisin... |
2502.10988 | OMG: Opacity Matters in Material Modeling with Gaussian Splatting | cs.CV | Decomposing geometry, materials and lighting from a set of images, namely inverse rendering, has been a long-standing problem in computer vision and graphics. Recent advances in neural rendering enable photo-realistic and plausible inverse rendering results. The emergence of 3D Gaussian Splatting has boosted it to th... |
2502.10990 | FinMTEB: Finance Massive Text Embedding Benchmark | cs.CL cs.IR | Embedding models play a crucial role in representing and retrieving information across various NLP applications. Recent advances in large language models (LLMs) have further enhanced the performance of embedding models. While these models are often benchmarked on general-purpose datasets, real-world applications dema... |
2502.10993 | RoseRAG: Robust Retrieval-augmented Generation with Small-scale LLMs via Margin-aware Preference Optimization | cs.CL cs.LG | Large language models (LLMs) have achieved impressive performance but face high computational costs and latency, limiting their deployment in resource-constrained settings. In contrast, small-scale LLMs (SLMs) are more efficient yet struggle to capture evolving real-world knowledge. Retrieval-augmented generation (RA... |
2502.10994 | SSVEP-BiMA: Bifocal Masking Attention Leveraging Native and Symmetric-Antisymmetric Components for Robust SSVEP Decoding | cs.LG | Brain-computer interface (BCI) based on steady-state visual evoked potentials (SSVEP) is a popular paradigm for its simplicity and high information transfer rate (ITR). Accurate and fast SSVEP decoding is crucial for reliable BCI performance. However, conventional decoding methods demand longer time windows, and deep... |
2502.10995 | Evaluating Large language models on Understanding Korean indirect Speech acts | cs.CL | To accurately understand the intention of an utterance is crucial in conversational communication. As conversational artificial intelligence models are rapidly being developed and applied in various fields, it is important to evaluate the LLMs' capabilities of understanding the intentions of user's utterance. This st... |
2502.10996 | RAS: Retrieval-And-Structuring for Knowledge-Intensive LLM Generation | cs.CL | Retrieval-augmented language models often struggle with knowledge-intensive tasks due to inefficient retrieval, unstructured knowledge integration, and single-pass architectures. We present Retrieval-And-Structuring (RAS), a novel framework that dynamically constructs and reasons over query-specific knowledge graphs ... |
2502.10997 | New Rates in Stochastic Decision-Theoretic Online Learning under Differential Privacy | cs.LG cs.CR cs.DS | Hu and Mehta (2024) posed an open problem: what is the optimal instance-dependent rate for the stochastic decision-theoretic online learning (with $K$ actions and $T$ rounds) under $\varepsilon$-differential privacy? Before, the best known upper bound and lower bound are $O\left(\frac{\log K}{\Delta_{\min}} + \frac{\... |
2502.10999 | ControlText: Unlocking Controllable Fonts in Multilingual Text Rendering without Font Annotations | cs.CV cs.AI cs.CL cs.MM | This work demonstrates that diffusion models can achieve font-controllable multilingual text rendering using just raw images without font label annotations. Visual text rendering remains a significant challenge. While recent methods condition diffusion on glyphs, it is impossible to retrieve exact font annotations fr... |
2502.11001 | CL-MFAP: A Contrastive Learning-Based Multimodal Foundation Model for Molecular Property Prediction and Antibiotic Screening | q-bio.BM cs.AI cs.LG q-bio.QM | Due to the rise in antimicrobial resistance, identifying novel compounds with antibiotic potential is crucial for combatting this global health issue. However, traditional drug development methods are costly and inefficient. Recognizing the pressing need for more effective solutions, researchers have turned to machin... |
2502.11002 | Adjust Your Focus: Defocus Deblurring From Dual-Pixel Images Using Explicit Multi-Scale Cross-Correlation | cs.CV | Defocus blur is a common problem in photography. It arises when an image is captured with a wide aperture, resulting in a shallow depth of field. Sometimes it is desired, e.g., in portrait effect. Otherwise, it is a problem from both an aesthetic point of view and downstream computer vision tasks, such as segmentatio... |
2502.11003 | FeaKM: Robust Collaborative Perception under Noisy Pose Conditions | cs.CV | Collaborative perception is essential for networks of agents with limited sensing capabilities, enabling them to work together by exchanging information to achieve a robust and comprehensive understanding of their environment. However, localization inaccuracies often lead to significant spatial message displacement, ... |
2502.11006 | Prompt Inject Detection with Generative Explanation as an Investigative Tool | cs.CR cs.AI | Large Language Models (LLMs) are vulnerable to adversarial prompt based injects. These injects could jailbreak or exploit vulnerabilities within these models with explicit prompt requests leading to undesired responses. In the context of investigating prompt injects, the challenge is the sheer volume of input prompts... |
2502.11007 | Local-Cloud Inference Offloading for LLMs in Multi-Modal, Multi-Task, Multi-Dialogue Settings | cs.LG cs.DC | Compared to traditional machine learning models, recent large language models (LLMs) can exhibit multi-task-solving capabilities through multiple dialogues and multi-modal data sources. These unique characteristics of LLMs, beyond their large size, make their deployment more challenging during the inference stage. Sp... |
2502.11008 | CounterBench: A Benchmark for Counterfactuals Reasoning in Large Language Models | cs.CL | Counterfactual reasoning is widely recognized as one of the most challenging and intricate aspects of causality in artificial intelligence. In this paper, we evaluate the performance of large language models (LLMs) in counterfactual reasoning. In contrast to previous studies that primarily focus on commonsense causal... |
2502.11009 | Computing Inconsistency Measures Under Differential Privacy | cs.DB | Assessing data quality is crucial to knowing whether and how to use the data for different purposes. Specifically, given a collection of integrity constraints, various ways have been proposed to quantify the inconsistency of a database. Inconsistency measures are particularly important when we wish to assess the qual... |
2502.11013 | Collaborative Deterministic-Diffusion Model for Probabilistic Urban Spatiotemporal Prediction | cs.LG cs.AI | Accurate prediction of urban spatiotemporal dynamics is essential for enhancing urban management and decision-making. Existing spatiotemporal prediction models are predominantly deterministic, focusing on primary spatiotemporal patterns. However, those dynamics are highly complex, exhibiting multi-modal distributions... |
2502.11018 | GRIFFIN: Effective Token Alignment for Faster Speculative Decoding | cs.CL cs.AI | Speculative decoding accelerates inference in large language models (LLMs) by generating multiple draft tokens simultaneously. However, existing methods often struggle with token misalignment between the training and decoding phases, limiting their performance. To address this, we propose GRIFFIN, a novel framework t... |
2502.11019 | Unlocking the Power of Function Vectors for Characterizing and Mitigating Catastrophic Forgetting in Continual Instruction Tuning | cs.LG cs.AI | Catastrophic forgetting (CF) poses a significant challenge in machine learning, where a model forgets previously learned information upon learning new tasks. Despite the advanced capabilities of Large Language Models (LLMs), they continue to face challenges with CF during continual learning. The majority of existing ... |
2502.11020 | TUMLU: A Unified and Native Language Understanding Benchmark for Turkic Languages | cs.CL cs.AI | Being able to thoroughly assess massive multi-task language understanding (MMLU) capabilities is essential for advancing the applicability of multilingual language models. However, preparing such benchmarks in high quality native language is often costly and therefore limits the representativeness of evaluation datas... |
2502.11021 | Leveraging Uncertainty Estimation for Efficient LLM Routing | cs.NI cs.CL | Deploying large language models (LLMs) in edge-cloud environments requires an efficient routing strategy to balance cost and response quality. Traditional approaches prioritize either human-preference data or accuracy metrics from benchmark datasets as routing criteria, but these methods suffer from rigidity and subj... |
2502.11022 | MultiTEND: A Multilingual Benchmark for Natural Language to NoSQL Query Translation | cs.CL cs.AI | Natural language interfaces for NoSQL databases are increasingly vital in the big data era, enabling users to interact with complex, unstructured data without deep technical expertise. However, most recent advancements focus on English, leaving a gap for multilingual support. This paper introduces MultiTEND, the firs... |
2502.11023 | DT4ECG: A Dual-Task Learning Framework for ECG-Based Human Identity Recognition and Human Activity Detection | eess.SP cs.LG | This article introduces DT4ECG, an innovative dual-task learning framework for Electrocardiogram (ECG)-based human identity recognition and activity detection. The framework employs a robust one-dimensional convolutional neural network (1D-CNN) backbone integrated with residual blocks to extract discriminative ECG fe... |
2502.11024 | TPCap: Unlocking Zero-Shot Image Captioning with Trigger-Augmented and Multi-Modal Purification Modules | cs.CV | Recent advancements in large language models (LLMs) have significantly enhanced the fluency and logical coherence of image captioning. Retrieval-Augmented Generation (RAG) is widely adopted to incorporate external knowledge into LLMs; however, existing RAG-based methods rely on separate retrieval banks, introducing c... |
2502.11026 | Simplify RLHF as Reward-Weighted SFT: A Variational Method | cs.LG cs.AI cs.CL | Reinforcement Learning from Human Feedback (RLHF) is crucial for aligning Large Language Models (LLMs) with human values. However, RLHF has been continuously challenged by its high complexity in implementation and computation consumption. Even with recent simplifications, such as Direct Preference Optimization (DPO) ... |
2502.11027 | Diversified Sampling Improves Scaling LLM inference | cs.LG | While increasing training compute has significantly improved the performance of large language models (LLMs), similar gains have not been observed when scaling inference compute. We hypothesize that the primary issue lies in the uniformity of LLM outputs, which leads to inefficient sampling as models repeatedly gener... |
2502.11028 | Mind the Confidence Gap: Overconfidence, Calibration, and Distractor Effects in Large Language Models | cs.CL cs.AI | Large Language Models (LLMs) demonstrate impressive performance across diverse tasks, yet confidence calibration remains a challenge. Miscalibration - where models are overconfident or underconfident - poses risks, particularly in high-stakes applications. This paper presents an empirical study on LLM calibration, ex... |
2502.11031 | A Critical Review of Predominant Bias in Neural Networks | cs.LG | Bias issues of neural networks garner significant attention along with their promising advancement. Among various bias issues, mitigating two predominant biases is crucial in advancing fair and trustworthy AI: (1) ensuring neural networks yield even performance across demographic groups, and (2) ensuring algorithmic d... |
2502.11033 | Convergence of Policy Mirror Descent Beyond Compatible Function Approximation | cs.LG math.OC stat.ML | Modern policy optimization methods roughly follow the policy mirror descent (PMD) algorithmic template, for which there are by now numerous theoretical convergence results. However, most of these either target tabular environments, or can be applied effectively only when the class of policies being optimized over sat... |
2502.11034 | AdaGC: Improving Training Stability for Large Language Model Pretraining | cs.LG | Large Language Models (LLMs) face increasing loss spikes during scaling, undermining training stability and final performance. While gradient clipping mitigates this issue, traditional global approaches poorly handle parameter-specific gradient variations and decaying gradient norms. We propose **AdaGC**, an adaptive... |
2502.11037 | Deep Incomplete Multi-view Learning via Cyclic Permutation of VAEs | cs.LG cs.AI cs.CV | Multi-View Representation Learning (MVRL) aims to derive a unified representation from multi-view data by leveraging shared and complementary information across views. However, when views are irregularly missing, the incomplete data can lead to representations that lack sufficiency and consistency. To address this, w... |
2502.11044 | Detecting Cadastral Boundary from Satellite Images Using U-Net model | cs.CV cs.LG | Finding the cadastral boundaries of farmlands is a crucial concern for land administration. Therefore, using deep learning methods to expedite and simplify the extraction of cadastral boundaries from satellite and unmanned aerial vehicle (UAV) images is critical. In this paper, we employ transfer learning to train a ... |
2502.11049 | Faces of Fairness: Examining Bias in Facial Expression Recognition Datasets and Models | cs.CV | Building AI systems, including Facial Expression Recognition (FER), involves two critical aspects: data and model design. Both components significantly influence bias and fairness in FER tasks. Issues related to bias and fairness in FER datasets and models remain underexplored. This study investigates bias sources in... |
2502.11051 | MMUNLEARNER: Reformulating Multimodal Machine Unlearning in the Era of Multimodal Large Language Models | cs.CL cs.AI | Recent progress in Machine Unlearning (MU) has introduced solutions for the selective removal of private or sensitive information encoded within deep neural networks. Nonetheless, MU for Multimodal Large Language Models (MLLMs) remains in its nascent phase. Therefore, we propose to reformulate the task of multimodal ... |
2502.11053 | Demystifying 5G Polar and LDPC Codes: A Comprehensive Review and Foundations | cs.IT math.IT | This paper serves as a comprehensive guide for practitioners and scholars aiming to understand the channel coding and decoding schemes integral to the 5G NR standard, with a particular focus on LDPC and polar codes. We start by explaining the design procedures that underlie these channel codes, offering fundamental i... |
2502.11054 | Reasoning-Augmented Conversation for Multi-Turn Jailbreak Attacks on Large Language Models | cs.CL cs.AI cs.CR | Multi-turn jailbreak attacks simulate real-world human interactions by engaging large language models (LLMs) in iterative dialogues, exposing critical safety vulnerabilities. However, existing methods often struggle to balance semantic coherence with attack effectiveness, resulting in either benign semantic drift or ... |
2502.11057 | A Physics-Informed Machine Learning Framework for Safe and Optimal Control of Autonomous Systems | cs.RO cs.AI cs.SY eess.SY | As autonomous systems become more ubiquitous in daily life, ensuring high performance with guaranteed safety is crucial. However, safety and performance could be competing objectives, which makes their co-optimization difficult. Learning-based methods, such as Constrained Reinforcement Learning (CRL), achieve strong ... |
2502.11059 | ClimateLLM: Efficient Weather Forecasting via Frequency-Aware Large Language Models | cs.LG cs.AI | Weather forecasting is crucial for public safety, disaster prevention and mitigation, agricultural production, and energy management, with global relevance. Although deep learning has significantly advanced weather prediction, current methods face critical limitations: (i) they often struggle to capture both dynamic ... |
2502.11061 | Déjà Vu? Decoding Repeated Reading from Eye Movements | cs.CL | Be it your favorite novel, a newswire article, a cooking recipe or an academic paper -- in many daily situations we read the same text more than once. In this work, we ask whether it is possible to automatically determine whether the reader has previously encountered a text based on their eye movement patterns. We in... |
2502.11062 | Beyond Similarity: A Gradient-based Graph Method for Instruction Tuning Data Selection | cs.CL | Large language models (LLMs) have shown great potential across various industries due to their remarkable ability to generalize through instruction tuning. However, the limited availability of domain-specific data significantly hampers their performance on specialized tasks. While existing methods primarily focus on ... |
2502.11066 | CARMA: Enhanced Compositionality in LLMs via Advanced Regularisation and Mutual Information Alignment | cs.CL | Large language models (LLMs) struggle with compositional generalisation, limiting their ability to systematically combine learned components to interpret novel inputs. While architectural modifications, fine-tuning, and data augmentation improve compositionality, they often have limited adaptability, face scalability... |
2502.11067 | A Survey on Active Feature Acquisition Strategies | cs.LG | Active feature acquisition studies the challenge of making accurate predictions while limiting the cost of collecting complete data. By selectively acquiring only the most informative features for each instance, these strategies enable efficient decision-making in scenarios where data collection is expensive or time-... |
2502.11068 | Accelerating Anchors via Specialization and Feature Transformation | cs.LG cs.AI | Anchors is a popular local model-agnostic explanation technique whose applicability is limited by its computational inefficiency. To address this limitation, we propose a pre-training-based approach to accelerate Anchors without compromising the explanation quality. Our approach leverages the iterative nature of Anch... |
2502.11070 | A Survey on Vulnerability Prioritization: Taxonomy, Metrics, and Research Challenges | cs.CR cs.AI | In the highly interconnected digital landscape of today, safeguarding complex infrastructures against cyber threats has become increasingly challenging due to the exponential growth in the number and complexity of vulnerabilities. Resource constraints necessitate effective vulnerability prioritization strategies, foc... |
2502.11071 | Generalization of the Gibbs algorithm with high probability at low temperatures | cs.LG stat.ML | The paper gives a bound on the generalization error of the Gibbs algorithm, which recovers known data-independent bounds for the high temperature range and extends to the low-temperature range, where generalization depends critically on the data-dependent loss-landscape. It is shown that with high probability the ge... |
2502.11073 | Demystifying Hateful Content: Leveraging Large Multimodal Models for Hateful Meme Detection with Explainable Decisions | cs.CL | Hateful meme detection presents a significant challenge as a multimodal task due to the complexity of interpreting implicit hate messages and contextual cues within memes. Previous approaches have fine-tuned pre-trained vision-language models (PT-VLMs), leveraging the knowledge they gained during pre-training and the... |
2502.11075 | Exposing Numeracy Gaps: A Benchmark to Evaluate Fundamental Numerical Abilities in Large Language Models | cs.CL cs.AI | Large Language Models (LLMs) have demonstrated impressive capabilities in natural language processing tasks, such as text generation and semantic understanding. However, their performance on numerical reasoning tasks, such as basic arithmetic, numerical retrieval, and magnitude comparison, remains surprisingly poor. ... |
2502.11078 | DEEPER Insight into Your User: Directed Persona Refinement for Dynamic Persona Modeling | cs.CL | To advance personalized applications such as recommendation systems and user behavior prediction, recent research increasingly adopts large language models (LLMs) for human-readable persona modeling. In dynamic real-world scenarios, effective persona modeling necessitates leveraging streaming behavior data to conti... |
2502.11079 | Phantom: Subject-consistent video generation via cross-modal alignment | cs.CV cs.AI | The continuous development of foundational models for video generation is evolving into various applications, with subject-consistent video generation still in the exploratory stage. We refer to this as Subject-to-Video, which extracts subject elements from reference images and generates subject-consistent video thro... |
2502.11083 | Streamlining the Collaborative Chain of Models into A Single Forward Pass in Generation-Based Tasks | cs.CL | In Retrieval-Augmented Generation (RAG) and agent-based frameworks, the "Chain of Models" approach is widely used, where multiple specialized models work sequentially on distinct sub-tasks. This approach is effective but increases resource demands as each model must be deployed separately. Recent advancements attempt... |
2502.11084 | Rewrite to Jailbreak: Discover Learnable and Transferable Implicit Harmfulness Instruction | cs.CL | As Large Language Models (LLMs) are widely applied in various domains, the safety of LLMs is increasingly attracting attention to avoid their powerful capabilities being misused. Existing jailbreak methods create a forced instruction-following scenario, or search adversarial prompts with prefix or suffix tokens to ac... |
2502.11085 | Towards Data-Efficient Pretraining for Atomic Property Prediction | cs.LG cs.AI | This paper challenges the recent paradigm in atomic property prediction that links progress to growing dataset sizes and computational resources. We show that pretraining on a carefully selected, task-relevant dataset can match or even surpass large-scale pretraining, while using as little as 1/24th of the computatio... |
2502.11089 | Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention | cs.CL cs.AI cs.LG | Long-context modeling is crucial for next-generation language models, yet the high computational cost of standard attention mechanisms poses significant computational challenges. Sparse attention offers a promising direction for improving efficiency while maintaining model capabilities. We present NSA, a Natively tra... |
2502.11090 | SafeDialBench: A Fine-Grained Safety Benchmark for Large Language Models in Multi-Turn Dialogues with Diverse Jailbreak Attacks | cs.CL cs.AI | With the rapid advancement of Large Language Models (LLMs), the safety of LLMs has been a critical concern requiring precise assessment. Current benchmarks primarily concentrate on single-turn dialogues or a single jailbreak attack method to assess the safety. Additionally, these benchmarks have not taken into accoun... |
2502.11093 | Text-promptable Propagation for Referring Medical Image Sequence Segmentation | cs.CV | Medical image sequences, generated by both 2D video-based examinations and 3D imaging techniques, consist of sequential frames or slices that capture the same anatomical entities (e.g., organs or lesions) from multiple perspectives. Existing segmentation studies typically process medical images using either 2D or 3D ... |
2502.11094 | SyncSpeech: Low-Latency and Efficient Dual-Stream Text-to-Speech based on Temporal Masked Transformer | cs.SD cs.AI | This paper presents a dual-stream text-to-speech (TTS) model, SyncSpeech, capable of receiving streaming text input from upstream models while simultaneously generating streaming speech, facilitating seamless interaction with large language models. SyncSpeech has the following advantages: Low latency, as it begins ge... |
2502.11095 | A Survey of Large Language Models in Psychotherapy: Current Landscape and Future Directions | cs.CL | Mental health remains a critical global challenge, with increasing demand for accessible, effective interventions. Large language models (LLMs) offer promising solutions in psychotherapy by enhancing the assessment, diagnosis, and treatment of mental health conditions through dynamic, context-aware interactions. This... |
2502.11096 | Mixture of Tunable Experts -- Behavior Modification of DeepSeek-R1 at Inference Time | cs.AI cs.CL | We present the Mixture-of-Tunable-Experts (MoTE), a method that extends the Mixture-of-Experts architecture of Large Language Models (LLMs). Without additional training, MoTE enables meaningful and focused behavior changes in LLMs on-the-fly during inference time. By analyzing the digital LLM brain of DeepSeek-R1 usi... |
2502.11098 | Talk Structurally, Act Hierarchically: A Collaborative Framework for LLM Multi-Agent Systems | cs.AI cs.LG cs.MA | Recent advancements in LLM-based multi-agent (LLM-MA) systems have shown promise, yet significant challenges remain in managing communication and refinement when agents collaborate on complex tasks. In this paper, we propose *Talk Structurally, Act Hierarchically (TalkHier)*, a novel framework that introduces ... |
2502.11100 | Towards Achieving Concept Completeness for Unsupervised Textual Concept Bottleneck Models | cs.CL | Textual Concept Bottleneck Models (TBMs) are interpretable-by-design models for text classification that predict a set of salient concepts before making the final prediction. This paper proposes Complete Textual Concept Bottleneck Model (CT-CBM), a novel TCBM generator building concept labels in a fully unsupervised m... |
2502.11101 | CacheFocus: Dynamic Cache Re-Positioning for Efficient Retrieval-Augmented Generation | cs.CL cs.AI | Large Language Models (LLMs) excel across a variety of language tasks yet are constrained by limited input lengths and high computational costs. Existing approaches -- such as relative positional encodings (e.g., RoPE, ALiBi) and sliding window mechanisms -- partially alleviate these issues but often ... |
2502.11102 | OptMATH: A Scalable Bidirectional Data Synthesis Framework for Optimization Modeling | cs.AI cs.LG | Despite the rapid development of large language models (LLMs), a fundamental challenge persists: the lack of high-quality optimization modeling datasets hampers LLMs' robust modeling of practical optimization problems from natural language descriptions (NL). This data scarcity also contributes to the generalization d... |
2502.11104 | Enhancing Cross-Tokenizer Knowledge Distillation with Contextual Dynamical Mapping | cs.CL | Knowledge Distillation (KD) has emerged as a prominent technique for model compression. However, conventional KD approaches primarily focus on homogeneous architectures with identical tokenizers, constraining their applicability in cross-architecture scenarios. As for the cross-tokenizer KD, the differences in the to... |
2502.11105 | Graceful forgetting: Memory as a process | q-bio.NC cs.IR cs.LG | A rational theory of memory is proposed to explain how we can accommodate unbounded sensory input within bounded storage space. Memory is stored as statistics, organized into complex structures that are constantly summarized and compressed to make room for new input. This process, driven by space constraints, is guid... |
2502.11107 | Revisiting Weak-to-Strong Generalization in Theory and Practice: Reverse KL vs. Forward KL | cs.LG cs.AI | As large language models advance toward superhuman performance, ensuring their alignment with human values and abilities grows increasingly complex. Weak-to-strong generalization offers a promising approach by leveraging predictions from weaker models to guide stronger systems, but its effectiveness could be constrai... |
2502.11108 | Knowledge Graph-Driven Retrieval-Augmented Generation: Integrating Deepseek-R1 with Weaviate for Advanced Chatbot Applications | cs.CL cs.AI | Large language models (LLMs) have significantly advanced the field of natural language generation. However, they frequently generate unverified outputs, which compromises their reliability in critical applications. In this study, we propose an innovative framework that combines structured biomedical knowledge with LL... |
2502.11109 | Explosive Growth in Large-Scale Collaboration Networks | cs.SI physics.soc-ph | We analyse the evolution of two large collaboration networks: the Microsoft Academic Graph (1800-2020) and Internet Movie Database (1900-2020), comprising $2.72 \times 10^8$ and $1.88 \times 10^6$ nodes respectively. The networks show super-linear growth, with node counts following power laws $N(t) \propto t^{\alpha}... |
2502.11112 | Parametric Analysis of Network Evolution Processes | cs.SI physics.soc-ph | We present a comprehensive parametric analysis of node and edge lifetime processes in two large-scale collaboration networks: the Microsoft Academic Graph (1800-2020) and Internet Movie Database (1900-2020). Node and edge lifetimes (career and collaboration durations) follow Weibull distributions with consistent sha... |
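
The preview above is a plain pipe-delimited table, so it can be queried without any special tooling. Below is a minimal sketch, assuming the table is saved locally as `papers_preview.md` (a hypothetical file name, not something this page provides); it parses each `id | title | categories | abstract |` row into a record and runs a simple category filter. Splitting the categories cell on whitespace reflects how multiple arXiv categories (e.g., `cs.LG cs.AI`) appear here as a single space-separated field.

```python
# Minimal sketch (not part of the dataset itself): parse the pipe-delimited
# preview rows into records and filter them by arXiv category.
# "papers_preview.md" is a hypothetical file name; point it at wherever
# this table is saved.
from dataclasses import dataclass


@dataclass
class Paper:
    id: str
    title: str
    categories: list[str]
    abstract: str


def parse_rows(lines):
    """Yield one Paper per well-formed 'id | title | categories | abstract |' row."""
    for line in lines:
        cells = [c.strip() for c in line.strip().strip("|").split(" | ")]
        if len(cells) != 4 or cells[0] == "id":
            continue  # skip the header, the separator row, and malformed lines
        yield Paper(cells[0], cells[1], cells[2].split(), cells[3])


with open("papers_preview.md", encoding="utf-8") as f:
    papers = list(parse_rows(f))

# Example query: cs.CL papers whose title or abstract mentions jailbreaks.
for p in papers:
    text = (p.title + " " + p.abstract).lower()
    if "cs.CL" in p.categories and "jailbreak" in text:
        print(p.id, p.title)
```

Note that the abstracts in this preview are truncated (they end in `...`), so any full-text filtering over this table only sees the visible prefix of each abstract.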