Dataset columns (name, dtype, value-length range):
  id           stringlengths   64-64
  published    stringlengths   19-25
  title        stringlengths   7-262
  description  stringlengths   6-54.4k
  link         stringlengths   31-227
  category     stringclasses   6 values
  image        stringlengths   3-247
070804a71f3ad7b5fba1693e156850933a8f5c97c3392889931819b81e5b45a7
2026-01-01T00:00:00-05:00
Invisible Languages of the LLM Universe
arXiv:2510.11557v2 Announce Type: replace Abstract: Large Language Models are trained on massive multilingual corpora, yet this abundance masks a profound crisis: of the world's 7,613 living languages, approximately 2,000 languages with millions of speakers remain effectively invisible in digital ecosystems. We propose a critical framework connecting empirical measurements of language vitality (real world demographic strength) and digitality (online presence) with postcolonial theory and epistemic injustice to explain why linguistic inequality in AI systems is not incidental but structural. Analyzing data across all documented human languages, we identify four categories: Strongholds (33%, high vitality and digitality), Digital Echoes (6%, high digitality despite declining vitality), Fading Voices (36%, low on both dimensions), and critically, Invisible Giants (27%, high vitality but near-zero digitality): languages spoken by millions yet absent from the LLM universe. We demonstrate that these patterns reflect continuities from colonial-era linguistic hierarchies to contemporary AI development, constituting digital epistemic injustice. Our analysis reveals that English dominance in AI is not a technical necessity but an artifact of power structures that systematically exclude marginalized linguistic knowledge. We conclude with implications for decolonizing language technology and democratizing access to AI benefits.
https://arxiv.org/abs/2510.11557
Academic Papers
svg
d12d972c0fe1fb74aa35f9b1464c64078046806569607a9d0da21565afa69e8d
2026-01-01T00:00:00-05:00
Less is More: Improving LLM Reasoning with Minimal Test-Time Intervention
arXiv:2510.13940v2 Announce Type: replace Abstract: Recent progress in large language models (LLMs) has focused on test-time scaling to improve reasoning via increased inference computation, but often at the cost of efficiency. We revisit test-time behavior and uncover a simple yet underexplored phenomenon: reasoning uncertainty is highly localized; only a small subset of high-entropy tokens dominantly affects output correctness. Motivated by this, we propose Minimal Test-Time Intervention (MTI), a training-free framework that enhances reasoning accuracy and stability with minimal overhead. MTI includes: (i) Selective CFG intervention, applying classifier-free guidance only at uncertain positions; and (ii) Lightweight negative-prompt guidance, reusing the main model's KV cache to approximate unconditional decoding efficiently. MTI yields consistent gains across general, coding, and STEM tasks, e.g., +9.28% average improvement on six benchmarks for DeepSeek-R1-7B and +11.25% on AIME2024 using Ling-mini-2.0, while remaining highly efficient.
https://arxiv.org/abs/2510.13940
Academic Papers
svg
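The MTI abstract above hinges on flagging the few decoding positions whose next-token entropy is high. A minimal sketch of that selection step, with function names and the threshold value of my own invention (the paper's actual CFG intervention is not shown):

```python
import math

def token_entropy(probs):
    """Shannon entropy (in nats) of one decoding step's distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_uncertain_positions(step_probs, threshold=1.0):
    """Return indices of decoding steps whose entropy exceeds `threshold`.

    Only these positions would receive the intervention; the rest are
    decoded normally, which is what keeps the overhead minimal.
    """
    return [i for i, probs in enumerate(step_probs)
            if token_entropy(probs) > threshold]

# A peaked (confident) distribution vs. a near-uniform (uncertain) one.
steps = [
    [0.97, 0.01, 0.01, 0.01],   # low entropy -> skipped
    [0.25, 0.25, 0.25, 0.25],   # high entropy -> intervene
]
print(select_uncertain_positions(steps, threshold=1.0))  # [1]
```

The threshold here is illustrative; in practice it would be tuned per model and task.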
a5e741ffd83819e5050b0f3a5f11006f4674f111e49d0f80db7ec76019f396ab
2026-01-01T00:00:00-05:00
CoT-PL: Visual Chain-of-Thought Reasoning Meets Pseudo-Labeling for Open-Vocabulary Object Detection
arXiv:2510.14792v2 Announce Type: replace Abstract: Open-vocabulary object detection (OVD) seeks to recognize and localize object categories beyond those seen during training. Recent approaches typically leverage vision-language models (VLMs) to generate pseudo-labels using image-text alignment, allowing detectors to generalize to unseen classes without explicit supervision. However, these methods depend heavily on direct image-text matching, neglecting the intermediate reasoning steps essential for interpreting semantically complex scenes. This results in limited robustness when confronted with crowded or occluded visual contexts. In this paper, we introduce CoT-PL, a new framework that integrates structured visual chain-of-thought (CoT) reasoning into the pseudo-labeling process. CoT-PL decomposes object understanding into three interpretable steps: (1) region perception even for unseen objects, (2) category recognition via zero-shot reasoning, and (3) background grounding to separate semantically complex objects. Crucially, the third step naturally motivates our contrastive background learning (CBL) that uses the pre-computed background cues as negatives to promote feature disentanglement between objects and background. In this way, CoT reasoning and CBL form an integrated pipeline tailored to robust pseudo-labeling in crowded or occluded scenes. Notably, in these two settings, our novel-class pseudo-label quality achieves relative improvements of 103.4% and 168.4% over the best prior, respectively. Our extensive experiments demonstrate that CoT-PL achieves +7.7 AP50 on open-vocabulary COCO and +2.9 mask AP on LVIS for novel classes, setting a new state of the art. Code and models are available at https://github.com/hchoi256/cotpl.
https://arxiv.org/abs/2510.14792
Academic Papers
svg
a2996d446e5b5289d6a67c4f8bfd1a787a3b23c202411525730154fb38fa4240
2026-01-01T00:00:00-05:00
Graph Learning is Suboptimal in Causal Bandits
arXiv:2510.16811v2 Announce Type: replace Abstract: We study regret minimization in causal bandits under causal sufficiency where the underlying causal structure is not known to the agent. Previous work has focused on identifying the reward's parents and then applying classic bandit methods to them, or jointly learning the parents while minimizing regret. We investigate whether such strategies are optimal. Somewhat counterintuitively, our results show that learning the parent set is suboptimal. We do so by proving that there exist instances where regret minimization and parent identification are fundamentally conflicting objectives. We further analyze both the known and unknown parent set size regimes, and establish novel regret lower bounds that capture the combinatorial structure of the action space. Building on these insights, we propose nearly optimal algorithms that bypass graph and parent recovery, demonstrating that parent identification is indeed unnecessary for regret minimization. Experiments confirm a large performance gap between our method and existing baselines across various environments.
https://arxiv.org/abs/2510.16811
Academic Papers
svg
3792aeacaea09e8590952a157b990a7722ec155254acda5f11d617c793a3b5ef
2026-01-01T00:00:00-05:00
When Intelligence Fails: An Empirical Study on Why LLMs Struggle with Password Cracking
arXiv:2510.17884v3 Announce Type: replace Abstract: The remarkable capabilities of Large Language Models (LLMs) in natural language understanding and generation have sparked interest in their potential for cybersecurity applications, including password guessing. In this study, we conduct an empirical investigation into the efficacy of pre-trained LLMs for password cracking using synthetic user profiles. Specifically, we evaluate the performance of state-of-the-art open-source LLMs such as TinyLLaMA, Falcon-RW-1B, and Flan-T5 by prompting them to generate plausible passwords based on structured user attributes (e.g., name, birthdate, hobbies). Our results, measured using Hit@1, Hit@5, and Hit@10 metrics under both plaintext and SHA-256 hash comparisons, reveal consistently poor performance, with all models achieving less than 1.5% accuracy at Hit@10. In contrast, traditional rule-based and combinator-based cracking methods demonstrate significantly higher success rates. Through detailed analysis and visualization, we identify key limitations in the generative reasoning of LLMs when applied to the domain-specific task of password guessing. Our findings suggest that, despite their linguistic prowess, current LLMs lack the domain adaptation and memorization capabilities required for effective password inference, especially in the absence of supervised fine-tuning on leaked password datasets. This study provides critical insights into the limitations of LLMs in adversarial contexts and lays the groundwork for future efforts in secure, privacy-preserving, and robust password modeling.
https://arxiv.org/abs/2510.17884
Academic Papers
svg
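The Hit@1/Hit@5/Hit@10 metrics and the plaintext-versus-SHA-256 comparison settings in the abstract above are standard; a minimal sketch of how such a metric is computed (the function name and sample guesses are mine, not the paper's):

```python
import hashlib

def hit_at_k(candidates, true_password, k, hashed=False):
    """Hit@k: does the true password appear among the top-k candidates?

    With hashed=True, compare SHA-256 digests instead of plaintext,
    mirroring the two comparison settings described in the abstract.
    """
    top = candidates[:k]
    if hashed:
        target = hashlib.sha256(true_password.encode()).hexdigest()
        return any(hashlib.sha256(c.encode()).hexdigest() == target
                   for c in top)
    return true_password in top

guesses = ["alice1990", "alice123", "Alice!2024"]  # model's ranked output
print(hit_at_k(guesses, "alice123", k=1))               # False
print(hit_at_k(guesses, "alice123", k=5, hashed=True))  # True
```

The metric is averaged over many synthetic profiles to produce the sub-1.5% Hit@10 figures the paper reports.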
341b18941ebd6c9d36e4742c611059f8c29511328960f57a9ff30da4e6105063
2026-01-01T00:00:00-05:00
Space Object Detection using Multi-frame Temporal Trajectory Completion Method
arXiv:2510.19220v3 Announce Type: replace Abstract: Space objects in Geostationary Earth Orbit (GEO) present significant detection challenges in optical imaging due to weak signals, complex stellar backgrounds, and environmental interference. In this paper, we enhance high-frequency features of GEO targets while suppressing background noise at the single-frame level through wavelet transform. Building on this, we propose a multi-frame temporal trajectory completion scheme centered on the Hungarian algorithm for globally optimal cross-frame matching. To effectively mitigate missing and false detections, a series of key steps including temporal matching and interpolation completion, temporal-consistency-based noise filtering, and progressive trajectory refinement are designed in the post-processing pipeline. Experimental results on the public SpotGEO dataset demonstrate the effectiveness of the proposed method, achieving an F1 score of 90.14%.
https://arxiv.org/abs/2510.19220
Academic Papers
svg
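The cross-frame matching step in the abstract above is a linear assignment problem. A self-contained sketch using exhaustive search over permutations, which returns the same globally optimal assignment the Hungarian algorithm would on small square cost matrices (in practice one would use `scipy.optimize.linear_sum_assignment`; the cost values here are illustrative):

```python
from itertools import permutations

def optimal_assignment(cost):
    """Globally optimal trajectory-to-detection assignment.

    cost[i][j] is the distance between trajectory i's predicted position
    and detection j in the next frame. Brute force over permutations is
    used only for self-containment; it matches the Hungarian algorithm's
    optimum on small square matrices.
    """
    n = len(cost)
    best, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best:
            best, best_perm = total, perm
    return list(best_perm), best

cost = [
    [1.0, 9.0, 7.0],
    [8.0, 2.0, 9.0],
    [7.0, 8.0, 1.5],
]
print(optimal_assignment(cost))  # ([0, 1, 2], 4.5)
```

Detections left unmatched across frames would then be handled by the interpolation and filtering steps the abstract lists.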
624ad4711b5fe11edd71d133e869e271939550586d6d1a1751f865377c0b3f7d
2026-01-01T00:00:00-05:00
A Unified Approach to Submodular Maximization Under Noise
arXiv:2510.21128v2 Announce Type: replace Abstract: We consider the problem of maximizing a submodular function with access to a noisy value oracle for the function instead of an exact value oracle. Similar to prior work, we assume that the noisy oracle is persistent in that multiple calls to the oracle for a specific set always return the same value. In this model, Hassidim and Singer (2017) design a $(1-1/e)$-approximation algorithm for monotone submodular maximization subject to a cardinality constraint, and Huang et al. (2022) design a $(1-1/e)/2$-approximation algorithm for monotone submodular maximization subject to any arbitrary matroid constraint. In this paper, we design a meta-algorithm that allows us to take any "robust" algorithm for exact submodular maximization as a black box and transform it into an algorithm for the noisy setting while retaining the approximation guarantee. By using the meta-algorithm with the measured continuous greedy algorithm, we obtain a $(1-1/e)$-approximation (resp. $1/e$-approximation) for monotone (resp. non-monotone) submodular maximization subject to a matroid constraint under noise. Furthermore, by using the meta-algorithm with the double greedy algorithm, we obtain a $1/2$-approximation for unconstrained (non-monotone) submodular maximization under noise.
https://arxiv.org/abs/2510.21128
Academic Papers
svg
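The double greedy algorithm the abstract above plugs into its meta-algorithm is a classic one-pass procedure (Buchbinder et al.). A minimal sketch of the deterministic variant; the randomized variant of the same scheme is what achieves the 1/2 guarantee cited, and the noisy-oracle wrapper from the paper is not shown:

```python
def double_greedy(elements, f):
    """Deterministic double greedy for unconstrained (non-monotone)
    submodular maximization. f maps a frozenset to its value.

    Grows X from the empty set and shrinks Y from the full set; for each
    element, keep it iff its marginal gain to X beats the gain of
    dropping it from Y. X == Y at the end.
    """
    X, Y = set(), set(elements)
    for e in elements:
        a = f(frozenset(X | {e})) - f(frozenset(X))   # gain of adding e
        b = f(frozenset(Y - {e})) - f(frozenset(Y))   # gain of removing e
        if a >= b:
            X.add(e)
        else:
            Y.remove(e)
    return X

# Example: cut function of the path graph 0-1-2 (submodular, non-monotone).
edges = [(0, 1), (1, 2)]
cut = lambda S: sum(1 for u, v in edges if (u in S) != (v in S))
print(double_greedy([0, 1, 2], cut))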
9a10beb52433a715bc873109000d043c5fb79b91df38ebf883830cdb32b36b9e
2026-01-01T00:00:00-05:00
VADTree: Explainable Training-Free Video Anomaly Detection via Hierarchical Granularity-Aware Tree
arXiv:2510.22693v3 Announce Type: replace Abstract: Video anomaly detection (VAD) focuses on identifying anomalies in videos. Supervised methods demand substantial in-domain training data and fail to deliver clear explanations for anomalies. In contrast, training-free methods leverage the knowledge reserves and language interactivity of large pre-trained models to detect anomalies. However, the current fixed-length temporal window sampling approaches struggle to accurately capture anomalies with varying temporal spans. Therefore, we propose VADTree that utilizes a Hierarchical Granularity-aware Tree (HGTree) structure for flexible sampling in VAD. VADTree leverages the knowledge embedded in a pre-trained Generic Event Boundary Detection (GEBD) model to characterize potential anomaly event boundaries. Specifically, VADTree decomposes the video into generic event nodes based on boundary confidence, and performs adaptive coarse-fine hierarchical structuring and redundancy removal to construct the HGTree. Then, the multi-dimensional priors are injected into the visual language models (VLMs) to enhance the node-wise anomaly perception, and anomaly reasoning for generic event nodes is achieved via large language models (LLMs). Finally, an inter-cluster node correlation method is used to integrate the multi-granularity anomaly scores. Extensive experiments on three challenging datasets demonstrate that VADTree achieves state-of-the-art performance in training-free settings while drastically reducing the number of sampled video segments. The code will be available at https://github.com/wenlongli10/VADTree.
https://arxiv.org/abs/2510.22693
Academic Papers
svg
963c299ed6065ab00a1e0bbdb30a5fac4d37797c7ca7252f56b5da1e539edc7a
2026-01-01T00:00:00-05:00
Towards Generalisable Foundation Models for Brain MRI
arXiv:2510.23415v3 Announce Type: replace Abstract: Foundation models in artificial intelligence (AI) are transforming medical imaging by enabling general-purpose feature learning from large-scale, unlabeled datasets. In this work, we introduce BrainFound, a self-supervised foundation model for brain MRI, built by extending DINO-v2, a vision transformer originally designed for 2D natural images. BrainFound adapts DINO-v2 to model full 3D brain anatomy by incorporating volumetric information from sequential MRI slices, moving beyond conventional single-slice paradigms. It supports both single- and multimodal inputs, enabling a broad range of downstream tasks, including disease detection and image segmentation, while generalising across varied imaging protocols and clinical scenarios. We show that BrainFound consistently outperforms existing self-supervised pretraining strategies and supervised baselines, particularly in label-scarce and multi-contrast settings. By integrating information from diverse 3D MRI modalities (e.g., T1, T2, FLAIR), it enhances diagnostic accuracy and reduces dependency on extensive expert annotations. This flexibility makes BrainFound a scalable and practical solution for 3D neuroimaging pipelines, with significant potential for clinical deployment and research innovation.
https://arxiv.org/abs/2510.23415
Academic Papers
svg
cff7f67de6a0c3cc49bfd4258fe8d22e6e5ad4559c4a0f3639587596be7bf22e
2026-01-01T00:00:00-05:00
Human- vs. AI-generated tests: dimensionality and information accuracy in latent trait evaluation
arXiv:2510.24739v2 Announce Type: replace Abstract: Artificial Intelligence (AI) and large language models (LLMs) are increasingly used in social and psychological research. Among potential applications, LLMs can be used to generate, customise, or adapt measurement instruments. This study presents a preliminary investigation of AI-generated questionnaires by comparing two ChatGPT-based adaptations of the Body Awareness Questionnaire (BAQ) with the validated human-developed version. The AI instruments were designed with different levels of explicitness in content and instructions on construct facets, and their psychometric properties were assessed using a Bayesian Graded Response Model. Results show that although surface wording between AI and original items was similar, differences emerged in dimensionality and in the distribution of item and test information across latent traits. These findings illustrate the importance of applying statistical measures of accuracy to ensure the validity and interpretability of AI-driven tools.
https://arxiv.org/abs/2510.24739
Academic Papers
svg
3d809f7e518586d1b60df6b639fe926a2bc25e70c4d7a287d609c51204dc91b0
2026-01-01T00:00:00-05:00
Learning Spatial-Aware Manipulation Ordering
arXiv:2510.25138v2 Announce Type: replace Abstract: Manipulation in cluttered environments is challenging due to spatial dependencies among objects, where an improper manipulation order can cause collisions or blocked access. Existing approaches often overlook these spatial relationships, limiting their flexibility and scalability. To address these limitations, we propose OrderMind, a unified spatial-aware manipulation ordering framework that directly learns object manipulation priorities based on spatial context. Our architecture integrates a spatial context encoder with a temporal priority structuring module. We construct a spatial graph using k-Nearest Neighbors to aggregate geometric information from the local layout and encode both object-object and object-manipulator interactions to support accurate manipulation ordering in real-time. To generate physically and semantically plausible supervision signals, we introduce a spatial prior labeling method that guides a vision-language model to produce reasonable manipulation orders for distillation. We evaluate OrderMind on our Manipulation Ordering Benchmark, comprising 163,222 samples of varying difficulty. Extensive experiments in both simulation and real-world environments demonstrate that our method significantly outperforms prior approaches in effectiveness and efficiency, enabling robust manipulation in cluttered scenes.
https://arxiv.org/abs/2510.25138
Academic Papers
svg
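The k-Nearest-Neighbors spatial graph the OrderMind abstract describes can be sketched minimally as follows; the function name, 2D centroids, and edge representation are simplifications of mine (the paper's encoder consumes richer geometry):

```python
import math

def knn_edges(points, k):
    """Directed k-nearest-neighbor edges over object centroids.

    Each object links to its k closest neighbors, giving the local
    layout graph over which spatial context would be aggregated.
    """
    edges = []
    for i, p in enumerate(points):
        dists = sorted(
            (math.dist(p, q), j) for j, q in enumerate(points) if j != i
        )
        edges.extend((i, j) for _, j in dists[:k])
    return edges

# Three cluttered objects; each node links to its single nearest neighbor.
objs = [(0.0, 0.0), (0.5, 0.1), (3.0, 3.0)]
print(knn_edges(objs, k=1))  # [(0, 1), (1, 0), (2, 1)]
```

In the paper's setting, node features on such a graph would also encode object-manipulator interactions before priorities are predicted.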
e17f45f0cdca732299f2d9bfc20e63037b075093ef7396fb0a787bd77ad4aaba
2026-01-01T00:00:00-05:00
ATLAS: Artifact Generation Through Layered Constraints and LLM x MDE Synergy
arXiv:2510.25890v2 Announce Type: replace Abstract: ATLAS unifies Large Language Models with Model-Driven Engineering to generate regulator-ready artifacts and machine-checkable evidence for safety- and compliance-critical domains. ATLAS integrates three pillars: a Unified Meta-Model (UMM) reconciles heterogeneous schemas and regulatory text into a single semantic space; an Integrated Constraint Model (ICM) extends our prior Dual-Stage (S2D2) extraction logic to compile layered requirements into deterministic generation-time automata (Layer 1) and post-generation validators (Layer 2); and Constraint-Guided Verifiable Generation (CVG) applies these through two-layer enforcement: Layer 1 structural constraints drive prefix-safe decoding, while Layer 2 semantic/logical validation produces machine-checkable certificates. When violations occur, ATLAS performs audit-guided repair and records generation traces for compliance review. We evaluate ATLAS in automotive software engineering (AUTOSAR) and cross-border legal jurisdiction (Brussels I bis). ATLAS produces structurally valid, auditable artifacts that integrate with existing tooling and substantially reduce manual remediation effort, validating a graduated automation paradigm that automates routine construction while empowering experts to resolve complex semantic ambiguities through machine-checkable evidence.
https://arxiv.org/abs/2510.25890
Academic Papers
svg
bf260f747294d809f5ca172374a70674a088b21572bbbce62609422fea90fc57
2026-01-01T00:00:00-05:00
On the limitation of evaluating machine unlearning using only a single training seed
arXiv:2510.26714v4 Announce Type: replace Abstract: Machine unlearning (MU) aims to remove the influence of certain data points from a trained model without costly retraining. Most practical MU algorithms are only approximate and their performance can only be assessed empirically. Care must therefore be taken to make empirical comparisons as representative as possible. A common practice is to run the MU algorithm multiple times independently starting from the same trained model. In this work, we demonstrate that this practice can give highly non-representative results because -- even for the same architecture and same dataset -- some MU methods can be highly sensitive to the choice of random number seed used for model training. We illustrate that this is particularly relevant for MU methods that are deterministic, i.e., which always produce the same result when started from the same trained model. We therefore recommend that empirical comparisons of MU algorithms should also reflect the variability across different model training seeds.
https://arxiv.org/abs/2510.26714
Academic Papers
svg
373e716c68f26ae73efa4fab809a9ed0e4d260d369c061b0738cb73f1b4965d4
2026-01-01T00:00:00-05:00
Deep sequence models tend to memorize geometrically; it is unclear why
arXiv:2510.26745v2 Announce Type: replace Abstract: Deep sequence models are said to store atomic facts predominantly in the form of associative memory: a brute-force lookup of co-occurring entities. We identify a dramatically different form of storage of atomic facts that we term as geometric memory. Here, the model has synthesized embeddings encoding novel global relationships between all entities, including ones that do not co-occur in training. Such storage is powerful: for instance, we show how it transforms a hard reasoning task involving an $\ell$-fold composition into an easy-to-learn $1$-step navigation task. From this phenomenon, we extract fundamental aspects of neural embedding geometries that are hard to explain. We argue that the rise of such a geometry, as opposed to a lookup of local associations, cannot be straightforwardly attributed to typical supervisory, architectural, or optimizational pressures. Counterintuitively, a geometry is learned even when it is more complex than the brute-force lookup. Then, by analyzing a connection to Node2Vec, we demonstrate how the geometry stems from a spectral bias that (in contrast to prevailing theories) indeed arises naturally despite the lack of various pressures. This analysis also points practitioners to visible headroom for making Transformer memory more strongly geometric. We hope the geometric view of parametric memory encourages revisiting the default intuitions that guide researchers in areas like knowledge acquisition, capacity, discovery, and unlearning.
https://arxiv.org/abs/2510.26745
Academic Papers
svg
40cbbe3fb18138855058749549c05b7de6365306b2b2542b1ca26c2ea08b6049
2026-01-01T00:00:00-05:00
Can machines think efficiently?
arXiv:2510.26954v2 Announce Type: replace Abstract: The Turing Test is no longer adequate for distinguishing human and machine intelligence. With advanced artificial intelligence systems already passing the original Turing Test and contributing to serious ethical and environmental concerns, we urgently need to update the test. This work expands upon the original imitation game by accounting for an additional factor: the energy spent answering the questions. By adding the constraint of energy, the new test forces us to evaluate intelligence through the lens of efficiency, connecting the abstract problem of thinking to the concrete reality of finite resources. Further, this proposed new test ensures the evaluation of intelligence has a measurable, practical finish line that the original test lacks. This additional constraint compels society to weigh the time savings of using artificial intelligence against its total resource cost.
https://arxiv.org/abs/2510.26954
Academic Papers
svg
93336f620629f2d7556f5f0d947283258b68ba5a84036ddc69511f623c33a05b
2026-01-01T00:00:00-05:00
OpenSIR: Open-Ended Self-Improving Reasoner
arXiv:2511.00602v2 Announce Type: replace Abstract: Recent advances in large language model (LLM) reasoning through reinforcement learning rely on annotated datasets for verifiable rewards, which may limit models' ability to surpass human-level performance. While self-play offers a promising alternative, existing approaches depend on external verifiers or cannot learn open-endedly. We present Open-Ended Self-Improving Reasoner (OpenSIR), a self-play framework where an LLM learns to generate and solve novel problems by alternating teacher and student roles without external supervision. To generate novel problems, OpenSIR optimises for both difficulty and diversity, rewarding problems that challenge appropriately while exploring distinct concepts, enabling open-ended mathematical discovery. Starting from a single trivial seed problem, OpenSIR substantially improves instruction models: Llama-3.2-3B-Instruct advances from 73.9 to 78.3 on GSM8K, and from 28.8 to 34.4 on College Math, while Gemma-2-2B-Instruct rises from 38.5 to 58.7 on GSM8K. Our analyses reveal that OpenSIR achieves open-ended learning through co-evolving teacher-student roles that adaptively calibrate difficulty and drive diverse exploration, progressing autonomously from basic to advanced mathematics.
https://arxiv.org/abs/2511.00602
Academic Papers
svg
6785e7a771c06e63895ab113b80bdaa769641aa8db727e0f9c0b57805fb7e9f6
2026-01-01T00:00:00-05:00
Necessary and Sufficient Conditions for Capacity-Achieving Private Information Retrieval with Adversarial Servers
arXiv:2511.06003v3 Announce Type: replace Abstract: Private information retrieval (PIR) is a mechanism for efficiently downloading messages while keeping the index of the desired message secret from the servers. PIR schemes have been extended to various scenarios with adversarial servers: PIR schemes where some servers are unresponsive or return noisy responses are called robust PIR and Byzantine PIR, respectively; PIR schemes where some servers collude to reveal the index are called colluding PIR. The information-theoretic upper bound on the download efficiency of these PIR schemes has been proved in previous studies. However, systematic ways to construct PIR schemes that achieve the upper bound are not known. In order to construct capacity-achieving PIR schemes systematically, it is necessary to clarify the conditions that the queries should satisfy. This paper proves the necessary and sufficient conditions for capacity-achieving PIR schemes.
https://arxiv.org/abs/2511.06003
Academic Papers
svg
6d1ae32b4bacad884b1a89dbba90a39dec8598a533202599551eec682c09ebbb
2026-01-01T00:00:00-05:00
Benchmarking LLMs for Fine-Grained Code Review with Enriched Context in Practice
arXiv:2511.07017v2 Announce Type: replace Abstract: Code review is a cornerstone of software quality assurance, and recent advances in Large Language Models (LLMs) have shown promise in its automation. However, existing benchmarks for LLM-based code review face three major limitations. (1) Lack of semantic context: most benchmarks provide only code diffs without textual information such as issue descriptions, which are crucial for understanding developer intent. (2) Data quality issues: without rigorous validation, many samples are noisy (e.g., reviews on outdated or irrelevant code), reducing evaluation reliability. (3) Coarse granularity: most benchmarks operate at the file or commit level, overlooking the fine-grained, line-level reasoning essential for precise review. We introduce ContextCRBench, a high-quality, context-rich benchmark for fine-grained LLM evaluation in code review. Our construction pipeline comprises: Raw Data Crawling, collecting 153.7K issues and pull requests from top-tier repositories; Comprehensive Context Extraction, linking issue-PR pairs for textual context and extracting the full surrounding function or class for code context; and Multi-stage Data Filtering, combining rule-based and LLM-based validation to remove outdated, malformed, or low-value samples, resulting in 67,910 context-enriched entries. ContextCRBench supports three evaluation scenarios aligned with the review workflow: hunk-level quality assessment, line-level defect localization, and line-level comment generation. Evaluating eight leading LLMs (four closed-source and four open-source) reveals that textual context yields greater performance gains than code context alone, while current LLMs remain far from human-level review ability. Deployed at ByteDance, ContextCRBench drives a self-evolving code review system, improving performance by 61.98% and demonstrating its robustness and industrial utility. https://github.com/kinesiatricssxilm14/ContextCRBench.
https://arxiv.org/abs/2511.07017
Academic Papers
svg
1f6245a680098c4beac2bd7d9c9de342f54c33ac489ca8c2d05a53fd95d2b5ba
2026-01-01T00:00:00-05:00
Can ensembles improve evidence recall? A case study
arXiv:2511.07055v2 Announce Type: replace Abstract: Feature attribution methods typically provide minimal sufficient evidence justifying a model decision. However, in many applications, such as compliance and cataloging, the full set of contributing features must be identified: complete evidence. We present a case study using existing language models and a medical dataset which contains human-annotated complete evidence. Our findings show that an ensemble approach, aggregating evidence from several models, improves evidence recall over individual models. We examine different ensemble sizes, the effect of evidence-guided training, and provide qualitative insights.
https://arxiv.org/abs/2511.07055
Academic Papers
svg
1572fca74a59b86c828e3c587c2606a86bd2203ffa9097b921647391f903cda0
2026-01-01T00:00:00-05:00
Feedback Descent: Open-Ended Text Optimization via Pairwise Comparison
arXiv:2511.07919v2 Announce Type: replace Abstract: We introduce Feedback Descent, a framework that optimizes text artifacts (prompts, code, and molecules) through structured textual feedback, rather than relying solely on scalar rewards. By preserving detailed critiques instead of compressing them to binary preferences, Feedback Descent widens the information bottleneck in preference learning, enabling directed optimization in text space rather than weight space. We show that in-context learning can transform structured feedback into gradient-like directional information, enabling targeted edits. Unlike prior approaches that collapse judgments into single bits, our evaluators pair each comparison with textual feedback, which functions as high-bandwidth supervision. The iteration loop is done purely at inference time, without modifying any model weights, and is task-agnostic. We evaluate Feedback Descent on three diverse domains and find that it outperforms state-of-the-art prompt optimization (GEPA), reinforcement learning methods (GRPO, REINVENT), and even specialized graph-based molecular optimizers. In the DOCKSTRING molecule discovery benchmark, Feedback Descent identifies novel drug-like molecules surpassing the 99.9th percentile of a database with more than 260,000 compounds across six protein targets.
https://arxiv.org/abs/2511.07919
Academic Papers
svg
2cc2320dbdc6a67d2f6381112756d130c2bb5763e7d4b5c3f1efef5cd8d42924
2026-01-01T00:00:00-05:00
Training Language Models to Explain Their Own Computations
arXiv:2511.08579v2 Announce Type: replace Abstract: Can language models (LMs) learn to faithfully describe their internal computations? Are they better able to describe themselves than other models? We study the extent to which LMs' privileged access to their own internals can be leveraged to produce new techniques for explaining their behavior. Using existing interpretability techniques as a source of ground truth, we fine-tune LMs to generate natural language descriptions of (1) the information encoded by LM features, (2) the causal structure of LMs' internal activations, and (3) the influence of specific input tokens on LM outputs. When trained with only tens of thousands of example explanations, explainer models exhibit non-trivial generalization to new queries. This generalization appears partly attributable to explainer models' privileged access to their own internals: using a model to explain its own computations generally works better than using a *different* model to explain its computations (even if the other model is significantly more capable). Our results suggest not only that LMs can learn to reliably explain their internal computations, but that such explanations offer a scalable complement to existing interpretability methods. Code and data at https://github.com/TransluceAI/introspective-interp
https://arxiv.org/abs/2511.08579
Academic Papers
svg
f8167eb78e8983f461520d493eb9188f2ebe3cab3321ef0bb2c620775d4b76e3
2026-01-01T00:00:00-05:00
Scaling Spatial Intelligence with Multimodal Foundation Models
arXiv:2511.13719v3 Announce Type: replace Abstract: Despite remarkable progress, multimodal foundation models still exhibit surprising deficiencies in spatial intelligence. In this work, we explore scaling up multimodal foundation models to cultivate spatial intelligence within the SenseNova-SI family, built upon established multimodal foundations including visual understanding models (i.e., Qwen3-VL and InternVL3) and unified understanding and generation models (i.e., Bagel). We take a principled approach to constructing high-performing and robust spatial intelligence by systematically curating SenseNova-SI-8M: eight million diverse data samples under a rigorous taxonomy of spatial capabilities. SenseNova-SI demonstrates unprecedented performance across a broad range of spatial intelligence benchmarks: 68.7% on VSI-Bench, 43.3% on MMSI, 85.6% on MindCube, 54.6% on ViewSpatial, and 50.1% on SITE, while maintaining strong general multimodal understanding (e.g., 84.9% on MMBench-En). More importantly, we analyze the impact of data scaling, discuss early signs of emergent generalization capabilities enabled by diverse data training, analyze the risk of overfitting and language shortcuts, present a preliminary study on spatial chain-of-thought reasoning, and validate the potential downstream application. SenseNova-SI is an ongoing project, and this report will be updated continuously. All newly trained multimodal foundation models are publicly released to facilitate further research in this direction.
https://arxiv.org/abs/2511.13719
Academic Papers
svg
1744a3a0a2820db99af735bc96646a35bdf370dab860104881eed0081945e9d4
2026-01-01T00:00:00-05:00
Complex variational autoencoders admit K\"ahler structure
arXiv:2511.15172v4 Announce Type: replace Abstract: It has been discovered that latent-Euclidean variational autoencoders (VAEs) admit, in various capacities, Riemannian structure. We adapt these arguments to complex VAEs with a complex latent stage. We show that complex VAEs exhibit, to a certain degree, K\"ahler geometric structure. Our methods are tailored to decoder geometry. We derive the Fisher information metric in the complex case under a latent complex Gaussian with trivial relation matrix. It is well known from statistical information theory that the Fisher information coincides with the Hessian of the Kullback-Leibler (KL) divergence. Thus, the metric K\"ahler potential relation is exactly achieved under relative entropy. We propose a K\"ahler potential derived from complex Gaussian mixtures that acts as a rough proxy for the Fisher information metric while still being faithful to the underlying K\"ahler geometry. Computation of the metric via this potential is efficient, and because the potential is a valid plurisubharmonic (PSH) function, the large-scale computational burden of automatic differentiation is displaced to small scale. Our methods leverage the law of total covariance to bridge behavior between our potential and the Fisher metric. We show that we can regularize the latent space with decoder geometry, and that we can sample in accordance with a weighted complex volume element. We demonstrate that these strategies, at the expense of sample variation, yield consistently smoother representations and fewer semantic outliers.
https://arxiv.org/abs/2511.15172
Academic Papers
svg
2d1eb8228bb5ed1e614a8772c222e650789be41da64f5d6a952a6f1035fb6163
2026-01-01T00:00:00-05:00
How Robot Dogs See the Unseeable: Improving Visual Interpretability via Peering for Exploratory Robots
arXiv:2511.16262v3 Announce Type: replace Abstract: In vegetated environments, such as forests, exploratory robots play a vital role in navigating complex, cluttered environments where human access is limited and traditional equipment struggles. Visual occlusion from obstacles, such as foliage, can severely obstruct a robot's sensors, impairing scene understanding. We show that "peering", a characteristic side-to-side movement used by insects to overcome their visual limitations, can also allow robots to markedly improve visual reasoning under partial occlusion. This is accomplished by applying core signal processing principles, specifically optical synthetic aperture sensing, together with the vision reasoning capabilities of modern large multimodal models. Peering enables real-time, high-resolution, and wavelength-independent perception, which is crucial for vision-based scene understanding across a wide range of applications. The approach is low-cost and immediately deployable on any camera-equipped robot. We investigated different peering motions and occlusion masking strategies, demonstrating that, unlike peering, state-of-the-art multi-view 3D vision techniques fail in these conditions due to their high susceptibility to occlusion. Our experiments were carried out on an industrial-grade quadrupedal robot. However, the ability to peer is not limited to such platforms, but potentially also applicable to bipedal, hexapod, wheeled, or crawling platforms. Robots that can effectively see through partial occlusion will gain superior perception abilities - including enhanced scene understanding, situational awareness, camouflage breaking, and advanced navigation in complex environments.
https://arxiv.org/abs/2511.16262
Academic Papers
svg
e7a5a102a70fa4310d8e53f229af8e322706204d46d0c93dc6d417f2e03a07c6
2026-01-01T00:00:00-05:00
BCWildfire: A Long-term Multi-factor Dataset and Deep Learning Benchmark for Boreal Wildfire Risk Prediction
arXiv:2511.17597v2 Announce Type: replace Abstract: Wildfire risk prediction remains a critical yet challenging task due to the complex interactions among fuel conditions, meteorology, topography, and human activity. Despite growing interest in data-driven approaches, publicly available benchmark datasets that support long-term temporal modeling, large-scale spatial coverage, and multimodal drivers remain scarce. To address this gap, we present a 25-year, daily-resolution wildfire dataset covering 240 million hectares across British Columbia and surrounding regions. The dataset includes 38 covariates, encompassing active fire detections, weather variables, fuel conditions, terrain features, and anthropogenic factors. Using this benchmark, we evaluate a diverse set of time-series forecasting models, including CNN-based, linear-based, Transformer-based, and Mamba-based architectures. We also investigate the effectiveness of position embeddings and the relative importance of different fire-driving factors. The dataset and the corresponding code can be found at https://github.com/SynUW/mmFire
https://arxiv.org/abs/2511.17597
Academic Papers
svg
fa8ddea77e30fdcd0ed175665038aa09b859f6bafe8ab47fe7adac189230ef40
2026-01-01T00:00:00-05:00
Hear: Hierarchically Enhanced Aesthetic Representations For Multidimensional Music Evaluation
arXiv:2511.18869v2 Announce Type: replace Abstract: Evaluating song aesthetics is challenging due to the multidimensional nature of musical perception and the scarcity of labeled data. We propose HEAR, a robust music aesthetic evaluation framework that combines: (1) a multi-source multi-scale representations module to obtain complementary segment- and track-level features, (2) a hierarchical augmentation strategy to mitigate overfitting, and (3) a hybrid training objective that integrates regression and ranking losses for accurate scoring and reliable top-tier song identification. Experiments demonstrate that HEAR consistently outperforms the baseline across all metrics on both tracks of the ICASSP 2026 SongEval benchmark. The code and trained model weights are available at https://github.com/Eps-Acoustic-Revolution-Lab/EAR_HEAR.
https://arxiv.org/abs/2511.18869
Academic Papers
svg
fd27894cb27a8db189c36b4ebce4443b8544c9d3c36fa0300f9df83999fd051e
2026-01-01T00:00:00-05:00
Energy-Efficient Routing Protocol in Vehicular Opportunistic Networks: A Dynamic Cluster-based Routing Using Deep Reinforcement Learning
arXiv:2511.19026v3 Announce Type: replace Abstract: Opportunistic Networks (OppNets) employ the Store-Carry-Forward (SCF) paradigm to maintain communication during intermittent connectivity. However, routing performance suffers due to dynamic topology changes, unpredictable contact patterns, and resource constraints including limited energy and buffer capacity. These challenges compromise delivery reliability, increase latency, and reduce node longevity in highly dynamic environments. This paper proposes Cluster-based Routing using Deep Reinforcement Learning (CR-DRL), an adaptive routing approach that integrates an Actor-Critic learning framework with a heuristic function. CR-DRL enables real-time optimal relay selection and dynamic cluster overlap adjustment to maintain connectivity while minimizing redundant transmissions and enhancing routing efficiency. Simulation results demonstrate significant improvements over state-of-the-art baselines: CR-DRL extends node lifetimes by up to 21%, reduces overall energy use by 17%, and keeps nodes active for 15% longer. Communication performance also improves, with up to 10% higher delivery ratio, 28.5% lower delay, 7% higher throughput, and data requiring 30% fewer transmission steps across the network.
https://arxiv.org/abs/2511.19026
Academic Papers
svg
8c3f70b16f8792c8fab68bdcda5cf1f3c407fca7aece5d1393b1d30c5d22202b
2026-01-01T00:00:00-05:00
SEDA: A Self-Adapted Entity-Centric Data Augmentation for Boosting Grid-based Discontinuous NER Models
arXiv:2511.20143v2 Announce Type: replace Abstract: Named Entity Recognition (NER) is a critical task in natural language processing, yet it remains particularly challenging for discontinuous entities. The primary difficulty lies in text segmentation, as traditional methods often missegment or entirely miss cross-sentence discontinuous entities, significantly affecting recognition accuracy. Therefore, we aim to address the segmentation and omission issues associated with such entities. Recent studies have shown that grid-tagging methods are effective for information extraction due to their flexible tagging schemes and robust architectures. Building on this, we integrate image data augmentation techniques, such as cropping, scaling, and padding, into grid-based models to enhance their ability to recognize discontinuous entities and handle segmentation challenges. Experimental results demonstrate that traditional segmentation methods often fail to capture cross-sentence discontinuous entities, leading to decreased performance. In contrast, our augmented grid models achieve notable improvements. Evaluations on the CADEC, ShARe13, and ShARe14 datasets show F1 score gains of 1-2.5% overall and 3.7-8.4% for discontinuous entities, confirming the effectiveness of our approach.
https://arxiv.org/abs/2511.20143
Academic Papers
svg
a2c9eb08ec147589f5cda8ac888eac286741486a8d75d8d64d2afa28c9aa5015
2026-01-01T00:00:00-05:00
APT-CGLP: Advanced Persistent Threat Hunting via Contrastive Graph-Language Pre-Training
arXiv:2511.20290v2 Announce Type: replace Abstract: Provenance-based threat hunting identifies Advanced Persistent Threats (APTs) on endpoints by correlating attack patterns described in Cyber Threat Intelligence (CTI) with provenance graphs derived from system audit logs. A fundamental challenge in this paradigm lies in the modality gap -- the structural and semantic disconnect between provenance graphs and CTI reports. Prior work addresses this by framing threat hunting as a graph matching task: 1) extracting attack graphs from CTI reports, and 2) aligning them with provenance graphs. However, this pipeline incurs severe \textit{information loss} during graph extraction and demands intensive manual curation, undermining scalability and effectiveness. In this paper, we present APT-CGLP, a novel cross-modal APT hunting system via Contrastive Graph-Language Pre-training, facilitating end-to-end semantic matching between provenance graphs and CTI reports without human intervention. First, empowered by the Large Language Model (LLM), APT-CGLP mitigates data scarcity by synthesizing high-fidelity provenance graph-CTI report pairs, while simultaneously distilling actionable insights from noisy web-sourced CTIs to improve their operational utility. Second, APT-CGLP incorporates a tailored multi-objective training algorithm that synergizes contrastive learning with inter-modal masked modeling, promoting cross-modal attack semantic alignment at both coarse- and fine-grained levels. Extensive experiments on four real-world APT datasets demonstrate that APT-CGLP consistently outperforms state-of-the-art threat hunting baselines in terms of accuracy and efficiency.
https://arxiv.org/abs/2511.20290
Academic Papers
svg
d5b4215141fa927fd36104a9d35e0641195491d6a5d920b7c6b7ec77f3c717cc
2026-01-01T00:00:00-05:00
Infinity-RoPE: Action-Controllable Infinite Video Generation Emerges From Autoregressive Self-Rollout
arXiv:2511.20649v2 Announce Type: replace Abstract: Current autoregressive video diffusion models are constrained by three core bottlenecks: (i) the finite temporal horizon imposed by the base model's 3D Rotary Positional Embedding (3D-RoPE), (ii) slow prompt responsiveness in maintaining fine-grained action control during long-form rollouts, and (iii) the inability to realize discontinuous cinematic transitions within a single generation stream. We introduce $\infty$-RoPE, a unified inference-time framework that addresses all three limitations through three interconnected components: Block-Relativistic RoPE, KV Flush, and RoPE Cut. Block-Relativistic RoPE reformulates temporal encoding as a moving local reference frame, where each newly generated latent block is rotated relative to the base model's maximum frame horizon while earlier blocks are rotated backward to preserve relative temporal geometry. This relativistic formulation eliminates fixed temporal positions, enabling continuous video generation far beyond the base positional limits. To obtain fine-grained action control without re-encoding, KV Flush renews the KV cache by retaining only two latent frames, the global sink and the last generated latent frame, thereby ensuring immediate prompt responsiveness. Finally, RoPE Cut introduces controlled discontinuities in temporal RoPE coordinates, enabling multi-cut scene transitions within a single continuous rollout. Together, these components establish $\infty$-RoPE as a training-free foundation for infinite-horizon, controllable, and cinematic video diffusion. Comprehensive experiments show that $\infty$-RoPE consistently surpasses previous autoregressive models in overall VBench scores.
https://arxiv.org/abs/2511.20649
Academic Papers
svg
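The Block-Relativistic RoPE idea in the $\infty$-RoPE abstract above can be illustrated with a tiny position-bookkeeping sketch. This is an illustrative reading of the abstract only, not the paper's code: the function name, the contiguous-block layout, and the fixed anchoring of the newest block at the maximum horizon are all assumptions.

```python
def block_relativistic_positions(num_blocks, block_len, max_pos):
    """Illustrative sketch: the newest latent block is always anchored so it
    ends at the model's maximum frame horizon, and every earlier block is
    rotated backward by whole blocks, preserving relative temporal geometry
    no matter how long generation continues."""
    positions = []
    for j in range(num_blocks):  # j = 0 is the oldest block
        end = max_pos - (num_blocks - 1 - j) * block_len
        positions.append(list(range(end - block_len, end)))
    return positions

# three blocks of four frames, base horizon of 20 frames
pos = block_relativistic_positions(num_blocks=3, block_len=4, max_pos=20)
```

The point of the sketch is that no block ever receives a position beyond `max_pos`, so generation is not capped by the base model's positional limit; only the relative offsets between blocks matter.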
9e39e129d8e09a6f6ccc878be8a4c642fd91f8e8d70ee65e2ec26ebd152b44b3
2026-01-01T00:00:00-05:00
Large Language Models for Unit Test Generation: Achievements, Challenges, and Opportunities
arXiv:2511.21382v2 Announce Type: replace Abstract: Automated unit test generation is critical for software quality but traditional structure-driven methods often lack the semantic understanding required to produce realistic inputs and oracles. Large language models (LLMs) address this limitation by leveraging their extensive data-driven knowledge of code semantics and programming patterns. To analyze the state of the art in this domain, we conducted a systematic literature review of 115 publications published between May 2021 and August 2025. We propose a taxonomy based on the unit test generation lifecycle that divides the process into a generative phase for creating test artifacts and a quality assurance phase for refining them. Our analysis reveals that prompt engineering has emerged as the dominant utilization approach and accounts for 89% of the studies due to its flexibility. We find that iterative validation and repair loops have become the standard mechanism to ensure robust usability by significantly improving compilation and execution pass rates. However, critical challenges remain regarding the weak fault detection capabilities and the lack of standardized benchmarks. We conclude with a roadmap for future research that emphasizes the progression toward autonomous testing agents and hybrid systems combining LLMs with traditional software engineering tools.
https://arxiv.org/abs/2511.21382
Academic Papers
svg
f2f445babaa831c3e6b18c8395392ef4f7c51f938229d4deded79275de4e878e
2026-01-01T00:00:00-05:00
TIM-PRM: Verifying multimodal reasoning with Tool-Integrated PRM
arXiv:2511.22998v2 Announce Type: replace Abstract: Multimodal Large Language Models (MLLMs) have achieved impressive performance in mathematical reasoning, yet they remain vulnerable to visual hallucinations and logical inconsistencies that standard outcome-based supervision fails to mitigate. While Process Reward Models (PRMs) promise step-by-step verification, current approaches typically operate as scalar scorers or generative critics that suffer from sycophancy, blindly validating flawed hypotheses rather than grounding them in visual reality. To bridge this gap, we introduce TIM-PRM (Tool-Integrated Multimodal PRM), a novel agentic framework that transforms verification from a passive classification task into an active, tool-augmented investigation. TIM-PRM is trained to explicitly plan verification strategies and utilizes a mechanism of Independent Question Asking to query evidence via external tools, effectively decoupling verification from the reasoning context to eliminate confirmation bias. We instantiate this method by curating a high-quality dataset of tool-integrated verification trajectories. Extensive experiments on VisualProcessBench demonstrate that our 8B parameter model surpasses existing open-source multimodal PRMs, significantly outperforming much larger models like Qwen2.5-72B and InternVL-78B, while offering interpretable insights into the verification process.
https://arxiv.org/abs/2511.22998
Academic Papers
svg
352bb1a3659da640af691fd43af5a5ea386a55d7fed8104fa6bab546754ae190
2026-01-01T00:00:00-05:00
Dual LoRA: Enhancing LoRA with Magnitude and Direction Updates
arXiv:2512.03402v3 Announce Type: replace Abstract: Low-rank adaptation (LoRA) is one of the most popular parameter-efficient fine-tuning (PEFT) methods for adapting pre-trained large language models (LLMs) to specific downstream tasks. However, models trained with LoRA often exhibit unsatisfactory performance due to the low-rank assumption. In this paper, we propose a novel method called Dual LoRA to improve performance by incorporating an inductive bias into the original LoRA. Specifically, we separate the low-rank matrices into two groups: a magnitude group that controls whether, and how far, a parameter should be updated, and a direction group that decides whether the parameter should move forward or backward, to better simulate the parameter-updating process of full fine-tuning under gradient-based optimization algorithms. We show that this can be achieved simply by adding a ReLU function to the magnitude group and a sign function to the direction group. We conduct experiments over a wide range of NLP tasks, including natural language understanding (NLU) and commonsense reasoning datasets, on RoBERTa, DeBERTa, and LLaMA-1/2/3 as baseline models. The results show that we consistently outperform LoRA and its state-of-the-art variants with the same number of trainable parameters.
https://arxiv.org/abs/2512.03402
Academic Papers
svg
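The magnitude/direction decomposition in the Dual LoRA abstract above can be sketched in a few lines. This is a minimal NumPy reading of the abstract's description (ReLU on one low-rank product, sign on the other); the variable names and the elementwise combination are my assumptions, not the paper's implementation.

```python
import numpy as np

def dual_lora_delta(A_m, B_m, A_d, B_d):
    """Sketch of a Dual LoRA weight update: a magnitude group passed through
    ReLU (whether and how far to move, never negative) combined elementwise
    with a direction group passed through sign (forward +1 or backward -1),
    mimicking a gradient-style parameter update."""
    magnitude = np.maximum(B_m @ A_m, 0.0)  # ReLU gates and scales the update
    direction = np.sign(B_d @ A_d)          # sign picks the update direction
    return magnitude * direction

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 16, 2
A_m, B_m = rng.standard_normal((r, d_in)), rng.standard_normal((d_out, r))
A_d, B_d = rng.standard_normal((r, d_in)), rng.standard_normal((d_out, r))
delta = dual_lora_delta(A_m, B_m, A_d, B_d)
```

Both groups keep the usual LoRA parameter count (two rank-r factors each), so the trainable-parameter budget stays comparable to standard LoRA of the same rank.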
28f4155df279dc0fc42609ac5dc903999cbc8411d4a56704bab0e2bfdf8c0cf6
2026-01-01T00:00:00-05:00
Cross-embodied Co-design for Dexterous Hands
arXiv:2512.03743v2 Announce Type: replace Abstract: Dexterous manipulation is limited by both control and design, without consensus as to what makes manipulators best for performing dexterous tasks. This raises a fundamental challenge: how should we design and control robot manipulators that are optimized for dexterity? We present a co-design framework that learns task-specific hand morphology and complementary dexterous control policies. The framework supports 1) an expansive morphology search space including joint, finger, and palm generation, 2) scalable evaluation across the wide design space via morphology-conditioned cross-embodied control, and 3) real-world fabrication with accessible components. We evaluate the approach across multiple dexterous tasks, including in-hand rotation with simulation and real deployment. Our framework enables an end-to-end pipeline that can design, train, fabricate, and deploy a new robotic hand in under 24 hours. The full framework will be open-sourced and available on our website: https://an-axolotl.github.io/co-design-for-dexterity.github.io/
https://arxiv.org/abs/2512.03743
Academic Papers
svg
0b2be84c7a1d27b1f1f07acd141cc52d96caa8f5fc944391e481e15e672d0bb7
2026-01-01T00:00:00-05:00
GRASP: GRouped Activation Shared Parameterization for Parameter-Efficient Fine-Tuning and Robust Inference of Transformers
arXiv:2512.04296v2 Announce Type: replace Abstract: Parameter-efficient fine-tuning (PEFT) provides a scalable alternative to full-model adaptation by updating only a small subset of parameters in large pre-trained models. We introduce GRASP - GRouped Activation Shared Parameterization - a lightweight PEFT framework that partitions the D-dimensional token representations of selected layers into K << D groups and learns a shared scaling and shifting vector for each group. This grouped modulation reduces the number of trainable parameters significantly while preserving the ability of the model to learn task-specific features. Building on this formulation, we further propose StochGRASP, which learns Gaussian distributions as perturbations to the pre-trained weights rather than deterministic values. This probabilistic parameterization along with a noise-aware loss function formulation enables modelling hardware-level variability in programmed weights and significantly improves robustness under non-ideal inference conditions-an important requirement for deployment on edge-based emerging AI hardware. Across GLUE (RoBERTa-base & RoBERTa-large) and E2E NLG (GPT-2 Medium), GRASP matches or exceeds the performance of established PEFT methods while achieving an order of magnitude reduction in trainable parameters compared to LoRA and BitFit. Under varying levels of noise, StochGRASP consistently outperforms deterministic variants, demonstrating its suitability for energy-efficient and noise-prone hardware platforms.
https://arxiv.org/abs/2512.04296
Academic Papers
svg
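The grouped modulation at the heart of the GRASP abstract above is easy to sketch: partition the D feature dimensions into K groups and learn one shared scale and shift per group, so the trainable parameters per layer drop from 2D to 2K. The contiguous partitioning and function name below are assumptions for illustration, not the paper's code.

```python
import numpy as np

def grasp_modulate(h, gamma, beta):
    """Sketch of GRASP-style grouped scaling and shifting: h is (T, D) token
    representations; gamma and beta are (K,) shared per-group parameters,
    broadcast over each group's D/K dimensions."""
    T, D = h.shape
    K = gamma.shape[0]
    assert D % K == 0, "D must split evenly into K groups for this sketch"
    g = D // K                      # group width
    scale = np.repeat(gamma, g)     # expand per-group scale to (D,)
    shift = np.repeat(beta, g)      # expand per-group shift to (D,)
    return h * scale + shift

T, D, K = 4, 12, 3
h = np.ones((T, D))
gamma = np.array([1.0, 2.0, 0.5])
beta = np.array([0.0, -1.0, 3.0])
out = grasp_modulate(h, gamma, beta)
```

With K << D this is an order of magnitude fewer trainable parameters than per-dimension methods, which is the trade-off the abstract describes.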
2569d6a4baef3092d64e08fe490a7f72f633faacc9635fdf9fb1cf9b843141bc
2026-01-01T00:00:00-05:00
Minimization-based embedded boundary methods as polynomial corrections: a stability study of discontinuous Galerkin for hyperbolic equations
arXiv:2512.05278v2 Announce Type: replace Abstract: This work establishes a novel, unified theoretical framework for a class of high order embedded boundary methods, revealing that the Reconstruction for Off-site Data (ROD) treatment shares a fundamental structure with the recently developed shifted boundary polynomial correction [Ciallella, M., et al. (2023)]. By proving that the ROD minimization problem admits an equivalent direct polynomial correction formulation, we unlock two major advances. First, we derive a significant algorithmic simplification, replacing the solution of the minimization problem with a straightforward polynomial evaluation, thereby enhancing computational efficiency. Second, and most critically, this reformulation enables the first stability result for the ROD method when applied to the linear advection equation with discontinuous Galerkin discretization. Our analysis, supported by a comprehensive eigenspectrum study for polynomial degrees up to six, characterizes the stability region of the new ROD formulation. The theoretical findings, which demonstrate the stability constraints, are validated through targeted numerical experiments.
https://arxiv.org/abs/2512.05278
Academic Papers
svg
85dfd35b7454ce694703b57c129f51fdf12d934d5c9144a8383c9fe7baf9afb1
2026-01-01T00:00:00-05:00
Randomized Algorithms for Low-Rank Matrix and Tensor Decompositions
arXiv:2512.05286v2 Announce Type: replace Abstract: This paper surveys randomized algorithms in numerical linear algebra for low-rank decompositions of matrices and tensors. The survey begins with a review of classical matrix algorithms that can be accelerated by randomized dimensionality reduction, such as the singular value decomposition (SVD) or interpolative (ID) and CUR decompositions. Recent advances in randomized dimensionality reduction are discussed, including new methods of fast matrix sketching and sampling techniques, which are incorporated into classical matrix algorithms for fast low-rank matrix approximations. The extension of randomized matrix algorithms to tensors is then explored for several low-rank tensor decompositions in the CP and Tucker formats, including the higher-order SVD, ID, and CUR decomposition.
https://arxiv.org/abs/2512.05286
Academic Papers
svg
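The survey above centers on the classical randomized range-finder pattern, which is worth seeing concretely: sketch the range of A with a Gaussian test matrix, orthonormalize, then take an exact SVD of the small projected matrix. This is the textbook Halko-Martinsson-Tropp construction, written as a minimal NumPy sketch rather than a tuned implementation.

```python
import numpy as np

def randomized_svd(A, k, p=5, seed=0):
    """Basic randomized SVD: k is the target rank, p a small oversampling
    parameter. Returns rank-k factors U, s, Vt with A ~= U @ diag(s) @ Vt."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    G = rng.standard_normal((n, k + p))   # Gaussian sketching matrix
    Q, _ = np.linalg.qr(A @ G)            # orthonormal basis for the sampled range
    B = Q.T @ A                           # small (k+p) x n projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub                            # lift left factors back to R^m
    return U[:, :k], s[:k], Vt[:k]

# an exactly rank-3 matrix: the sketch captures its range almost exactly
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 40))
U, s, Vt = randomized_svd(A, k=3)
err = np.linalg.norm(A - U @ np.diag(s) @ Vt)
```

The tensor algorithms the survey covers (higher-order SVD, tensor ID/CUR) reuse this same range-finder on matricizations of the tensor, which is why the matrix case is the natural starting point.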
8929c80adeb8b165744f9ec041e9768b00db5a672281be7cc1f9e3ddb4348afa
2026-01-01T00:00:00-05:00
The Complexity of One or Many Faces in the Overlay of Many Arrangements
arXiv:2512.11445v2 Announce Type: replace Abstract: We present an extension of the Combination Lemma of [GSS89] that expresses the complexity of one or several faces in the overlay of many arrangements, as a function of the number of arrangements, the number of faces, and the complexities of these faces in the separate arrangements. Several applications of the new Combination Lemma are presented: We first show that the complexity of a single face in an arrangement of $k$ simple polygons with a total of $n$ sides is $\Theta(n \alpha(k) )$, where $\alpha(\cdot)$ is the inverse of Ackermann's function. We also give a new and simpler proof of the bound $O \left( \sqrt{m} \lambda_{s+2}( n ) \right)$ on the total number of edges of $m$ faces in an arrangement of $n$ Jordan arcs, each pair of which intersect in at most $s$ points, where $\lambda_{s}(n)$ is the maximum length of a Davenport-Schinzel sequence of order $s$ with $n$ symbols. We extend this result, showing that the total number of edges of $m$ faces in a sparse arrangement of $n$ Jordan arcs is $O \left( (n + \sqrt{m}\sqrt{w}) \frac{\lambda_{s+2}(n)}{n} \right)$, where $w$ is the total complexity of the arrangement. Several other applications and variants of the Combination Lemma are also presented.
https://arxiv.org/abs/2512.11445
Academic Papers
svg
d8594cf753abc6cbd96b42686feb04b09c34bd46ce6eade298453c3ce512a482
2026-01-01T00:00:00-05:00
Spiking Manifesto
arXiv:2512.11843v2 Announce Type: replace Abstract: Practically everything computers do is better, faster, and more power-efficient than the brain. For example, a calculator performs numerical computations more energy-efficiently than any human. Yet modern AI models are a thousand times less efficient than the brain. These models rely on larger and larger artificial neural networks (ANNs) to boost their encoding capacity, requiring GPUs to perform large-scale matrix multiplications. In contrast, the brain's spiking neural networks (SNNs) exhibit factorially explosive encoding capacity and compute through the polychronization of spikes rather than explicit matrix-vector products, resulting in lower energy requirements. This manifesto proposes a paradigm for framing popular AI models in terms of spiking networks and polychronization, and for interpreting spiking activity as nature's way of implementing look-up tables. This suggests a path toward converting AI models into a novel class of architectures with much smaller size yet combinatorially large encoding capacity, offering the promise of a thousandfold improvement in performance. Code is available at https://github.com/izhikevich/SNN
https://arxiv.org/abs/2512.11843
Academic Papers
svg
eba143d8529875a1f37b33010f8c7d3b793272ac1cb7a4d8a74fc71bb58cd9eb
2026-01-01T00:00:00-05:00
A Geometric Theory of Cognition
arXiv:2512.12225v2 Announce Type: replace Abstract: Human cognition spans perception, memory, intuitive judgment, deliberative reasoning, action selection, and social inference, yet these capacities are often explained through distinct computational theories. Here we present a unified mathematical framework in which diverse cognitive processes emerge from a single geometric principle. We represent the cognitive state as a point on a differentiable manifold endowed with a learned Riemannian metric that encodes representational constraints, computational costs, and structural relations among cognitive variables. A scalar cognitive potential combines predictive accuracy, structural parsimony, task utility, and normative or logical requirements. Cognition unfolds as the Riemannian gradient flow of this potential, providing a universal dynamical law from which a broad range of psychological phenomena arise. Classical dual-process effects--rapid intuitive responses and slower deliberative reasoning--emerge naturally from metric-induced anisotropies that generate intrinsic time-scale separations and geometric phase transitions, without invoking modular or hybrid architectures. We derive analytical conditions for these regimes and demonstrate their behavioural signatures through simulations of canonical cognitive tasks. Together, these results establish a geometric foundation for cognition and suggest guiding principles for the development of more general and human-like artificial intelligence systems.
https://arxiv.org/abs/2512.12225
Academic Papers
svg
e7e85e89f7d7b2aa0daa26240c7e23c32d02b6eb9db31e09974cc1d728a3fcf7
2026-01-01T00:00:00-05:00
Reveal Hidden Pitfalls and Navigate Next Generation of Vector Similarity Search from Task-Centric Views
arXiv:2512.12980v2 Announce Type: replace Abstract: Vector Similarity Search (VSS) in high-dimensional spaces is rapidly emerging as core functionality in next-generation database systems for numerous data-intensive services -- from embedding lookups in large language models (LLMs), to semantic information retrieval and recommendation engines. Current benchmarks, however, evaluate VSS primarily on the recall-latency trade-off against a ground truth defined solely by distance metrics, neglecting how retrieval quality ultimately impacts downstream tasks. This disconnect can mislead both academic research and industrial practice. We present Iceberg, a holistic benchmark suite for end-to-end evaluation of VSS methods in realistic application contexts. From a task-centric view, Iceberg uncovers the Information Loss Funnel, which identifies three principal sources of end-to-end performance degradation: (1) Embedding Loss during feature extraction; (2) Metric Misuse, where distances poorly reflect task relevance; (3) Data Distribution Sensitivity, highlighting index robustness across skews and modalities. For a more comprehensive assessment, Iceberg spans eight diverse datasets across key domains such as image classification, face recognition, text retrieval, and recommendation systems. Each dataset, ranging from 1M to 100M vectors, includes rich, task-specific labels and evaluation metrics, enabling assessment of retrieval algorithms within the full application pipeline rather than in isolation. Iceberg benchmarks 13 state-of-the-art VSS methods and re-ranks them based on application-level metrics, revealing substantial deviations from traditional rankings derived purely from recall-latency evaluations. Building on these insights, we define a set of task-centric meta-features and derive an interpretable decision tree to guide practitioners in selecting and tuning VSS methods for their specific workloads.
https://arxiv.org/abs/2512.12980
Academic Papers
svg
f714034099456923e4608db47abf6d9ed3e8bd9841f52d072fb17ff327bd2904
2026-01-01T00:00:00-05:00
DiRe: Diversity-promoting Regularization for Dataset Condensation
arXiv:2512.13083v2 Announce Type: replace Abstract: In Dataset Condensation, the goal is to synthesize a small dataset that replicates the training utility of a large original dataset. Existing condensation methods synthesize datasets with significant redundancy, so there is a dire need to reduce redundancy and improve the diversity of the synthesized datasets. To tackle this, we propose an intuitive Diversity Regularizer (DiRe) composed of cosine similarity and Euclidean distance, which can be applied off-the-shelf to various state-of-the-art condensation methods. Through extensive experiments, we demonstrate that the addition of our regularizer improves state-of-the-art condensation methods on various benchmark datasets from CIFAR-10 to ImageNet-1K with respect to generalization and diversity metrics.
https://arxiv.org/abs/2512.13083
Academic Papers
svg
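The DiRe abstract above says the regularizer is composed of cosine similarity and Euclidean distance between synthetic samples; a plausible minimal form penalizes high pairwise cosine similarity while rewarding large pairwise distances. The exact weighting and reduction in the paper may differ, so treat this as a hedged sketch of the idea, not the paper's loss.

```python
import numpy as np

def dire_regularizer(X, eps=1e-8):
    """Diversity penalty over synthetic samples X of shape (n, d):
    mean pairwise cosine similarity (high -> redundant) minus mean pairwise
    Euclidean distance (high -> diverse). Lower values mean more diversity."""
    n = X.shape[0]
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + eps)
    cos = Xn @ Xn.T                              # pairwise cosine similarities
    iu = np.triu_indices(n, k=1)                 # each unordered pair once
    dists = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    return cos[iu].mean() - dists[iu].mean()

X_redundant = np.ones((4, 8))   # identical samples: maximally redundant
X_diverse = np.eye(4, 8)        # orthogonal samples: maximally spread
```

Added to a condensation objective with a small weight, such a term pushes synthesized samples apart in both angle and magnitude, which is the redundancy reduction the abstract targets.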
63c5182066c3123913bedb614b2ad2e81e2cc84ea77ad1e2cc9737bde9c7bd3d
2026-01-01T00:00:00-05:00
OPTIMA: Optimal One-shot Pruning for LLMs via Quadratic Programming Reconstruction
arXiv:2512.13886v2 Announce Type: replace Abstract: Post-training model pruning is a promising solution, yet it faces a trade-off: simple heuristics that zero weights are fast but degrade accuracy, while principled joint optimization methods recover accuracy but are computationally infeasible at modern scale. One-shot methods such as SparseGPT offer a practical compromise by applying efficient, approximate heuristic weight updates. To close this gap, we introduce OPTIMA, a practical one-shot post-training pruning method that balances accuracy and scalability. OPTIMA casts layer-wise weight reconstruction after mask selection as independent, row-wise Quadratic Programs (QPs) that share a common layer Hessian. Solving these QPs yields the per-row globally optimal update with respect to the reconstruction objective given the estimated Hessian. The shared-Hessian structure makes the problem highly amenable to batching on accelerators. We implement an accelerator-friendly QP solver that accumulates one Hessian per layer and solves many small QPs in parallel, enabling one-shot post-training pruning at scale on a single accelerator without fine-tuning. OPTIMA integrates with existing mask selectors and consistently improves zero-shot performance across multiple LLM families and sparsity regimes, yielding up to 3.97% absolute accuracy improvement. On an NVIDIA H100, OPTIMA prunes an 8B-parameter transformer end-to-end in 40 hours with 60GB peak memory. Together, these results set a new state of the art in accuracy-efficiency trade-offs for one-shot post-training pruning.
https://arxiv.org/abs/2512.13886
Academic Papers
svg
989ddc9056840c42eec6f02f05ed0edbfbff1f64a02016052a2e3e18047f8e9d
2026-01-01T00:00:00-05:00
Probabilistic Inclusion Depth for Fuzzy Contour Ensemble Visualization
arXiv:2512.15187v2 Announce Type: replace Abstract: We propose Probabilistic Inclusion Depth (PID) for the ensemble visualization of scalar fields. By introducing a probabilistic inclusion operator $\subset_{\!p}$, our method is a general data depth model supporting ensembles of fuzzy contours, such as soft masks from modern segmentation methods, and conventional ensembles of binary contours. We also advocate extending contour extraction in scalar field ensembles into a fuzzy decision by considering the probabilistic distribution of an isovalue to encode the sensitivity information. To reduce the complexity of the data depth computation, an efficient approximation using the mean probabilistic contour is devised. Furthermore, an order of magnitude reduction in computational time is achieved with an efficient parallel algorithm on the GPU. Our new method enables the computation of contour boxplots for ensembles of probabilistic masks, ensembles defined on various types of grids, and large 3D ensembles that are not studied by existing methods. The effectiveness of our method is evaluated with numerical comparisons to existing techniques on synthetic datasets, through examples of real-world ensemble datasets, and expert feedback.
https://arxiv.org/abs/2512.15187
Academic Papers
svg
bcd16a87ed63f871e2dfb096828463f00b099eaf1645ce57e4598020c7da3acd
2026-01-01T00:00:00-05:00
Risk-Aware GPU-Assisted Cardinality Estimation for Cost-Based Query Optimizers
arXiv:2512.19750v2 Announce Type: replace Abstract: Cardinality estimation is a cornerstone of cost-based optimizers (CBOs), yet real-world workloads often violate the assumptions behind static statistics, degrading decision stability and increasing plan flip rates. We empirically characterize failures caused by stale statistics, skew, join correlations, hidden distributions in bind variables, and sampling bias, and quantify the overhead and break-even points of hardware-accelerated measurement. We propose GACE (GPU-Assisted Cardinality Estimation), a hybrid auxiliary architecture that augments rather than replaces the optimizer. GACE selectively invokes GPU-based measurement only in risky intervals via a Risky Gate that detects estimation uncertainty, and a GPU Measurement Engine that performs high-speed probing with explicit cost accounting for the measurement itself. This design preserves low overhead in stable regions while improving plan stability and reducing tail latency (P99) in problematic scenarios.
https://arxiv.org/abs/2512.19750
Academic Papers
svg
ccb1420f3c6b947b9662a4d2cc95b4d0871b4e0f7c2fb5bc15fe38aae2a0f452
2026-01-01T00:00:00-05:00
Few-Shot-Based Modular Image-to-Video Adapter for Diffusion Models
arXiv:2512.20000v2 Announce Type: replace Abstract: Diffusion models (DMs) have recently achieved impressive photorealism in image and video generation. However, their application to image animation remains limited, even when trained on large-scale datasets. Two primary challenges contribute to this: the high dimensionality of video signals leads to a scarcity of training data, causing DMs to favor memorization over prompt compliance when generating motion; moreover, DMs struggle to generalize to novel motion patterns not present in the training set, and fine-tuning them to learn such patterns, especially using limited training data, is still under-explored. To address these limitations, we propose the Modular Image-to-Video Adapter (MIVA), a lightweight sub-network attachable to a pre-trained DM; each adapter is designed to capture a single motion pattern, and the design scales via parallelization. MIVAs can be efficiently trained on approximately ten samples using a single consumer-grade GPU. At inference time, users can specify motion by selecting one or multiple MIVAs, eliminating the need for prompt engineering. Extensive experiments demonstrate that MIVA enables more precise motion control while maintaining, or even surpassing, the generation quality of models trained on significantly larger datasets.
https://arxiv.org/abs/2512.20000
Academic Papers
svg
52f48f2c05db21aab55b7aa6bd2119072286f7db324abb9535e45ece6f64dfab
2026-01-01T00:00:00-05:00
Fun-Audio-Chat Technical Report
arXiv:2512.20156v3 Announce Type: replace Abstract: Recent advancements in joint speech-text models show great potential for seamless voice interactions. However, existing models face critical challenges: temporal resolution mismatch between speech tokens (25Hz) and text tokens (~3Hz) dilutes semantic information, incurs high computational costs, and causes catastrophic forgetting of text LLM knowledge. We introduce Fun-Audio-Chat, a Large Audio Language Model addressing these limitations via two innovations from our previous work DrVoice. First, Dual-Resolution Speech Representations (DRSR): the Shared LLM processes audio at efficient 5Hz (via token grouping), while the Speech Refined Head generates high-quality tokens at 25Hz, balancing efficiency (~50% GPU reduction) and quality. Second, Core-Cocktail Training, a two-stage fine-tuning with intermediate merging that mitigates catastrophic forgetting. We then apply Multi-Task DPO Training to enhance robustness, audio understanding, instruction-following and voice empathy. This multi-stage post-training enables Fun-Audio-Chat to retain text LLM knowledge while gaining powerful audio understanding, reasoning, and generation. Unlike recent LALMs requiring large-scale audio-text pre-training, Fun-Audio-Chat leverages pre-trained models and extensive post-training. Fun-Audio-Chat 8B and MoE 30B-A3B achieve competitive performance on Speech-to-Text and Speech-to-Speech tasks, ranking top among similar-scale models on Spoken QA benchmarks. They also achieve competitive or superior performance on Audio Understanding, Speech Function Calling, Instruction-Following and Voice Empathy. We develop Fun-Audio-Chat-Duplex, a full-duplex variant with strong performance on Spoken QA and full-duplex interactions. We open-source Fun-Audio-Chat-8B with training and inference code, and provide an interactive demo, at https://github.com/FunAudioLLM/Fun-Audio-Chat .
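The 25Hz-to-5Hz token grouping can be illustrated with a trivial sketch (the grouping factor of 5 follows from the stated rates; the stacking-and-padding details are our assumption, not the paper's implementation):

```python
import numpy as np

def group_tokens(tokens, group=5):
    """Illustrative DRSR-style grouping: reduce a 25 Hz stream of frame
    embeddings (shape [T, d]) to 5 Hz by stacking each run of `group`
    consecutive frames into one wider vector of size group*d."""
    T, d = tokens.shape
    pad = (-T) % group                      # right-pad to a multiple of group
    if pad:
        tokens = np.concatenate([tokens, np.zeros((pad, d))], axis=0)
    return tokens.reshape(-1, group * d)    # [ceil(T/group), group*d]
```

The LLM then attends over 5x fewer positions, which is where the reported ~50% GPU reduction would come from, while a separate head can still emit full-rate tokens.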
https://arxiv.org/abs/2512.20156
Academic Papers
svg
803727dce2d0a958fc1bacb679a72fd63270a9b3acb5edc4b5d6f030565de59f
2026-01-01T00:00:00-05:00
AUDRON: A Deep Learning Framework with Fused Acoustic Signatures for Drone Type Recognition
arXiv:2512.20407v2 Announce Type: replace Abstract: Unmanned aerial vehicles (UAVs), commonly known as drones, are increasingly used across diverse domains, including logistics, agriculture, surveillance, and defense. While these systems provide numerous benefits, their misuse raises safety and security concerns, making effective detection mechanisms essential. Acoustic sensing offers a low-cost and non-intrusive alternative to vision or radar-based detection, as drone propellers generate distinctive sound patterns. This study introduces AUDRON (AUdio-based Drone Recognition Network), a hybrid deep learning framework for drone sound detection, employing a combination of Mel-Frequency Cepstral Coefficients (MFCC), Short-Time Fourier Transform (STFT) spectrograms processed with convolutional neural networks (CNNs), recurrent layers for temporal modeling, and autoencoder-based representations. Feature-level fusion integrates complementary information before classification. Experimental evaluation demonstrates that AUDRON effectively differentiates drone acoustic signatures from background noise, achieving high accuracy while maintaining generalizability across varying conditions. AUDRON achieves 98.51 percent and 97.11 percent accuracy in binary and multiclass classification. The results highlight the advantage of combining multiple feature representations with deep learning for reliable acoustic drone detection, suggesting the framework's potential for deployment in security and surveillance applications where visual or radar sensing may be limited.
https://arxiv.org/abs/2512.20407
Academic Papers
svg
06e43f60e2bc5d2df4720392354902188abd875f91e809bda7b4d8c8b184f4d5
2026-01-01T00:00:00-05:00
Leveraging High-Fidelity Digital Models and Reinforcement Learning for Mission Engineering: A Case Study of Aerial Firefighting Under Perfect Information
arXiv:2512.20589v2 Announce Type: replace Abstract: As systems engineering (SE) objectives evolve from the design and operation of monolithic systems to complex System of Systems (SoS), the discipline of Mission Engineering (ME) has emerged and is increasingly accepted as a new line of thinking for the SE community. Moreover, mission environments are uncertain and dynamic, and mission outcomes are a direct function of how mission assets interact with that environment. This renders static architectures brittle and calls for analytically rigorous approaches to ME. To that end, this paper proposes an intelligent mission coordination methodology that integrates digital mission models with Reinforcement Learning (RL) and specifically addresses the need for adaptive task allocation and reconfiguration. More specifically, we leverage a Digital Engineering (DE) based infrastructure composed of a high-fidelity digital mission model and agent-based simulation; we then formulate the mission tactics management problem as a Markov Decision Process (MDP) and employ an RL agent trained via Proximal Policy Optimization. By leveraging the simulation as a sandbox, we map the system states to actions, refining the policy based on realized mission outcomes. The utility of the RL-based intelligent mission coordinator is demonstrated through an aerial firefighting case study. Our findings indicate that the RL-based intelligent mission coordinator not only surpasses baseline performance but also significantly reduces the variability in mission performance. Thus, this study serves as a proof of concept demonstrating that DE-enabled mission simulations combined with advanced analytical tools offer a mission-agnostic framework for improving ME practice, one that can be extended to more complicated fleet design and selection problems in the future from a mission-first perspective.
https://arxiv.org/abs/2512.20589
Academic Papers
svg
2161b6a8171bfb84ddca9c08ee582990ac157eb3399b8203fe3280ef19cf8a0f
2026-01-01T00:00:00-05:00
Mixed Precision General Alternating-Direction Implicit Method for Solving Large Sparse Linear Systems
arXiv:2512.21164v3 Announce Type: replace Abstract: In this article, we introduce a three-precision formulation of the General Alternating-Direction Implicit method (GADI) designed to accelerate the solution of large-scale sparse linear systems $Ax=b$. GADI is a framework that can represent many existing Alternating-Direction Implicit (ADI) methods. These methods are a class of linear solvers based on a splitting of $A$ such that the solution of the original linear system can be decomposed into the successive computation of easy-to-solve structured subsystems. Our proposed mixed precision scheme for GADI solves these subsystems in low precision to reduce the overall execution time while computing the residual and solution update in high precision to enable the solution to converge to high accuracy. We develop a rounding error analysis of mixed precision GADI that establishes the rates of convergence of the forward and backward errors to certain limiting accuracies. Our analysis also highlights the conditions on the splitting matrices under which mixed precision GADI is guaranteed to converge for a given set of precisions. We then discuss a systematic and robust strategy for selecting the GADI regularization parameter $\alpha$, whose adjustment is critical for performance. Specifically, our proposed strategy makes use of a Gaussian Process Regression (GPR) model trained on a dataset of low-dimensional problems to initialize $\alpha$. Finally, we proceed to a performance analysis of mixed precision GADI on an NVIDIA A100 GPU to validate our approach. Using low precision (Bfloat16 or FP32) to solve the subsystems, we obtain speedups of $2.6\times$, $1.7\times$, and $3.1\times$ over a full double precision GADI implementation on large-scale 2D, 3D convection-diffusion and complex reaction-diffusion problems (up to $1.3\times 10^{8}$ unknowns), respectively.
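The precision split the abstract describes (low-precision subsystem solves, high-precision residual and update) follows the classic iterative-refinement pattern. The sketch below shows that generic pattern only, not the GADI splitting itself; the float32 inner solver stands in for the Bfloat16/FP32 subsystem solves:

```python
import numpy as np

def mixed_precision_solve(A, b, inner_solve, iters=20):
    """Generic mixed-precision refinement loop: the inner solver runs in
    low precision, while the residual and solution update are kept in
    float64 so the iterate can converge to high accuracy."""
    x = np.zeros_like(b, dtype=np.float64)
    for _ in range(iters):
        r = b - A @ x                          # high-precision residual
        d = inner_solve(A.astype(np.float32),
                        r.astype(np.float32))  # low-precision correction
        x += d.astype(np.float64)              # high-precision update
    return x

# stand-in inner solver: a single float32 direct solve
inner = lambda A32, r32: np.linalg.solve(A32, r32)
```

For a well-conditioned system the low-precision correction contracts the error by roughly the inner solver's accuracy each pass, so a handful of cheap iterations reaches double-precision residuals, which is the source of the reported speedups.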
https://arxiv.org/abs/2512.21164
Academic Papers
svg
053ad3a182a2f04702824ac92e49d1c32cda61534610ad3d9ecd041c1f95a2f8
2026-01-01T00:00:00-05:00
Heaven-Sent or Hell-Bent? Benchmarking the Intelligence and Defectiveness of LLM Hallucinations
arXiv:2512.21635v2 Announce Type: replace Abstract: Hallucinations in large language models (LLMs) are commonly regarded as errors to be minimized. However, recent perspectives suggest that some hallucinations may encode creative or epistemically valuable content, a dimension that remains underquantified in current literature. Existing hallucination detection methods primarily focus on factual consistency, struggling to handle heterogeneous scientific tasks and balance creativity with accuracy. To address these challenges, we propose HIC-Bench, a novel evaluation framework that categorizes hallucinations into Intelligent Hallucinations (IH) and Defective Hallucinations (DH), enabling systematic investigation of their interplay in LLM creativity. HIC-Bench features three core characteristics: (1) structured IH/DH assessment, using a multi-dimensional metric matrix integrating Torrance Tests of Creative Thinking (TTCT) metrics (Originality, Feasibility, Value) with hallucination-specific dimensions (scientific plausibility, factual deviation); (2) cross-domain applicability, spanning ten scientific domains with open-ended innovation tasks; and (3) dynamic prompt optimization, leveraging the Dynamic Hallucination Prompt (DHP) to guide models toward creative and reliable outputs. The evaluation process employs multiple LLM judges, averaging scores to mitigate bias, with human annotators verifying IH/DH classifications. Experimental results reveal a nonlinear relationship between IH and DH, demonstrating that creativity and correctness can be jointly optimized. These insights position IH as a catalyst for creativity and reveal the ability of LLM hallucinations to drive scientific innovation. Additionally, HIC-Bench offers a valuable platform for advancing research into the creative intelligence of LLM hallucinations.
https://arxiv.org/abs/2512.21635
Academic Papers
svg
2a012887ba250747d963955b4bd0aca9848338415f4d7750a40f8682a9b51e34
2026-01-01T00:00:00-05:00
SlideChain: Semantic Provenance for Lecture Understanding via Blockchain Registration
arXiv:2512.21684v2 Announce Type: replace Abstract: Modern vision--language models (VLMs) are increasingly used to interpret and generate educational content, yet their semantic outputs remain challenging to verify, reproduce, and audit over time. Inconsistencies across model families, inference settings, and computing environments undermine the reliability of AI-generated instructional material, particularly in high-stakes and quantitative STEM domains. This work introduces SlideChain, a blockchain-backed provenance framework designed to provide verifiable integrity for multimodal semantic extraction at scale. Using the SlideChain Slides Dataset-a curated corpus of 1,117 medical imaging lecture slides from a university course-we extract concepts and relational triples from four state-of-the-art VLMs and construct structured provenance records for every slide. SlideChain anchors cryptographic hashes of these records on a local EVM (Ethereum Virtual Machine)-compatible blockchain, providing tamper-evident auditability and persistent semantic baselines. Through the first systematic analysis of semantic disagreement, cross-model similarity, and lecture-level variability in multimodal educational content, we reveal pronounced cross-model discrepancies, including low concept overlap and near-zero agreement in relational triples on many slides. We further evaluate gas usage, throughput, and scalability under simulated deployment conditions, and demonstrate perfect tamper detection along with deterministic reproducibility across independent extraction runs. Together, these results show that SlideChain provides a practical and scalable step toward trustworthy, verifiable multimodal educational pipelines, supporting long-term auditability, reproducibility, and integrity for AI-assisted instructional systems.
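The anchoring step the abstract relies on can be sketched concisely (field names below are illustrative, not SlideChain's actual schema): hash a canonically serialized per-slide provenance record, and store only the digest on-chain; tamper detection is then a digest comparison.

```python
import hashlib
import json

def provenance_hash(record):
    """Canonical-JSON SHA-256 digest of a provenance record. Sorting keys
    and fixing separators makes the digest independent of dict ordering,
    so independent re-serializations agree."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def is_tampered(record, anchored_digest):
    # Tamper-evidence: any change to the record changes the digest,
    # so a mismatch against the on-chain anchor flags modification.
    return provenance_hash(record) != anchored_digest
```

This is why the system can report perfect tamper detection and deterministic reproducibility: the chain stores an immutable digest, and any divergence in a re-extracted record is detectable without storing the record itself on-chain.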
https://arxiv.org/abs/2512.21684
Academic Papers
svg
82efdd54f989667c0b18c1bd755681b0fabb9ba694130addc977635347663b28
2026-01-01T00:00:00-05:00
SyncAnyone: Implicit Disentanglement via Progressive Self-Correction for Lip-Syncing in the Wild
arXiv:2512.21736v2 Announce Type: replace Abstract: High-quality AI-powered video dubbing demands precise audio-lip synchronization, high-fidelity visual generation, and faithful preservation of identity and background. Most existing methods rely on a mask-based training strategy, where the mouth region is masked in talking-head videos, and the model learns to synthesize lip movements from corrupted inputs and target audios. While this facilitates lip-sync accuracy, it disrupts spatiotemporal context, impairing performance on dynamic facial motions and causing instability in facial structure and background consistency. To overcome this limitation, we propose SyncAnyone, a novel two-stage learning framework that achieves accurate motion modeling and high visual fidelity simultaneously. In Stage 1, we train a diffusion-based video transformer for masked mouth inpainting, leveraging its strong spatiotemporal modeling to generate accurate, audio-driven lip movements. However, due to input corruption, minor artifacts may arise in the surrounding facial regions and the background. In Stage 2, we develop a mask-free tuning pipeline to address mask-induced artifacts. Specifically, building on the Stage 1 model, we develop a data generation pipeline that creates pseudo-paired training samples by synthesizing lip-synced videos from the source video and randomly sampled audio. We further tune the Stage 2 model on this synthetic data, achieving precise lip editing and better background consistency. Extensive experiments show that our method achieves state-of-the-art results in visual quality, temporal coherence, and identity preservation under in-the-wild lip-syncing scenarios.
https://arxiv.org/abs/2512.21736
Academic Papers
svg
15d4523caa07faeb75b6c30053a03d9422b2d8b186eef518fc78c7ebf18faaf5
2026-01-01T00:00:00-05:00
Inference-based GAN Video Generation
arXiv:2512.21776v2 Announce Type: replace Abstract: Video generation has seen remarkable progress thanks to advancements in generative deep learning. However, generating long sequences remains a significant challenge. Generated videos should display not only coherent and continuous movement but also meaningful movement across successions of scenes. Models such as GANs, VAEs, and Diffusion Networks have been used for generating short video sequences, typically up to 16 frames. In this paper, we first propose a new type of video generator by equipping adversarial-based unconditional video generators with a variational encoder, akin to a VAE-GAN hybrid structure. The proposed model, as in other video deep learning-based processing frameworks, incorporates two processing branches, one for content and another for movement. However, existing models struggle with the temporal scaling of the generated videos. Classical approaches often result in degraded video quality when attempting to increase the generated video length, especially for significantly long sequences. To overcome this limitation, our research study extends the initially proposed VAE-GAN video generation model by employing a novel, memory-efficient approach to generate long videos composed of hundreds or thousands of frames while ensuring their temporal continuity, consistency, and dynamics. Our approach leverages a Markov chain framework with a recall mechanism, where each state represents a short-length VAE-GAN video generator. This setup enables the sequential connection of generated video sub-sequences, maintaining temporal dependencies and resulting in meaningful long video sequences.
https://arxiv.org/abs/2512.21776
Academic Papers
svg
f789c3bdc7b563213050b61ad2c1730a06ada2ad2ca8d074dc6be6441cb5b699
2026-01-01T00:00:00-05:00
KG20C & KG20C-QA: Scholarly Knowledge Graph Benchmarks for Link Prediction and Question Answering
arXiv:2512.21799v2 Announce Type: replace Abstract: In this paper, we present KG20C and KG20C-QA, two curated datasets for advancing question answering (QA) research on scholarly data. KG20C is a high-quality scholarly knowledge graph constructed from the Microsoft Academic Graph through targeted selection of venues, quality-based filtering, and schema definition. Although KG20C has been available online in non-peer-reviewed sources such as a GitHub repository, this paper provides the first formal, peer-reviewed description of the dataset, including clear documentation of its construction and specifications. KG20C-QA is built upon KG20C to support QA tasks on scholarly data. We define a set of QA templates that convert graph triples into natural language question--answer pairs, producing a benchmark that can be used both with graph-based models such as knowledge graph embeddings and with text-based models such as large language models. We benchmark standard knowledge graph embedding methods on KG20C-QA, analyze performance across relation types, and provide reproducible evaluation protocols. By officially releasing these datasets with thorough documentation, we aim to contribute a reusable, extensible resource for the research community, enabling future work in QA, reasoning, and knowledge-driven applications in the scholarly domain. The full datasets will be released at https://github.com/tranhungnghiep/KG20C/ upon paper publication.
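The template-based conversion from triples to QA pairs can be sketched in a few lines. The relation name and template string below are made up for illustration; they are not KG20C's actual schema:

```python
def triples_to_qa(triples, templates):
    """Turn (head, relation, tail) triples into QA pairs using per-relation
    question templates; the tail entity becomes the gold answer."""
    qa = []
    for head, rel, tail in triples:
        tmpl = templates.get(rel)
        if tmpl is None:
            continue                      # skip relations without a template
        qa.append({"question": tmpl.format(head=head), "answer": tail})
    return qa

# hypothetical template for one relation type
templates = {"published_in": "In which venue was '{head}' published?"}
```

Because the answer is always a graph entity, the same benchmark can score a KG embedding model (rank the tail) and an LLM (match the generated string), which is the dual usability the abstract claims.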
https://arxiv.org/abs/2512.21799
Academic Papers
svg
785cec328687f8182533a4cda3da6fcee21a3a6f0982e3c6c86ee860388423c0
2026-01-01T00:00:00-05:00
Interpretable Perturbation Modeling Through Biomedical Knowledge Graphs
arXiv:2512.22251v2 Announce Type: replace Abstract: Understanding how small molecules perturb gene expression is essential for uncovering drug mechanisms, predicting off-target effects, and identifying repurposing opportunities. While prior deep learning frameworks have integrated multimodal embeddings into biomedical knowledge graphs (BKGs) and further improved these representations through graph neural network message-passing paradigms, these models have been applied to tasks such as link prediction and binary drug-disease association, rather than the task of gene perturbation, which may reveal more about mechanistic transcriptomic effects. To address this gap, we construct a merged biomedical graph that integrates (i) PrimeKG++, an augmentation of PrimeKG containing semantically rich embeddings for nodes, with (ii) LINCS L1000 drug and cell line nodes, initialized with multimodal embeddings from foundation models such as MolFormerXL and BioBERT. Using this heterogeneous graph, we train a graph attention network (GAT) with a downstream prediction head that learns the delta expression profile of 978 landmark genes for a given drug-cell pair. Our results show that our framework outperforms MLP baselines for differentially expressed genes (DEG) -- which predict the delta expression given a concatenated embedding of drug features, target features, and baseline cell expression -- under the scaffold and random splits. Ablation experiments with edge shuffling and node feature randomization further demonstrate that the edges provided by biomedical KGs enhance perturbation-level prediction. More broadly, our framework provides a path toward mechanistic drug modeling: moving beyond binary drug-disease association tasks to the granular transcriptional effects of therapeutic intervention.
https://arxiv.org/abs/2512.22251
Academic Papers
svg
8dafcba1c475c1a8e8b12a58b7daa09d769bc9cc88e4a726bc6f7bf4bf29d508
2026-01-01T00:00:00-05:00
SciEvalKit: An Open-source Evaluation Toolkit for Scientific General Intelligence
arXiv:2512.22334v2 Announce Type: replace Abstract: We introduce SciEvalKit, a unified benchmarking toolkit designed to evaluate AI models for science across a broad range of scientific disciplines and task capabilities. Unlike general-purpose evaluation platforms, SciEvalKit focuses on the core competencies of scientific intelligence, including Scientific Multimodal Perception, Scientific Multimodal Reasoning, Scientific Multimodal Understanding, Scientific Symbolic Reasoning, Scientific Code Generation, Science Hypothesis Generation and Scientific Knowledge Understanding. It supports six major scientific domains, spanning from physics and chemistry to astronomy and materials science. SciEvalKit builds a foundation of expert-grade scientific benchmarks, curated from real-world, domain-specific datasets, ensuring that tasks reflect authentic scientific challenges. The toolkit features a flexible, extensible evaluation pipeline that enables batch evaluation across models and datasets, supports custom model and dataset integration, and provides transparent, reproducible, and comparable results. By bridging capability-based evaluation and disciplinary diversity, SciEvalKit offers a standardized yet customizable infrastructure to benchmark the next generation of scientific foundation models and intelligent agents. The toolkit is open-sourced and actively maintained to foster community-driven development and progress in AI4Science.
https://arxiv.org/abs/2512.22334
Academic Papers
svg
aaf01e714f95efe1fbf914ef854968c685af7a629fd81a37b34e3111bf04fb8c
2026-01-01T00:00:00-05:00
VL-LN Bench: Towards Long-horizon Goal-oriented Navigation with Active Dialogs
arXiv:2512.22342v2 Announce Type: replace Abstract: In most existing embodied navigation tasks, instructions are well-defined and unambiguous, such as instruction following and object searching. Under this idealized setting, agents are required solely to produce effective navigation outputs conditioned on vision and language inputs. However, real-world navigation instructions are often vague and ambiguous, requiring the agent to resolve uncertainty and infer user intent through active dialog. To address this gap, we propose Interactive Instance Object Navigation (IION), a task that requires agents not only to generate navigation actions but also to produce language outputs via active dialog, thereby aligning more closely with practical settings. IION extends Instance Object Navigation (ION) by allowing agents to freely consult an oracle in natural language while navigating. Building on this task, we present the Vision Language-Language Navigation (VL-LN) benchmark, which provides a large-scale, automatically generated dataset and a comprehensive evaluation protocol for training and assessing dialog-enabled navigation models. VL-LN comprises over 41k long-horizon dialog-augmented trajectories for training and an automatic evaluation protocol with an oracle capable of responding to agent queries. Using this benchmark, we train a navigation model equipped with dialog capabilities and show that it achieves significant improvements over the baselines. Extensive experiments and analyses further demonstrate the effectiveness and reliability of VL-LN for advancing research on dialog-enabled embodied navigation. Code and dataset: https://0309hws.github.io/VL-LN.github.io/
https://arxiv.org/abs/2512.22342
Academic Papers
svg
246365541f4ac94b162ea0df5815d8b5f80ca814b19c39e86acbab7a35300958
2026-01-01T00:00:00-05:00
OxygenREC: An Instruction-Following Generative Framework for E-commerce Recommendation
arXiv:2512.22386v2 Announce Type: replace Abstract: Traditional recommendation systems suffer from inconsistency in multi-stage optimization objectives. Generative Recommendation (GR) mitigates this through an end-to-end framework; however, existing methods still rely on matching mechanisms based on inductive patterns. Although responsive, they lack the ability to uncover complex user intents that require deductive reasoning based on world knowledge. Meanwhile, LLMs show strong deep reasoning capabilities, but their latency and computational costs remain challenging for industrial applications. More critically, there are performance bottlenecks in multi-scenario scalability: as shown in Figure 1, existing solutions require independent training and deployment for each scenario, leading to low resource utilization and high maintenance costs-a challenge unaddressed in GR literature. To address these issues, we present OxygenREC, an industrial recommendation system that leverages Fast-Slow Thinking to deliver deep reasoning under the strict latency and multi-scenario requirements of real-world environments. First, we adopt a Fast-Slow Thinking architecture. Slow thinking uses a near-line LLM pipeline to synthesize Contextual Reasoning Instructions, while fast thinking employs a high-efficiency encoder-decoder backbone for real-time generation. Second, to ensure reasoning instructions effectively enhance recommendation generation, we introduce a semantic alignment mechanism with Instruction-Guided Retrieval (IGR) to filter intent-relevant historical behaviors and use a Query-to-Item (Q2I) loss for instruction-item consistency. Finally, to resolve multi-scenario scalability, we transform scenario information into controllable instructions, using unified reward mapping and Soft Adaptive Group Clip Policy Optimization (SA-GCPO) to align policies with diverse business objectives, realizing a train-once-deploy-everywhere paradigm.
https://arxiv.org/abs/2512.22386
Academic Papers
svg
027f4bbeca05c747f75098fc1920ad13599b8e24029e3027fda8f940a0919c01
2026-01-01T00:00:00-05:00
Building Software by Rolling the Dice: A Qualitative Study of Vibe Coding
arXiv:2512.22418v2 Announce Type: replace Abstract: Large language models (LLMs) are reshaping software engineering by enabling "vibe coding," in which developers build software primarily through prompts rather than writing code. Although widely publicized as a productivity breakthrough, little is known about how practitioners actually define and engage in these practices. To shed light on this emerging phenomenon, we conducted a grounded theory study of 20 vibe-coding videos, including 7 live-streamed coding sessions (about 16 hours, 254 prompts) and 13 opinion videos (about 5 hours), supported by additional analysis of activity durations and prompt intents. Our findings reveal a spectrum of behaviors: some vibe coders rely almost entirely on AI without inspecting code, while others examine and adapt generated outputs. Across approaches, all must contend with the stochastic nature of generation, with debugging and refinement often described as "rolling the dice." Further, divergent mental models, shaped by vibe coders' expertise and reliance on AI, influence prompting strategies, evaluation practices, and levels of trust. These findings open new directions for research on the future of software engineering and point to practical opportunities for tool design and education.
https://arxiv.org/abs/2512.22418
Academic Papers
svg
1124a56fd626854d9de21b46a8cc861caf5cb64853c40b340e6bc75a593e334d
2026-01-01T00:00:00-05:00
SuperiorGAT: Graph Attention Networks for Sparse LiDAR Point Cloud Reconstruction in Autonomous Systems
arXiv:2512.22439v2 Announce Type: replace Abstract: LiDAR-based perception in autonomous systems is constrained by fixed vertical beam resolution and further compromised by beam dropout resulting from environmental occlusions. This paper introduces SuperiorGAT, a graph attention-based framework designed to reconstruct missing elevation information in sparse LiDAR point clouds. By modeling LiDAR scans as beam-aware graphs and incorporating gated residual fusion with feed-forward refinement, SuperiorGAT enables accurate reconstruction without increasing network depth. To evaluate performance, structured beam dropout is simulated by removing every fourth vertical scanning beam. Extensive experiments across diverse KITTI environments, including Person, Road, Campus, and City sequences, demonstrate that SuperiorGAT consistently achieves lower reconstruction error and improved geometric consistency compared to PointNet-based models and deeper GAT baselines. Qualitative X-Z projections further confirm the model's ability to preserve structural integrity with minimal vertical distortion. These results suggest that architectural refinement offers a computationally efficient method for improving LiDAR resolution without requiring additional sensor hardware.
https://arxiv.org/abs/2512.22439
Academic Papers
svg
cc9e9b979a86d83df2ba164d0b4e8153e92a5c2b038b9dece2f155b7c7c14509
2026-01-01T00:00:00-05:00
Tracking by Predicting 3-D Gaussians Over Time
arXiv:2512.22489v2 Announce Type: replace Abstract: We propose Video Gaussian Masked Autoencoders (Video-GMAE), a self-supervised approach for representation learning that encodes a sequence of images into a set of Gaussian splats moving over time. Representing a video as a set of Gaussians enforces a reasonable inductive bias: that 2-D videos are often consistent projections of a dynamic 3-D scene. We find that tracking emerges when pretraining a network with this architecture. Mapping the trajectory of the learnt Gaussians onto the image plane gives zero-shot tracking performance comparable to state-of-the-art. With small-scale finetuning, our models achieve 34.6% improvement on Kinetics, and 13.1% on Kubric datasets, surpassing existing self-supervised video approaches. The project page and code are publicly available at https://videogmae.org/ and https://github.com/tekotan/video-gmae.
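The zero-shot tracking readout described above amounts to projecting each learnt Gaussian's 3-D center onto the image plane in every frame. A minimal pinhole-camera sketch of that mapping (the intrinsics here are arbitrary values of ours, not the paper's):

```python
import numpy as np

def project_tracks(means_3d, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Project per-frame 3-D Gaussian centers to 2-D pixel trajectories
    with a pinhole camera model.

    means_3d : (T, G, 3) Gaussian centers over T frames, G Gaussians
    returns  : (T, G, 2) pixel trajectories
    """
    x, y, z = means_3d[..., 0], means_3d[..., 1], means_3d[..., 2]
    u = fx * x / z + cx
    v = fy * y / z + cy
    return np.stack([u, v], axis=-1)

# one Gaussian moving along +x at depth 2: its 2-D track moves right
traj = project_tracks(np.array([[[0.0, 0.0, 2.0]],
                                [[0.1, 0.0, 2.0]]]))
```

The resulting per-Gaussian pixel trajectories are what get compared against point-tracking benchmarks.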
https://arxiv.org/abs/2512.22489
Academic Papers
svg
97e8973b71d427d7c9bb2f09a976b16ea2100a6a7c763c285e9a5487e86bdb40
2026-01-01T00:00:00-05:00
A Lightweight Coordinate-Conditioned Diffusion Approach for 6G C-V2X Radio Environment Maps
arXiv:2512.22535v2 Announce Type: replace Abstract: Transmitter vehicles that broadcast 6G Cellular Vehicle-to-Everything (C-V2X)-based messages, e.g., Basic Safety Messages (BSMs), are prone to PHY issues due to the lack of a dynamic, high-fidelity Radio Environment Map (REM) that tracks location variation. This paper explores a lightweight diffusion-based generative approach, the Coordinate-Conditioned Denoising Diffusion Probabilistic Model (CCDDPM), that leverages signal-intensity-based 6G V2X REMs from a limited set of historical transmitter vehicles in a specific region to predict the REM for a transmitter vehicle at arbitrary coordinates across the same region. The transmitter vehicle coordinate is encoded as a smooth Gaussian prior and fused with Gaussian noise through a lightweight two-channel conditional U-Net architecture. We demonstrate that the predicted REM closely matches the statistics and structure of the ground-truth REM while exhibiting improved stability over other widely applied generative AI approaches. The resulting predictor enables rapid and scenario-consistent REM generation for arbitrary transmitter coordinates, thereby supporting more efficient 6G C-V2X communications in which transmitter vehicles are less likely to suffer from PHY issues.
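The conditioning scheme described above, a smooth Gaussian prior for the transmitter coordinate fused with a noise channel, can be sketched as follows. Grid size and kernel width are our assumptions, not values from the paper.

```python
import numpy as np

def coordinate_prior(xy, grid=64, sigma=4.0):
    """Encode a transmitter coordinate as a smooth Gaussian heat-map
    channel (illustrative stand-in for the paper's 'smooth Gaussian
    prior'; sigma and grid are our choices).

    xy : (row, col) transmitter position on the REM grid.
    """
    r, c = np.mgrid[0:grid, 0:grid]
    d2 = (r - xy[0]) ** 2 + (c - xy[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def two_channel_input(xy, grid=64, seed=0):
    """Stack the coordinate prior with Gaussian noise into a two-channel
    tensor, the shape of input a conditional U-Net would consume."""
    rng = np.random.default_rng(seed)
    prior = coordinate_prior(xy, grid)
    noise = rng.standard_normal((grid, grid))
    return np.stack([noise, prior])   # shape (2, grid, grid)

x = two_channel_input((32, 16))
```

The U-Net then denoises the first channel while the second channel tells it where the transmitter sits.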
https://arxiv.org/abs/2512.22535
Academic Papers
svg
296dc77790c5f6b476225405cae51f3690a4f3d35146cb4aa112fc717941b9d3
2026-01-01T00:00:00-05:00
CritiFusion: Semantic Critique and Spectral Alignment for Faithful Text-to-Image Generation
arXiv:2512.22681v2 Announce Type: replace Abstract: Recent text-to-image diffusion models have achieved remarkable visual fidelity but often struggle with semantic alignment to complex prompts. We introduce CritiFusion, a novel inference-time framework that integrates a multimodal semantic critique mechanism with frequency-domain refinement to improve text-to-image consistency and detail. The proposed CritiCore module leverages a vision-language model and multiple large language models to enrich the prompt context and produce high-level semantic feedback, guiding the diffusion process to better align generated content with the prompt's intent. Additionally, SpecFusion merges intermediate generation states in the spectral domain, injecting coarse structural information while preserving high-frequency details. No additional model training is required. CritiFusion serves as a plug-in refinement stage compatible with existing diffusion backbones. Experiments on standard benchmarks show that our method notably improves human-aligned metrics of text-to-image correspondence and visual quality. CritiFusion consistently boosts performance on human preference scores and aesthetic evaluations, achieving results on par with state-of-the-art reward optimization approaches. Qualitative results further demonstrate superior detail, realism, and prompt fidelity, indicating the effectiveness of our semantic critique and spectral alignment strategy.
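The SpecFusion idea, taking coarse structure from one intermediate state and fine detail from another in the frequency domain, can be sketched with a hard low-pass mask. The cutoff and the binary mask are our simplifications; the paper's merging rule may differ.

```python
import numpy as np

def spectral_fuse(coarse, detailed, cutoff=0.15):
    """Merge two images in the frequency domain: low frequencies (global
    structure) come from `coarse`, high frequencies (fine detail) from
    `detailed`. Minimal sketch with a hard radial mask.
    """
    h, w = coarse.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    low = np.sqrt(fy ** 2 + fx ** 2) <= cutoff   # boolean low-pass mask
    Fc, Fd = np.fft.fft2(coarse), np.fft.fft2(detailed)
    fused = np.where(low, Fc, Fd)
    return np.fft.ifft2(fused).real

# a constant image has only low-frequency (DC) content, so fusing it
# with an all-zero "detail" image returns the constant image
img = spectral_fuse(np.ones((32, 32)), np.zeros((32, 32)))
```

In practice both inputs would be intermediate diffusion states rather than toy constants.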
https://arxiv.org/abs/2512.22681
Academic Papers
svg
c4ce3dda287e201d08e5f5c1fc8293daaafbce50fb2c7b533ef5c5c30de02488
2026-01-01T00:00:00-05:00
Protonic Nickelate Device Networks for Spatiotemporal Neuromorphic Computing
arXiv:2512.22722v2 Announce Type: replace Abstract: Computation in biological neural circuits arises from the interplay of nonlinear temporal responses and spatially distributed dynamic network interactions. Replicating this richness in hardware has remained challenging, as most neuromorphic devices emulate only isolated neuron- or synapse-like functions. In this work, we introduce an integrated neuromorphic computing platform in which both nonlinear spatiotemporal processing and programmable memory are realized within a single perovskite nickelate material system. By engineering symmetric and asymmetric hydrogenated NdNiO3 junction devices on the same wafer, we combine ultrafast, proton-mediated transient dynamics with stable multilevel resistance states. Networks of symmetric NdNiO3 junctions exhibit emergent spatial interactions mediated by proton redistribution, while each node simultaneously provides short-term temporal memory, enabling nanosecond-scale operation with an energy cost of 0.2 nJ per input. When interfaced with asymmetric output units serving as reconfigurable long-term weights, these networks allow both feature transformation and linear classification in the same material system. Leveraging these emergent interactions, the platform enables real-time pattern recognition and achieves high accuracy in spoken-digit classification and early seizure detection, outperforming temporal-only or uncoupled architectures. These results position protonic nickelates as a compact, energy-efficient, CMOS-compatible platform that integrates processing and memory for scalable intelligent hardware.
https://arxiv.org/abs/2512.22722
Academic Papers
svg
1826596e6afbd48a7dfb52dadc5227c7f9a6e0e25f11304653f87839909bf87b
2026-01-01T00:00:00-05:00
TrimTokenator-LC: Towards Adaptive Visual Token Pruning for Large Multimodal Models with Long Contexts
arXiv:2512.22748v2 Announce Type: replace Abstract: Large Multimodal Models (LMMs) have proven effective on various tasks. They typically encode visual inputs into sequences of tokens, which are then concatenated with textual tokens and jointly processed by the language model. However, the growing number of visual tokens greatly increases inference cost. Visual token pruning has emerged as a promising solution, yet existing methods often overlook scenarios involving long context inputs with multiple images. In this paper, we analyze the challenges of visual token pruning in long context, multi-image settings and introduce an adaptive pruning method tailored for such scenarios. We decompose redundancy into intra-image and inter-image components and quantify them through intra-image diversity and inter-image variation, which jointly guide dynamic budget allocation. Our approach consists of two stages. The intra-image stage allocates each image a content-aware token budget and greedily selects its most representative tokens. The inter-image stage performs global diversity filtering to form a candidate pool and then applies a Pareto selection procedure that balances diversity with text alignment. Extensive experiments show that our approach can reduce up to 80% of visual tokens while maintaining performance in long context settings.
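The greedy intra-image selection step can be sketched as farthest-point selection on token features: repeatedly add the token least similar to everything already kept. This is our simplified stand-in; the paper's scoring also factors in content-aware budgets and text alignment.

```python
import numpy as np

def greedy_select(tokens, budget):
    """Greedily pick `budget` mutually diverse visual tokens
    (farthest-point selection under cosine similarity).

    tokens : (N, D) token features
    returns sorted indices of kept tokens
    """
    feats = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    chosen = [0]                          # seed with the first token
    max_sim = feats @ feats[0]            # similarity to the chosen set
    while len(chosen) < budget:
        cand = int(np.argmin(max_sim))    # least similar to chosen set
        chosen.append(cand)
        max_sim = np.maximum(max_sim, feats @ feats[cand])
    return sorted(chosen)

toks = np.eye(4)                          # four orthogonal toy tokens
keep = greedy_select(toks, 3)
```

With orthogonal toy tokens any three are equally diverse, so the routine keeps the first three it encounters.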
https://arxiv.org/abs/2512.22748
Academic Papers
svg
8bbb27dd2bb7d4df187bd37b817906ecef1d083674993148b219a601bb5c2786
2026-01-01T00:00:00-05:00
Federated Multi-Task Clustering
arXiv:2512.22897v2 Announce Type: replace Abstract: Spectral clustering has emerged as one of the most effective clustering algorithms due to its superior performance. However, most existing models are designed for centralized settings, rendering them inapplicable in modern decentralized environments. Moreover, current federated learning approaches often suffer from poor generalization performance due to reliance on unreliable pseudo-labels, and fail to capture the latent correlations amongst heterogeneous clients. To tackle these limitations, this paper proposes a novel framework named Federated Multi-Task Clustering (i.e., FMTC), which intends to learn personalized clustering models for heterogeneous clients while collaboratively leveraging their shared underlying structure in a privacy-preserving manner. More specifically, the FMTC framework is composed of two main components: a client-side personalized clustering module, which learns a parameterized mapping model to support robust out-of-sample inference, bypassing the need for unreliable pseudo-labels; and a server-side tensorial correlation module, which explicitly captures the shared knowledge across all clients. This is achieved by organizing all client models into a unified tensor and applying a low-rank regularization to discover their common subspace. To solve this joint optimization problem, we derive an efficient, privacy-preserving distributed algorithm based on the Alternating Direction Method of Multipliers, which decomposes the global problem into parallel local updates on clients and an aggregation step on the server. Finally, extensive experiments on multiple real-world datasets demonstrate that our proposed FMTC framework significantly outperforms various baseline and state-of-the-art federated clustering algorithms.
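The server-side low-rank step is, in standard ADMM treatments of nuclear-norm regularization, a singular value thresholding update on the stacked client models. The sketch below shows that proximal step on a matrix unfolding; the paper's tensor organization and exact updates are omitted.

```python
import numpy as np

def singular_value_threshold(M, tau):
    """Proximal operator of the nuclear norm: shrink singular values by
    tau. This is the classic ADMM building block for enforcing a
    low-rank common subspace across stacked client models (sketch)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# two nearly identical "client models" stacked as rows: thresholding
# removes the tiny second singular value, leaving a rank-1 consensus
clients = np.vstack([np.arange(4.0), np.arange(4.0) + 0.01])
shared = singular_value_threshold(clients, tau=0.05)
```

The thresholded matrix is what the server would broadcast back as the shared structure.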
https://arxiv.org/abs/2512.22897
Academic Papers
svg
f466a1bfe831e80397b1c5e97c2b141596cc05a23d4d1842cf7b8a530deb273b
2026-01-01T00:00:00-05:00
Multimodal Fact-Checking: An Agent-based Approach
arXiv:2512.22933v2 Announce Type: replace Abstract: The rapid spread of multimodal misinformation poses a growing challenge for automated fact-checking systems. Existing approaches, including large vision language models (LVLMs) and deep multimodal fusion methods, often fall short due to limited reasoning and shallow evidence utilization. A key bottleneck is the lack of dedicated datasets that provide complete real-world multimodal misinformation instances accompanied by annotated reasoning processes and verifiable evidence. To address this limitation, we introduce RW-Post, a high-quality and explainable dataset for real-world multimodal fact-checking. RW-Post aligns real-world multimodal claims with their original social media posts, preserving the rich contextual information in which the claims are made. In addition, the dataset includes detailed reasoning and explicitly linked evidence, which are derived from human-written fact-checking articles via a large language model-assisted extraction pipeline, enabling comprehensive verification and explanation. Building upon RW-Post, we propose AgentFact, an agent-based multimodal fact-checking framework designed to emulate the human verification workflow. AgentFact consists of five specialized agents that collaboratively handle key fact-checking subtasks, including strategy planning, high-quality evidence retrieval, visual analysis, reasoning, and explanation generation. These agents are orchestrated through an iterative workflow that alternates between evidence searching and task-aware evidence filtering and reasoning, facilitating strategic decision-making and systematic evidence analysis. Extensive experimental results demonstrate that the synergy between RW-Post and AgentFact substantially improves both the accuracy and interpretability of multimodal fact-checking.
https://arxiv.org/abs/2512.22933
Academic Papers
svg
7096797ae380f98db10a785b6eb4c266740a5d00024fd04c832f2b71d5947f2f
2026-01-01T00:00:00-05:00
ColaVLA: Leveraging Cognitive Latent Reasoning for Hierarchical Parallel Trajectory Planning in Autonomous Driving
arXiv:2512.22939v2 Announce Type: replace Abstract: Autonomous driving requires generating safe and reliable trajectories from complex multimodal inputs. Traditional modular pipelines separate perception, prediction, and planning, while recent end-to-end (E2E) systems learn them jointly. Vision-language models (VLMs) further enrich this paradigm by introducing cross-modal priors and commonsense reasoning, yet current VLM-based planners face three key challenges: (i) a mismatch between discrete text reasoning and continuous control, (ii) high latency from autoregressive chain-of-thought decoding, and (iii) inefficient or non-causal planners that limit real-time deployment. We propose ColaVLA, a unified vision-language-action framework that transfers reasoning from text to a unified latent space and couples it with a hierarchical, parallel trajectory decoder. The Cognitive Latent Reasoner compresses scene understanding into compact, decision-oriented meta-action embeddings through ego-adaptive selection and only two VLM forward passes. The Hierarchical Parallel Planner then generates multi-scale, causality-consistent trajectories in a single forward pass. Together, these components preserve the generalization and interpretability of VLMs while enabling efficient, accurate and safe trajectory generation. Experiments on the nuScenes benchmark show that ColaVLA achieves state-of-the-art performance in both open-loop and closed-loop settings with favorable efficiency and robustness.
https://arxiv.org/abs/2512.22939
Academic Papers
svg
4e0db68a7a47a99d33b8aee5a867ae2b399415682bb64b002025c03cb7162b7a
2026-01-01T00:00:00-05:00
A Context-Aware Temporal Modeling through Unified Multi-Scale Temporal Encoding and Hierarchical Sequence Learning for Single-Channel EEG Sleep Staging
arXiv:2512.22976v2 Announce Type: replace Abstract: Automatic sleep staging is a critical task in healthcare due to the global prevalence of sleep disorders. This study focuses on single-channel electroencephalography (EEG), a practical and widely available signal for automatic sleep staging. Existing approaches face challenges such as class imbalance, limited receptive-field modeling, and insufficient interpretability. This work proposes a context-aware and interpretable framework for single-channel EEG sleep staging, with particular emphasis on improving detection of the N1 stage. Many prior models operate as black boxes with stacked layers, lacking clearly defined and interpretable feature extraction roles. The proposed model combines compact multi-scale feature extraction with temporal modeling to capture both local and long-range dependencies. To address data imbalance, especially in the N1 stage, class-weighted loss functions and data augmentation are applied. EEG signals are segmented into sub-epoch chunks, and final predictions are obtained by averaging softmax probabilities across chunks, enhancing contextual representation and robustness. The proposed framework achieves an overall accuracy of 89.72% and a macro-average F1-score of 85.46%. Notably, it attains an F1-score of 61.7% for the challenging N1 stage, demonstrating a substantial improvement over previous methods on the SleepEDF datasets. These results indicate that the proposed approach effectively improves sleep staging performance while maintaining interpretability and suitability for real-world clinical applications.
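The chunk-level aggregation described above, averaging softmax probabilities across sub-epoch chunks before taking the argmax, is simple to sketch. Chunk count, class order, and logits here are illustrative values of ours.

```python
import numpy as np

def predict_epoch(chunk_logits):
    """Label one sleep epoch by averaging per-chunk softmax
    probabilities, then taking the argmax over classes.

    chunk_logits : (n_chunks, n_classes) raw model outputs
    """
    z = chunk_logits - chunk_logits.max(axis=1, keepdims=True)  # stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return int(np.argmax(p.mean(axis=0)))

# three chunks over 5 stages (W, N1, N2, N3, REM); two chunks favor N2,
# so the averaged probabilities pick N2 (index 2) despite one N1 vote
logits = np.array([[0, 1, 3, 0, 0],
                   [0, 2, 1, 0, 0],
                   [0, 0, 3, 1, 0]], dtype=float)
stage = predict_epoch(logits)
```

Averaging probabilities rather than hard votes lets a confident chunk outweigh several ambivalent ones.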
https://arxiv.org/abs/2512.22976
Academic Papers
svg
482e7f85a51f7a1d2d33b5a0a35093467d47423d685a862ddca833e9f25c7044
2026-01-01T00:00:00-05:00
PoseStreamer: A Multi-modal Framework for 6DoF Pose Estimation of Unseen Moving Objects
arXiv:2512.22979v2 Announce Type: replace Abstract: Six degree of freedom (6DoF) pose estimation for novel objects is a critical task in computer vision, yet it faces significant challenges in high-speed and low-light scenarios where standard RGB cameras suffer from motion blur. While event cameras offer a promising solution due to their high temporal resolution, current 6DoF pose estimation methods typically yield suboptimal performance in scenarios with high-speed object motion. To address this gap, we propose PoseStreamer, a robust multi-modal 6DoF pose estimation framework designed specifically for high-speed scenarios. Our approach integrates three core components: an Adaptive Pose Memory Queue that utilizes historical orientation cues for temporal consistency, an Object-centric 2D Tracker that provides strong 2D priors to boost 3D center recall, and a Ray Pose Filter for geometric refinement along camera rays. Furthermore, we introduce MoCapCube6D, a novel multi-modal dataset constructed to benchmark performance under rapid motion. Extensive experiments demonstrate that PoseStreamer not only achieves superior accuracy in high-speed scenarios, but also exhibits strong generalizability as a template-free framework for unseen moving objects.
https://arxiv.org/abs/2512.22979
Academic Papers
svg
a2a0dd5e11d4925d275379edc5bb5bd3ac7a94ac3412984a3b63cafb0d047e8c
2026-01-01T00:00:00-05:00
OpenGround: Active Cognition-based Reasoning for Open-World 3D Visual Grounding
arXiv:2512.23020v2 Announce Type: replace Abstract: 3D visual grounding aims to locate objects based on natural language descriptions in 3D scenes. Existing methods rely on a pre-defined Object Lookup Table (OLT) to query Visual Language Models (VLMs) for reasoning about object locations, which limits the applications in scenarios with undefined or unforeseen targets. To address this problem, we present OpenGround, a novel zero-shot framework for open-world 3D visual grounding. Central to OpenGround is the Active Cognition-based Reasoning (ACR) module, which is designed to overcome the fundamental limitation of pre-defined OLTs by progressively augmenting the cognitive scope of VLMs. The ACR module performs human-like perception of the target via a cognitive task chain and actively reasons about contextually relevant objects, thereby extending VLM cognition through a dynamically updated OLT. This allows OpenGround to function with both pre-defined and open-world categories. We also propose a new dataset named OpenTarget, which contains over 7000 object-description pairs to evaluate our method in open-world scenarios. Extensive experiments demonstrate that OpenGround achieves competitive performance on Nr3D, state-of-the-art on ScanRefer, and delivers a substantial 17.6% improvement on OpenTarget. Project Page at https://why-102.github.io/openground.io/.
https://arxiv.org/abs/2512.23020
Academic Papers
svg
5a9e3419d9d1614e40e771903ec45bc8b28b416f155a12522525929346088e43
2026-01-01T00:00:00-05:00
Differentiable Physics-Driven Human Representation for Millimeter-Wave Based Pose Estimation
arXiv:2512.23054v2 Announce Type: replace Abstract: While millimeter-wave (mmWave) presents advantages for Human Pose Estimation (HPE) through its non-intrusive sensing capabilities, current mmWave-based HPE methods face limitations in two predominant input paradigms: Heatmap and Point Cloud (PC). Heatmap represents dense multi-dimensional features derived from mmWave, but is significantly affected by multipath propagation and hardware modulation noise. PC, a set of 3D points, is obtained by applying the Constant False Alarm Rate algorithm to the Heatmap, which suppresses noise but results in sparse human-related features. To address these limitations, we study the feasibility of providing an alternative input paradigm: Differentiable Physics-driven Human Representation (DIPR), which represents humans as an ensemble of Gaussian distributions with kinematic and electromagnetic parameters. Inspired by Gaussian Splatting, DIPR leverages human kinematic priors and mmWave propagation physics to enhance human features while mitigating non-human noise through two strategies: 1) We incorporate prior kinematic knowledge to initialize DIPR based on the Heatmap and establish multi-faceted optimization objectives, ensuring biomechanical validity and enhancing motion features. 2) We simulate complete mmWave processing pipelines, re-render a new Heatmap from DIPR, and compare it with the original Heatmap, avoiding spurious noise generation due to kinematic constraints overfitting. Experimental results on three datasets with four methods demonstrate that existing mmWave-based HPE methods can easily integrate DIPR and achieve superior performance.
https://arxiv.org/abs/2512.23054
Academic Papers
svg
586db1c82fd4eb25682fffc3a68c4811a590ba7affca2cc5a6d5f0ba622761e4
2026-01-01T00:00:00-05:00
Artificial Intelligence and Employment Exposure: A Territorial and Gender Perspective
arXiv:2512.23059v2 Announce Type: replace Abstract: The diffusion of artificial intelligence, particularly generative models, is expected to transform labor markets in uneven ways across sectors, territories, and social groups. This paper proposes a methodological framework to estimate the potential exposure of employment to AI using sector based data, addressing the limitations of occupation centered approaches in the Spanish context. By constructing an AI CNAE incidence matrix and applying it to provincial employment data for the period 2021 to 2023, we provide a territorial and gender disaggregated assessment of AI exposure across Spain. The results reveal stable structural patterns, with higher exposure in metropolitan and service oriented regions and a consistent gender gap, as female employment exhibits higher exposure in all territories. Rather than predicting job displacement, the framework offers a structural perspective on where AI is most likely to reshape work and skill demands, supporting evidence based policy and strategic planning.
https://arxiv.org/abs/2512.23059
Academic Papers
svg
ba9aac0c563a28fe77fd507f96b70a10df45054f6eeb62c088682b36025806d5
2026-01-01T00:00:00-05:00
APOLLO Blender: A Robotics Library for Visualization and Animation in Blender
arXiv:2512.23103v2 Announce Type: replace Abstract: High-quality visualizations are an essential part of robotics research, enabling clear communication of results through figures, animations, and demonstration videos. While Blender is a powerful and freely available 3D graphics platform, its steep learning curve and lack of robotics-focused integrations make it difficult and time-consuming for researchers to use effectively. In this work, we introduce a lightweight software library that bridges this gap by providing simple scripting interfaces for common robotics visualization tasks. The library offers three primary capabilities: (1) importing robots and environments directly from standardized descriptions such as URDF; (2) Python-based scripting tools for keyframing robot states and visual attributes; and (3) convenient generation of primitive 3D shapes for schematic figures and animations. Together, these features allow robotics researchers to rapidly create publication-ready images, animations, and explanatory schematics without needing extensive Blender expertise. We demonstrate the library through a series of proof-of-concept examples and conclude with a discussion of current limitations and opportunities for future extensions.
https://arxiv.org/abs/2512.23103
Academic Papers
svg
9e50148f3d313fb409fa8d7a97898114eabe127ce65fd1bb593a27b568723e5c
2026-01-01T00:00:00-05:00
InSPO: Unlocking Intrinsic Self-Reflection for LLM Preference Optimization
arXiv:2512.23126v2 Announce Type: replace Abstract: Direct Preference Optimization (DPO) and its variants have become standard for aligning Large Language Models due to their simplicity and offline stability. However, we identify two fundamental limitations. First, the optimal policy depends on arbitrary modeling choices (scalarization function, reference policy), yielding behavior reflecting parameterization artifacts rather than true preferences. Second, treating response generation in isolation fails to leverage comparative information in pairwise data, leaving the model's capacity for intrinsic self-reflection untapped. To address these limitations, we propose Intrinsic Self-reflective Preference Optimization (InSPO), deriving a globally optimal policy conditioned on both context and alternative responses. We prove this formulation superior to DPO/RLHF while guaranteeing invariance to scalarization and reference choices. InSPO serves as a plug-and-play enhancement without architectural changes or inference overhead. Experiments demonstrate consistent improvements in win rates and length-controlled metrics, validating that unlocking self-reflection yields more robust, human-aligned LLMs.
https://arxiv.org/abs/2512.23126
Academic Papers
svg
e2c433c0fba97e1935cb79c8f51455c8eb3584ddc242de2edb8f3dc59b36c7e4
2026-01-01T00:00:00-05:00
Beyond URDF: The Universal Robot Description Directory for Shared, Extensible, and Standardized Robot Models
arXiv:2512.23135v2 Announce Type: replace Abstract: Robots are typically described in software by specification files (e.g., URDF, SDF, MJCF, USD) that encode only basic kinematic, dynamic, and geometric information. As a result, downstream applications such as simulation, planning, and control must repeatedly re-derive richer data, leading to redundant computations, fragmented implementations, and limited standardization. In this work, we introduce the Universal Robot Description Directory (URDD), a modular representation that organizes derived robot information into structured, easy-to-parse JSON and YAML modules. Our open-source toolkit automatically generates URDDs from URDFs, with a Rust implementation supporting Bevy-based visualization. Additionally, we provide a JavaScript/Three.js viewer for web-based inspection of URDDs. Experiments on multiple robot platforms show that URDDs can be generated efficiently, encapsulate substantially richer information than standard specification files, and directly enable the construction of core robotics subroutines. URDD provides a unified, extensible resource for reducing redundancy and establishing shared standards across robotics frameworks. We conclude with a discussion on the limitations and implications of our work.
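The core idea, deriving easy-to-parse structured modules from a URDF, can be sketched in a few lines: parse the XML and emit a JSON module with link names and joint topology. This toy URDF and module layout are our own; the real URDD toolkit (implemented in Rust) generates far richer modules.

```python
import json
import xml.etree.ElementTree as ET

URDF = """<robot name="arm">
  <link name="base"/><link name="upper"/>
  <joint name="shoulder" type="revolute">
    <parent link="base"/><child link="upper"/>
  </joint>
</robot>"""

def urdf_to_module(urdf_text):
    """Derive a URDD-style structured module from a URDF string:
    robot name, link names, and a joint table with parent/child
    topology, ready to serialize as JSON or YAML."""
    root = ET.fromstring(urdf_text)
    return {
        "robot": root.get("name"),
        "links": [l.get("name") for l in root.findall("link")],
        "joints": [{"name": j.get("name"), "type": j.get("type"),
                    "parent": j.find("parent").get("link"),
                    "child": j.find("child").get("link")}
                   for j in root.findall("joint")],
    }

module = urdf_to_module(URDF)
text = json.dumps(module, indent=2)   # what a downstream tool would load
```

Downstream consumers then read the serialized module instead of re-parsing the URDF and re-deriving the topology.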
https://arxiv.org/abs/2512.23135
Academic Papers
svg
239f4fd8a660f1f17ae4aca785327cd39741b27de3e85a9cfb2494e3953b5996
2026-01-01T00:00:00-05:00
A New Software Tool for Generating and Visualizing Robot Self-Collision Matrices
arXiv:2512.23140v2 Announce Type: replace Abstract: In robotics, it is common to check whether a given robot state results in self-intersection (i.e., a self-collision query) or to assess its distance from such an intersection (i.e., a self-proximity query). These checks are typically performed between pairs of shapes attached to different robot links. However, many of these shape pairs can be excluded in advance, as their configurations are known to always or never result in contact. This information is typically encoded in a self-collision matrix, where each entry (i, j) indicates whether a check should be performed between shape i and shape j. While the MoveIt Setup Assistant is widely used to generate such matrices, current tools are limited by static visualization, lack of proximity support, rigid single-geometry assumptions, and tedious refinement workflows, hindering flexibility and reuse in downstream robotics applications. In this work, we introduce an interactive tool that overcomes these limitations by generating and visualizing self-collision matrices across multiple shape representations, enabling dynamic inspection, filtering, and refinement of shape pairs. Outputs are provided in both JSON and YAML for easy integration. The system is implemented in Rust and uses the Bevy game engine to deliver high-quality visualizations. We demonstrate its effectiveness on multiple robot platforms, showing that matrices generated using diverse shape types yield faster and more accurate self-collision and self-proximity queries.
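The matrix-generation logic described above reduces to classifying shape pairs from sampled robot states: pairs that collide in every sample or in none can be excluded from runtime checks. The sketch below assumes a precomputed boolean collision log (the sampling and geometry queries are omitted), and the label names are ours.

```python
import numpy as np

def classify_pairs(collision_log):
    """Classify shape pairs from sampled states. Pairs colliding in all
    samples ('always') or none ('never') are skipped at runtime; only
    'check' pairs need online self-collision queries.

    collision_log : (n_states, n_pairs) boolean array
    """
    frac = collision_log.mean(axis=0)          # collision frequency per pair
    labels = np.full(collision_log.shape[1], "check", dtype=object)
    labels[frac == 1.0] = "always"
    labels[frac == 0.0] = "never"
    return labels

# 2 sampled states, 3 shape pairs
log = np.array([[1, 0, 1],
                [1, 0, 0]], dtype=bool)
labels = classify_pairs(log)
```

In a real tool the labels would be serialized to JSON/YAML as the entries of the self-collision matrix.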
https://arxiv.org/abs/2512.23140
Academic Papers
svg
3e41d467ea6656a6ec18d2e67845940607c08c401a12427ad43f1c52b0386451
2026-01-01T00:00:00-05:00
SurgWorld: Learning Surgical Robot Policies from Videos via World Modeling
arXiv:2512.23162v2 Announce Type: replace Abstract: Data scarcity remains a fundamental barrier to achieving fully autonomous surgical robots. While large scale vision language action (VLA) models have shown impressive generalization in household and industrial manipulation by leveraging paired video action data from diverse domains, surgical robotics suffers from the paucity of datasets that include both visual observations and accurate robot kinematics. In contrast, vast corpora of surgical videos exist, but they lack corresponding action labels, preventing direct application of imitation learning or VLA training. In this work, we aim to alleviate this problem by learning policy models from SurgWorld, a world model designed for surgical physical AI. We curated the Surgical Action Text Alignment (SATA) dataset with detailed action descriptions specifically for surgical robots. We then built SurgWorld on top of a state-of-the-art physical AI world model and SATA; it is able to generate diverse, generalizable, and realistic surgery videos. We are also the first to use an inverse dynamics model to infer pseudokinematics from synthetic surgical videos, producing synthetic paired video action data. We demonstrate that a surgical VLA policy trained with these augmented data significantly outperforms models trained only on real demonstrations on a real surgical robot platform. Our approach offers a scalable path toward autonomous surgical skill acquisition by leveraging the abundance of unlabeled surgical video and generative world modeling, thus opening the door to generalizable and data efficient surgical robot policies.
https://arxiv.org/abs/2512.23162
Academic Papers
svg
cb28a7cb91986861dce08f485897135e1c22f6e5f1f8f88aa96be7c6b95eadfc
2026-01-01T00:00:00-05:00
Evaluating Parameter Efficient Methods for RLVR
arXiv:2512.23165v2 Announce Type: replace Abstract: We systematically evaluate Parameter-Efficient Fine-Tuning (PEFT) methods under the paradigm of Reinforcement Learning with Verifiable Rewards (RLVR). RLVR incentivizes language models to enhance their reasoning capabilities through verifiable feedback; however, while methods like LoRA are commonly used, the optimal PEFT architecture for RLVR remains unidentified. In this work, we conduct the first comprehensive evaluation of over 12 PEFT methodologies across the DeepSeek-R1-Distill families on mathematical reasoning benchmarks. Our empirical results challenge the default adoption of standard LoRA with three main findings. First, we demonstrate that structural variants, such as DoRA, AdaLoRA, and MiSS, consistently outperform LoRA. Second, we uncover a spectral collapse phenomenon in SVD-informed initialization strategies (e.g., PiSSA, MiLoRA), attributing their failure to a fundamental misalignment between principal-component updates and RL optimization. Furthermore, our ablations reveal that extreme parameter reduction (e.g., VeRA, Rank-1) severely bottlenecks reasoning capacity. We further conduct ablation studies and scaling experiments to validate our findings. This work provides a practical guide and advocates further exploration of parameter-efficient RL methods.
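For readers unfamiliar with the family being compared: LoRA adapts a frozen weight matrix W with a low-rank update, y = xW + (alpha/r) xAB, and the structural variants above (DoRA, AdaLoRA, MiSS) add structure on top of this form. A minimal sketch of the base update, with shapes and init of our choosing:

```python
import numpy as np

rng = np.random.default_rng(0)

def lora_forward(x, W, A, B, alpha=16, r=4):
    """LoRA-style adapted linear layer: the frozen weight W plus a
    scaled low-rank update AB. Shapes: W (d_in, d_out), A (d_in, r),
    B (r, d_out)."""
    return x @ W + (alpha / r) * (x @ A @ B)

d_in, d_out, r = 8, 6, 4
W = rng.standard_normal((d_in, d_out))      # frozen pretrained weight
A = rng.standard_normal((d_in, r)) * 0.01   # trainable down-projection
B = np.zeros((r, d_out))                    # B=0 init: adapter starts inert
x = rng.standard_normal((2, d_in))
y = lora_forward(x, W, A, B)
```

Because B is initialized to zero, the adapted layer exactly matches the frozen layer at the start of RL training; only A and B receive gradient updates.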
https://arxiv.org/abs/2512.23165
Academic Papers
svg
4b3e24d8e0ee62e711528a62e5343e1ea72dd9a1553582a45f13b5d8722b72ae
2026-01-01T00:00:00-05:00
A New Family of Binary Sequences via Elliptic Function Fields over Finite Fields of Odd Characteristics
arXiv:2512.23194v2 Announce Type: replace Abstract: Motivated by the constructions of binary sequences by utilizing the cyclic elliptic function fields over the finite field $\mathbb{F}_{2^{n}}$ by Jin \textit{et al.} in [IEEE Trans. Inf. Theory 71(8), 2025], we extend the construction to the cyclic elliptic function fields with odd characteristic by using the quadratic residue map $\eta$ instead of the trace map used therein. For any cyclic elliptic function field with $q+1+t$ rational points and any positive integer $d$ with $\gcd(d, q+1+t)=1$, we construct a new family of binary sequences of length $q+1+t$, size $q^{d-1}-1$, balance upper bounded by $(d+1)\cdot\lfloor2\sqrt{q}\rfloor+|t|+d,$ the correlation upper bounded by $(2d+1)\cdot\lfloor2\sqrt{q}\rfloor+|t|+2d$ and the linear complexity lower bounded by $\frac{q+1+2t-d-(d+1)\cdot\lfloor2\sqrt{q}\rfloor}{d+d\cdot\lfloor2\sqrt{q}\rfloor}$ where $\lfloor x\rfloor$ stands for the integer part of $x\in\mathbb{R}$.
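The stated family parameters are easy to instantiate numerically. The sketch below simply evaluates the abstract's formulas for one admissible $(q, t, d)$ of our choosing; it does not construct the sequences themselves.

```python
import math

def sequence_family_params(q, t, d):
    """Evaluate the abstract's stated parameters for the family built
    from a cyclic elliptic function field with q+1+t rational points.
    Requires gcd(d, q+1+t) = 1."""
    assert math.gcd(d, q + 1 + t) == 1
    s = math.isqrt(4 * q)                  # floor(2*sqrt(q))
    return {
        "length": q + 1 + t,
        "size": q ** (d - 1) - 1,
        "balance_bound": (d + 1) * s + abs(t) + d,
        "correlation_bound": (2 * d + 1) * s + abs(t) + 2 * d,
        "linear_complexity_lb":
            (q + 1 + 2 * t - d - (d + 1) * s) / (d + d * s),
    }

# sample instantiation: q = 17^2 = 289, t = 2, d = 3 (gcd(3, 292) = 1)
params = sequence_family_params(q=289, t=2, d=3)
```

Note that `math.isqrt(4*q)` equals $\lfloor 2\sqrt{q}\rfloor$ exactly, avoiding floating-point rounding in the bounds.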
https://arxiv.org/abs/2512.23194
Academic Papers
svg
d0bf862e6f874c68df79ae0a2b9012ed18d5ddbc07eace576746e707fcc0a64d
2026-01-01T00:00:00-05:00
KernelEvolve: Scaling Agentic Kernel Coding for Heterogeneous AI Accelerators at Meta
arXiv:2512.23236v2 Announce Type: replace Abstract: Making deep learning recommendation model (DLRM) training and inference fast and efficient is important. However, this presents three key system challenges: model architecture diversity, kernel primitive diversity, and hardware generation and architecture heterogeneity. This paper presents KernelEvolve, an agentic kernel coding framework, to tackle heterogeneity at scale for DLRM. KernelEvolve is designed to take kernel specifications as input and automate the process of kernel generation and optimization for recommendation models across heterogeneous hardware architectures. KernelEvolve does so by operating at multiple programming abstractions, from Triton and CuTe DSL to low-level hardware-agnostic languages, spanning the full hardware-software optimization stack. The kernel optimization process is formulated as a graph-based search with a selection policy, universal operator, fitness function, and termination rule, and it dynamically adapts to the runtime execution context through retrieval-augmented prompt synthesis. We designed, implemented, and deployed KernelEvolve to optimize a wide variety of production recommendation models across generations of NVIDIA and AMD GPUs, as well as Meta's AI accelerators. We validate KernelEvolve on the publicly-available KernelBench suite, achieving a 100% pass rate on all 250 problems across three difficulty levels, and on 160 PyTorch ATen operators across three heterogeneous hardware platforms, demonstrating 100% correctness. KernelEvolve reduces development time from weeks to hours and achieves substantial performance improvements over PyTorch baselines across diverse production use cases and for heterogeneous AI systems at scale. Beyond performance efficiency improvements, KernelEvolve significantly mitigates the programmability barrier for new AI hardware by enabling automated kernel generation for in-house developed AI hardware.
https://arxiv.org/abs/2512.23236
Academic Papers
svg
a2d31f6e7cdfe6cbc81fd2f297dd325356e10bf75d74af8aca0cd3374729ec03
2026-01-01T00:00:00-05:00
YOLO-Master: MOE-Accelerated with Specialized Transformers for Enhanced Real-time Detection
arXiv:2512.23273v2 Announce Type: replace Abstract: Existing Real-Time Object Detection (RTOD) methods commonly adopt YOLO-like architectures for their favorable trade-off between accuracy and speed. However, these models rely on static dense computation that applies uniform processing to all inputs, misallocating representational capacity and computational resources, such as over-allocating on trivial scenes while under-serving complex ones. This mismatch results in both computational redundancy and suboptimal detection performance. To overcome this limitation, we propose YOLO-Master, a novel YOLO-like framework that introduces instance-conditional adaptive computation for RTOD. This is achieved through an Efficient Sparse Mixture-of-Experts (ES-MoE) block that dynamically allocates computational resources to each input according to its scene complexity. At its core, a lightweight dynamic routing network guides expert specialization during training through a diversity-enhancing objective, encouraging complementary expertise among experts. Additionally, the routing network adaptively learns to activate only the most relevant experts, thereby improving detection performance while minimizing computational overhead during inference. Comprehensive experiments on five large-scale benchmarks demonstrate the superiority of YOLO-Master. On MS COCO, our model achieves 42.4% AP with 1.62ms latency, outperforming YOLOv13-N by +0.8% mAP with 17.8% faster inference. Notably, the gains are most pronounced on challenging dense scenes, while the model preserves efficiency on typical inputs and maintains real-time inference speed. Code will be available.
https://arxiv.org/abs/2512.23273
Academic Papers
svg
d1b1acbfb39c4aa3be69a6a1ea74e72af87685bef73fb81d06c9e5eeb0ea3356
2026-01-01T00:00:00-05:00
CubeBench: Diagnosing Interactive, Long-Horizon Spatial Reasoning Under Partial Observations
arXiv:2512.23328v2 Announce Type: replace Abstract: Large Language Model (LLM) agents, while proficient in the digital realm, face a significant gap in physical-world deployment due to the challenge of forming and maintaining a robust spatial mental model. We identify three core cognitive challenges hindering this transition: spatial reasoning, long-horizon state tracking via mental simulation, and active exploration under partial observation. To isolate and evaluate these faculties, we introduce CubeBench, a novel generative benchmark centered on the Rubik's Cube. CubeBench uses a three-tiered diagnostic framework that progressively assesses agent capabilities, from foundational state tracking with full symbolic information to active exploration with only partial visual data. Our experiments on leading LLMs reveal critical limitations, including a uniform 0.00% pass rate on all long-horizon tasks, exposing a fundamental failure in long-term planning. We also propose a diagnostic framework to isolate these cognitive bottlenecks by providing external solver tools. By analyzing the failure modes, we provide key insights to guide the development of more physically-grounded intelligent agents.
https://arxiv.org/abs/2512.23328
Academic Papers
svg
78f50822df9c43bef3278649a97dd7f12c2a9559d6d394a7d04a8c452b86f9b1
2026-01-01T00:00:00-05:00
Visual Language Hypothesis
arXiv:2512.23335v2 Announce Type: replace Abstract: We study visual representation learning from a structural and topological perspective. We begin from a single hypothesis: that visual understanding presupposes a semantic language for vision, in which many perceptual observations correspond to a small number of discrete semantic states. Together with widely assumed premises on transferability and abstraction in representation learning, this hypothesis implies that the visual observation space must be organized in a fiber-bundle-like structure, where nuisance variation populates fibers and semantics correspond to a quotient base space. From this structure we derive two theoretical consequences. First, the semantic quotient X/G is not a submanifold of X and cannot be obtained through smooth deformation alone; semantic invariance requires a non-homeomorphic, discriminative target (for example, supervision via labels, cross-instance identification, or multimodal alignment) that supplies explicit semantic equivalence. Second, we show that approximating the quotient also places structural demands on the model architecture. Semantic abstraction requires not only an external semantic target, but a representation mechanism capable of supporting topology change: an expand-and-snap process in which the manifold is first geometrically expanded to separate structure and then collapsed to form discrete semantic regions. We emphasize that these results are interpretive rather than prescriptive: the framework provides a topological lens that aligns with empirical regularities observed in large-scale discriminative and multimodal models, and with classical principles in statistical learning theory.
https://arxiv.org/abs/2512.23335
Academic Papers
svg
0465fe8b506f944f1d552f1e99519eb61bb924a7a52f35a3aec204d4772d2b27
2026-01-01T00:00:00-05:00
ISOPO: Proximal policy gradients without pi-old
arXiv:2512.23353v2 Announce Type: replace Abstract: This note introduces Isometric Policy Optimization (ISOPO), an efficient method to approximate the natural policy gradient in a single gradient step. In comparison, existing proximal policy methods such as GRPO or CISPO use multiple gradient steps with variants of importance ratio clipping to approximate a natural gradient step relative to a reference policy. In its simplest form, ISOPO normalizes the log-probability gradient of each sequence in the Fisher metric before contracting with the advantages. Another variant of ISOPO transforms the microbatch advantages based on the neural tangent kernel in each layer. ISOPO applies this transformation layer-wise in a single backward pass and can be implemented with negligible computational overhead compared to vanilla REINFORCE.
https://arxiv.org/abs/2512.23353
Academic Papers
svg
8bf04d323e6e7ca1237af91c27daac2b1cf4ecca52f1e622cd67f8ec31a6f45b
2026-01-01T00:00:00-05:00
SoulX-LiveTalk: Real-Time Infinite Streaming of Audio-Driven Avatars via Self-Correcting Bidirectional Distillation
arXiv:2512.23379v2 Announce Type: replace Abstract: Deploying massive diffusion models for real-time, infinite-duration, audio-driven avatar generation presents a significant engineering challenge, primarily due to the conflict between computational load and strict latency constraints. Existing approaches often compromise visual fidelity by enforcing strictly unidirectional attention mechanisms or reducing model capacity. To address this problem, we introduce \textbf{SoulX-LiveTalk}, a 14B-parameter framework optimized for high-fidelity real-time streaming. Diverging from conventional unidirectional paradigms, we use a \textbf{Self-correcting Bidirectional Distillation} strategy that retains bidirectional attention within video chunks. This design preserves critical spatiotemporal correlations, significantly enhancing motion coherence and visual detail. To ensure stability during infinite generation, we incorporate a \textbf{Multi-step Retrospective Self-Correction Mechanism}, enabling the model to autonomously recover from accumulated errors and preventing collapse. Furthermore, we engineered a full-stack inference acceleration suite incorporating hybrid sequence parallelism, Parallel VAE, and kernel-level optimizations. Extensive evaluations confirm that SoulX-LiveTalk is the first 14B-scale system to achieve a \textbf{sub-second start-up latency (0.87s)} while reaching a real-time throughput of \textbf{32 FPS}, setting a new standard for high-fidelity interactive digital human synthesis.
https://arxiv.org/abs/2512.23379
Academic Papers
svg
e91f819cd347181793c7f4167f3a4abee35aedd096be862d198ca0e0407c9b0c
2026-01-01T00:00:00-05:00
DriveLaW: Unifying Planning and Video Generation in a Latent Driving World
arXiv:2512.23421v2 Announce Type: replace Abstract: World models have become crucial for autonomous driving, as they learn how scenarios evolve over time to address the long-tail challenges of the real world. However, current approaches relegate world models to limited roles: they operate within ostensibly unified architectures that still keep world prediction and motion planning as decoupled processes. To bridge this gap, we propose DriveLaW, a novel paradigm that unifies video generation and motion planning. By directly injecting the latent representation from its video generator into the planner, DriveLaW ensures inherent consistency between high-fidelity future generation and reliable trajectory planning. Specifically, DriveLaW consists of two core components: DriveLaW-Video, our powerful world model that generates high-fidelity forecasting with expressive latent representations, and DriveLaW-Act, a diffusion planner that generates consistent and reliable trajectories from the latent of DriveLaW-Video, with both components optimized by a three-stage progressive training strategy. The power of our unified paradigm is demonstrated by new state-of-the-art results across both tasks. DriveLaW not only advances video prediction significantly, surpassing best-performing work by 33.3% in FID and 1.8% in FVD, but also achieves a new record on the NAVSIM planning benchmark.
https://arxiv.org/abs/2512.23421
Academic Papers
svg
c2f2e96340e185d38d7b937b679ea8647f2e0182ae3e66bc53c33d4020ea084d
2026-01-01T00:00:00-05:00
Distilled HuBERT for Mobile Speech Emotion Recognition: A Cross-Corpus Validation Study
arXiv:2512.23435v2 Announce Type: replace Abstract: Speech Emotion Recognition (SER) has significant potential for mobile applications, yet deployment remains constrained by the computational demands of state-of-the-art transformer architectures. This paper presents a mobile-efficient SER system based on DistilHuBERT, a distilled and 8-bit quantized transformer that achieves approximately 92% parameter reduction compared to full-scale Wav2Vec 2.0 models while maintaining competitive accuracy. We conduct a rigorous 5-fold Leave-One-Session-Out (LOSO) cross-validation on the IEMOCAP dataset to ensure speaker independence, augmented with cross-corpus training on CREMA-D to enhance generalization. Cross-corpus training with CREMA-D yields a 1.2% improvement in Weighted Accuracy, a 1.4% gain in Macro F1-score, and a 32% reduction in cross-fold variance, with the Neutral class showing the most substantial benefit at 5.4% F1-score improvement. Our approach achieves an Unweighted Accuracy of 61.4% with a quantized model footprint of only 23 MB, representing approximately 91% of the Unweighted Accuracy of a full-scale baseline. Cross-corpus evaluation on RAVDESS reveals that the theatrical nature of acted emotions causes predictions to cluster by arousal level rather than by specific emotion categories - happiness predictions systematically bleed into anger predictions, and sadness predictions bleed into neutral predictions, due to acoustic saturation when actors prioritize clarity over subtlety. Despite this theatricality effect reducing overall RAVDESS accuracy to 46.64%, the model maintains robust arousal detection with 99% recall for anger, 55% recall for neutral, and 27% recall for sadness. These findings demonstrate a Pareto-optimal tradeoff between model size and accuracy, enabling practical affect recognition on resource-constrained mobile devices.
https://arxiv.org/abs/2512.23435
Academic Papers
svg
1037ff5b8061cfa92d6aeef12aeaa7f0503bcdd9a9aa2174aeaeff351edd6ec3
2026-01-01T00:00:00-05:00
Theory of Mind for Explainable Human-Robot Interaction
arXiv:2512.23482v2 Announce Type: replace Abstract: Within the context of human-robot interaction (HRI), Theory of Mind (ToM) is intended to serve as a user-friendly backend to the interface of robotic systems, enabling robots to infer and respond to human mental states. When integrated into robots, ToM allows them to adapt their internal models to users' behaviors, enhancing the interpretability and predictability of their actions. Similarly, Explainable Artificial Intelligence (XAI) aims to make AI systems transparent and interpretable, allowing humans to understand and interact with them effectively. Since ToM in HRI serves related purposes, we propose to consider ToM as a form of XAI and evaluate it through the eValuation XAI (VXAI) framework and its seven desiderata. This paper identifies a critical gap in the application of ToM within HRI, as existing methods rarely assess the extent to which explanations correspond to the robot's actual internal reasoning. To address this limitation, we propose to integrate ToM within XAI frameworks. By embedding ToM principles inside XAI, we argue for a shift in perspective, as current XAI research focuses predominantly on the AI system itself and often lacks user-centered explanations. Incorporating ToM would enable a change in focus, prioritizing the user's informational needs and perspective.
https://arxiv.org/abs/2512.23482
Academic Papers
svg
6072df67e72dd09c9697e4862eee0c01ab3565a282c7f3658d51d0be1862862e
2026-01-01T00:00:00-05:00
Circle graphs can be recognized in linear time
arXiv:2512.23492v2 Announce Type: replace Abstract: To date, the best circle graph recognition algorithm runs in almost linear time as it relies on a split decomposition algorithm that uses the union-find data-structure. We show that in the case of circle graphs, the PC-tree data-structure allows one to avoid the union-find data-structure to compute the split decomposition in linear time. As a consequence, we obtain the first linear-time recognition algorithm for circle graphs.
https://arxiv.org/abs/2512.23492
Academic Papers
svg
36f86ae2b5e254a496220a3a569f4e04a8c2b27296267f6242836dd06d00a014
2026-01-01T00:00:00-05:00
UniHetero: Could Generation Enhance Understanding for Vision-Language-Model at Large Data Scale?
arXiv:2512.23512v2 Announce Type: replace Abstract: Vision-language large models are moving toward the unification of visual understanding and visual generation tasks. However, whether generation can enhance understanding is still under-explored at large data scale. In this work, we analyze the unified structure with a concise model, UniHetero, under large-scale pretraining (>200M samples). Our key observations are: (1) Generation can improve understanding, but only if you generate semantics, not pixels. A common assumption in unified vision-language models is that adding generation will naturally strengthen understanding. However, this is not always true at scale. At 200M+ pretraining samples, generation helps understanding only when it operates at the semantic level, i.e. when the model learns to autoregress high-level visual representations inside the LLM. Once pixel-level objectives (e.g., diffusion losses) directly interfere with the LLM, understanding performance often degrades. (2) Generation reveals a superior data scaling trend and higher data utilization. Unified generation-understanding demonstrates a superior scaling trend compared to understanding alone, revealing a more effective way to learn vision-only knowledge directly from the vision modality rather than via captioning to text. (3) Autoregression on input embeddings is effective for capturing visual details. Compared to the commonly-used vision encoder, visual autoregression on input embeddings shows less cumulative error and is modality-independent, so it can be extended to all modalities. The learned semantic representations capture visual information such as objects, locations, shapes, and colors, and further enable pixel-level image generation.
https://arxiv.org/abs/2512.23512
Academic Papers
svg
68bac842ecd90aac9bb585468a7f8b96d20fd3faf136fc6a8c2737d9468d4c94
2026-01-01T00:00:00-05:00
RxnBench: A Multimodal Benchmark for Evaluating Large Language Models on Chemical Reaction Understanding from Scientific Literature
arXiv:2512.23565v2 Announce Type: replace Abstract: The integration of Multimodal Large Language Models (MLLMs) into chemistry promises to revolutionize scientific discovery, yet their ability to comprehend the dense, graphical language of reactions within authentic literature remains underexplored. Here, we introduce RxnBench, a multi-tiered benchmark designed to rigorously evaluate MLLMs on chemical reaction understanding from scientific PDFs. RxnBench comprises two tasks: Single-Figure QA (SF-QA), which tests fine-grained visual perception and mechanistic reasoning using 1,525 questions derived from 305 curated reaction schemes, and Full-Document QA (FD-QA), which challenges models to synthesize information from 108 articles, requiring cross-modal integration of text, schemes, and tables. Our evaluation of MLLMs reveals a critical capability gap: while models excel at extracting explicit text, they struggle with deep chemical logic and precise structural recognition. Notably, models with inference-time reasoning significantly outperform standard architectures, yet none achieve 50\% accuracy on FD-QA. These findings underscore the urgent need for domain-specific visual encoders and stronger reasoning engines to advance autonomous AI chemists.
https://arxiv.org/abs/2512.23565
Academic Papers
svg
e95c75d46758318c032487c47dfb86f2481fde129c51a808990d44a2cca340d5
2026-01-01T00:00:00-05:00
Algorithms for Distance Sensitivity Oracles and other Graph Problems on the PRAM
arXiv:2512.23604v2 Announce Type: replace Abstract: The distance sensitivity oracle (DSO) problem asks us to preprocess a given graph $G=(V,E)$ in order to answer queries of the form $d(x,y,e)$, which denotes the shortest path distance in $G$ from vertex $x$ to vertex $y$ when edge $e$ is removed. This is an important problem for network communication, and it has been extensively studied in the sequential setting and recently in the distributed CONGEST model. However, no prior DSO results tailored to the parallel setting were known. We present the first PRAM algorithms to construct DSOs in directed weighted graphs, that can answer a query in $O(1)$ time with a single processor after preprocessing. We also present the first work-optimal PRAM algorithms for other graph problems that belong to the sequential $\tilde{O}(mn)$ fine-grained complexity class: Replacement Paths, Second Simple Shortest Path, All Pairs Second Simple Shortest Paths and Minimum Weight Cycle.
https://arxiv.org/abs/2512.23604
Academic Papers
svg
28a2dae6b49d9a8cd815a956aa11047d544197afb35054edb23731e7d7088d31
2026-01-01T00:00:00-05:00
Physics-Informed Neural Networks for Device and Circuit Modeling: A Case Study of NeuroSPICE
arXiv:2512.23624v2 Announce Type: replace Abstract: We present NeuroSPICE, a physics-informed neural network (PINN) framework for device and circuit simulation. Unlike conventional SPICE, which relies on time-discretized numerical solvers, NeuroSPICE leverages PINNs to solve circuit differential-algebraic equations (DAEs) by minimizing the residual of the equations through backpropagation. It models device and circuit waveforms using analytical equations in time domain with exact temporal derivatives. While PINNs do not outperform SPICE in speed or accuracy during training, they offer unique advantages such as surrogate models for design optimization and inverse problems. NeuroSPICE's flexibility enables the simulation of emerging devices, including highly nonlinear systems such as ferroelectric memories.
https://arxiv.org/abs/2512.23624
Academic Papers
svg
056a7c1eb625dc13365c7702a6e5b0717c6225d2f4627c9fdcf04fb31a5e62b2
2026-01-01T00:00:00-05:00
RoboMirror: Understand Before You Imitate for Video to Humanoid Locomotion
arXiv:2512.23649v2 Announce Type: replace Abstract: Humans learn locomotion through visual observation, interpreting visual content first before imitating actions. However, state-of-the-art humanoid locomotion systems rely on either curated motion capture trajectories or sparse text commands, leaving a critical gap between visual understanding and control. Text-to-motion methods suffer from semantic sparsity and staged pipeline errors, while video-based approaches only perform mechanical pose mimicry without genuine visual understanding. We propose RoboMirror, the first retargeting-free video-to-locomotion framework embodying "understand before you imitate". Leveraging VLMs, it distills raw egocentric/third-person videos into visual motion intents, which directly condition a diffusion-based policy to generate physically plausible, semantically aligned locomotion without explicit pose reconstruction or retargeting. Extensive experiments validate the effectiveness of RoboMirror: it enables telepresence via egocentric videos, drastically reduces third-person control latency by 80%, and achieves a 3.7% higher task success rate than baselines. By reframing humanoid control around video understanding, we bridge the gap between visual understanding and action.
https://arxiv.org/abs/2512.23649
Academic Papers
svg
39d76924ca357e823653c47df8edc8cf453d8ea47921cd0ab5d20c5060dd3fe3
2026-01-01T00:00:00-05:00
IDT: A Physically Grounded Transformer for Feed-Forward Multi-View Intrinsic Decomposition
arXiv:2512.23667v2 Announce Type: replace Abstract: Intrinsic image decomposition is fundamental for visual understanding, as RGB images entangle material properties, illumination, and view-dependent effects. Recent diffusion-based methods have achieved strong results for single-view intrinsic decomposition; however, extending these approaches to multi-view settings remains challenging, often leading to severe view inconsistency. We propose \textbf{Intrinsic Decomposition Transformer (IDT)}, a feed-forward framework for multi-view intrinsic image decomposition. By leveraging transformer-based attention to jointly reason over multiple input images, IDT produces view-consistent intrinsic factors in a single forward pass, without iterative generative sampling. IDT adopts a physically grounded image formation model that explicitly decomposes images into diffuse reflectance, diffuse shading, and specular shading. This structured factorization separates Lambertian and non-Lambertian light transport, enabling interpretable and controllable decomposition of material and illumination effects across views. Experiments on both synthetic and real-world datasets demonstrate that IDT achieves cleaner diffuse reflectance, more coherent diffuse shading, and better-isolated specular components, while substantially improving multi-view consistency compared to prior intrinsic decomposition methods.
https://arxiv.org/abs/2512.23667
Academic Papers
svg
f950f996e53a34367611d4e0e0f2ddd2b754cc9456c9a4ecfd5d185c75c33b50
2026-01-01T00:00:00-05:00
End-to-End Test-Time Training for Long Context
arXiv:2512.23675v2 Announce Type: replace Abstract: We formulate long-context language modeling as a problem in continual learning rather than architecture design. Under this formulation, we only use a standard architecture -- a Transformer with sliding-window attention. However, our model continues learning at test time via next-token prediction on the given context, compressing the context it reads into its weights. In addition, we improve the model's initialization for learning at test time via meta-learning at training time. Overall, our method, a form of Test-Time Training (TTT), is End-to-End (E2E) both at test time (via next-token prediction) and training time (via meta-learning), in contrast to previous forms. We conduct extensive experiments with a focus on scaling properties. In particular, for 3B models trained with 164B tokens, our method (TTT-E2E) scales with context length in the same way as Transformer with full attention, while others, such as Mamba 2 and Gated DeltaNet, do not. However, similar to RNNs, TTT-E2E has constant inference latency regardless of context length, making it 2.7 times faster than full attention for 128K context. Our code is publicly available.
https://arxiv.org/abs/2512.23675
Academic Papers
svg
711ffc0ebfd7e2c93e235671b1eb26d1418676cf50a0cf6b203706846adda99b
2026-01-01T00:00:00-05:00
The Simultaneous Triple Product Property and Group-theoretic Results for the Exponent of Matrix Multiplication
arXiv:cs/0703145v5 Announce Type: replace Abstract: We describe certain special consequences of certain elementary methods from group theory for studying the algebraic complexity of matrix multiplication, as developed by H. Cohn, C. Umans et al. in 2003 and 2005. The measure of complexity here is the exponent of matrix multiplication, a real parameter between 2 and 3, which has been conjectured to be 2. More specifically, a finite group may simultaneously "realize" several independent matrix multiplications via its regular algebra if it has a family of triples of "index" subsets which satisfy the so-called simultaneous triple product property (STPP), in which case the complexity of these several multiplications does not exceed the rank (complexity) of the algebra. This leads to bounds for the exponent in terms of the size of the group and the sizes of its STPP triples, as well as the dimensions of its distinct irreducible representations. Wreath products of Abelian with symmetric groups appear especially important in this regard, and we give an example of such a group which shows that the exponent is less than 2.84, and could possibly be as small as 2.02 depending on the number of simultaneous matrix multiplications it realizes.
https://arxiv.org/abs/cs/0703145
Academic Papers
svg
18a5f26f12c3bebd9afbc3b14e62fdc9953582fc99ab8bf00f0fc44ad5919b76
2026-01-01T00:00:00-05:00
Efficient Active Learning with Abstention
arXiv:2204.00043v3 Announce Type: replace-cross Abstract: The goal of active learning is to achieve the same accuracy achievable by passive learning, while using much fewer labels. Exponential savings in terms of label complexity have been proved in very special cases, but fundamental lower bounds show that such improvements are impossible in general. This suggests a need to explore alternative goals for active learning. Learning with abstention is one such alternative. In this setting, the active learning algorithm may abstain from prediction and incur an error that is marginally smaller than random guessing. We develop the first computationally efficient active learning algorithm with abstention. Our algorithm provably achieves $\mathsf{polylog}(\frac{1}{\varepsilon})$ label complexity, without any low noise conditions. Such performance guarantee reduces the label complexity by an exponential factor, relative to passive learning and active learning that is not allowed to abstain. Furthermore, our algorithm is guaranteed to only abstain on hard examples (where the true label distribution is close to a fair coin), a novel property we term proper abstention that also leads to a host of other desirable characteristics (e.g., recovering minimax guarantees in the standard setting, and avoiding the undesirable "noise-seeking" behavior often seen in active learning). We also provide novel extensions of our algorithm that achieve constant label complexity and deal with model misspecification.
https://arxiv.org/abs/2204.00043
Academic Papers
svg