id
stringlengths 64
64
| published
stringlengths 19
25
| title
stringlengths 7
262
| description
stringlengths 6
54.4k
| link
stringlengths 31
227
| category
stringclasses 6
values | image
stringlengths 3
247
|
|---|---|---|---|---|---|---|
6881a28b86a9792d7f893af57ae650b3cbf86ff102cc1a7ba11c418e53bb5405
|
2026-01-01T00:00:00-05:00
|
Hardware Acceleration for Neural Networks: A Comprehensive Survey
|
arXiv:2512.23914v1 Announce Type: new Abstract: Neural networks have become a dominant computational workload across cloud and edge platforms, but rapid growth in model size and deployment diversity has exposed hardware bottlenecks increasingly dominated by memory movement, communication, and irregular operators rather than peak arithmetic throughput. This survey reviews the technology landscape for hardware acceleration of deep learning, spanning GPUs and tensor-core architectures; domain-specific accelerators (e.g., TPUs/NPUs); FPGA-based designs; ASIC inference engines; and emerging LLM-serving accelerators such as LPUs (language processing units), alongside in-/near-memory computing and neuromorphic/analog approaches. We organize the space using a unified taxonomy across (i) workloads (CNNs, RNNs, GNNs, and Transformers/LLMs), (ii) execution settings (training vs.\ inference; datacenter vs.\ edge), and (iii) optimization levers (reduced precision, sparsity and pruning, operator fusion, compilation and scheduling, and memory-system/interconnect design). We synthesize key architectural ideas including systolic arrays, vector and SIMD engines, specialized attention and softmax kernels, quantization-aware datapaths, and high-bandwidth memory, and we discuss how software stacks and compilers bridge model semantics to hardware. Finally, we highlight open challenges -- including efficient long-context LLM inference (KV-cache management), robust support for dynamic and sparse workloads, energy- and security-aware deployment, and fair benchmarking -- and point to promising directions for the next generation of neural acceleration.
|
https://arxiv.org/abs/2512.23914
|
Academic Papers
|
svg
|
6c7b75b3143d132af392296a1dde42165892877ae2868a6d66794f985d0bbd4b
|
2026-01-01T00:00:00-05:00
|
In Memorium: The Academic Journal
|
arXiv:2512.23915v1 Announce Type: new Abstract: We reflect on the life and influence of the academic journal, charting their history and contributions, discussing how their influence changed society, and examining how in death they will be mourned for what they initially stood for but in the end had moved so far from that they will less missed than they might have been.
|
https://arxiv.org/abs/2512.23915
|
Academic Papers
|
svg
|
96ad2447ba2f560b0a6ea0ad5327fb50229e5542fe60a74e77517d27c346cf50
|
2026-01-01T00:00:00-05:00
|
Constraint Breeds Generalization: Temporal Dynamics as an Inductive Bias
|
arXiv:2512.23916v1 Announce Type: new Abstract: Conventional deep learning prioritizes unconstrained optimization, yet biological systems operate under strict metabolic constraints. We propose that these physical constraints shape dynamics to function not as limitations, but as a temporal inductive bias that breeds generalization. Through a phase-space analysis of signal propagation, we reveal a fundamental asymmetry: expansive dynamics amplify noise, whereas proper dissipative dynamics compress phase space that aligns with the network's spectral bias, compelling the abstraction of invariant features. This condition can be imposed externally via input encoding, or intrinsically through the network's own temporal dynamics. Both pathways require architectures capable of temporal integration and proper constraints to decode induced invariants, whereas static architectures fail to capitalize on temporal structure. Through comprehensive evaluations across supervised classification, unsupervised reconstruction, and zero-shot reinforcement learning, we demonstrate that a critical "transition" regime maximizes generalization capability. These findings establish dynamical constraints as a distinct class of inductive bias, suggesting that robust AI development requires not only scaling and removing limitations, but computationally mastering the temporal characteristics that naturally promote generalization.
|
https://arxiv.org/abs/2512.23916
|
Academic Papers
|
svg
|
7639e866de121a292386cbcce2ff1bd48ce74620c76deebfcdf50724d9f75ce3
|
2026-01-01T00:00:00-05:00
|
Analysis of Collaboration in CS Prizewinning with a Nobel-Turing Comparison
|
arXiv:2512.23919v1 Announce Type: new Abstract: In the scientific community, prizes play a pivotal role in shaping research trajectories by conferring credibility and offering financial incentives to researchers. Yet, we know little about the relationship between academic collaborations and prizewinning. By analyzing over 100 scientific prizes and the collaboration behaviors of over 5,000 prizewinners in CS, we find that prizewinners collaborate earlier and more frequently with other prizewinners than researchers who have not yet received similar recognition. Moreover, CS researchers across age groups collaborate more with prizewinners after winning their first prize, and collaborating with prizewinners after their first win increases the likelihood of the collaborator winning an award. We find that recipients of general CS prizes collaborate more than recipients of more specialized prizes, who collaborate less frequently. With Coarsened Exact Matching (CEM) and regression, we find an increase in prizewinning odds with strength of prizewinner collaboration. We examine the context of recent Nobel Prizes going to CS researchers by showing how an increasing share of Physics awards go to Physics-CS collaborations, and contrast Nobel-Turing winning author's trajectories. Our findings shed light on the relationship between prizewinning and collaboration.
|
https://arxiv.org/abs/2512.23919
|
Academic Papers
|
svg
|
7005aadf503c6f19f93a4329574ab2d1f67bb14d081bf94420021fa0ab05765c
|
2026-01-01T00:00:00-05:00
|
Learning to learn skill assessment for fetal ultrasound scanning
|
arXiv:2512.23920v1 Announce Type: new Abstract: Traditionally, ultrasound skill assessment has relied on expert supervision and feedback, a process known for its subjectivity and time-intensive nature. Previous works on quantitative and automated skill assessment have predominantly employed supervised learning methods, often limiting the analysis to predetermined or assumed factors considered influential in determining skill levels. In this work, we propose a novel bi-level optimisation framework that assesses fetal ultrasound skills by how well a task is performed on the acquired fetal ultrasound images, without using manually predefined skill ratings. The framework consists of a clinical task predictor and a skill predictor, which are optimised jointly by refining the two networks simultaneously. We validate the proposed method on real-world clinical ultrasound videos of scanning the fetal head. The results demonstrate the feasibility of predicting ultrasound skills by the proposed framework, which quantifies optimised task performance as a skill indicator.
|
https://arxiv.org/abs/2512.23920
|
Academic Papers
|
svg
|
ad6c35cb428508caf502b3acd8ee8ead0d5dbfb12bd8a40f9ba7217fd9b46d24
|
2026-01-01T00:00:00-05:00
|
Interactive Machine Learning: From Theory to Scale
|
arXiv:2512.23924v1 Announce Type: new Abstract: Machine learning has achieved remarkable success across a wide range of applications, yet many of its most effective methods rely on access to large amounts of labeled data or extensive online interaction. In practice, acquiring high-quality labels and making decisions through trial-and-error can be expensive, time-consuming, or risky, particularly in large-scale or high-stakes settings. This dissertation studies interactive machine learning, in which the learner actively influences how information is collected or which actions are taken, using past observations to guide future interactions. We develop new algorithmic principles and establish fundamental limits for interactive learning along three dimensions: active learning with noisy data and rich model classes, sequential decision making with large action spaces, and model selection under partial feedback. Our results include the first computationally efficient active learning algorithms achieving exponential label savings without low-noise assumptions; the first efficient, general-purpose contextual bandit algorithms whose guarantees are independent of the size of the action space; and the first tight characterizations of the fundamental cost of model selection in sequential decision making. Overall, this dissertation advances the theoretical foundations of interactive learning by developing algorithms that are statistically optimal and computationally efficient, while also providing principled guidance for deploying interactive learning methods in large-scale, real-world settings.
|
https://arxiv.org/abs/2512.23924
|
Academic Papers
|
svg
|
f661784024f2b775b0705fd725838c38bdc32a4f157da57217a7daa74b25a5ef
|
2026-01-01T00:00:00-05:00
|
Hojabr: Towards a Theory of Everything for AI and Data Analytics
|
arXiv:2512.23925v1 Announce Type: new Abstract: Modern data analytics pipelines increasingly combine relational queries, graph processing, and tensor computation within a single application, but existing systems remain fragmented across paradigms, execution models, and research communities. This fragmentation results in repeated optimization efforts, limited interoperability, and strict separation between logical abstractions and physical execution strategies. We propose Hojabr as a unified declarative intermediate language to address this problem. Hojabr integrates relational algebra, tensor algebra, and constraint-based reasoning within a single higher-order algebraic framework, in which joins, aggregations, tensor contractions, and recursive computations are expressed uniformly. Physical choices, such as join algorithms, execution models, and sparse versus dense tensor representations, are handled as constraint-specialization decisions rather than as separate formalisms. Hojabr supports bidirectional translation with existing declarative languages, enabling programs to be both lowered into Hojabr for analysis and optimization and lifted back into their original declarative form. By making semantic, structural, and algebraic properties explicit, and by supporting extensibility across the compilation stack, Hojabr enables systematic reasoning and reuse of optimization techniques across database systems, machine learning frameworks, and compiler infrastructures.
|
https://arxiv.org/abs/2512.23925
|
Academic Papers
|
svg
|
e9fbd1c4b0ded3bd84fce8c8dfdb982c4036f6e595cba22adebb30f64aad2b35
|
2026-01-01T00:00:00-05:00
|
Identification of fixations and saccades in eye-tracking data using adaptive threshold-based method
|
arXiv:2512.23926v1 Announce Type: new Abstract: Properties of ocular fixations and saccades are highly stochastic during many experimental tasks, and their statistics are often used as proxies for various aspects of cognition. Although distinguishing saccades from fixations is not trivial, experimentalists generally use common ad-hoc thresholds in detection algorithms. This neglects inter-task and inter-individual variability in oculomotor dynamics, and potentially biases the resulting statistics. In this article, we introduce and evaluate an adaptive method based on a Markovian approximation of eye-gaze dynamics, using saccades and fixations as states such that the optimal threshold minimizes state transitions. Applying this to three common threshold-based algorithms (velocity, angular velocity, and dispersion), we evaluate the overall accuracy against a multi-threshold benchmark as well as robustness to noise. We find that a velocity threshold achieves the highest baseline accuracy (90-93\%) across both free-viewing and visual search tasks. However, velocity-based methods degrade rapidly under noise when thresholds remain fixed, with accuracy falling below 20% at high noise levels. Adaptive threshold optimization via K-ratio minimization substantially improves performance under noisy conditions for all algorithms. Adaptive dispersion thresholds demonstrate superior noise robustness, maintaining accuracy above 81% even at extreme noise levels ({\sigma} = 50 px), though a precision-recall trade-off emerges that favors fixation detection at the expense of saccade identification. In addition to demonstrating our parsimonious adaptive thresholding method, these findings provide practical guidance for selecting and tuning classification algorithms based on data quality and analytical priorities.
|
https://arxiv.org/abs/2512.23926
|
Academic Papers
|
svg
|
b2e35aceddbd3d90796c522d8d3a85850d186a59ef265e68973b1253e249167c
|
2026-01-01T00:00:00-05:00
|
SRM at 30: Lessons from Early Data-Centric Networking and Their Impact on Named Data Networking
|
arXiv:2512.23928v1 Announce Type: new Abstract: A 1995 SIGCOMM paper, "A Reliable Multicast Framework for Light-weight Sessions and Application-Level Framing", commonly known as SRM, explored a fundamentally new approach to reliable multiparty data delivery. Rather than adapting established sender-driven reliable unicast mechanisms to multicast, as most contemporaneous proposals did, SRM introduced a data-centric model in which data receivers recover losses by explicitly requesting missing data. Thirty years later, we revisit the SRM framework, examining the challenges it faced, the lessons learned, and its influence on the later development of Named Data Networking (NDN). Experimentations with SRM revealed a fundamental semantic mismatch between its data-centric framework and IP's address-based delivery; while the application layer named data, the network layer remained 'blind' to those names, resulting in inefficient loss recovery. NDN resolves this architectural friction by aligning network delivery with the data-retrieval model and by securing data directly rather than securing communication channels. This retrospective highlights how early insights from SRM informed key design decisions in NDN and illustrates how NDN's design emerged from the cumulative insights gained over decades of networking research and development.
|
https://arxiv.org/abs/2512.23928
|
Academic Papers
|
svg
|
593d7e0212a9c514e033d29de5b967e2ff1ed45bef1198746ddac2f2089600a3
|
2026-01-01T00:00:00-05:00
|
A Proof-of-Concept for Explainable Disease Diagnosis Using Large Language Models and Answer Set Programming
|
arXiv:2512.23932v1 Announce Type: new Abstract: Accurate disease prediction is vital for timely intervention, effective treatment, and reducing medical complications. While symbolic AI has been applied in healthcare, its adoption remains limited due to the effort required for constructing high-quality knowledge bases. This work introduces McCoy, a framework that combines Large Language Models (LLMs) with Answer Set Programming (ASP) to overcome this barrier. McCoy orchestrates an LLM to translate medical literature into ASP code, combines it with patient data, and processes it using an ASP solver to arrive at the final diagnosis. This integration yields a robust, interpretable prediction framework that leverages the strengths of both paradigms. Preliminary results show McCoy has strong performance on small-scale disease diagnosis tasks.
|
https://arxiv.org/abs/2512.23932
|
Academic Papers
|
svg
|
7780d4ee3f1b13b79a1c2d5087e96ca4b3b922b2de0e0803629695ca15be2321
|
2026-01-01T00:00:00-05:00
|
MGML: A Plug-and-Play Meta-Guided Multi-Modal Learning Framework for Incomplete Multimodal Brain Tumor Segmentation
|
arXiv:2512.23936v1 Announce Type: new Abstract: Leveraging multimodal information from Magnetic Resonance Imaging (MRI) plays a vital role in lesion segmentation, especially for brain tumors. However, in clinical practice, multimodal MRI data are often incomplete, making it challenging to fully utilize the available information. Therefore, maximizing the utilization of this incomplete multimodal information presents a crucial research challenge. We present a novel meta-guided multi-modal learning (MGML) framework that comprises two components: meta-parameterized adaptive modality fusion and consistency regularization module. The meta-parameterized adaptive modality fusion (Meta-AMF) enables the model to effectively integrate information from multiple modalities under varying input conditions. By generating adaptive soft-label supervision signals based on the available modalities, Meta-AMF explicitly promotes more coherent multimodal fusion. In addition, the consistency regularization module enhances segmentation performance and implicitly reinforces the robustness and generalization of the overall framework. Notably, our approach does not alter the original model architecture and can be conveniently integrated into the training pipeline for end-to-end model optimization. We conducted extensive experiments on the public BraTS2020 and BraTS2023 datasets. Compared to multiple state-of-the-art methods from previous years, our method achieved superior performance. On BraTS2020, for the average Dice scores across fifteen missing modality combinations, building upon the baseline, our method obtained scores of 87.55, 79.36, and 62.67 for the whole tumor (WT), the tumor core (TC), and the enhancing tumor (ET), respectively. We have made our source code publicly available at https://github.com/worldlikerr/MGML.
|
https://arxiv.org/abs/2512.23936
|
Academic Papers
|
svg
|
6cb63f16aa8c9d09f32f880126bbba031e3386166e35bb5194da38f7ff859e8c
|
2026-01-01T00:00:00-05:00
|
Learnable Query Aggregation with KV Routing for Cross-view Geo-localisation
|
arXiv:2512.23938v1 Announce Type: new Abstract: Cross-view geo-localisation (CVGL) aims to estimate the geographic location of a query image by matching it with images from a large-scale database. However, the significant view-point discrepancies present considerable challenges for effective feature aggregation and alignment. To address these challenges, we propose a novel CVGL system that incorporates three key improvements. Firstly, we leverage the DINOv2 backbone with a convolution adapter fine-tuning to enhance model adaptability to cross-view variations. Secondly, we propose a multi-scale channel reallocation module to strengthen the diversity and stability of spatial representations. Finally, we propose an improved aggregation module that integrates a Mixture-of-Experts (MoE) routing into the feature aggregation process. Specifically, the module dynamically selects expert subspaces for the keys and values in a cross-attention framework, enabling adaptive processing of heterogeneous input domains. Extensive experiments on the University-1652 and SUES-200 datasets demonstrate that our method achieves competitive performance with fewer trained parameters.
|
https://arxiv.org/abs/2512.23938
|
Academic Papers
|
svg
|
22779fcd18672ca125c947dbe77cf41dd77c0e88d6c06735522366ebca3e4e1f
|
2026-01-01T00:00:00-05:00
|
Disentangling Learning from Judgment: Representation Learning for Open Response Analytics
|
arXiv:2512.23941v1 Announce Type: new Abstract: Open-ended responses are central to learning, yet automated scoring often conflates what students wrote with how teachers grade. We present an analytics-first framework that separates content signals from rater tendencies, making judgments visible and auditable via analytics. Using de-identified ASSISTments mathematics responses, we model teacher histories as dynamic priors and derive text representations from sentence embeddings, incorporating centering and residualization to mitigate prompt and teacher confounds. Temporally-validated linear models quantify the contributions of each signal, and a projection surfaces model disagreements for qualitative inspection. Results show that teacher priors heavily influence grade predictions; the strongest results arise when priors are combined with content embeddings (AUC~0.815), while content-only models remain above chance but substantially weaker (AUC~0.626). Adjusting for rater effects sharpens the residual content representation, retaining more informative embedding dimensions and revealing cases where semantic evidence supports understanding as opposed to surface-level differences in how students respond. The contribution presents a practical pipeline that transforms embeddings from mere features into learning analytics for reflection, enabling teachers and researchers to examine where grading practices align (or conflict) with evidence of student reasoning and learning.
|
https://arxiv.org/abs/2512.23941
|
Academic Papers
|
svg
|
070d6ca31cae05c3c783ee92a87d0adf7db6652184c7be24efd496f3f198ef5d
|
2026-01-01T00:00:00-05:00
|
Kinematic-Based Assessment of Surgical Actions in Microanastomosis
|
arXiv:2512.23942v1 Announce Type: new Abstract: Proficiency in microanastomosis is a critical surgical skill in neurosurgery, where the ability to precisely manipulate fine instruments is crucial to successful outcomes. These procedures require sustained attention, coordinated hand movements, and highly refined motor skills, underscoring the need for objective and systematic methods to evaluate and enhance microsurgical training. Conventional assessment approaches typically rely on expert raters supervising the procedures or reviewing surgical videos, which is an inherently subjective process prone to inter-rater variability, inconsistency, and significant time investment. These limitations highlight the necessity for automated and scalable solutions. To address this challenge, we introduce a novel AI-driven framework for automated action segmentation and performance assessment in microanastomosis procedures, designed to operate efficiently on edge computing platforms. The proposed system comprises three main components: (1) an object tip tracking and localization module based on YOLO and DeepSORT; (2) an action segmentation module leveraging self-similarity matrix for action boundary detection and unsupervised clustering; and (3) a supervised classification module designed to evaluate surgical gesture proficiency. Experimental validation on a dataset of 58 expert-rated microanastomosis videos demonstrates the effectiveness of our approach, achieving a frame-level action segmentation accuracy of 92.4% and an overall skill classification accuracy of 85.5% in replicating expert evaluations. These findings demonstrate the potential of the proposed method to provide objective, real-time feedback in microsurgical education, thereby enabling more standardized, data-driven training protocols and advancing competency assessment in high-stakes surgical environments.
|
https://arxiv.org/abs/2512.23942
|
Academic Papers
|
svg
|
ef09207f05ea36d400d2d8e56046bfa3fe791c458ebfb67e43ab3a9a60419321
|
2026-01-01T00:00:00-05:00
|
Statistical Guarantees in the Search for Less Discriminatory Algorithms
|
arXiv:2512.23943v1 Announce Type: new Abstract: Recent scholarship has argued that firms building data-driven decision systems in high-stakes domains like employment, credit, and housing should search for "less discriminatory algorithms" (LDAs) (Black et al., 2024). That is, for a given decision problem, firms considering deploying a model should make a good-faith effort to find equally performant models with lower disparate impact across social groups. Evidence from the literature on model multiplicity shows that randomness in training pipelines can lead to multiple models with the same performance, but meaningful variations in disparate impact. This suggests that developers can find LDAs simply by randomly retraining models. Firms cannot continue retraining forever, though, which raises the question: What constitutes a good-faith effort? In this paper, we formalize LDA search via model multiplicity as an optimal stopping problem, where a model developer with limited information wants to produce strong evidence that they have sufficiently explored the space of models. Our primary contribution is an adaptive stopping algorithm that yields a high-probability upper bound on the gains achievable from a continued search, allowing the developer to certify (e.g., to a court) that their search was sufficient. We provide a framework under which developers can impose stronger assumptions about the distribution of models, yielding correspondingly stronger bounds. We validate the method on real-world credit, employment and housing datasets.
|
https://arxiv.org/abs/2512.23943
|
Academic Papers
|
svg
|
672bcc6504b6ea23bcba5d682efeaceab3c08aa0c76881d3abd846771b165d47
|
2026-01-01T00:00:00-05:00
|
Decoupling Constraint from Two Direction in Evolutionary Constrained Multi-objective Optimization
|
arXiv:2512.23945v1 Announce Type: new Abstract: Real-world Constrained Multi-objective Optimization Problems (CMOPs) often contain multiple constraints, and understanding and utilizing the coupling between these constraints is crucial for solving CMOPs. However, existing Constrained Multi-objective Evolutionary Algorithms (CMOEAs) typically ignore these couplings and treat all constraints as a single aggregate, which lacks interpretability regarding the specific geometric roles of constraints. To address this limitation, we first analyze how different constraints interact and show that the final Constrained Pareto Front (CPF) depends not only on the Pareto fronts of individual constraints but also on the boundaries of infeasible regions. This insight implies that CMOPs with different coupling types must be solved from different search directions. Accordingly, we propose a novel algorithm named Decoupling Constraint from Two Directions (DCF2D). This method periodically detects constraint couplings and spawns an auxiliary population for each relevant constraint with an appropriate search direction. Extensive experiments on seven challenging CMOP benchmark suites and on a collection of real-world CMOPs demonstrate that DCF2D outperforms five state-of-the-art CMOEAs, including existing decoupling-based methods.
|
https://arxiv.org/abs/2512.23945
|
Academic Papers
|
svg
|
2536fbb08a24aeed8f3a25fd7e4cadb74d9cafb5d7b5743efe2196cb7ada6310
|
2026-01-01T00:00:00-05:00
|
Improved Balanced Classification with Theoretically Grounded Loss Functions
|
arXiv:2512.23947v1 Announce Type: new Abstract: The balanced loss is a widely adopted objective for multi-class classification under class imbalance. By assigning equal importance to all classes, regardless of their frequency, it promotes fairness and ensures that minority classes are not overlooked. However, directly minimizing the balanced classification loss is typically intractable, which makes the design of effective surrogate losses a central question. This paper introduces and studies two advanced surrogate loss families: Generalized Logit-Adjusted (GLA) loss functions and Generalized Class-Aware weighted (GCA) losses. GLA losses generalize Logit-Adjusted losses, which shift logits based on class priors, to the broader general cross-entropy loss family. GCA loss functions extend the standard class-weighted losses, which scale losses inversely by class frequency, by incorporating class-dependent confidence margins and extending them to the general cross-entropy family. We present a comprehensive theoretical analysis of consistency for both loss families. We show that GLA losses are Bayes-consistent, but only $H$-consistent for complete (i.e., unbounded) hypothesis sets. Moreover, their $H$-consistency bounds depend inversely on the minimum class probability, scaling at least as $1/\mathsf p_{\min}$. In contrast, GCA losses are $H$-consistent for any hypothesis set that is bounded or complete, with $H$-consistency bounds that scale more favorably as $1/\sqrt{\mathsf p_{\min}}$, offering significantly stronger theoretical guarantees in imbalanced settings. We report the results of experiments demonstrating that, empirically, both the GCA losses with calibrated class-dependent confidence margins and GLA losses can greatly outperform straightforward class-weighted losses as well as the LA losses. GLA generally performs slightly better in common benchmarks, whereas GCA exhibits a slight edge in highly imbalanced settings.
|
https://arxiv.org/abs/2512.23947
|
Academic Papers
|
svg
|
f6ba6579e705c54ac0cee87bd78fe5221b0898b944ef23310e61a3845aa955a2
|
2026-01-01T00:00:00-05:00
|
DivQAT: Enhancing Robustness of Quantized Convolutional Neural Networks against Model Extraction Attacks
|
arXiv:2512.23948v1 Announce Type: new Abstract: Convolutional Neural Networks (CNNs) and their quantized counterparts are vulnerable to extraction attacks, posing a significant threat of IP theft. Yet, the robustness of quantized models against these attacks is little studied compared to large models. Previous defenses propose to inject calculated noise into the prediction probabilities. However, these defenses are limited since they are not incorporated during the model design and are only added as an afterthought after training. Additionally, most defense techniques are computationally expensive and often have unrealistic assumptions about the victim model that are not feasible in edge device implementations and do not apply to quantized models. In this paper, we propose DivQAT, a novel algorithm to train quantized CNNs based on Quantization Aware Training (QAT) aiming to enhance their robustness against extraction attacks. To the best of our knowledge, our technique is the first to modify the quantization process to integrate a model extraction defense into the training process. Through empirical validation on benchmark vision datasets, we demonstrate the efficacy of our technique in defending against model extraction attacks without compromising model accuracy. Furthermore, combining our quantization technique with other defense mechanisms improves their effectiveness compared to traditional QAT.
|
https://arxiv.org/abs/2512.23948
|
Academic Papers
|
svg
|
4a06271ed2c5f7787e806dbe821e96b791eb05751899b1bbffdb1bce09326d50
|
2026-01-01T00:00:00-05:00
|
U-Net-Like Spiking Neural Networks for Single Image Dehazing
|
arXiv:2512.23950v1 Announce Type: new Abstract: Image dehazing is a critical challenge in computer vision, essential for enhancing image clarity in hazy conditions. Traditional methods often rely on atmospheric scattering models, while recent deep learning techniques, specifically Convolutional Neural Networks (CNNs) and Transformers, have improved performance by effectively analyzing image features. However, CNNs struggle with long-range dependencies, and Transformers demand significant computational resources. To address these limitations, we propose DehazeSNN, an innovative architecture that integrates a U-Net-like design with Spiking Neural Networks (SNNs). DehazeSNN captures multi-scale image features while efficiently managing local and long-range dependencies. The introduction of the Orthogonal Leaky-Integrate-and-Fire Block (OLIFBlock) enhances cross-channel communication, resulting in superior dehazing performance with reduced computational burden. Our extensive experiments show that DehazeSNN is highly competitive to state-of-the-art methods on benchmark datasets, delivering high-quality haze-free images with a smaller model size and less multiply-accumulate operations. The proposed dehazing method is publicly available at https://github.com/HaoranLiu507/DehazeSNN.
|
https://arxiv.org/abs/2512.23950
|
Academic Papers
|
svg
|
20118a8e087277cde98559a1b91282bd78a83f42157d220f2cf3e6d820620e75
|
2026-01-01T00:00:00-05:00
|
Squeezing Edge Performance: A Sensitivity-Aware Container Management for Heterogeneous Tasks
|
arXiv:2512.23952v1 Announce Type: new Abstract: Edge computing enables latency-critical applications to process data close to end devices, yet task heterogeneity and limited resources pose significant challenges to efficient orchestration. This paper presents a measurement-driven, container-based resource management framework for intra-node optimization on a single edge server hosting multiple heterogeneous applications. Extensive profiling experiments are conducted to derive a nonlinear fitting model that characterizes the relationship among CPU/memory allocations and processing latency across diverse workloads, enabling reliable estimation of performance under varying configurations and providing quantitative support for subsequent optimization. Using this model and a queueing-based delay formulation, we formulate a mixed-integer nonlinear programming (MINLP) problem to jointly minimize system latency and power consumption, which is shown to be NP-hard. The problem is decomposed into tractable convex subproblems and solved through a two-stage container-based resource management scheme (CRMS) combining convex optimization and greedy refinement. The proposed scheme achieves polynomial-time complexity and supports quasi-dynamic execution under global resource constraints. Simulation results demonstrate that CRMS reduces latency by over 14\% and improves energy efficiency compared with heuristic and search-based baselines, offering a practical and scalable solution for heterogeneous edge environments with dynamic workload characteristics.
|
https://arxiv.org/abs/2512.23952
|
Academic Papers
|
svg
|
096ae0fbd15bffac72e7c7153f0c784533c2b18e417e7c38c786a0a38d0ce7bb
|
2026-01-01T00:00:00-05:00
|
T2VAttack: Adversarial Attack on Text-to-Video Diffusion Models
|
arXiv:2512.23953v1 Announce Type: new Abstract: The rapid evolution of Text-to-Video (T2V) diffusion models has driven remarkable advancements in generating high-quality, temporally coherent videos from natural language descriptions. Despite these achievements, their vulnerability to adversarial attacks remains largely unexplored. In this paper, we introduce T2VAttack, a comprehensive study of adversarial attacks on T2V diffusion models from both semantic and temporal perspectives. Considering the inherently dynamic nature of video data, we propose two distinct attack objectives: a semantic objective to evaluate video-text alignment and a temporal objective to assess the temporal dynamics. To achieve an effective and efficient attack process, we propose two adversarial attack methods: (i) T2VAttack-S, which identifies semantically or temporally critical words in prompts and replaces them with synonyms via greedy search, and (ii) T2VAttack-I, which iteratively inserts optimized words with minimal perturbation to the prompt. By combining these objectives and strategies, we conduct a comprehensive evaluation on the adversarial robustness of several state-of-the-art T2V models, including ModelScope, CogVideoX, Open-Sora, and HunyuanVideo. Our experiments reveal that even minor prompt modifications, such as the substitution or insertion of a single word, can cause substantial degradation in semantic fidelity and temporal dynamics, highlighting critical vulnerabilities in current T2V diffusion models.
|
https://arxiv.org/abs/2512.23953
|
Academic Papers
|
svg
|
e07bd9d5bd5b42668ccd636e3fd7bdce2a9401ddeaf63a328e14434a88104a82
|
2026-01-01T00:00:00-05:00
|
Improving Multi-step RAG with Hypergraph-based Memory for Long-Context Complex Relational Modeling
|
arXiv:2512.23959v1 Announce Type: new Abstract: Multi-step retrieval-augmented generation (RAG) has become a widely adopted strategy for enhancing large language models (LLMs) on tasks that demand global comprehension and intensive reasoning. Many RAG systems incorporate a working memory module to consolidate retrieved information. However, existing memory designs function primarily as passive storage that accumulates isolated facts for the purpose of condensing the lengthy inputs and generating new sub-queries through deduction. This static nature overlooks the crucial high-order correlations among primitive facts, the compositions of which can often provide stronger guidance for subsequent steps. Therefore, their representational strength and impact on multi-step reasoning and knowledge evolution are limited, resulting in fragmented reasoning and weak global sense-making capacity in extended contexts. We introduce HGMem, a hypergraph-based memory mechanism that extends the concept of memory beyond simple storage into a dynamic, expressive structure for complex reasoning and global understanding. In our approach, memory is represented as a hypergraph whose hyperedges correspond to distinct memory units, enabling the progressive formation of higher-order interactions within memory. This mechanism connects facts and thoughts around the focal problem, evolving into an integrated and situated knowledge structure that provides strong propositions for deeper reasoning in subsequent steps. We evaluate HGMem on several challenging datasets designed for global sense-making. Extensive experiments and in-depth analyses show that our method consistently improves multi-step RAG and substantially outperforms strong baseline systems across diverse tasks.
|
https://arxiv.org/abs/2512.23959
|
Academic Papers
|
svg
|
d9d0029f7433ad74a592ebc63469c0dc6a5c04d13ed1b827d4dd22219a5769be
|
2026-01-01T00:00:00-05:00
|
An Comparative Analysis about KYC on a Recommendation System Toward Agentic Recommendation System
|
arXiv:2512.23961v1 Announce Type: new Abstract: This research presents a cutting-edge recommendation system utilizing agentic AI for KYC (Know Your Customer in the financial domain), and its evaluation across five distinct content verticals: Advertising (Ad), News, Gossip, Sharing (User-Generated Content), and Technology (Tech). The study compares the performance of four experimental groups, grouping by the intense usage of KYC, benchmarking them against the Normalized Discounted Cumulative Gain (nDCG) metric at truncation levels of $k=1$, $k=3$, and $k=5$. By synthesizing experimental data with theoretical frameworks and industry benchmarks from platforms such as Baidu and Xiaohongshu, this research provides insight by showing experimental results for engineering a large-scale agentic recommendation system.
|
https://arxiv.org/abs/2512.23961
|
Academic Papers
|
svg
|
a6cb5df48bdaa7599c1e4d9e62d5ca238c8e8cee6024f3feb1a69f59457181ce
|
2026-01-01T00:00:00-05:00
|
Physics-informed Graph Neural Networks for Operational Flood Modeling
|
arXiv:2512.23964v1 Announce Type: new Abstract: Flood models inform strategic disaster management by simulating the spatiotemporal hydrodynamics of flooding. While physics-based numerical flood models are accurate, their substantial computational cost limits their use in operational settings where rapid predictions are essential. Models designed with graph neural networks (GNNs) provide both speed and accuracy while having the ability to process unstructured spatial domains. Given its flexible input and architecture, GNNs can be leveraged alongside physics-informed techniques with ease, significantly improving interpretability. This study introduces a novel flood GNN architecture, DUALFloodGNN, which embeds physical constraints at both global and local scales through explicit loss terms. The model jointly predicts water volume at nodes and flow along edges through a shared message-passing framework. To improve performance for autoregressive inference, model training is conducted with a multi-step loss enhanced with dynamic curriculum learning. Compared with standard GNN architectures and state-of-the-art GNN flood models, DUALFloodGNN achieves substantial improvements in predicting multiple hydrologic variables while maintaining high computational efficiency. The model is open-sourced at https://github.com/acostacos/dual_flood_gnn.
|
https://arxiv.org/abs/2512.23964
|
Academic Papers
|
svg
|
e4a723626c5496852c58f1d4b704784ff85b58a819797292cb6825b2473333c6
|
2026-01-01T00:00:00-05:00
|
Multimodal sampling via Schr\"odinger-F\"ollmer samplers with temperatures
|
arXiv:2512.23965v1 Announce Type: new Abstract: Generating samples from complex and high-dimensional distributions is ubiquitous in various scientific fields of statistical physics, Bayesian inference, scientific computing and machine learning. Very recently, Huang et al. (IEEE Trans. Inform. Theory, 2025) proposed new Schr\"odinger-F\"ollmer samplers (SFS), based on the Euler discretization of the Schr\"odinger-F\"ollmer diffusion evolving on the unit interval $[0, 1]$. There, a convergence rate of order $\mathcal{O}(\sqrt{h})$ in the $L^2$-Wasserstein distance was obtained for the Euler discretization with a uniform time step-size $h>0$. By incorporating a temperature parameter, different samplers are introduced in this paper, based on the Euler discretization of the Schr\"odinger-F\"ollmer process with temperatures. As revealed by numerical experiments, high temperatures are vital, particularly in sampling from multimodal distributions. Further, a novel approach of error analysis is developed for the time discretization and an enhanced convergence rate of order ${ \mathcal{O}(h)}$ is obtained in the $L^2$-Wasserstein distance, under certain smoothness conditions on the drift. This significantly improves the existing order-half convergence in the aforementioned paper. Unlike Langevin samplers, SFS is of gradient-free, works in a unit interval $[0, 1]$ and does not require any ergodicity. Numerical experiments confirm the convergence rate and show that, the SFS substantially outperforms vanilla Langevin samplers, particularly in sampling from multimodal distributions.
|
https://arxiv.org/abs/2512.23965
|
Academic Papers
|
svg
|
5bd41cee720358d41444e4db9537a773b6d45572e0242e2501a5e3b30af789eb
|
2026-01-01T00:00:00-05:00
|
Efficient Context Scaling with LongCat ZigZag Attention
|
arXiv:2512.23966v1 Announce Type: new Abstract: We introduce LongCat ZigZag Attention (LoZA), which is a sparse attention scheme designed to transform any existing full-attention models into sparse versions with rather limited compute budget. In long-context scenarios, LoZA can achieve significant speed-ups both for prefill-intensive (e.g., retrieval-augmented generation) and decode-intensive (e.g., tool-integrated reasoning) cases. Specifically, by applying LoZA to LongCat-Flash during mid-training, we serve LongCat-Flash-Exp as a long-context foundation model that can swiftly process up to 1 million tokens, enabling efficient long-term reasoning and long-horizon agentic capabilities.
|
https://arxiv.org/abs/2512.23966
|
Academic Papers
|
svg
|
d358e82e70c902b09d54ad35ddefb49977eb3828e8d5bbfab08c9003bfceb7fc
|
2026-01-01T00:00:00-05:00
|
HERO-Sign: Hierarchical Tuning and Efficient Compiler-Time GPU Optimizations for SPHINCS+ Signature Generation
|
arXiv:2512.23969v1 Announce Type: new Abstract: SPHINCS+ is a stateless hash-based signature scheme that provides strong post quantum security, but its signature generation is slow due to intensive hash computations. GPUs offer massive parallelism that can potentially accelerate SPHINCS+ signatures. However, existing GPU-based optimizations either fail to fully exploit the inherent parallelism of SPHINCS+'s Merkle tree structure or lack fine-grained, compiler-level customization across its diverse computational kernels. This paper proposes HERO Sign, a GPU-accelerated SPHINCS+ implementation that adopts hierarchical tuning and efficient compiler time optimizations. HERO Sign reexamines the parallelization opportunities enabled by data independence across SPHINCS+ components, including FORS, MSS, and WOTS+. It introduces a Tree Fusion strategy for FORS, which contains a large number of independent branches. The fusion strategy is guided by an automated Tree Tuning search algorithm that adapts fusion schemes to different GPU architectures. To further improve performance, HERO Sign employs an adaptive compilation strategy that accounts for the varying effectiveness of compiler optimizations across SPHINCS+ kernels such as FORS Sign, TREE Sign, and WOTS+ Sign. During compilation, the strategy automatically selects between PTX and native code paths to maximize efficiency. For batched signature generation, HERO Sign optimizes kernel-level overlapping using a task graph-based construction to reduce multi-stream idle time and kernel launch overhead. Experimental results show that, compared to state of the art GPU implementations, HERO Sign achieves throughput improvements of 1.28-3.13, 1.28-2.92, and 1.24-2.60 under the SPHINCS+ 128f, 192f, and 256f parameter sets on RTX 4090. Similar gains are observed on A100, H100, and GTX 2080, along with a two orders of magnitude reduction in kernel launch latency.
|
https://arxiv.org/abs/2512.23969
|
Academic Papers
|
svg
|
7a4a038aa8788b60baf4ac1c1655bf3d3eb455549b685ff84713239404e1fcd7
|
2026-01-01T00:00:00-05:00
|
CEC-Zero: Zero-Supervision Character Error Correction with Self-Generated Rewards
|
arXiv:2512.23971v1 Announce Type: new Abstract: Large-scale Chinese spelling correction (CSC) remains critical for real-world text processing, yet existing LLMs and supervised methods lack robustness to novel errors and rely on costly annotations. We introduce CEC-Zero, a zero-supervision reinforcement learning framework that addresses this by enabling LLMs to correct their own mistakes. CEC-Zero synthesizes errorful inputs from clean text, computes cluster-consensus rewards via semantic similarity and candidate agreement, and optimizes the policy with PPO. It outperforms supervised baselines by 10--13 F$_1$ points and strong LLM fine-tunes by 5--8 points across 9 benchmarks, with theoretical guarantees of unbiased rewards and convergence. CEC-Zero establishes a label-free paradigm for robust, scalable CSC, unlocking LLM potential in noisy text pipelines.
|
https://arxiv.org/abs/2512.23971
|
Academic Papers
|
svg
|
75ba99b7961d24e48235ff701842792c1956a071015accd0bb568df3d33833f1
|
2026-01-01T00:00:00-05:00
|
SHIELD: Spherical-Projection Hybrid-Frontier Integration for Efficient LiDAR-based Drone Exploration
|
arXiv:2512.23972v1 Announce Type: new Abstract: This paper introduces SHIELD, a Spherical-Projection Hybrid-Frontier Integration for Efficient LiDAR-based Drone exploration method. Although laser LiDAR offers the advantage of a wide field of view, its application in UAV exploration still faces several challenges. The observation quality of LiDAR point clouds is generally inferior to that of depth cameras. Traditional frontier methods based on known and unknown regions impose a heavy computational burden, especially when handling the wide field of view of LiDAR. In addition, regions without point cloud are also difficult to classify as free space through raycasting. To address these problems, the SHIELD is proposed. It maintains an observation-quality occupancy map and performs ray-casting on this map to address the issue of inconsistent point-cloud quality during exploration. A hybrid frontier method is used to tackle both the computational burden and the limitations of point-cloud quality exploration. In addition, an outward spherical-projection ray-casting strategy is proposed to jointly ensure flight safety and exploration efficiency in open areas. Simulations and flight experiments prove the effectiveness of SHIELD. This work will be open-sourced to contribute to the research community.
|
https://arxiv.org/abs/2512.23972
|
Academic Papers
|
svg
|
5b30dda2b8a8d3085e4be8e010b6f1d27fc43edf191deb51dcdb59531ab785ba
|
2026-01-01T00:00:00-05:00
|
A Community-Aware Framework for Influence Maximization with Explicit Accounting for Inter-Community Influence
|
arXiv:2512.23973v1 Announce Type: new Abstract: Influence Maximization (IM) seeks to identify a small set of seed nodes in a social network to maximize expected information spread under a diffusion model. While community-based approaches improve scalability by exploiting modular structure, they typically assume independence between communities, overlooking inter-community influence$\unicode{x2014}$a limitation that reduces effectiveness in real-world networks. We introduce Community-IM++, a scalable framework that explicitly models cross-community diffusion through a principled heuristic based on community-based diffusion degree (CDD) and a progressive budgeting strategy. The algorithm partitions the network, computes CDD to prioritize bridging nodes, and allocates seeds adaptively across communities using lazy evaluation to minimize redundant computations. Experiments on large real-world social networks under different edge weight models show that Community-IM++ achieves near-greedy influence spread at up to 100 times lower runtime, while outperforming Community-IM and degree heuristics across budgets and structural conditions. These results demonstrate the practicality of Community-IM++ for large-scale applications such as viral marketing, misinformation control, and public health campaigns, where efficiency and cross-community reach are critical.
|
https://arxiv.org/abs/2512.23973
|
Academic Papers
|
svg
|
a805cc53fa71298576008d6565257448ebe26b3ef2f34ac8f121c1bdbf4b57f8
|
2026-01-01T00:00:00-05:00
|
Exploring the Potential of Spiking Neural Networks in UWB Channel Estimation
|
arXiv:2512.23975v1 Announce Type: new Abstract: Although existing deep learning-based Ultra-Wide Band (UWB) channel estimation methods achieve high accuracy, their computational intensity clashes sharply with the resource constraints of low-cost edge devices. Motivated by this, this letter explores the potential of Spiking Neural Networks (SNNs) for this task and develops a fully unsupervised SNN solution. To enable a comprehensive performance analysis, we devise an extensive set of comparative strategies and evaluate them on a compelling public benchmark. Experimental results show that our unsupervised approach still attains 80% test accuracy, on par with several supervised deep learning-based strategies. Moreover, compared with complex deep learning methods, our SNN implementation is inherently suited to neuromorphic deployment and offers a drastic reduction in model complexity, bringing significant advantages for future neuromorphic practice.
|
https://arxiv.org/abs/2512.23975
|
Academic Papers
|
svg
|
a83e47346674b4987a8ceca3a2c911a3cb4ff28a23eaa35d2c5a089d593e34c1
|
2026-01-01T00:00:00-05:00
|
Causify DataFlow: A Framework For High-performance Machine Learning Stream Computing
|
arXiv:2512.23977v1 Announce Type: new Abstract: We present DataFlow, a computational framework for building, testing, and deploying high-performance machine learning systems on unbounded time-series data. Traditional data science workflows assume finite datasets and require substantial reimplementation when moving from batch prototypes to streaming production systems. This gap introduces causality violations, batch boundary artifacts, and poor reproducibility of real-time failures. DataFlow resolves these issues through a unified execution model based on directed acyclic graphs (DAGs) with point-in-time idempotency: outputs at any time t depend only on a fixed-length context window preceding t. This guarantee ensures that models developed in batch mode execute identically in streaming production without code changes. The framework enforces strict causality by automatically tracking knowledge time across all transformations, eliminating future-peeking bugs. DataFlow supports flexible tiling across temporal and feature dimensions, allowing the same model to operate at different frequencies and memory profiles via configuration alone. It integrates natively with the Python data science stack and provides fit/predict semantics for online learning, caching and incremental computation, and automatic parallelization through DAG-based scheduling. We demonstrate its effectiveness across domains including financial trading, IoT, fraud detection, and real-time analytics.
|
https://arxiv.org/abs/2512.23977
|
Academic Papers
|
svg
|
37eb2929764f11e069e8e54ce35eb980a393694528c88a3e3d384e5abad0235d
|
2026-01-01T00:00:00-05:00
|
Assured Autonomy: How Operations Research Powers and Orchestrates Generative AI Systems
|
arXiv:2512.23978v1 Announce Type: new Abstract: Generative artificial intelligence (GenAI) is shifting from conversational assistants toward agentic systems -- autonomous decision-making systems that sense, decide, and act within operational workflows. This shift creates an autonomy paradox: as GenAI systems are granted greater operational autonomy, they should, by design, embody more formal structure, more explicit constraints, and stronger tail-risk discipline. We argue stochastic generative models can be fragile in operational domains unless paired with mechanisms that provide verifiable feasibility, robustness to distribution shift, and stress testing under high-consequence scenarios. To address this challenge, we develop a conceptual framework for assured autonomy grounded in operations research (OR), built on two complementary approaches. First, flow-based generative models frame generation as deterministic transport characterized by an ordinary differential equation, enabling auditability, constraint-aware generation, and connections to optimal transport, robust optimization, and sequential decision control. Second, operational safety is formulated through an adversarial robustness lens: decision rules are evaluated against worst-case perturbations within uncertainty or ambiguity sets, making unmodeled risks part of the design. This framework clarifies how increasing autonomy shifts OR's role from solver to guardrail to system architect, with responsibility for control logic, incentive protocols, monitoring regimes, and safety boundaries. These elements define a research agenda for assured autonomy in safety-critical, reliability-sensitive operational domains.
|
https://arxiv.org/abs/2512.23978
|
Academic Papers
|
svg
|
6d3b21f7ea27d3380999b72ddec4e3afec4ebc7d5238769106f01d0bdd1e8f06
|
2026-01-01T00:00:00-05:00
|
Information-Theoretic Quality Metric of Low-Dimensional Embeddings
|
arXiv:2512.23981v1 Announce Type: new Abstract: In this work we study the quality of low-dimensional embeddings from an explicitly information-theoretic perspective. We begin by noting that classical evaluation metrics such as stress, rank-based neighborhood criteria, or Local Procrustes quantify distortions in distances or in local geometries, but do not directly assess how much information is preserved when projecting high-dimensional data onto a lower-dimensional space. To address this limitation, we introduce the Entropy Rank Preservation Measure (ERPM), a local metric based on the Shannon entropy of the singular-value spectrum of neighborhood matrices and on the stable rank, which quantifies changes in uncertainty between the original representation and its reduced projection, providing neighborhood-level indicators and a global summary statistic. To validate the results of the metric, we compare its outcomes with the Mean Relative Rank Error (MRRE), which is distance-based, and with Local Procrustes, which is based on geometric properties, using a financial time series and a manifold commonly studied in the literature. We observe that distance-based criteria exhibit very low correlation with geometric and spectral measures, while ERPM and Local Procrustes show strong average correlation but display significant discrepancies in local regimes, leading to the conclusion that ERPM complements existing metrics by identifying neighborhoods with severe information loss, thereby enabling a more comprehensive assessment of embeddings, particularly in information-sensitive applications such as the construction of early-warning indicators.
|
https://arxiv.org/abs/2512.23981
|
Academic Papers
|
svg
|
33f87985a1821ec55b98283843e8634a92eb1af96ca36ea4667d2abea111dc7c
|
2026-01-01T00:00:00-05:00
|
Coding With AI: From a Reflection on Industrial Practices to Future Computer Science and Software Engineering Education
|
arXiv:2512.23982v1 Announce Type: new Abstract: Recent advances in large language models (LLMs) have introduced new paradigms in software development, including vibe coding, AI-assisted coding, and agentic coding, fundamentally reshaping how software is designed, implemented, and maintained. Prior research has primarily examined AI-based coding at the individual level or in educational settings, leaving industrial practitioners' perspectives underexplored. This paper addresses this gap by investigating how LLM coding tools are used in professional practice, the associated concerns and risks, and the resulting transformations in development workflows, with particular attention to implications for computing education. We conducted a qualitative analysis of 57 curated YouTube videos published between late 2024 and 2025, capturing reflections and experiences shared by practitioners. Following a filtering and quality assessment process, the selected sources were analyzed to compare LLM-based and traditional programming, identify emerging risks, and characterize evolving workflows. Our findings reveal definitions of AI-based coding practices, notable productivity gains, and lowered barriers to entry. Practitioners also report a shift in development bottlenecks toward code review and concerns regarding code quality, maintainability, security vulnerabilities, ethical issues, erosion of foundational problem-solving skills, and insufficient preparation of entry-level engineers. Building on these insights, we discuss implications for computer science and software engineering education and argue for curricular shifts toward problem-solving, architectural thinking, code review, and early project-based learning that integrates LLM tools. This study offers an industry-grounded perspective on AI-based coding and provides guidance for aligning educational practices with rapidly evolving professional realities.
|
https://arxiv.org/abs/2512.23982
|
Academic Papers
|
svg
|
53232505890b549e1cfccccb8623ad41af406b4e196627fbfe10a3af968c2975
|
2026-01-01T00:00:00-05:00
|
DriveExplorer: Images-Only Decoupled 4D Reconstruction with Progressive Restoration for Driving View Extrapolation
|
arXiv:2512.23983v1 Announce Type: new Abstract: This paper presents an effective solution for view extrapolation in autonomous driving scenarios. Recent approaches focus on generating shifted novel view images from given viewpoints using diffusion models. However, these methods heavily rely on priors such as LiDAR point clouds, 3D bounding boxes, and lane annotations, which demand expensive sensors or labor-intensive labeling, limiting applicability in real-world deployment. In this work, with only images and optional camera poses, we first estimate a global static point cloud and per-frame dynamic point clouds, fusing them into a unified representation. We then employ a deformable 4D Gaussian framework to reconstruct the scene. The initially trained 4D Gaussian model renders degraded and pseudo-images to train a video diffusion model. Subsequently, progressively shifted Gaussian renderings are iteratively refined by the diffusion model,and the enhanced results are incorporated back as training data for 4DGS. This process continues until extrapolation reaches the target viewpoints. Compared with baselines, our method produces higher-quality images at novel extrapolated viewpoints.
|
https://arxiv.org/abs/2512.23983
|
Academic Papers
|
svg
|
25ec2a4d1d5e06531961cccd96d69c319f42b98c136d57974bff74725c7d0877
|
2026-01-01T00:00:00-05:00
|
Anomaly detection in satellite imagery through temporal inpainting
|
arXiv:2512.23986v1 Announce Type: new Abstract: Detecting surface changes from satellite imagery is critical for rapid disaster response and environmental monitoring, yet remains challenging due to the complex interplay between atmospheric noise, seasonal variations, and sensor artifacts. Here we show that deep learning can leverage the temporal redundancy of satellite time series to detect anomalies at unprecedented sensitivity, by learning to predict what the surface should look like in the absence of change. We train an inpainting model built upon the SATLAS foundation model to reconstruct the last frame of a Sentinel-2 time series from preceding acquisitions, using globally distributed training data spanning diverse climate zones and land cover types. When applied to regions affected by sudden surface changes, the discrepancy between prediction and observation reveals anomalies that traditional change detection methods miss. We validate our approach on earthquake-triggered surface ruptures from the 2023 Turkey-Syria earthquake sequence, demonstrating detection of a rift feature in Tepehan with higher sensitivity and specificity than temporal median or Reed-Xiaoli anomaly detectors. Our method reaches detection thresholds approximately three times lower than baseline approaches, providing a path towards automated, global-scale monitoring of surface changes from freely available multi-spectral satellite data.
|
https://arxiv.org/abs/2512.23986
|
Academic Papers
|
svg
|
b3325381495a0c5bbf062a10286e4db53626ebc49f9d2d15208b9fa3b97b0c4a
|
2026-01-01T00:00:00-05:00
|
MeLeMaD: Adaptive Malware Detection via Chunk-wise Feature Selection and Meta-Learning
|
arXiv:2512.23987v1 Announce Type: new Abstract: Confronting the substantial challenges of malware detection in cybersecurity necessitates solutions that are both robust and adaptable to the ever-evolving threat environment. The paper introduces Meta Learning Malware Detection (MeLeMaD), a novel framework leveraging the adaptability and generalization capabilities of Model-Agnostic Meta-Learning (MAML) for malware detection. MeLeMaD incorporates a novel feature selection technique, Chunk-wise Feature Selection based on Gradient Boosting (CFSGB), tailored for handling large-scale, high-dimensional malware datasets, significantly enhancing the detection efficiency. Two benchmark malware datasets (CIC-AndMal2020 and BODMAS) and a custom dataset (EMBOD) were used for rigorously validating the MeLeMaD, achieving a remarkable performance in terms of key evaluation measures, including accuracy, precision, recall, F1-score, MCC, and AUC. With accuracies of 98.04\% on CIC-AndMal2020 and 99.97\% on BODMAS, MeLeMaD outperforms the state-of-the-art approaches. The custom dataset, EMBOD, also achieves a commendable accuracy of 97.85\%. The results underscore the MeLeMaD's potential to address the challenges of robustness, adaptability, and large-scale, high-dimensional datasets in malware detection, paving the way for more effective and efficient cybersecurity solutions.
|
https://arxiv.org/abs/2512.23987
|
Academic Papers
|
svg
|
93f31c948cd26eecfb4564aeecc3becd5b36642d4cc2a6eaa43eea10dcc44da3
|
2026-01-01T00:00:00-05:00
|
Fantastic Reasoning Behaviors and Where to Find Them: Unsupervised Discovery of the Reasoning Process
|
arXiv:2512.23988v1 Announce Type: new Abstract: Despite the growing reasoning capabilities of recent large language models (LLMs), their internal mechanisms during the reasoning process remain underexplored. Prior approaches often rely on human-defined concepts (e.g., overthinking, reflection) at the word level to analyze reasoning in a supervised manner. However, such methods are limited, as it is infeasible to capture the full spectrum of potential reasoning behaviors, many of which are difficult to define in token space. In this work, we propose an unsupervised framework (namely, RISE: Reasoning behavior Interpretability via Sparse auto-Encoder) for discovering reasoning vectors, which we define as directions in the activation space that encode distinct reasoning behaviors. By segmenting chain-of-thought traces into sentence-level 'steps' and training sparse auto-encoders (SAEs) on step-level activations, we uncover disentangled features corresponding to interpretable behaviors such as reflection and backtracking. Visualization and clustering analyses show that these behaviors occupy separable regions in the decoder column space. Moreover, targeted interventions on SAE-derived vectors can controllably amplify or suppress specific reasoning behaviors, altering inference trajectories without retraining. Beyond behavior-specific disentanglement, SAEs capture structural properties such as response length, revealing clusters of long versus short reasoning traces. More interestingly, SAEs enable the discovery of novel behaviors beyond human supervision. We demonstrate the ability to control response confidence by identifying confidence-related vectors in the SAE decoder space. These findings underscore the potential of unsupervised latent discovery for both interpreting and controllably steering reasoning in LLMs.
|
https://arxiv.org/abs/2512.23988
|
Academic Papers
|
svg
|
a6d96f03c8079c4b7c528dcce4490f645aa47de5475d3004846c6f62291bbe2e
|
2026-01-01T00:00:00-05:00
|
Bisplit graphs -- A Structural and algorithmic study
|
arXiv:2512.23989v1 Announce Type: new Abstract: A dominating set $S$ of a graph $G(V,E)$ is called a \textit{secure dominating set} if each vertex $u \in V(G) \setminus S$ is adjacent to a vertex $v \in S$ such that $(S \setminus \{v\}) \cup \{u\}$ is a dominating set of $G$. The \textit{secure domination number} $\gamma_s(G)$ of $G$ is the minimum cardinality of a secure dominating set of $G$. The \textit{Minimum Secure Domination problem} is to find a secure dominating set of a graph $G$ of cardinality $\gamma_s(G)$. In this paper, the computational complexity of the secure domination problem on several graph classes is investigated. The decision version of secure domination problem was shown to be NP-complete on star(comb) convex split graphs and bisplit graphs. So we further focus on complexity analysis of secure domination problem under additional structural restrictions on bisplit graphs. In particular, by imposing chordality as a parameter, we analyse its impact on the computational status of the problem on bisplit graphs. We establish the P versus NP-C dichotomy status of secure domination problem under restrictions on cycle length within bisplit graphs. In addition, we establish that the problem is polynomial-time solvable in chain graphs. We also prove that the secure domination problem cannot be approximated for a bisplit graph within a factor of $(1-\epsilon)~ln~|V|$ for any $\epsilon > 0$, unless $NP \subseteq DTIME(|V|^{O(log~log~|V|)})$.
|
https://arxiv.org/abs/2512.23989
|
Academic Papers
|
svg
|
fdd38af90369118ee9ea232ff7fb61d64c3bb0d3b07bc17a807d096ca4343b2f
|
2026-01-01T00:00:00-05:00
|
GCA-ResUNet: Medical Image Segmentation Using Grouped Coordinate Attention
|
arXiv:2512.23990v1 Announce Type: new Abstract: Accurate segmentation of heterogeneous anatomical structures is pivotal for computer-aided diagnosis and subsequent clinical decision-making. Although U-Net based convolutional neural networks have achieved remarkable progress, their intrinsic locality and largely homogeneous attention formulations often limit the modeling of long-range contextual dependencies, especially in multi-organ scenarios and low-contrast regions. Transformer-based architectures mitigate this issue by leveraging global self-attention, but they usually require higher computational resources and larger training data, which may impede deployment in resource-constrained clinical environments.In this paper, we propose GCA-ResUNet, an efficient medical image segmentation framework equipped with a lightweight and plug-and-play Grouped Coordinate Attention (GCA) module. The proposed GCA decouples channel-wise context modeling into multiple groups to explicitly account for semantic heterogeneity across channels, and integrates direction-aware coordinate encoding to capture structured spatial dependencies along horizontal and vertical axes. This design enhances global representation capability while preserving the efficiency advantages of CNN backbones. Extensive experiments on two widely used benchmarks, Synapse and ACDC, demonstrate that GCA-ResUNet achieves Dice scores of 86.11% and 92.64%, respectively, outperforming a range of representative CNN and Transformer-based methods, including Swin-UNet and TransUNet. In particular, GCA-ResUNet yields consistent improvements in delineating small anatomical structures with complex boundaries. These results indicate that the proposed approach provides a favorable trade-off between segmentation accuracy and computational efficiency, offering a practical and scalable solution for clinical deployment.
|
https://arxiv.org/abs/2512.23990
|
Academic Papers
|
svg
|
4bdf9fa852fa0a50b73fbe94e8655430345874b5c5a18a60a1d2b1deccc5166b
|
2026-01-01T00:00:00-05:00
|
PhyAVBench: A Challenging Audio Physics-Sensitivity Benchmark for Physically Grounded Text-to-Audio-Video Generation
|
arXiv:2512.23994v1 Announce Type: new Abstract: Text-to-audio-video (T2AV) generation underpins a wide range of applications demanding realistic audio-visual content, including virtual reality, world modeling, gaming, and filmmaking. However, existing T2AV models remain incapable of generating physically plausible sounds, primarily due to their limited understanding of physical principles. To situate current research progress, we present PhyAVBench, a challenging audio physics-sensitivity benchmark designed to systematically evaluate the audio physics grounding capabilities of existing T2AV models. PhyAVBench comprises 1,000 groups of paired text prompts with controlled physical variables that implicitly induce sound variations, enabling a fine-grained assessment of models' sensitivity to changes in underlying acoustic conditions. We term this evaluation paradigm the Audio-Physics Sensitivity Test (APST). Unlike prior benchmarks that primarily focus on audio-video synchronization, PhyAVBench explicitly evaluates models' understanding of the physical mechanisms underlying sound generation, covering 6 major audio physics dimensions, 4 daily scenarios (music, sound effects, speech, and their mix), and 50 fine-grained test points, ranging from fundamental aspects such as sound diffraction to more complex phenomena, e.g., Helmholtz resonance. Each test point consists of multiple groups of paired prompts, where each prompt is grounded by at least 20 newly recorded or collected real-world videos, thereby minimizing the risk of data leakage during model pre-training. Both prompts and videos are iteratively refined through rigorous human-involved error correction and quality control to ensure high quality. We argue that only models with a genuine grasp of audio-related physical principles can generate physically consistent audio-visual content. We hope PhyAVBench will stimulate future progress in this critical yet largely unexplored domain.
|
https://arxiv.org/abs/2512.23994
|
Academic Papers
|
svg
|
7508c49bb42296bebfef8d5aa94e60e41bad5ad6256b516b726700627ef1681d
|
2026-01-01T00:00:00-05:00
|
RepetitionCurse: Measuring and Understanding Router Imbalance in Mixture-of-Experts LLMs under DoS Stress
|
arXiv:2512.23995v1 Announce Type: new Abstract: Mixture-of-Experts architectures have become the standard for scaling large language models due to their superior parameter efficiency. To accommodate the growing number of experts in practice, modern inference systems commonly adopt expert parallelism to distribute experts across devices. However, the absence of explicit load balancing constraints during inference allows adversarial inputs to trigger severe routing concentration. We demonstrate that out-of-distribution prompts can manipulate the routing strategy such that all tokens are consistently routed to the same set of top-$k$ experts, which creates computational bottlenecks on certain devices while forcing others to idle. This converts an efficiency mechanism into a denial-of-service attack vector, leading to violations of service-level agreements for time to first token. We propose RepetitionCurse, a low-cost black-box strategy to exploit this vulnerability. By identifying a universal flaw in MoE router behavior, RepetitionCurse constructs adversarial prompts using simple repetitive token patterns in a model-agnostic manner. On widely deployed MoE models like Mixtral-8x7B, our method increases end-to-end inference latency by 3.063x, degrading service availability significantly.
|
https://arxiv.org/abs/2512.23995
|
Academic Papers
|
svg
|
13f60183dc8e7cd0076e823d145fee3efeb5c610e411a9fb7b22cdcd27a6afd1
|
2026-01-01T00:00:00-05:00
|
State Space Estimation for DPOR-based Model Checkers
|
arXiv:2512.23996v1 Announce Type: new Abstract: We study the estimation problem for concurrent programs: given a bounded program $P$, estimate the number of Mazurkiewicz trace-equivalence classes induced by its interleavings. This quantity informs two practical questions for enumeration-based model checking: how long a model checking run is likely to take, and what fraction of the search space has been covered so far. We first show the counting problem is #P-hard even for restricted programs and, unless $P=NP$, inapproximable within any subexponential factor, ruling out efficient exact or randomized approximation algorithms. We give a Monte Carlo approach to obtain a poly-time unbiased estimator: we convert a stateless optimal DPOR algorithm into an unbiased estimator by viewing its exploration as a bounded-depth, bounded-width tree whose leaves are the maximal Mazurkiewicz traces. A classical estimator by Knuth, when run on this tree, yields an unbiased estimate. To control the variance, we apply stochastic enumeration by maintaining a small population of partial paths per depth whose evolution is coupled. We have implemented our estimator in the JMC model checker and evaluated it on shared-memory benchmarks. With modest budgets, our estimator yields stable estimates, typically within a 20% band, within a few hundred trials, even when the state space has $10^5$--$10^6$ classes. We also show how the same machinery estimates model-checking cost by weighting all explored graphs, not only complete traces. Our algorithms provide the first provable poly-time unbiased estimators for counting traces, a problem of considerable importance when allocating model checking resources.
|
https://arxiv.org/abs/2512.23996
|
Academic Papers
|
svg
|
93bf7f4080ef7495b504c505f4071d9e228a468b272fb93f6ac0eb0e1ee4a313
|
2026-01-01T00:00:00-05:00
|
Bridging Structure and Appearance: Topological Features for Robust Self-Supervised Segmentation
|
arXiv:2512.23997v1 Announce Type: new Abstract: Self-supervised semantic segmentation methods often fail when faced with appearance ambiguities. We argue that this is due to an over-reliance on unstable, appearance-based features such as shadows, glare, and local textures. We propose \textbf{GASeg}, a novel framework that bridges appearance and geometry by leveraging stable topological information. The core of our method is Differentiable Box-Counting (\textbf{DBC}) module, which quantifies multi-scale topological statistics from two parallel streams: geometric-based features and appearance-based features. To force the model to learn these stable structural representations, we introduce Topological Augmentation (\textbf{TopoAug}), an adversarial strategy that simulates real-world ambiguities by applying morphological operators to the input images. A multi-objective loss, \textbf{GALoss}, then explicitly enforces cross-modal alignment between geometric-based and appearance-based features. Extensive experiments demonstrate that GASeg achieves state-of-the-art performance on four benchmarks, including COCO-Stuff, Cityscapes, and PASCAL, validating our approach of bridging geometry and appearance via topological information.
|
https://arxiv.org/abs/2512.23997
|
Academic Papers
|
svg
|
4608fd859904a92cdfdc98b68b56dd6b95e446ab3689e310a6eba248a4228558
|
2026-01-01T00:00:00-05:00
|
Improved 3D Gaussian Splatting of Unknown Spacecraft Structure Using Space Environment Illumination Knowledge
|
arXiv:2512.23998v1 Announce Type: new Abstract: This work presents a novel pipeline to recover the 3D structure of an unknown target spacecraft from a sequence of images captured during Rendezvous and Proximity Operations (RPO) in space. The target's geometry and appearance are represented as a 3D Gaussian Splatting (3DGS) model. However, learning 3DGS requires static scenes, an assumption in contrast to dynamic lighting conditions encountered in spaceborne imagery. The trained 3DGS model can also be used for camera pose estimation through photometric optimization. Therefore, in addition to recovering a geometrically accurate 3DGS model, the photometric accuracy of the rendered images is imperative to downstream pose estimation tasks during the RPO process. This work proposes to incorporate the prior knowledge of the Sun's position, estimated and maintained by the servicer spacecraft, into the training pipeline for improved photometric quality of 3DGS rasterization. Experimental studies demonstrate the effectiveness of the proposed solution, as 3DGS models trained on a sequence of images learn to adapt to rapidly changing illumination conditions in space and reflect global shadowing and self-occlusion.
|
https://arxiv.org/abs/2512.23998
|
Academic Papers
|
svg
|
065321126053f23c87eb2dd0de942cd5f20c94c6679d95abca11af812e6b5f43
|
2026-01-01T00:00:00-05:00
|
WISE: Web Information Satire and Fakeness Evaluation
|
arXiv:2512.24000v1 Announce Type: new Abstract: Distinguishing fake or untrue news from satire or humor poses a unique challenge due to their overlapping linguistic features and divergent intent. This study develops WISE (Web Information Satire and Fakeness Evaluation) framework which benchmarks eight lightweight transformer models alongside two baseline models on a balanced dataset of 20,000 samples from Fakeddit, annotated as either fake news or satire. Using stratified 5-fold cross-validation, we evaluate models across comprehensive metrics including accuracy, precision, recall, F1-score, ROC-AUC, PR-AUC, MCC, Brier score, and Expected Calibration Error. Our evaluation reveals that MiniLM, a lightweight model, achieves the highest accuracy (87.58%) among all models, while RoBERTa-base achieves the highest ROC-AUC (95.42%) and strong accuracy (87.36%). DistilBERT offers an excellent efficiency-accuracy trade-off with 86.28\% accuracy and 93.90\% ROC-AUC. Statistical tests confirm significant performance differences between models, with paired t-tests and McNemar tests providing rigorous comparisons. Our findings highlight that lightweight models can match or exceed baseline performance, offering actionable insights for deploying misinformation detection systems in real-world, resource-constrained settings.
|
https://arxiv.org/abs/2512.24000
|
Academic Papers
|
svg
|
48d77e27c7d7d0dfe24ec6b65b52977b29def82dc000b2f8f7aa61714c17af7c
|
2026-01-01T00:00:00-05:00
|
Tracing the Heart's Pathways: ECG Representation Learning from a Cardiac Conduction Perspective
|
arXiv:2512.24002v1 Announce Type: new Abstract: The multi-lead electrocardiogram (ECG) stands as a cornerstone of cardiac diagnosis. Recent strides in electrocardiogram self-supervised learning (eSSL) have brightened prospects for enhancing representation learning without relying on high-quality annotations. Yet earlier eSSL methods suffer a key limitation: they focus on consistent patterns across leads and beats, overlooking the inherent differences in heartbeats rooted in cardiac conduction processes, while subtle but significant variations carry unique physiological signatures. Moreover, representation learning for ECG analysis should align with ECG diagnostic guidelines, which progress from individual heartbeats to single leads and ultimately to lead combinations. This sequential logic, however, is often neglected when applying pre-trained models to downstream tasks. To address these gaps, we propose CLEAR-HUG, a two-stage framework designed to capture subtle variations in cardiac conduction across leads while adhering to ECG diagnostic guidelines. In the first stage, we introduce an eSSL model termed Conduction-LEAd Reconstructor (CLEAR), which captures both specific variations and general commonalities across heartbeats. Treating each heartbeat as a distinct entity, CLEAR employs a simple yet effective sparse attention mechanism to reconstruct signals without interference from other heartbeats. In the second stage, we implement a Hierarchical lead-Unified Group head (HUG) for disease diagnosis, mirroring clinical workflow. Experimental results across six tasks show a 6.84% improvement, validating the effectiveness of CLEAR-HUG. This highlights its ability to enhance representations of cardiac conduction and align patterns with expert diagnostic guidelines.
|
https://arxiv.org/abs/2512.24002
|
Academic Papers
|
svg
|
e2e7959d444cfe14bbcc525dcaa1865e76c3d7660f25b5b5a0883efbc3d8909b
|
2026-01-01T00:00:00-05:00
|
TESO Tabu Enhanced Simulation Optimization for Noisy Black Box Problems
|
arXiv:2512.24007v1 Announce Type: new Abstract: Simulation optimization (SO) is frequently challenged by noisy evaluations, high computational costs, and complex, multimodal search landscapes. This paper introduces Tabu-Enhanced Simulation Optimization (TESO), a novel metaheuristic framework integrating adaptive search with memory-based strategies. TESO leverages a short-term Tabu List to prevent cycling and encourage diversification, and a long-term Elite Memory to guide intensification by perturbing high-performing solutions. An aspiration criterion allows overriding tabu restrictions for exceptional candidates. This combination facilitates a dynamic balance between exploration and exploitation in stochastic environments. We demonstrate TESO's effectiveness and reliability using an queue optimization problem, showing improved performance compared to benchmarks and validating the contribution of its memory components. Source code and data are available at: https://github.com/bulentsoykan/TESO.
|
https://arxiv.org/abs/2512.24007
|
Academic Papers
|
svg
|
5031e4ca896e0f6fe7e314d7902fa6e1be39e687deeb3033f696ef03422b063f
|
2026-01-01T00:00:00-05:00
|
SPARK: Search Personalization via Agent-Driven Retrieval and Knowledge-sharing
|
arXiv:2512.24008v1 Announce Type: new Abstract: Personalized search demands the ability to model users' evolving, multi-dimensional information needs; a challenge for systems constrained by static profiles or monolithic retrieval pipelines. We present SPARK (Search Personalization via Agent-Driven Retrieval and Knowledge-sharing), a framework in which coordinated persona-based large language model (LLM) agents deliver task-specific retrieval and emergent personalization. SPARK formalizes a persona space defined by role, expertise, task context, and domain, and introduces a Persona Coordinator that dynamically interprets incoming queries to activate the most relevant specialized agents. Each agent executes an independent retrieval-augmented generation process, supported by dedicated long- and short-term memory stores and context-aware reasoning modules. Inter-agent collaboration is facilitated through structured communication protocols, including shared memory repositories, iterative debate, and relay-style knowledge transfer. Drawing on principles from cognitive architectures, multi-agent coordination theory, and information retrieval, SPARK models how emergent personalization properties arise from distributed agent behaviors governed by minimal coordination rules. The framework yields testable predictions regarding coordination efficiency, personalization quality, and cognitive load distribution, while incorporating adaptive learning mechanisms for continuous persona refinement. By integrating fine-grained agent specialization with cooperative retrieval, SPARK provides insights for next-generation search systems capable of capturing the complexity, fluidity, and context sensitivity of human information-seeking behavior.
|
https://arxiv.org/abs/2512.24008
|
Academic Papers
|
svg
|
7cabc69e33899ffa6113344b6a64c993ef2453cb0ce05e5e558a7c5be7ec6b29
|
2026-01-01T00:00:00-05:00
|
Bridging the Perception-Cognition Gap:Re-engineering SAM2 with Hilbert-Mamba for Robust VLM-based Medical Diagnosis
|
arXiv:2512.24013v1 Announce Type: new Abstract: Recent studies suggest that Visual Language Models (VLMs) hold great potential for tasks such as automated medical diagnosis. However, processing complex three-dimensional (3D) multimodal medical images poses significant challenges - specifically, the effective integration of complementary information and the occasional oversight of subtle yet critical pathological features. To address these issues, we present a novel two-stage fusion framework termed Hilbert-VLM. This framework leverages the HilbertMed-SAM module for precise lesion segmentation, with the generated multimodal enhanced prompts then guiding the VLM toward accurate disease classification. Our key innovation lies in the systematic redesign of the Segment Anything Model 2 (SAM2) architecture: we incorporate Hilbert space-filling curves into the scanning mechanism of the Mamba State Space Model (SSM) to maximize the preservation of spatial locality in 3D data, a property critical for medical image analysis. We also introduce a novel Hilbert-Mamba Cross-Attention (HMCA) mechanism and a scale-aware decoder to capture fine-grained details. Meanwhile, the prompt enhancement module unifies segmentation masks and their corresponding textual attributes into an information-dense prompt to support VLM inference. Extensive experiments were conducted to validate the effectiveness of the Hilbert-VLM model. On the BraTS2021 segmentation benchmark, it achieves a Dice score of 82.35 percent, with a diagnostic classification accuracy (ACC) of 78.85 percent. These results demonstrate that the proposed model offers substantial potential to improve the accuracy and reliability of medical VLM-based analysis.
|
https://arxiv.org/abs/2512.24013
|
Academic Papers
|
svg
|
72a469dc7ba2e095f4549aaa090d83f198a43a659d856ce3959416e661509541
|
2026-01-01T00:00:00-05:00
|
iCLP: Large Language Model Reasoning with Implicit Cognition Latent Planning
|
arXiv:2512.24014v1 Announce Type: new Abstract: Large language models (LLMs), when guided by explicit textual plans, can perform reliable step-by-step reasoning during problem-solving. However, generating accurate and effective textual plans remains challenging due to LLM hallucinations and the high diversity of task-specific questions. To address this, we draw inspiration from human Implicit Cognition (IC), the subconscious process by which decisions are guided by compact, generalized patterns learned from past experiences without requiring explicit verbalization. We propose iCLP, a novel framework that enables LLMs to adaptively generate latent plans (LPs), which are compact encodings of effective reasoning instructions. iCLP first distills explicit plans from existing step-by-step reasoning trajectories. It then learns discrete representations of these plans via a vector-quantized autoencoder coupled with a codebook. Finally, by fine-tuning LLMs on paired latent plans and corresponding reasoning steps, the models learn to perform implicit planning during reasoning. Experimental results on mathematical reasoning and code generation tasks demonstrate that, with iCLP, LLMs can plan in latent space while reasoning in language space. This approach yields significant improvements in both accuracy and efficiency and, crucially, demonstrates strong cross-domain generalization while preserving the interpretability of chain-of-thought reasoning.
|
https://arxiv.org/abs/2512.24014
|
Academic Papers
|
svg
|
1503d41b6072cdfd11c99dd172e3713aca736f393c401578c2cc58dfe01d2005
|
2026-01-01T00:00:00-05:00
|
On Exact Editing of Flow-Based Diffusion Models
|
arXiv:2512.24015v1 Announce Type: new Abstract: Recent methods in flow-based diffusion editing have enabled direct transformations between source and target image distribution without explicit inversion. However, the latent trajectories in these methods often exhibit accumulated velocity errors, leading to semantic inconsistency and loss of structural fidelity. We propose Conditioned Velocity Correction (CVC), a principled framework that reformulates flow-based editing as a distribution transformation problem driven by a known source prior. CVC rethinks the role of velocity in inter-distribution transformation by introducing a dual-perspective velocity conversion mechanism. This mechanism explicitly decomposes the latent evolution into two components: a structure-preserving branch that remains consistent with the source trajectory, and a semantically-guided branch that drives a controlled deviation toward the target distribution. The conditional velocity field exhibits an absolute velocity error relative to the true underlying distribution trajectory, which inherently introduces potential instability and trajectory drift in the latent space. To address this quantifiable deviation and maintain fidelity to the true flow, we apply a posterior-consistent update to the resulting conditional velocity field. This update is derived from Empirical Bayes Inference and Tweedie correction, which ensures a mathematically grounded error compensation over time. Our method yields stable and interpretable latent dynamics, achieving faithful reconstruction alongside smooth local semantic conversion. Comprehensive experiments demonstrate that CVC consistently achieves superior fidelity, better semantic alignment, and more reliable editing behavior across diverse tasks.
|
https://arxiv.org/abs/2512.24015
|
Academic Papers
|
svg
|
0f163e047fd4b59e486bab26e6f41379d3952bbdf8062fccbfaeed47edc79825
|
2026-01-01T00:00:00-05:00
|
FitControler: Toward Fit-Aware Virtual Try-On
|
arXiv:2512.24016v1 Announce Type: new Abstract: Realistic virtual try-on (VTON) concerns not only faithful rendering of garment details but also coordination of the style. Prior art typically pursues the former, but neglects a key factor that shapes the holistic style -- garment fit. Garment fit delineates how a garment aligns with the body of a wearer and is a fundamental element in fashion design. In this work, we introduce fit-aware VTON and present FitControler, a learnable plug-in that can seamlessly integrate into modern VTON models to enable customized fit control. To achieve this, we highlight two challenges: i) how to delineate layouts of different fits and ii) how to render the garment that matches the layout. FitControler first features a fit-aware layout generator to redraw the body-garment layout conditioned on a set of delicately processed garment-agnostic representations, and a multi-scale fit injector is then used to deliver layout cues to enable layout-driven VTON. In particular, we build a fit-aware VTON dataset termed Fit4Men, including 13,000 body-garment pairs of different fits, covering both tops and bottoms, and featuring varying camera distances and body poses. Two fit consistency metrics are also introduced to assess the fitness of generations. Extensive experiments show that FitControler can work with various VTON models and achieve accurate fit control. Code and data will be released.
|
https://arxiv.org/abs/2512.24016
|
Academic Papers
|
svg
|
798b4de3030ce181f454c11777eb3a50246f2f5ac631c17034f86f2b30b6a8bc
|
2026-01-01T00:00:00-05:00
|
Structure-Guided Allocation of 2D Gaussians for Image Representation and Compression
|
arXiv:2512.24018v1 Announce Type: new Abstract: Recent advances in 2D Gaussian Splatting (2DGS) have demonstrated its potential as a compact image representation with millisecond-level decoding. However, existing 2DGS-based pipelines allocate representation capacity and parameter precision largely oblivious to image structure, limiting their rate-distortion (RD) efficiency at low bitrates. To address this, we propose a structure-guided allocation principle for 2DGS, which explicitly couples image structure with both representation capacity and quantization precision, while preserving native decoding speed. First, we introduce a structure-guided initialization that assigns 2D Gaussians according to spatial structural priors inherent in natural images, yielding a localized and semantically meaningful distribution. Second, during quantization-aware fine-tuning, we propose adaptive bitwidth quantization of covariance parameters, which grants higher precision to small-scale Gaussians in complex regions and lower precision elsewhere, enabling RD-aware optimization, thereby reducing redundancy without degrading edge quality. Third, we impose a geometry-consistent regularization that aligns Gaussian orientations with local gradient directions to better preserve structural details. Extensive experiments demonstrate that our approach substantially improves both the representational power and the RD performance of 2DGS while maintaining over 1000 FPS decoding. Compared with the baseline GSImage, we reduce BD-rate by 43.44% on Kodak and 29.91% on DIV2K.
|
https://arxiv.org/abs/2512.24018
|
Academic Papers
|
svg
|
09b498e2ad4ff6762e0be2e6fe216b4d44f12d19e08a1ffd0e19af5f3a640df8
|
2026-01-01T00:00:00-05:00
|
FUSE-RSVLM: Feature Fusion Vision-Language Model for Remote Sensing
|
arXiv:2512.24022v1 Announce Type: new Abstract: Large vision-language models (VLMs) exhibit strong performance across various tasks. However, these VLMs encounter significant challenges when applied to the remote sensing domain due to the inherent differences between remote sensing images and natural images. Existing remote sensing VLMs often fail to extract fine-grained visual features and suffer from visual forgetting during deep language processing. To address this, we introduce MF-RSVLM, a Multi-Feature Fusion Remote Sensing Vision--Language Model that effectively extracts and fuses visual features for RS understanding. MF-RSVLM learns multi-scale visual representations and combines global context with local details, improving the capture of small and complex structures in RS scenes. A recurrent visual feature injection scheme ensures the language model remains grounded in visual evidence and reduces visual forgetting during generation. Extensive experiments on diverse RS benchmarks show that MF-RSVLM achieves state-of-the-art or highly competitive performance across remote sensing classification, image captioning, and VQA tasks. Our code is publicly available at https://github.com/Yunkaidang/RSVLM.
|
https://arxiv.org/abs/2512.24022
|
Academic Papers
|
svg
|
e27aea8b3a32452ba832f222ae540b23364e386537e1b3ab50399f1b402363a7
|
2026-01-01T00:00:00-05:00
|
RSAgent: Learning to Reason and Act for Text-Guided Segmentation via Multi-Turn Tool Invocations
|
arXiv:2512.24023v1 Announce Type: new Abstract: Text-guided object segmentation requires both cross-modal reasoning and pixel grounding abilities. Most recent methods treat text-guided segmentation as one-shot grounding, where the model predicts pixel prompts in a single forward pass to drive an external segmentor, which limits verification, refocusing and refinement when initial localization is wrong. To address this limitation, we propose RSAgent, an agentic Multimodal Large Language Model (MLLM) which interleaves reasoning and action for segmentation via multi-turn tool invocations. RSAgent queries a segmentation toolbox, observes visual feedback, and revises its spatial hypothesis using historical observations to re-localize targets and iteratively refine masks. We further build a data pipeline to synthesize multi-turn reasoning segmentation trajectories, and train RSAgent with a two-stage framework: cold-start supervised fine-tuning followed by agentic reinforcement learning with fine-grained, task-specific rewards. Extensive experiments show that RSAgent achieves a zero-shot performance of 66.5% gIoU on ReasonSeg test, improving over Seg-Zero-7B by 9%, and reaches 81.5% cIoU on RefCOCOg, demonstrating state-of-the-art performance on both in-domain and out-of-domain benchmarks.
|
https://arxiv.org/abs/2512.24023
|
Academic Papers
|
svg
|
11e677569df3b1d203cad3bf8d51eca1db5c8f74cec13bf546c8e6d175956464
|
2026-01-01T00:00:00-05:00
|
PipeFlow: Pipelined Processing and Motion-Aware Frame Selection for Long-Form Video Editing
|
arXiv:2512.24026v1 Announce Type: new Abstract: Long-form video editing poses unique challenges due to the exponential increase in the computational cost from joint editing and Denoising Diffusion Implicit Models (DDIM) inversion across extended sequences. To address these limitations, we propose PipeFlow, a scalable, pipelined video editing method that introduces three key innovations: First, based on a motion analysis using Structural Similarity Index Measure (SSIM) and Optical Flow, we identify and propose to skip editing of frames with low motion. Second, we propose a pipelined task scheduling algorithm that splits a video into multiple segments and performs DDIM inversion and joint editing in parallel based on available GPU memory. Lastly, we leverage a neural network-based interpolation technique to smooth out the border frames between segments and interpolate the previously skipped frames. Our method uniquely scales to longer videos by dividing them into smaller segments, allowing PipeFlow's editing time to increase linearly with video length. In principle, this enables editing of infinitely long videos without the growing per-frame computational overhead encountered by other methods. PipeFlow achieves up to a 9.6X speedup compared to TokenFlow and a 31.7X speedup over Diffusion Motion Transfer (DMT).
|
https://arxiv.org/abs/2512.24026
|
Academic Papers
|
svg
|
e95737840dd82f05fbdbbd36aee2347659bf2682e786a425e741772876863757
|
2026-01-01T00:00:00-05:00
|
Evaluation of Impression Difference of a Domestic Mobile Manipulator with Autonomous and/or Remote Control in Fetch-and-Carry Tasks
|
arXiv:2512.24029v1 Announce Type: new Abstract: A single service robot can present two distinct agencies: its onboard autonomy and an operator-mediated agency, yet users experience them through one physical body. We formalize this dual-agency structure as a User-Robot-Operator triad in an autonomous remote-control setting that combines autonomous execution with remote human support. Prior to the recent surge of language-based and multimodal interfaces, we developed and evaluated an early-stage prototype in 2020 that combined natural-language text chat with freehand sketch annotations over the robot's live camera view to support remote intervention. We evaluated three modes - autonomous, remote, and hybrid - in controlled fetch-and-carry tasks using a domestic mobile manipulator (HSR) on a World Robot Summit 2020 rule-compliant test field. The results show systematic mode-dependent differences in user-rated affinity and additional insights on perceived security, indicating that switching or blending agency within one robot measurably shapes human impressions. These findings provide empirical guidance for designing human-in-the-loop mobile manipulation in domestic physical tasks.
|
https://arxiv.org/abs/2512.24029
|
Academic Papers
|
svg
|
ed7e6a04d64dc4b6c52ccd9ee73c7312e0ac3d0199f799f540f8edaf67e1b154
|
2026-01-01T00:00:00-05:00
|
Reinforced Diffusion: Learning to Push the Limits of Anisotropic Diffusion for Image Denoising
|
arXiv:2512.24035v1 Announce Type: new Abstract: Image denoising is an important problem in low-level vision and serves as a critical module for many image recovery tasks. Anisotropic diffusion is a wide family of image denoising approaches with promising performance. However, traditional anisotropic diffusion approaches use explicit diffusion operators which are not well adapted to complex image structures. As a result, their performance is limited compared to recent learning-based approaches. In this work, we describe a trainable anisotropic diffusion framework based on reinforcement learning. By modeling the denoising process as a series of naive diffusion actions with order learned by deep Q-learning, we propose an effective diffusion-based image denoiser. The diffusion actions selected by deep Q-learning at different iterations indeed composite a stochastic anisotropic diffusion process with strong adaptivity to different image structures, which enjoys improvement over the traditional ones. The proposed denoiser is applied to removing three types of often-seen noise. The experiments show that it outperforms existing diffusion-based methods and competes with the representative deep CNN-based methods.
|
https://arxiv.org/abs/2512.24035
|
Academic Papers
|
svg
|
c3ebb59ea38734fecfadd5a84bc53f75f49bedcff594d4a150a70425db4067e0
|
2026-01-01T00:00:00-05:00
|
Kidney Exchange: Faster Parameterized Algorithms and Tighter Lower Bounds
|
arXiv:2512.24037v1 Announce Type: new Abstract: The kidney exchange mechanism allows many patient-donor pairs who are otherwise incompatible with each other to come together and exchange kidneys along a cycle. However, due to infrastructure and legal constraints, kidney exchange can only be performed in small cycles in practice. In reality, there are also some altruistic donors who do not have any paired patients. This allows us to also perform kidney exchange along paths that start from some altruistic donor. Unfortunately, the computational task is NP-complete. To overcome this computational barrier, an important line of research focuses on designing faster algorithms, both exact and using the framework of parameterized complexity. The standard parameter for the kidney exchange problem is the number $t$ of patients that receive a healthy kidney. The current fastest known deterministic FPT algorithm for this problem, parameterized by $t$, is $O^\star\left(14^t\right)$. In this work, we improve this by presenting a deterministic FPT algorithm that runs in time $O^\star\left((4e)^t\right)\approx O^\star\left(10.88^t\right)$. This problem is also known to be W[1]-hard parameterized by the treewidth of the underlying undirected graph. A natural question here is whether the kidney exchange problem admits an FPT algorithm parameterized by the pathwidth of the underlying undirected graph. We answer this negatively in this paper by proving that this problem is W[1]-hard parameterized by the pathwidth of the underlying undirected graph. We also present some parameterized intractability results improving the current understanding of the problem under the framework of parameterized complexity.
|
https://arxiv.org/abs/2512.24037
|
Academic Papers
|
svg
|
f0dd05a390709fba36affa5057605505f62403d736e6c7ee0efc1b793ae9915a
|
2026-01-01T00:00:00-05:00
|
A precise proof of the n-variable Bekic principle
|
arXiv:2512.24038v1 Announce Type: new Abstract: We provide a proof of the $n$-ary Beki\v{c} principle, which states that a vectorial fixpoint of size $n$ can be written in terms of nested fixpoints in each coordinate according to lexicographic order. The proof is inductive.
|
https://arxiv.org/abs/2512.24038
|
Academic Papers
|
svg
|
ef4c0ad425b45f83f8e72252f2e717360aaf3eb7b6edff3b43407dcf10e6efa3
|
2026-01-01T00:00:00-05:00
|
Continuous Angular Power Spectrum Recovery From Channel Covariance via Chebyshev Polynomials
|
arXiv:2512.24039v1 Announce Type: new Abstract: This paper proposes a Chebyshev polynomial expansion framework for the recovery of a continuous angular power spectrum (APS) from channel covariance. By exploiting the orthogonality of Chebyshev polynomials in a transformed domain, we derive an exact series representation of the covariance and reformulate the inherently ill-posed APS inversion as a finite-dimensional linear regression problem via truncation. The associated approximation error is directly controlled by the tail of the APS's Chebyshev series and decays rapidly with increasing angular smoothness. Building on this representation, we derive an exact semidefinite characterization of nonnegative APS and introduce a derivative-based regularizer that promotes smoothly varying APS profiles while preserving transitions of clusters. Simulation results show that the proposed Chebyshev-based framework yields accurate APS reconstruction, and enables reliable downlink (DL) covariance prediction from uplink (UL) measurements in a frequency division duplex (FDD) setting. These findings indicate that jointly exploiting smoothness and nonnegativity in a Chebyshev domain provides an effective tool for covariance-domain processing in multi-antenna systems.
|
https://arxiv.org/abs/2512.24039
|
Academic Papers
|
svg
|
c84c19b0f2bc2814df733b9610127b9a3b21a18b37bd63ad8ad1c1fa78d91a05
|
2026-01-01T00:00:00-05:00
|
ROAD: Reflective Optimization via Automated Debugging for Zero-Shot Agent Alignment
|
arXiv:2512.24040v1 Announce Type: new Abstract: Automatic Prompt Optimization (APO) has emerged as a critical technique for enhancing Large Language Model (LLM) performance, yet current state-of-the-art methods typically rely on large, labeled gold-standard development sets to compute fitness scores for evolutionary or Reinforcement Learning (RL) approaches. In real-world software engineering, however, such curated datasets are rarely available during the initial cold start of agent development, where engineers instead face messy production logs and evolving failure modes. We present ROAD (Reflective Optimization via Automated Debugging), a novel framework that bypasses the need for refined datasets by treating optimization as a dynamic debugging investigation rather than a stochastic search. Unlike traditional mutation strategies, ROAD utilizes a specialized multi-agent architecture, comprising an Analyzer for root-cause analysis, an Optimizer for pattern aggregation, and a Coach for strategy integration, to convert unstructured failure logs into robust, structured Decision Tree Protocols. We evaluated ROAD across both a standardized academic benchmark and a live production Knowledge Management engine. Experimental results demonstrate that ROAD is highly sample-efficient, achieving a 5.6 percent increase in success rate (73.6 percent to 79.2 percent) and a 3.8 percent increase in search accuracy within just three automated iterations. Furthermore, on complex reasoning tasks in the retail domain, ROAD improved agent performance by approximately 19 percent relative to the baseline. These findings suggest that mimicking the human engineering loop of failure analysis and patching offers a viable, data-efficient alternative to resource-intensive RL training for deploying reliable LLM agents.
|
https://arxiv.org/abs/2512.24040
|
Academic Papers
|
svg
|
b91e1e191259797f8b70f247856e94d08820be6b380cd8f3601c5fc39b317cad
|
2026-01-01T00:00:00-05:00
|
Jailbreaking Attacks vs. Content Safety Filters: How Far Are We in the LLM Safety Arms Race?
|
arXiv:2512.24044v1 Announce Type: new Abstract: As large language models (LLMs) are increasingly deployed, ensuring their safe use is paramount. Jailbreaking, adversarial prompts that bypass model alignment to trigger harmful outputs, present significant risks, with existing studies reporting high success rates in evading common LLMs. However, previous evaluations have focused solely on the models, neglecting the full deployment pipeline, which typically incorporates additional safety mechanisms like content moderation filters. To address this gap, we present the first systematic evaluation of jailbreak attacks targeting LLM safety alignment, assessing their success across the full inference pipeline, including both input and output filtering stages. Our findings yield two key insights: first, nearly all evaluated jailbreak techniques can be detected by at least one safety filter, suggesting that prior assessments may have overestimated the practical success of these attacks; second, while safety filters are effective in detection, there remains room to better balance recall and precision to further optimize protection and user experience. We highlight critical gaps and call for further refinement of detection accuracy and usability in LLM safety systems.
|
https://arxiv.org/abs/2512.24044
|
Academic Papers
|
svg
|
8169b6e0eaf11942261894fed23d9d33d07415d48b4129fb4c13ec5dc81c084b
|
2026-01-01T00:00:00-05:00
|
Beyond Dedicated-Active: A General Reliability Provisioning Framework for SFC Placement in Fog Computing
|
arXiv:2512.24049v1 Announce Type: new Abstract: The explosive growth of Internet of Things (IoT) devices has strained traditional cloud infrastructures, highlighting the need for low-latency and energy-efficient alternatives. Fog computing addresses this by placing computation near the network edge. However, limited and heterogeneous fog resources pose reliability challenges, especially for mission-critical applications. On the other hand, to improve flexibility, applications are deployed as Service Function Chains (SFCs), where each function runs as a Virtual Network Function (VNF). While scalable, this approach is more failure-prone than monolithic deployments, necessitating intelligent redundancy and placement strategies. This paper addresses the reliability-aware SFC placement problem over heterogeneous fog servers through the lens of reliability theory. We explore four redundancy strategies, combining shared vs. dedicated and active vs. standby modes, and propose a general framework to minimize latency and cost while meeting reliability and deadline constraints. The problem is formulated as an Integer Non-Linear Program (INLP), and two genetic algorithm (GA)-based solutions are developed. Simulation results show that shared-standby redundancy outperforms the conventional dedicated-active approach by up to 84%.
|
https://arxiv.org/abs/2512.24049
|
Academic Papers
|
svg
|
4771fbd18b36ded6f80aadcf0c09c91b4efecede2880c8343691b025851ec7d2
|
2026-01-01T00:00:00-05:00
|
AHA: Aligning Large Audio-Language Models for Reasoning Hallucinations via Counterfactual Hard Negatives
|
arXiv:2512.24052v1 Announce Type: new Abstract: Although Large Audio-Language Models (LALMs) deliver state-of-the-art (SOTA) performance, they frequently suffer from hallucinations, e.g. generating text not grounded in the audio input. We analyze these grounding failures and identify a distinct taxonomy: Event Omission, False Event Identity, Temporal Relation Error, and Quantitative Temporal Error. To address this, we introduce the AHA (Audio Hallucination Alignment) framework. By leveraging counterfactual hard negative mining, our pipeline constructs a high-quality preference dataset that forces models to distinguish strict acoustic evidence from linguistically plausible fabrications. Additionally, we establish AHA-Eval, a diagnostic benchmark designed to rigorously test these fine-grained temporal reasoning capabilities. We apply this data to align Qwen2.5-Omni. The resulting model, Qwen-Audio-AHA, achieves a 13.7% improvement on AHA-Eval. Crucially, this benefit generalizes beyond our diagnostic set. Our model shows substantial gains on public benchmarks, including 1.3% on MMAU-Test and 1.6% on MMAR, outperforming latest SOTA methods.
|
https://arxiv.org/abs/2512.24052
|
Academic Papers
|
svg
|
008110278d4656ba2390c7ff9769164fa8e182c9befaae59daa9cc4fe5361f59
|
2026-01-01T00:00:00-05:00
|
Beyond Hallucinations: A Composite Score for Measuring Reliability in Open-Source Large Language Models
|
arXiv:2512.24058v1 Announce Type: new Abstract: Large Language Models (LLMs) like LLaMA, Mistral, and Gemma are increasingly used in decision-critical domains such as healthcare, law, and finance, yet their reliability remains uncertain. They often make overconfident errors, degrade under input shifts, and lack clear uncertainty estimates. Existing evaluations are fragmented, addressing only isolated aspects. We introduce the Composite Reliability Score (CRS), a unified framework that integrates calibration, robustness, and uncertainty quantification into a single interpretable metric. Through experiments on ten leading open-source LLMs across five QA datasets, we assess performance under baselines, perturbations, and calibration methods. CRS delivers stable model rankings, uncovers hidden failure modes missed by single metrics, and highlights that the most dependable systems balance accuracy, robustness, and calibrated uncertainty.
|
https://arxiv.org/abs/2512.24058
|
Academic Papers
|
svg
|
6ea42402cdf77e47d2823bfd0f16177f278b8e4b396559bd4be998739ef4c132
|
2026-01-01T00:00:00-05:00
|
Hyperspherical Graph Representation Learning via Adaptive Neighbor-Mean Alignment and Uniformity
|
arXiv:2512.24062v1 Announce Type: new Abstract: Graph representation learning (GRL) aims to encode structural and semantic dependencies of graph-structured data into low-dimensional embeddings. However, existing GRL methods often rely on surrogate contrastive objectives or mutual information maximization, which typically demand complex architectures, negative sampling strategies, and sensitive hyperparameter tuning. These design choices may induce over-smoothing, over-squashing, and training instability. In this work, we propose HyperGRL, a unified framework for hyperspherical graph representation learning via adaptive neighbor-mean alignment and sampling-free uniformity. HyperGRL embeds nodes on a unit hypersphere through two adversarially coupled objectives: neighbor-mean alignment and sampling-free uniformity. The alignment objective uses the mean representation of each node's local neighborhood to construct semantically grounded, stable targets that capture shared structural and feature patterns. The uniformity objective formulates dispersion via an L2-based hyperspherical regularization, encouraging globally uniform embedding distributions while preserving discriminative information. To further stabilize training, we introduce an entropy-guided adaptive balancing mechanism that dynamically regulates the interplay between alignment and uniformity without requiring manual tuning. Extensive experiments on node classification, node clustering, and link prediction demonstrate that HyperGRL delivers superior representation quality and generalization across diverse graph structures, achieving average improvements of 1.49%, 0.86%, and 0.74% over the strongest existing methods, respectively. These findings highlight the effectiveness of geometrically grounded, sampling-free contrastive objectives for graph representation learning.
|
https://arxiv.org/abs/2512.24062
|
Academic Papers
|
svg
|
a73936f22c666a2a21cf2acebfe7698ac34f29bcf4aa7cf006cc7fd2a69708fa
|
2026-01-01T00:00:00-05:00
|
How and Why LLMs Generalize: A Fine-Grained Analysis of LLM Reasoning from Cognitive Behaviors to Low-Level Patterns
|
arXiv:2512.24063v1 Announce Type: new Abstract: Large Language Models (LLMs) display strikingly different generalization behaviors: supervised fine-tuning (SFT) often narrows capability, whereas reinforcement-learning (RL) tuning tends to preserve it. The reasons behind this divergence remain unclear, as prior studies have largely relied on coarse accuracy metrics. We address this gap by introducing a novel benchmark that decomposes reasoning into atomic core skills such as calculation, fact retrieval, simulation, enumeration, and diagnostic, providing a concrete framework for addressing the fundamental question of what constitutes reasoning in LLMs. By isolating and measuring these core skills, the benchmark offers a more granular view of how specific cognitive abilities emerge, transfer, and sometimes collapse during post-training. Combined with analyses of low-level statistical patterns such as distributional divergence and parameter statistics, it enables a fine-grained study of how generalization evolves under SFT and RL across mathematical, scientific reasoning, and non-reasoning tasks. Our meta-probing framework tracks model behavior at different training stages and reveals that RL-tuned models maintain more stable behavioral profiles and resist collapse in reasoning skills, whereas SFT models exhibit sharper drift and overfit to surface patterns. This work provides new insights into the nature of reasoning in LLMs and points toward principles for designing training strategies that foster broad, robust generalization.
|
https://arxiv.org/abs/2512.24063
|
Academic Papers
|
svg
|
8ca9d1c6079f47ea860c8b4ae8e7770dd3e0f29bcf7007641d6a72179e468ec2
|
2026-01-01T00:00:00-05:00
|
Neighbor-aware Instance Refining with Noisy Labels for Cross-Modal Retrieval
|
arXiv:2512.24064v1 Announce Type: new Abstract: In recent years, Cross-Modal Retrieval (CMR) has made significant progress in the field of multi-modal analysis. However, since it is time-consuming and labor-intensive to collect large-scale and well-annotated data, the annotation of multi-modal data inevitably contains some noise. This will degrade the retrieval performance of the model. To tackle the problem, numerous robust CMR methods have been developed, including robust learning paradigms, label calibration strategies, and instance selection mechanisms. Unfortunately, they often fail to simultaneously satisfy model performance ceilings, calibration reliability, and data utilization rate. To overcome the limitations, we propose a novel robust cross-modal learning framework, namely Neighbor-aware Instance Refining with Noisy Labels (NIRNL). Specifically, we first propose Cross-modal Margin Preserving (CMP) to adjust the relative distance between positive and negative pairs, thereby enhancing the discrimination between sample pairs. Then, we propose Neighbor-aware Instance Refining (NIR) to identify pure subset, hard subset, and noisy subset through cross-modal neighborhood consensus. Afterward, we construct different tailored optimization strategies for this fine-grained partitioning, thereby maximizing the utilization of all available data while mitigating error propagation. Extensive experiments on three benchmark datasets demonstrate that NIRNL achieves state-of-the-art performance, exhibiting remarkable robustness, especially under high noise rates.
|
https://arxiv.org/abs/2512.24064
|
Academic Papers
|
svg
|
125586687a466392a3a89a81bbcd147e45fe8d9fe33604af1ca32af6a7837c82
|
2026-01-01T00:00:00-05:00
|
Pathology Context Recalibration Network for Ocular Disease Recognition
|
arXiv:2512.24066v1 Announce Type: new Abstract: Pathology context and expert experience play significant roles in clinical ocular disease diagnosis. Although deep neural networks (DNNs) have good ocular disease recognition results, they often ignore exploring the clinical pathology context and expert experience priors to improve ocular disease recognition performance and decision-making interpretability. To this end, we first develop a novel Pathology Recalibration Module (PRM) to leverage the potential of pathology context prior via the combination of the well-designed pixel-wise context compression operator and pathology distribution concentration operator; then this paper applies a novel expert prior Guidance Adapter (EPGA) to further highlight significant pixel-wise representation regions by fully mining the expert experience prior. By incorporating PRM and EPGA into the modern DNN, the PCRNet is constructed for automated ocular disease recognition. Additionally, we introduce an Integrated Loss (IL) to boost the ocular disease recognition performance of PCRNet by considering the effects of sample-wise loss distributions and training label frequencies. The extensive experiments on three ocular disease datasets demonstrate the superiority of PCRNet with IL over state-of-the-art attention-based networks and advanced loss methods. Further visualization analysis explains the inherent behavior of PRM and EPGA that affects the decision-making process of DNNs.
|
https://arxiv.org/abs/2512.24066
|
Academic Papers
|
svg
|
b6df2d348554ab5cbc441ae2cdcc9732850197337ed7c96b857593fa5697be56
|
2026-01-01T00:00:00-05:00
|
Time-varying Mixing Matrix Design for Energy-efficient Decentralized Federated Learning
|
arXiv:2512.24069v1 Announce Type: new Abstract: We consider the design of mixing matrices to minimize the operation cost for decentralized federated learning (DFL) in wireless networks, with focus on minimizing the maximum per-node energy consumption. As a critical hyperparameter for DFL, the mixing matrix controls both the convergence rate and the needs of agent-to-agent communications, and has thus been studied extensively. However, existing designs mostly focused on minimizing the communication time, leaving open the minimization of per-node energy consumption that is critical for energy-constrained devices. This work addresses this gap through a theoretically-justified solution for mixing matrix design that aims at minimizing the maximum per-node energy consumption until convergence, while taking into account the broadcast nature of wireless communications. Based on a novel convergence theorem that allows arbitrarily time-varying mixing matrices, we propose a multi-phase design framework that activates time-varying communication topologies under optimized budgets to trade off the per-iteration energy consumption and the convergence rate while balancing the energy consumption across nodes. Our evaluations based on real data have validated the efficacy of the proposed solution in combining the low energy consumption of sparse mixing matrices and the fast convergence of dense mixing matrices.
|
https://arxiv.org/abs/2512.24069
|
Academic Papers
|
svg
|
0c2032e8dd8579c75c8595d4be9427793396769edf57832b046a4bb48f959d8a
|
2026-01-01T00:00:00-05:00
|
CPePC: Cooperative and Predictive Popularity based Caching for Named Data Networks
|
arXiv:2512.24073v1 Announce Type: new Abstract: Caching content is an inherent feature of Named Data Networks. Limited cache capacity of routers warrants that the choice of content being cached is judiciously done. Existing techniques resort to caching popular content to maximize utilization. However, these methods experience significant overhead for coordinating and estimating the popularity of content. To address this issue, in this paper, we present CPePC, which is a cooperative caching technique designed to improve performance. It accomplishes this through a combination of two factors. First, CPePC enhances efficiency by minimizing the overhead of popularity estimation. Second, it forecasts a parameter that governs caching decisions. Efficiency in popularity estimation is achieved by dividing the network into several non-overlapping communities using a community estimation algorithm and selecting a leader node to coordinate this on behalf of all the nodes in the community. CPePC bases its caching decisions by predicting a parameter whose value is estimated using current cache occupancy and the popularity of the content into account. We present algorithms for community detection, leader selection, content popularity estimation, and caching decisions made by the CPePC method. We evaluate and compare it with six other state-of-the-art caching techniques, with simulations performed using a discrete event simulator to show that it outperforms others.
|
https://arxiv.org/abs/2512.24073
|
Academic Papers
|
svg
|
10d62053a9fecb190669bb2d005d1af3cdfba792c14e46e951532910aa8bc2c6
|
2026-01-01T00:00:00-05:00
|
Balanced Hierarchical Contrastive Learning with Decoupled Queries for Fine-grained Object Detection in Remote Sensing Images
|
arXiv:2512.24074v1 Announce Type: new Abstract: Fine-grained remote sensing datasets often use hierarchical label structures to differentiate objects in a coarse-to-fine manner, with each object annotated across multiple levels. However, embedding this semantic hierarchy into the representation learning space to improve fine-grained detection performance remains challenging. Previous studies have applied supervised contrastive learning at different hierarchical levels to group objects under the same parent class while distinguishing sibling subcategories. Nevertheless, they overlook two critical issues: (1) imbalanced data distribution across the label hierarchy causes high-frequency classes to dominate the learning process, and (2) learning semantic relationships among categories interferes with class-agnostic localization. To address these issues, we propose a balanced hierarchical contrastive loss combined with a decoupled learning strategy within the detection transformer (DETR) framework. The proposed loss introduces learnable class prototypes and equilibrates gradients contributed by different classes at each hierarchical level, ensuring that each hierarchical class contributes equally to the loss computation in every mini-batch. The decoupled strategy separates DETR's object queries into classification and localization sets, enabling task-specific feature extraction and optimization. Experiments on three fine-grained datasets with hierarchical annotations demonstrate that our method outperforms state-of-the-art approaches.
|
https://arxiv.org/abs/2512.24074
|
Academic Papers
|
svg
|
7d210d0e6832974b747168e5a51333339c13534ca793f73aaee3c96dfe73868c
|
2026-01-01T00:00:00-05:00
|
Multi-Scenario Highway Lane-Change Intention Prediction: A Temporal Physics-Informed Multi-Modal Framework
|
arXiv:2512.24075v1 Announce Type: new Abstract: Lane-change intention prediction is safety-critical for autonomous driving and ADAS, but remains difficult in naturalistic traffic due to noisy kinematics, severe class imbalance, and limited generalization across heterogeneous highway scenarios. We propose Temporal Physics-Informed AI (TPI-AI), a hybrid framework that fuses deep temporal representations with physics-inspired interaction cues. A two-layer bidirectional LSTM (Bi-LSTM) encoder learns compact embeddings from multi-step trajectory histories; we concatenate these embeddings with kinematics-, safety-, and interaction-aware features (e.g., headway, TTC, and safe-gap indicators) and train a LightGBM classifier for three-class intention recognition (No-LC, Left-LC, Right-LC). To improve minority-class reliability, we apply imbalance-aware optimization including resampling/weighting and fold-wise threshold calibration. Experiments on two large-scale drone-based datasets, highD (straight highways) and exiD (ramp-rich environments), use location-based splits and evaluate prediction horizons T = 1, 2, 3 s. TPI-AI outperforms standalone LightGBM and Bi-LSTM baselines, achieving macro-F1 of 0.9562, 0.9124, 0.8345 on highD and 0.9247, 0.8197, 0.7605 on exiD at T = 1, 2, 3 s, respectively. These results show that combining physics-informed interaction features with learned temporal embeddings yields robust multi-scenario lane-change intention prediction.
|
https://arxiv.org/abs/2512.24075
|
Academic Papers
|
svg
|
8c937591bc38a15d31f602c5e2cc3cc08cd5aefe1553f4ddfe482e49d9f99387
|
2026-01-01T00:00:00-05:00
|
LoongFlow: Directed Evolutionary Search via a Cognitive Plan-Execute-Summarize Paradigm
|
arXiv:2512.24077v1 Announce Type: new Abstract: The transition from static Large Language Models (LLMs) to self-improving agents is hindered by the lack of structured reasoning in traditional evolutionary approaches. Existing methods often struggle with premature convergence and inefficient exploration in high-dimensional code spaces. To address these challenges, we introduce LoongFlow, a self-evolving agent framework that achieves state-of-the-art solution quality with significantly reduced computational costs. Unlike "blind" mutation operators, LoongFlow integrates LLMs into a cognitive "Plan-Execute-Summarize" (PES) paradigm, effectively mapping the evolutionary search to a reasoning-heavy process. To sustain long-term architectural coherence, we incorporate a hybrid evolutionary memory system. By synergizing Multi-Island models with MAP-Elites and adaptive Boltzmann selection, this system theoretically balances the exploration-exploitation trade-off, maintaining diverse behavioral niches to prevent optimization stagnation. We instantiate LoongFlow with a General Agent for algorithmic discovery and an ML Agent for pipeline optimization. Extensive evaluations on the AlphaEvolve benchmark and Kaggle competitions demonstrate that LoongFlow outperforms leading baselines (e.g., OpenEvolve, ShinkaEvolve) by up to 60% in evolutionary efficiency while discovering superior solutions. LoongFlow marks a substantial step forward in autonomous scientific discovery, enabling the generation of expert-level solutions with reduced computational overhead.
|
https://arxiv.org/abs/2512.24077
|
Academic Papers
|
svg
|
5fc309512528e3ffb419758a34a791bb00e30f734e282a775e6b767f8790ad52
|
2026-01-01T00:00:00-05:00
|
High-dimensional Regret Minimization
|
arXiv:2512.24078v1 Announce Type: new Abstract: Multi-criteria decision making in large databases is very important in real world applications. Recently, an interactive query has been studied extensively in the database literature with the advantage of both the top-k query (with limited output size) and the skyline query (which does not require users to explicitly specify their preference function). This approach iteratively asks the user to select the one preferred within a set of options. Based on rounds of feedback, the query learns the implicit preference and returns the most favorable as a recommendation. However, many modern applications in areas like housing or financial product markets feature datasets with hundreds of attributes. Existing interactive algorithms either fail to scale or require excessive user interactions (often exceeding 1000 rounds). Motivated by this, we propose FHDR (Fast High-Dimensional Reduction), a novel framework that takes less than 0.01s with fewer than 30 rounds of interaction. It is considered a breakthrough in the field of interactive queries since most, if not all, existing studies are not scalable to high-dimensional datasets. Extensive experiments demonstrate that FHDR outperforms the best-known algorithms by at least an order of magnitude in execution time and up to several orders of magnitude in terms of the number of interactions required, establishing a new state of the art for scalable interactive regret minimization.
|
https://arxiv.org/abs/2512.24078
|
Academic Papers
|
svg
|
2e750541d74f78eb3f148e2e533386773fe22644d2e49f9b78f5afe4da13ab60
|
2026-01-01T00:00:00-05:00
|
RainFusion2.0: Temporal-Spatial Awareness and Hardware-Efficient Block-wise Sparse Attention
|
arXiv:2512.24086v1 Announce Type: new Abstract: In video and image generation tasks, Diffusion Transformer (DiT) models incur extremely high computational costs due to attention mechanisms, which limits their practical applications. Furthermore, with hardware advancements, a wide range of devices besides graphics processing unit (GPU), such as application-specific integrated circuit (ASIC), have been increasingly adopted for model inference. Sparse attention, which leverages the inherent sparsity of attention by skipping computations for insignificant tokens, is an effective approach to mitigate computational costs. However, existing sparse attention methods have two critical limitations: the overhead of sparse pattern prediction and the lack of hardware generality, as most of these methods are designed for GPU. To address these challenges, this study proposes RainFusion2.0, which aims to develop an online adaptive, hardware-efficient, and low-overhead sparse attention mechanism to accelerate both video and image generative models, with robust performance across diverse hardware platforms. Key technical insights include: (1) leveraging block-wise mean values as representative tokens for sparse mask prediction; (2) implementing spatiotemporal-aware token permutation; and (3) introducing a first-frame sink mechanism specifically designed for video generation scenarios. Experimental results demonstrate that RainFusion2.0 can achieve 80% sparsity while achieving an end-to-end speedup of 1.5~1.8x without compromising video quality. Moreover, RainFusion2.0 demonstrates effectiveness across various generative models and validates its generalization across diverse hardware platforms.
|
https://arxiv.org/abs/2512.24086
|
Academic Papers
|
svg
|
58d618f2019857e3478603977a4e076955da25e31d871778dfd04334320c043d
|
2026-01-01T00:00:00-05:00
|
Random Multiplexing
|
arXiv:2512.24087v1 Announce Type: new Abstract: As wireless communication applications evolve from traditional multipath environments to high-mobility scenarios like unmanned aerial vehicles, multiplexing techniques have advanced accordingly. Traditional single-carrier frequency-domain equalization (SC-FDE) and orthogonal frequency-division multiplexing (OFDM) have given way to emerging orthogonal time-frequency space (OTFS) and affine frequency-division multiplexing (AFDM). These approaches exploit specific channel structures to diagonalize or sparsify the effective channel, thereby enabling low-complexity detection. However, their reliance on these structures significantly limits their robustness in dynamic, real-world environments. To address these challenges, this paper studies a random multiplexing technique that is decoupled from the physical channels, enabling its application to arbitrary norm-bounded and spectrally convergent channel matrices. Random multiplexing achieves statistical fading-channel ergodicity for transmitted signals by constructing an equivalent input-isotropic channel matrix in the random transform domain. It guarantees the asymptotic replica MAP bit-error rate (BER) optimality of AMP-type detectors for linear systems with arbitrary norm-bounded, spectrally convergent channel matrices and signaling configurations, under the unique fixed point assumption. A low-complexity cross-domain memory AMP (CD-MAMP) detector is considered, leveraging the sparsity of the time-domain channel and the randomness of the equivalent channel. Optimal power allocations are derived to minimize the replica MAP BER and maximize the replica constrained capacity of random multiplexing systems. The optimal coding principle and replica constrained-capacity optimality of CD-MAMP detector are investigated for random multiplexing systems. Additionally, the versatility of random multiplexing in diverse wireless applications is explored.
|
https://arxiv.org/abs/2512.24087
|
Academic Papers
|
svg
|
6181e13c3b15daefed4ccf427f02d1566e0afef8301094f377eb549b7241556e
|
2026-01-01T00:00:00-05:00
|
FedLiTeCAN : A Federated Lightweight Transformer for Fast and Robust CAN Bus Intrusion Detection
|
arXiv:2512.24088v1 Announce Type: new Abstract: This work implements a lightweight Transformer model for IDS in the domain of Connected and Autonomous Vehicles
|
https://arxiv.org/abs/2512.24088
|
Academic Papers
|
svg
|
6bea20e889271a8646d1b75871ff3c8a2bbd465619a0db1525be27a5e2156e26
|
2026-01-01T00:00:00-05:00
|
HY-MT1.5 Technical Report
|
arXiv:2512.24092v1 Announce Type: new Abstract: In this report, we introduce our latest translation models, HY-MT1.5-1.8B and HY-MT1.5-7B, a new family of machine translation models developed through a holistic training framework tailored for high-performance translation. Our methodology orchestrates a multi-stage pipeline that integrates general and MT-oriented pre-training, supervised fine-tuning, on-policy distillation, and reinforcement learning. HY-MT1.5-1.8B, the 1.8B-parameter model demonstrates remarkable parameter efficiency, comprehensively outperforming significantly larger open-source baselines (e.g., Tower-Plus-72B, Qwen3-32B) and mainstream commercial APIs (e.g., Microsoft Translator, Doubao Translator) in standard Chinese-foreign and English-foreign tasks. It achieves approximately 90% of the performance of ultra-large proprietary models such as Gemini-3.0-Pro, while marginally trailing Gemini-3.0-Pro on WMT25 and Mandarin-minority language benchmarks, it maintains a substantial lead over other competing models. Furthermore, HY-MT1.5-7B establishes a new state-of-the-art for its size class, achieving 95% of Gemini-3.0-Pro's performance on Flores-200 and surpassing it on the challenging WMT25 and Mandarin-minority language test sets. Beyond standard translation, the HY-MT1.5 series supports advanced constraints, including terminology intervention, context-aware translation, and format preservation. Extensive empirical evaluations confirm that both models offer highly competitive, robust solutions for general and specialized translation tasks within their respective parameter scales.
|
https://arxiv.org/abs/2512.24092
|
Academic Papers
|
svg
|
f9fcfc7204744306352faf050368e592c938c08f284748391961f6bf940ed15f
|
2026-01-01T00:00:00-05:00
|
Factorized Learning for Temporally Grounded Video-Language Models
|
arXiv:2512.24097v1 Announce Type: new Abstract: Recent video-language models have shown great potential for video understanding, but still struggle with accurate temporal grounding for event-level perception. We observe that two main factors in video understanding (i.e., temporal grounding and textual response) form a logical hierarchy: accurate temporal evidence grounding lays the foundation for reliable textual response. However, existing works typically handle these two tasks in a coupled manner without a clear logical structure, leading to sub-optimal objectives. We address this from a factorized learning perspective. We first propose D$^2$VLM, a framework that decouples the learning of these two tasks while also emphasizing their inherent dependency. We adopt a "grounding then answering with evidence referencing" paradigm and introduce evidence tokens for evidence grounding, which emphasize event-level visual semantic capture beyond the focus on timestamp representation in existing works. To further facilitate the learning of these two tasks, we introduce a novel factorized preference optimization (FPO) algorithm. Unlike standard preference optimization, FPO explicitly incorporates probabilistic temporal grounding modeling into the optimization objective, enabling preference learning for both temporal grounding and textual response. We also construct a synthetic dataset to address the lack of suitable datasets for factorized preference learning with explicit temporal grounding. Experiments on various tasks demonstrate the clear advantage of our approach. Our source code is available at https://github.com/nusnlp/d2vlm.
|
https://arxiv.org/abs/2512.24097
|
Academic Papers
|
svg
|
72a8c6d94361c8fbefb39f637f7c355c996e4ce600391b46b85c9cae96932cf6
|
2026-01-01T00:00:00-05:00
|
Training a Huggingface Model on AWS Sagemaker (Without Tears)
|
arXiv:2512.24098v1 Announce Type: new Abstract: The development of Large Language Models (LLMs) has primarily been driven by resource-rich research groups and industry partners. Due to the lack of on-premise computing resources required for increasingly complex models, many researchers are turning to cloud services like AWS SageMaker to train Hugging Face models. However, the steep learning curve of cloud platforms often presents a barrier for researchers accustomed to local environments. Existing documentation frequently leaves knowledge gaps, forcing users to seek fragmented information across the web. This demo paper aims to democratize cloud adoption by centralizing the essential information required for researchers to successfully train their first Hugging Face model on AWS SageMaker from scratch.
|
https://arxiv.org/abs/2512.24098
|
Academic Papers
|
svg
|
65588fb502a576cf9032b8f36e39c52e6ada565f7edd87c6344b0e9f38d897d9
|
2026-01-01T00:00:00-05:00
|
Think Before You Move: Latent Motion Reasoning for Text-to-Motion Generation
|
arXiv:2512.24100v1 Announce Type: new Abstract: Current state-of-the-art paradigms predominantly treat Text-to-Motion (T2M) generation as a direct translation problem, mapping symbolic language directly to continuous poses. While effective for simple actions, this System 1 approach faces a fundamental theoretical bottleneck we identify as the Semantic-Kinematic Impedance Mismatch: the inherent difficulty of grounding semantically dense, discrete linguistic intent into kinematically dense, high-frequency motion data in a single shot. In this paper, we argue that the solution lies in an architectural shift towards Latent System 2 Reasoning. Drawing inspiration from Hierarchical Motor Control in cognitive science, we propose Latent Motion Reasoning (LMR) that reformulates generation as a two-stage Think-then-Act decision process. Central to LMR is a novel Dual-Granularity Tokenizer that disentangles motion into two distinct manifolds: a compressed, semantically rich Reasoning Latent for planning global topology, and a high-frequency Execution Latent for preserving physical fidelity. By forcing the model to autoregressively reason (plan the coarse trajectory) before it moves (instantiates the frames), we effectively bridge the ineffability gap between language and physics. We demonstrate LMR's versatility by implementing it for two representative baselines: T2M-GPT (discrete) and MotionStreamer (continuous). Extensive experiments show that LMR yields non-trivial improvements in both semantic alignment and physical plausibility, validating that the optimal substrate for motion planning is not natural language, but a learned, motion-aligned concept space. Codes and demos can be found in \hyperlink{https://chenhaoqcdyq.github.io/LMR/}{https://chenhaoqcdyq.github.io/LMR/}
|
https://arxiv.org/abs/2512.24100
|
Academic Papers
|
svg
|
a1218059b0221305b42a7f26166dae18d15e72f0b62dbc73a3c35ecd631ac3aa
|
2026-01-01T00:00:00-05:00
|
Economic and Technical Feasibility of V2G in Non-Road Mobile Machinery sector
|
arXiv:2512.24101v1 Announce Type: new Abstract: This paper investigates the economic and technical feasibility of integrating Vehicle-to-Grid (V2G) technology in the Non-Road Mobile Machinery (NRMM) sector. These often-idling assets, with their substantial battery capacities, present a unique opportunity to participate in energy markets, providing grid services and generating additional revenue. A novel methodology is introduced that integrates Bayesian Optimization (BO) to optimize the energy infrastructure together with an operating strategy optimization to reduce the electricity costs while enhancing grid interaction. While the focus lies on the methodology, the financial opportunities for the use-case of an electric NRMM rental service will be presented. However, the study is limited by the availability of real-world data on the usage of electric NRMM and does not address regulatory challenges of V2G. Further research is needed to extend the model accuracy and validate these findings.
|
https://arxiv.org/abs/2512.24101
|
Academic Papers
|
svg
|
a7ce60e2c16683e9c499d50ddeb1fa06268c402dd353dfbf1efceec83b889d53
|
2026-01-01T00:00:00-05:00
|
Autoregressivity in the Latent Space of a GP-VAE Language Model: An Empirical Ablation Study
|
arXiv:2512.24102v1 Announce Type: new Abstract: This paper provides an ablation-based analysis of latent autoregression in GP-VAE models, building upon our previous work introducing the architecture. Language models typically rely on an autoregressive factorization over tokens. In contrast, our prior work proposed shifting sequential structure to the latent space through a causal Gaussian process, while using a non-autoregressive decoder. Here, we conduct a systematic ablation study of the role played by latent autoregression. We compare (i) a full GP-VAE model with autoregressive latent dynamics, (ii) a non-autoregressive ablation in which latent variables are independent, and (iii) a standard token-level autoregressive Transformer. Our results show that, within the considered regime (medium-scale corpora and short training contexts), latent autoregression induces latent trajectories that are significantly more compatible with the Gaussian-process prior and exhibit greater long-horizon stability. In contrast, removing autoregression leads to degraded latent structure and unstable long-range behavior. These findings highlight the role of latent autoregression as an effective mechanism for organizing long-range structure, while remaining complementary to token-level autoregressive modeling. They should be interpreted as an empirical analysis of representational structure rather than as a proposal for a new architecture.
|
https://arxiv.org/abs/2512.24102
|
Academic Papers
|
svg
|
6402d6f68c0c189030e109a9f8ce751a06afc0854dddca32fe6db473632ba38c
|
2026-01-01T00:00:00-05:00
|
Enhancing LLM Planning Capabilities through Intrinsic Self-Critique
|
arXiv:2512.24103v1 Announce Type: new Abstract: We demonstrate an approach for LLMs to critique their \emph{own} answers with the goal of enhancing their performance that leads to significant improvements over established planning benchmarks. Despite the findings of earlier research that has cast doubt on the effectiveness of LLMs leveraging self critique methods, we show significant performance gains on planning datasets in the Blocksworld domain through intrinsic self-critique, without external source such as a verifier. We also demonstrate similar improvements on Logistics and Mini-grid datasets, exceeding strong baseline accuracies. We employ a few-shot learning technique and progressively extend it to a many-shot approach as our base method and demonstrate that it is possible to gain substantial improvement on top of this already competitive approach by employing an iterative process for correction and refinement. We illustrate how self-critique can significantly boost planning performance. Our empirical results present new state-of-the-art on the class of models considered, namely LLM model checkpoints from October 2024. Our primary focus lies on the method itself, demonstrating intrinsic self-improvement capabilities that are applicable regardless of the specific model version, and we believe that applying our method to more complex search techniques and more capable models will lead to even better performance.
|
https://arxiv.org/abs/2512.24103
|
Academic Papers
|
svg
|
19007e8c90618bba68f48ff626b85e157f81ab629f9aefadae436ee96ca44198
|
2026-01-01T00:00:00-05:00
|
Multilevel Fair Allocation
|
arXiv:2512.24105v1 Announce Type: new Abstract: We introduce the concept of multilevel fair allocation of resources with tree-structured hierarchical relations among agents. While at each level it is possible to consider the problem locally as an allocation of an agent to its children, the multilevel allocation can be seen as a trace capturing the fact that the process is iterated until the leaves of the tree. In principle, each intermediary node may have its own local allocation mechanism. The main challenge is then to design algorithms which can retain good fairness and efficiency properties. In this paper we propose two original algorithms under the assumption that leaves of the tree have matroid-rank utility functions and the utility of any internal node is the sum of the utilities of its children. The first one is a generic polynomial-time sequential algorithm that comes with theoretical guarantees in terms of efficiency and fairness. It operates in a top-down fashion -- as commonly observed in real-world applications -- and is compatible with various local algorithms. The second one extends the recently proposed General Yankee Swap to the multilevel setting. This extension comes with efficiency guarantees only, but we show that it preserves excellent fairness properties in practice.
|
https://arxiv.org/abs/2512.24105
|
Academic Papers
|
svg
|
4cb5329ab9c2464fe973d9183de26df44cbe77b7f9db3071728503050327059a
|
2026-01-01T00:00:00-05:00
|
When Wires Can't Keep Up: Reconfigurable AI Data Centers Empowered by Terahertz Wireless Communications
|
arXiv:2512.24110v1 Announce Type: new Abstract: The explosive growth of artificial intelligence (AI) workloads in modern data centers demands a radical transformation of interconnect architectures. Traditional copper and optical wiring face fundamental challenges in latency, power consumption, and rigidity, constraining the scalability of distributed AI clusters. This article introduces a vision for Terahertz (THz) Wireless Data Center (THz-WDC) that combines ultra-broadband capacity, one-hop low-latency communication, and energy efficiency in the short-to-medium range (1-100m). Performance and technical requirements are first articulated, including up to 1 Tbps per link, aggregate throughput up to 10 Tbps via spatial multiplexing, sub-50 ns single-hop latency, and sub-10 pJ/bit energy efficiency over 20m. To achieve these ambitious goals, key enabling technologies are explored, including digital-twin-based orchestration, low-complexity beam manipulation technologies, all-silicon THz transceivers, and low-complexity analog baseband architectures. Moreover, as future data centers shift toward quantum and chiplet-based modular architectures, THz wireless links provide a flexible mechanism for interconnecting, testing, and reconfiguring these modules. Finally, numerical analysis is presented on the latency and power regimes of THz versus optical and copper interconnects, identifying the specific distance and throughput domains where THz links can surpass conventional wired solutions. The article concludes with a roadmap toward wireless-defined, reconfigurable, and sustainable AI data centers.
|
https://arxiv.org/abs/2512.24110
|
Academic Papers
|
svg
|
278c33b7e684f3955386dc81ee824c28f8658787b2e4d3a405a0784b68b1ddc0
|
2026-01-01T00:00:00-05:00
|
Guided Diffusion-based Generation of Adversarial Objects for Real-World Monocular Depth Estimation Attacks
|
arXiv:2512.24111v1 Announce Type: new Abstract: Monocular Depth Estimation (MDE) serves as a core perception module in autonomous driving systems, but it remains highly susceptible to adversarial attacks. Errors in depth estimation may propagate through downstream decision making and influence overall traffic safety. Existing physical attacks primarily rely on texture-based patches, which impose strict placement constraints and exhibit limited realism, thereby reducing their effectiveness in complex driving environments. To overcome these limitations, this work introduces a training-free generative adversarial attack framework that generates naturalistic, scene-consistent adversarial objects via a diffusion-based conditional generation process. The framework incorporates a Salient Region Selection module that identifies regions most influential to MDE and a Jacobian Vector Product Guidance mechanism that steers adversarial gradients toward update directions supported by the pre-trained diffusion model. This formulation enables the generation of physically plausible adversarial objects capable of inducing substantial adversarial depth shifts. Extensive digital and physical experiments demonstrate that our method significantly outperforms existing attacks in effectiveness, stealthiness, and physical deployability, underscoring its strong practical implications for autonomous driving safety assessment.
|
https://arxiv.org/abs/2512.24111
|
Academic Papers
|
svg
|
e0f6518362c1781bd41cfad4a90b794c29116a240f48965e6c4645b6f8921ff8
|
2026-01-01T00:00:00-05:00
|
RflyUT-Sim: A Simulation Platform for Development and Testing of Complex Low-Altitude Traffic Control
|
arXiv:2512.24112v1 Announce Type: new Abstract: Significant challenges are posed by simulation and testing in the field of low-altitude unmanned aerial vehicle (UAV) traffic due to the high costs associated with large-scale UAV testing and the complexity of establishing low-altitude traffic test scenarios. Stringent safety requirements make high fidelity one of the key metrics for simulation platforms. Despite advancements in simulation platforms for low-altitude UAVs, there is still a shortage of platforms that feature rich traffic scenarios, high-precision UAV and scenario simulators, and comprehensive testing capabilities for low-altitude traffic. Therefore, this paper introduces an integrated high-fidelity simulation platform for low-altitude UAV traffic. This platform simulates all components of the UAV traffic network, including the control system, the traffic management system, the UAV system, the communication network , the anomaly and fault modules, etc. Furthermore, it integrates RflySim/AirSim and Unreal Engine 5 to develop full-state models of UAVs and 3D maps that model the real world using the oblique photogrammetry technique. Additionally, the platform offers a wide range of interfaces, and all models and scenarios can be customized with a high degree of flexibility. The platform's source code has been released, making it easier to conduct research related to low-altitude traffic.
|
https://arxiv.org/abs/2512.24112
|
Academic Papers
|
svg
|
fa3ae4be4bfa1789be94870b185afa2a8c67ec8bd9c1cc73ad12345487e09a8f
|
2026-01-01T00:00:00-05:00
|
CogRec: A Cognitive Recommender Agent Fusing Large Language Models and Soar for Explainable Recommendation
|
arXiv:2512.24113v1 Announce Type: new Abstract: Large Language Models (LLMs) have demonstrated a remarkable capacity in understanding user preferences for recommendation systems. However, they are constrained by several critical challenges, including their inherent "Black-Box" characteristics, susceptibility to knowledge hallucination, and limited online learning capacity. These factors compromise their trustworthiness and adaptability. Conversely, cognitive architectures such as Soar offer structured and interpretable reasoning processes, yet their knowledge acquisition is notoriously laborious. To address these complementary challenges, we propose a novel cognitive recommender agent called CogRec which synergizes the strengths of LLMs with the Soar cognitive architecture. CogRec leverages Soar as its core symbolic reasoning engine and leverages an LLM for knowledge initialization to populate its working memory with production rules. The agent operates on a Perception-Cognition-Action(PCA) cycle. Upon encountering an impasse, it dynamically queries the LLM to obtain a reasoned solution. This solution is subsequently transformed into a new symbolic production rule via Soar's chunking mechanism, thereby enabling robust online learning. This learning paradigm allows the agent to continuously evolve its knowledge base and furnish highly interpretable rationales for its recommendations. Extensive evaluations conducted on three public datasets demonstrate that CogRec demonstrates significant advantages in recommendation accuracy, explainability, and its efficacy in addressing the long-tail problem.
|
https://arxiv.org/abs/2512.24113
|
Academic Papers
|
svg
|
5959b101e120d91e8529d92b0711887ce34b10162d0cb049d439c75afd05588c
|
2026-01-01T00:00:00-05:00
|
GeoBench: Rethinking Multimodal Geometric Problem-Solving via Hierarchical Evaluation
|
arXiv:2512.24119v1 Announce Type: new Abstract: Geometric problem solving constitutes a critical branch of mathematical reasoning, requiring precise analysis of shapes and spatial relationships. Current evaluations of geometric reasoning in vision-language models (VLMs) face limitations, including the risk of test data contamination from textbook-based benchmarks, overemphasis on final answers over reasoning processes, and insufficient diagnostic granularity. To address these issues, we present GeoBench, a hierarchical benchmark featuring four reasoning levels in geometric problem-solving: Visual Perception, Goal-Oriented Planning, Rigorous Theorem Application, and Self-Reflective Backtracking. Through six formally verified tasks generated via TrustGeoGen, we systematically assess capabilities ranging from attribute extraction to logical error correction. Experiments reveal that while reasoning models like OpenAI-o3 outperform general MLLMs, performance declines significantly with increasing task complexity. Key findings demonstrate that sub-goal decomposition and irrelevant premise filtering critically influence final problem-solving accuracy, whereas Chain-of-Thought prompting unexpectedly degrades performance in some tasks. These findings establish GeoBench as a comprehensive benchmark while offering actionable guidelines for developing geometric problem-solving systems.
|
https://arxiv.org/abs/2512.24119
|
Academic Papers
|
svg
|
fd0df44309aaddbc07e06a94effc36918549c2b18ba21bd8a94e0dfc7497b544
|
2026-01-01T00:00:00-05:00
|
Enhancing LLM-Based Neural Network Generation: Few-Shot Prompting and Efficient Validation for Automated Architecture Design
|
arXiv:2512.24120v1 Announce Type: new Abstract: Automated neural network architecture design remains a significant challenge in computer vision. Task diversity and computational constraints require both effective architectures and efficient search methods. Large Language Models (LLMs) present a promising alternative to computationally intensive Neural Architecture Search (NAS), but their application to architecture generation in computer vision has not been systematically studied, particularly regarding prompt engineering and validation strategies. Building on the task-agnostic NNGPT/LEMUR framework, this work introduces and validates two key contributions for computer vision. First, we present Few-Shot Architecture Prompting (FSAP), the first systematic study of the number of supporting examples (n = 1, 2, 3, 4, 5, 6) for LLM-based architecture generation. We find that using n = 3 examples best balances architectural diversity and context focus for vision tasks. Second, we introduce Whitespace-Normalized Hash Validation, a lightweight deduplication method (less than 1 ms) that provides a 100x speedup over AST parsing and prevents redundant training of duplicate computer vision architectures. In large-scale experiments across seven computer vision benchmarks (MNIST, CIFAR-10, CIFAR-100, CelebA, ImageNette, SVHN, Places365), we generated 1,900 unique architectures. We also introduce a dataset-balanced evaluation methodology to address the challenge of comparing architectures across heterogeneous vision tasks. These contributions provide actionable guidelines for LLM-based architecture search in computer vision and establish rigorous evaluation practices, making automated design more accessible to researchers with limited computational resources.
|
https://arxiv.org/abs/2512.24120
|
Academic Papers
|
svg
|
1a9e215192ef81c683246417fdb6fb0f4bbb91657910fe17ee45ce2a9e1c43f9
|
2026-01-01T00:00:00-05:00
|
High order numerical discretizations of the Einstein-Euler equations in the Generalized Harmonic formulation
|
arXiv:2512.24121v1 Announce Type: new Abstract: We propose two new alternative numerical schemes to solve the coupled Einstein-Euler equations in the Generalized Harmonic formulation. The first one is a finite difference (FD) Central Weighted Essentially Non-Oscillatory (CWENO) scheme on a traditional Cartesian mesh, while the second one is an ADER (Arbitrary high order Derivatives) discontinuous Galerkin (DG) scheme on 2D unstructured polygonal meshes. The latter, in particular, represents a preliminary step in view of a full 3D numerical relativity calculation on moving meshes. Both schemes are equipped with a well-balancing (WB) property, which allows to preserve the equilibrium of a priori known stationary solutions exactly at the discrete level. We validate our numerical approaches by successfully reproducing standard vacuum test cases, such as the robust stability, the linearized wave, and the gauge wave tests, as well as achieving long-term stable evolutions of stationary black holes, including Kerr black holes with extreme spin. Concerning the coupling with matter, modeled by the relativistic Euler equations, we perform a classical test of spherical accretion onto a Schwarzschild black hole, as well as an evolution of a perturbed non-rotating neutron star, demonstrating the capability of our schemes to operate also on the full Einstein-Euler system. Altogether, these results provide a solid foundation for addressing more complex and challenging simulations of astrophysical sources through DG schemes on unstructured 3D meshes.
|
https://arxiv.org/abs/2512.24121
|
Academic Papers
|
svg
|
6c2a8597c9afe75c1584e206d93c42ad31b96664cd9f4fafc7787ff9885c74a7
|
2026-01-01T00:00:00-05:00
|
OptRot: Mitigating Weight Outliers via Data-Free Rotations for Post-Training Quantization
|
arXiv:2512.24124v1 Announce Type: new Abstract: The presence of outliers in Large Language Models (LLMs) weights and activations makes them difficult to quantize. Recent work has leveraged rotations to mitigate these outliers. In this work, we propose methods that learn fusible rotations by minimizing principled and cheap proxy objectives to the weight quantization error. We primarily focus on GPTQ as the quantization method. Our main method is OptRot, which reduces weight outliers simply by minimizing the element-wise fourth power of the rotated weights. We show that OptRot outperforms both Hadamard rotations and more expensive, data-dependent methods like SpinQuant and OSTQuant for weight quantization. It also improves activation quantization in the W4A8 setting. We also propose a data-dependent method, OptRot$^{+}$, that further improves performance by incorporating information on the activation covariance. In the W4A4 setting, we see that both OptRot and OptRot$^{+}$ perform worse, highlighting a trade-off between weight and activation quantization.
|
https://arxiv.org/abs/2512.24124
|
Academic Papers
|
svg
|
730aad133a532c093ff9ef6c6adbee6ac34a87e7cb6609335cc6f02f1c9e927a
|
2026-01-01T00:00:00-05:00
|
Unified Embodied VLM Reasoning with Robotic Action via Autoregressive Discretized Pre-training
|
arXiv:2512.24125v1 Announce Type: new Abstract: General-purpose robotic systems operating in open-world environments must achieve both broad generalization and high-precision action execution, a combination that remains challenging for existing Vision-Language-Action (VLA) models. While large Vision-Language Models (VLMs) improve semantic generalization, insufficient embodied reasoning leads to brittle behavior, and conversely, strong reasoning alone is inadequate without precise control. To provide a decoupled and quantitative assessment of this bottleneck, we introduce Embodied Reasoning Intelligence Quotient (ERIQ), a large-scale embodied reasoning benchmark in robotic manipulation, comprising 6K+ question-answer pairs across four reasoning dimensions. By decoupling reasoning from execution, ERIQ enables systematic evaluation and reveals a strong positive correlation between embodied reasoning capability and end-to-end VLA generalization. To bridge the gap from reasoning to precise execution, we propose FACT, a flow-matching-based action tokenizer that converts continuous control into discrete sequences while preserving high-fidelity trajectory reconstruction. The resulting GenieReasoner jointly optimizes reasoning and action in a unified space, outperforming both continuous-action and prior discrete-action baselines in real-world tasks. Together, ERIQ and FACT provide a principled framework for diagnosing and overcoming the reasoning-precision trade-off, advancing robust, general-purpose robotic manipulation.
|
https://arxiv.org/abs/2512.24125
|
Academic Papers
|
svg
|
d5f49d7dece2920866c4685175b8a9292ba1f6bebf5fab211d074543d4ae9246
|
2026-01-01T00:00:00-05:00
|
Structure-preserving schemes for nonlinear symmetric hyperbolic and thermodynamically compatible systems of partial differential equations
|
arXiv:2512.24127v1 Announce Type: new Abstract: This paper aims at developing exactly energy-conservative and structure-preserving finite volume schemes for the discretisation of first-order symmetric-hyperbolic and thermodynamically compatible (SHTC) systems of partial differential equations in continuum physics. Due to their thermodynamic compatibility the class of SHTC systems satisfies an additional conservation law for the total energy and many PDE in this class of equations also satisfy stationary differential constraints (involutions). First, we propose a simple semi-discrete cell-centered HTC finite volume scheme that employs collocated grids and that is compatible with the total energy conservation law, but which does not satisfy the involutions. Second, we develop a fully discrete semi-implicit finite volume scheme that conserves total energy and which can be proven to satisfy also the involution constraints exactly at the discrete level. This method is a vertex-based staggered semi-implicit scheme that preserves the basic vector calculus identities $\nabla \cdot \nabla \times A = 0$ and $\nabla \times \nabla \phi = 0$ for any vector and scalar field, respectively, exactly at the discrete level and which is also exactly totally energy conservative. The main key ingredient of the proposed implicit scheme is the fact that it uses a discrete version of the symmetric-hyperbolic Godunov-form of the governing PDE system. This leads naturally to sequences of symmetric and positive definite linear algebraic systems to be solved inside an iterative fixed-point method used in each time step. We apply our new schemes to three different SHTC systems. In particular, we consider the equations of nonlinear acoustics, the nonlinear Maxwell equations in the absence of charges and a nonlinear version of the Maxwell-GLM system. We also show some numerical results to provide evidence of the stated properties of the proposed schemes.
|
https://arxiv.org/abs/2512.24127
|
Academic Papers
|
svg
|
ed73bfc7ae6dad41486f373d4a504928ca079ea932c0c189f3fde35d036859bc
|
2026-01-01T00:00:00-05:00
|
ROBOPOL: Social Robotics Meets Vehicular Communications for Cooperative Automated Driving
|
arXiv:2512.24129v1 Announce Type: new Abstract: On the way towards full autonomy, sharing roads between automated vehicles and human actors in so-called mixed traffic is unavoidable. Moreover, even if all vehicles on the road were autonomous, pedestrians would still be crossing the streets. We propose social robots as moderators between autonomous vehicles and vulnerable road users (VRU). To this end, we identify four enablers requiring integration: (1) advanced perception, allowing the robot to see the environment; (2) vehicular communications allowing connected vehicles to share intentions and the robot to send guiding commands; (3) social human-robot interaction allowing the robot to effectively communicate with VRUs and drivers; (4) formal specification allowing the robot to understand traffic and plan accordingly. This paper presents an overview of the key enablers and report on a first proof-of-concept integration of the first three enablers envisioning a social robot advising pedestrians in scenarios with a cooperative automated e-bike.
|
https://arxiv.org/abs/2512.24129
|
Academic Papers
|
svg
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.